{"text":"\\subsection*{Acknowledgments}\nWe thank Denny Wu, Xiaoyu Liu, Dongruo Zhou, Vedant Nanda, Ziyan Yang, Xiaoxiao Li, Jiahao Su, Wei Hu, Bo Han, Simon S. Du, Justin Brody, and Don Perlis for helpful feedback and insightful discussions. \nAdditionally, we thank Houze Wang, Qin Yang, Xin Li, Guodong Zhang, Yixuan Ren, and Kai Wang for help with computing resources.\nThis research was partially performed while Jingling Li was a remote research intern at the Vector Institute and the University of Toronto.\nLi and Dickerson were supported by an ARPA-E DIFFERENTIATE Award, NSF CAREER IIS-1846237, NSF CCF-1852352, NSF D-ISN \\#2039862, NIST MSE \\#20126334, NIH R01 NLM-013039-01, DARPA GARD \\#HR00112020007, DoD WHS \\#HQ003420F0035, and a Google Faculty Research Award.\nBa is supported in part by the CIFAR AI Chairs program, LG Electronics, and NSERC.\nXu is supported by NSF CAREER award 1553284 and NSF III 1900933. Xu is also partially supported by JST ERATO JPMJER1201 and JSPS Kakenhi JP18H05291. \nZhang is supported by ODNI, IARPA, via the BETTER Program contract \\#2019-19051600005. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies, either expressed or implied, of ODNI, IARPA, or the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for governmental purposes notwithstanding any copyright annotation therein. 
\n\n{\n\\small\n\\bibliographystyle{unsrtnat}\n\\typeout{}\n\n\\subsection{Related Work}\n\\label{sec:related}\nA commonly studied type of noisy label is the random label noise, where the noisy labels are drawn i.i.d.~from a uniform distribution.\nWhile neural networks trained with random labels easily overfit~\\citep{zhang2016understanding}, it has been observed that networks learn simple patterns first~\\cite{arpit2017closer}, converge faster on downstream tasks~\\cite{maennel2020neural}, and benefit from memorizing atypical training samples~\\cite{feldman2020neural}.\n\nAccordingly, many recent works on noisy label training are based on the assumption that when trained with noisy labels, neural networks would first fit to clean labels~\\cite{ lyu2019curriculum, han2018co, jiang2018mentornet,li2020dividemix, liu2020early} and learn useful feature patterns~\\cite{hendrycks2018using, lee2019robust, bahri2020deep, wu2020topological}.\nYet, these methods are often more effective on random label noise than on more realistic label noise (i.e., class-dependent and instance-dependent label noise).\n\nMany works on representation learning have investigated the features preferred by a network during training~\\cite{arpit2017closer, hermann2020shapes, shah2020pitfalls, sanyal2020benign}, and how to interpret or control the learned representations on clean data~\\cite{alain2016understanding, hermann2020shapes, hermann2019origins, montavon2018methods, yuan2020clime}. \nOur paper focuses more on the predictive power rather than the explanatory power in the learned representations.\nWe adapt the method in~\\cite{alain2016understanding} to measure the predictive power in representations, and we study learning from noisy labels rather than from a clean distribution.\n\nOn noiseless settings, prior works show that neural networks have the inductive bias to learn simple patterns~\\cite{arpit2017closer, hermann2020shapes, shah2020pitfalls, sanyal2020benign}. 
\nOur work formalizes what is considered a simple pattern for a given network via architectural alignments, and we extend the definition of alignment in~\\cite{Xu2020What} to noisy settings.\n\n\\section{Introduction}\n\\label{sec:intro}\nSupervised learning starts with collecting labeled data.\nYet, high-quality labels are often expensive.\nTo reduce annotation cost, we collect labels from non-experts~\\cite{snow2008cheap, welinder2010multidimensional, yan2014learning, yu2018learning} or online queries~\\cite{blum2003noise, jiang2020beyond, liu2011noise}, which are inevitably noisy. \nTo learn from these noisy labels, previous works propose many techniques, including modeling the label noise~\\cite{natarajan2013learning, liu2015classification, yao2020dual}, designing robust losses~\\cite{ghosh2017robust, lyu2019curriculum, wang2019symmetric, zhang2018generalized}, adjusting the loss before gradient updates~\\cite{arazo2019unsupervised, chang2017active, han2020sigua, hendrycks2018using, ma2018dimensionality, patrini2017making, reed2014training, song2019selfie, wang2017multiclass}, selecting trustworthy samples~\\cite{lyu2019curriculum, song2019selfie, chen2019understanding, han2018co, jiang2018mentornet, malach2017decoupling, nguyen2019self, shen2019learning, wang2018iterative, yu2019does}, designing robust architectures~\\cite{bekker2016training, chen2015webly, goldberger2016training, han2018masking, jindal2016learning, li2020understanding, sukhbaatar2014training, yao2018deep}, applying robust regularization in training~\\cite{goodfellow2014explaining, hendrycks2019using, jenni2018deep, pereyra2017regularizing, tanno2019learning, zhang2017mixup}, using meta-learning to avoid over-fitting~\\cite{garcia2016noise, li2019learning}, and applying semi-supervised learning~\\cite{nguyen2019self, ding2018semi, li2020dividemix, liu2020early, yan2016robust} to learn better representations.\n\nWhile these methods improve some networks' robustness to noisy labels, we observe 
that their effectiveness depends on how well the network's architecture aligns with the target\/noise functions, and they are less effective when encountering more realistic label noise that is class-dependent or instance-dependent.\nThis motivates us to investigate an understudied topic: how the network's architecture impacts its robustness to noisy labels.\n\nWe formally answer this question by analyzing how a network's architecture aligns with the target function and the noise.\nTo start, we measure the robustness of a network via the predictive power in its learned representations (Definition~\\ref{def:predict_power}), as\nmodels with large test errors may still learn useful predictive hidden representations~\\cite{arpit2017closer, maennel2020neural}. \nIntuitively, the predictive power measures how well the representations can predict the target function. \nIn practice, we measure it by training a linear model on top of the learned representations using a small set of clean labels and evaluating the linear model's test performance~\\cite{alain2016understanding}.\n\nWe find that a network whose architecture is more aligned with the target function is more robust to noisy labels due to its more predictive representations, whereas a network whose architecture is more aligned with the noise function is less robust.\nIntuitively, a \\textit{good} alignment between a network's architecture and a function exists if the architecture can be decomposed into several modules such that each module can simulate one part of the function with a \\textit{small} sample complexity.\nThe formal definition of alignment is in Section~\\ref{subsec:alignment_formal}, adapted from~\\cite{Xu2020What}. 
\n\nOur proposed framework provides initial theoretical support for our findings in a simplified noisy setting (Theorem~\\ref{thm:main}).\nEmpirically, we validate our findings on synthetic graph algorithmic tasks by designing several variants of Graph Neural Networks (GNNs), whose theoretical properties and alignment with algorithmic functions have been well-studied~\\cite{Xu2020What, du2019graph, xu2020neural}. \nMany noisy label training methods are applied to image classification datasets, so we also validate our findings on image domains using different architectures.\n\nMost of our analysis and experiments use standard neural network training. \nInterestingly, we find similar results when using DivideMix~\\cite{li2020dividemix}, a SOTA method for learning with noisy labels:\nfor networks less aligned with the target function, the SOTA method barely helps and sometimes even hurts test accuracy, whereas for more aligned networks, it helps greatly.\n\nFor well-aligned networks, the predictive power of their learned representations can further improve the test performance of SOTA methods, especially on class-dependent or instance-dependent label noise, where current noisy-label-training methods are less effective. \nMoreover, on Clothing1M~\\cite{xiao2015learning}, a large-scale dataset with real-world label noise, the predictive power of a well-aligned network's learned representations can even outperform some sophisticated methods that use clean labels. \n\nIn summary, we investigate how an architecture's alignments with different (target and noise) functions affect the network's robustness to noisy labels, and discover that despite having large test errors, networks well-aligned with the target function can still be robust to noisy labels when evaluating the predictive power in their learned representations. 
\nTo formalize our finding, we provide a theoretical framework to illustrate the above connections.\nAt the same time, we conduct empirical experiments on various datasets with various network architectures to validate this finding.\nMoreover, this finding leads to improvements over SOTA noisy-label-training methods on various datasets and under various kinds of noisy labels (Tables~\\ref{table:cifar10_sym}-\\ref{table:webvision} in Appendix~\\ref{suppsec:add_exp_results}).\n\\section{Theoretical Framework} \n\\label{sec:prelim}\nIn this section, we introduce our problem settings, give formal definitions for ``predictive power'' and ``alignment,'' and present our main hypothesis as well as our main theorem. \n\n\\subsection{Problem Settings}\nLet $\\mathcal{X}$ denote the input domain, which can be vectors, images, or graphs.\nThe task is to learn an underlying target function $f: \\mathcal{X} \\rightarrow \\mathcal{Y}$ on a noisy training dataset $S := \\lbrace (\\x_i, y_i) \\rbrace_{i \\in \\mathcal{I}} \\cup \\lbrace (\\x_i, \\hat{y}_i) \\rbrace_{i \\in \\mathcal{I}'}$, \nwhere $y := f(\\x)$ denotes the true label for an input $\\x$, and $\\hat{y}$ denotes the noisy label. 
\nHere, the set $\\mathcal{I}$ contains indices with clean labels, and $\\mathcal{I}'$ contains indices with noisy labels.\nWe denote $\\frac{|\\mathcal{I}'|}{|S|}$ as the \\textit{noise ratio} in the dataset $S$.\nWe consider both regression and classification problems.\n\n\\textbf{Regression settings.} We consider a label space $\\mathcal{Y} \\subseteq \\mathbb{R}$ and two types of label noise: \na) \\textbf{additive label noise}~\\cite{hu2019simple}: $\\hat{y} := y + \\epsilon$, where $\\epsilon$ is a random variable independent of $\\x$; \nb) \\textbf{instance-dependent label noise}: $\\hat{y} := g(\\x)$ where $g: \\mathcal{X} \\rightarrow \\mathcal{Y}$ is a noise function dependent on the input.\n\n\\textbf{Classification settings.} We consider a discrete label space with $C$ classes: $\\mathcal{Y} = \\{1,2, \\cdots, C\\}$, and three types of label noise: \na) \\textbf{uniform label noise}: $\\hat{y} \\sim \\text{Unif}(1, C)$, where the noisy label is drawn from a discrete uniform distribution with values between $1$ and $C$, and thus is independent of the true label; \nb) \\textbf{flipped label noise}: $\\hat{y}$ is generated based on the value of the true label $y$ and does not consider other input structures;\nc) \\textbf{instance-dependent label noise}: $\\hat{y} := g(\\x)$ where $g: \\mathcal{X} \\rightarrow \\mathcal{Y}$ is a function dependent on the input $\\x$'s internal structures.\nPrevious works on noisy label learning commonly study uniform and flipped label noise. \nA few recent works~\\cite{cheng2020learning, wang2020proselflc} explore instance-dependent label noise, as it is more realistic.\n\n\\subsection{Predictive Power in Representations}\nA network's robustness is often measured by its test performance after being trained with noisy labels. 
\nYet, since models with large test errors may still learn useful representations, we measure the robustness of a network by how good the learned representations are at predicting the target function --- the predictive power in representations.\nTo formalize this definition, we decompose a neural network $\\mathcal{N}$ into different modules $\\mathcal{N}_1, \\mathcal{N}_2, \\cdots$, where each module can be a single layer (e.g., a convolutional layer) or a block of layers (e.g., a residual block).\n\n\\begin{definition}\n\\label{def:predict_power}\n\\textit{(Predictive power).} Let $f: \\mathcal{X} \\rightarrow \\mathcal{Y}$ denote the underlying target function where the input $\\x \\in \\mathcal{X}$ is drawn from a distribution $\\mathcal{D}$. \nLet $\\mathcal{C} := \\lbrace (\\x_i, y_i) \\rbrace_{i=1}^{m}$ denote a small set of clean data (i.e., $y_i = f(\\x_i)$). \nGiven a network $\\mathcal{N}$ with $n$ modules $\\mathcal{N}_j$, let $h^{(j)}(\\x)$ denote the representation from module $\\mathcal{N}_j$ on the input $\\x$ (i.e., the output of $\\mathcal{N}_j$). \nLet ${L}$ denote the linear model trained on the clean set $\\mathcal{C}$, where we use $h^{(j)}(\\x_i)$ as the input and $y_i$ as the target value during training.\nThen the predictive power of representations from the module $\\mathcal{N}_j$ is defined as \n\\begin{align}\n \\pred_{j}(f, \\mathcal{N}, \\mathcal{C}) := \\mathop{\\mathbb{E}}_{\\x \\sim \\mathcal{D}} \n\\left[l\\left(f(\\x), {L}(h^{(j)}(\\x))\\right) \\right],\n\\end{align}\nwhere $l$ is a loss function used to evaluate the test performance on the learning task. \n\\end{definition}\n\\textbf{Remark.} Notice that smaller $\\pred_{j}(f, \\mathcal{N}, \\mathcal{C})$ indicates better predictive power; i.e., the representations are better at predicting the target function. 
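In practice the probe $L$ can be fit in closed form by least squares. The sketch below (NumPy) uses synthetic stand-ins for the frozen representations, the clean set $\mathcal{C}$, and the test set; all names and sizes are illustrative assumptions, not the paper's actual setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical frozen representations h^{(j)}(x): here the target is exactly
# linear in the representation, standing in for a well-aligned module.
d = 16
w_true = rng.normal(size=d)
H_clean = rng.normal(size=(200, d))    # representations of the small clean set C
y_clean = H_clean @ w_true             # clean labels y_i = f(x_i)
H_test = rng.normal(size=(1000, d))    # representations of held-out test inputs
y_test = H_test @ w_true

# Fit the linear probe L on (h^{(j)}(x_i), y_i) by least squares; this is a
# convex problem, so there is no local-minimum issue.
X = np.hstack([H_clean, np.ones((len(H_clean), 1))])   # add a bias column
coef, *_ = np.linalg.lstsq(X, y_clean, rcond=None)

# Predictive power: expected loss of L(h^{(j)}(x)) against f(x) on fresh inputs.
X_test = np.hstack([H_test, np.ones((len(H_test), 1))])
pred_power = float(np.mean((X_test @ coef - y_test) ** 2))   # smaller is better
```

Because the probe is linear, `pred_power` is driven entirely by how much information about $f$ survives in the representation, which is the quantity Definition~\ref{def:predict_power} isolates.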
\nWe empirically evaluate the predictive power using linear regression to obtain a trained linear model ${L}$, which avoids the issue of local minima as we are solving a convex problem; then we evaluate ${L}$ on the test set.\n\n\\subsection{Formalization of Alignment}\n\\label{subsec:alignment_formal}\nOur analysis stems from the intuition that a network would be more robust to noisy labels if it could learn the target function more easily than the noise function.\nThus, we use architectural alignment to formalize what is easy to learn by a given network. \\citet{Xu2020What} define the alignment between a network and a deterministic function via a sample complexity measure (i.e., the number of samples needed to ensure low test error with high probability) in a PAC learning framework (Definition 3.3 in~\\citet{Xu2020What}). \nIntuitively, a network aligns well with a function if each network module can easily learn one part of the function with a small sample complexity.\n\n\\begin{definition}\n\\label{def:alignment}\n\\textit{(Alignment, simplified based on~\\citet{Xu2020What}).} \nLet $\\mathcal{N}$ denote a neural network with $n$ modules $\\mathcal{N}_j$. \nGiven a function $f: \\mathcal{X} \\rightarrow \\mathcal{Y}$ which can be decomposed into $n$ functions $f_j$ (e.g., $f(\\x) = f_1(f_2(...f_n(\\x)))$), the alignment between the network $\\mathcal{N}$ and $f$ is defined via \n\\begin{align}\n \\textit{Alignment}(\\mathcal{N}, f, \\epsilon, \\delta) := \\max_{j} \\mathcal{M}_{{A}_j}(f_j, \\mathcal{N}_j, \\epsilon, \\delta),\n\\end{align}\nwhere $\\mathcal{M}_{{A}_j}(f_j, \\mathcal{N}_j, \\epsilon, \\delta)$ denotes the sample complexity measure for $\\mathcal{N}_j$ to learn $f_j$ with $\\epsilon$ precision at a failure\nprobability $\\delta$ under a learning algorithm $A_j$.\n\n\\textbf{Remark.} \nNotice that smaller $\\textit{Alignment}(\\mathcal{N}, f, \\epsilon, \\delta)$ indicates better alignment between network $\\mathcal{N}$ and function $f$. 
\nIf $f$ is opaque or does not have a structural decomposition, we can choose $n=1$, and the definition of alignment degenerates into the sample complexity measure for $\\mathcal{N}$ to learn $f$.\nAlthough it is sometimes non-trivial to compute the exact alignment for a task without clear algorithmic structures, we can break such a complicated task into sub-tasks, whose sample complexities are easier to measure. \n\n\\end{definition}\n\\citet{Xu2020What} further prove that better alignment implies better sample complexity and vice versa.\n\\begin{theorem} \n\\label{thm:xu2020}\n\\textit{(Informal;~\\cite{Xu2020What})} \nFix $\\epsilon$ and $\\delta$.\nGiven a target function $f: \\mathcal{X} \\rightarrow \\mathcal{Y}$ and a network $\\mathcal{N}$, suppose $\\left\\lbrace x_i \\right\\rbrace_{i=1}^M$ are i.i.d. samples drawn from a distribution $\\mathcal{D}$, and let $y_i := f(x_i)$. \nThen $\\textit{Alignment}(\\mathcal{N}, f, \\epsilon, \\delta) \\leq M$\nif and only if there exists a learning algorithm $A$ such that\n\\begin{align}\n \\mathbb{P}_{x \\sim \\mathcal{D}} \\left[ \\| f_{\\mathcal{N}, A}(x) - f(x) \\| \\leq \\epsilon \\right] \\geq 1 - \\delta,\n\\end{align}\nwhere $f_{\\mathcal{N}, A}$ is the function generated by $A$ on the training data $\\left\\lbrace x_i, y_i \\right\\rbrace_{i=1}^M$.\n\\end{theorem}\n\\textbf{Remark.} Intuitively, a function $f$ (with a decomposition $\\{f_j\\}_j$) can be efficiently learned by a network $\\mathcal{N}$ (with modules $\\{\\mathcal{N}_j\\}_j$) iff each $f_j$ can be efficiently learned by $\\mathcal{N}_j$. \n\nWe further extend Definition~\\ref{def:alignment} to work with a random process $\\mathcal{F}$ (i.e., a set of all possible sample functions that describes the noisy label distribution). 
\n\\begin{definition}\n\\label{def:alignment_extend}\n\\textit{(Alignment, extension to various noise functions).} \nGiven a neural network $\\mathcal{N}$ and a random process $\\mathcal{F}$,\nfor each $f \\in \\mathcal{F}$, the alignment between $\\mathcal{N}$ and $f$ is measured via $\\max_{j} \\mathcal{M}_{{A}_j}(f_j, \\mathcal{N}_j, \\epsilon, \\delta)$ based on Definition~\\ref{def:alignment}.\nThen the alignment between $\\mathcal{N}$ and $\\mathcal{F}$ is defined as\n\\[\n \\textit{Alignment}^{*}(\\mathcal{N}, \\mathcal{F}, \\epsilon, \\delta) := \\sup_{f\\in\\mathcal{F}} \\max_{j} \\mathcal{M}_{{A}_j}(f_j, \\mathcal{N}_j, \\epsilon, \\delta),\n \\vspace{-0.5em}\n\\]\nwhere $\\mathcal{N}$ can be decomposed differently for various $f$.\n\\end{definition}\n\n\\subsection{Better Alignment Implies Better Robustness (Better Predictive Power)}\nBuilding on the definitions of \\textit{predictive power} and \\textit{alignment}, we hypothesize that\na network better-aligned with the target function (smaller $\\textit{Alignment}(\\mathcal{N}, f, \\epsilon, \\delta)$) would learn more predictive representations (smaller $\\pred_{j}(f, \\mathcal{N}, \\mathcal{C})$) when trained on a given noisy dataset.\n\\begin{hypothesis}\n\\label{thm:hypothesis} (Main Hypothesis). \nLet $f: \\mathcal{X} \\rightarrow \\mathcal{Y}$ denote the target function. 
\nFix $\\epsilon$, $\\delta$, a learning algorithm $A$, a noise ratio, and a noise function $g: \\mathcal{X} \\rightarrow \\mathcal{Y}$ (which may be drawn from a random process).\nLet $S$ denote a noisy training dataset and $\\mathcal{C}$ denote a small set of clean data.\nThen for a network $\\mathcal{N}$ trained on $S$ with the learning algorithm $A$, \n\\begin{align}\n\\textit{Alignment}(\\mathcal{N}, f, \\epsilon, \\delta)\\downarrow \\ \\Longrightarrow \\pred_{j}(f, \\mathcal{N}, \\mathcal{C})\\downarrow,\n\\end{align}\nwhere $j$ is selected based on the network's architectural alignment with the target function (for simplicity, we consider $j = n-1$ in this work).\n\\end{hypothesis}\n\nWe prove this hypothesis for a simplified case where the target function shares some common structures with the noise function (e.g., class-dependent label noise). \nWe refer the readers to Appendix~\\ref{suppapp:thm} for a full statement of our main theorem with detailed assumptions. \n\\begin{theorem}\n\\label{thm:main}\n\\textit{(Main Theorem; informal)} \nFor a target function $f: \\mathcal{X} \\rightarrow \\mathcal{Y}$ and a noise function $g: \\mathcal{X} \\rightarrow \\mathcal{Y}$, consider a neural network $\\mathcal{N}$ well-aligned with $f$ such that $\\pred_{j}(f, \\mathcal{N}, \\mathcal{C})$ is small when training $\\mathcal{N}$ on clean data (i.e., $\\pred_{j}(f, \\mathcal{N}, \\mathcal{C}) < c$ for some small constant $c$). 
If there exists a function $h$ on the input domain $\\mathcal{X}$ such that $f$ and $g$ can be decomposed as follows: $\\forall x \\in \\mathcal{X}$, $f(\\x) = f_r(h(\\x))$ with $f_r$ being a linear function, and $g(\\x) = g_r(h(\\x))$ for some function $g_r$, then the representations learned by $\\mathcal{N}$ on the noisy dataset still have a good predictive power with $\\pred_{j}(f, \\mathcal{N}, \\mathcal{C}) < c$.\n\\end{theorem}\n\nWe further provide empirical support for our hypothesis via systematic experiments on various architectures, target and noise functions across both regression and classification settings.\n\\section{Experiments on Graph Neural Networks}\n\\label{sec:gnn}\nWe first validate our hypothesis on synthetic graph algorithmic tasks by designing GNNs with different levels of alignments to the underlying target\/noise functions. \nWe consider regression tasks.\nThe theoretical properties of GNNs and their alignment with algorithmic regression tasks are well-studied~\\cite{Xu2020What, du2019graph, xu2020neural, sato2019approximation}.\nTo start, we conduct experiments on different types of additive label noise and extend our experiments to instance-dependent label noise, which is closer to real-life noisy labels.\n\n\\textbf{Common Experimental Settings.} \nThe training and validation sets always have the same noise ratio, the percentage of data with noisy labels. 
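As a concrete illustration of this setup (not the paper's actual data pipeline), labels can be corrupted at a fixed noise ratio as follows; the corruption function passed in is a placeholder, shown here as additive Gaussian noise:

```python
import numpy as np

def make_noisy_labels(y, noise_ratio, corrupt, rng):
    """Replace a `noise_ratio` fraction of the labels in y with corrupted
    values; the chosen indices play the role of I' in the problem setting."""
    y_noisy = y.copy()
    n_noisy = int(round(noise_ratio * len(y)))
    noisy_idx = rng.choice(len(y), size=n_noisy, replace=False)
    y_noisy[noisy_idx] = corrupt(y_noisy[noisy_idx])
    return y_noisy, noisy_idx

rng = np.random.default_rng(0)
y_train = rng.uniform(1.0, 10.0, size=1000)
# One example corruption: additive Gaussian label noise.
additive = lambda y_sub: y_sub + rng.normal(10.0, 15.0, size=len(y_sub))
y_noisy, idx = make_noisy_labels(y_train, 0.4, additive, rng)
noise_ratio = len(idx) / len(y_train)   # |I'| / |S|
```

The same helper corrupts the training and validation splits with the same `noise_ratio`, matching the convention stated above.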
\nWe choose mean squared error (MSE) and mean absolute error (MAE) as our loss functions.\nDue to space limits, the results using MAE are in Appendix~\\ref{appsec:mae}.\nAll training details are in Appendix~\\ref{suppsec:gnn_training_details}.\nThe test error is measured by mean absolute percentage error (MAPE), a relative error metric.\n\n\\subsection{Background: Graph Neural Networks}\nGNNs are structured networks operating on graphs with MLP modules~\\cite{battaglia2018relational, scarselli2009graph, xu2018how, xu2018representation, xu2021optimization, liao2021information, cai2021graphnorm}. \nThe input is a graph ${\\mathcal{G}}=(V,E)$ where each node $u \\in V$ has a feature vector $\\x_u$, and we use $\\mathcal{N}(u)$ to denote the set of neighbors of $u$.\nGNNs iteratively compute the node representations via message passing: (1) the node representation $\\bm{h}_u$ is initialized as the node feature: $\\bm{h}_u^{(0)} = \\x_u$; (2) in iteration $k = 1, \\ldots, K$, the node representations $\\bm{h}_u^{(k)}$ are updated by aggregating the neighboring nodes' representations with MLP modules~\\cite{gilmer2017neural}. \nWe can optionally compute a graph representation $\\bm{h}_{{\\mathcal{G}}}$ by aggregating the final node representations with another MLP module.\nFormally,\n\\begin{align}\n\\vspace{-0.5em}\n\\bm{h}_u^{(k)} &:= \\sum\\limits_{v \\in \\mathcal{N}(u)} \\text{MLP}^{(k)} \\Big( \\bm{h}_u^{(k - 1)}, \\bm{h}_v^{(k - 1)} \\Big), \\\\\n\\bm{h}_{{\\mathcal{G}}} &:= \\text{MLP}^{(K+1)} \\Big( \\sum_{u \\in {\\mathcal{G}}} \\bm{h}_u^{(K)} \\Big).\n\\vspace{-0.5em}\n\\end{align}\nDepending on the task, the output is either the graph representation $\\bm{h}_{{\\mathcal{G}}}$ or the final node representations $\\bm{h}_u^{(K)}$.\nWe refer to the neighbor aggregation step for $\\bm{h}_u^{(k)} $ as \\emph{aggregation} and the pooling step for $\\bm{h}_{{\\mathcal{G}}}$ as \\emph{readout}. 
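The aggregation and readout equations above can be sketched as a plain forward pass. In the NumPy sketch below, the one-hidden-layer stand-in for each MLP module, the weight scales, and the random graph are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp(params, x):
    # One-hidden-layer network with ReLU, standing in for an MLP module.
    W1, W2 = params
    return np.maximum(x @ W1, 0.0) @ W2

def init(d_in, d_hidden, d_out):
    return (0.1 * rng.normal(size=(d_in, d_hidden)),
            0.1 * rng.normal(size=(d_hidden, d_out)))

def gnn_forward(X, adj, layers, readout):
    h = X                                       # h_u^{(0)} = x_u
    d = X.shape[1]
    for params in layers:                       # iterations k = 1, ..., K
        h = np.stack([
            sum((mlp(params, np.concatenate([h[u], h[v]]))
                 for v in np.flatnonzero(adj[u])),   # v in N(u)
                np.zeros(d))                         # sum-aggregation
            for u in range(len(adj))
        ])
    return mlp(readout, h.sum(axis=0))          # sum-readout giving h_G

n, d = 5, 4
X = rng.normal(size=(n, d))                     # node features x_u
adj = (rng.random((n, n)) < 0.5).astype(int)
np.fill_diagonal(adj, 0)
adj = np.maximum(adj, adj.T)                    # undirected graph
layers = [init(2 * d, 8, d) for _ in range(2)]  # K = 2 message-passing layers
h_graph = gnn_forward(X, adj, layers, init(d, 8, 1))
```

Swapping the inner `sum` for an elementwise `max` gives max-aggregation, and likewise for the readout, which is how the GNN variants in the following sections differ.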
Different tasks require different aggregation and readout functions.\n\n\\subsection{Additive Label Noise}\n\\label{subsubsec:unstructured_noise}\n\n\\begin{figure}\n\\vspace{-1em}\n\\centering\n\\begin{minipage}{.45\\textwidth}\n \\centering\n \\includegraphics[width=1.0\\textwidth]{max_sum_gnn_alignment.pdf}\n \\captionof{figure}{\\textbf{Max-sum GNN aligns well with the maximum degree task.} \n Max-sum GNN $h_{{\\mathcal{G}}}$ can be decomposed into two modules: $\\text{Module}^{(1)}$ and $\\text{Module}^{(2)}$, and the target function $f({\\mathcal{G}})$ can be similarly divided as $f({\\mathcal{G}}) = f_2(f_1({\\mathcal{G}}))$. As the nonlinearities of the target function have been encoded in the GNN's architecture, $f({\\mathcal{G}})$ can be easily learned by $h_{{\\mathcal{G}}}$: $f_1(\\cdot)$ can be easily learned by $\\text{Module}^{(1)}$, and $f_2(\\cdot)$ is the same as $\\text{Module}^{(2)}$. }\n \\label{fig:maxdeg_illustrate}\n\\end{minipage}\n\\hspace{1.0em}\n\\begin{minipage}{.45\\textwidth}\n \\centering\n \\includegraphics[width=1.0\\textwidth]{GNN_max_sum_gaussian_mean10_reg_true_label.pdf}\n \\captionof{figure}{\\textbf{PCA visualization of hidden representations} from a max-sum GNN trained with additive label noise drawn from $\\mathcal{N}(10,\\,15)$ at 100\\% noise ratio. Each dot denotes a single training example and is colored with its true label. The x-axis and y-axis denote the projected values at the first and second principal components. As the colors change gradually from left to right, the largest principal component of the representations has a clear linear relationship with the true labels. }\n \\label{fig:maxdeg_graph_embed}\n\\end{minipage}\n\\end{figure}\n\n\\citet{hu2019simple} prove that MLPs are robust to additive label noise with zero mean if the noise is drawn i.i.d.~from a sub-Gaussian distribution. 
\n\\citet{wu2020optimal} also show that linear models are robust to zero-mean additive label noise even in the absence of explicit regularization.\nIn this section, we show that a GNN \\textit{well-aligned} to the target function not only achieves low test errors on additive label noise with zero-mean, but also learns \\textit{predictive} representations on noisy labels that are drawn from non-zero-mean distributions despite having large test error.\n\n\\textbf{Task and Architecture.} The task is to compute the maximum node degree:\n\\begin{align}\n\\label{eq:maxdeg}\nf({\\mathcal{G}}) &:= \\text{max}_{u \\in {\\mathcal{G}}} \\sum\\limits_{v \\in \\mathcal{N}(u)} 1.\n\\end{align}\nWe choose this task as we know which GNN architecture aligns well with this target function---a 2-layer GNN with max-aggregation and sum-readout (max-sum GNN):\n\\begin{align}\n\\label{eq:max_sum_gnn}\n\\bm{h}_{{\\mathcal{G}}} &:= \\text{MLP}^{(2)} \\Big( \\text{max}_{u \\in {\\mathcal{G}}} \\sum\\limits_{v \\in \\mathcal{N}(u)} \\text{MLP}^{(1)} \\Big({\\bm{h}}_u, {\\bm{h}}_v\\Big) \\Big), \\\\\n{\\bm{h}}_u &:= \\sum\\limits_{v \\in \\mathcal{N}(u)} \\text{MLP}^{(0)} \\Big({\\bm{x}}_u, {\\bm{x}}_v\\Big).\n\\end{align}\nFigure~\\ref{fig:maxdeg_illustrate} demonstrates how exactly the max-sum GNN aligns with $f({\\mathcal{G}})$.\nIntuitively, they are well-aligned as the MLP modules of max-sum GNN only need to learn simple constant functions to simulate $f({\\mathcal{G}})$. \nBased on Figure~\\ref{fig:maxdeg_illustrate}, we take the output of $\\text{Module}^{(2)}$ as the learned representations for max-sum GNNs when evaluating the predictive power.\n\n\\textbf{Label Noise.} We corrupt labels by adding independent noise ${\\epsilon}$ drawn from three distributions: Gaussian distributions with zero mean $\\mathcal{N}(0,\\,40)$ and non-zero mean $\\mathcal{N}(10,\\,15)$, and a long-tailed Gamma distribution with zero-mean $\\Gamma(2,\\,1\/15)-30$. 
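Sampling these three noise distributions might look as follows. The text does not pin down the parameter conventions, so we assume the second Gaussian argument is a standard deviation and read $1/15$ as a Gamma rate (i.e., scale $15$), under which subtracting $30$ centers the Gamma distribution:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# Second Gaussian parameter treated as a standard deviation (an assumption).
eps_gauss_zero = rng.normal(0.0, 40.0, size=n)    # N(0, 40), zero mean
eps_gauss_shift = rng.normal(10.0, 15.0, size=n)  # N(10, 15), non-zero mean
# Long-tailed zero-mean Gamma noise: reading 1/15 as a rate (scale 15),
# Gamma(2, 1/15) has mean 2 * 15 = 30, so subtracting 30 centers it.
eps_gamma = rng.gamma(2.0, 15.0, size=n) - 30.0

means = {name: float(eps.mean()) for name, eps in [
    ("gauss_zero", eps_gauss_zero),
    ("gauss_shift", eps_gauss_shift),
    ("gamma", eps_gamma),
]}
```

The empirical means confirm which corruptions are centered: only the shifted Gaussian moves the label distribution, which is what later makes the model's raw test error large while its representations stay predictive.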
\nWe also consider more distributions with non-zero mean in Appendix~\\ref{appsec:additive}. \n\n\\textbf{Findings.} In Figure~\\ref{fig:maxdeg_unstructured}, while the max-sum GNN is robust to \\textit{zero-mean} additive label noise (dotted yellow and purple lines), its test error is much higher under non-zero-mean noise $\\mathcal{N}(10,\\,15)$ (dotted red line) as the learned signal may be ``shifted'' by the non-centered label noise. \nYet, max-sum GNNs' learned representations under these three types of label noise all predict the target function well when evaluating their predictive power with 10\\% clean labels (solid lines in Figure~\\ref{fig:maxdeg_unstructured}).\n\nMoreover, when we plot the representations (using PCA) from a max-sum GNN trained under 100\\% noise ratio with $\\epsilon \\sim \\mathcal{N}(10,\\,15)$, the representations indeed correlate well with true labels (Figure~\\ref{fig:maxdeg_graph_embed}). \nThis explains why the representations learned under noisy labels can recover surprisingly good test performance even though the original model has large test errors.\n\nThe predictive power of randomly-initialized max-sum GNNs is in Table~\\ref{table:unstructured_noise} (Appendix~\\ref{appsec:random}).\n\n\\begin{figure}\n\\vspace{-1em}\n\\centering\n\\begin{minipage}{.45\\textwidth}\n \\centering\n \\includegraphics[width=1.0\\textwidth]{maxdeg_unstructured_noise_early_stop_reg_improve.pdf}\n \\captionof{figure}{\\textbf{Representations are very predictive for a GNN well-aligned with the target function under additive label noise.} On the maximum degree task, the representations' predictive power (solid lines) achieves low test MAPE ($< 5\\%$) across all three types of noise for the max-sum GNN, even though the model's test MAPE (dotted lines) may be quite large (for non-zero-mean noise). 
We average the statistics over 3 runs using different random seeds.}\n \\label{fig:maxdeg_unstructured}\n\\end{minipage}\n\\hspace{1.0em}\n\\begin{minipage}{.45\\textwidth}\n \\centering\n \\includegraphics[width=1.0\\textwidth]{maxdeg_structured_noise_reg_loss_improve_on_last_epoch.pdf}\n \\captionof{figure}{\\textbf{Representations are more predictive for GNNs more aligned with the target function, and less predictive for GNNs more aligned with the noise function}. \n On the maximum node feature task, while all three GNNs have large test errors under high noise ratios (dotted lines), the predictive power (solid lines) in representations from DeepSet (yellow) and max-max GNN (red) greatly reduces the test MAPE. \n In contrast, the representations' predictive power for the max-sum GNN barely reduces the model's test MAPE (tiny gap between dotted and solid purple lines).}\n \\label{fig:gnn_structured}\n\\end{minipage}\n\\vspace{-1em}\n\\end{figure}\n\n\\subsection{Instance-Dependent Label Noise}\n\\label{subsubsec:structured_noise}\nRealistic label noise is often instance-dependent. For example, an option is often incorrectly priced in the market, but its incorrect price (i.e., the noisy label) should depend on properties of the underlying stock.\nSuch instance-dependent label noise is more challenging, as it may contain \\textit{spurious signals} that are easy to learn by certain architectures. 
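A generic sketch of such corruption, in which the noisy label is a function of the input itself rather than of the true label, might look as follows; the target and corruption functions here are illustrative placeholders, not the exact tasks used in this section:

```python
import numpy as np

def corrupt_instance_dependent(X, y, g, noise_ratio, rng):
    """Replace a noise_ratio fraction of labels with g(x): the noisy label
    depends on the input itself, not just on the true label."""
    y_noisy = y.copy()
    idx = rng.choice(len(y), size=int(round(noise_ratio * len(y))),
                     replace=False)
    y_noisy[idx] = g(X[idx])
    return y_noisy, idx

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 8))
y = np.abs(X).max(axis=1)        # illustrative target: max feature magnitude
g = lambda Xs: Xs.sum(axis=1)    # illustrative spurious, input-dependent signal
y_noisy, idx = corrupt_instance_dependent(X, y, g, 0.4, rng)
```

Because `g` is itself learnable structure in the input, an architecture aligned with `g` can absorb the spurious signal, which is the failure mode studied next.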
\nIn this section, we evaluate the representations' predictive power for three different GNNs trained with instance-dependent label noise.\n\n\\textbf{Task and Label Noise.}\nWe experiment with a new task---computing the maximum node feature: \n\\begin{align}\n\\label{eq:max_node_feature}\nf({\\mathcal{G}}) := \\text{max}_{u \\in {\\mathcal{G}}} ||{\\bm{x}}_u||_{\\infty}.\n\\end{align}\nTo create an instance-dependent noise, we randomly replace the label with the maximum degree:\n\\begin{align}\ng({\\mathcal{G}}) := \\text{max}_{u \\in {\\mathcal{G}}} \\sum\\limits_{v \\in \\mathcal{N}(u)} 1.\n\\end{align}\n\n\\textbf{Architecture.} We consider three GNNs: DeepSet~\\cite{zaheer2017deep}, max-max GNN, and max-sum GNN. \nDeepSet can be interpreted as a special GNN that does not use neighborhood information: \n\\begin{align}\n h_{{\\mathcal{G}}} = \\text{MLP}^{(1)} \\Big( \\text{max}_{u \\in {\\mathcal{G}}} \\text{MLP}^{(0)} \\Big({\\bm{x}}_u\\Big) \\Big).\n\\end{align}\nMax-max GNN is a 2-layer GNN with max-aggregation and max-readout. Max-sum GNN is the same as the one in the previous section.\n\nDeepSet and max-max GNN are well-aligned with the target function $f({\\mathcal{G}})$, as their MLP modules only need to learn simple linear functions. 
\nIn contrast, max-sum GNN is more aligned with $g({\\mathcal{G}})$ than $f({\\mathcal{G}})$ since neither its MLP modules nor its sum-aggregation module can efficiently learn the max-operation in $f({\\mathcal{G}})$~\\cite{Xu2020What, xu2020neural}.\n\nMoreover, DeepSet cannot learn $g({\\mathcal{G}})$ as the model ignores \\textit{edge information}.\nWe take the hidden representations before the last MLP modules in all three GNNs and compare their predictive power.\n\n\\textbf{Findings.}\nWhile all three GNNs have large test errors under high noise ratios (dotted lines in Figure~\\ref{fig:gnn_structured}), the predictive power in representations from GNNs more aligned with the target function --- DeepSet (solid yellow line) and max-max GNN (solid red line) --- significantly reduces the original models' test errors by factors of 10 and 1000, respectively. \nYet, for the max-sum GNN, which is more aligned with the noise function, training with noisy labels indeed destroys the internal representations such that they are no longer able to predict the target function --- its representations' predictive power (solid purple line) barely decreases test error. \nWe also evaluate the predictive power of these three types of randomly-initialized GNNs, and the results are in Table~\\ref{table:structured_noise} (Appendix~\\ref{appsec:random}).\n\\section{Experiments on Vision Datasets}\nMany noisy label training methods are benchmarked on image classification; \nthus, we also validate our hypothesis on image domains.\nWe compare the representations' predictive power between MLPs and \\mbox{CNN-based} networks using 10\\% clean labels (all models are trained until they perfectly fit the noisy labels, i.e., until they achieve close to 100\\% training accuracy). 
\nWe further evaluate the predictive power in representations learned with SOTA methods.\nThe predictive power on networks that align well with the target function can further improve the SOTA method's test performance (Section~\\ref{sec:eval_sota}).\nThe final model also outperforms some sophisticated noisy label training methods that likewise use clean labels (Appendix~\\ref{appsec:compare_sota}). \nAll our experimental details are in Appendix~\\ref{suppsec:vision_training_details}.\n\\vspace{-0.5em}\n\\subsection{MLPs vs. \\mbox{CNN-based} networks}\nTo validate our hypothesis, we consider several target functions with different levels of alignment to MLPs and CNN-based networks. \nAll models in this section are trained with standard procedures without any robust training methods or robust losses.\n\\paragraph{Datasets and Label Noise.} We consider two types of target functions: one aligns better with \\mbox{CNN-based} models than MLPs, and the other aligns better with MLPs than \\mbox{CNN-based} networks. 
\n\n\\textbf{1).} \\textbf{CIFAR-10} and \\textbf{CIFAR-100}~\\cite{krizhevsky2009learning} come with clean labels.\nTherefore, we generate two types of noisy labels following existing works: (1) \\textbf{uniform label noise} randomly replaces the true labels with labels drawn uniformly from all classes, and (2) \\textbf{flipped label noise} swaps the labels between similar classes (e.g., deer$\\leftrightarrow$horse, dog$\\leftrightarrow$cat) on CIFAR-10~\\cite{li2020dividemix}, or flips the labels to the next class on CIFAR-100~\\cite{natarajan2013learning}.\n\n\\textbf{2).} \\textbf{CIFAR-Easy} is a dataset modified from CIFAR-10 with labels generated by the procedure in Figure~\\ref{fig:our_cifar10_demo} --- the class\/label of each image depends on the location of a special pixel.\nWe consider three types of noisy labels on CIFAR-Easy: (1) \\textbf{uniform label noise} and (2) \\textbf{flipped label noise} (described above); and (3) \\textbf{instance-dependent label noise}, which takes the original image classification label as the noisy label.\n\n\\begin{figure}\n\\centering\n\\begin{minipage}{.45\\textwidth}\n \\centering\n \\includegraphics[width=1.0\\textwidth]{cat.pdf}\n \\captionof{figure}{\\textbf{Synthetic Labels on CIFAR-Easy.} For each image, we mask a pixel at the top left corner with pink color. 
Then the synthetic label for this image is the location of the pink pixel\/mask (i.e., the cat image in the above example has label 4).}\n \\label{fig:our_cifar10_demo}\n\\end{minipage}%\n\\hspace{1.0em}\n\\begin{minipage}{.45\\textwidth}\n \\centering\n \\includegraphics[width=1.0\\textwidth]{sample_complexity.pdf}\n \\captionof{figure}{\\textbf{Sample complexity of MLPs and CNNs on CIFAR-Easy.} Both MLPs and CNNs can achieve 100\\% test accuracy given sufficient training examples, but MLPs need far fewer examples than CNNs and thus are more sample-efficient on CIFAR-Easy.}\n \\label{fig:sample_complex}\n\\end{minipage}\n\\vspace{-1em}\n\\end{figure}\n\n\\input{cifar_baseline_compare}\n\\input{our_cifar10_compare}\n\\input{baseline}\n\\input{our_cifar10}\n\n\\paragraph{Architectures.}\n\\label{exp:our_cifar10}\nOn CIFAR-10\/100, we evaluate the predictive power in representations for three architectures: 4-layer MLPs, 9-layer CNNs, and 18-layer PreAct ResNets~\\cite{he2016identity}.\nOn CIFAR-Easy, we compare between MLPs and CNNs. 
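The CIFAR-Easy labeling procedure in the figure above can be sketched as follows; the 2x5 grid of candidate positions and the pink RGB value are our own assumptions, since the paper only specifies that the label is the masked pixel's location:

```python
# Sketch of the CIFAR-Easy labeling procedure: mask one pixel near the top-left
# corner with pink; the synthetic label is the index of the masked location.
PINK = (255, 105, 180)                                     # assumed "pink" RGB value
LOCATIONS = [(r, c) for r in range(2) for c in range(5)]   # 10 positions -> 10 classes

def make_easy_example(image, label):
    """Mask the pixel at LOCATIONS[label]; image is a 32x32 grid of RGB tuples."""
    r, c = LOCATIONS[label]
    image[r][c] = PINK
    return image, label

# Reproduce the cat example from the caption: synthetic label 4.
image = [[(0, 0, 0) for _ in range(32)] for _ in range(32)]
masked, label = make_easy_example(image, 4)
print(masked[0][4] == PINK, label)  # True 4
```

Because the label depends on a single pixel position rather than on object shape or texture, a fully connected network can fit this target with far fewer samples than a convolutional one.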
\nWe take the representations before the penultimate layer when evaluating the predictive power for these networks.\n\nAs the designs of \\mbox{CNN-based} networks (e.g., CNNs and ResNets) resemble the human visual system, owing to the receptive fields in convolutional layers and the hierarchical extraction of increasingly abstract features~\\cite{lecun1995convolutional, kheradpisheh2016deep}, \\mbox{CNN-based} networks are expected to \\textit{align better} with the target functions than MLPs on image classification datasets (e.g., \\mbox{CIFAR-10\/100}).\n\nOn the other hand, on CIFAR-Easy, while both CNNs and MLPs can generalize perfectly given sufficient training examples, MLPs have a much smaller sample complexity than CNNs (Figure~\\ref{fig:sample_complex}).\nThus, both MLP and CNN are \\textit{well-aligned} with the target function on CIFAR-Easy, but MLP is \\textit{better-aligned} than CNN according to Theorem~\\ref{thm:main}.\nMoreover, since the instance-dependent label on CIFAR-Easy\\ is the original image classification label, CNN is also \\textit{aligned} with this instance-dependent noise function on CIFAR-Easy.\n\n\\paragraph{Experimental Results.}\nFirst, we empirically verify our hypothesis that \\emph{networks better-aligned with the target function have more predictive representations}. 
\nAs expected, across most noise ratios on CIFAR-10\/100, the representations in \\mbox{CNN-based} networks (i.e., CNN and ResNet) are more predictive than those in MLPs (Figure~\\ref{fig:cifar_baseline_compare}) under both types of label noise.\nMoreover, the predictive power in representations learned by less aligned networks (i.e., MLPs) is sometimes even worse than the vanilla-trained models' test performance, suggesting that the noisy representations in less aligned networks may be more corrupted and less linearly separable.\nOn the other hand, across all three types of label noise on CIFAR-Easy, MLPs, which align better with the target function, have more predictive representations than CNNs (Figure~\\ref{fig:our_cifar10_compare}). \n\nWe also observe that \\emph{models with similar test performance can have very different levels of predictive power in their learned representations}. \nFor example, in Figure~\\ref{fig:our_cifar10_compare}, while the test accuracies of MLPs and CNNs are very similar on CIFAR-Easy\\ under flipped label noise (i.e., dotted purple and yellow lines overlap), the predictive power in representations from MLPs is much stronger than that from CNNs (i.e., solid purple lines are much higher than yellow lines).\nThis also suggests that when training with noisy labels, if we do not know which architecture is more aligned with the underlying target function, we can evaluate the predictive power in the architectures' representations to test alignment.\n\nWe further discover that \\textit{for networks well-aligned with the target function, their learned representations are more predictive when the noise function shares more mutual information with the target function}. \nWe compute the empirical mutual information between the noisy training labels and the original clean labels across different noise ratios on various types of label noise. 
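This estimate can be computed directly from the paired label counts; a small sketch of a plug-in estimator (our own implementation, not the paper's code):

```python
import math
from collections import Counter

def empirical_mutual_information(clean, noisy):
    """Estimate I(Y; Y_noisy) in nats from paired label lists via joint counts."""
    n = len(clean)
    joint = Counter(zip(clean, noisy))          # empirical joint distribution
    c_clean, c_noisy = Counter(clean), Counter(noisy)  # empirical marginals
    mi = 0.0
    for (y, y_tilde), cnt in joint.items():
        p_joint = cnt / n
        # p_joint / (p_clean * p_noisy) = cnt * n / (count_clean * count_noisy)
        mi += p_joint * math.log(cnt * n / (c_clean[y] * c_noisy[y_tilde]))
    return mi

clean = [0, 0, 1, 1, 2, 2]
# Noise-free labels: MI equals the label entropy, log(3) nats.
print(empirical_mutual_information(clean, clean))  # 1.0986...
# Heavily corrupted labels carry less information about the clean ones.
print(empirical_mutual_information(clean, [0, 1, 2, 0, 1, 2]))
```

With discrete labels this plug-in estimate is what "empirical mutual information" typically refers to; for large label sets it is biased upward at small sample sizes.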
\nThe predictive power in representations improves as the mutual information increases (Figure~\\ref{fig:mutual_info} in Appendix~\\ref{suppsec:add_exp_results}).\nThis explains why the predictive power for a network is often higher under flipped noise than uniform noise: at the same noise ratio, flipped noise has higher mutual information than uniform noise. \nMoreover, comparing across the three datasets in Figure~\\ref{fig:mutual_info}, we observe that the growth rate of a network's predictive power w.r.t. the mutual information depends on both the intrinsic difficulty of the learning task and the alignment between the network and the target function.\n\n\\vspace{-0.5em}\n\\subsection{Predictive Power in Representations for Models Trained with SOTA Methods}\n\\label{sec:eval_sota}\n\\vspace{-0.5em}\nAs the previous experiments use standard training procedures, we also validate our hypothesis on models learned with SOTA methods for noisy label training. \nWe evaluate the representations' predictive power for models trained with the SOTA method, DivideMix~\\cite{li2020dividemix}, which leverages techniques from semi-supervised learning to treat examples with unreliable labels as unlabeled data. 
\n\nWe compare (1) the test performance for models trained with standard procedures on noisy labels (denoted as \\textbf{Vanilla training}), (2) the SOTA method's test performance (denoted as \\textbf{DivideMix}), and (3) the predictive power in representations from models trained with DivideMix in (2) (denoted as \\textbf{DivideMix's Predictive Power}).\n\nWe discover that \\textit{the effectiveness of DivideMix also depends on the alignment between the network and the target\/noise functions}.\nDivideMix only slightly improves the test accuracy of MLPs on CIFAR-10\/100 (Table~\\ref{table:cifar_baseline}), and DivideMix's predictive power does not improve the test performance of MLPs, either.\nIn Table~\\ref{table:mlp_ourcifar}, DivideMix also barely helps CNNs as they are well-aligned with the instance-dependent noise, where the noisy label is the original image classification label.\n \nMoreover, we observe that \\textit{even for networks well-aligned with the target function, DivideMix may only slightly improve or not improve its test performance at all} (e.g., red entries of DivideMix on MLPs in Table~\\ref{table:mlp_ourcifar}).\nYet, the representations learned with DivideMix can still be very predictive: the predictive power can achieve over 50\\% improvements over DivideMix for CNN-based models on CIFAR-10\/100 (e.g., 80\\% flipped noise), and the improvements can be over 80\\% for MLPs on CIFAR-Easy\\ (e.g., 90\\% uniform noise).\n\nTables~\\ref{table:cifar_baseline} and~\\ref{table:mlp_ourcifar} show that the representations' predictive power on networks well aligned with the target function could further improve SOTA test performance. \nAppendix~\\ref{appsec:compare_sota} further demonstrates that on large-scale datasets with real-world noisy labels, the predictive power in well-aligned networks could outperform sophisticated methods that also use clean labels (Table~\\ref{table:clothing} and Table~\\ref{table:webvision}). 
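Throughout these evaluations, measuring predictive power amounts to fitting a linear probe on frozen representations with a small clean set; a minimal sketch using closed-form ridge regression (the toy dimensions, penalty value, and numpy implementation are our own choices):

```python
import numpy as np

def predictive_power(feats_clean, y_clean, feats_test, y_test, penalty=1.0):
    """Fit a ridge-regression probe on frozen representations using a small
    clean set, then report test MSE; the network itself is never updated."""
    X = np.hstack([feats_clean, np.ones((len(feats_clean), 1))])  # add bias column
    d = X.shape[1]
    # Closed-form ridge solution: w = (X^T X + penalty * I)^{-1} X^T y
    w = np.linalg.solve(X.T @ X + penalty * np.eye(d), X.T @ y_clean)
    Xt = np.hstack([feats_test, np.ones((len(feats_test), 1))])
    return float(np.mean((Xt @ w - y_test) ** 2))

# Toy check: if representations are linear in the target, the probe's error is near zero.
rng = np.random.default_rng(0)
H = rng.normal(size=(200, 8))       # stand-in for frozen hidden representations
y = H @ rng.normal(size=8)          # target that is linear in the features
print(predictive_power(H[:100], y[:100], H[100:], y[100:], penalty=1e-6))  # close to 0
```

A small probe error means the frozen features still linearly encode the target, even when the end-to-end model's predictions are badly corrupted by the noisy labels.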
\n\\section{Concluding Remarks}\n\\vspace{-0.5em}\nThis paper is an initial step towards formally understanding how a network's architecture impacts its robustness to noisy labels. \nWe formalize our intuitions and hypothesize that a network better-aligned with the target function would learn more predictive representations under noisy label training. \nWe prove our hypothesis in a simplified noisy setting and conduct systematic experiments across various noisy settings to further validate it.\n\nOur empirical results along with Theorem~\\ref{thm:main} suggest that knowing more about the structure of the target function can help design more robust architectures. \nIn practice, although an exact mathematical formula for a decomposition of a given target function is often hard to obtain, a high-level decomposition of the target function often exists for real-world tasks \nand will be helpful in designing robust architectures --- a direction undervalued by existing works on learning with noisy labels. \n\n\n\\section{Additional Experimental Results}\n\\label{suppsec:add_exp_results}\nIn this section, we include additional experimental results for the predictive power in (a) representations from randomly initialized models (Appendix~\\ref{appsec:random}), (b) representations learned under different types of additive label noise (Appendix~\\ref{appsec:additive}), and (c) representations learned with a robust loss function (Appendix~\\ref{appsec:mae}). \nWe further demonstrate that the predictive power in well-aligned networks could even outperform sophisticated methods that also utilize clean labels (Appendix~\\ref{appsec:compare_sota}). 
\n\n\\subsection{Predictive Power of Randomly Initialized Models}\n\\label{appsec:random}\nWe first evaluate the predictive power of randomly initialized models (a.k.a., untrained models), and we compare their results with GNNs trained on clean data (a.k.a., 0\\% noise ratio).\n\n\\begin{table}[ht]\n \\vspace{-0.5em}\n\t\\centering\n\t\\small\n\t\\caption{\n\t\tPredictive power in representations from random and trained max-sum GNNs on the maximum degree task (Section~\\ref{subsubsec:unstructured_noise}). Notice that lower test MAPE denotes better test performance.\n\t\t}\n\t\\label{table:unstructured_noise}\n\t\\vskip 0.1in\n\t\\begin{tabular}\t{l |c|c }\n\t\t\\toprule\t \t\n\t\t\t\\multirow{2}{*}{\\bf Model} & \\multicolumn{2}{c}{\\bf Test MAPE} \\\\\n\t\t\t\\cmidrule{2-3}\n\t\t\t& Random & Trained \\\\\n\t\t\t\\midrule\t\t\t\n\t\t\tMax-sum GNN & 12.74 $\\pm$ 0.57 & 0.37 $\\pm$ 0.08 \\\\\n\t\t\\bottomrule\n\t\\end{tabular}\n\t\\vspace{-0.5em}\n\\end{table}\t\t\n\n\\begin{table}[ht]\n \\vspace{-0.5em}\n\t\\centering\n\t\\small\n\t\\caption{\n\t\tPredictive power in representations from various types of random and trained GNNs on the maximum node feature task (Section~\\ref{subsubsec:structured_noise}). 
Notice that lower test MAPE denotes better test performance.\n\t\t}\n\t\\label{table:structured_noise}\n\t\\vskip 0.1in\n\t\\begin{tabular}\t{l |c|c|c|c }\n\t\t\\toprule\t \t\n\t\t\t\\multirow{2}{*}{\\bf Model} & \\multicolumn{2}{c|}{\\bf Test MAPE} & \\multicolumn{2}{c}{\\bf Test MAPE (log scale)} \\\\\n\t\t\t\\cmidrule{2-5}\n\t\t\t& Random & Trained & Random & Trained \\\\\n\t\t\t\\midrule\t\t\t\n\t\t\tDeepSet & 5.14e-05 & 1.06e-05 & -4.29 & -4.97 \\\\ \\hline\n\t\t\tMax-max GNN & 0.794 & 0.0099 & -0.10 & -2.00 \\\\ \\hline\n\t\t\tMax-sum GNN & 54.28 & 3.08 & 1.73 & 0.488 \\\\ \n\t\t\\bottomrule\n\t\\end{tabular}\n\t\\vspace{-0.5em}\n\\end{table}\t\n\n\\subsection{Additive Label Noise on Graph Algorithmic Datasets}\n\\label{appsec:additive}\n We conduct additional experiments on additive label noise drawn from distributions with larger means and larger variances. We consider four such distributions: Gaussian distributions $\\mathcal{N}(10,\\,30)$ and $\\mathcal{N}(20,\\,15)$, a long-tailed Gamma distribution with mean equal to $10$: $\\Gamma(2, \\dfrac{1}{15}) - 20$, and a long-tailed t-distribution centered at $10$: $\\mathcal{T}(\\nu=1) + 10$. \n Figure~\\ref{fig:app_maxdeg_unstructured} demonstrates that for a GNN well aligned to the target function, its representations remain highly predictive even under noise distributions with larger (non-zero) means and larger variances. 
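The four additive-noise distributions can be sampled as follows; reading the Gaussian's second argument as a standard deviation and the Gamma's second argument as a rate are our assumptions about the paper's notation:

```python
import numpy as np

rng = np.random.default_rng(0)

def additive_noise(name, size):
    """Draw additive label noise from one of the four distributions considered."""
    if name == "gauss_10_30":                 # N(10, 30)
        return rng.normal(10, 30, size)
    if name == "gauss_20_15":                 # N(20, 15)
        return rng.normal(20, 15, size)
    if name == "gamma":                       # Gamma(2, rate=1/15) - 20, mean 30 - 20 = 10
        return rng.gamma(shape=2, scale=15, size=size) - 20
    if name == "student_t":                   # t(nu=1) + 10, heavy-tailed, centered at 10
        return rng.standard_t(df=1, size=size) + 10
    raise ValueError(name)

clean = np.arange(100, dtype=float)           # e.g. clean regression labels
noisy = clean + additive_noise("gamma", clean.shape)  # labels with additive noise
print(round(additive_noise("gamma", 100_000).mean(), 1))  # close to 10
```

Note that the t-distribution with one degree of freedom is Cauchy, so its sample mean does not settle down; the other three have well-defined means.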
\n\\input{app_gnn_unstructured}\n\n\\vspace{-1em}\n\\subsection{Training with a Robust Loss Function}\n\\label{appsec:mae}\nWe also train the models with a robust loss function, the Mean Absolute Error (MAE):\n\\begin{align}\n \\text{loss} = \\sum_{i=1}^n |y^{(i)}_\\text{true} - y^{(i)}_\\text{pred}|.\n\\end{align}\nWe observe similar trends in the representations' predictive power as when training the models using MSE (Figure~\\ref{fig:app_maxdeg_l1}).\n\n\\begin{figure*}[h!]\n \\centering\n \\begin{subfigure}[b]{0.4\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth]{maxdeg_unstructured_noise_early_stop_l1_improve.pdf}\n \\vspace{-1em}\n \\caption{Test errors of max-sum GNNs on the maximum degree task with additive label noise}\n \\end{subfigure}\n \\hfill\n \\begin{subfigure}[b]{0.44\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth]{maxdeg_structured_noise_l1_loss_improve_on_last_epoch.pdf}\n \\caption{Test errors of three different GNNs on the maximum node feature task with instance-dependent label noise}\n \\end{subfigure}\n \\caption{\\textbf{Predictive power in representations trained with MAE}. For GNNs trained with MAE, the predictive power in representations exhibits similar trends as for models trained with MSE. The robust loss function, MAE, is more helpful in learning more predictive representations under smaller noise ratios. \n }\n \\label{fig:app_maxdeg_l1}\n \\vspace{-0.25em}\n\\end{figure*}\n\n\\input{mutual_information}\n\n\\subsection{Comparing with Sophisticated Methods Using Clean Labels}\n\\label{appsec:compare_sota}\n\\input{cifar10_sym_noise}\n\\input{cifar10_asym_noise}\n\\input{cifar100_sym_noise}\n\\input{cifar100_asym_noise}\n\\input{clothing}\n\\input{webvision}\nIn previous experiments (Section~\\ref{sec:eval_sota}), we have shown that the predictive power in well-aligned models could further improve the test performance of SOTA methods on noisy label training. 
\nAs we use a small set of clean labels to measure the predictive power, we also examine how the improvements obtained by the predictive power compare with those of sophisticated methods that also use clean labels.\n\n\\subsubsection{Sophisticated Methods Using Clean Labels}\nIn our experiments, we consider the following methods that use clean labels: L2R~\\cite{ren2018learning}, MentorNet~\\cite{jiang2018mentornet}, SELF~\\cite{nguyen2019self}, GLC~\\cite{hendrycks2018using}, Meta-Weight-Net~\\cite{shu2019meta}, and IEG~\\cite{zhang2019ieg}.\nBesides, as the SOTA method, DivideMix, keeps dividing the training data into labeled and unlabeled sets during training, we also compare with directly using clean labels in DivideMix: we mark the small set of clean data as labeled data during the semi-supervised learning step in DivideMix.\nWe denote this method as \\textit{DivideMix w\/ Clean Labels (DwC)} and further measure the predictive power in representations learned by DwC.\n\n\\subsubsection{Datasets}\nWe conduct experiments on CIFAR-10\/100 with synthetic noisy labels and on two large-scale datasets with real-world noisy labels: Clothing1M and WebVision.\n\n\\textbf{Clothing1M}~\\cite{xiao2015learning} has real-world noisy labels with an estimated 38.5\\% noise ratio.\nThe dataset includes a small set of human-verified training examples, which we use as clean data.\nFollowing the recent method~\\cite{li2020dividemix}, we use 1000 mini-batches in each epoch to train models on Clothing1M. \n\n\\textbf{WebVision}~\\cite{li2017webvision} also has real-world noisy labels with an estimated 20\\% noise ratio. \nIt shares the same 1000 classes as ImageNet~\\cite{deng2009imagenet}.\nFor a fair comparison, we follow~\\cite{jiang2018mentornet} to create a mini WebVision dataset with the top 50 classes from the Google image subset of WebVision. 
\nWe train all models on the mini WebVision dataset and evaluate on both the WebVision and ImageNet validation sets.\nWe select 100 images per class from the ImageNet training data as clean data.\n\n\\subsubsection{Experimental Settings}\nWe use the same architectures and hyperparameters as DivideMix: an 18-layer {PreAct ResNet~\\cite{he2016identity}} for \\mbox{CIFAR-10\/100}, a ResNet-50 pre-trained on ImageNet for Clothing1M, and Inception-ResNet-V2~\\cite{szegedy2016inception} for WebVision.\nWe use the test accuracy reported in the original papers whenever possible, and the accuracy for L2R~\\cite{ren2018learning} is from~\\cite{zhang2019ieg}. For IEG, we use the reported test accuracy obtained by ResNet-29 rather than WRN28-10, because ResNet-29 has a comparable number of parameters to the \\mbox{PreAct ResNet-18} we use.\n\nAs CIFAR-10\/100 do not have a validation set, we follow previous works to report the averaged test accuracy over the last 10 epochs: we measure the predictive power in representations for models from these epochs and report the averaged test accuracy. \nFor Clothing1M and WebVision, we use the associated validation set to select the best model and measure the predictive power in its representations.\n\n\\subsubsection{Results}\nTables~\\ref{table:cifar10_sym}-\\ref{table:cifar100_asym} show the results on CIFAR-10 and CIFAR-100 with uniform and flipped label noise, where \\textbf{boldfaced numbers} denote test accuracies better than all methods we compared with.\nWe see that across different noise ratios on CIFAR-10\/100 with flipped label noise, for a network well-aligned with the target function, the predictive power in representations remains roughly the same as the test performance of the model trained on clean data, which matches Theorem~\\ref{thm:main_extend}. 
For CIFAR-10 with uniform label noise, the predictive power in representations achieves better test performance using only 10 clean labels per class on most noise ratios; for CIFAR-100 with uniform label noise, the predictive power in representations could achieve better test performance using only 50 labels per class.\n\nMoreover, we observe that adding clean data to the labeled set in DivideMix (DwC) may barely improve the model's test performance when the noise ratio is small and under flipped label noise.\nAt 90\\% uniform label noise, DwC can greatly improve the model's test performance, and the predictive power in representations can achieve an even higher test accuracy with the same set of clean data used to train DwC.\n\nOn Clothing1M, we compare the predictive power in representations learned by DivideMix with existing methods that use the small set of human-verified data: CleanNet~\\cite{lee2018cleannet}, F-correction~\\cite{patrini2017making}, and Self-learning~\\cite{han2019deep}. \nAs these methods also use the clean subset to fine-tune the whole model, we follow similar procedures to fine-tune the model (trained by DivideMix) for 10 epochs and then select the best model based on the validation accuracy to measure the predictive power in its representations.\nThe predictive power in representations could further improve the test accuracy of DivideMix by around 6\\% and outperform IEG, CleanNet, and F-correction (Table~\\ref{table:clothing}). The improved test accuracy is also competitive with~\\cite{han2019deep}, which uses a much more complicated learning framework. \n\nOn WebVision, the predictive power also improves the model's test performance (Table~\\ref{table:webvision}). 
\nThe improvement is less significant than on Clothing1M, as the estimated noise ratio on WebVision (20\\%) is smaller than that on Clothing1M (38.5\\%).\n\\section{Experimental Details}\n\\label{suppsec:exp_details}\n\n\\subsection{Computing Resources}\nWe conduct all the experiments on one NVIDIA RTX 2080 Ti GPU, except for the experiment on the WebVision dataset~\\cite{li2017webvision} (Table~\\ref{table:webvision}), which uses 4 GPUs concurrently.\n\n\\subsection{Measuring the Predictive Power}\nWe use linear regression to train the linear model when measuring the predictive power in representations. For representations from all models except MLPs, we use ordinary least squares linear regression (OLS). When the learned representations are from MLPs, we use ridge regression with \\mbox{penalty = 1}, since we find the linear models trained by OLS may easily overfit the small set of clean labels. \n\n\\subsection{Experimental Details on GNNs}\n\\label{suppsec:gnn_training_details}\n\\paragraph{Common settings.} In the generated datasets, each graph ${\\mathcal{G}}$ is sampled from Erd\\H{o}s-R\\'enyi random graphs with an edge probability uniformly chosen from $\\{0.1, 0.2, \\cdots, 0.9\\}$. This sampling procedure generates diverse graph structures. The training and validation sets contain 10,000 and 2,000 graphs respectively, and the number of nodes in each graph is randomly picked from $\\{20, 21, \\cdots, 40\\}$. The test set contains 10,000 graphs, and the number of nodes in each graph is randomly picked from $\\{50, 51, \\cdots, 70\\}$. 
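The common graph-sampling settings above can be sketched as follows; the adjacency-list encoding and the use of Python's standard random module are our own choices:

```python
import random

def sample_graph(num_nodes_range, rng=random):
    """Sample one Erdos-Renyi graph following the common settings: edge
    probability uniform over {0.1, ..., 0.9}, node count uniform over the
    given inclusive range; returns an adjacency list keyed by node id."""
    n = rng.randint(*num_nodes_range)                 # inclusive on both ends
    p = rng.choice([i / 10 for i in range(1, 10)])    # edge probability
    adj = {u: [] for u in range(n)}
    for u in range(n):
        for v in range(u + 1, n):
            if rng.random() < p:                      # undirected edge u-v
                adj[u].append(v)
                adj[v].append(u)
    return adj

train = [sample_graph((20, 40)) for _ in range(10)]   # paper: 10,000 graphs
test = [sample_graph((50, 70)) for _ in range(10)]    # paper: 10,000 larger graphs
print(all(20 <= len(g) <= 40 for g in train))  # True
print(all(50 <= len(g) <= 70 for g in test))   # True
```

Sampling test graphs with more nodes than training graphs is what makes this an extrapolation (size-generalization) setup rather than an in-distribution one.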
\n\n\\subsubsection{Additive Label Noise} \n\\paragraph{Dataset Details.} In each graph, the node feature $\\x_u$ is a scalar randomly drawn from $\\{1, 2, \\cdots, 100\\}$ for all $u \\in {\\mathcal{G}}$.\n\n\\paragraph{Model and hyperparameter settings.} We consider a 2-layer GNN with sum-aggregation and max-readout (max-sum GNN):\n\\[\n\\bm{h}_G = \\text{MLP}^{(2)} \\Big( \\text{max}_{u \\in {\\mathcal{G}}} \\sum\\limits_{v \\in \\mathcal{N}(u)} \\text{MLP}^{(1)} \\Big({\\bm{h}}_u, {\\bm{h}}_v\\Big) \\Big), \\\\\n{\\bm{h}}_u = \\sum\\limits_{v \\in \\mathcal{N}(u)} \\text{MLP}^{(0)} \\Big({\\bm{x}}_u, {\\bm{x}}_v\\Big).\n\\]\nThe width of all MLP modules is set to $128$. The number of layers is set to $3$ for $\\text{MLP}^{(0)}$ and $\\text{MLP}^{(1)}$, and to $1$ for $\\text{MLP}^{(2)}$.\nWe train the max-sum GNNs with MSE or MAE as the loss function for 200 epochs. We use the Adam optimizer with default parameters, zero weight decay, and initial learning rate set to 0.001. The batch size is set to 64. We early-stop based on a noisy validation set. \n\n\\subsubsection{Instance-Dependent Label Noise} \n\\paragraph{Dataset Details.} \nSince the task is to predict the maximum node feature and we use the maximum degree as the noisy label, the correlation between true labels and noisy labels is very high on large and dense graphs if the node features are uniformly sampled from $\\{1, 2, \\cdots, 100\\}$.\nTo avoid this, we use a two-step method to sample the node features.\nFor each graph ${\\mathcal{G}}$, we first sample a constant upper-bound $M_{\\mathcal{G}}$ uniformly from $\\{20, 21, \\cdots, 100\\}$. 
For each node $u \\in {\\mathcal{G}}$, the node feature ${\\bm{x}}_u$ is then drawn from $\\{1, 2, \\cdots, M_{\\mathcal{G}}\\}$.\n\n\\paragraph{Model and hyperparameter settings.} We consider a 2-layer GNN with sum-aggregation and max-readout (max-sum GNN), a 2-layer GNN with max-aggregation and max-readout (max-max GNN), and a special GNN (DeepSet) that does not use edge information: \n\\[ h_G = \\text{MLP}^{(1)} \\Big( \\text{max}_{u \\in {\\mathcal{G}}} \\ \\text{MLP}^{(0)} \\Big({\\bm{x}}_u\\Big) \\Big). \\] \nThe width of all MLP modules is set to $128$. The number of layers is set to $3$ for $\\text{MLP}^{(0)}, \\text{MLP}^{(1)}$ in max-max and max-sum GNNs and for $\\text{MLP}^{(0)}$ in DeepSet. The number of layers is set to $1$ for $\\text{MLP}^{(2)}$ in max-max and max-sum GNNs and for $\\text{MLP}^{(1)}$ in DeepSet.\nWe train these GNNs with MSE or MAE as the loss function for 600 epochs. We use the Adam optimizer with zero weight decay. \nWe set the initial learning rate to $0.005$ for DeepSet and $0.001$ for max-max GNNs and max-sum GNNs. \nThe models are selected from the last epoch so that they can overfit the noisy labels more.\n\n\\subsection{Experimental Details on Vision Datasets}\n\\label{suppsec:vision_training_details}\n\\paragraph{Neural Network Architectures.} Table~\\ref{tab:models-cnn} describes the 9-layer CNN~\\cite{miyato2018virtual} used on CIFAR-Easy\\ and \\mbox{CIFAR-10\/100}, which contains 9 convolutional layers and 19 trainable layers in total. 
Table~\\ref{tab:models-mlp} describes the 4-layer MLP used on CIFAR-Easy\\ and \\mbox{CIFAR-10\/100}, which has 4 linear layers and ReLU as the activation function.\n\n\\begin{table}[ht]\n \\centering\n \\begin{minipage}[t]{0.45\\textwidth}\n \\caption{9-layer CNN on CIFAR-Easy\\ and \\mbox{CIFAR-10\/100}.}\n \\label{tab:models-cnn}\n \\vskip 0.1in\n \\centering\\small\n \\begin{tabular*}{\\textwidth}{l@{\\extracolsep{\\fill}}c}\n \\toprule\n \\multirow{1}*{Input}\n & 32$\\times$32 Color Image \\\\\n \\midrule\n \\multirow{5}*{Block 1}\n & Conv(3$\\times$3, 128)-BN-LReLU \\\\\n & Conv(3$\\times$3, 128)-BN-LReLU \\\\\n & Conv(3$\\times$3, 128)-BN-LReLU \\\\\n & MaxPool(2$\\times$2, stride = 2) \\\\\n & Dropout(p = 0.25) \\\\\n \\midrule\n \\multirow{5}*{Block 2}\n & Conv(3$\\times$3, 256)-BN-LReLU \\\\\n & Conv(3$\\times$3, 256)-BN-LReLU \\\\\n & Conv(3$\\times$3, 256)-BN-LReLU \\\\\n & MaxPool(2$\\times$2, stride = 2) \\\\\n & Dropout(p = 0.25) \\\\\n \\midrule\n \\multirow{4}*{Block 3}\n & Conv(3$\\times$3, 512)-BN-LReLU \\\\\n & Conv(3$\\times$3, 256)-BN-LReLU \\\\\n & Conv(3$\\times$3, 128)-BN-LReLU \\\\\n & GlobalAvgPool(128) \\\\\n \\midrule\n Score & Linear(128, 10 or 100) \\\\\n \\bottomrule\n \\end{tabular*}\n \n \\end{minipage}\n \\hfill\n \\begin{minipage}[t]{0.45\\textwidth}\n \\caption{4-layer MLP on CIFAR-Easy\\ and \\mbox{CIFAR-10\/100}.}\n \\label{tab:models-mlp}\n \\vskip 0.1in\n \\begin{tabular*}{\\textwidth}{lc}\n \\toprule\n \\multirow{1}*{Input}\n & 32$\\times$32 Color Image \\\\\n \\midrule\n \\multirow{3}*{Block 1}\n & Linear(32$\\times$32$\\times$3, 512)-ReLU \\\\\n & Linear(512, 512)-ReLU \\\\\n & Linear(512, 512)-ReLU \\\\\n \\midrule\n Score & Linear(512, 10 or 100) \\\\\n \\bottomrule\n \\end{tabular*}\n \\end{minipage}\n \\vspace{-1ex}\n\\end{table}\n\n\\paragraph{Vanilla Training.} For models trained with standard procedures, we use SGD with a momentum of 0.9, a weight decay of 0.0005, and a batch size of 128. 
For ResNets and CNNs, the initial learning rate is set to $0.1$ on CIFAR-10\/100 and $0.01$ on CIFAR-Easy. For MLPs, the initial learning rate is set to $0.01$ on CIFAR-10\/100 and $0.001$ on CIFAR-Easy. The initial learning rate is multiplied by 0.99 per epoch on CIFAR-10\/100, and it is decayed by a factor of 10 after 150 and 225 epochs on CIFAR-Easy.\n\n\\paragraph{Training Models with SOTA Methods.} We use the same hyperparameter settings as DivideMix~\\cite{li2020dividemix} to obtain the corresponding trained models and measure the predictive power in representations from these models.\n\nOn CIFAR-10\/100 with flipped noise, we only use the small set of clean labels to train the linear model in our method, and the clean subset is randomly selected from the training data.\nOn CIFAR-10\/100 with uniform noise, the clean labels we use are from examples with the highest model uncertainty~\\cite{lewis1994sequential}. \nBesides the clean set, we also use randomly-sampled training examples labeled with the model's original predictions to train the linear model. \nWe use 5,000 such samples under 20\\%, 40\\%, 50\\%, and 80\\% noise ratios, and 500 such samples under the 90\\% noise ratio. \n\\section{Theoretical Results}\n\\label{suppapp:thm}\nWe first provide a formal version of Theorem~\\ref{thm:xu2020} based on~\\cite{Xu2020What}. Theorem~\\ref{thm:xu2020} connects a network's architectural alignment with the target function to its learned representations' predictive power \\textit{when trained on clean data}. \n\\begin{theorem} \n\\label{thm:main_formal}\n{(Better alignment implies better predictive power on clean training data; \\cite{Xu2020What}). 
} \nFix $\\epsilon$ and $\\delta$.\nGiven a target function $f: \\mathcal{X} \\rightarrow \\mathcal{Y}$ that can be decomposed into functions $f_1, ..., f_n$ and given a network $\\mathcal{N}$, where $\\mathcal{N}_1, ..., \\mathcal{N}_n$ are $\\mathcal{N}$'s modules in sequential order,\nsuppose the training dataset $\\mathcal{S} := \\left\\lbrace \\x_j, y_j \\right\\rbrace_{j=1}^M$ contains $M$ i.i.d. samples drawn from a distribution with clean labels $y_j := f(\\x_j)$. \nThen under the following assumptions, $\\textit{Alignment}(\\mathcal{N}, f, \\epsilon, \\delta) \\leq M$\nif and only if there exists a learning algorithm $A$ such that the network's last module $\\mathcal{N}_n$'s representations learned by $A$ on the training data $\\mathcal{S}$ have predictive power $\\pred_{n}(f, \\mathcal{N}, \\mathcal{S}) \\leq \\epsilon$ with probability $1-\\delta$.\n \\vspace{0.05in} \\\\\n Assumptions: \\vspace{0.05in} \\\\\n \\textbf{(a)} We train each module $\\mathcal{N}_i$ sequentially: for each $\\mathcal{N}_i$, the input samples are $\\{h^{(i-1)}(\\x_j),f_i(h^{(i-1)}(\\x_j))\\}_{j=1}^M$ with $h^{(0)}(\\x) =\\x$.\n Notice that each input $h^{(i-1)}(\\x_j)$ is the output from the previous modules, but its label is generated by the function $f_{i}$ on $h^{(i-1)}(\\x_j)$. \\\\\n \\textbf{(b)} For the clean training set $\\mathcal{S}$, let $\\mathcal{S}' := \\left\\lbrace \\hat{\\x}_j, y_j \\right\\rbrace_{j=1}^M$ denote the perturbed training data ($\\hat{\\x}_j$ and $\\x_j$ share the same label $y_j$).\n Let $f_{\\mathcal{N}, A}$ and $f'_{\\mathcal{N}, A}$ denote the functions obtained by the learning algorithm $A$ operating on $\\mathcal{S}$ and $\\mathcal{S}'$ respectively.\n Then for any $\\x \\in \\mathcal{X}$, $\\| f_{\\mathcal{N}, A}(\\x) - f'_{\\mathcal{N}, A}(\\x) \\| \\leq L_0 \\cdot \\max_{\\x_j \\in \\mathcal{S}} \\| \\x_j - \\hat{\\x}_j \\|$, for some constant $L_0$. 
\\\n \\textbf{(c) } For each module $\\mathcal{N}_i$, let $\\hat{f}_i$ denote its corresponding function learned by the algorithm $A$. Then for any $\\x, \\hat{\\x} \\in \\mathcal{X}$, $ \\| \\hat{f}_i (\\x) - \\hat{f}_i (\\hat{\\x}) \\| \\leq L_1 \\| \\x - \\hat{\\x} \\|$, for some constant $L_1$. \n\\end{theorem}\n\nWe have empirically shown that Theorem~\\ref{thm:main_formal} also holds when we train the models on noisy data. \nMeanwhile, we prove Theorem~\\ref{thm:main_formal} for a simplified noisy setting where the target function and noise function share a common feature space, but have different prediction rules. \nFor example, the target function and noise function share the same feature space under flipped label noise (in the classification setting). Yet, their mappings from the learned features to the associated labels are different.\n\n\\begin{theorem}\n\\label{thm:main_extend}\n{(Better alignment implies better predictive power on noisy training data). } \nFix $\\epsilon$ and $\\delta$. \nLet $\\left\\lbrace \\x_j \\right\\rbrace_{j=1}^M$ be i.i.d. samples drawn from a distribution. \nGiven a target function $f: \\mathcal{X} \\rightarrow \\mathcal{Y}$ and a noise function $g: \\mathcal{X} \\rightarrow \\mathcal{Y}$, let $y := f(\\x)$ denote the true label for an input $\\x$, and $\\hat{y} := g(\\x)$ denote the noisy label of $\\x$. \nLet $\\hat{\\mathcal{S}} := \\lbrace (\\x_j, y_j) \\rbrace_{j=1}^N \\bigcup \\ \\lbrace (\\x_j, \\hat{y}_j) \\rbrace_{j=N+1}^M$ denote a noisy training set with $M-N$ noisy samples for some $N \\in \\{1,2,\\cdots,M\\}$.\nGiven a network $\\mathcal{N}$ with modules $\\mathcal{N}_i$, suppose $\\mathcal{N}$ is well-aligned with the target function $f$ (i.e., the alignment between $\\mathcal{N}$ and $f$ is at most $M$ --- $\\textit{Alignment}(\\mathcal{N}, f, \\epsilon, \\delta) \\leq M$). 
\nThen under the same assumptions as in Theorem~\\ref{thm:main_formal} and the additional assumptions below, there exist a learning algorithm $A$ and a module $\\mathcal{N}_i$ such that when training the network $\\mathcal{N}$ on the noisy data $\\hat{\\mathcal{S}}$ with algorithm $A$, the representations from its $i$-th module have predictive power $\\pred_{i}(f, \\mathcal{N}, \\mathcal{C}) \\leq \\epsilon$ with probability $1-\\delta$, where $\\mathcal{C}$ is a small set of clean data with a size greater than the number of dimensions in the output of module $\\mathcal{N}_i$. \n\n\\vspace{0.05in} \nAdditional assumptions (a simplified noisy setting): \\vspace{0.05in} \\\\\n\\textbf{(a)} There exists a function $h$ on the input domain $\\mathcal{X}$ such that the target function $f: \\mathcal{X} \\rightarrow \\mathcal{Y}$ and the noise function $g: \\mathcal{X} \\rightarrow \\mathcal{Y}$ can be decomposed as: $f(\\x) = f_r(h(\\x))$ with $f_r$ being a linear function and $g(\\x) = g_r(h(\\x))$ for some function $g_r$. \\\\\n\\textbf{(b)} $f_r$ is a linear map from a high-dimensional space to a low-dimensional space. \\\\ \n\\textbf{(c)} The loss function used in measuring the predictive power is the mean squared error (denoted by $\\|\\cdot\\|$).\n\\end{theorem}\n\n\\textbf{Remark.} Theorem~\\ref{thm:main_extend} suggests that the representations' predictive power for models well aligned with the target function should remain roughly similar across different noise ratios under flipped label noise. Empirically, we observe similar phenomena in Figures~\\ref{fig:cifar_baseline_compare}-\\ref{fig:our_cifar10_compare}, and in Tables~\\ref{table:cifar10_asym} and~\\ref{table:cifar100_asym}. 
\nSome discrepancy between the experimental and theoretical results could exist under vanilla training, as Theorem~\\ref{thm:main_extend} assumes sequential training, which is different from standard training procedures.\n\n\\paragraph{Proof of Theorem~\\ref{thm:main_extend}.} According to the definition of alignment in Definition~\\ref{def:alignment}, since $\\textit{Alignment}(\\mathcal{N}, f, \\epsilon, \\delta) \\leq M$ and $f(\\x) = f_r(h(\\x))$, we can find a sub-structure (denoted as $\\mathcal{N}_{sub}$) in the network $\\mathcal{N}$ with sequential modules $\\{\\mathcal{N}_1, \\cdots, \\mathcal{N}_i\\}$ such that $\\mathcal{N}_{sub}$ can efficiently learn the function $h$ (i.e., the sample complexity for $\\mathcal{N}_{sub}$ to learn $h$ is no larger than $M$). \nAccording to Theorem~\\ref{thm:main_formal}, when we apply sequential learning to train $\\mathcal{N}_{sub}$ with labels $h(\\x)$, the representations of $\\mathcal{N}_{sub}$ will have predictive power $\\pred_{i}(h, \\mathcal{N}_{sub}, \\mathcal{C}) \\leq \\epsilon$ with probability $1-\\delta$. \n\nSince for each input $\\x$ in the noisy training data $\\hat{\\mathcal{S}}$, its label can be written as $f_r(h(\\x))$ (if it is clean) or $g_r(h(\\x))$ (if it is noisy), when the network $\\mathcal{N}$ is trained on $\\hat{\\mathcal{S}}$ using sequential learning, its sub-structure $\\mathcal{N}_{sub}$ can still learn $h$ efficiently (i.e., $\\mathcal{M}_{A} (h, \\mathcal{N}_{sub}, \\epsilon, \\delta) \\leq M$ for some learning algorithm $A$). Thus, the representations learned from the noisy training data $\\hat{\\mathcal{S}}$ can still be very predictive (i.e., $\\pred_{i}(h, \\mathcal{N}_{sub}, \\mathcal{C}) \\leq \\epsilon$ with probability $1-\\delta$). 
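The closed-form regression step used in the remainder of the proof can be checked numerically: when the clean set is larger than the feature dimension and the labels are exactly linear in the features, solving the normal equations recovers the linear map exactly. A minimal sketch with synthetic, purely illustrative data (the feature values and coefficients below are ours, not from our experiments):

```python
# Sketch: exact recovery of a linear f_r from clean samples (illustrative data).
# f_r(h) = a*h1 + b*h2 with unknown coefficients; |C| = 4 > 2 = feature dim.
H = [(1.0, 0.0), (0.0, 1.0), (1.0, 1.0), (2.0, -1.0)]   # features h(x), x in C
a_true, b_true = 1.5, -2.0                               # the linear map f_r
Y = [a_true * h1 + b_true * h2 for h1, h2 in H]          # clean labels f_r(h(x))

# Closed-form least squares: solve the 2x2 normal equations H^T H w = H^T Y.
s11 = sum(h1 * h1 for h1, _ in H)
s12 = sum(h1 * h2 for h1, h2 in H)
s22 = sum(h2 * h2 for _, h2 in H)
t1 = sum(h1 * y for (h1, _), y in zip(H, Y))
t2 = sum(h2 * y for (_, h2), y in zip(H, Y))
det = s11 * s22 - s12 * s12
a_hat = (s22 * t1 - s12 * t2) / det
b_hat = (s11 * t2 - s12 * t1) / det
assert abs(a_hat - a_true) < 1e-9 and abs(b_hat - b_true) < 1e-9
```

Since the noiseless over-determined system is consistent, the least-squares minimizer coincides with the true coefficients; this mirrors the claim that a linear $f_r$ is generalized exactly once the clean set exceeds the input dimension of $f_r$.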
\n\nSince $f_r$ is a linear map from a high-dimensional space to a low-dimensional space, and the clean data $\\mathcal{C}$ has enough samples to learn $f_r$ ($|\\mathcal{C}|$ is larger than the input dimension of $f_r$), the linear model $L$ learned by linear regression can also recover $f_r$ (since linear regression has a closed-form solution in this case, as the system is over-determined).\nTherefore, as $\\pred_{i}(h, \\mathcal{N}_{sub}, \\mathcal{C}) \\leq \\epsilon$, $\\pred_{i}(f, \\mathcal{N}_{sub}, \\mathcal{C}) \\leq \\epsilon$ also holds.\nNotice that $\\pred_{i}(f, \\mathcal{N}_{sub}, \\mathcal{C}) = \\pred_{i}(f, \\mathcal{N}, \\mathcal{C})$ as $\\mathcal{N}_{i}$ is also the $i$-th module in $\\mathcal{N}$.\nHence, we have shown that there exists some module $\\mathcal{N}_i$ such that $\\pred_{i}(f, \\mathcal{N}, \\mathcal{C}) \\leq \\epsilon$ with probability $1-\\delta$.\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}
dy$ be the standard area 2-form on $\\plane$.\nA diffeomorphism $F\\colon\\disc \\to\\disc$ is said to be symplectic if $F^*\\omega = \\omega$ --- area and orientation preserving ---\nand is referred to as a \\emph{symplectomorphism} of $\\disc$.\nSymplectomorphisms of the 2-disc form a group which is denoted by $\\Symp(\\disc)$. \nA diffeomorphism $F$\nis \\emph{Hamiltonian} if it is given as the time-1 mapping of a Hamiltonian system\n\\begin{equation}\\label{HE} \n \\begin{aligned}\n \\dot{x}&=\\partial_y H(t,x,y); \\\\ \\dot{y}&=-\\partial_xH(t,x,y),\n \\end{aligned}\n\\end{equation}\nwhere $ H \\in C^{\\infty}(\\rr\\times \\disc)$ is the Hamiltonian function with the additional property that $H(t, \\cdot)|_{\\partial \\disc} = const.$ for all $ t \\in \\rr$. The set of Hamiltonians satisfying these requirements is denoted by ${\\mathcal{H}}(\\disc)$ and\nthe associated flow of \\eqref{HE} is denoted by $\\psi_{t,H}$. The group $\\Ham(\\disc)$ signifies the group of Hamiltonian diffeomorphisms of $\\disc$.\n Hamiltonian diffeomorphisms are symplectic by construction. For the 2-disc these notions are equivalent, i.e. $\\Symp(\\disc) = \\Ham(\\disc)$, and we may therefore study Hamiltonian systems in order to prove properties about symplectomorphisms of $\\disc$, cf.\\ \\cite{Boyland2005}, and Appendix \\ref{sec:sympMCG}.\n\nA subset $B\\subset \\disc$ is an invariant set for $F$ if $F(B) = B$.\n We are interested in finite invariant sets.\n Such invariant sets consist of periodic points, i.e.\npoints $z\\in\\disc$ such that $F^k(z) = z$ for some $k\\ge 1$.\nSince $\\partial \\disc$ is also invariant, periodic points are either in ${\\rm int~}\\disc$ or in $\\partial \\disc$.\n\nThe main result of this paper concerns a\n \\emph{forcing} problem. Given a finite invariant set $B\\subset \\inter \\disc$ for $F \\in \\Symp(\\disc)$, do there \nexist additional periodic points? 
More generally, does there exist a finite invariant set $A\\subset\\inter \\disc$, with $A\\cap B=\\varnothing$? This is much like the analogous question for discrete dynamics on an interval, where \nthe famous Sharkovskii theorem establishes a forcing order among periodic points based on their period.\nIn the 2-dimensional situation such an order is much harder to establish, cf.\\ \\cite{Boyland2005}.\nThe main results are based on braiding properties of periodic points and are stated and proved in Section \\ref{sec:main1}.\nThe braid invariants introduced in this paper add additional information to existing invariants in the area-preserving case.\nFor instance, Example \\ref{exm:exist3} describes a braid class which forces additional invariant sets solely in the area-preserving case,\nhence extending the non-symplectic methods described in \\cite{JiangZheng}.\n\n\nThe theory in this paper can be further\ngeneralized to include symplectomorphisms of bounded subsets of $\\rr^2$ with smooth boundary, e.g.\\ annuli, and symplectomorphisms of $\\rr^2$.\n\n\\vskip.3cm\n\n{\\bf Acknowledgements:} AC was supported by the Foundation for Polish Science under the MPD Programme `Geometry\nand Topology in Physical Models', co-financed by the EU European Regional Development Fund,\nOperational Program Innovative Economy 2007-2013.\n\n\\section{Mapping class groups}\n\\label{sec:discinv}\nA priori knowledge of finite invariant sets $B$ for $F$\ncategorizes mappings into so-called mapping classes.\nTraditionally mapping class groups are defined for orientation preserving homeomorphisms,\n cf.\\ \\cite{G3} for an overview. 
\nDenote by $\\Homeo^+(\\disc)$ the space of orientation preserving homeomorphisms and by $\\Homeo^+_0(\\disc)$ the homeomorphisms that leave the boundary pointwise invariant.\nTwo homeomorphisms $F,G\\in \\Homeo^+(\\disc)$ are isotopic if there exists an isotopy $\\phi_t$, with $\\phi_t\\in\\Homeo^+(\\disc)$ for all $t\\in [0,1]$,\nsuch that $\\phi_0=F$ and $\\phi_1 = G$.\nThe equivalence classes in $\\pi_0\\bigl(\\Homeo^+(\\disc)\\bigr) = \\Homeo^+(\\disc)\/\\!\\!\\sim$ are called \\emph{mapping classes} and form a group under composition. The latter is referred to as the \\emph{mapping class group} of the 2-disc and is denoted by $\\Mod(\\disc)$. \nFor homeomorphisms that leave the boundary pointwise invariant the mapping class group is denoted by $\\Mod_0(\\disc) = \\pi_0\\bigl(\\Homeo^+_0(\\disc)\\bigr)$. In Appendix \\ref{sec:MCGclbr} we provide proofs of the relevant facts about mapping class groups.\n\\begin{prop}\n\\label{prop:MCG11}\nBoth mapping class groups $\\Mod(\\disc)$ and $\\Mod_0(\\disc)$ are trivial.\n\\end{prop}\nThe mapping class groups $\\Mod(\\disc)$ and $\\Mod_0(\\disc)$ may also be defined using diffeomorphisms, cf.\\ Appendix \\ref{sec:MCGclbr}. \nIn Proposition \\ref{prop:mapclass1}, we show that $\\pi_0\\bigl(\\Symp(\\disc)\\bigr) = \\Mod(\\disc)$ and in Proposition\n \\ref{prop:mapclass3} we show that $\\Ham(\\disc) = \\Symp(\\disc)$, which implies that every homeomorphism or diffeomorphism is isotopic to\n a Hamiltonian symplectomorphism.\n\n\\begin{prop}\n\\label{prop:MCG11a}\n$\\pi_0\\bigl(\\Symp(\\disc)\\bigr) = \\pi_0\\bigl(\\Ham(\\disc)\\bigr) =\\Mod(\\disc)\\cong 1$.\n\\end{prop}\n\n\n\nMore refined information about mapping classes is obtained by considering finite invariant sets $B$.\nThis leads to the notion of \\emph{relative mapping classes}. 
\nTwo homeomorphisms $F,G\\in \\Homeo^+(\\disc)$ are of the same mapping class \\emph{relative to} $B$ if there\nexists an isotopy $\\phi_t$, with $\\phi_t\\in\\Homeo^+(\\disc)$ and $\\phi_t(B) = B$ for all $t\\in [0,1]$, such that\n$\\phi_0=F$ and $\\phi_1=G$. The subgroup of such homeomorphisms is denoted by $\\Homeo^+(\\disc\\rel B)$ and $\\Homeo^+_0(\\disc\\rel B)$\nin case $\\partial\\disc$ is pointwise invariant.\nThe associated mapping class groups are denoted by \n$\\Mod(\\disc\\rel B) = \\pi_0\\bigl(\\Homeo^+(\\disc\\rel B) \\bigr)$ and\n$\\Mod_0(\\disc\\rel B) = \\pi_0\\bigl(\\Homeo^+_0(\\disc\\rel B) \\bigr)$ respectively.\n\n\\begin{prop}\n\\label{prop:MCG12}\n$\\Mod(\\disc\\rel B)\\cong {\\mathscr{B}}_m\/Z({\\mathscr{B}}_m)$ and $\\Mod_0(\\disc\\rel B)\\cong {\\mathscr{B}}_m$,\nwhere ${\\mathscr{B}}_m$ is the Artin braid group, with $m = \\# B$, and $Z({\\mathscr{B}}_m)$ is the center of the braid group.\n\\end{prop}\n\nLet $\\C_m\\disc$ be the \\emph{configuration space} of unordered configurations of $m$ points in\n$\\disc$. 
\n\\emph{Geometric braids} on $m$ strands on $\\disc$ are closed loops in $\\C_m\\disc$ based at $B_0 = \\{z_1,\\cdots,z_m\\}$, where the points $z_i$ are defined\nas follows: $z_i = (x_i,0)$, $x_0=-1$, and $x_{i+1}= x_i + 2\/(m+1)$.\nThe \\emph{classical braid group} on $\\disc$ is the fundamental group $\\pi_1\\bigl(\\C_m\\disc,B_0\\bigr)$ and is denoted by ${\\mathcal{B}}_m\\disc$.\nThe (algebraic) \\emph{Artin braid group} ${\\mathscr{B}}_{m}$ is the group generated by the $m-1$ generators $\\sigma_{i}$, modulo the\nfollowing relations:\n\\begin{align}\\label{eqn:braidrel}\n \\begin{cases}\n \\sigma_{i} \\sigma_{j} = \\sigma_{j} \\sigma_{i}, & \\ |i-j| \\geq 2,\\ i,j \\in \\{1, \\dots ,m-1\\} \\\\\n \\sigma_{i} \\sigma_{i+1} \\sigma_{i} = \\sigma_{i+1} \\sigma_{i} \\sigma_{i+1}, & \\ 1\\le i \\le m-2.\n \\end{cases}\n\\end{align} \nFull twists are denoted algebraically by $\\square= (\\sigma_{1} \\dots \\sigma_{m-1})^{m}$ and generate the center of the braid group ${\\mathscr{B}}_m$.\nWords consisting only of the $\\sigma_i$'s (not their inverses), subject to the relations in \\eqref{eqn:braidrel}, form a monoid\nwhich is called the \\emph{positive braid monoid} ${\\mathscr{B}}_m^+$.\n\nThere exists a canonical isomorphism ${\\bm{i}}_m\\colon {\\mathscr{B}}_m \\to {\\mathcal{B}}_m\\disc$, cf.\\ \\cite[Sect.\\ 1.4]{Birman}.\nFor closed loops ${\\bm{\\beta}}(t)$ based at $B\\in \\C_m\\disc$ we have a canonical isomorphism ${\\bm{j}}_B\\colon\\pi_1\\bigl(\\C_m\\disc,B\\bigr)\\to \\pi_1\\bigl(\\C_m\\disc,B_0\\bigr) ={\\mathcal{B}}_m\\disc$.\nLet $p\\colon [0,1]\\to \\C_m\\disc$ be a path connecting $B_0$ to $B$, then define ${\\bm{j}}_B\\bigl([{\\bm{\\beta}}]_B\\bigr) := [(p\\cdot {\\bm{\\beta}})\\cdot p^*]_{B_0}\n= [p\\cdot({\\bm{\\beta}}\\cdot p^*)]_{B_0}$, where $p^*$ is the inverse path connecting $B$ to $B_0$. 
The definition of ${\\bm{j}}_B$ is independent of the chosen path $p$.\nThis yields the isomorphism \n$\n\\imath_B = {\\bm{i}}_m^{-1}\\circ {\\bm{j}}_B\\colon \\pi_1\\bigl(\\C_m\\disc,B\\bigr) \\to\n{\\mathscr{B}}_m.\n$\n\nThe construction of the isomorphism $\\Mod_0(\\disc\\rel B) \\cong {\\mathcal{B}}_{m}\\disc$ can be understood as follows, cf.\\ \\cite{Birman}, \\cite{Birman2}. For \n$F\\in \\Homeo^+_0(\\disc\\rel B)$\nchoose an isotopy $\\phi_t\\in \\Homeo^+_0(\\disc), \\ t \\in [0,1]$, \nsuch that $\\phi_1=F$.\nSuch an isotopy exists since $\\Homeo^+_0(\\disc)$ is contractible, cf.\\ Proposition \\ref{prop:MCG11}.\nFor $G\\in [F]\\in \\Mod_0(\\disc\\rel B)$, the composition and scaling of the isotopies define isotopic braids based at $B\\in \\C_m\\disc$.\nThe isomorphism $\\jmath_B\\colon \\Mod_0(\\disc\\rel B) \\to {\\mathscr{B}}_m$ is given by $\\jmath_B([F]) = \\imath_B\\bigl(d_*^{-1}([F])\\bigr)= \\imath_B([{\\bm{\\beta}}]_B)$, with ${\\bm{\\beta}}(t) = \\phi_t(B)$\nthe geometric braid generated by $\\phi_t$. The isomorphism $d_*$ is given in Appendix \\ref{subsec:braidMCG} and $[{\\bm{\\beta}}]_B$ denotes the homotopy class in $\\pi_1\\bigl(\\C_m\\disc,B\\bigr)$.\nFor $\\Mod(\\disc\\rel B)$ we use the same notation for the isomorphism, which is given by\n\\[\n\\jmath_B\\colon\\Mod(\\disc\\rel B) \\cong {\\mathscr{B}}_{m}\/Z({\\mathscr{B}}_m),\\quad [F] \\mapsto \\jmath_B([F])= \\beta\\!\\!\\!\\!\\mod\\square,\n\\]\nwhere $\\beta = \\imath_B\\bigl( [{\\bm{\\beta}}]_B\\bigr)$.\nThe above mapping class groups can also be defined using diffeomorphisms and symplectomorphisms. 
\n\n\\begin{prop}\n\\label{prop:MCG12a} \n $ \\pi_0\\bigl(\\Ham(\\disc\\rel B)\\bigr) =\\Mod(\\disc\\rel B)\\cong {\\mathscr{B}}_m\/Z({\\mathscr{B}}_m)$.\n\\end{prop}\nIn Appendix \\ref{sec:sympMCG} we show that $\\pi_0\\bigl(\\Symp(\\disc\\rel B)\\bigr) = \\Mod(\\disc\\rel B)$ and that $\\Symp(\\disc\\rel B) = \\Ham(\\disc\\rel B)$,\nand therefore that every mapping class can be represented by Hamiltonian symplectomorphisms.\n\n\n\n\n\\section{Braid classes}\n\\label{subsec:2color}\nConsidering free loops in a configuration space as opposed to based loops leads to classes of closed braids, which are the key tool for studying periodic points.\n\n\n\n\\subsection{Discretized braids}\n\\label{subsec:discbr}\nFrom \\cite{BraidConleyIndex} we recall the notion of positive piecewise linear braid diagrams and discretized braids.\n\\begin{defn}\n\\label{PL}\nThe space of {\\em discretized period $d$ closed braids on $m$ strands},\ndenoted $\\Conf^d_m$, is the space of all pairs $({\\bm{b}},\\tau)$ where\n$\\tau\\in S_m$ is a permutation on $m$ elements, and ${\\bm{b}}$ is an \nunordered set of $m$ {\\em strands}, ${\\bm{b}}=\\{{\\bm{b}}^\\mu\\}_{\\mu=1}^m$,\ndefined as follows:\n\\begin{enumerate}\n\\item[(a)]\n\teach strand \n\t${\\bm{b}}^\\mu=(x^\\mu_0,x^\\mu_1,\\ldots,x^\\mu_d)\\in\\rr^{d+1}$\n\tconsists of $d+1$ {\\em anchor points} $x_j^\\mu$;\n\\item[(b)] $x^\\mu_d = x^{\\tau(\\mu)}_0$\n\tfor all $\\mu=1,\\ldots,m$;\n\\item[(c)]\n\tfor any pair of distinct strands ${\\bm{b}}^\\mu$ and ${\\bm{b}}^{\\mu'}$\n\tsuch that $x^\\mu_j=x^{\\mu'}_j$ for some $j$,\n\tthe \\emph{transversality} condition\n\t $\\bigl(x^\\mu_{j-1}-x^{\\mu'}_{j-1}\\bigr)\n\t\\bigl(x^\\mu_{j+1}-x^{\\mu'}_{j+1}\\bigr) < 0$ holds.\n\\end{enumerate}\n\\end{defn}\n\n\\begin{rem}\nTwo discrete braids $({\\bm{b}},\\tau)$ and $( {\\bm{b}}', \\tau')$ are close if the strands ${\\bm{b}}^{\\zeta(\\mu)}$ and $ {\\bm{b}}'^\\mu$\nare close in $\\rr^{md}$ for some permutation $\\zeta$ such that $
\\tau'=\\zeta\\tau\\zeta^{-1}$.\nWe suppress the permutation $\\tau$ from the notation. \nPresentations via the braid monoid ${\\mathscr{B}}_m^+$ store the permutations.\n\\end{rem}\n\n\\begin{defn}[cf.\\ \\cite{BraidConleyIndex}]\n\\label{defn:closure}\nThe closure $\\bar\\Conf_m^d$ of the space $\\Conf_m^d$ consists of pairs $({\\bm{b}},\\tau)$ for which (a)-(b) in Definition \\ref{PL} are satisfied.\n\\end{defn}\n\nThe path components of $\\Conf_m^d$ are the \\emph{discretized braid classes} $[{\\bm{b}}]$.\nBeing in the same path connected component is an equivalence relation on $\\Conf_m^d$, where the braid classes are the\nequivalence classes expressed by the notation ${\\bm{b}},{\\bm{b}}'\\in [{\\bm{b}}]$, and ${\\bm{b}}\\sim{\\bm{b}}'$. \nThe associated permutations $\\tau$ and $\\tau'$ are conjugate. A path connecting ${\\bm{b}}$ and ${\\bm{b}}'$ is called a \\emph{positive isotopy}, and equivalent braids are referred to as \\emph{positively isotopic}.\n\nTo a configuration ${\\bm{b}}\\in\\Conf_m^d$ one can associate a \n\\emph{piecewise linear braid diagram} $\\Bd({\\bm{b}})$. 
For \neach strand ${\\bm{b}}^\\mu\\in {\\bm{b}}$, consider the piecewise-linear (PL) \ninterpolation\n\\begin{equation}\\label{interpolate1}\n\\Bd^{\\mu}(t) := x^\\mu_{\\floor{d\\cdot t}}+(d\\cdot t-\\floor{d\\cdot t})\n\t(x^\\mu_{\\ceil{d\\cdot t}}-x^\\mu_{\\floor{d\\cdot t}}),\n\\end{equation}\nfor $t\\in[0,1]$.\nThe braid diagram $\\Bd({\\bm{b}})$ is then defined to be the \nsuperimposed graphs of all the functions $\\Bd^{\\mu}(t)$.\nA braid diagram $\\Bd({\\bm{b}})$ is not only a good bookkeeping tool for keeping track of the strands\nin ${\\bm{b}}$, but also naturally plays the role of a braid diagram projection with only positive intersections, cf.\\ Section \\ref{subsec:discbrinv12}.\n\nThe set of $t$-coordinates of intersection points in $\\Bd({\\bm{b}})$ is denoted by $\\{t_i\\}$, $i=1,\\cdots,|{\\bm{b}}|$, where $|{\\bm{b}}|$ is the total number of\nintersections in $\\Bd({\\bm{b}})$ counted with multiplicity. The latter is also referred to as the \\emph{word metric} and is an invariant for ${\\bm{b}}$.\nA discrete braid ${\\bm{b}}$ is \\emph{regular} if all points $t_i$ and anchor points $x_j^\\mu$ are distinct.\nThe regular discrete braids in $[{\\bm{b}}]$ form a dense subset and every discrete braid is positively isotopic to a regular discrete braid. \nTo a regular discrete braid ${\\bm{b}}$ one can assign a unique positive word $\\beta = \\beta({\\bm{b}})$ defined as follows:\n\\begin{equation}\n\\label{eqn:word1}\n{\\bm{b}} \\mapsto \\beta({\\bm{b}}) = \\sigma_{k_1} \\cdots \\sigma_{k_\\ell},\n\\end{equation}\n where $k_i$ and $k_i +1$ are the positions that intersect at $t_i$, cf.\\ \\cite[Def.\\ 1.13]{Dehornoy1}. 
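The assignment of a positive word to a regular discrete braid is algorithmic: within each discretization step the strands are linear, so every crossing pair has a unique crossing time; sorting the crossings by time and emitting the generator indexed by the positions of the crossing pair produces the word. A hedged computational sketch (the function name and data layout are ours, not from the cited references):

```python
def braid_word(strands):
    """Read off the positive word beta(b) of a regular discretized braid.

    `strands` is a list of anchor-point tuples (x_0, ..., x_d), one per strand.
    Within step j the strands are linear, so a crossing of a pair is detected
    by a sign change of their difference across the step, and its time is the
    intersection time of the two line segments.
    """
    m, d = len(strands), len(strands[0]) - 1
    crossings = []
    for j in range(d):
        for a in range(m):
            for b in range(a + 1, m):
                u0 = strands[a][j] - strands[b][j]
                u1 = strands[a][j + 1] - strands[b][j + 1]
                if u0 * u1 < 0:                  # transversal crossing in step j
                    crossings.append((j + u0 / (u0 - u1), a, b))
    crossings.sort()
    order = sorted(range(m), key=lambda s: strands[s][0])  # strands by position
    word = []
    for _, a, b in crossings:
        i, k = order.index(a), order.index(b)
        assert abs(i - k) == 1, "regularity: crossing strands are adjacent"
        lo = min(i, k)
        word.append(lo + 1)                      # generator sigma_{lo+1}
        order[lo], order[lo + 1] = order[lo + 1], order[lo]
    return word

# Example exm:free2 below: b = {(1,4,1), (2,2,2), (3,3,3)} gives s1 s2^2 s1.
assert braid_word([(1, 4, 1), (2, 2, 2), (3, 3, 3)]) == [1, 2, 2, 1]
```

For a regular braid no two crossings share a time and crossing strands are adjacent just before they meet, so the adjacent-transposition bookkeeping above is well-defined.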
\nOn the positive braid monoid ${\\mathscr{B}}_m^+$ two positive words $\\beta$ and $\\beta'$ are positively equal,\nnotation $\\beta \\doteq\\beta'$, if they represent the same element in ${\\mathscr{B}}_m^+$ using the relations in \\eqref{eqn:braidrel}.\nOn ${\\mathscr{B}}_m^+$ we define an equivalence relation which acts as an analogue of conjugacy in the braid group, cf.\\ \\cite[Sect.\\ 2.2]{BDV}.\nFor a given word $\\sigma_{i_1}\\cdots\\sigma_{i_n}$, define the relation\n\\[\n\\sigma_{i_1}\\sigma_{i_2}\\cdots\\sigma_{i_n} \\equiv \\sigma_{i_2}\\cdots\\sigma_{i_n}\\sigma_{i_1}.\n\\]\n\n\\begin{defn}\n\\label{defn:equiv12}\nTwo positive words $\\beta,\\beta'\\in {\\mathscr{B}}_m^+$ are \\emph{positively conjugate}, notation\n$\\beta \\stackrel{\\mbox{\\tiny\\textnormal{\\raisebox{0ex}[0ex][0ex]{$+$}}}}{\\sim} \\beta'$, if there exists a sequence of words $\\beta_0,\\cdots,\\beta_\\ell\\in {\\mathscr{B}}_m^+$, with $\\beta_0=\\beta$ and $\\beta_\\ell =\\beta'$, such\nthat for all $k$, either $\\beta_k\\doteq \\beta_{k+1}$, or $\\beta_k\\equiv \\beta_{k+1}$.\n\\end{defn}\n\nPositive conjugacy is an equivalence relation on ${\\mathscr{B}}_m^+$ and the set of positive conjugacy classes $\\llbracket\\beta\\rrbracket$ of the braid monoid ${\\mathscr{B}}_m^+$ is denoted by\n$\\CC {\\mathscr{B}}_m^+$. \n\nThe above-defined assignment ${\\bm{b}} \\mapsto \\beta({\\bm{b}})$ can be extended to all discrete braids. 
A discrete braid ${\\bm{b}}$ is positively isotopic to a regular braid ${\\bm{b}}'$ and the mapping $\\Conf_m^d \\to \\CC{\\mathscr{B}}_m^+$, given by ${\\bm{b}} \\mapsto \\llbracket\\beta({\\bm{b}})\\rrbracket$, is well-defined\nby choosing $\\beta({\\bm{b}})$ to be any representative in the positive conjugacy class $\\llbracket\\beta({\\bm{b}}')\\rrbracket$.\nObserve that for fixed $d$ the mapping $\\Conf_m^d \\to \\CC{\\mathscr{B}}_m^+$ is not surjective.\n\n\\begin{rem}\nThe positive conjugacy relation defined in Definition \\ref{defn:equiv12} is symmetric by construction since it is defined on finite words.\nFor instance, consider $\\sigma_1\\sigma_2\\sigma_3 \\equiv \\sigma_2\\sigma_3\\sigma_1$.\nThe question whether $\\sigma_2\\sigma_3\\sigma_1 \\equiv \\sigma_1\\sigma_2\\sigma_3$ also holds is answered as follows:\n$\\sigma_2\\sigma_3\\sigma_1 \\equiv \\sigma_3\\sigma_1\\sigma_2 \\equiv \\sigma_1\\sigma_2\\sigma_3$, which, by Definition \\ref{defn:equiv12},\nshows that $\\sigma_2\\sigma_3\\sigma_1 \\equiv \\sigma_1\\sigma_2\\sigma_3$.\n\\end{rem}\n\n\n\nThe presentation of discrete braids via words in ${\\mathscr{B}}_m^+$ yields the \n following alternative equivalence relation.\n\\begin{defn}\n\\label{defn:topeq}\nTwo discretized braids ${\\bm{b}}, {\\bm{b}}'\\in \\Conf_m^d$ are \\emph{topologically equivalent} if $\\beta({\\bm{b}}) \\stackrel{\\mbox{\\tiny\\textnormal{\\raisebox{0ex}[0ex][0ex]{$+$}}}}{\\sim} \\beta({\\bm{b}}')$ in ${\\mathscr{B}}_m^+$, i.e. 
\n$\\beta({\\bm{b}})$ and $\\beta({\\bm{b}}')$ are positively conjugate.\nNotation: ${\\bm{b}} \\stackrel{\\mbox{\\tiny\\textnormal{\\raisebox{0ex}[0ex][0ex]{$+$}}}}{\\sim} {\\bm{b}}'$.\n\\end{defn}\n\nSummarizing, ${\\bm{b}}\\sim{\\bm{b}}'$ implies ${\\bm{b}} \\stackrel{\\mbox{\\tiny\\textnormal{\\raisebox{0ex}[0ex][0ex]{$+$}}}}{\\sim} {\\bm{b}}'$\nand $\\stackrel{\\mbox{\\tiny\\textnormal{\\raisebox{0ex}[0ex][0ex]{$+$}}}}{\\sim}$ defines a coarser equivalence relation on $\\Conf_m^d$; the converse is not true in general, cf.\\ \\cite[Fig.\\ 8]{BraidConleyIndex}.\nThe equivalence classes with respect to $\\stackrel{\\mbox{\\tiny\\textnormal{\\raisebox{0ex}[0ex][0ex]{$+$}}}}{\\sim}$ are denoted by $[{\\bm{b}}]_{\\stackrel{\\mbox{\\tiny\\textnormal{\\raisebox{0ex}[0ex][0ex]{$+$}}}}{\\sim}}$.\nFollowing \\cite[Def.\\ 17]{BraidConleyIndex}, a discretized braid class $[{\\bm{b}}]$ is \\emph{free} if\n$[{\\bm{b}}] = [{\\bm{b}}]_{\\stackrel{\\mbox{\\tiny\\textnormal{\\raisebox{0ex}[0ex][0ex]{$+$}}}}{\\sim}}$.\n\\begin{prop}[\\cite{BraidConleyIndex}, Prop.\\ 27]\n\\label{prop:free}\nIf $d>|{\\bm{b}}|$, then $[{\\bm{b}}]$ is a free braid class.\n\\end{prop}\n\nTaking $d$ sufficiently large is thus sufficient to ensure that a braid class is free, but this condition is not necessary.\n\n\\begin{figure}[tb]\n\\centering\n\\begin{tikzpicture}[xscale=0.4, yscale=0.4, line width = 1.5pt]\n \\draw[-] (0,0) -- (2,6) -- (4,0); \n \\draw[fill] (0,0) circle (0.07)\n (2,6) circle (0.07)\n (4,0) circle (0.07);\n\n \\draw[-] (0,2) -- (2,2) -- (4,2); \n \\draw[fill] (0,2) circle (0.07)\n (2,2) circle (0.07)\n (4,2) circle (0.07);\n\n\n \\draw[-] (0,4) -- (2,4) -- (4,4);\n \\draw[fill] (0,4) circle (0.07)\n (2,4) circle (0.07)\n (4,4) circle (0.07);\n\\foreach \\x in {0,4}\n \\draw[line width=1.0pt, -] (\\x,-1) -- (\\x,7); \n\n\\end{tikzpicture}\n\\qquad\n\\qquad\n\\begin{tikzpicture}[xscale=0.4, yscale=0.4, line width = 1.5pt]\n \\draw[-] (0,6) -- (2,0) -- (4,6); \n 
\\draw[fill] (0,6) circle (0.07)\n (2,0) circle (0.07)\n (4,6) circle (0.07);\n\n \\draw[-] (0,2) -- (2,2) -- (4,2); \n \\draw[fill] (0,2) circle (0.07)\n (2,2) circle (0.07)\n (4,2) circle (0.07);\n\n\n \\draw[-] (0,4) -- (2,4) -- (4,4);\n \\draw[fill] (0,4) circle (0.07)\n (2,4) circle (0.07)\n (4,4) circle (0.07);\n\\foreach \\x in {0,4} \n \\draw[line width=1.0pt, -] (\\x,-1) -- (\\x,7); \n\n\\end{tikzpicture}\n\\qquad\n\\qquad\n\\begin{tikzpicture}[xscale=0.4, yscale=0.4, line width = 1.5pt]\n \\draw[-] (0,0) -- (2,6) -- (4,3.5) -- (6,0); \n \\draw[fill] (0,0) circle (0.07)\n (2,6) circle (0.07)\n (4,3.5) circle (0.07)\n (6,0) circle (0.07);\n\n \\draw[-] (0,2) -- (2,2) -- (4,2) -- (6,2); \n \\draw[fill] (0,2) circle (0.07)\n (2,2) circle (0.07)\n (4,2) circle (0.07)\n (6,2) circle (0.07);\n\n\n \\draw[-] (0,4) -- (2,4) -- (4,4) -- (6,4);\n \\draw[fill] (0,4) circle (0.07)\n (2,4) circle (0.07)\n (4,4) circle (0.07)\n (6,4) circle (0.07);\n\\foreach \\x in {0,6}\n \\draw[line width=1.0pt, -] (\\x,-1) -- (\\x,7); \n\\end{tikzpicture}\n\\caption{The left and middle diagrams show representatives ${\\bm{b}},{\\bm{b}}'\\in \\Conf_3^2$ in Example \\ref{exm:free2}.\nThe right diagram shows a representative of the same topological braid class in $\\Conf_3^3$ (free).}\n\\label{fig:free1}\n\\end{figure}\n\n\n\\begin{exm}\\label{exm:free2}\nConsider the braid ${\\bm{b}}\\in \\Conf_3^2$ with ${\\bm{b}}^1 = (1,4,1)$, ${\\bm{b}}^2 = (2,2,2)$ and ${\\bm{b}}^3 = (3,3,3)$, and consider the braid\nclass $[{\\bm{b}}]$, see Figure \\ref{fig:free1}[left and middle]. Since ${\\bm{b}}$ is regular, $\\beta({\\bm{b}})$ is uniquely defined and $\\beta({\\bm{b}}) = \\sigma_1\\sigma_2^2\\sigma_1$.\nAlso define ${\\bm{b}}'\\in \\Conf_3^2$ with ${\\bm{b}}'^1 = (4,1,4)$, ${\\bm{b}}'^2={\\bm{b}}^2$ and ${\\bm{b}}'^3={\\bm{b}}^3$ and the braid\nclass $[{\\bm{b}}']$. 
Since ${\\bm{b}}'$ is also regular we have the unique braid word $\\beta({\\bm{b}}') = \\sigma_2\\sigma_1^2\\sigma_2$.\nObserve that $ \\sigma_1\\sigma_2^2\\sigma_1\\stackrel{\\mbox{\\tiny\\textnormal{\\raisebox{0ex}[0ex][0ex]{$+$}}}}{\\sim} \\sigma_2\\sigma_1^2\\sigma_2$, which implies that ${\\bm{b}}$ and\n${\\bm{b}}'$ are topologically equivalent. However, ${\\bm{b}}$ and ${\\bm{b}}'$ are not positively isotopic in $\\Conf_3^2$ and $[{\\bm{b}}]$ and\n$[{\\bm{b}}']$ are two different path components of $\\Conf_3^2$.\nThe positive conjugacy class of $\\sigma_1\\sigma_2^2\\sigma_1$ is given by $\\llbracket\\sigma_1\\sigma_2^2\\sigma_1\\rrbracket = \\{\\sigma_1\\sigma_2^2\\sigma_1,\\sigma_2^2\\sigma_1^2,\n\\sigma_2\\sigma_1^2\\sigma_2,\\sigma_1^2\\sigma_2^2\\}$. The words $\\sigma_2^2\\sigma_1^2$ and $\\sigma_1^2\\sigma_2^2$\nare not represented in $\\Conf_3^2$.\nIf we consider $ {\\bm{b}}''\\in \\Conf_3^3$ given by ${\\bm{b}}'' = \\bigl\\{ (1,4,1,1), (2,2,2,2), (3,3,3,3)\\bigr\\}$, then\nthe associated braid class $[{\\bm{b}}'']$ is free, which confirms that the condition in Proposition \\ref{prop:free} is not a necessary\ncondition, see Figure \\ref{fig:free1}[right].\n\\end{exm}\n\n\n\n Let \n$\\beta = \\sigma_{i_1} \\cdots \\sigma_{i_d}\\in {\\mathscr{B}}_m^+$ be a positive braid word and define\n\\[\n\\ev_q(\\beta) := {\\bm{b}} = \\{ {\\bm{b}}^\\mu\\} \\in \\Conf_m^{d+q},\\quad {\\bm{b}}^\\mu = (x_j^\\mu),\\quad\\mu=1,\\cdots, m,~~q\\ge 0,\n\\]\nwith $x_0^\\mu = \\mu$, $x_j^\\mu = x_0^{\\sigma_{i_1} \\cdots \\sigma_{i_j}(\\mu)}$, $j=1,\\cdots,d$, and $x_{d+q}^\\mu =\\cdots = x_d^\\mu$. 
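Positive conjugacy of short words, such as the class of $\sigma_1\sigma_2^2\sigma_1$ computed in Example exm:free2 above, can be decided by exhaustive search: positive words of a fixed length over a fixed finite alphabet of generators form a finite set, so a breadth-first search over the moves of Definition defn:equiv12 terminates. A small sketch (the function name is ours):

```python
from collections import deque

def conjugacy_class(word):
    """Enumerate the positive conjugacy class of a positive word.

    `word` is a tuple of 1-based generator indices.  Moves applied:
    commutation s_i s_j = s_j s_i for |i-j| >= 2, the braid relation
    s_i s_{i+1} s_i = s_{i+1} s_i s_{i+1}, and the cyclic rotation
    s_{i_1} w == w s_{i_1}.  None of the moves changes the word length
    or enlarges the alphabet, so the breadth-first search terminates.
    """
    seen = {tuple(word)}
    queue = deque(seen)
    while queue:
        w = queue.popleft()
        n = len(w)
        nbrs = [w[1:] + w[:1]]                                 # cyclic rotation
        for k in range(n - 1):
            if abs(w[k] - w[k + 1]) >= 2:                      # commutation
                nbrs.append(w[:k] + (w[k + 1], w[k]) + w[k + 2:])
        for k in range(n - 2):
            if w[k] == w[k + 2] and abs(w[k] - w[k + 1]) == 1:  # braid relation
                i, j = w[k], w[k + 1]
                nbrs.append(w[:k] + (j, i, j) + w[k + 3:])
        for v in nbrs:
            if v not in seen:
                seen.add(v)
                queue.append(v)
    return seen

# Example exm:free2: the class of s1 s2^2 s1 consists of exactly four words.
assert conjugacy_class((1, 2, 2, 1)) == {
    (1, 2, 2, 1), (2, 2, 1, 1), (2, 1, 1, 2), (1, 1, 2, 2)}
```

The same search also confirms the symmetry observed in the remark above, e.g. $\sigma_1\sigma_2\sigma_3$ lies in the class of $\sigma_2\sigma_3\sigma_1$.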
\nThe expression $\\sigma_{i_1} \\cdots \\sigma_{i_j}(\\mu)$, $\\mu = 1,\\cdots,m$, describes the permutation of the set $\\{1,\\cdots,m\\}$, where\n$\\sigma_{i_1} \\cdots \\sigma_{i_j}$ is regarded as a concatenation of permutations given by the generators $\\sigma_i$ interpreted as\nthe basic transposition of $i$ and $i+1$.\nBy Proposition \\ref{prop:free}, $[\\ev_q(\\beta)]$ is free for all $q\\ge 1$, and every\n $\\llbracket\\beta\\rrbracket\\in \\CC{\\mathscr{B}}_m^+$ defines a free discrete braid class $[\\ev_q(\\beta)]$ in $\\Conf_m^{d+q}$\n for all $q\\ge 1$. \n\n\n\\subsection{Discrete 2-colored braid classes}\n\\label{subsec:discbr2}\nOn closed configuration spaces we define the following product:\n\\[ \n\\bar\\Conf_n^d\\times\\bar\\Conf_m^d \\to \\bar\\Conf_{n+m}^d,\\quad ({\\bm{a}},{\\bm{b}}) \\mapsto {\\bm{a}}\\sqcup{\\bm{b}},\n\\]\nwhere ${\\bm{a}}\\sqcup{\\bm{b}}$ is the disjoint union of the strands in ${\\bm{a}}$ and ${\\bm{b}}$ regarded as an element in $\\bar\\Conf_{n+m}^d$.\nThe definition yields a canonical permutation on the labels in ${\\bm{a}}\\sqcup{\\bm{b}}$.\nDefine the space of \\emph{2-colored discretized braids} as the space of ordered pairs\n\\begin{equation}\n\\label{eqn:2color}\n\\Conf_{n,m}^d := \\bigl\\{ {\\bm{a}}\\rel{\\bm{b}}:=({\\bm{a}},{\\bm{b}})~|~ {\\bm{a}}\\sqcup{\\bm{b}} \\in \\Conf_{n+m}^d\\bigr\\}.\n\\end{equation}\nThe strand labels in ${\\bm{a}}$ range from $\\mu=1,\\cdots,n$ and the strand labels in ${\\bm{b}}$ range from $\\mu=n+1,\\cdots,n+m$.\nThe associated permutation is $\\tau_{{\\bm{a}},{\\bm{b}}} = \\tau_{\\bm{a}} \\oplus\\tau_{\\bm{b}} \\in S_{n+m}$, where\n $\\tau_{\\bm{a}}\\in S_n$ and $\\tau_{\\bm{b}} \\in S_m$, and $\\tau_{\\bm{a}}$ acts on the labels\n$\\{1,\\cdots,n\\}$ and $\\tau_{\\bm{b}}$ acts on the labels $\\{{n+1},\\cdots,{n+m}\\}$.\nThe strands ${\\bm{a}} = \\{x_j^{\\mu}\\}$, $\\mu=1,\\cdots,n$ are the \\emph{red}, or \\emph{free} strands and\nthe strands ${\\bm{b}} = \\{x_j^{\\mu}\\}$, 
$\\mu=n+1,\\cdots,n+m$ are the \\emph{black}, or \\emph{skeletal} strands.\nA path component $[{\\bm{a}}\\rel{\\bm{b}}]$ in $\\Conf_{n,m}^d$ is called a \\emph{2-colored discretized braid class}.\nThe canonical projections are given by\n$\\varpi\\colon\\Conf_{n,m}^d \\to \\Conf_m^d$ with ${\\bm{a}}\\rel{\\bm{b}} \\mapsto {\\bm{b}}$ and by\n$\\varpi^*\\colon\\Conf_{n,m}^d \\to \\Conf_n^d$ with ${\\bm{a}}\\rel{\\bm{b}} \\mapsto {\\bm{a}}$.\nThe mapping $\\varpi$ yields a fibration\n\\begin{equation}\n\\label{eqn:fiberbundle2}\n[{\\bm{a}}]\\rel {\\bm{b}} \\to [{\\bm{a}}\\rel {\\bm{b}}] \\to [{\\bm{b}}].\n\\end{equation}\n The pre-images\n$\\varpi^{-1}({\\bm{b}}) = [{\\bm{a}}]\\rel {\\bm{b}}\\subset \\Conf_{n}^d$, are called the \\emph{relative discretized braid class fibers}.\n\nThere exists a natural embedding $\\Conf_{n,m}^d \\hookrightarrow \\Conf_{n+m}^d$, defined by ${\\bm{a}}\\rel{\\bm{b}} \\mapsto {\\bm{a}}\\sqcup{\\bm{b}}$.\nVia the embedding we define the notion of topological equivalence of two 2-colored discretized braids:\n${\\bm{a}}\\rel{\\bm{b}} \\stackrel{\\mbox{\\tiny\\textnormal{\\raisebox{0ex}[0ex][0ex]{$+$}}}}{\\sim} {\\bm{a}}'\\rel{\\bm{b}}'$ if ${\\bm{a}}\\sqcup{\\bm{b}} \\stackrel{\\mbox{\\tiny\\textnormal{\\raisebox{0ex}[0ex][0ex]{$+$}}}}{\\sim} {\\bm{a}}'\\sqcup{\\bm{b}}'$. The associated equivalence classes are denoted by $[{\\bm{a}}\\rel{\\bm{b}}]_{\\stackrel{\\mbox{\\tiny\\textnormal{\\raisebox{0ex}[0ex][0ex]{$+$}}}}{\\sim}}$, which are\nnot necessarily connected sets in $\\Conf_{n,m}^d$. A 2-colored discretized braid class $[{\\bm{a}}\\rel{\\bm{b}}]$ is free if $[{\\bm{a}}\\rel{\\bm{b}}] = [{\\bm{a}}\\rel{\\bm{b}}]_{\\stackrel{\\mbox{\\tiny\\textnormal{\\raisebox{0ex}[0ex][0ex]{$+$}}}}{\\sim}}$.\nIf $d>|{\\bm{a}}\\sqcup{\\bm{b}}|$, then $[{\\bm{a}}\\rel{\\bm{b}}]$ is free by Proposition \\ref{prop:free}. 
\n\nThe set of collapsed singular braids in $\\bar \\Conf_{n,m}^d$ is given by:\n\\[\n\\begin{aligned}\n\\Sigma^- := \\{{\\bm{a}}\\rel{\\bm{b}}\\in \\bar\\Conf_{n,m}^d~|~ {\\bm{a}}^\\mu =~&{\\bm{a}}^{\\mu'}, \\hbox{or~} {\\bm{a}}^\\mu={\\bm{b}}^{\\mu'}\\\\\n&\\hbox{for some~~}\\mu\\not=\\mu',~\\hbox{and~} {\\bm{b}}\\in \\Conf_m^d\\}.\n\\end{aligned}\n\\]\nA 2-colored discretized braid class $[{\\bm{a}}\\rel{\\bm{b}}]$ is \\emph{proper} if \n$\\partial [{\\bm{a}}\\rel{\\bm{b}}]_{\\stackrel{\\mbox{\\tiny\\textnormal{\\raisebox{0ex}[0ex][0ex]{$+$}}}}{\\sim}} \\cap \\Sigma^- = \\varnothing$.\nIf a braid class $[{\\bm{a}}\\rel{\\bm{b}}]$ is not proper it is called \\emph{improper}.\nIn \\cite{BDV} properness is considered in a more general setting. The notion of properness in this paper coincides with weak properness in \\cite{BDV}.\n\nA 2-colored discretized braid class $[{\\bm{a}}\\rel{\\bm{b}}]$\nis called \\emph{bounded} if its fibers are bounded as sets in $\\rr^{nd}$.\nNote that $[{\\bm{a}}\\rel{\\bm{b}}]$\nis \\emph{not} a bounded set in $\\rr^{(n+m)d}$.\n\n\n\\subsection{Algebraic presentations}\n\\label{subsec:algpres}\nDiscretized braid classes are presented via the positive conjugacy classes of the positive braid monoid ${\\mathscr{B}}_m^+$.\nFor 2-colored discretized braids we seek a similar presentation.\n\nIn order to keep track of colors we define coloring on words in ${\\mathscr{B}}_{n+m}^+$.\nWords in ${\\mathscr{B}}_{n+m}^+$ define associated permutations $\\tau$ and the permutations $\\tau$ yield partitions of the set $\\{1,\\cdots,n+m\\}$.\nLet $\\gamma \\in {\\mathscr{B}}_{n+m}^+$ be a word for which the induced partition contains a union of equivalence classes $\\aset \\subset \\{1,\\cdots,n+m\\}$\nconsisting of $n$ elements. The set $\\aset$ is the \\emph{red coloring} of length $n$ and the remaining partitions are colored black, denoted by $\\bset$. 
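The induced permutation of a word and its cycle partition, on which the coloring is based, are straightforward to compute. A minimal sketch (hypothetical helpers, not from the paper; $\\sigma_i$ is read as the transposition of $i$ and $i+1$, letters applied left to right):

```python
# Sketch: permutation tau induced by a positive word on n strands,
# its cycle partition, and the test that a red coloring A is a union
# of cycles of tau.

def word_permutation(word, n):
    tau = {mu: mu for mu in range(1, n + 1)}
    for i in word:
        for mu in tau:
            if tau[mu] == i:
                tau[mu] = i + 1
            elif tau[mu] == i + 1:
                tau[mu] = i
    return tau

def cycle_partition(tau):
    seen, parts = set(), []
    for start in sorted(tau):
        if start not in seen:
            cyc, mu = set(), start
            while mu not in cyc:
                cyc.add(mu)
                seen.add(mu)
                mu = tau[mu]
            parts.append(frozenset(cyc))
    return parts

def is_red_coloring(word, n, A):
    """A valid red coloring must be a union of cycles of tau."""
    tau = word_permutation(word, n)
    return all(c <= A or c.isdisjoint(A) for c in cycle_partition(tau))
```

For example, $\\gamma = \\sigma_1\\sigma_2$ on three strands induces a single 3-cycle, so only the trivial colorings $\\varnothing$ and $\\{1,2,3\\}$ are unions of cycles.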
\nThe pair $(\\gamma,\\aset)$ is \ncalled a 2-colored positive braid word, see Figure \\ref{fig:relative1}.\nFor a given coloring $\\aset \\subset \\{1,\\cdots,n+m\\}$ of length $n$ the set of all words $(\\gamma,\\aset)$ forms a monoid which is denoted by ${\\mathscr{B}}^+_{n,m,\\aset}$ and is referred to as\nthe \\emph{2-colored braid monoid} with coloring $\\aset$.\n\n\nTwo pairs $(\\gamma,\\aset)$ and $(\\gamma',\\aset')$ are positively conjugate if $\\gamma\\stackrel{\\mbox{\\tiny\\textnormal{\\raisebox{0ex}[0ex][0ex]{$+$}}}}{\\sim} \\gamma'$ and\n$\\aset' = \\zeta^{-1}(\\aset)$, where $\\zeta$ is a permutation conjugating the induced permutations $\\tau_\\gamma$ and $\\tau_{\\gamma'}$, i.e. $\\tau_{\\gamma'} = \\zeta\\tau_\\gamma\\zeta^{-1}$.\nIf $\\xi$ is another permutation such that $\\tau_{\\gamma'} = \\xi\\tau_\\gamma\\xi^{-1}$, then \n$ \\zeta\\tau_\\gamma\\zeta^{-1} = \\xi\\tau_\\gamma\\xi^{-1}$. This implies that $\\tau_\\gamma = \\zeta^{-1}\\xi\\tau_\\gamma\\xi^{-1}\\zeta$\nand thus $\\xi^{-1}\\zeta(\\aset) = \\aset$, which is equivalent to \n$\\zeta^{-1}(\\aset) = \\xi^{-1}(\\aset)$. This shows\nthat the conjugacy relation is well-defined.\nPositive conjugacy for 2-colored braid words is again denoted by $(\\gamma,\\aset) \\stackrel{\\mbox{\\tiny\\textnormal{\\raisebox{0ex}[0ex][0ex]{$+$}}}}{\\sim} (\\gamma', \\aset')$\nand a conjugacy class is denoted by $\\llbracket\\gamma,\\aset\\rrbracket$.\nThe set of 2-colored positive conjugacy classes \nwith red colorings of length $n$ is denoted by $\\CC{\\mathscr{B}}_{n,m}^+$.\n\nThe words corresponding to the different colors can be derived from the information in $(\\gamma,\\aset)$.\nLet $\\aset_0\\subset \\aset$ be a cycle of length $\\ell\\le n$ and let $k \\in \\aset_0$. 
If $\\gamma = \\sigma_{i_1}\\cdots \\sigma_{i_d}$, then we define\nan $\\ell$-periodic sequence $\\{k_j\\}$, with \n\\[\nk_0 = k,\\quad\\hbox{and}\\quad k_{j} = \\sigma_{i_j}(k_{j-1}), ~~~j=1,\\cdots,\\ell d,\n\\]\nby considering the word $\\gamma^\\ell$. Now use the following rule: if $k_j-k_{j-1} \\not = 0$, remove \n$\\sigma_{i_{j'}}$ from $\\gamma$, for $j=1,\\cdots, \\ell d$, where $j' =j \\!\\!\\mod d \\in \\{1,\\cdots,d\\}$.\nMoreover, $\\sigma_{i_j}$ is replaced by $\\sigma_{i_j-1}$, if $k_j=k_{j-1}<i_j$.\n\nA sequence of functions ${\\mathcal{R}} = \\{{\\mathcal{R}}_j\\}$, with ${\\mathcal{R}}_j\\in C^1(\\rr^3)$ and ${\\mathcal{R}}_{j+d} = {\\mathcal{R}}_j$, satisfying $\\partial_1\\mathcal{R}_j>0$ and $\\partial_3\\mathcal{R}_j>0$ is called a \\emph{parabolic recurrence relation}.\nFrom \\cite[Lem.\\ 55-57]{BraidConleyIndex} there exists a parabolic recurrence relation ${\\mathcal{R}}=\\{{\\mathcal{R}}_j\\}$\nsuch that ${\\bm{b}}$ is a zero for ${\\mathcal{R}}$, i.e. ${\\mathcal{R}}_j(x_{j-1}^{\\mu_\\nu},x_j^{\\mu_\\nu},x_{j+1}^{\\mu_\\nu})=0$ for all $j\\in \\zz$ and for all ${\\nu}=1,\\cdots,m$.\nThe recurrence relation ${\\mathcal{R}}$ may be regarded as a vector field and is integrated via the equations\n\\begin{equation}\n\\label{parabolicvectorfield}\n \\frac{d}{ds}x^{{\\mu_\\nu}}_{j}=\\mathcal{R}_{j}(x^{{\\mu_\\nu}}_{j-1},x^{{\\mu_\\nu}}_{j},x^{{\\mu_\\nu}}_{j+1}),\\quad \\nu=1,\\cdots,m.\n\\end{equation}\nLet $N$ denote the closure in $\\rr^{nd}$ of $[{\\bm{a}}]\\rel {\\bm{b}}$.\nBy \\cite[Prop.\\ 11 and Thm.\\ 15]{BraidConleyIndex}, the set $N$ is an isolating neighborhood for the parabolic flow generated by\nEquation \\eqref{parabolicvectorfield}.\nWe define ${\\mathrm{h}}({\\bm{a}}\\rel{\\bm{b}})$ as the homotopy Conley index of $\\Inv(N,{\\mathcal{R}})$, cf.\\ \\cite{BraidConleyIndex}, \\cite{ConleyIndex}.\nThe Conley index is independent of the choice of parabolic recurrence relations ${\\mathcal{R}}$ for which ${\\mathcal{R}}({\\bm{b}})=0$, cf.\\ \\cite[Thm.\\ 15(a)-(b)]{BraidConleyIndex},\nas well as the choice of the fiber, i.e.\\ if 
${\\bm{a}}\\rel{\\bm{b}} \\sim {\\bm{a}}'\\rel{\\bm{b}}'$, then ${\\mathrm{h}}({\\bm{a}}\\rel{\\bm{b}}) = {\\mathrm{h}}({\\bm{a}}'\\rel{\\bm{b}}')$, cf.\\ \\cite[Thm.\\ 15(c)]{BraidConleyIndex}.\nThis makes ${\\mathrm{h}}({\\bm{a}}\\rel{\\bm{b}})$ an invariant of the discrete 2-colored braid class $[{\\bm{a}}\\rel{\\bm{b}}]$.\n\nThere is an intrinsic way to define ${\\mathrm{h}}({\\bm{a}}\\rel{\\bm{b}})$ without using parabolic recurrence relations.\nWe define $N^-\\subset \\partial N$ to be the set of boundary points for which the word\nmetric is locally maximal.\nThe pair $(N,N^-)$ is an index\npair for any parabolic system ${\\mathcal{R}}$ such that ${\\mathcal{R}}({\\bm{b}})=0$, and\nthus by the independence of Conley index on ${\\mathcal{R}}$, the pointed homotopy type\nof\n$N\/N^-$ gives the Conley index: \n$\n{\\mathrm{h}}({\\bm{a}}\\rel{\\bm{b}}) = [N\/N^-],\n$\nsee Figure \\ref{fig:conley1} and \\cite[Sect.\\ 4.4]{BraidConleyIndex} for more details on the construction.\n\n\\begin{figure}[hbt]\n\\centering\n\\begin{tikzpicture}[xscale=0.5, yscale=0.4, line width = 1.5pt]\n \\draw[-] (0,4) -- (2,5) -- (4,4); \n \\draw[fill] (0,4) circle (0.07)\n (2,5) circle (0.07)\n (4,4) circle (0.07);\n \\draw[-] (0,1) -- (2,0) -- (4,1);\n \\draw[fill] (0,1) circle (0.07)\n (2,0) circle (0.07)\n (4,1) circle (0.07);\n\n \\draw[-] (0,5) -- (2,1) -- (4,5); \n \\draw[fill] (0,5) circle (0.07)\n (2,1) circle (0.07)\n (4,5) circle (0.07);\n \\draw[-] (0,0) -- (2,4) -- (4,0);\n \\draw[fill] (0,0) circle (0.07)\n (2,4) circle (0.07)\n (4,0) circle (0.07);\n\n \\draw[-, color=red] (0,2) -- (2,2.5) -- (4,2);\n \\draw[color=red,fill] (0,2) circle (0.07)\n (2,2.5) circle (0.07)\n (4,2) circle (0.07);\n\\foreach \\x in {0,4}\n \\draw[line width=1.0pt, -] (\\x,-1) -- (\\x,6); \n\\end{tikzpicture}\n\\qquad\n\\qquad\n\\begin{tikzpicture}[xscale=0.5, yscale=0.5, line width = 1.5pt]\n \\draw[-,color=white] (1,1);\n \\fill[color=red!90] (2,6) -- (6,6) -- (6,2) -- (2,2) -- (2,6);\n 
\\draw[-] (2,6) -- (6,6) -- (6,2) -- (2,2) -- (2,6); \n \\draw[->] (1.5,5) -- (2.5,5);\n \\draw[->] (1.5,4) -- (2.5,4);\n \\draw[->] (1.5,3) -- (2.5,3);\n\n \\draw[<-] (1.5+4,5) -- (2.5+4,5);\n \\draw[<-] (1.5+4,4) -- (2.5+4,4);\n \\draw[<-] (1.5+4,3) -- (2.5+4,3);\n \n \\draw[<-] (5,1.5) -- (5,2.5);\n \\draw[<-] (4,1.5) -- (4,2.5);\n \\draw[<-] (3,1.5) -- (3,2.5);\n\n \\draw[->] (5,1.5+4) -- (5,2.5+4);\n \\draw[->] (4,1.5+4) -- (4,2.5+4);\n \\draw[->] (3,1.5+4) -- (3,2.5+4);\n\\end{tikzpicture}\n\\qquad\n\\qquad\n\\begin{tikzpicture}[xscale=0.6, yscale=0.8, line width = 1.5pt]\n \\draw[-,color=white] (1,1);\n \\draw[fill=red!90] (1,3.5) ellipse (2.0 and 1.0);\n \\draw[fill=white] (0.5,3.15) ellipse (1.0 and 0.6);\n \\draw[fill] (0.07,2.63) circle (0.07);\n\\end{tikzpicture}\n\\caption{The Conley index for the braid in Example \\ref{exm:exist2}.\nThe homotopy type of the pointed space is given by ${\\mathrm{h}}({\\bm{a}}\\rel{\\bm{b}}) = \\sbb^1$.}\n\\label{fig:conley1}\n\\end{figure}\n\nThe invariant ${\\mathrm{h}}({\\bm{a}}\\rel{\\bm{b}})$ is not necessarily invariant with respect to the number of discretization points $d$.\nIn order to have invariance also with respect to $d$, another invariant for discrete braid classes was introduced in \\cite{BraidConleyIndex}.\nConsider the equivalence class induced by the relation ${\\bm{a}}\\rel{\\bm{b}} \\stackrel{\\mbox{\\tiny\\textnormal{\\raisebox{0ex}[0ex][0ex]{$+$}}}}{\\sim} {\\bm{a}}'\\rel{\\bm{b}}'$ on $\\Conf_{n,m}^d$, which defines the class\n$[{\\bm{a}}\\rel{\\bm{b}}]_{\\stackrel{\\mbox{\\tiny\\textnormal{\\raisebox{0ex}[0ex][0ex]{$+$}}}}{\\sim}}$ of proper discrete 2-colored braids.\nVia the projection $\\varpi\\colon [{\\bm{a}}\\rel{\\bm{b}}]_{\\stackrel{\\mbox{\\tiny\\textnormal{\\raisebox{0ex}[0ex][0ex]{$+$}}}}{\\sim}} \\to [{\\bm{b}}]_{\\stackrel{\\mbox{\\tiny\\textnormal{\\raisebox{0ex}[0ex][0ex]{$+$}}}}{\\sim}}$ we obtain fibers $\\varpi^{-1}({\\bm{b}})$.\nSuppose 
$[{\\bm{a}}\\rel{\\bm{b}}]_{\\stackrel{\\mbox{\\tiny\\textnormal{\\raisebox{0ex}[0ex][0ex]{$+$}}}}{\\sim}}$ is a bounded class, i.e. all fibers $\\varpi^{-1}({\\bm{b}})$ are bounded sets in $\\rr^{nd}$.\nFollowing \\cite[Def.\\ 18]{BraidConleyIndex}\nthe closure $N$ of a fiber\n$\\varpi^{-1}({\\bm{b}})$ is an isolating neighborhood since ${\\bm{a}}\\rel{\\bm{b}}$ is proper.\nDefine ${\\mathrm{H}}({\\bm{a}}\\rel{\\bm{b}})$ as the homotopy Conley index of $N$.\nIf $[{\\bm{a}}_k]\\rel {\\bm{b}}$ are the fibers belonging to the components $[{\\bm{a}}_k\\rel{\\bm{b}}]$ of $[{\\bm{a}}\\rel{\\bm{b}}]_{\\stackrel{\\mbox{\\tiny\\textnormal{\\raisebox{0ex}[0ex][0ex]{$+$}}}}{\\sim}}$, then \n\\begin{equation}\n\\label{eqn:braidindex22}\n{\\mathrm{H}}({\\bm{a}}\\rel{\\bm{b}}) := \\bigvee_{k} {\\mathrm{h}}([{\\bm{a}}_k]\\rel{\\bm{b}}).\n\\end{equation}\n\nDefine the following extension mapping\n $\\E:\\Conf_m^d\\to\\Conf_m^{d+1}$, cf.\\ \\cite{BraidConleyIndex},\nvia concatenation with the trivial braid of period one: \n\\begin{equation}\n\t(\\E{\\bm{b}})^\\mu := \\left\\{\n\t\\begin{array}{cl}\n\t\tx_j^{\\mu} &\tj=0,\\ldots,d; \t\\\\\n\t\tx_d^{\\mu} & j=d+1 .\n\t\\end{array}\\right.\n\\end{equation}\nProperness remains unchanged under the extension mapping $\\E$; however, boundedness may not be preserved.\nDefine the skeletal augmentation:\n\\[\n\\A\\colon \\Conf_m^d \\to \\Conf_{m+2}^d,\\quad {\\bm{b}} \\mapsto \\A{\\bm{b}} = {\\bm{b}}^* = {\\bm{b}}\\cup {\\bm{b}}^-\\cup{\\bm{b}}^+,\n\\]\nwhere ${\\bm{b}}^- =\\{\\min_{\\mu} \\{x_j^\\mu\\} - 1\\}_j$ and ${\\bm{b}}^+ =\\{\\max_{\\mu} \\{x_j^\\mu\\} + 1\\}_j$.\nIf $[{\\bm{a}}\\rel{\\bm{b}}]_{\\stackrel{\\mbox{\\tiny\\textnormal{\\raisebox{0ex}[0ex][0ex]{$+$}}}}{\\sim}}$ is bounded, then\n ${\\mathrm{h}}([{\\bm{a}}_k]\\rel{\\bm{b}}) = {\\mathrm{h}}([{\\bm{a}}_k]\\rel {\\bm{b}}^*)$ for all $k$ and therefore ${\\mathrm{H}}({\\bm{a}}\\rel{\\bm{b}}) = {\\mathrm{H}}({\\bm{a}}\\rel{\\bm{b}}^*)$.\nOne can define a second skeletal 
augmentation:\n\\[\n\\B\\colon \\Conf_m^d \\to \\Conf_{m+2}^d,\\quad {\\bm{b}} \\mapsto \\B{\\bm{b}}={\\bm{b}}^\\# = {\\bm{b}}\\cup {\\bm{b}}^s\\cup{\\bm{b}}^n,\n\\]\nwhere ${\\bm{b}}^s =\\{(-1)^j\\min_{\\mu} \\{x_j^\\mu\\} - (-1)^j\\}_j$ and ${\\bm{b}}^n =\\{(-1)^j\\max_{\\mu} \\{x_j^\\mu\\} + (-1)^j\\}_j$.\nAs before, if $[{\\bm{a}}\\rel{\\bm{b}}]_{\\stackrel{\\mbox{\\tiny\\textnormal{\\raisebox{0ex}[0ex][0ex]{$+$}}}}{\\sim}}$ is bounded, then\n ${\\mathrm{h}}([{\\bm{a}}_k]\\rel{\\bm{b}}) = {\\mathrm{h}}([{\\bm{a}}_k]\\rel {\\bm{b}}^\\#)$ for all $k$ and therefore ${\\mathrm{H}}({\\bm{a}}\\rel{\\bm{b}}) = {\\mathrm{H}}({\\bm{a}}\\rel{\\bm{b}}^\\#)$.\n\n\nConsider the proper, bounded 2-colored braid classes $[{\\bm{a}}\\rel{\\bm{b}}]_{\\stackrel{\\mbox{\\tiny\\textnormal{\\raisebox{0ex}[0ex][0ex]{$+$}}}}{\\sim}}$ and $[\\E{\\bm{a}}\\rel\\E{\\bm{b}}]_{\\stackrel{\\mbox{\\tiny\\textnormal{\\raisebox{0ex}[0ex][0ex]{$+$}}}}{\\sim}}$.\nThe main result in \\cite[Thm.\\ 20]{BraidConleyIndex} is the Stabilization Theorem which states that\n\\begin{equation}\n\\label{eqn:braidindex13}\n{\\mathrm{H}}({\\bm{a}}\\rel{\\bm{b}}^*) = {\\mathrm{H}}(\\E{\\bm{a}}\\rel \\E{\\bm{b}}^*).\n\\end{equation}\nThe independence of ${\\mathrm{H}}$ on the skeleton ${\\bm{b}}$ can be derived from the Stabilization Theorem.\nSince a 2-colored discretized braid class is free when $d$ is sufficiently large, we have that $[\\E^p{\\bm{a}}\\rel \\E^p{\\bm{b}}^*]$ is free\nfor some $p>0$ sufficiently large, and by stabilization ${\\mathrm{H}}({\\bm{a}}\\rel{\\bm{b}}^*) = {\\mathrm{H}}(\\E^p{\\bm{a}}\\rel \\E^p{\\bm{b}}^*)$.\nLet ${\\bm{a}} \\rel {\\bm{b}} \\stackrel{\\mbox{\\tiny\\textnormal{\\raisebox{0ex}[0ex][0ex]{$+$}}}}{\\sim} {\\bm{a}}'\\rel{\\bm{b}}'$, then $\\E^p{\\bm{a}}\\rel \\E^p{\\bm{b}}^* \\sim \\E^p{\\bm{a}}'\\rel \\E^p{\\bm{b}}'^*$.\nBy \\cite[Thm.\\ 15(c)]{BraidConleyIndex}, a continuation can be constructed which proves that\n${\\mathrm{H}}(\\E^p{\\bm{a}}\\rel \\E^p{\\bm{b}}^*) 
={\\mathrm{H}}(\\E^p{\\bm{a}}'\\rel \\E^p{\\bm{b}}'^*)$. Consequently,\n\\[\n{\\mathrm{H}}({\\bm{a}}\\rel{\\bm{b}}^*) = {\\mathrm{H}}(\\E^p{\\bm{a}}\\rel \\E^p{\\bm{b}}^*) = {\\mathrm{H}}(\\E^p{\\bm{a}}'\\rel \\E^p{\\bm{b}}'^*) = {\\mathrm{H}}({\\bm{a}}'\\rel{\\bm{b}}'^*),\n\\]\nwhich shows that the index ${\\mathrm{H}}$ only depends on the topological type $\\llbracket \\gamma,\\aset\\rrbracket$, with $\\gamma=\\beta({\\bm{a}}\\rel{\\bm{b}})$.\n\\begin{defn}\n\\label{defn:discrbrinv}\nLet $\\llbracket\\gamma,\\aset\\rrbracket$ be a proper, positive conjugacy class. Then, the \\emph{braid Conley index} is defined as\n\\begin{equation}\n\\label{eqn:relbrinv}\n{\\mathrm{H}}\\llbracket\\gamma,\\aset\\rrbracket := {\\mathrm{H}}({\\bm{a}}\\rel{\\bm{b}}^*).\n\\end{equation}\n\\end{defn}\nThe braid Conley index ${\\mathrm{H}}$ may be computed using any\nrepresentative ${\\bm{a}}\\rel{\\bm{b}}^*$ for any sufficiently large $d$ and any associated recurrence relation ${\\mathcal{R}}$.\n\nFinally, we mention that besides the extension $\\E$, we also have a \\emph{half twist} extension operator $\\T$:\n\\begin{equation}\n\t(\\T{\\bm{b}})^\\mu := \\left\\{\n\t\\begin{array}{cl}\n\t\tx_j^{\\mu} &\tj=0,\\ldots,d \t\\\\\n\t\t-x_d^{\\mu} & j=d+1 .\n\t\\end{array}\\right.\n\\end{equation}\nEvery discretized braid can be dualized via the mapping $\\{x^\\mu_j\\} \\mapsto \\{(-1)^j x_j^\\mu\\}$. 
On $\\Conf_m^{2d}$ this yields\na well-defined operator $\\D\\colon\\Conf_m^{2d}\\to \\Conf_m^{2d}$ mapping proper, bounded discretized braid classes $[{\\bm{a}}\\rel{\\bm{b}}]$ to proper, bounded discretized braid classes\n$[\\D{\\bm{a}}\\rel\\D{\\bm{b}}]$.\nFrom \\cite[Cor.\\ 31]{BraidConleyIndex} we recall the following result.\nLet ${\\bm{a}}\\rel{\\bm{b}}\\in \\Conf_{n,m}^{2d}$ be proper, then\n\\begin{equation}\n\\label{eqn:thedual}\n{\\mathrm{H}}\\bigl(\\T^2\\circ \\D({\\bm{a}}\\rel{\\bm{b}}^*)\\bigr) = {\\mathrm{H}}\\bigl(\\D({\\bm{a}}\\rel{\\bm{b}}^*)\\bigr) \\wedge \\sbb^{2n},\n\\end{equation}\nwhere the wedge with $\\sbb^{2n}$ denotes the $2n$-fold suspension of the Conley index.\n\nThe Poincar\\'e polynomial of the singular homology $H_*({\\mathrm{H}}({\\bm{a}}\\rel{\\bm{b}}^*))$\nis denoted by $P_t({\\bm{a}}\\rel{\\bm{b}}^*)$, or $P_t\\llbracket\\gamma,\\aset\\rrbracket$ in terms of the topological type.\nThis yields an important invariant: $|P_t({\\bm{a}}\\rel{\\bm{b}}^*)| = |P_t\\llbracket\\gamma,\\aset\\rrbracket|$, which is the number of monomial terms in the Poincar\\'e polynomial.\n\n\n\n\n\\section{The variational formulation}\nFor a given symplectomorphism $F\\in \\Symp(\\disc)$\n the problem of finding periodic points can be reformulated in terms of parabolic recurrence relations.\n\n\n\\subsection{Twist symplectomorphisms}\n\n\nLet $F(x,y) = \\bigl(f(x,y),g(x,y)\\bigr)$ be a symplectomorphism of $\\plane$, with $f,g$ smooth functions on $\\plane$.\nRecall that $F\\in \\Symp(\\plane)$ is a \\emph{positive} twist symplectomorphism if\n\\[\n\\frac{\\partial f(x,y)}{\\partial y} >0.\n\\]\nFor twist symplectomorphisms there exists a variational principle for finding periodic points, cf.\\ \\cite{LeCalvez}, \\cite{Moser}.\nSuch a variational principle also applies to symplectomorphisms that are given as a composition:\n\\[\nF= F_d\\circ \\cdots \\circ F_1,\n\\]\nwith $F_j\\in \\Symp(\\plane)$ positive twist symplectomorphisms for all $j$.\nIt is important to point out that 
$F$ itself is \\emph{not} twist in general.\nAn important question is whether every mapping $F\\in \\Symp(\\plane)$ can be written as a composition of (positive) twist symplectomorphisms, cf.\\ \\cite{LeCalvez}.\nSuppose $F\\in \\Ham(\\plane)$,\nand $F$ admits a Hamiltonian isotopy $\\psi_{t,H}$ with appropriate asymptotic conditions near infinity,\nsuch that $\\psi_{t_i,H}\\circ\\psi_{t_{i-1},H}^{-1}$ is close to the identity mapping in the $C^1$-norm for sufficiently small time steps $t_i-t_{i-1}$. Then, define $G_i = \\psi_{t_i,H}\\circ\\psi_{t_{i-1},H}^{-1}$, $i=1,\\cdots,k$,\nand $F = G_k\\circ \\cdots \\circ G_1$.\nWe remark that in this construction\n the individual mappings $G_i$ are not necessarily twist.\nThe following observation provides a decomposition consisting solely of positive twist symplectomorphisms.\nConsider the $90^\\circ$ clockwise rotation\n\\[\n\\psi(x,y) = (y,-x), \\quad \\psi^4={\\rm id},\n\\]\nwhich is a positive twist symplectomorphism.\nThis yields the decomposition:\n\\begin{equation}\n\\label{eqn:decomp12}\nF = (G_k\\circ \\psi) \\circ\\psi\\circ\\psi\\circ\\psi\\circ \\cdots \\circ (G_1\\circ\\psi)\\circ\\psi\\circ\\psi\\circ\\psi,\n\\end{equation}\nwhere $F_{4i} = G_{i}\\circ \\psi$ and $F_j=\\psi$ for $j\\not = 4i$ for some $i$ and $d=4k$.\nSince the mappings $G_i$ are close to the identity, the compositions $G_i\\circ \\psi$ are positive twist symplectomorphisms.\nThe above procedure intertwines symplectomorphisms with $k$ full rotations. 
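Since $\\psi^4={\\rm id}$, each block $(G_i\\circ\\psi)\\circ\\psi\\circ\\psi\\circ\\psi$ in \\eqref{eqn:decomp12} composes back to $G_i$, so the right-hand side indeed equals $F$. A minimal numerical sketch (the shear $G$ below is a hypothetical stand-in for a near-identity symplectomorphism, not from the paper):

```python
# Check of the idea behind the decomposition: psi is the 90-degree
# clockwise rotation, psi^4 = id, hence (G o psi) o psi o psi o psi = G.

def psi(p):
    x, y = p
    return (y, -x)

def compose(*maps):
    def composed(p):
        for f in reversed(maps):   # rightmost map acts first
            p = f(p)
        return p
    return composed

def G(p):                          # hypothetical near-identity shear (area-preserving)
    x, y = p
    return (x + 0.01 * y, y)

p = (0.3, -0.7)
assert compose(psi, psi, psi, psi)(p) == p          # psi^4 = id
assert compose(G, psi, psi, psi, psi)(p) == G(p)    # (G o psi) o psi^3 = G
```

For this $G$, the composition $G\\circ\\psi$ has first component $y - 0.01\\,x$, so its $y$-derivative is $1>0$, illustrating why near-identity maps composed with $\\psi$ are positive twist.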
As we will see later on, this results in\npositive braid representations of mapping classes.\nThe choice of $\\psi$ is arbitrary since other rational rotations also yield twist symplectomorphisms.\n\nFor symplectomorphisms $F \\in \\Symp(\\disc)$ we establish a similar decomposition in terms of positive twist symplectomorphisms, with the additional property that the decomposition can be extended to symplectomorphisms of $\\plane$,\nwhich is necessary to apply the variational techniques in \\cite{BraidConleyIndex}.\n\n\\subsection{Interpolation}\\label{subsec:interpl}\n\nA symplectomorphism $F\\in \\Symp(\\plane)$ satisfies the \\emph{uniform twist condition} if there exists a $\\delta>0$ such that\n\\begin{equation}\n\\label{eqn:twist}\n\\delta^{-1} \\ge \\frac{\\partial f(x,y)}{\\partial y} \\ge \\delta >0,\\quad \\forall (x,y)\\in \\plane.\n\\end{equation}\nThe subset of such symplectomorphisms is denoted by $SV(\\plane)$, cf.\\ \\cite{LeCalvez}.\nA result by Moser implies that all symplectomorphisms of $\\plane$ with a uniform twist condition \nare Hamiltonian.\n\\begin{prop}[cf.\\ \\cite{Moser}]\\label{MoserThm}\n Let $F \\in SV(\\plane)$. Then, there exists a Hamiltonian $H \\in \\mathcal{H}(\\plane)$\n such that $0<\\delta \\le H_{yy} \\le \\delta^{-1}$ and\n $\\psi_{1,H} = F$, where $\\psi_{t,H}$ is the associated Hamiltonian flow.\nAll orbits of $\\psi_{t,H}$ project to straight lines in the $(t,x)$-plane, and $\\psi_{t,H}\\in SV(\\plane)$ for all $t\\in (0,1]$.\n\\end{prop}\n\nFor completeness we give a self-contained proof of Proposition \\ref{MoserThm}, which is the same as the proof\n in \\cite{Moser} modulo a few alterations.\n\n\\begin{proof}\nFollowing \\cite{Moser} we consider the action\n integral $\\int_{0}^{1} L(t,x(t),\\dot{x}(t)) dt$\n for functions $x(t)$ with $x(0)=x_0$ and $x(1) = x_1$.\n We require that extremals are affine lines, i.e. $\\ddot x(t)=0$. 
\nFor extremals the action is given by $S(x_0,x_1) = \\int_{0}^{1} L(t,x(t),\\dot{x}(t)) dt$ and we seek\nLagrangians such that $S={\\bm{h}}$, where ${\\bm{h}}$ is the generating function for $F$.\n %\n For Lagrangians this implies \n\\begin{equation}\n\\tfrac{d}{dt}(\\partial_pL)-\\partial_xL = ( \\partial_{t}+p \\partial_{x} )\\partial_p L-\\partial_x L=0,\n \\label{straightEL}\n\\end{equation}\nwhere $p=\\dot x$. Solving the first order partial differential equation yields\n $L=L_{0}(t,x,p)+p \\partial_x m + \\partial_t m$,\nwith\n\\begin{equation}\n L_{0}:=-\\int_{0}^{p}(p-p')\\partial^2_{x_0 x_1}{\\bm{h}}(x-p't,x+p'(1-t)) dp' ,\n \\label{Fzero}\n\\end{equation}\nand $m= m(t,x)$ to be specified later, cf.\\ \\cite{Moser} for details.\nThe extremals $x(t)$ are also extremals for $L_0$. Let $S_{0}(x_{0},x_{1}) = \\int_{0}^{1} L_0(t,x(t),\\dot{x}(t)) dt$, then\n\\begin{equation}\n \\int_{0}^{1}p \\partial_x m(t,x(t))+\\partial_t m(t,x(t)) dt = m(1,x_{1})-m(0,x_{0})\n\\end{equation}\nand hence \n %\n\\begin{equation}\n S(x_{0},x_{1})=S_{0}(x_{0},x_{1})+m(1,x_{1})-m(0,x_{0}).\n\\end{equation}\nDifferentiating $S$ yields\n\\begin{equation*}\n \\partial_{x_0} S = -\\partial_p L(0,x_{0},x_{1}-x_{0}),\\quad\n \\partial_{x_1} S = \\partial_p L(1,x_{1},x_{1}-x_{0})\n \\label{Sdiff}\n\\end{equation*}\nand for the mixed derivatives\n\\begin{equation}\n \\partial^2_{x_0 x_1} S_0(x_0,x_1)=-\\partial^2_{pp}L(0,x_{0},x_{1}-x_{0})=\\partial^2_{x_{0} x_{1}}{\\bm{h}}(x_{0},x_{1}).\n \\label{FppRel}\n\\end{equation}\nThen, $S_{0}(x_{0},x_{1})-{\\bm{h}}(x_{0},x_{1})=u(x_{0})+v(x_{1})$ and the choice\n\\begin{equation*}\n m(t,x):=(1-t)u(x)-tv(x)\n\\end{equation*}\nimplies $S={\\bm{h}}$. 
Differentiating the relation $y=-\\partial_x {\\bm{h}}(x,x_{1})$ with respect to $y$ and using the fact that\n$x_1 = f(x,y)$, yields\n\\begin{equation*}\n1 = -\\partial_y\\bigl[\\partial_x {\\bm{h}}(x,x_1)\\bigr] = -\\partial^2_{x x_{1}}{\\bm{h}}(x,x_{1})\\,\\partial_y f(x,y),\n\\end{equation*}\nand thus $\\delta\\le \\partial_y f \\le \\delta^{-1}$ if and only if $-\\delta^{-1} \\leq \\partial^2_{x x_{1}}{\\bm{h}} \\leq -\\delta$.\nBy relation \\eqref{FppRel} we have $\\partial^2_{pp}L \\in [ \\delta, \\delta^{-1} ]$.\n\nThe Hamiltonian is obtained via the Legendre transform\n\\begin{equation}\n H(t,x,y):=yp-L(t,x,p),\n \\label{Ltransform}\n\\end{equation}\nwhere \n\\begin{equation}\n y=\\partial_p L(t,x,p),\n \\label{Ltransform2}\n\\end{equation}\nand we can solve for $p$, i.e. $p = \\lambda(x,y)$. As before, differentiating \\eqref{Ltransform2} gives\n$1=\\partial^2_{pp}L\\cdot \\partial_y \\lambda$ and differentiating \\eqref{Ltransform} gives $\\partial_yH = \\lambda$. 
Combining these two identities yields $\\partial^2_{pp}L\\cdot \\partial^2_{yy}H=1$, from which the desired property $\\partial^2_{yy}H \\in [ \\delta, \\delta^{-1} ]$ follows.\n\nFrom the above analysis we obtain the following expression for the isotopy $\\psi_{t,H}$:\n\\begin{equation}\n\\label{eqn:MoserIso}\n\\psi_{t,H}(x,y) = \\Bigl(x+\\lambda(x,y)t,\\partial_pL\\bigl(t,x+\\lambda(x,y)t,\\lambda(x,y)\\bigr) \\Bigr).\n\\end{equation}\nLet $\\pi_x$ denote the projection onto the $x$-coordinate.~Then, $\\partial_y \\pi_x \\psi_{t,H}(x,y) =\\partial_y \\lambda(x,y) t = \\partial^2_{yy}H t$, which proves that\n$\\psi_{t,H}$ is positive twist for all $t\\in (0,1]$.\n\\end{proof}\n\n\nUsing Proposition \\ref{MoserThm} we obtain\na decomposition of symplectomorphisms $F\\in \\Symp(\\plane)$ as given in \\eqref{eqn:decomp12},\nwhich satisfies additional properties such that the discrete braid invariants in \\cite{BraidConleyIndex} are applicable.\n\n\\begin{prop}\\label{Interpolation2} \nLet $F \\in \\Symp(\\disc)$. 
Then, there exists an isotopy $\\phi_{t} \\subset \\Symp(\\plane)$ for all $t\\in [0,1]$, an integer $d\\in \\nn$ and \n a sequence $\\{t_{j} \\}_{j=0}^{d} \\subset [0,1]$ with $t_j = j\/d$,\n such that\n \\begin{enumerate}\n \\item[(i)] $\\phi_{0}= \\id$, $\\phi_{1}|_{\\disc}=F$;\n \\item[(ii)] $\\phi_{t}$ is smooth with respect to $t$ on the intervals $[t_j,t_{j+1}]$ (piecewise smooth);\n \\item[(iii)] $\\widehat F_j :=\\phi_{t_{j}} \\circ \\phi_{t_{j-1}}^{-1} \\in SV(\\plane)$ for all $1\\le j \\le d$, and $F_j:=\\widehat F_j|_{\\disc}$;\n \\item[(iv)] the projection of the graph of $\\phi_{t}(x,y)$ onto the $(t,x)$-plane is linear on the intervals $t \\in (t_{j-1},t_{j})$ for all $1\\le j \\le d$, and for all $(x,y) \\in \\plane$;\n \\item[(v)] $\\phi_{t} (\\disc) \\subset [-1,1] \\times \\rr $ for all $t\\in [0,1]$;\n \\item[(vi)] the points $z_\\pm = (\\pm 2,0)$\n are fixed points of $\\widehat F_j = \\phi_{t_{j}} \\circ \\phi_{t_{j-1}}^{-1}$ for all $1\\le j \\le d$;\n \\item[(vii)] the points $z'_\\pm = (\\pm 4,0)$ are period-2 points of $\\widehat F_j = \\phi_{t_{j}} \\circ \\phi_{t_{j-1}}^{-1}$ for all $1\\le j \\le d$,\n i.e. 
$\\widehat F_j(z'_\\pm) = z'_\\mp = -z'_\\pm$, for all $j$.\n \\end{enumerate}\nThe decomposition\n\\begin{equation}\n\\label{eqn:decomp14}\n\\widehat F = \\widehat F_d\\circ \\cdots \\circ \\widehat F_1,\n\\end{equation}\n is a generalization of the decomposition given in \\eqref{eqn:decomp12}.\n\\end{prop}\n\nThe isotopy constructed in Proposition \\ref{Interpolation2} is called a \\emph{chained Moser isotopy}.\nBefore proving Proposition \\ref{Interpolation2} we construct analogues of the rotation mapping used in \\eqref{eqn:decomp12}.\n\n\\begin{lem}\\label{AlternatePsi} \nFor every integer $\\ell\\ge 3$ there exists \n a positive Hamiltonian twist diffeomorphism $\\Psi$ of the plane $\\plane$,\nsuch that: \n\\begin{enumerate}\n\\item[(i)] the restriction $\\Psi|_{\\disc}$ is a rotation over angle $2\\pi\/\\ell$ and $\\Psi|_{\\disc}^\\ell = \\id$;\n\\item[(ii)] the points $z_\\pm=(\\pm 2,0)$ are fixed points for $\\Psi$;\n\\item[(iii)] the points $z'_\\pm=(\\pm 4,0)$ are period-2 points for $\\Psi$, i.e. 
$\\Psi(z'_\\pm) = z'_\\mp$.\n\\end{enumerate}\n\\end{lem}\n\n\\begin{proof}\nA linear rotation mapping on $\\plane$ is a positive twist mapping for all rotation angles $\\vartheta\\in (0,\\pi)$.\nThe generating function for a rotation is given by\n\\begin{equation}\n\\label{eqn:rot12}\n{\\bm{h}}_\\vartheta(x,x') = \\tfrac{1}{2}\\cot(\\vartheta) x^2 - \\csc(\\vartheta) x x' + \\tfrac{1}{2}\\cot(\\vartheta) x'^2.\n\\end{equation}\nIn order to construct the mappings $\\Psi$ we design special generating functions.\nLet $\\ell\\ge 3$ be an integer and let $\\vartheta_\\ell = 2\\pi\/\\ell \\in (0,\\pi)$.\nConsider generating functions of the form\n\\begin{equation}\n{\\bm{h}}_\\Psi(x,x') = \\xi_\\ell(x) - \\csc(\\vartheta_\\ell) x x' + \\xi_\\ell(x'),\n\\end{equation}\nwhich generate positive twist mappings for all $\\ell\\ge 3$.\nWe choose $\\xi_\\ell$ as follows: $\\xi_\\ell(x) = \\frac{1}{2}\\cot(\\vartheta_\\ell) x^2$ for all $|x|\\le 1$, $\\xi_\\ell(x) = \\frac{1}{2}\\csc(\\vartheta_\\ell) x^2$ for all $3\/2\\le |x|\\le 5\/2$, and $\\xi_\\ell(x) = -\\frac{1}{2}\\csc(\\vartheta_\\ell) x^2$ for all $7\/2\\le |x|\\le 9\/2$.\nThe mapping $\\Psi$ is defined by ${\\bm{h}}_\\Psi$ and \n$y = -\\partial_1 {\\bm{h}}_\\Psi(x,x')$ and $y' = \\partial_2 {\\bm{h}}_\\Psi(x,x')$.\\footnote{To simplify notation we express the derivatives of ${\\bm{h}}$ with respect to its two coordinates \n by $\\partial_1{\\bm{h}}$ and $\\partial_2{\\bm{h}}$.} \nFor $|x|,|x'|\\le 1$, the generating function restricts to \\eqref{eqn:rot12} which yields the rotation over $\\vartheta_\\ell$ on $\\disc$ and establishes (i).\nFor $3\/2\\le x,x'\\le 5\/2$ we have \n\\[\ny = \\csc(\\vartheta_\\ell)(x'-x),\\quad\\hbox{and}\\quad y' = \\csc(\\vartheta_\\ell)(x'-x),\n\\]\nwhich verifies that $z_+$ is a fixed point, and the same holds for $z_-$, completing the verification of (ii).\nFor $7\/2\\le x \\le 9\/2$ and $-9\/2\\le x'\\le -7\/2$, we have \n\\[\ny = \\csc(\\vartheta_\\ell)(x'+x),\\quad\\hbox{and}\\quad y' = 
-\\csc(\\vartheta_\\ell)(x'+x),\n\\]\nthen $z'_+$ is mapped to $z'_-$ and similarly $z'_-$ is mapped to $z'_+$, which completes (iii) and the proof of the lemma.\n\\end{proof}\n\n\nIn order to extend chained Moser isotopies yet another type of Hamiltonian twist diffeomorphism is needed.\n\n\\begin{lem}\\label{AlternatePsi3} \nFor every integer $\\ell\\ge 3$ there exists \n a positive Hamiltonian twist symplectomorphism $\\Upsilon$ of the plane $\\plane$,\nsuch that: \n\\begin{enumerate}\n\\item[(i)] the restriction $\\Upsilon|_{\\disc}$ is a rotation over angle $2\\pi\/\\ell$, i.e. $\\Upsilon|_{\\disc}^{\\ell} = \\id$;\n\\item[(ii)] the points $z_\\pm=(\\pm 2,0)$ and $z'_\\pm=(\\pm 4,0)$ are period-2 points for $\\Upsilon$, i.e. $\\Upsilon(z_\\pm) = z_\\mp$ and $\\Upsilon(z'_\\pm) = z'_\\mp$.\n\\end{enumerate}\n\\end{lem}\n\n\\begin{proof}\nAs before, consider\n generating functions of the form\n\\begin{equation}\n\\label{eqn:rot14}\n{\\bm{h}}_\\Upsilon(x,x') = \\xi_\\ell(x) - \\csc(\\vartheta_\\ell) x x' + \\xi_\\ell(x'),\n\\end{equation}\nwhich generate positive twist mappings for all $\\ell\\ge 3$.\nWe choose $\\xi_\\ell$ as follows: $\\xi_\\ell(x) = \\frac{1}{2}\\cot(\\vartheta_\\ell) x^2$ for all $|x|\\le 1$, and $\\xi_\\ell(x) = -\\frac{1}{2}\\csc(\\vartheta_\\ell) x^2$ for all $3\/2\\le |x|\\le 9\/2$.\nThe mapping $\\Upsilon$ is defined by ${\\bm{h}}_\\Upsilon$.\nFor $|x|,|x'|\\le 1$, the generating function restricts to \\eqref{eqn:rot12} which yields the rotation over $\\vartheta_\\ell$ on $\\disc$ and establishes (i).\nFor $3\/2\\le x\\le 9\/2$ and $-9\/2\\le x'\\le -3\/2$, we have \n\\[\ny = \\csc(\\vartheta_\\ell)(x'+x),\\quad\\hbox{and}\\quad y' = -\\csc(\\vartheta_\\ell)(x'+x),\n\\]\nthen $z_+$ and $z'_+$ are mapped to $z_-$ and $z'_-$ respectively and similarly $z_-$ and $z'_-$ are mapped to $z_+$ and $z'_+$ respectively, which completes the proof.\n\\end{proof}\n\n\n\n\n\\begin{proof}[Proof of Proposition \\ref{Interpolation2}]\nConsider the subgroup $\\Symp_{c}(\\plane)$ formed by compactly 
supported symplectomorphisms of the plane.\\footnote{A symplectomorphism is compactly supported in $\\plane$ if it is the identity outside a compact subset of $\\plane$.\n}\n Recall that due to the uniform twist property the set $SV(\\plane)$ is open in the topology given by $C^{1}$-convergence on compact sets, cf.\\ \\cite{LeCalvez}. \n Let $\\Psi\\in \\Ham(\\plane)$ be given by Lemma \\ref{AlternatePsi} for some $\\ell\\ge 3$.\n Then, there exists\nan open neighborhood ${\\mathscr{W}} \\subset \\Symp_{c}(\\plane)$ of the identity, such that $\\varphi\\circ \\Psi\\in SV(\\plane)$ for all $\\varphi \\in {\\mathscr{W}}$.\n\nFor $F\\in \\Symp(\\disc)$, Proposition \\ref{prop:MCG11a}\nprovides a Hamiltonian\n $H \\in \\mathcal{H}(\\disc)$ such that $F = \\psi_{1,H}$. \n Fix $\\epsilon\\in (0,1\/2)$.\n Let $H^\\dagger$ be a smooth extension to $\\rr\\times\\plane$, let ${\\mathscr{U}}_\\epsilon(\\disc) = \\{z\\in \\plane~|~|z|<1+\\epsilon\\}$, and\nlet $\\alpha \\colon\\plane \\rightarrow \\rr$ \nbe a smooth bump function satisfying $\\alpha|_{\\disc}=1$ and $\\alpha = 0$ on $\\plane\\setminus{\\mathscr{U}}_\\epsilon(\\disc)$.\nDefine $\\widetilde H = \\alpha H^\\dagger$ with $\\widetilde H \\in {\\mathcal{H}}(\\plane)$.\nThe associated Hamiltonian isotopy is denoted by $\\psi_{t,\\widetilde H}$ and $\\widetilde F = \\psi_{1,\\widetilde H}\n\\in \\Ham(\\plane)$. Moreover, $\\psi_{t,\\widetilde H}$\nequals the identity on ${\\plane\\setminus{\\mathscr{U}}_\\epsilon(\\disc)}$, i.e. $\\psi_{t,\\widetilde H}$ is supported in ${\\mathscr{U}}_\\epsilon(\\disc)$,\nand $\\widetilde F|_{\\disc} = F$.\n\nFix $\\ell\\ge 3$ and choose an integer $k>0$\n sufficiently large such that\nthe symplectomorphisms \n\\begin{equation}\n G_{i}=\\psi_{i\/k,\\widetilde H} \\circ \\psi_{(i-1)\/k,\\widetilde H}^{-1}, \\ i \\in \\{1, \\dots,k \\}\n\\end{equation}\nare elements of ${\\mathscr{W}}$. 
Each $G_{i}$ restricted to $\\disc$ can be decomposed as follows:\n\\begin{equation}\n G_{i}|_\\disc = \\bigl(G_{i}|_\\disc \\circ \\Psi\\bigr) \\circ \\underbrace{\\Psi'\\circ\\cdots\\circ\\Psi'}_{\\kappa(\\ell-1)},\\quad \\ell\\ge3,~\\kappa\\in \\nn,\n\\end{equation}\nwhere $\\Psi$ and $\\Psi'$ are obtained from Lemma \\ref{AlternatePsi} by choosing rotation angles $2\\pi\/\\ell$ and $2\\pi\/\\kappa\\ell$ respectively. \nObserve that $\\Psi \\circ \\Psi'^{\\kappa(\\ell-1)}|_{\\disc} = \\id$.\nFrom $\\widetilde F$ we define the mapping $\\widehat F\\in \\Symp(\\plane)$:\n\\begin{equation} \n \\widehat{F}=(G_{k} \\circ \\Psi) \\circ \\underbrace{\\Psi' \\circ \\dots \\circ \\Psi'}_{\\kappa(\\ell-1)}\\circ\\cdots\\circ (G_{1} \\circ \\Psi) \\circ \\underbrace{\\Psi'\\circ\\cdots\\circ \\Psi'}_{\\kappa(\\ell-1)}.\n\\end{equation}\nBy construction we have $\\widehat F|_\\disc = F$.\nLet $\\ell_\\kappa= \\kappa(\\ell-1) +1$ and $d = \\ell_\\kappa k$ and put \n\\begin{equation}\n \\widehat F_{j} = \\begin{cases} G_{j\/\\ell_\\kappa} \\circ \\Psi &\\mbox{for }~~ j \\in \\{\\ell_\\kappa, 2\\ell_\\kappa,\\cdots, d\\}\n \\\\ \\Psi' &\\mbox{for } j \\in \\{1, \\dots, d \\} \\backslash \\{\\ell_\\kappa, 2\\ell_\\kappa, \\dots, d \\}. 
\\end{cases}\n\\end{equation} \nwith $\\widehat F_j \\in SV(\\plane)$ for $j \\in \\{1, \\dots, d \\}$ and $F_j = \\widehat F_j|_{\\disc}$.\nUsing the latter we obtain a decomposition of $F$ as given in \\eqref{eqn:decomp12},\nwith the additional property that the mappings $F_j$ extend to twist symplectomorphisms of $\\plane$,\nwhich proves \\eqref{eqn:decomp14}.\n\nEach symplectomorphism $\\widehat F_j$ can be connected to the identity by a Hamiltonian path.\nLet $H^j$ be the Hamiltonian given by Proposition \\ref{MoserThm}, which connects\n$\\widehat F_j$ to the identity via the Moser isotopy $\\psi_{s,H^j}$, $s\\in [0,1]$.\nLet $t_{j}=j\/d$ for all $j \\in \\{0, \\dots, d \\}$ and define \n\\begin{equation}\n\\label{eqn:theisotopy}\n\\phi_t = \\psi_{s^j(t),H^j} \\circ \\widehat F_{j-1} \\circ\\cdots\\circ \\widehat F_0,\\quad t\\in [t_{j-1},t_{j}], ~~j\\in \\{1,\\cdots,d\\},\n\\end{equation}\nwith $s^j(t) = d(t-t_{j-1})$ and $\\widehat F_0 = \\id$.\nObserve that, by construction, $\\phi_{t_j}\\circ \\phi_{t_{j-1}}^{-1} = \\widehat F_j$, for all\n$j=1,\\cdots,d$, and (i) - (iv) are satisfied.\nCondition (v) follows from (iv) and from the fact that each $\\widehat F_j$ leaves the disc $\\disc$ invariant.\n\n All the symplectomorphisms in the decomposition are supported in the disc ${\\mathscr{U}}_\\epsilon(\\disc)$, hence Conditions (ii) and (iii) of Lemma\n \\ref{AlternatePsi} \n imply Properties (vi) and (vii).\n\\end{proof}\n\n\\begin{rem}\n\\label{rem:extendedMI}\nThe chained Moser isotopies in Proposition \\ref{MoserThm} can be extended with two more parameters $r\\ge 0$ and $\\rho\\ge 0$.\nConsider the decomposition\n\\begin{equation}\n\\label{eqn:decomp16}\n\\widehat F = \\widehat F_d\\circ \\cdots \\circ \\widehat F_1 \\circ \\underbrace{\\Psi^{\\ell_r}_r\\circ\\cdots\\circ\\Psi^{\\ell_r}_r}_r \\circ\n\\underbrace{\\Upsilon^{\\ell_\\rho}_\\rho\\circ\\cdots\\circ\\Upsilon^{\\ell_\\rho}_\\rho}_\\rho, \n\\end{equation}\nwhere $\\Psi_r^{\\ell_r}|_\\disc = \\id$ 
and $\\ell_r\\ge 3$, and $\\Upsilon_\\rho^{\\ell_\\rho}|_\\disc = \\id$ and $\\ell_\\rho\\ge 3$.\nWe can again define a Moser isotopy as in \\eqref{eqn:theisotopy} with $d$ replaced by $d+r\\ell_r+\\rho\\ell_\\rho$. \nThe isotopy is again called a chained Moser isotopy and denoted by $\\phi_t$, and the extended period will again be denoted by $d$.\nThe strands $\\phi_t(z_\\pm)$ link with the cylinder $[0,1]\\times\\disc$ and with each other with linking number $2\\rho$.\n\\end{rem}\n\n\n\n\n\n\n\n\\subsection{The discrete action functional}\\label{subsec:genfun}\n\nLet $F \\in \\Symp(\\disc)$ be the given symplectomorphism of the 2-disc and let $\\{ \\phi_{t} \\}_{t \\in \\rr}$ and $\\{t_{j} \\}_{j \\in \\{ 0, \\dots, d \\} }$ \nbe the associated continuous isotopy and sequence of discretization times as given in Proposition~\\ref{Interpolation2}\nfor the extension $\\widehat F$.\nThe isotopy is extended periodically, that is $ \\phi_{t+s} = \\phi_{t} \\circ \\phi_{1}^{s} $ and $t_{j+s d} = s + t_{j}$ for all $s \\in \\mathbb{Z}$.\nThe decomposition of $\\widehat F$ given by Proposition \\ref{Interpolation2} yields a periodic sequence of positive twist symplectomorphisms $\\{\\widehat F_j\\}$, with $\\widehat F_j =\n\\phi_{t_{j+1}} \\circ \\phi_{t_{j}}^{-1} \\in SV(\\plane)$ and $\\widehat F_{j+d} = \\widehat F_j$.\n\n\n\\begin{defn}\nA sequence $\\{ (x_{j},y_{j}) \\}_{j \\in \\mathbb{Z}}$ is a \\textit{full orbit} for the system $\\{ \\widehat F_j\\}$ if\n\\[\n(x_{j+1},y_{j+1}) = \\widehat F_j (x_{j},y_{j}), \\quad j\\in \\zz.\n\\]\nIf $(x_{j+d},y_{j+d}) = (x_j,y_j)$ for all $j$, then $\\{ (x_{j},y_{j}) \\}_{j \\in \\mathbb{Z}}$ is called a \\emph{$d$-periodic sequence}\nfor the system $\\{ \\widehat F_j\\}$.\n\\end{defn}\n\n\n\n For every twist symplectomorphism $\\widehat F_j \\in SV(\\plane)$ \nwe assign a generating function\n ${\\bm{h}}_{j}={\\bm{h}}_{j}(x_{j}, x_{j+1})$ on the $x$-coordinates, which implies that\n $y_{j}= -\\partial_1 {\\bm{h}}_j$\n and 
$y_{j+1}=\\partial_2 {\\bm{h}}_j$.\n From the twist property it follows that\n \\begin{equation}\\label{hmonotonicity}\n \\partial_{1} \\partial_{2} {\\bm{h}}_{j} < 0,\\quad \\forall j\\in \\zz.\n \\end{equation}\n Note that the sequence $\\{{\\bm{h}}_{j} \\}$ is $d$-periodic. \n \n Define the \\textit{action functional} $W_{d}:\\rr^{\\mathbb{Z}\/d\\mathbb{Z}} \\rightarrow \\rr$ by\n \\begin{equation}\n \\label{eqn:action1}\n W_{d}\\bigl(\\{x_j\\}\\bigr):= \\sum\\limits_{j=0}^{d-1} {\\bm{h}}_{j} (x_j, x_{j+1}).\n \\end{equation}\nA sequence $\\{x_j\\}$ is a critical point of $W_d$ if and only if\n\\begin{equation}\n\\label{eqn:parabrec}\n \\mathcal{R}_{j}(x_{j-1}, x_{j}, x_{j+1}) := -\\partial_{2} {\\bm{h}}_{j-1} \\left(x_{j-1}, x_{j}\\right) - \\partial_{1} {\\bm{h}}_j \\left(x_{j}, x_{j+1}\\right)=0,\n\\end{equation}\n for all $j \\in \\mathbb{Z}$.\n The $y$-coordinates satisfy $y_{j}=\\partial_{2} {\\bm{h}}_{j-1}\\left(x_{j-1},x_{j}\\right)$.\n \n Periodicity and exactness of $\\mathcal{R}_j$ are immediate. The monotonicity follows directly from inequality~\\eqref{hmonotonicity}.\nA periodic point $z$, i.e. $F^d(z) = z$, is equivalent to a $d$-periodic sequence $\\{ (x_{j},y_{j}) \\}_{j \\in \\mathbb{Z}}$, with $z=(x_0,y_0)\\in \\disc$.\nSince $z=(x_0,y_0)\\in \\disc$, the invariance of $\\disc$ under $F$ implies that $(x_{j},y_{j})\\in \\disc$ for all $j$.\nThe above considerations yield the following variational principle.\n\\begin{prop}\n\\label{prop:varprin1}\nA $d$-periodic sequence $\\{ (x_{j},y_{j}) \\}_{j \\in \\mathbb{Z}}$ is a $d$-periodic orbit for the system $\\{ \\widehat F_j\\}$ if and only if\nthe sequence of $x$-coordinates $\\{ x_{j} \\}$ is a critical point of $W_d$. \n\\end{prop}\n\nThe idea of periodic sequences can be generalized to periodic configurations.\nLet $\\{B_j\\}_{j\\in \\zz}$, $B_j = \\{(x_j^\\mu,y_j^\\mu)~|~\\mu=1,\\cdots,m\\}\\in \\C_m(\\plane)$ and $B_{j+d} = B_j$ for all $j$. 
\nSuch a sequence $\\{B_j\\}$ is a $d$-periodic sequence for $\\{\\widehat F_j\\}$ if $\\widehat F_j(B_j) = B_{j+1}$ for all $j\\in \\zz$.\n\nFor a $d$-periodic sequence $\\{B_j\\}$, the $x$-projection yields a discretized braid ${\\bm{b}} = \\{{\\bm{b}}^\\mu\\} = \\{x_j^\\mu\\}$, cf.\\ Definition \\ref{PL}.\nThe above action functional can be extended to the space of discretized braids $\\Conf_m^d$:\n\\begin{equation}\n\\label{eqn:action2}\nW_d({\\bm{b}}) := \\sum_{\\mu=1}^m W_d({\\bm{b}}^\\mu), \n\\end{equation}\nwhere $W_d({\\bm{b}}^\\mu)$ is given by \\eqref{eqn:action1}. This yields the following extension of the variational principle.\n\\begin{prop}\n\\label{prop:varprin2}\nA $d$-periodic sequence $\\{ B_j \\}_{j \\in \\mathbb{Z}}$, $B_j\\in \\C_m(\\plane)$, is a $d$-periodic sequence of configurations for the system $\\{ \\widehat F_j\\}$ if and only if\nthe sequence of $x$-coordinates ${\\bm{b}}=\\{ x^\\mu_{j} \\}$ is a critical point of $W_d$ on $\\Conf_m^d$. \n\\end{prop}\n\nA discretized braid ${\\bm{b}}$ is stationary for $W_d$ if it satisfies the parabolic recurrence relations in \\eqref{eqn:parabrec} for all $\\mu$\nand the periodicity condition in Definition \\ref{PL}(b).\nIn Section \\ref{sec:braiding} we show that $d$-periodic sequences of configurations $\\{B_j\\}$ for the system $\\{ \\widehat F_j\\}$\nyield geometric braids.\n\n\n\n\n\n\n\\section{Braiding of periodic points}\n\\label{sec:braiding}\nFor symplectomorphisms $F\\in \\Symp(\\disc)$, with a finite invariant set $B\\subset \\inter \\disc$, the mapping class\ncan be identified via a chained Moser isotopy.\n\n\\begin{prop}\n\\label{prop:traceout1}\nLet $B\\subset\\inter\\disc$, with $\\# B=m$, be a finite invariant set for $F\\in \\Symp(\\disc)$ and let\n$\\phi_t$ be a chained Moser isotopy given in Proposition \\ref{Interpolation2}. 
Then, ${\\bm{\\beta}}(t) = \\phi_t(B)$ represents a geometric braid based at $B\\in \\C_m\\disc$ with only positive crossings and $\\beta = \\imath_B\\bigl([{\\bm{\\beta}}]_B\\bigr)$ is a\npositive word in\nthe braid monoid ${\\mathscr{B}}_m^+$.\nThe $x$-projection ${\\bm{b}}(t) = \\pi_x{\\bm{\\beta}}(t)$ on the $(t,x)$-plane is a (continuous) piecewise linear braid diagram.\n\\end{prop}\n\n Proposition \\ref{prop:MCG12} implies that the associated positive braid word $\\beta\\in \\mathcal{B}_m$, derived from the braid diagram $\\pi_x\\phi_t(B)$ determines the mapping class of $F$\nrelative to $B$.\nIf the based path ${\\bm{\\beta}}(t)=\\phi_t(B)$ is regarded as a \\emph{free loop} $\\sbb^1 \\to \\C_m\\disc$, i.e. discarding the base point, then ${\\bm{\\beta}}$ is referred to as a \\emph{closed} geometric braid in $\\disc$.\n\\begin{defn}\n\\label{defn:acylindrical}\nLet ${\\bm{\\beta}}$ be a geometric braid in $\\disc$. A component of ${\\bm{\\beta}}'\\subset {\\bm{\\beta}}$ is called \\emph{cylindrical in} ${\\bm{\\beta}}$ if ${\\bm{\\beta}}'$ can be deformed onto $\\partial \\disc$ as a closed geometric braid. Otherwise ${\\bm{\\beta}}'$ is called \\emph{acylindrical}. A union of components ${\\bm{\\beta}}'$ is called cylindrical\/acylindrical in ${\\bm{\\beta}}$ if all members are.\n\\end{defn}\n\n\\begin{rem}\n\\label{rmk:acylindrical}\nA positive conjugacy class $\\llbracket \\gamma,\\aset\\rrbracket$ is associated with braid classes in $[{\\bm{a}}\\rel{\\bm{b}}]_{\\stackrel{\\mbox{\\tiny\\textnormal{\\raisebox{0ex}[0ex][0ex]{$+$}}}}{\\sim}}$ \nin $\\Conf_{n,m}^{d+q}$, cf.\\ Section \\ref{subsec:algpres}. 
If for a representative ${\\bm{a}}\\rel{\\bm{b}}$ it holds that $\\Bd({\\bm{a}})$ is cylindrical\/acylindrical\nin ${\\bm{\\gamma}} =\\Bd({\\bm{a}})\\rel\\Bd({\\bm{b}})$,\nthen $\\llbracket \\gamma,\\aset\\rrbracket$ is said to be cylindrical\/acylindrical, cf.\\ Definition \\ref{defn:proper2}.\n\\end{rem}\n\n\nLet $z,z'\\in \\plane$ be distinct points with the property that $\\widehat F^n(z) = z$ and $\\widehat F^n(z') = z'$, for some $n\\ge 1$, where $\\widehat F = \\phi_1$ and\n$\\phi_t$ is a chained Moser isotopy constructed in Proposition \\ref{Interpolation2}.\nDefine the continuous functions $z(t) = \\phi_t(z)$ and $z'(t) = \\phi_t(z')$ and let $x(t)$ and $x'(t)$ be the $x$-projections of $z(t)$ and $z'(t)$ respectively.\nBy Proposition \\ref{Interpolation2}, $x(t)$ and $x'(t)$ are (continuous) piecewise linear functions that are uniquely determined by the sequence\n$\\{t_j\\}_{j=0}^{nd}$, $t_j = j\/d$.\n\n\\begin{lem}[cf.\\ \\cite{BraidConleyIndex}]\n\\label{lem:propergraphs}\nThe two $x$-projections $x(t)$ and $x'(t)$ form a (piecewise linear) braid diagram, i.e.\\ no tangencies.\nThe intersection number $\\iota\\bigl(x(t),x'(t)\\bigr)$, given as the total number of intersections of the graphs of $x(t)$ and $x'(t)$\non the interval $t\\in [0,n]$,\n is well-defined and even.\n\\end{lem}\n\n\\begin{proof}\nLet $x_j = x(t_j)$ and $x_j' = x'(t_j)$, $j=0,\\cdots,nd$. By the theory in Section \\ref{subsec:genfun} the sequences satisfy the parabolic recurrence relations\n$\\mathcal{R}_{j}(x_{j-1}, x_{j}, x_{j+1}) =0$ and $\\mathcal{R}_{j}(x'_{j-1}, x'_{j}, x'_{j+1}) =0$.\nSuppose the sequences $\\{x_j\\}$ and $\\{x'_j\\}$ have a tangency at $x_j = x_j'$ (but are not identically equal). 
Then, either $x'_{j-1}<x_{j-1}$ and $x_{j+1}>x'_{j+1}$, or the reversed inequalities hold.\nLet $\\tau\\in [t_j,t_{j+1}]$ be the intersection point and $x(\\tau) = x'(\\tau) = x_*$.\nAfter rescaling and shifting to the interval $[0,1]$ we have\n$x(s(t)) = x_j + (x_{j+1}-x_j)s(t)$, $s(t) = d(t-t_j) \\in [0,1]$ and the same for $x'(s(t))$.\nRecall that $\\phi_t$ is given by \n\\eqref{eqn:theisotopy} and therefore by \\eqref{Ltransform2}, \n\\[\ny(s(\\tau)) = \\partial_p L^j\\bigl(s(\\tau),x_*,x_{j+1}-x_j), \\quad y'(s(\\tau)) = \\partial_pL^j\\bigl(s(\\tau),x_*,x'_{j+1}-x'_j),\n\\]\nwhere $L^j$ are the Lagrangians for the Moser isotopies $\\psi_{t,H^j}$ in Proposition \\ref{MoserThm}.\nSince $\\partial^2_{pp}L^j\\ge \\delta>0$ and $x_{j+1}-x_j > x'_{j+1}-x'_j$, we conclude that $y(s(\\tau)) > y'(s(\\tau))$.\nBy reversing the role of $x$ and $x'$, i.e.\\ $x_j>x'_j$ and $x_{j+1}<x'_{j+1}$, we obtain $y(s(\\tau))<y'(s(\\tau))$. Suppose now that $x_{j-1}<x'_{j-1}$ and $x_{j+1}>x'_{j+1}$.\nAs in the previous case\n\\[\n\\begin{aligned}\ny(s(\\tau)) &= \\partial_pL^{j-1} \\bigl(1,x_*,x_*-x_{j-1}) = \\partial_pL^j\\bigl(0,x_*,x_{j+1}-x_*);\\\\\ny'(s(\\tau)) &= \\partial_pL^{j-1} \\bigl(1,x_*,x_*-x'_{j-1}) = \\partial_pL^j\\bigl(0,x_*,x'_{j+1}-x_*),\n\\end{aligned}\n\\]\nand since $x_*-x_{j-1}>x_*-x'_{j-1}$ (and $x_{j+1}-x_* > x'_{j+1}-x_*$) we conclude that $y(s(\\tau)) >y'(s(\\tau))$. 
Reversing the role of $x$ and $x'$ yields\n$y(s(\\tau))<y'(s(\\tau))$.\n\\end{proof}\n\n\\begin{exm}\nFor $k>1$, we\nuse $F^k$ instead.\nConsider three different 2-colored braid words: $\\gamma_0 = \\gamma$ as above, $\\gamma_{-1} = \n \\sigma_4\\sigma_1\\sigma_2\\sigma_3^2\\sigma_2\\sigma_1\\sigma_4$, and\n$\\gamma_1 = \\sigma_4\\sigma_1\\sigma_3\\sigma_2^2\\sigma_3\\sigma_1\\sigma_4$.\nFor all three cases the skeletal word is given by $\\beta$ and the coloring is given by $\\aset=\\{3\\}$.\nConsider a symbolic sequence $\\{a_i\\}_{i=0}^k$, $a_i\\in \\{-1,0,1\\}$; then\nthe positive conjugacy class $(\\gamma,\\aset)$, with $\\aset = \\{3\\}$, and\n\\[\n\\gamma = \\gamma_{a_0} \\cdot\\gamma_{a_1} \\cdots \\gamma_{a_{k-1}}\\cdot \\gamma_{a_k},\n\\]\nis proper and acylindrical, except when $a_i=-1$ for all $i$, or $a_i=1$ for all $i$.\nIt follows that $(\\gamma,\\aset) \\mapsto \\beta^k$. In \\cite[Sect.\\ 4.5]{BraidConleyIndex} the braid invariant is given by ${\\mathrm{H}}\\llbracket\\gamma,\\aset\\rrbracket = \\sbb^r$,\nwhere $r$ is the number of zeroes in $\\{a_i\\}$.\nThis procedure produces many $k$-periodic points for $F$ that are forced by the invariant set $B$. 
As a matter of fact, one obtains a lower bound on the \ntopological entropy of $F$.\n\\end{exm}\n\n\\begin{exm}\n\\label{exm:exist3}\nLet $F\\in \\Symp(\\disc)$ possess an invariant set $B$ consisting of three points,\nand let the mapping class of $[F]$ relative to\n$B$ be represented by a positive braid word $\\beta = {\\bm{i}}_m^{-1}[{\\bm{\\beta}}]$,\nwith ${\\bm{\\beta}} = \\bigl\\{{\\bm{\\beta}}^1(t), {\\bm{\\beta}}^2(t), {\\bm{\\beta}}^3(t)\\bigr\\}$ being a geometric representative of \n\\[\n\\beta = \\sigma_1\\sigma_2^2\\sigma_1^2\\sigma_2^2\\sigma_1\\sigma_2\\sigma_1\\sigma_2^2\\sigma_1,\n\\]\ncf.\\ Figure \\ref{ex:nontriv1}.\nFor the intersection numbers it holds that $\\iota\\bigl({\\bm{\\beta}}^1(t),{\\bm{\\beta}}^2(t)\\bigr) = \\iota\\bigl({\\bm{\\beta}}^1(t),{\\bm{\\beta}}^3(t)\\bigr) = 6$ and\n $\\iota\\bigl({\\bm{\\beta}}^2(t),{\\bm{\\beta}}^3(t)\\bigr) = 1 < 6$.\nFrom considerations in~\\cite[Sect.\\ 9.2]{BraidConleyIndex} it follows that through the black strands ${\\bm{\\beta}}$ one can plot a single red strand ${\\bm{\\alpha}}$, \nsuch that the 2-colored braid class \n$\\llbracket\\gamma,\\aset\\rrbracket$ represented by their union \n\\[\n\\gamma = \\sigma_2 \\sigma_1 \\sigma_2 \\sigma_3 \\sigma_2 \\sigma_3 \\sigma_2 \\sigma_3 \\sigma_1\n\\sigma_2 \\sigma_2 \\sigma_1 \\sigma_2 \\sigma_3 \\sigma_2 \\sigma_3 \\sigma_2 \\sigma_1 \\sigma_2 \\sigma_1 \\sigma_2 \\sigma_3 \\sigma_2 \\sigma_3 \\sigma_2 \\sigma_1 \\sigma_2,\n\\]\nwith $\\aset = \\{2\\}$, cf.\\ Figure \\ref{ex:nontriv1}, \nis proper, acylindrical and of nontrivial index.\nThe intersection numbers for ${\\bm{\\alpha}}$ are\n $\\iota\\bigl({\\bm{\\alpha}}(t),{\\bm{\\beta}}^2(t)\\bigr) = \\iota\\bigl({\\bm{\\alpha}}(t),{\\bm{\\beta}}^3(t)\\bigr) = 4$.\nIf the intersection numbers are chosen more generally, i.e.\\\n$\\iota\\bigl({\\bm{\\beta}}^1(t),{\\bm{\\beta}}^2(t)\\bigr) = \\iota\\bigl({\\bm{\\beta}}^1(t),{\\bm{\\beta}}^3(t)\\bigr) = 2p$ and\n 
$\\iota\\bigl({\\bm{\\beta}}^2(t),{\\bm{\\beta}}^3(t)\\bigr) = r < 2p$, and $\\iota\\bigl({\\bm{\\alpha}}(t),{\\bm{\\beta}}^2(t)\\bigr) = \\iota\\bigl({\\bm{\\alpha}}(t),{\\bm{\\beta}}^3(t)\\bigr) = 2q$,\nwhere $r > 0$ and $p \\geq 2$. If $r < 2q < 2p$, then \nthe singular homology of $\\llbracket\\gamma,\\aset\\rrbracket$ is given by\n$$\nH_{k}\\bigl({\\mathrm{H}}\\llbracket\\gamma,\\aset\\rrbracket\\bigr) = \\begin{cases}\n \\mathbb{R}: k= 2q,\\ 2q+1,\\\\\n 0: \\text{elsewise}.\\\\\n\\end{cases}\n$$\nBy Theorem~\\ref{thm:main1} we conclude that there are at least two additional distinct fixed points $A_1, A_2$\nwith $\\jmath_{A_i\\cup B}([F]) = \\gamma\\!\\!\\mod \\square, \\ i = 1,2$. \nIn addition,\nvia concatenating braid diagrams one can produce an infinite number of periodic solutions of different periods, cf.\\ \\cite[Lemma 47]{BraidConleyIndex}.\nThe above \nforcing result\nis specific to area-preserving mappings of the 2-disc, or $\\rr^2$, and \\emph{not} true for arbitrary diffeomorphisms of the 2-disc.\n\nFor example, consider the time-1 mapping $F\\colon \\disc\\to\\disc$ given by the differential equation $\\dot{r} = r(r-a_1)(r-a_2)(r-1)$ and $\\dot{\\theta} = g(r)>0$,\nwith $g(a_1) = \\pi$ and $g(a_2) = 6\\pi$, and $0<a_1<a_2<1$.\n\\end{exm}\n\nCase 2) $~D_{ij} > d_{ij}^v$\\\\\nA scheduled maintenance date later than the sampled due date means that the maintenance is too late and the defect occurs during use. As before, $d_{ij}^v$ is the ``Sampled Due date'' in Figure~\\ref{f1}, but the scheduled maintenance date $D_{ij}$ is now ``Maintenance date b''. In this case, $D_{ij}$ is later than $d_{ij}^v$, and the vehicle will break down on the road. In our algorithm, the number of failures will be increased by one.\n\\\\\nCase 3) $~D_{ij} = d_{ij}^v$\\\\\nThe ideal situation is that the maintenance date is scheduled on the due date. The component can then be maintained exactly on the date that it breaks down. 
In this case, there is no penalty or failure.\n\nThe averages of the penalty costs and the numbers of failures from $1000$ due date samples will be used as the penalty cost and expected number of failures for the scheduled maintenance date of the component. For each operation (a single-component operation or a group operation), its cost consists of three parts: the set-up cost of the car, the maintenance costs and the penalty costs of all components of the operation. The penalty cost of components is a part of the total cost, and the expected number of failures of components is the third objective to be minimized in our multi-objective optimization.\n\n\n\\subsection{Implementation of Evolutionary Algorithm Operators}\nTo solve our application problem with an EA, there are several basic issues we need to deal with, such as how to represent an individual or solution in the population (Chromosome Encoding); how to take these chromosomes into a process of evolution (Genotype-Phenotype Mapping); and how to create variations of solutions in each iteration (Genetic Operators). Details of these topics are given in the following subsections.\n\n\\subsubsection{Chromosome Encoding}\nIn our algorithm, a three-vector chromosome (Figure~\\ref{f0}) is proposed to represent an individual, and the three vectors are:\n\\begin{itemize}\n \\item Group structure vector: the group structures of components.\n \\item Starting time vector: the starting times of operations.\n \\item Workshop assignment vector: the workshops for operations.\n\\end{itemize}\n\n\\begin{figure*}[!htbp]\n\\hspace{-0.7cm}\n\\includegraphics[height=0.45in, width=5.2in]{chrom1.png}\n\\caption{Three-vector chromosome.}\n\\label{f0}\n\\end{figure*}\n\nThe group structure vector gives the information of which components are in the same group; it is initialized by randomly picking a feasible group structure for each car (check the details in \\cite{wang2019vehicle}). 
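To make the encoding concrete, the three-vector chromosome and a random initialization consistent with the order described here might be sketched as follows. This is a minimal illustration: `init_chromosome`, the grouping options, and the execution-window data layout are hypothetical stand-ins, not the implementation of \cite{wang2019vehicle}.

```python
import random

def init_chromosome(feasible_groups, exec_windows, team_counts, rng):
    """Randomly initialise the three chromosome vectors (illustrative sketch).

    feasible_groups: per car, a list of feasible groupings; each grouping is a
                     list of groups, each group a tuple of component ids.
    exec_windows:    component id -> (earliest, latest) execution time.
    team_counts:     number of repair teams per workshop.
    """
    # 1) Group structure vector: pick one feasible grouping per car.
    groups = [rng.choice(options) for options in feasible_groups]
    # 2) Starting time vector: one start per operation, drawn from the
    #    intersection of the execution windows of the group's components.
    starts = []
    for car_grouping in groups:
        for group in car_grouping:
            lo = max(exec_windows[c][0] for c in group)
            hi = min(exec_windows[c][1] for c in group)
            starts.append(rng.uniform(lo, hi))
    # 3) Workshop assignment vector: a workshop with k teams counts as
    #    k separate "workshops", so we draw from the total number of teams.
    teams = [rng.randrange(sum(team_counts)) for _ in starts]
    return groups, starts, teams
```

The group structure is drawn first on purpose: the starting times only make sense once the execution-window intersections of the chosen groups are known, mirroring the initialization order described in the text.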
The generation of the starting time vector should be later than the generation of the group structure vector because the starting time of each operation is determined by the execution window, which is the entire execution window of the component for a single-component operation or the execution window intersection for a group operation. A time spot is randomly selected from the execution window or execution window intersection for each operation in order to initialize the starting time vector. \n\nA workshop is considered as ``several workshops'' based on its capacity (the number of teams). In this way, the schedule for each workshop team can be obtained from the solution. For example, suppose two workshops have three and four repair teams, respectively. Group operations can then be randomly assigned to seven ``workshops'', where the first three and the last four represent the corresponding teams of the two workshops.\n\n\\subsubsection{Genotype-Phenotype Mapping}\nTo use the power of EAs to obtain a better population, we need to evaluate each chromosome and give the better ones higher probabilities to produce offspring. This is done by genotype-phenotype mapping, or decoding the chromosome. In our problem, this means converting an individual into a feasible schedule to calculate the objectives and constraints, which represent the relative superiority of a chromosome. The genotype-phenotype mapping can be easily achieved in our algorithm because the group structure, the starting time and the workshop team of the operations can be acquired directly from each individual. When converting an individual into a schedule, it is possible that the processing times of two or more operations assigned to the same workshop team overlap, since the starting time of each operation is decided in the starting time vector. 
In this situation, the principle of first-come-first-served is followed: the starting time and processing time of the earlier started operation remain the same; the starting time of the later started operation is delayed until the completion of the previous operation; the processing time of the later started operation remains the same; however, an extra waiting time is added to the later started operation as a penalty, because the vehicle waits in the workshop for maintenance.\n\n\\subsubsection{Genetic Operators}\nIn accordance with the problem and its encoding, specific crossover and mutation operators have been designed for our problem (check the details in \\cite{wang2019vehicle}). Both operators are applied separately to the three parts of the chromosome.\n\nFor the group structure vector, multi-point crossover can be used as the crossover operator, and the number of cutting points depends on the length of the vector. The same cutting points can be applied to the starting time vector when performing crossover. However, a change in the group structure vector as a consequence of the crossover may invalidate genes in the starting time vector, because the group members and execution window intersections may have changed under the new group structure. Therefore, when performing the crossover on the starting time vector, the starting times of all operations should be checked based on the new group structure, and a new starting time is produced randomly from the correct intersection in case the starting time of an operation is invalid. The multi-point crossover can be applied to the workshop assignment vector as well. \n\nThe mutation operator alters one or more gene values in a chromosome. 
Similarly, the mutation should be applied to the group structure vector first due to its impact on the starting time vector; the starting times of operations should be checked and corrected after the mutation is done on the group structure vector. Afterwards, several gene values can be altered in the starting time vector and workshop assignment vector to generate a new individual.\n\n\n\\section{Proposed Preference based Algorithm}\n\\label{sec:ap-di-moea}\nAs the number of objectives and decision variables increases, the number of non-dominated solutions tends to grow exponentially \\cite{pal2018decor}. This makes it more challenging to efficiently achieve a solution set with satisfactory convergence and diversity. At the same time, a huge number of solutions is needed to approximate the entire Pareto front. However, a big population means more computational time and resources. To overcome these difficulties, we propose an automatic preference based MOEA, which generates the preference region, or region of interest (ROI), automatically and finds non-dominated solutions in the preference region instead of on the entire Pareto front. The automatic preference based MOEA is developed based on the framework of DI-MOEA (Diversity-Indicator based Multi-Objective Evolutionary Algorithm) \\cite{wang2019diversity}. We call our new algorithm AP-DI-MOEA.\n \nDI-MOEA is an indicator-based MOEA; it has been shown to be competitive with other MOEAs on common multi-objective benchmark problems. Moreover, it is invariant to the shape of the Pareto front and can achieve evenly spread Pareto front approximations.\nDI-MOEA adopts a hybrid selection scheme:\n\\begin{itemize}\n \\item The ($\\mu$ + $\\mu$) generational selection operator is used when the parent population can be layered into multiple dominance ranks. 
The intention is to accelerate convergence until all solutions are non-dominated.\n \\item The ($\\mu$ + 1) steady state selection operator is adopted in the case that all solutions in the parent population are mutually non-dominated and the diversity is the main selection criterion to achieve a uniform distribution of the solutions on the Pareto front.\n\\end{itemize}\n\nDI-MOEA employs non-dominated sorting as the first ranking criterion and the diversity indicator, i.e., the Euclidean distance based geometric mean gap indicator, as the second, diversity-based ranking criterion to guide the search. Two variants of DI-MOEA, denoted as DI-1 and DI-2, exist, which use the crowding distance and diversity indicator, respectively, as the second criterion in the ($\\mu$ + $\\mu$) generational selection operator. To ensure the uniformity of the final solution set, the diversity indicator is used by both variants in the ($\\mu$ + 1) steady state selection operator. Analogously, two variants of AP-DI-MOEA, i.e., AP-DI-1 and AP-DI-2, are derived from the two variants of DI-MOEA.\n\n\n\nThe workings of AP-DI-MOEA are outlined in Algorithm 1. Exceedance of $Enum\\_P$, a predefined condition (in our algorithm, $Enum\\_P$ is a number of evaluations), divides the algorithm into two phases: a learning phase and a decision phase. In the learning phase, the algorithm explores the possible area of Pareto optimal solutions and finds a rough approximation of the Pareto front. In the decision phase, the algorithm identifies the preference region and finds preferred solutions. When the algorithm starts running and satisfies $Enum\\_P$ at some moment, the first preference region will be generated and $Enum\\_P$ will be updated to determine a future moment when the preference region needs to be updated. The process of updating $Enum\\_P$ continues until the end. The first $Enum\\_P$ is a boundary line. 
Before it is satisfied, AP-DI-MOEA runs exactly like DI-MOEA to approximate the whole Pareto front; after it is satisfied, the preference region is generated automatically and AP-DI-MOEA finds solutions focusing on the preference region. The subsequent values of $Enum\\_P$ define the later moments to update the preference region step by step; eventually, a precise ROI with a proper size can be achieved.\n\n\\begin{figure*}[!htbp]\n\\vspace{-3.5cm}\n\\hspace{-3cm}\n\\includegraphics[height=10in]{pg_0002.pdf}\n\\end{figure*}\n\n\\begin{figure*}[!htbp]\n\\hspace{-2.5cm}\n\\includegraphics[width=7in]{al2.pdf}\n\\end{figure*}\n\n\n\\iffalse \n\\addtocounter{algorithm}{1}\n\\begin{algorithm}[!htbp]\n\\setstretch{0.8}\n \t\\caption{Finding the knee point and defining the preference region.}\n \\label{algorithm:2}\n \t\\begin{algorithmic}[1]\n \\STATE $n \\leftarrow$ the number of objectives;\n \\STATE $P_t \\leftarrow$ current population;\n \\STATE $\\epsilon$; \/\/parameter ($>$0) for distinguishing convex\/concave shape;\n \\STATE $popsize \\leftarrow |P_t|$; \/\/population size\n \n \\STATE Declare $Q[n]$; \/\/upper quartile objective values of $P_t$ \n \\STATE Declare $L[n]$; \/\/worst objective values of $P_t$\n \\STATE Declare $knee[n]$; \/\/knee point of $P_t$\n \\STATE Declare $P\\_region[n]$; \/\/preference region of $P_t$\n \\STATE Declare $Expoints[n][n]$; \/\/extreme points\n \\STATE $foundknee \\leftarrow false$;\n \n \n \\FOR{{\\bf each} $i \\in \\{ 1, \\dots, n\\}$}\n \\STATE sort($P_t$) by the $i$th objective in ascending order;\n \\STATE $Q[i] \\leftarrow P_t$.get\\_index$(\\frac{3}{4}\\times popsize)$.get\\_obj($i$); \/\/upper quartile value of the $i$th objective \n \\STATE $L[i] \\leftarrow P_t$.get\\_index$(popsize)$.get\\_obj($i$);\/\/the largest (worst) value of the $i$th objective\n \\ENDFOR\n \\FORALL{solution $s \\in P_t$}\n \\IF{$s$.get\\_obj($i=1,...,n) > Q[i]$ }\n \\STATE remove $s$ from $P_t$;\n \\ENDIF\n \\ENDFOR\n \\STATE 
$Expoints[\\centerdot][\\centerdot] \\leftarrow$ extreme points in $P_t$;\n \\STATE $num_a\\leftarrow$ the number of points in concave region of hyperplane formed by $Expoints[\\centerdot][\\centerdot]$;\n \\STATE $num_v \\leftarrow |P_t|-num_a$; \/\/the number of points in convex region\n \\IF{$(num_v - num_a > \\epsilon)$}\n \\STATE \/\/roughly convex shape\n \\STATE remove solutions in concave region from $P_t$;\n \\ELSIF{$(num_a - num_v > \\epsilon)$}\n \\STATE \/\/roughly concave shape\n \\STATE remove solutions in convex region from $P_t$;\n \\ELSE\n \\FORALL{solution $s\\in P_t$}\n \\STATE calculate hypervolume of $s$ with reference point $L[\\centerdot]$;\n \\STATE update the largest hypervolume value ($max\\_h$);\n \\ENDFOR\n \\STATE $knee[\\centerdot] \\leftarrow$ solution with $max\\_h$;\n \\STATE $foundknee \\leftarrow true$;\n \\ENDIF\n \\IF{($foundknee == false$)}\n \\FORALL{solution $s\\in P_t$}\n \\STATE calculate distance between $s$ and hyperplane formed by $Expoints[\\centerdot][\\centerdot]$;\n \\STATE update the largest distance ($max\\_d$);\n \\ENDFOR\n \\STATE $knee[\\centerdot] \\leftarrow$ solution with $max\\_d$;\n \\ENDIF\n \\FOR{{\\bf each} $i \\in \\{ 1, \\dots, n\\}$}\n \\STATE $P\\_region[i] \\leftarrow knee[i] + (L[i]-knee[i]) \\times 85\\%$\n \\ENDFOR\n \t\\end{algorithmic}\n\\end{algorithm}\n\\fi\n\n\\begin{figure}[!htbp]\n\\centering\n\\includegraphics[width=3.3in]{knee-explain.png} \n\\caption{Finding the knee point in bi-dimensional space.} \n\\label{knee}\n\\end{figure}\n\nThe first\/new preference region is formed based on the population at the moment when the condition of $Enum\\_P$ is satisfied, in particular on the knee point of the population. Algorithm 2 gives the details of line 41 in Algorithm 1; it introduces the steps of finding the knee point of a non-dominated solution set and constructing a hypercube-shaped preference region according to the knee point. 
Figure~\\ref{knee} illustrates finding the knee point in a bi-dimensional space. First, the upper quartile objective values (line 13 in Algorithm 2) of the solution set are used as a boundary to define outliers, and solutions outside this boundary are removed (lines 16-20 in Algorithm 2). The extreme solutions (the solutions with the maximum value in one objective) are then found inside the boundary (line 21 in Algorithm 2), and a hyperplane is formed based on them. In a bi-dimensional space (Figure~\\ref{knee}), the hyperplane is simply the line connecting the two extreme solutions. From the numbers of points below and above the hyperplane (lines 22-23 in Algorithm 2), the shape of the solution set can be roughly perceived.\nWe distinguish between ``convex'' and ``concave'' regions: points in the \\textit{convex} (\\textit{concave}) \\textit{region} dominate (are dominated by) at least one point on the hyperplane spanned by the extreme points. When the numbers of points in the convex region and in the concave region are close, the shape of the current solution set is almost linear. This occurs both when the true Pareto front is linear and when the solution set has converged very well in a small area of the Pareto front. A parameter $\\epsilon$, a small number chosen depending on the size of the solution set, is used to quantify this closeness. If the shape of the current solution set is (almost) linear, the solution with the largest hypervolume value with regard to the worst objective vector (line 14 in Algorithm 2) is adopted as the knee point (lines 32-36 in Algorithm 2). If, instead, the shape of the current solution set is convex or concave, the solution in the convex or concave region, respectively, with the largest Euclidean distance to the hyperplane is chosen as the knee point (lines 39-42 in Algorithm 2). 
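To make the procedure concrete, the following is a minimal Python sketch of the knee-point selection described above for the bi-dimensional case (both objectives minimized). The function and variable names are our own illustration, not part of the actual implementation; the line references in the comments point to Algorithm 2.

```python
import numpy as np

def find_knee_2d(points, eps=2):
    """Sketch of the knee-point selection of Algorithm 2 in two
    dimensions; both objectives are assumed to be minimized."""
    P = np.asarray(points, dtype=float)
    L = P.max(axis=0)                      # worst objective values (line 14)
    Q = np.percentile(P, 75, axis=0)       # upper-quartile boundary (line 13)
    P = P[(P <= Q).all(axis=1)]            # remove outliers (lines 16-20)

    # extreme points: maximum value in each objective (line 21)
    e1, e2 = P[P[:, 0].argmax()], P[P[:, 1].argmax()]

    # signed distance to the line through the extreme points;
    # the positive side (towards the origin) is the convex region
    u = e2 - e1
    d = (u[0] * (P[:, 1] - e1[1]) - u[1] * (P[:, 0] - e1[0])) / np.hypot(*u)

    num_v, num_a = int((d > 1e-12).sum()), int((d < -1e-12).sum())
    if abs(num_v - num_a) <= eps:
        # (almost) linear front: knee = point with the largest
        # hypervolume (dominated rectangle) w.r.t. the worst point L
        hv = (L[0] - P[:, 0]) * (L[1] - P[:, 1])
        return P[hv.argmax()]
    side = d if num_v > num_a else -d      # keep the majority side only
    return P[np.where(side > 0, side, -np.inf).argmax()]
```

Under this sketch, the 85\% rule described next would then give the preference-region upper bounds as `knee + 0.85 * (L - knee)`.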
After the knee point is found, the preference region can be determined based on the knee point by the following formula:\n\\begin{align}\nP\\_region[i] = knee[i] + (L[i]-knee[i]) \\times 85\\%\n\\end{align}\n\nLet $i$ denote the $i$th objective; as in Algorithm 2, $L[i]$ is the worst value of the $i$th objective in the population, $knee[i]$ is the $i$th objective value of the knee point, and $P\\_region[i]$ is the upper bound of the $i$th objective. W.l.o.g. we assume the objectives are to be minimized and the lower bound of the preference region is the origin. According to the formula, the first preference region is relatively large (roughly 85\\% of the entire Pareto front). As the number of iterations increases, the preference region is updated and becomes smaller and smaller, because every preference region picks 85\\% of the current Pareto front. Eventually, we want the preference region to reach a proper range, say, 15\\% of the initial Pareto front. Narrowing down the preference region step by step benefits the accuracy of the preference region.\n\nIn the interest of clarity, Algorithm 1 only shows the workings of AP-DI-1; the workings of AP-DI-2 can be obtained by replacing the crowding distance with the diversity indicator contribution.\nIn the ($\\mu$ + $\\mu$) generational selection operator (lines 14-36 in Algorithm 1), when there is no preference region, the second ranking criterion (the crowding distance for AP-DI-1; the diversity indicator for AP-DI-2) is calculated for all solutions on the last front, and the population is truncated based on non-dominated sorting and the second ranking criterion (lines 28-29 in Algorithm 1). 
If a preference region already exists, both the second ranking criterion and the Euclidean distance to the knee point are calculated for all solutions on the last front, and the population is truncated based first on non-dominated sorting, then on the second ranking criterion, and lastly on the Euclidean distance to the knee point (lines 31-32 in Algorithm 1). In the ($\\mu$ + 1) steady state selection operator (lines 38-59 in Algorithm 1), the value of $Enum\\_P$ is first compared with the current number of evaluations to determine whether a (new) preference region should be generated. When it is time to do so, the preference region is generated through Algorithm 2 (line 41 in Algorithm 1), and at the same time the value of $Enum\\_P$ is updated to the next moment at which the preference region is to be updated (line 42 in Algorithm 1). There are different strategies to assign the values of $Enum\\_P$. In our algorithm, we divide the whole computing budget into two parts: the first half is used to find an initial approximation of the entire Pareto front, and the second half is used to update the preference region and find solutions in the preference region. Assume the total computing budget is $Enum\\_T$ (the number of evaluations); then the first value of $Enum\\_P$ is $\\frac{1}{2}\\times Enum\\_T$. Since we expect a final preference region with a size of around 15\\% of the initial entire Pareto front and each new preference region takes 85\\% of the current Pareto front, according to the formula $0.85^{12} \\approx 0.14$, the value of $Enum\\_P$ is updated by the following formula:\n\\begin{align}\nEnum\\_P = Enum\\_P + (Enum\\_T\/2)\/12\n\\end{align}\n\nThe second half of the budget is thus divided into $12$ partial budgets, and a new preference region is constituted after each partial budget. In the end, the final preference region is achieved and solutions focusing on this preference region are obtained. 
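The budget split above can be sketched as follows; this is a hypothetical illustration (the names `preference_schedule`, `shrink` and `target` are ours), showing how the update moments of $Enum\_P$ and the final region size follow from the 85\%/15\% rule:

```python
import math

def preference_schedule(enum_t, shrink=0.85, target=0.15):
    """Evaluation counts at which the preference region is (re)generated:
    the first half of the budget approximates the whole Pareto front; the
    second half is split into k equal parts, one per region update."""
    # smallest k with shrink**k <= target (0.85**12 ~= 0.14 < 0.15)
    k = math.ceil(math.log(target) / math.log(shrink))
    first = enum_t // 2                    # first region at half the budget
    step = first // k                      # evaluations between updates
    return [first + i * step for i in range(k)], shrink ** k
```

For a total budget of $1200000$ evaluations (as used on the VFMSO instances below) this yields the first preference region at $600000$ evaluations and an update every $50000$ evaluations, ending with a region of about 14\% of the initial front.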
In the remainder of the ($\\mu$ + 1) steady state selection operator, likewise, when there is a preference region, three ranking criteria (1. non-dominated sorting; 2. the diversity indicator; 3. the Euclidean distance to the knee point) work together to achieve a well-converged and well-distributed set of Pareto optimal solutions in the preference region.\n\n\\section{Experimental Results}\n\\label{sec:experiments}\n\\subsection{Experimental Design}\nIn this section, simulations are conducted to demonstrate the performance of the proposed algorithms on both benchmark problems and our real-world application problems. All experiments are implemented based on the MOEA Framework (\\url{http:\/\/www.moeaframework.org\/}), which is a Java-based framework for multi-objective optimization.\n\nThe two variants of AP-DI-MOEA, AP-DI-1 and AP-DI-2, are compared with DI-MOEA (DI-1 and DI-2) and NSGA-III \\cite{deb2014evolutionary}. We compare our algorithm with NSGA-III because NSGA-III is a representative state-of-the-art evolutionary multi-objective algorithm and is very powerful in handling problems with non-linear characteristics. For bi-objective benchmark problems, the algorithms are tested on ZDT1 and ZDT2 with 30 variables. For three-objective benchmark problems, DTLZ1 with 7 variables and DTLZ2 with 12 variables are tested. For the real-world application problem of VFMSO, experiments have been conducted on two instances of different sizes. The configurations of the two instances, such as the predicted RUL probability distributions, the processing time and maintenance cost of each component, and the set-up time and cost of each car, are made available at \\url{http:\/\/moda.liacs.nl}. On every problem, we run each algorithm $30$ times with different seeds, and the same $30$ seeds are used for all algorithms. 
All experiments are performed with a population size of $100$. Bi-objective problems are run with a budget of $22000$ (objective function) evaluations, the three-objective DTLZ problems with a budget of $120000$ evaluations, and the VFMSO problems with a budget of $1200000$ evaluations. This setting is chosen to be realistic in the light of the scheduling applications that we ultimately want to solve.\n\n\n\n\n\n\n\\subsection{Experiments on bi-objective problems}\n\nBi-objective problems are optimized with a total budget of $22000$ evaluations; when the number of evaluations reaches $10000$, the first preference region is generated, and after every further $1200$ evaluations, the preference region is updated. Figure~\\ref{fig:ZDT1} shows the Pareto front approximations from a typical run on ZDT1 (left column) and ZDT2 (right column). The graphs on the upper row are obtained from DI-1 and AP-DI-1, while the graphs on the lower row are from DI-2 and AP-DI-2. In each graph, the entire Pareto front approximation from DI-MOEA and the preferred solutions from AP-DI-MOEA (or \\textit{AP solutions}) are presented; in addition, the preference region of AP-DI-MOEA is shown as the gray area.\n\n\\begin{figure}[htbp]\n\\hspace{-0.58cm}\n\\subfigure[ZDT1]{\n \\begin{minipage}[t]{0.53\\linewidth}\n \\centering\n \\includegraphics[width=2in]{zdt1-di1-tdi1-w.png}\\\\\n \\vspace{0.02cm}\n \\includegraphics[width=2in]{zdt1-di2-tdi2-w.png}\\\\\n \\vspace{0.02cm}\n \n \\end{minipage}%\n}%\n\\subfigure[ZDT2]{\n \\begin{minipage}[t]{0.53\\linewidth}\n \\centering\n \\includegraphics[width=2in]{zdt2-di1-tdi1-w.png}\\\\\n \\vspace{0.02cm}\n \\includegraphics[width=2in]{zdt2-di2-tdi2-w.png}\\\\\n \\vspace{0.02cm}\n \n \\end{minipage}%\n}%\n\n\\caption{Pareto front approximation on ZDT1 and ZDT2.}\n\\label{fig:ZDT1}\n\\end{figure}\n\nBesides the visualization of the Pareto fronts, we also compute the knee point of the entire final Pareto front 
approximation from DI-MOEA via the strategy described in Algorithm 2. For each run of DI-MOEA and AP-DI-MOEA with the same seed, the following two issues have been checked: \n\\begin{itemize}\n\\item Whether the knee point from DI-MOEA lies in the preference region achieved by its derived AP-DI-MOEA;\n\\item Whether the knee point from DI-MOEA is dominated by or dominates AP solutions, or whether it is mutually non-dominated with all AP solutions.\n\\end{itemize}\n\nTable~\\ref{table-kneezdt1} shows the results of 30 runs. For the ZDT1 problem, all 30 knee points from DI-1 and DI-2 are in the preference regions from AP-DI-1 and AP-DI-2, respectively; among these knee points, 10 from DI-1 and 7 from DI-2 are dominated by AP solutions. For the ZDT2 problem, most knee points are not in the corresponding preference regions, but of those in the preference regions, almost all are dominated by AP solutions. Please note that when a knee point from DI-MOEA is outside the preference region from AP-DI-MOEA, it cannot dominate any AP solutions, because all AP solutions are in the preference region and only solutions to the left of the gray area can dominate AP solutions. 
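The two checks above can be sketched in a few lines of Python (minimization assumed; the function names are illustrative only, and the preference region is given by its upper-bound vector, the lower bound being the origin as in the paper):

```python
def dominates(a, b):
    """True iff objective vector a Pareto-dominates b (minimization)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def classify_knee(knee, ap_solutions, p_region):
    """Locate a knee point relative to the preference region and report
    its dominance relation to the AP solution set."""
    where = "in" if all(k <= u for k, u in zip(knee, p_region)) else "outside"
    if any(dominates(s, knee) for s in ap_solutions):
        return where, "dominated"
    if any(dominates(knee, s) for s in ap_solutions):
        return where, "dominating"
    return where, "incomparable"
```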
\n\n\n\\begin{table}[!htbp]\n\\centering\n\\caption{Space and dominance relation of knee point from DI-MOEA and AP solutions on ZDT problems.}\n\\begin{tabular}{|c|c|c|c|c|c|}\n\\hline\n\\multicolumn{2}{|c|}{Problem} & \\multicolumn{2}{|c|}{ZDT1} & \\multicolumn{2}{|c|}{ZDT2}\\\\ \n\\cline{1-6}\n\\multicolumn{2}{|c|}{\\multirow{2}*{Algorithm}}& DI-1\/ & DI-2\/ & DI-1\/ & DI-2\/ \\\\\n\\multicolumn{2}{|c|}{ }& AP-DI-1 & AP-DI-2 & AP-DI-1 & AP-DI-2 \\\\\n\\hline\nIn & Incomparable & 20 & 23 & 1 & 1\\\\\n\\cline{2-6}\npreference & Dominated & 10 & 7 & 9 & 9 \\\\\n \\cline{2-6}\nregion & Dominating & 0 & 0 & 0 & 0 \\\\\n\\hline\nOutside & Incomparable & 0 & 0 & 20 & 20 \\\\\n\\cline{2-6}\np-region & Dominated & 0 & 0 & 0 & 0 \\\\\n\\hline\n\\end{tabular}\n\\label{table-kneezdt1}\n\\end{table} \n\nWe also perform the same comparison between AP-DI-MOEA and NSGA-III; the results are shown in Table~\\ref{table-kneezdt1-nsga3}. For the ZDT1 problem, all knee points from NSGA-III are in the preference regions from AP-DI-MOEA, and some of them dominate AP solutions. For the ZDT2 problem, most knee points from NSGA-III are not in the preference regions, and these knee points are incomparable with AP solutions. For the knee points in the preference regions, all three dominance relations with AP solutions occur. 
For both problems, when the knee point from NSGA-III dominates AP solutions, it dominates only one AP solution.\n\n\n\n\\begin{table}[!htbp]\n\\centering\n\\caption{Space and dominance relation of knee point from NSGA-III and AP solutions on ZDT problems.}\n\\begin{tabular}{|c|c|c|c|c|c|}\n\\hline\n\\multicolumn{2}{|c|}{Problem} & \\multicolumn{2}{|c|}{ZDT1} & \\multicolumn{2}{|c|}{ZDT2}\\\\ \n\\cline{1-6}\n\\multicolumn{2}{|c|}{\\multirow{2}*{Algorithm}}& NSGA-III\/ & NSGA-III\/ & NSGA-III\/ & NSGA-III\/ \\\\\n\\multicolumn{2}{|c|}{ }& AP-DI-1 & AP-DI-2 & AP-DI-1 & AP-DI-2 \\\\\n\\hline\nIn & Incomparable & 14 & 19 & 3 & 1\\\\\n\\cline{2-6}\npreference & Dominated & 0 & 0 & 2 & 3 \\\\\n \\cline{2-6}\nregion & Dominating & 16 & 11 & 4 & 6 \\\\\n\\hline\nOutside & Incomparable & 0 & 0 & 21 & 20 \\\\\n\\cline{2-6}\np-region & Dominated & 0 & 0 & 0 & 0 \\\\\n\\hline\n\\end{tabular}\n\\label{table-kneezdt1-nsga3}\n\\end{table} \n\nInstead of spreading the population across the entire Pareto front, we only focus on the preference region. To verify that our algorithm can guide the search towards the preference region and that the achieved solution set is distributed across the preference region, we compare the performance of AP-DI-MOEA, DI-MOEA and NSGA-III in the preference region. For each Pareto front approximation from DI-MOEA and NSGA-III, the solutions in the corresponding preference region from AP-DI-MOEA are picked, and we compare these solutions with the AP solutions through the hypervolume indicator. The point formed by the largest objective values over all solutions in the preference region is adopted as the reference point when calculating the hypervolume indicator. All hypervolume values of the new solution sets from DI-MOEA and NSGA-III in the preference region are worse than those of the solution sets from AP-DI-MOEA, which confirms that the mechanism indeed works in practice. 
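For the bi-objective case, the hypervolume comparison described above can be sketched as follows (an illustrative implementation, not the one used in the experiments):

```python
def hypervolume_2d(points, ref):
    """Area dominated by a set of mutually non-dominated 2-D objective
    vectors w.r.t. a reference point; both objectives are minimized."""
    pts = sorted(p for p in points if p[0] < ref[0] and p[1] < ref[1])
    area, prev_y = 0.0, ref[1]
    for x, y in pts:                 # ascending x implies descending y
        area += (ref[0] - x) * (prev_y - y)
        prev_y = y
    return area
```

As in the paper, the reference point would be formed by the largest objective values over all solutions in the preference region; the better-converged set then yields the larger area.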
Figure~\\ref{box:ZDT} shows box plots of the distribution of hypervolume indicators over 30 runs.\n\n\n\n\\begin{figure}[htbp]\n\\hspace{-0.58cm}\n\\subfigure[ZDT1]{\n \\begin{minipage}[t]{0.53\\linewidth}\n \\centering\n \\includegraphics[width=2in]{zdt1-di1-box1.png}\\\\\n \\vspace{0.02cm}\n \\includegraphics[width=2in]{zdt1-di2-box2.png}\\\\\n \\vspace{0.02cm}\n \n \\end{minipage}%\n}%\n\\subfigure[ZDT2]{\n \\begin{minipage}[t]{0.53\\linewidth}\n \\centering\n \\includegraphics[width=2in]{zdt2-di1-box1.png}\\\\\n \\vspace{0.02cm}\n \\includegraphics[width=2in]{zdt2-di2-box2.png}\\\\\n \\vspace{0.02cm}\n \n \\end{minipage}%\n}%\n\\caption{Boxplots comparing the hypervolume values on ZDT1 and ZDT2.}\n\\label{box:ZDT}\n\\end{figure}\n\n\n\\subsection{Experiments on three-objective problems}\nDTLZ1 and DTLZ2 are chosen as three-objective benchmark problems to investigate our algorithms. They are run with a total budget of $120000$ fitness evaluations; when the number of evaluations reaches $60000$, the first preference region is formed, and after every further $5000$ evaluations, the preference region is updated. Figure~\\ref{fig:dtlz} shows the Pareto front approximations from a typical run on DTLZ1 (left column) and DTLZ2 (right column). The upper graphs are obtained from DI-1 and AP-DI-1, while the lower graphs are from DI-2 and AP-DI-2. In each graph, the Pareto front approximations from DI-MOEA and the corresponding AP-DI-MOEA are given. 
Since the target region is actually an axis-aligned box, the obtained knee region (i.e., the intersection of the axis-aligned box with the Pareto front) has an inverted triangle shape for these two benchmark problems.\n\n\\begin{figure*}[htbp]\n\\hspace{-3.5cm}\n\\subfigure[DTLZ1 3 objective problem]{\n \\begin{minipage}[t]{0.5\\linewidth}\n \\centering\n \\includegraphics[height=2.5in,width=4.4in]{dtlz1-di1-tdi1-w.png}\\\\\n \\vspace{0.45cm}\n \\includegraphics[height=2.5in,width=4.4in]{dtlz1-di2-tdi2-w.png}\\\\\n \n \n \\end{minipage}%\n}%\n\\hspace{2cm}\n\\subfigure[DTLZ2 3 objective problem]{\n \\begin{minipage}[t]{0.5\\linewidth}\n \\centering\n \\includegraphics[height=2.7in,width=4.4in]{dtlz2-di1-tdi1-w.png}\\\\\n \n \\includegraphics[height=2.7in,width=4.4in]{dtlz2-di2-tdi2-w.png}\\\\\n \n \n \\end{minipage}%\n}%\n\\caption{Pareto front approximation on DTLZ1 and DTLZ2.}\n\\label{fig:dtlz}\n\\end{figure*}\n\n\n\nTable~\\ref{table-kneedtlz} shows the space and dominance relation of the knee point from DI-MOEA and the solution set from AP-DI-MOEA over 30 runs. For the DTLZ1 problem, most knee points from DI-MOEA are in their respective preference regions, and all knee points are mutually non-dominated with AP solutions. For the DTLZ2 problem, we observe that more knee points are not in the corresponding preference regions. This is because too few solutions from DI-MOEA lie in the preference region: for the DTLZ1 problem, on average six solutions from DI-MOEA per run are in the corresponding preference region, while for the DTLZ2 problem, on average fewer than two solutions are. 
Therefore, we can see that, on the one hand, it is normal that many knee points from the entire Pareto fronts are not in their corresponding preference regions; on the other hand, our aim of achieving a more fine-grained resolution in the preference region has been met, because only a few solutions can be obtained in the preference region if we spread the population across the entire Pareto front. At the same time, one knee point from DI-1 on DTLZ2 is dominated by solutions from the corresponding AP-DI-1, which indicates that AP-DI-MOEA can converge better than DI-MOEA because it focuses on the preference region.\n\n\\begin{table}[!htbp]\n\\centering\n\\caption{Space and dominance relation of knee point from DI-MOEA and AP solutions on DTLZ problems.}\n\\begin{tabular}{|c|c|c|c|c|c|}\n\\hline\n\\multicolumn{2}{|c|}{Problem} & \\multicolumn{2}{|c|}{DTLZ1} & \\multicolumn{2}{|c|}{DTLZ2}\\\\ \n\\cline{1-6}\n\\multicolumn{2}{|c|}{\\multirow{2}*{Algorithm}}& DI-1\/ & DI-2\/ & DI-1\/ & DI-2\/ \\\\\n\\multicolumn{2}{|c|}{ }& AP-DI-1 & AP-DI-2 & AP-DI-1 & AP-DI-2 \\\\\n\\hline\nIn & Incomparable & 29 & 27 & 10 & 13\\\\\n\\cline{2-6}\npreference & Dominated & 0 & 0 & 1 & 0 \\\\\n \\cline{2-6}\nregion & Dominating & 0 & 0 & 0 & 0 \\\\\n\\hline\nOutside & Incomparable & 1 & 3 & 19 & 17 \\\\\n\\cline{2-6}\np-region & Dominated & 0 & 0 & 0 & 0 \\\\\n\\hline\n\\end{tabular}\n\\label{table-kneedtlz}\n\\end{table} \n\n\nAP-DI-1 and AP-DI-2 have also been compared with NSGA-III in the same way. Table~\\ref{table-kneedtlz_nsga3} shows the comparison result. For DTLZ1, the average number of solutions from NSGA-III in the corresponding preference regions from AP-DI-MOEA is six. Still, almost all knee solutions from NSGA-III are in the preference region. 
For DTLZ2, the average number of solutions from NSGA-III in the corresponding preference region from AP-DI-MOEA is less than one, yet in more than half of the 30 runs, the knee points from NSGA-III are still in the preference region. To some extent, it can be concluded that the preference regions from AP-DI-MOEA are accurate. It can also be observed that AP-DI-1 behaves better than AP-DI-2 on DTLZ2, because two knee points from NSGA-III dominate solutions from AP-DI-2.\n\n\n\\begin{table}[!htbp]\n\\centering\n\\caption{Space and dominance relation of knee point from NSGA-III and AP solutions on DTLZ problems.}\n\\begin{tabular}{|c|c|c|c|c|c|}\n\\hline\n\\multicolumn{2}{|c|}{Problem} & \\multicolumn{2}{|c|}{DTLZ1} & \\multicolumn{2}{|c|}{DTLZ2}\\\\ \n\\cline{1-6}\n\\multicolumn{2}{|c|}{\\multirow{2}*{Algorithm}}& NSGA-III\/ & NSGA-III\/ & NSGA-III\/ & NSGA-III\/ \\\\\n\\multicolumn{2}{|c|}{ }& AP-DI-1 & AP-DI-2 & AP-DI-1 &AP-DI-2 \\\\\n\\hline\nIn & Incomparable & 30 & 29 & 14 & 17\\\\\n\\cline{2-6}\npreference & Dominated & 0 & 0 & 1 & 1 \\\\\n \\cline{2-6}\nregion & Dominating & 0 & 0 & 0 & 2 \\\\\n\\hline\nOutside & Incomparable & 0 & 1 & 15 & 10 \\\\\n\\cline{2-6}\np-region & Dominated & 0 & 0 & 0 & 0 \\\\\n\\hline\n\\end{tabular}\n\\label{table-kneedtlz_nsga3}\n\\end{table} \n\nSimilarly, we pick the solutions from DI-MOEA and NSGA-III which are in the corresponding preference region of AP-DI-MOEA, and compare the hypervolume indicator values of these solutions and the AP solutions. All hypervolume values of solutions from AP-DI-MOEA are better than those of solutions from DI-MOEA and NSGA-III. 
The left column of Figure~\\ref{box:dtlz} shows box plots of the distribution of hypervolume values over 30 runs on DTLZ1, and the right column shows the hypervolume comparison on DTLZ2.\n\n\n\\begin{figure}[htbp]\n\\hspace{-0.58cm}\n\\subfigure[DTLZ1]{\n \\begin{minipage}[t]{0.53\\linewidth}\n \\centering\n \\includegraphics[width=2in]{dtzl1-di1-box.png}\\\\\n \\vspace{0.02cm}\n \\includegraphics[width=2in]{dtlz1-di2-box.png}\\\\\n \\vspace{0.02cm}\n \n \\end{minipage}%\n}%\n\\subfigure[DTLZ2]{\n \\begin{minipage}[t]{0.53\\linewidth}\n \\centering\n \\includegraphics[width=2in]{dtlz2-di1-box.png}\\\\\n \\vspace{0.02cm}\n \\includegraphics[width=2in]{dtlz2-di2-box.png}\\\\\n \\vspace{0.02cm}\n \n \\end{minipage}%\n}%\n\n\\caption{Boxplots comparing the hypervolume values on DTLZ1 and DTLZ2.}\n\\label{box:dtlz}\n\\end{figure}\n\nIn our experiments, we use half of the total budget to find an initial Pareto front because this turned out to be a good compromise: half of the budget for the initial Pareto front and the other half for the solutions focusing on the preference region. We also ran experiments using 25\\% and 75\\% of the total budget for the initial Pareto front. Figure~\\ref{fig:dtlz-budget} presents the entire Pareto front from DI-MOEA and the Pareto front from AP-DI-MOEA with different budgets for the initial Pareto front. The left two images are on DTLZ1 and the right two images are on DTLZ2; the upper two images are from DI-1 and AP-DI-1, the lower two from DI-2 and AP-DI-2. In the legend labels, 50\\%, 25\\% and 75\\% indicate the budgets which are utilized to find the initial entire Pareto front. It can be observed that the preference region from AP-DI-MOEA with 50\\% of the budget is located in a better position than with the 25\\% and 75\\% budgets, and its position is more stable. 
Therefore, in our algorithm, 50\\% of the budget is used before the generation of the preference region.\n\n\\begin{figure*}[htbp]\n\\hspace{-3.5cm}\n\\subfigure[DTLZ1 3 objective problem]{\n \\begin{minipage}[t]{0.50\\linewidth}\n \\centering\n \\includegraphics[height=2.6in,width=4.28in]{dtlz1-di1-tdi1-3combine-1.png}\\\\\n \n \\includegraphics[height=2.6in,width=4.28in]{dtlz1-di2-tdi2-3combine-3.png}\\\\\n \n \n \\end{minipage}%\n}%\n\\hspace{1.5cm}\n\\subfigure[DTLZ2 3 objective problem]{\n \\begin{minipage}[t]{0.50\\linewidth}\n \\centering\n \\includegraphics[height=2.6in,width=4.2in]{dtlz2-di1-tdi1-3combine-16.png}\\\\\n \n \\includegraphics[height=2.6in,width=4.2in]{dtlz2-di2-tdi2-3combine-16.png}\\\\\n \n \n \\end{minipage}%\n}%\n\\caption{Pareto front approximation by different budgets generating initial Pareto front.}\n\\label{fig:dtlz-budget}\n\\end{figure*}\n\n\\subsection{Experiments on Vehicle Fleet Maintenance Scheduling Optimization}\nA budget of $1200000$ evaluations has been used on the real-world application problems, $600000$ of which are for the initial Pareto front. After that, the preference region is updated after every $50000$ evaluations.\nThe VFMSO problem has been tested with different sizes. Figure~\\ref{fig:20cars} shows Pareto front approximations of a problem with $20$ cars and $3$ workshops (V1), where each car contains $13$ components: one engine, four springs, four brakes and four tires \\cite{van2019modeling}. It can be observed that AP-DI-1 and AP-DI-2 can zoom in on the entire Pareto front and find solutions in the preference region; at the same time, both AP-DI-1 and AP-DI-2 converge better than their corresponding DI-1 and DI-2. A similar conclusion can be drawn from the Pareto front approximations of the problem with $30$ cars and $5$ workshops (V2) in Figure~\\ref{fig:30cars}.\n\n\nIn Figure~\\ref{fig:2030cars}, we put the Pareto front approximations from DI-MOEA, AP-DI-MOEA and NSGA-III on V1 (left) and V2 (right) together. 
The behaviours of DI-1, DI-2 and NSGA-III are similar on V1, as are the behaviours of AP-DI-1 and AP-DI-2 on this problem. On the V2 problem, however, DI-2 and AP-DI-2 converge better than DI-1 and AP-DI-1, and the behaviour of NSGA-III is between that of DI-1 and DI-2.\n\n\n\\begin{figure*}[htbp]\n\\hspace{-3.8cm}\n\\subfigure[DI-1 \\& AP-DI-1]{\n \\begin{minipage}[t]{0.5\\linewidth}\n \\centering\n \\includegraphics[height=2.7in,width=4in]{20car-di1-tdi1-legend.png}\\\\\n \n \\end{minipage}%\n}%\n\\hspace{3.2cm}\n\\subfigure[DI-2 \\& AP-DI-2]{\n \\begin{minipage}[t]{0.5\\linewidth}\n \\centering\n \\includegraphics[height=2.7in,width=4in]{20car-di2-tdi2-legend.png}\\\\\n \n \\end{minipage}%\n}%\n\\caption{Pareto front approximation on VFMSO problem with 20 cars and 3 workshops.}\n\\label{fig:20cars}\n\\end{figure*}\n\n\\begin{figure*}[htbp]\n\\hspace{-3.8cm}\n\\subfigure[DI-1 \\& AP-DI-1]{\n \\begin{minipage}[t]{0.5\\linewidth}\n \\centering\n \\includegraphics[height=2.7in,width=4in]{30car-di1-tdi1-legend.png}\\\\\n \n \\end{minipage}%\n}%\n\\hspace{3.2cm}\n\\subfigure[DI-2 \\& AP-DI-2]{\n \\begin{minipage}[t]{0.5\\linewidth}\n \\centering\n \\includegraphics[height=2.7in,width=4in]{30car-di2-tdi2-legend.png}\\\\\n \n \\end{minipage}%\n}%\n\\caption{Pareto front approximation on VFMSO problem with 30 cars and 5 workshops.}\n\\label{fig:30cars}\n\\end{figure*}\n\n\n\n\\begin{figure*}[htbp]\n\\hspace{-3.5cm}\n\\subfigure[V1]{\n \\begin{minipage}[t]{0.5\\linewidth}\n \\centering\n \\includegraphics[height=2.7in,width=4in]{20car-di1-tdi1-di2-tdi2-ns3.png}\\\\\n \n \\end{minipage}%\n}%\n\\hspace{3cm}\n\\subfigure[V2]{\n \\begin{minipage}[t]{0.5\\linewidth}\n \\centering\n \\includegraphics[height=2.7in,width=4in]{30car-di1-tdi1-di2-tdi2-ns3.png}\\\\\n \n \\end{minipage}%\n}%\n\n\\caption{Pareto front approximation on VFMSO problem by DI-MOEA, AP-DI-MOEA and NSGA-III.}\n\\label{fig:2030cars}\n\\end{figure*}\n\n\nTable~\\ref{table-v1} gives the space and dominance relation of 
knee points from DI-MOEA and solutions from AP-DI-MOEA on these two VFMSO problems. For both problems, only a few knee points from DI-MOEA are in the preference regions of AP-DI-MOEA; the main reason is that the Pareto front of AP-DI-MOEA converges better than that of DI-MOEA, and in some cases the Pareto front of DI-MOEA cannot even reach the corresponding preference region. More importantly, it can be observed that most knee points from DI-MOEA, whether in the preference region or outside of it, are dominated by the solutions from AP-DI-MOEA. This phenomenon is even more pronounced for the larger application problem run with the same budget as the smaller one: for V2, 90\\% of the knee points from DI-MOEA are dominated by the solutions from AP-DI-MOEA.\n\n\\begin{table}[!htbp]\n\\centering\n\\caption{Space and dominance relation of knee point from DI-MOEA and AP solutions on V1 and V2.}\n\\begin{tabular}{|c|c|c|c|c|c|}\n\\hline\n\\multicolumn{2}{|c|}{Problem} & \\multicolumn{2}{|c|}{V1} & \\multicolumn{2}{|c|}{V2}\\\\ \n\\cline{1-6}\n\\multicolumn{2}{|c|}{\\multirow{2}*{Algorithm}}& DI-1\/ & DI-2\/ & DI-1\/ & DI-2\/ \\\\\n\\multicolumn{2}{|c|}{ }& AP-DI-1 & AP-DI-2 & AP-DI-1 & AP-DI-2 \\\\\n\\hline\nIn & Incomparable & 0 & 0 & 0 & 0\\\\\n\\cline{2-6}\npreference & Dominated & 9 & 7 & 9 & 6 \\\\\n \\cline{2-6}\nregion & Dominating & 0 & 0 & 0 & 0 \\\\\n\\hline\nOutside & Incomparable & 4 & 9 & 3 & 3 \\\\\n\\cline{2-6}\np-region & Dominated & 17 & 14 & 18 & 21 \\\\\n\\hline\n\\end{tabular}\n\\label{table-v1}\n\\end{table} \n\n\nTable~\\ref{table-v1-nsga3} gives the space and dominance relation of knee points from NSGA-III and AP solutions. For both problems, again, most knee points from NSGA-III are not in the preference regions of AP-DI-MOEA. Some knee points from NSGA-III are dominated by AP solutions, and most of them are incomparable with AP solutions. 
\n\n\n\\begin{table}[!htbp]\n\\centering\n\\caption{Space and dominance relation of knee point from NSGA-III and AP solutions on V1 and V2.}\n\\begin{tabular}{|c|c|c|c|c|c|}\n\\hline\n\\multicolumn{2}{|c|}{Problem} & \\multicolumn{2}{|c|}{V1} & \\multicolumn{2}{|c|}{V2}\\\\ \n\\cline{1-6}\n\\multicolumn{2}{|c|}{\\multirow{2}*{Algorithm}}& NSGA-III\/ & NSGA-III\/ & NSGA-III\/ & NSGA-III\/ \\\\\n\\multicolumn{2}{|c|}{ }& AP-DI-1 & AP-DI-2 & AP-DI-1 & AP-DI-2 \\\\\n\\hline\nIn & Incomparable & 0 & 0 & 0 & 1\\\\\n\\cline{2-6}\npreference & Dominated & 0 & 1 & 3& 2 \\\\\n \\cline{2-6}\nregion & Dominating & 0 & 0 & 1 & 1 \\\\\n\\hline\nOutside & Incomparable & 23 & 24 & 21 & 18 \\\\\n\\cline{2-6}\np-region & Dominated & 7 & 5 & 5 & 8 \\\\\n\\hline\n\\end{tabular}\n\\label{table-v1-nsga3}\n\\end{table} \n\n\\iffalse \n The right image of Figure~\\ref{fig:30cars-2} presents the entire Pareto front approximations of V2 from four different MOEAs: DI-1, DI-2, RVEA and NSGA-III. It can be seen that DI-MOEA (both DI-1 and DI-2) and NSGA-III converge to the similar area in the end, while, RVEA reaches another area of the objective space. Table~\\ref{table_hy} provides the average hypervolume value of the four Pareto fronts from 30 runs and the reference point for each run is formed by the largest objective value from all solutions. It can be seen that DI-2 behaves the best and RVEA the worst.\n\n\n\\begin{table}[htbp]\n\\caption{Hypervolume values}\n\\label{table_hy}\n\\begin{center}\n\\begin{tabular}{l|c}\n\\hline\nDI-1 & 0.0525\\\\\n\\hline\nDI-2 & 0.0576\\\\\n\\hline\nRVEA & 0.0202\\\\\n\\hline\nNSGAIII & 0.0534\\\\\n\\hline\n\\end{tabular}\n\\end{center}\n\\end{table}\n\\fi\n\n\n\n\n\n\\section{CONCLUSIONS}\n\\label{sec:conclusion}\nIn this paper, a preference based multi-objective evolutionary algorithm, AP-DI-MOEA, is proposed. In the absence of explicitly provided preferences, the knee region is usually treated as the region of interest or preference region. 
Given this, AP-DI-MOEA generates the knee region automatically and finds solutions with a more fine-grained resolution in the knee region. This has been demonstrated on the bi-objective problems ZDT1 and ZDT2, and the three-objective problems DTLZ1 and DTLZ2. In the benchmark, the new approach was also shown to perform better than NSGA-III, which was included as a state-of-the-art reference algorithm.\n\nThe research for the preference based algorithm was originally motivated by a real-world optimization problem, namely Vehicle Fleet Maintenance Scheduling Optimization (VFMSO), which is described in this paper in a new formulation as a three-objective discrete optimization problem. A customized set of operators (initialization, recombination, and mutation) is proposed for a multi-objective evolutionary algorithm with a selection strategy based on DI-MOEA and, respectively, AP-DI-MOEA. The experimental results of AP-DI-MOEA on two real-world application problem instances of different scales show that the newly proposed algorithm can generate preference regions automatically and that it (in both cases) finds clearly better and more concentrated solution sets in the preference region than DI-MOEA. For completeness, it was also tested against NSGA-III, and a better approximation in the preference region was observed for AP-DI-MOEA.\n\nSince the real-world VFMSO problem is our core concern and its Pareto front is convex, we did not consider problems with an irregular shape.\nHow to adapt the algorithm to problems with more irregular shapes is an interesting open question. Besides, the proposed approach requires a definition of knee points. Future work will provide a more detailed comparison of different variants of methods to generate knee points, as briefly introduced in Section \\ref{sec:literature}. 
In the application of maintenance scheduling, it will also be important to integrate robustness and uncertainty into the problem definition. It is desirable to generate schedules that are robust within a reasonable range of disruptions and uncertainties, such as machine breakdowns and processing time variability.\n\n\n\n\\section*{Acknowledgment}\nThis work is part of the research programme Smart Industry SI2016 with project name CIMPLO and project number 15465, which is (partly) financed by the Netherlands Organisation for Scientific Research (NWO).\n\n\n\n\n\\section{Introduction}\n\\label{sec:introduction}\nThe vehicle fleet maintenance scheduling optimization (VFMSO) problem was initially proposed in \\cite{wang2019vehicle}, motivated by the increasing demand from companies, corporations, and organizations of all sorts that rely on vehicle fleets to deliver products and services and need to maintain their vehicles for safety reasons. In this problem, a vehicle fleet, such as a taxi fleet or a bus fleet, can be maintained in multiple separate workshops according to a maintenance schedule. Each workshop has its own capacity and abilities: on the one hand, each workshop has its own teams, and each team can work on only one car at a time; on the other hand, each workshop can only maintain specific component(s) due to restrictions in the equipment or the skill level of the staff. The maintenance schedule is optimized for each component based on its remaining useful lifetime (RUL), which has been predicted through prognostic approaches or models \\cite{elattar2016prognostics}. Furthermore, the cost and time needed by the different workshops are considered, because maintaining the same component can incur different costs and workloads depending on the workshop in which the operation is performed. 
The VFMSO problem is important because solving it not only ensures the safety of the vehicles in use, but also leads to lower maintenance costs and longer vehicle lifetimes.\n\nTo enhance the approach in \\cite{wang2019vehicle}, specifically to handle the uncertainty in the problem and to apply it to new application scenarios, we improve it in the following two respects:\n\n\\begin{itemize}\n\\item[1.] Using the predicted RUL of each component as its due date involves considerable uncertainty: no matter how accurate the predictive model is, the component may still break earlier or later than the predicted date. Therefore, instead of only the RUL, we use the RUL probability distribution as the foundation for assigning maintenance times in the scheduling optimization.\n\n\\item[2.] The VFMSO problem usually leads to a large and complex solution space; however, finding the most preferred solution is the ultimate goal. To this end, AP-DI-MOEA (Automatic Preference based DI-MOEA) is developed based on the framework of DI-MOEA (Diversity-Indicator based Multi-Objective Evolutionary Algorithm) \\cite{wang2019diversity}. The new algorithm can generate the preference region automatically and find solutions with a more fine-grained resolution in the preference region.\n\\end{itemize}\n\nThis paper is organized as follows. Section \\ref{sec:formulation} formulates the enhanced VFMSO problem. A literature review on preference based optimization is provided in Section \\ref{sec:literature}. The customized multi-objective evolutionary algorithm for the enhanced VFMSO is introduced in Section \\ref{sec:customizedalg}, and in Section \\ref{sec:ap-di-moea}, we explain AP-DI-MOEA. Section \\ref{sec:experiments} presents and discusses experiments and their results. 
Lastly, Section \\ref{sec:conclusion} concludes the paper and outlines directions for future work.\n\n\n\\section{Problem Formulation}\n\\label{sec:formulation}\nFor a car fleet run by an operator, the components of the cars (e.g., springs, brakes, tires or the engine) can fail and should be maintained regularly. Several separate workshops are available for the maintenance of the car fleet, and the repair time and maintenance cost are known for each component in each workshop. Besides the time and cost for repairing a car component, a fixed set-up cost and set-up time are also considered for each visit of a car to a workshop; they correspond to the cost and time required for the preparation of the maintenance operation. \n\nThe enhanced VFMSO problem addressed in this paper is defined as follows:\n\\begin{enumerate}\n\\item There are $n$ cars $C=\\{{C_{1},C_{2}},\\cdots,C_{n}\\}$ and $m$ workshops $W=\\{{W_{1},W_{2}},\\\\\\cdots,W_{m}\\}$.\n\n\\item Each car $C_i$, $i=1,\\cdots,n$, comprises $l_i$ components to be maintained.\n\n\\item For each component $O_{ij}$ ($j=1,\\cdots,l_i$), i.e., the $j$th component of car $C_i$, there is a set of workshops capable of repairing it. 
The set of workshops is represented by $W_{ij}$, which is a subset of $W$.\n\n\\item The processing time for maintaining component $O_{ij}$ in workshop $W_k$ is predefined and denoted by $p_{ijk}$.\n\n\\item The cost for maintaining component $O_{ij}$ in workshop $W_k$ is predefined and denoted by $q_{ijk}$.\n\n\\item The set-up time of car $C_i$ in workshop $W_k$ is predefined and denoted by $x_{ik}$.\n\n\\item The set-up cost of car $C_i$ in workshop $W_k$ is predefined and denoted by $y_{ik}$.\n\n\\item The number of teams in workshop $W_k$ is predefined and denoted by $z_k$.\n\n\\item The previous repair time of component $O_{ij}$ is recorded and denoted by $L_{ij}$.\n\\end{enumerate}\n\nAt the same time, the following assumptions are made:\n\\begin{enumerate}\n\\item All workshops and teams are available at the time of the optimization and are assumed to be continuously available.\n\n\\item All the components are independent of each other.\n\n\\item Times required for the transport of cars from\/to workshops are included in the maintenance time and cost of cars, and the set-up time.\n\n\\item Environmental changes (such as car accidents) are not considered here.\n\n\\item There are no precedence constraints among the components of different cars. Cars are maintained on a first-come-first-served basis.\n\n\\item Each team can only work on one operation at a time, and an operation, once started, must run to completion.\n\n\\item No operation can start before the completion of the previous operation.\n\\end{enumerate}\n\nTwo constraints are considered in the problem. As mentioned earlier, each workshop can only repair specific components; this is the first constraint. The second constraint is that the maintenance periods of different operations for the same car must not overlap. Assigning two overlapping maintenance operations of a car to different workshops is obviously wrong, because one car cannot be in two different workshops at the same time. 
Assigning two overlapping maintenance operations of a car to the same workshop is not correct either, because in this case the two maintenance operations should be grouped together as one operation. The grouping strategy will be explained in Section \\ref{sec:customizedalg}.\n\nThree objectives are assumed to be relevant for the vehicle fleet operator: the total workload, the total cost and the expected number of failures. In a multi-objective optimization problem, the objectives are typically conflicting, i.e., achieving the optimal value for one objective requires some compromise on other objectives. In our problem, the fact that faster maintenance is usually more expensive leads to the conflict between the first two objectives. The expected number of failures counts how often vehicles break down on the road. Here, the expected value is used because the actual value is unknown at the time of the optimization due to uncertainties in the predictions. When the expected number of failures is large, fewer maintenance tasks are performed, and therefore the workload and cost can drop. \n\nLet $T_k$ denote the sum of the times spent for all operations that are processed in workshop $W_k$; $M_i$ the sum of the costs of all maintenance operations of car $C_i$; and $F_{ij}$ the number of failures of component $O_{ij}$. The three objectives can then be defined as: \n\\begin{flalign}\n&\\text{Minimize the total workload:} ~~ f_1 = \\sum_{k=1}^{m}T_k &&\\\\\n&\\text{Minimize the total cost:} ~~ f_2 = \\sum_{i=1}^{n}M_i &&\\\\\n\\nonumber\n&\\text{Minimize the expected number of failures:} \\\\\n&f_3 = \\sum_{i=1}^{n}\\sum_{j=1}^{l_i} \\mathbb{E}(F_{ij})\n\\end{flalign}\n\n\n\\section{LITERATURE REVIEW}\n\\label{sec:literature}\n\nMulti-objective scheduling optimization is a major topic in the research of manufacturing systems. 
Its fundamental task is to organize work and workloads so as to achieve a comprehensive optimization of multiple aspects, such as processing time, processing cost and production safety, by deploying resources and setting maintenance times and processing sequences. In the past decades, this topic has received a great deal of interest in different fields, such as scheduling of charging\/discharging for electric vehicles \\cite{zakariazadeh2014multi}; scheduling in cloud computing \\cite{ramezani2015evolutionary}; scheduling of crude oil operations \\cite{hou2015pareto}; scheduling in the manufacturing industry to reduce carbon emissions \\cite{ding2016carbon}; scheduling medical treatments for resident patients in a hospital \\cite{jeric2012multi}; scheduling for Internet service providers \\cite{bhamare2017multi}, and so on.\n\n\nAs a typical workshop style, the flexible job shop scheduling problem (FJSP) is an essential branch of production planning problems. The FJSP consists of a set of independent jobs to be processed on multiple machines, where each job contains several operations with a predetermined order. Each operation must be processed on one machine chosen from a set of alternatives, with a machine-specific processing time. The problem has been extensively studied in the literature (for example, \\cite{chiang2013simple}, \\cite{yuan2015multiobjective}, \\cite{gao2019review}). The FJSP is the research basis of the maintenance scheduling optimization problem, and many real-world problems extend the standard FJSP by adding specific features. \\cite{ozguven2010mathematical} considers the FJSP-PPF (process plan flexibility), where jobs can have alternative process plans. It is assumed that the process plans are known in advance and that they are represented by linear precedence relationships. 
Because only one of the alternative plans has to be adopted for each job, the FJSP-PPF deals not only with the routing and sequencing sub-problems, but also with the process plan selection sub-problem; a mixed-integer linear programming model is developed for the FJSP-PPF in that work. In \\cite{demir2014effective}, a mathematical model and a genetic algorithm are proposed to handle overlapping in operations. Classically, a lot which contains a batch of identical items is transferred from one machine to the next only when all items in the lot have completed their processing; with overlapping, sublots are transferred from one machine to the next for processing without waiting for the entire lot to be processed at the predecessor machine, meaning that a successor operation of a job can start before its predecessor has completely finished. Three features are considered in \\cite{yu2017extended}: (1) job priority; (2) parallel operations: some operations can be processed simultaneously; (3) sequence flexibility: the sequence of some operations can be exchanged. A mixed-integer linear programming (MILP) model is established to formulate the problem, and an improved differential evolution algorithm is designed. Because unexpected events occur in most real manufacturing systems, a further type of scheduling problem, known as the dynamic scheduling problem, has emerged. This type of problem considers random machine breakdowns, the addition of new machines, new job arrivals, job cancellations, changing processing times, rush orders, rework or quality problems, due date changes, etc. Corresponding works on the FJSP include \\cite{fattahi2010dynamic}, \\cite{al2011robust}, \\cite{shen2015mathematical}, \\cite{ahmadi2016multi}. Compared with the standard FJSP, our VFMSO problem has some special properties: (1) flexible sequence: the sequence of the components is not predefined, but mainly influenced by the RUL probability distribution. 
(2) multiple problem parameters: besides the processing time, other problem parameters such as the maintenance cost, set-up time, set-up cost, repair teams, etc., also affect the result.\n\n\n\nOur real-world problem, like many other multi-objective optimization problems, can lead to a large objective space. However, finding a well-distributed set of solutions on the Pareto front requires a large population size and considerable computational effort. Therefore, instead of spreading a limited number of individuals across the entire Pareto front, we decided to focus on only a part of the Pareto front; specifically, the search for solutions is guided only towards the preference region, which, in our algorithm, is determined by the knee point. It has been argued in the literature that knee points are the most interesting solutions, the naturally preferred solutions, and most likely the optimal choice of the decision maker (DM) \\cite{das1999characterizing, mattson2002minimal, deb2003multi, branke2004finding}. \n\n\n\n\nThe knee point is a point for which a small improvement in any objective would lead to a large deterioration in at least one other objective. In the last decade, several methods have been presented to identify knee points or knee regions. Das \\cite{das1999characterizing} refers to the point where the Pareto surface ``bulges'' the most as the knee point; this point corresponds to the solution farthest from the convex hull of individual minima, i.e., the convex hull of the minimizers of the single objective functions. Zitzler \\cite{zitzler2004tutorial} defines $\\epsilon$-dominance: assuming minimization, a solution $a$ is said to $\\epsilon$-dominate a solution $b$ if and only if $f_i(a)-\\epsilon \\leq f_i(b) ~\\forall i=1,...,m$, where $m$ is the number of objectives. A solution with a higher $\\epsilon$-dominance value with respect to the other solutions in the Pareto front approximation is a solution having higher trade-offs and, by this definition, corresponds to a knee point. 
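For a minimization problem, one way to operationalize this $\\epsilon$-dominance based knee measure can be sketched as follows. This is a minimal illustration under our reading of the definition, not code from the cited works; the function names are ours:

```python
from typing import List, Sequence

def eps_to_dominate(b: Sequence[float], a: Sequence[float]) -> float:
    # Smallest additive epsilon such that b epsilon-dominates a under
    # minimization: f_i(b) - eps <= f_i(a) for every objective i.
    return max(fb - fa for fb, fa in zip(b, a))

def knee_by_eps_dominance(front: List[Sequence[float]]) -> int:
    # The solution that needs the largest epsilon before any other
    # solution epsilon-dominates it is taken as the knee point.
    def eps_value(i: int) -> float:
        return min(eps_to_dominate(front[j], front[i])
                   for j in range(len(front)) if j != i)
    return max(range(len(front)), key=eps_value)
```

On a small front such as $\\{(0,1), (0.2,0.2), (1,0)\\}$, the strongly bulging middle solution requires the largest $\\epsilon$ to be $\\epsilon$-dominated and is therefore selected.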
The authors of \\cite{yu2018method} propose to calculate the density of solutions projected onto the hyperplane constructed by the extreme points of the non-dominated solutions, and then to identify the knee regions based on the solution density. \n\nDifferent ways of applying knee points in MOEAs have also been proposed.\nBranke \\cite{branke2004finding} modifies the second criterion in NSGA-II \\cite{deb2002fast} and replaces the crowding distance by either an angle-based measure or a utility-based measure. The angle-based method calculates the angle between an individual and its two neighbors in the objective space. The smaller the angle, the more clearly the individual can be classified as a knee point. However, this method can only be used for bi-objective problems. In the utility-based method, a marginal utility function is suggested to approximate the angle-based measure in the case of more than two objectives. The larger the external angle between a solution and its neighbors, the larger the gain in terms of linear utility obtained from substituting the neighbors with the solution of interest. However, the utility-based measure is not suited for finding knees in concave regions of the Pareto front.\n\nRachmawati \\cite{rachmawati2006multi, rachmawati2006multi2} proposes a knee-based MOEA which computes a transformation of the original objective values based on a weighted-sum niching approach. The extent and the density of coverage of the knee regions are controllable by the parameters for the niche strength and the pool size. The strategy is, however, susceptible to losing less pronounced knee regions.\n\nSch{\\\"u}tze \\cite{schutze2008approximating} investigates two strategies for the approximation of knees of bi-objective optimization problems with stochastic search algorithms. 
Several new definitions for identifying knee points and knee regions in bi-objective optimization problems have been suggested in \\cite{deb2011understanding}, where the possibility of applying them is also discussed.\n\nBesides knee points, reference points, which are normally provided by the DM, have also been used to find a set of solutions near them. Deb \\cite{deb2006reference} proposes an MOEA, called R-NSGA-II, by which a set of Pareto optimal solutions near a supplied set of reference points can be found. The dominance relation together with a modified crowding distance operator is used in this methodology. For all solutions of the population, the distances to all reference points are calculated and ranked. The lowest rank (over all reference points) of a solution is used as its crowding distance. In addition, a parameter $\\epsilon$ is used to control the spread of the obtained solutions. Bechikh \\cite{bechikh2010searching} proposes KR-NSGA-II by extending R-NSGA-II. Instead of obtaining the reference points from the DM, KR-NSGA-II uses the knee points as mobile reference points and guides the search of the algorithm towards these points. The number of knee points of the optimization problem is needed as prior information in KR-NSGA-II.\n\nGaudrie \\cite{gaudrie2019targeting} uses the projection (intersection in the case of a continuous front) of the closest non-dominated point on the line connecting the estimated ideal and nadir points as the default preference. Conditional Gaussian process simulations are performed to create possible Pareto fronts, each of which defines a sample for the ideal and the nadir point; the estimated ideal and nadir points are the medians of these samples.\n\nRachmawati and Srinivasan \\cite{rachmawati2009multiobjective} evaluate the worthiness of each non-dominated solution in terms of the compromise between the objectives. 
The local maxima are then identified as potential knee solutions, and linear weighted sums of the original objective functions are optimized to guide solutions toward the knee regions. \n\nAnother idea for incorporating preference information into evolutionary multi-objective optimization is proposed in \\cite{thiele2009preference}, where the fitness function is combined with an achievement scalarizing function containing the reference point. In this approach, the preference information is given in the form of a reference point, and the indicator-based evolutionary algorithm IBEA \\cite{zitzler2004indicator} is modified by embedding the preference information into the indicator. Various further preference based MOEAs have been suggested, e.g., \\cite{braun2011preference, ramirez2017knee, wang2017new}. \n\nIn our proposed algorithm, i.e., AP-DI-MOEA, we adopt the method from \\cite{das1999characterizing} to identify the knee point, design the preference region based on the knee point, and guide the search towards the preference region. 
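The core geometric idea behind the adopted knee point definition of \\cite{das1999characterizing} can be sketched for the bi-objective case as follows. This is our own minimal illustration; the preference region construction used in AP-DI-MOEA generalizes this to more objectives and differs in detail:

```python
import math
from typing import List, Tuple

def das_knee_2d(front: List[Tuple[float, float]]) -> int:
    # Bi-objective sketch of Das's idea: the knee is the non-dominated
    # solution farthest from the line through the two extreme points
    # (the convex hull of individual minima), on the side where the
    # front "bulges" towards the ideal point. Assumes two distinct
    # extreme points and minimization of both objectives.
    e1 = min(front, key=lambda p: p[0])   # best solution in objective 1
    e2 = min(front, key=lambda p: p[1])   # best solution in objective 2
    (x1, y1), (x2, y2) = e1, e2
    norm = math.hypot(x2 - x1, y2 - y1)

    def signed_dist(p: Tuple[float, float]) -> float:
        # Positive when p lies on the ideal-point side of the line.
        return ((x2 - x1) * (y1 - p[1]) - (y2 - y1) * (x1 - p[0])) / norm

    return max(range(len(front)), key=lambda i: signed_dist(front[i]))
```

Because only relative distances to the hull of individual minima are used, no prior knowledge about the number or location of knees is required, which matches the design goals stated below.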
The advantages of our algorithm are: (1) no prior knowledge is needed to identify the knee point and knee region; (2) the preference region is generated automatically and narrowed down step by step to improve its accuracy; (3) our strategy can handle not only bi-objective optimization problems, but also tri- and many-objective problems; (4) although we integrate the strategy with DI-MOEA, it can be integrated with any standard MOEA (such as NSGA-II \\cite{deb2002fast}, SMS-EMOA \\cite{beume2007sms} and others); (5) the proposed algorithm is capable of finding preferred solutions for multi-objective optimization problems with linear, convex or concave Pareto fronts, as well as for discrete problems.\n\n\\section{Customized Algorithm for Vehicle Fleet Maintenance Scheduling Optimization}\n\\label{sec:customizedalg}\nFor our real-world VFMSO problem, we first define the execution window for each component based on its predicted RUL probability distribution, which is assumed to be a normal distribution. The execution window means that the maintenance of the component can only start at a time inside the window. The mean ($\\mu$) and standard deviation ($\\sigma$) of the predicted RUL probability distribution determine the interval of the execution window, which is defined as: [$\\mu -2\\times \\sigma$, $\\mu +2\\times \\sigma$]. 
The interval is chosen relatively wide because 95\\% of the values are within two standard deviations of the mean; maintenance before or after this interval therefore hardly makes sense.\n\nAfter the determination of the execution window, the following two special strategies are adopted to improve the process of scheduling optimization: \n\\begin{itemize}\n\\item Grouping components.\n\\item Obtaining the penalty cost and expected number of failures by Monte Carlo simulation.\n\\end{itemize}\nLastly, an evolutionary algorithm (EA) is chosen to solve this real-world application problem because of its robustness and flexibility in capturing global solutions of complex combinatorial optimization problems. Moreover, EAs are well suited to solve multi-objective optimization problems due to their ability to approximate the entire Pareto front in a single run.\n\n\\subsection{Grouping Components}\nIt would be troublesome, and also a waste of time and effort, to send a car to workshops repeatedly within a short period of time to repair different components. In our algorithm, since each component has an execution window for its maintenance, it is possible to combine the maintenance of several components into one visit if their execution windows overlap. In particular, by grouping the maintenance of multiple components into one maintenance operation, the set-up cost and set-up time are charged only once for the complete group of components. \n\n\\begin{figure}[!htbp]\n\\centering\n\\includegraphics[height=120pt, width=260pt]{newgroup.png} \n\\caption{Possible groups for a car with eight components.} \n\\label{f2}\n\\end{figure}\n\nFigure~\\ref{f2} shows the execution windows of eight components of a car. The overlap of the execution windows shows the possibility of grouping these components. 
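The execution window and the grouping test can be made concrete with a small sketch (the helper names are illustrative; the paper's actual implementation is described in \\cite{wang2019vehicle}):

```python
from typing import List, Optional, Tuple

Window = Tuple[float, float]

def execution_window(mu: float, sigma: float) -> Window:
    # Execution window [mu - 2*sigma, mu + 2*sigma] derived from the
    # predicted (normal) RUL distribution of a component.
    return (mu - 2.0 * sigma, mu + 2.0 * sigma)

def common_overlap(windows: List[Window]) -> Optional[Window]:
    # Components of the same car can be grouped into one maintenance
    # operation only if their execution windows share a common overlap;
    # the group operation must then start inside this interval.
    lo = max(w[0] for w in windows)
    hi = min(w[1] for w in windows)
    return (lo, hi) if lo <= hi else None
```

For example, windows $(80, 120)$ and $(110, 150)$ can be grouped with starting times in $[110, 120]$, whereas $(80, 100)$ and $(110, 150)$ cannot.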
Combining components can only be effective if there is a common overlap of the execution windows of components from the same car, and the starting time of the group operation must lie within this common overlap. In this example, component $c1$ can be grouped with $c2$ and\/or $c3$ due to the overlap between their execution windows. Other possible group structures can be deduced in the same manner. \n\n\\subsection{Monte Carlo Simulation}\nWithin the execution window of a component, an arbitrary time can be chosen as the starting time for maintaining the component. The maintenance time of each component should be as close as possible to its real due date, because:\n\\begin{itemize}\n\\item Performing the maintenance too early results in higher maintenance costs in the long term, because more maintenance tasks have to be done. \n\\item The risk of breaking down on the road increases if the maintenance date is too late.\n\\end{itemize}\nTherefore, we use Monte Carlo simulation to simulate the ``real'' due dates of each component. In our experiments, stability is already achieved with a few hundred samples; in our case, $1000$ samples of the due date are generated in the execution window of each component according to its predicted RUL probability distribution (see Section IV). Figure~\\ref{f1} shows an example of the execution window derived from the predicted RUL probability distribution of a component. After the 1000 sampled due dates are generated in the execution window, the scheduled maintenance date of the component is compared with these samples one by one, and each comparison leads to one of three situations. Let us use $d_{ij}^v$ to denote the $v$th due date sample of component $O_{ij}$, and $D_{ij}$ the scheduled maintenance date of component $O_{ij}$. 
The three possibilities after the comparison are:\n\\\\\nCase 1) $~D_{ij} < d_{ij}^v$\\\\\nA scheduled maintenance date earlier than the sample (the ``real'' due date) means that the component will be maintained before it breaks. In this case, its useful life between the maintenance date and the due date is wasted. Therefore, a corresponding penalty cost is imposed to reflect this waste. To calculate the penalty cost, a linear penalty function is suggested based on the following assumptions:\n\\begin{itemize}\n\\item If a component is maintained when it is new or the previous maintenance has just been completed, the penalty cost is the full cost of maintaining it, which is $c+s$: the maintenance cost of the component plus the set-up cost of the car;\n\\item If a component is maintained exactly at its due date, the penalty cost is 0.\n\\end{itemize}\n\n\\begin{figure}[!htbp]\n\\centering\n\\includegraphics[ width=4in]{newpenalty.png} \n\\caption{Execution window of a component.} \n\\label{f1}\n\\end{figure}\n\\vspace{-0.0cm}\n\nAssume $d_{ij}^v$ is the ``Sampled Due date'' in Figure~\\ref{f1} and $D_{ij}$ is ``Maintenance date a''; in this case, $D_{ij}$ is earlier than $d_{ij}^v$. The penalty cost of ``Maintenance date a'' for the ``Sampled Due date'' is given by the vertical dotted line above ``Maintenance date a''.\n\\\\\nCase 2) $~D_{ij} > d_{ij}^v$\\\\\nA scheduled maintenance date later than the sample means that the maintenance is too late and the defect occurs during use. Again, let $d_{ij}^v$ be the ``Sampled Due date'' in Figure~\\ref{f1}, but now the scheduled maintenance date $D_{ij}$ is ``Maintenance date b''. In this case, $D_{ij}$ is later than $d_{ij}^v$, and the vehicle breaks down on the road. In our algorithm, the number of failures is then increased by one.\n\\\\\nCase 3) $~D_{ij} = d_{ij}^v$\\\\\nThe ideal situation is that the maintenance date is scheduled exactly on the due date. 
The component is then maintained exactly on the date it breaks. In this case, there is no penalty and no failure.\n\nThe averages of the penalty costs and of the numbers of failures over the $1000$ due date samples are used as the penalty cost and the expected number of failures for the scheduled maintenance date of the component. The cost of each operation (single-component or group operation) consists of three parts: the set-up cost of the car, the maintenance costs, and the penalty costs of all components of the operation. The penalty cost of the components is part of the total cost, and the expected number of failures of the components is the third objective to be minimized in our multi-objective optimization.\n\n\n\\subsection{Implementation of Evolutionary Algorithm Operators}\nTo solve our application problem with an EA, several basic issues need to be addressed: how to represent an individual or solution in the population (chromosome encoding); how to take these chromosomes through a process of evolution (genotype-phenotype mapping); and how to create variations of solutions in each iteration (genetic operators). 
Details of these topics are given in the following subsections.\n\n\\subsubsection{Chromosome Encoding}\nIn our algorithm, a three-vector chromosome (Figure~\\ref{f0}) is proposed to represent an individual; the three vectors are:\n\\begin{itemize}\n \\item Group structure vector: the group structures of components.\n \\item Starting time vector: the starting times of operations.\n \\item Workshop assignment vector: the workshops for operations.\n\\end{itemize}\n\n\\begin{figure*}[!htbp]\n\\hspace{-0.7cm}\n\\includegraphics[height=0.45in, width=5.2in]{chrom1.png}\n\\caption{Three-vector chromosome.}\n\\label{f0}\n\\end{figure*}\n\nThe group structure vector encodes which components are in the same group; it is initialized by randomly picking a feasible group structure for each car (see \\cite{wang2019vehicle} for details). The starting time vector must be generated after the group structure vector, because the starting time of each operation is determined by the execution window, which is the entire execution window of the component for a single-component operation, or the intersection of the execution windows for a group operation. To initialize the starting time vector, a time is randomly selected from the execution window or execution window intersection of each operation. \n\nA workshop is treated as ``several workshops'' according to its capacity (the number of teams). In this way, the schedule for each workshop team can be derived from the solution. For example, consider two workshops with three and four repair teams, respectively. Group operations can then be randomly assigned to seven ``workshops'': the former three and the latter four represent the corresponding teams of the two workshops.\n\n\\subsubsection{Genotype-Phenotype Mapping}\nTo use the power of EAs to obtain a better population, we need to evaluate each chromosome and give the better ones higher probabilities to produce offspring. 
This is done by genotype-phenotype mapping, i.e., decoding the chromosome. In our problem, this means converting an individual into a feasible schedule in order to calculate the objectives and constraints, which represent the relative quality of a chromosome. The genotype-phenotype mapping can be achieved easily in our algorithm because the group structure, the starting time and the workshop team of the operations can be read directly from each individual. When converting an individual into a schedule, it is possible that the processing times of two or more operations assigned to the same workshop team overlap, since the starting time of each operation is fixed in the starting time vector. In this situation, the first-come-first-served principle is followed: the starting time and processing time of the earlier operation remain the same; the starting time of the later operation is delayed until the completion of the earlier one, while its processing time remains the same; in addition, an extra waiting time is added to the later operation as a penalty, because the vehicle waits in the workshop for its maintenance.\n\n\\subsubsection{Genetic Operators}\nIn accordance with the problem and its encoding, specific crossover and mutation operators have been designed for our problem (see \\cite{wang2019vehicle} for details). Both operators are applied separately to the three parts of the chromosome.\n\nFor the group structure vector, multi-point crossover can be used as the crossover operator, where the number of cutting points depends on the length of the vector. The same cutting points can be applied to the starting time vector when performing the crossover. However, changes to the group structure vector caused by the crossover may invalidate genes in the starting time vector, because the group members and execution window intersections may have changed under the new group structure. 
Therefore, when performing the crossover on the starting time vector, the starting times of all operations should be checked against the new group structure, and a new starting time is randomly drawn from the correct intersection whenever the starting time of an operation is invalid. The multi-point crossover can be applied to the workshop assignment vector as well. \n\nThe mutation operator alters one or more gene values in a chromosome. Similarly, the mutation should be applied to the group structure vector first because of its impact on the starting time vector; the starting times of operations are checked and corrected after the mutation of the group structure vector. Afterwards, several gene values can be altered in the starting time vector and the workshop assignment vector to generate a new individual.\n\n\n\\section{Proposed Preference based Algorithm}\n\\label{sec:ap-di-moea}\nAs the number of objectives and decision variables increases, the number of non-dominated solutions tends to grow exponentially \\cite{pal2018decor}. This makes it more challenging to efficiently achieve a solution set with satisfactory convergence and diversity. At the same time, a huge number of solutions is needed to approximate the entire Pareto front; however, a large population means more computational time and resources. To overcome these difficulties, we propose an automatic preference based MOEA, which can generate the preference region, or region of interest (ROI), automatically and find non-dominated solutions in the preference region instead of on the entire Pareto front. The automatic preference based MOEA is developed based on the framework of DI-MOEA (Diversity-Indicator based Multi-Objective Evolutionary Algorithm) \\cite{wang2019diversity}. We call our new algorithm AP-DI-MOEA.\n \nDI-MOEA is an indicator-based MOEA; it has been shown to be competitive with other MOEAs on common multi-objective benchmark problems. 
Moreover, it is invariant to the shape of the Pareto front and can achieve evenly spread Pareto front approximations.\nDI-MOEA adopts a hybrid selection scheme:\n\\begin{itemize}\n \\item The ($\\mu$ + $\\mu$) generational selection operator is used when the parent population can be layered into multiple dominance ranks. The intention is to accelerate convergence until all solutions are non-dominated.\n \\item The ($\\mu$ + 1) steady state selection operator is adopted when all solutions in the parent population are mutually non-dominated and diversity is the main selection criterion, in order to achieve a uniform distribution of the solutions on the Pareto front.\n\\end{itemize}\n\nDI-MOEA employs non-dominated sorting as the first ranking criterion and the diversity indicator, i.e., the Euclidean distance based geometric mean gap indicator, as the second, diversity-based ranking criterion to guide the search. Two variants of DI-MOEA, denoted as DI-1 and DI-2, exist; they use the crowding distance and the diversity indicator, respectively, as the second criterion in the ($\\mu$ + $\\mu$) generational selection operator. To ensure the uniformity of the final solution set, the diversity indicator is used by both variants in the ($\\mu$ + 1) steady state selection operator. Analogously, two variants of AP-DI-MOEA, i.e., AP-DI-1 and AP-DI-2, are derived from the two variants of DI-MOEA.\n\n\n\nThe workings of AP-DI-MOEA are outlined in Algorithm 1. Exceedance of $Enum\\_P$, a predefined number of evaluations, divides the algorithm into two phases: the learning phase and the decision phase. In the learning phase, the algorithm explores the possible area of Pareto optimal solutions and finds a rough approximation of the Pareto front. In the decision phase, the algorithm identifies the preference region and finds preferred solutions. 
When $Enum\\_P$ is satisfied at some moment during the run, the first preference region is generated and $Enum\\_P$ is updated to determine the future moment at which the preference region needs to be updated next; this updating of $Enum\\_P$ continues until the end of the run. The first value of $Enum\\_P$ acts as a boundary: before it is satisfied, AP-DI-MOEA runs exactly like DI-MOEA to approximate the whole Pareto front, whereas after it is satisfied, the preference region is generated automatically and AP-DI-MOEA finds solutions focusing on the preference region. The subsequent values of $Enum\\_P$ define the later moments at which the preference region is updated step by step; eventually, a precise ROI with a proper size can be achieved.\n\n\\begin{figure*}[!htbp]\n\\vspace{-3.5cm}\n\\hspace{-3cm}\n\\includegraphics[height=10in]{pg_0002.pdf}\n\\end{figure*}\n\n\\begin{figure*}[!htbp]\n\\hspace{-2.5cm}\n\\includegraphics[width=7in]{al2.pdf}\n\\end{figure*}\n\n\n\\iffalse \n\\addtocounter{algorithm}{1}\n\\begin{algorithm}[!htbp]\n\\setstretch{0.8}\n \t\\caption{Finding the knee point and defining the preference region.}\n \\label{algorithm:2}\n \t\\begin{algorithmic}[1]\n \\STATE $n \\leftarrow$ the number of objectives;\n \\STATE $P_t \\leftarrow$ current population;\n \\STATE $\\epsilon$; \/\/parameter ($>$0) for distinguishing convex\/concave shape;\n \\STATE $popsize \\leftarrow |P_t|$; \/\/population size\n \n \\STATE Declare $Q[n]$; \/\/upper quartile objective values of $P_t$ \n \\STATE Declare $L[n]$; \/\/worst objective values of $P_t$\n \\STATE Declare $knee[n]$; \/\/knee point of $P_t$\n \\STATE Declare $P\\_region[n]$; \/\/preference region of $P_t$\n \\STATE Declare $Expoints[n][n]$; \/\/extreme points\n \\STATE $foundknee \\leftarrow false$;\n \n \n \\FOR{{\\bf each} $i \\in \\{ 1, \\dots, n\\}$}\n \\STATE sort($P_t$) by the $i$th objective in ascending order;\n \\STATE $Q[i] \\leftarrow P_t$.get\\_index$(\\frac{3}{4}\\times popsize)$.get\\_obj($i$); 
\/\/upper quartile value of the $i$th objective \n \\STATE $L[i] \\leftarrow P_t$.get\\_index$(popsize)$.get\\_obj($i$);\/\/the largest (worst) value of the $i$th objective\n \\ENDFOR\n \\FORALL{solution $s \\in P_t$}\n \\IF{$s$.get\\_obj($i=1,...,n) > Q[i]$ }\n \\STATE remove $s$ from $P_t$;\n \\ENDIF\n \\ENDFOR\n \\STATE $Expoints[\\centerdot][\\centerdot] \\leftarrow$ extreme points in $P_t$;\n \\STATE $num_a\\leftarrow$ the number of points in concave region of hyperplane formed by $Expoints[\\centerdot][\\centerdot]$;\n \\STATE $num_v \\leftarrow |P_t|-num_a$; \/\/the number of points in convex region\n \\IF{$(num_v - num_a > \\epsilon)$}\n \\STATE \/\/roughly convex shape\n \\STATE remove solutions in concave region from $P_t$;\n \\ELSIF{$(num_a - num_v > \\epsilon)$}\n \\STATE \/\/roughly concave shape\n \\STATE remove solutions in convex region from $P_t$;\n \\ELSE\n \\FORALL{solution $s\\in P_t$}\n \\STATE calculate hypervolume of $s$ with reference point $L[\\centerdot]$;\n \\STATE update the largest hypervolume value ($max\\_h$);\n \\ENDFOR\n \\STATE $knee[\\centerdot] \\leftarrow$ solution with $max\\_h$;\n \\STATE $foundknee \\leftarrow true$;\n \\ENDIF\n \\IF{($foundknee == false$)}\n \\FORALL{solution $s\\in P_t$}\n \\STATE calculate distance between $s$ and hyperplane formed by $Expoints[\\centerdot][\\centerdot]$;\n \\STATE update the largest distance ($max\\_d$);\n \\ENDFOR\n \\STATE $knee[\\centerdot] \\leftarrow$ solution with $max\\_d$;\n \\ENDIF\n \\FOR{{\\bf each} $i \\in \\{ 1, \\dots, n\\}$}\n \\STATE $P\\_region[i] \\leftarrow knee[i] + (L[i]-knee[i]) \\times 85\\%$\n \\ENDFOR\n \t\\end{algorithmic}\n\\end{algorithm}\n\\fi\n\n\\begin{figure}[!htbp]\n\\centering\n\\includegraphics[width=3.3in]{knee-explain.png} \n\\caption{Finding the knee point in bi-dimensional space.} \n\\label{knee}\n\\end{figure}\n\nThe first\/new preference region is formed based on the population at the moment when the condition of $Enum\\_P$ is satisfied, especially 
the knee point of the population. Algorithm 2 gives the details of line 41 in Algorithm 1: it introduces the steps of finding the knee point of a non-dominated solution set and constituting a hypercube shaped preference region around the knee point. Figure~\\ref{knee} illustrates finding the knee point in a bi-dimensional space. Firstly, the upper quartile objective values (line 13 in Algorithm 2) of the solution set are used as a boundary to define outliers, and solutions outside this boundary are removed (line 16 - 20 in Algorithm 2). The extreme solutions (the solutions with the maximum value in one objective) are then found inside the boundary (line 21 in Algorithm 2) and a hyperplane is formed based on the extreme solutions. In a bi-dimensional space (Figure~\\ref{knee}), the hyperplane reduces to a line connecting the two extreme solutions. From the numbers of points below and above the hyperplane (line 22 - 23 in Algorithm 2), the shape of the solution set can be roughly perceived.\nWe distinguish between ``convex'' and ``concave'' regions: points in the \\textit{convex} (\\textit{concave}) \\textit{region} dominate (are dominated by) at least one point in the hyperplane spanned by the extreme points. When the number of points in the convex region and the number of points in the concave region are close enough, the shape of the current solution set is almost linear. This occurs both when the true Pareto front is linear and when the solution set has converged very well within a small area of the Pareto front. A parameter $\\epsilon$, a small number chosen according to the size of the solution set, is used to quantify this closeness. In the case that the shape of the current solution set is (almost) linear, the solution with the largest hypervolume value with regard to the worst objective vector (line 14 in Algorithm 2) is adopted as the knee point (line 32 - 36 in Algorithm 2). 
Otherwise, when the shape of the current solution set is convex or concave, the solution in the convex or concave region with the largest Euclidean distance to the hyperplane is chosen as the knee point (line 39 - 42 in Algorithm 2). After the knee point is found, the preference region is determined based on the knee point by the following formula:\n\\begin{align}\nP\\_region[i] = knee[i] + (L[i]-knee[i]) \\times 85\\%\n\\end{align}\n\nHere, $i$ denotes the $i$th objective; as in Algorithm 2, $L[i]$ is the worst value of the $i$th objective in the population, $knee[i]$ is the $i$th objective value of the knee point and $P\\_region[i]$ is the upper bound of the $i$th objective. W.l.o.g. we assume that the objectives are to be minimized and that the lower bound of the preference region is the origin. According to the formula, the first preference region is relatively large (roughly 85\\% of the entire Pareto front). As the number of iterations increases, the preference region is updated and becomes smaller and smaller, because every new preference region keeps 85\\% of the current Pareto front. Eventually, the preference region should reach a proper range, say, 15\\% of the initial Pareto front. Narrowing down the preference region step by step benefits the accuracy of the preference region.\n\nIn the interest of clarity, Algorithm 1 only shows the workings of AP-DI-1; the workings of AP-DI-2 can be obtained by replacing the crowding distance with the diversity indicator contribution.\nIn the ($\\mu$ + $\\mu$) generational selection operator (line 14 - 36 in Algorithm 1), when there is no preference region, the second ranking criterion (the crowding distance for AP-DI-1; the diversity indicator for AP-DI-2) is calculated for all solutions on the last front, and the population is truncated based on non-dominated sorting and the second ranking criterion (line 28 - 29 in Algorithm 1). 
If a preference region already exists, both the second ranking criterion and the Euclidean distance to the knee point are calculated for all solutions on the last front, and the population is truncated based first on non-dominated sorting, then on the second ranking criterion and lastly on the Euclidean distance to the knee point (line 31 - 32 in Algorithm 1). In the ($\\mu$ + 1) steady state selection operator (line 38 - 59 in Algorithm 1), firstly, the value of $Enum\\_P$ is compared with the current number of evaluations to determine if a (new) preference region should be generated. When it is time to do so, the preference region is generated through Algorithm 2 (line 41 in Algorithm 1); at the same time, the value of $Enum\\_P$ is updated to the next moment at which the preference region is to be updated (line 42 in Algorithm 1). There are different strategies to assign the values of $Enum\\_P$. In our algorithm, we divide the whole computing budget into two parts: the first half is used to find an initial approximation of the entire Pareto front, and the second half is used to update the preference region and find solutions in the preference region. Assume the total computing budget is $Enum\\_T$ (the number of evaluations); then the first value of $Enum\\_P$ is $\\frac{1}{2}\\times Enum\\_T$. Because we expect a final preference region with a size of around 15\\% of the initial entire Pareto front and each new preference region takes 85\\% of the current Pareto front ($0.85^{12} \\approx 0.14$), the value of $Enum\\_P$ is updated by the following formula:\n\\begin{align}\nEnum\\_P = Enum\\_P + (Enum\\_T\/2)\/12\n\\end{align}\n\nThe second half of the budget is thus divided into $12$ partial budgets, and a new preference region is constituted after each partial budget. In the end, the final preference region is achieved and solutions focusing on this preference region are obtained. 
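As a minimal sketch of this budget split (Python; the function names are hypothetical and not part of the AP-DI-MOEA implementation), the $Enum\\_P$ schedule and the 85\\% narrowing rule can be written as:

```python
# Sketch of the Enum_P update schedule and of the preference-region upper
# bound P_region[i] = knee[i] + (L[i] - knee[i]) * 0.85 (minimization).
# All names are illustrative, not taken from the authors' code.

def enum_p_schedule(enum_t, n_updates=12):
    """First half of the budget builds the initial front; the second half
    is split into n_updates equal partial budgets."""
    first = enum_t // 2
    step = (enum_t // 2) // n_updates
    return [first + k * step for k in range(n_updates + 1)]

def preference_region(knee, worst, fraction=0.85):
    """Per-objective upper bound of the hypercube-shaped preference region."""
    return [k + (w - k) * fraction for k, w in zip(knee, worst)]

schedule = enum_p_schedule(1_200_000)   # total budget used later for VFMSO
print(schedule[0], schedule[1])         # 600000 650000
print(round(0.85 ** 12, 3))             # 0.142: ~15% of the initial front remains
print([round(v, 2) for v in preference_region([1.0, 2.0], [5.0, 6.0])])  # [4.4, 5.4]
```

With the total budget of $1200000$ evaluations used later for VFMSO, this yields the first preference region at $600000$ evaluations and an update after every $50000$ evaluations.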
For the rest of the ($\\mu$ + 1) steady state selection operator, likewise, when there is a preference region, three ranking criteria (1. non-dominated sorting; 2. the diversity indicator; 3. the Euclidean distance to the knee point) work together to achieve a well-converged and well-distributed set of Pareto optimal solutions in the preference region.\n\n\\section{Experimental Results}\n\\label{sec:experiments}\n\\subsection{Experimental Design}\nIn this section, simulations are conducted to demonstrate the performance of the proposed algorithms on both benchmark problems and our real-world application problem. All experiments are implemented based on the MOEA Framework (\\url{http:\/\/www.moeaframework.org\/}), a Java-based framework for multi-objective optimization.\n\nThe two variants of AP-DI-MOEA, AP-DI-1 and AP-DI-2, are compared with the two variants of DI-MOEA, DI-1 and DI-2, and with NSGA-III \\cite{deb2014evolutionary}. We compare our algorithm with NSGA-III because NSGA-III is a representative state-of-the-art evolutionary multi-objective algorithm and is powerful at handling problems with non-linear characteristics. For bi-objective benchmark problems, the algorithms are tested on ZDT1 and ZDT2 with 30 variables. For three-objective benchmark problems, DTLZ1 with 7 variables and DTLZ2 with 12 variables are tested. For the real-world application problem of VFMSO, experiments have been conducted on two instances of different sizes. The configurations of the two instances, such as the predicted RUL probability distribution, the processing time and maintenance cost of each component, and the set-up time and cost of each car, are made available on \\url{http:\/\/moda.liacs.nl}. On every problem, we run each algorithm $30$ times with different seeds, using the same $30$ seeds for all algorithms. 
All experiments are performed with a population size of $100$. Bi-objective problems are run with a budget of $22000$ (objective function) evaluations, the three-objective DTLZ problems with a budget of $120000$ evaluations, and the VFMSO problems with a budget of $1200000$ evaluations. This setting is chosen to be realistic in the light of the scheduling applications that we ultimately want to solve.\n\n\n\n\n\n\n\\subsection{Experiments on bi-objective problems}\n\nBi-objective problems are optimized with a total budget of $22000$ evaluations: when the number of evaluations reaches $10000$, the first preference region is generated, and after every subsequent $1200$ evaluations the preference region is updated. Figure~\\ref{fig:ZDT1} shows the Pareto front approximations from a typical run on ZDT1 (left column) and ZDT2 (right column). The graphs in the upper row are obtained from DI-1 and AP-DI-1, while the graphs in the lower row are from DI-2 and AP-DI-2. Each graph presents the entire Pareto front approximation from DI-MOEA and the preferred solutions from AP-DI-MOEA (or \\textit{AP solutions}); the preference region of AP-DI-MOEA is shown as the gray area.\n\n\\begin{figure}[htbp]\n\\hspace{-0.58cm}\n\\subfigure[ZDT1]{\n \\begin{minipage}[t]{0.53\\linewidth}\n \\centering\n \\includegraphics[width=2in]{zdt1-di1-tdi1-w.png}\\\\\n \\vspace{0.02cm}\n \\includegraphics[width=2in]{zdt1-di2-tdi2-w.png}\\\\\n \\vspace{0.02cm}\n \n \\end{minipage}%\n}%\n\\subfigure[ZDT2]{\n \\begin{minipage}[t]{0.53\\linewidth}\n \\centering\n \\includegraphics[width=2in]{zdt2-di1-tdi1-w.png}\\\\\n \\vspace{0.02cm}\n \\includegraphics[width=2in]{zdt2-di2-tdi2-w.png}\\\\\n \\vspace{0.02cm}\n \n \\end{minipage}%\n}%\n\n\\caption{Pareto front approximation on ZDT1 and ZDT2.}\n\\label{fig:ZDT1}\n\\end{figure}\n\nBesides the visualization of the Pareto fronts, we also compute the knee point of the entire final Pareto front 
approximation from DI-MOEA via the strategy described in Algorithm 2. For each run of DI-MOEA and AP-DI-MOEA with the same seed, the following two issues have been checked: \n\\begin{itemize}\n\\item Whether the knee point from DI-MOEA is in the preference region achieved by its derived AP-DI-MOEA;\n\\item Whether the knee point from DI-MOEA is dominated by or dominates AP solutions, or whether it is a non-dominated solution (mutually non-dominated with all AP solutions).\n\\end{itemize}\n\nTable~\\ref{table-kneezdt1} shows the results of 30 runs. For the ZDT1 problem, all 30 knee points from DI-1 and DI-2 are in the preference regions from AP-DI-1 and AP-DI-2, respectively; of these knee points, 10 from DI-1 and 7 from DI-2 are dominated by AP solutions. For the ZDT2 problem, most knee points are not in the corresponding preference regions, but of those in the preference regions, almost all are dominated by AP solutions. Please note that when a knee point from DI-MOEA is outside of the preference region from AP-DI-MOEA, it cannot dominate any AP solutions, because all AP solutions are in the preference region and only solutions on the left side of the gray area can dominate AP solutions. 
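The classification used in these tables is ordinary Pareto dominance for minimization. As a self-contained illustration (the function names are ours, not from the authors' implementation):

```python
# Generic Pareto-dominance check for minimization, used here to classify a
# knee point against a set of preferred (AP) solutions. Illustrative sketch;
# names are not taken from the AP-DI-MOEA implementation.

def dominates(a, b):
    """True if objective vector `a` Pareto-dominates `b` (minimization)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def classify(knee, ap_solutions):
    """'dominated', 'dominating', or 'incomparable' w.r.t. the AP solutions."""
    if any(dominates(s, knee) for s in ap_solutions):
        return "dominated"
    if any(dominates(knee, s) for s in ap_solutions):
        return "dominating"
    return "incomparable"

ap = [(0.2, 0.8), (0.5, 0.5), (0.8, 0.2)]
print(classify((0.6, 0.6), ap))   # dominated (by (0.5, 0.5))
print(classify((0.4, 0.4), ap))   # dominating
print(classify((0.1, 0.9), ap))   # incomparable
```

A knee point counted as "in the preference region" additionally satisfies the per-objective upper bounds $P\\_region[i]$.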
\n\n\n\\begin{table}[!htbp]\n\\centering\n\\caption{Space and dominance relation of knee point from DI-MOEA and AP solutions on ZDT problems.}\n\\begin{tabular}{|c|c|c|c|c|c|}\n\\hline\n\\multicolumn{2}{|c|}{Problem} & \\multicolumn{2}{|c|}{ZDT1} & \\multicolumn{2}{|c|}{ZDT2}\\\\ \n\\cline{1-6}\n\\multicolumn{2}{|c|}{\\multirow{2}*{Algorithm}}& DI-1\/ & DI-2\/ & DI-1\/ & DI-2\/ \\\\\n\\multicolumn{2}{|c|}{ }& AP-DI-1 & AP-DI-2 & AP-DI-1 & AP-DI-2 \\\\\n\\hline\nIn & Incomparable & 20 & 23 & 1 & 1\\\\\n\\cline{2-6}\npreference & Dominated & 10 & 7 & 9 & 9 \\\\\n \\cline{2-6}\nregion & Dominating & 0 & 0 & 0 & 0 \\\\\n\\hline\nOutside & Incomparable & 0 & 0 & 20 & 20 \\\\\n\\cline{2-6}\np-region & Dominated & 0 & 0 & 0 & 0 \\\\\n\\hline\n\\end{tabular}\n\\label{table-kneezdt1}\n\\end{table} \n\nWe also perform the same comparison between AP-DI-MOEA and NSGA-III; the results are shown in Table~\\ref{table-kneezdt1-nsga3}. For the ZDT1 problem, all knee points from NSGA-III are in the preference regions from AP-DI-MOEA, and some of these knee points dominate AP solutions. For the ZDT2 problem, most knee points from NSGA-III are not in the preference regions, and these knee points are incomparable with AP solutions. For the knee points in the preference regions, all three dominance relations with AP solutions appear. 
For both problems, when a knee point from NSGA-III dominates AP solutions, it only dominates one AP solution.\n\n\n\n\\begin{table}[!htbp]\n\\centering\n\\caption{Space and dominance relation of knee point from NSGA-III and AP solutions on ZDT problems.}\n\\begin{tabular}{|c|c|c|c|c|c|}\n\\hline\n\\multicolumn{2}{|c|}{Problem} & \\multicolumn{2}{|c|}{ZDT1} & \\multicolumn{2}{|c|}{ZDT2}\\\\ \n\\cline{1-6}\n\\multicolumn{2}{|c|}{\\multirow{2}*{Algorithm}}& NSGA-III\/ & NSGA-III\/ & NSGA-III\/ & NSGA-III\/ \\\\\n\\multicolumn{2}{|c|}{ }& AP-DI-1 & AP-DI-2 & AP-DI-1 & AP-DI-2 \\\\\n\\hline\nIn & Incomparable & 14 & 19 & 3 & 1\\\\\n\\cline{2-6}\npreference & Dominated & 0 & 0 & 2 & 3 \\\\\n \\cline{2-6}\nregion & Dominating & 16 & 11 & 4 & 6 \\\\\n\\hline\nOutside & Incomparable & 0 & 0 & 21 & 20 \\\\\n\\cline{2-6}\np-region & Dominated & 0 & 0 & 0 & 0 \\\\\n\\hline\n\\end{tabular}\n\\label{table-kneezdt1-nsga3}\n\\end{table} \n\nInstead of spreading the population across the entire Pareto front, we only focus on the preference region. To verify that our algorithm guides the search towards the preference region and that the achieved solution set is distributed across the preference region, we compare the performance of AP-DI-MOEA, DI-MOEA and NSGA-III in the preference region. For each Pareto front approximation from DI-MOEA and NSGA-III, the solutions in the corresponding preference region from AP-DI-MOEA are picked, and we compare these solutions with the AP solutions through the hypervolume indicator. The point formed by the largest objective values over all solutions in the preference region is adopted as the reference point when calculating the hypervolume indicator. It has been found that all hypervolume values of the solution sets from DI-MOEA and NSGA-III in the preference region are worse than those of the solution sets from AP-DI-MOEA, which confirms that the mechanism indeed works in practice. 
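For the bi-objective case, the hypervolume indicator used in this comparison can be computed with a simple sweep over the non-dominated points; the following is a generic textbook-style sketch (not the MOEA Framework implementation), assuming minimization and a reference point that is worse in every objective:

```python
# 2-D hypervolume (minimization) of a point set w.r.t. a reference point.
# Generic sketch, shown only to illustrate the indicator used in the text.

def hypervolume_2d(points, ref):
    """Area dominated by `points` and bounded above by `ref`."""
    # keep only points that are at least as good as the reference point
    pts = sorted(p for p in points if p[0] <= ref[0] and p[1] <= ref[1])
    hv, prev_y = 0.0, ref[1]
    for x, y in pts:        # ascending x; y of non-dominated points descends
        if y < prev_y:      # skip dominated points
            hv += (ref[0] - x) * (prev_y - y)
            prev_y = y
    return hv

front = [(1.0, 4.0), (2.0, 2.0), (4.0, 1.0)]
print(hypervolume_2d(front, ref=(5.0, 5.0)))   # 11.0
```

A larger hypervolume means the set is closer to the Pareto front and better spread within the measured region.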
Figure~\\ref{box:ZDT} shows box plots of the distribution of the hypervolume indicator over 30 runs.\n\n\n\n\\begin{figure}[htbp]\n\\hspace{-0.58cm}\n\\subfigure[ZDT1]{\n \\begin{minipage}[t]{0.53\\linewidth}\n \\centering\n \\includegraphics[width=2in]{zdt1-di1-box1.png}\\\\\n \\vspace{0.02cm}\n \\includegraphics[width=2in]{zdt1-di2-box2.png}\\\\\n \\vspace{0.02cm}\n \n \\end{minipage}%\n}%\n\\subfigure[ZDT2]{\n \\begin{minipage}[t]{0.53\\linewidth}\n \\centering\n \\includegraphics[width=2in]{zdt2-di1-box1.png}\\\\\n \\vspace{0.02cm}\n \\includegraphics[width=2in]{zdt2-di2-box2.png}\\\\\n \\vspace{0.02cm}\n \n \\end{minipage}%\n}%\n\\caption{Boxplots comparing the hypervolume values on ZDT1 and ZDT2.}\n\\label{box:ZDT}\n\\end{figure}\n\n\n\\subsection{Experiments on three objective problems}\nDTLZ1 and DTLZ2 are chosen as three-objective benchmark problems to investigate our algorithms. They are run with a total budget of $120000$ fitness evaluations: when the number of evaluations reaches $60000$, the first preference region is formed, and after every subsequent $5000$ evaluations the preference region is updated. Figure~\\ref{fig:dtlz} shows the Pareto front approximations from a typical run on DTLZ1 (left column) and DTLZ2 (right column). The upper graphs are obtained from DI-1 and AP-DI-1, while the lower graphs are from DI-2 and AP-DI-2. In each graph, the Pareto front approximations from DI-MOEA and the corresponding AP-DI-MOEA are given. 
Since the target region is actually an axis-aligned box, the obtained knee region (i.e., the intersection of the axis-aligned box with the Pareto front) has an inverted triangle shape for these two benchmark problems.\n\n\\begin{figure*}[htbp]\n\\hspace{-3.5cm}\n\\subfigure[DTLZ1 3 objective problem]{\n \\begin{minipage}[t]{0.5\\linewidth}\n \\centering\n \\includegraphics[height=2.5in,width=4.4in]{dtlz1-di1-tdi1-w.png}\\\\\n \\vspace{0.45cm}\n \\includegraphics[height=2.5in,width=4.4in]{dtlz1-di2-tdi2-w.png}\\\\\n \n \n \\end{minipage}%\n}%\n\\hspace{2cm}\n\\subfigure[DTLZ2 3 objective problem]{\n \\begin{minipage}[t]{0.5\\linewidth}\n \\centering\n \\includegraphics[height=2.7in,width=4.4in]{dtlz2-di1-tdi1-w.png}\\\\\n \n \\includegraphics[height=2.7in,width=4.4in]{dtlz2-di2-tdi2-w.png}\\\\\n \n \n \\end{minipage}%\n}%\n\\caption{Pareto front approximation on DTLZ1 and DTLZ2.}\n\\label{fig:dtlz}\n\\end{figure*}\n\n\n\nTable~\\ref{table-kneedtlz} shows the space and dominance relation of the knee point from DI-MOEA and the solution set from AP-DI-MOEA over 30 runs. For the DTLZ1 problem, most knee points from DI-MOEA are in their respective preference regions and all knee points are mutually non-dominated with AP solutions. For the DTLZ2 problem, we observed that more knee points are not in the corresponding preference regions. This is because too few solutions from DI-MOEA are in the preference region: for the DTLZ1 problem, on average six solutions from DI-MOEA are in the corresponding preference region in each run, whereas for the DTLZ2 problem, on average fewer than two solutions are in the corresponding preference region. 
Therefore, we can see that, on the one hand, it is normal that many knee points from the entire Pareto fronts are not in their corresponding preference regions; on the other hand, our aim of obtaining a more fine-grained resolution in the preference region has been well achieved, because only a few solutions can be obtained in the preference region if we spread the population across the entire Pareto front. At the same time, one knee point from DI-1 on DTLZ2 is dominated by solutions from the corresponding AP-DI-1, which indicates that AP-DI-MOEA can converge better than DI-MOEA because AP-DI-MOEA focuses on the preference region.\n\n\\begin{table}[!htbp]\n\\centering\n\\caption{Space and dominance relation of knee point from DI-MOEA and AP solutions on DTLZ problems.}\n\\begin{tabular}{|c|c|c|c|c|c|}\n\\hline\n\\multicolumn{2}{|c|}{Problem} & \\multicolumn{2}{|c|}{DTLZ1} & \\multicolumn{2}{|c|}{DTLZ2}\\\\ \n\\cline{1-6}\n\\multicolumn{2}{|c|}{\\multirow{2}*{Algorithm}}& DI-1\/ & DI-2\/ & DI-1\/ & DI-2\/ \\\\\n\\multicolumn{2}{|c|}{ }& AP-DI-1 & AP-DI-2 & AP-DI-1 & AP-DI-2 \\\\\n\\hline\nIn & Incomparable & 29 & 27 & 10 & 13\\\\\n\\cline{2-6}\npreference & Dominated & 0 & 0 & 1 & 0 \\\\\n \\cline{2-6}\nregion & Dominating & 0 & 0 & 0 & 0 \\\\\n\\hline\nOutside & Incomparable & 1 & 3 & 19 & 17 \\\\\n\\cline{2-6}\np-region & Dominated & 0 & 0 & 0 & 0 \\\\\n\\hline\n\\end{tabular}\n\\label{table-kneedtlz}\n\\end{table} \n\n\nAP-DI-1 and AP-DI-2 have also been compared with NSGA-III in the same way; Table~\\ref{table-kneedtlz_nsga3} shows the comparison results. For DTLZ1, the average number of solutions from NSGA-III in the corresponding preference regions from AP-DI-MOEA is six. Still, almost all knee points from NSGA-III are in the preference region. 
For DTLZ2, the average number of solutions from NSGA-III in the corresponding preference region from AP-DI-MOEA is less than one; nevertheless, in more than half of the 30 runs, the knee point from NSGA-III is still in the preference region. To some extent, it can be concluded that the preference regions from AP-DI-MOEA are accurate. It can also be observed that AP-DI-1 behaves better than AP-DI-2 on DTLZ2, because two knee points from NSGA-III dominate solutions from AP-DI-2.\n\n\n\\begin{table}[!htbp]\n\\centering\n\\caption{Space and dominance relation of knee point from NSGA-III and AP solutions on DTLZ problems.}\n\\begin{tabular}{|c|c|c|c|c|c|}\n\\hline\n\\multicolumn{2}{|c|}{Problem} & \\multicolumn{2}{|c|}{DTLZ1} & \\multicolumn{2}{|c|}{DTLZ2}\\\\ \n\\cline{1-6}\n\\multicolumn{2}{|c|}{\\multirow{2}*{Algorithm}}& NSGA-III\/ & NSGA-III\/ & NSGA-III\/ & NSGA-III\/ \\\\\n\\multicolumn{2}{|c|}{ }& AP-DI-1 & AP-DI-2 & AP-DI-1 & AP-DI-2 \\\\\n\\hline\nIn & Incomparable & 30 & 29 & 14 & 17\\\\\n\\cline{2-6}\npreference & Dominated & 0 & 0 & 1 & 1 \\\\\n \\cline{2-6}\nregion & Dominating & 0 & 0 & 0 & 2 \\\\\n\\hline\nOutside & Incomparable & 0 & 1 & 15 & 10 \\\\\n\\cline{2-6}\np-region & Dominated & 0 & 0 & 0 & 0 \\\\\n\\hline\n\\end{tabular}\n\\label{table-kneedtlz_nsga3}\n\\end{table} \n\nSimilarly, we pick from DI-MOEA and NSGA-III the solutions which are in the corresponding preference region of AP-DI-MOEA, and compare the hypervolume indicator values of these solutions with those of the AP solutions. It has been found that all hypervolume values of the solutions from AP-DI-MOEA are better than those of the solutions from DI-MOEA and NSGA-III. 
The left column of Figure~\\ref{box:dtlz} shows box plots of the distribution of the hypervolume values over 30 runs on DTLZ1, and the right column shows the hypervolume comparison on DTLZ2.\n\n\n\\begin{figure}[htbp]\n\\hspace{-0.58cm}\n\\subfigure[DTLZ1]{\n \\begin{minipage}[t]{0.53\\linewidth}\n \\centering\n \\includegraphics[width=2in]{dtzl1-di1-box.png}\\\\\n \\vspace{0.02cm}\n \\includegraphics[width=2in]{dtlz1-di2-box.png}\\\\\n \\vspace{0.02cm}\n \n \\end{minipage}%\n}%\n\\subfigure[DTLZ2]{\n \\begin{minipage}[t]{0.53\\linewidth}\n \\centering\n \\includegraphics[width=2in]{dtlz2-di1-box.png}\\\\\n \\vspace{0.02cm}\n \\includegraphics[width=2in]{dtlz2-di2-box.png}\\\\\n \\vspace{0.02cm}\n \n \\end{minipage}%\n}%\n\n\\caption{Boxplots comparing the hypervolume values on DTLZ1 and DTLZ2.}\n\\label{box:dtlz}\n\\end{figure}\n\nIn our experiments, we use half of the total budget to find an initial Pareto front approximation because this turned out to be a good compromise: half of the budget for the initial Pareto front and the other half for the solutions focusing on the preference region. We also ran experiments using 25\\% and 75\\% of the total budget for the initial Pareto front. Figure~\\ref{fig:dtlz-budget} presents the entire Pareto front from DI-MOEA and the Pareto front from AP-DI-MOEA with different budgets for the initial Pareto front. The left two images are on DTLZ1 and the right two images are on DTLZ2; the upper two images are from DI-1 and AP-DI-1, the lower two images from DI-2 and AP-DI-2. In the legend labels, 50\\%, 25\\% and 75\\% indicate the budgets utilized to find the initial entire Pareto front. It can be observed that the preference region from AP-DI-MOEA with 50\\% of the budget is located at a better position than with 25\\% or 75\\% of the budget, and its position is more stable. 
Therefore, in our algorithm, 50\\% of the budget is used before the generation of the preference region.\n\n\\begin{figure*}[htbp]\n\\hspace{-3.5cm}\n\\subfigure[DTLZ1 3 objective problem]{\n \\begin{minipage}[t]{0.50\\linewidth}\n \\centering\n \\includegraphics[height=2.6in,width=4.28in]{dtlz1-di1-tdi1-3combine-1.png}\\\\\n \n \\includegraphics[height=2.6in,width=4.28in]{dtlz1-di2-tdi2-3combine-3.png}\\\\\n \n \n \\end{minipage}%\n}%\n\\hspace{1.5cm}\n\\subfigure[DTLZ2 3 objective problem]{\n \\begin{minipage}[t]{0.50\\linewidth}\n \\centering\n \\includegraphics[height=2.6in,width=4.2in]{dtlz2-di1-tdi1-3combine-16.png}\\\\\n \n \\includegraphics[height=2.6in,width=4.2in]{dtlz2-di2-tdi2-3combine-16.png}\\\\\n \n \n \\end{minipage}%\n}%\n\\caption{Pareto front approximation with different budgets for generating the initial Pareto front.}\n\\label{fig:dtlz-budget}\n\\end{figure*}\n\n\\subsection{Experiments on Vehicle Fleet Maintenance Scheduling Optimization}\nA budget of $1200000$ evaluations has been used for the real-world application problems, of which $600000$ evaluations are spent on the initial Pareto front; after that, the preference region is updated after every $50000$ evaluations.\nThe VFMSO problem has been tested with different sizes. Figure~\\ref{fig:20cars} shows Pareto front approximations of a problem with $20$ cars and $3$ workshops (V1), where each car contains $13$ components: one engine, four springs, four brakes and four tires \\cite{van2019modeling}. It can be observed that AP-DI-1 and AP-DI-2 zoom in on the entire Pareto front and find solutions in the preference region; at the same time, both AP-DI-1 and AP-DI-2 converge better than the corresponding DI-1 and DI-2. A similar conclusion can be drawn from the Pareto front approximations of the problem with $30$ cars and $5$ workshops (V2) in Figure~\\ref{fig:30cars}.\n\n\nIn Figure~\\ref{fig:2030cars}, we put the Pareto front approximations from DI-MOEA, AP-DI-MOEA and NSGA-III on V1 (left) and V2 (right) together. 
The behaviours of DI-1, DI-2 and NSGA-III are similar on V1, as are the behaviours of AP-DI-1 and AP-DI-2 on this problem. However, on the V2 problem, DI-2 and AP-DI-2 converge better than DI-1 and AP-DI-1, and the behaviour of NSGA-III lies between that of DI-1 and DI-2.\n\n\n\\begin{figure*}[htbp]\n\\hspace{-3.8cm}\n\\subfigure[DI-1 \\& AP-DI-1]{\n \\begin{minipage}[t]{0.5\\linewidth}\n \\centering\n \\includegraphics[height=2.7in,width=4in]{20car-di1-tdi1-legend.png}\\\\\n \n \\end{minipage}%\n}%\n\\hspace{3.2cm}\n\\subfigure[DI-2 \\& AP-DI-2]{\n \\begin{minipage}[t]{0.5\\linewidth}\n \\centering\n \\includegraphics[height=2.7in,width=4in]{20car-di2-tdi2-legend.png}\\\\\n \n \\end{minipage}%\n}%\n\\caption{Pareto front approximation on VFMSO problem with 20 cars and 3 workshops.}\n\\label{fig:20cars}\n\\end{figure*}\n\n\\begin{figure*}[htbp]\n\\hspace{-3.8cm}\n\\subfigure[DI-1 \\& AP-DI-1]{\n \\begin{minipage}[t]{0.5\\linewidth}\n \\centering\n \\includegraphics[height=2.7in,width=4in]{30car-di1-tdi1-legend.png}\\\\\n \n \\end{minipage}%\n}%\n\\hspace{3.2cm}\n\\subfigure[DI-2 \\& AP-DI-2]{\n \\begin{minipage}[t]{0.5\\linewidth}\n \\centering\n \\includegraphics[height=2.7in,width=4in]{30car-di2-tdi2-legend.png}\\\\\n \n \\end{minipage}%\n}%\n\\caption{Pareto front approximation on VFMSO problem with 30 cars and 5 workshops.}\n\\label{fig:30cars}\n\\end{figure*}\n\n\n\n\\begin{figure*}[htbp]\n\\hspace{-3.5cm}\n\\subfigure[V1]{\n \\begin{minipage}[t]{0.5\\linewidth}\n \\centering\n \\includegraphics[height=2.7in,width=4in]{20car-di1-tdi1-di2-tdi2-ns3.png}\\\\\n \n \\end{minipage}%\n}%\n\\hspace{3cm}\n\\subfigure[V2]{\n \\begin{minipage}[t]{0.5\\linewidth}\n \\centering\n \\includegraphics[height=2.7in,width=4in]{30car-di1-tdi1-di2-tdi2-ns3.png}\\\\\n \n \\end{minipage}%\n}%\n\n\\caption{Pareto front approximation on VFMSO problem by DI-MOEA, AP-DI-MOEA and NSGA-III.}\n\\label{fig:2030cars}\n\\end{figure*}\n\n\nTable~\\ref{table-v1} gives the space and dominance relation of 
knee points from DI-MOEA and solutions from AP-DI-MOEA on these two VFMSO problems. For both problems, only a few knee points from DI-MOEA are in the preference regions of AP-DI-MOEA; the main reason is that the Pareto front of AP-DI-MOEA converges better than that of DI-MOEA, and in some cases the Pareto front of DI-MOEA cannot even reach the corresponding preference region. More importantly, it can be observed that most knee points from DI-MOEA, whether inside or outside the preference region, are dominated by the solutions from AP-DI-MOEA. This phenomenon is even more pronounced for the larger application problem run with the same budget as the smaller one: for V2, 90\\% of the knee points from DI-MOEA are dominated by the solutions from AP-DI-MOEA.\n\n\\begin{table}[!htbp]\n\\centering\n\\caption{Space and dominance relation of knee points from DI-MOEA and AP solutions on V1 and V2.}\n\\begin{tabular}{|c|c|c|c|c|c|}\n\\hline\n\\multicolumn{2}{|c|}{Problem} & \\multicolumn{2}{|c|}{V1} & \\multicolumn{2}{|c|}{V2}\\\\ \n\\cline{1-6}\n\\multicolumn{2}{|c|}{\\multirow{2}*{Algorithm}}& DI-1\/ & DI-2\/ & DI-1\/ & DI-2\/ \\\\\n\\multicolumn{2}{|c|}{ }& AP-DI-1 & AP-DI-2 & AP-DI-1 & AP-DI-2 \\\\\n\\hline\nIn & Incomparable & 0 & 0 & 0 & 0\\\\\n\\cline{2-6}\npreference & Dominated & 9 & 7 & 9 & 6 \\\\\n \\cline{2-6}\nregion & Dominating & 0 & 0 & 0 & 0 \\\\\n\\hline\nOutside & Incomparable & 4 & 9 & 3 & 3 \\\\\n\\cline{2-6}\np-region & Dominated & 17 & 14 & 18 & 21 \\\\\n\\hline\n\\end{tabular}\n\\label{table-v1}\n\\end{table} \n\n\nTable~\\ref{table-v1-nsga3} gives the space and dominance relation of knee points from NSGA-III and AP solutions. For both problems, again, most knee points from NSGA-III are not in the preference regions of AP-DI-MOEA. Some knee points from NSGA-III are dominated by AP solutions, while most of them are incomparable with AP solutions. 
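The classification used in these comparisons rests on standard Pareto dominance for minimization problems. A minimal sketch of it is shown below; this is illustrative only and not the authors' implementation, and the three-objective vectors are made up (the real objective values come from the VFMSO formulation):

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (all objectives minimized)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def classify(knee_point, ap_solutions):
    """Label a knee point against the AP solution set, as in the tables:
    'dominated' if some AP solution dominates it, 'dominating' if it
    dominates some AP solution, otherwise 'incomparable'."""
    if any(dominates(s, knee_point) for s in ap_solutions):
        return "dominated"
    if any(dominates(knee_point, s) for s in ap_solutions):
        return "dominating"
    return "incomparable"

# made-up three-objective vectors standing in for AP-DI-MOEA solutions
ap_front = [(1.0, 2.0, 3.0), (2.0, 1.5, 2.5)]
print(classify((1.5, 2.5, 3.5), ap_front))  # prints: dominated
```

Counting these labels over all knee points, separately for points inside and outside the preference region, reproduces the structure of the tables.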
\n\n\n\\begin{table}[!htbp]\n\\centering\n\\caption{Space and dominance relation of knee points from NSGA-III and AP solutions on V1 and V2.}\n\\begin{tabular}{|c|c|c|c|c|c|}\n\\hline\n\\multicolumn{2}{|c|}{Problem} & \\multicolumn{2}{|c|}{V1} & \\multicolumn{2}{|c|}{V2}\\\\ \n\\cline{1-6}\n\\multicolumn{2}{|c|}{\\multirow{2}*{Algorithm}}& NSGA-III\/ & NSGA-III\/ & NSGA-III\/ & NSGA-III\/ \\\\\n\\multicolumn{2}{|c|}{ }& AP-DI-1 & AP-DI-2 & AP-DI-1 & AP-DI-2 \\\\\n\\hline\nIn & Incomparable & 0 & 0 & 0 & 1\\\\\n\\cline{2-6}\npreference & Dominated & 0 & 1 & 3& 2 \\\\\n \\cline{2-6}\nregion & Dominating & 0 & 0 & 1 & 1 \\\\\n\\hline\nOutside & Incomparable & 23 & 24 & 21 & 18 \\\\\n\\cline{2-6}\np-region & Dominated & 7 & 5 & 5 & 8 \\\\\n\\hline\n\\end{tabular}\n\\label{table-v1-nsga3}\n\\end{table} \n\n\\iffalse \n The right image of Figure~\\ref{fig:30cars-2} presents the entire Pareto front approximations of V2 from four different MOEAs: DI-1, DI-2, RVEA and NSGA-III. It can be seen that DI-MOEA (both DI-1 and DI-2) and NSGA-III converge to the similar area in the end, while, RVEA reaches another area of the objective space. Table~\\ref{table_hy} provides the average hypervolume value of the four Pareto fronts from 30 runs and the reference point for each run is formed by the largest objective value from all solutions. It can be seen that DI-2 behaves the best and RVEA the worst.\n\n\n\\begin{table}[htbp]\n\\caption{Hypervolume values}\n\\label{table_hy}\n\\begin{center}\n\\begin{tabular}{l|c}\n\\hline\nDI-1 & 0.0525\\\\\n\\hline\nDI-2 & 0.0576\\\\\n\\hline\nRVEA & 0.0202\\\\\n\\hline\nNSGAIII & 0.0534\\\\\n\\hline\n\\end{tabular}\n\\end{center}\n\\end{table}\n\\fi\n\n\n\n\n\n\\section{CONCLUSIONS}\n\\label{sec:conclusion}\nIn this paper, a preference-based multi-objective evolutionary algorithm, AP-DI-MOEA, is proposed. In the absence of explicitly provided preferences, the knee region is usually treated as the region of interest or preference region. 
Given this, AP-DI-MOEA can generate the knee region automatically and can find solutions with a more fine-grained resolution in the knee region. This has been demonstrated on the bi-objective problems ZDT1 and ZDT2, and the three-objective problems DTLZ1 and DTLZ2. The new approach was also shown to perform better than NSGA-III, which was included in the benchmark as a state-of-the-art reference algorithm.\n\nThe research for the preference-based algorithm was originally motivated by a real-world optimization problem, namely, Vehicle Fleet Maintenance Scheduling Optimization (VFMSO), which is described in this paper in a new formulation as a three-objective discrete optimization problem. A customized set of operators (initialization, recombination, and mutation) is proposed for a multi-objective evolutionary algorithm with a selection strategy based on DI-MOEA and, respectively, AP-DI-MOEA. The experimental results of AP-DI-MOEA on two real-world application problem instances of different scales show that the newly proposed algorithm can generate preference regions automatically and that it (in both cases) finds clearly better and more concentrated solution sets in the preference region than DI-MOEA. For completeness, it was also tested against NSGA-III, and AP-DI-MOEA was observed to achieve a better approximation in the preference region.\n\nSince the real-world VFMSO problem is our core concern and its Pareto front is convex, we did not consider problems with an irregular shape.\nHow to adapt the algorithm to problems with more irregular shapes remains an interesting open question. Besides, the proposed approach requires a definition of knee points. Future work will provide a more detailed comparison of different variants of methods to generate knee points, as briefly introduced in Section \\ref{sec:literature}. 
In the application of maintenance scheduling, it will also be important to integrate robustness and uncertainty in the problem definition. It is desirable to generate schedules that are robust within a reasonable range of disruptions and uncertainties such as machine breakdowns and processing time variability.\n\n\n\n\\section*{Acknowledgment}\nThis work is part of the research programme Smart Industry SI2016 with project name CIMPLO and project number 15465, which is (partly) financed by the Netherlands Organisation for Scientific Research (NWO).\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\\label{introduction}\n\nThe properties of physical systems in the vicinity of a critical\npoint, such as critical exponents and amplitude ratios, can be\nextracted by a variety of methods, ranging from exact solutions to\nMonte Carlo simulations.\n\nIn the absence of exact results, one of the most successful approaches\nis based on the investigation of the strong-coupling series expansion,\nwhich enjoys the property of a finite radius of convergence, often\n(but not necessarily) coinciding with the extent of the\nhigh-temperature phase. More generally, when no singular points occur\non the real axis of the complex coupling plane, it is possible to\nexploit strong-coupling results even beyond the convergence radius by\nanalytic continuations, which are based on appropriate resummation\nmethods. 
Extending the length of the strong-coupling series and\nimproving the accuracy of the resummations are therefore the two most\ncompelling tasks within this approach to the study of the behavior of\nsystems in the critical region.\n\nAs part of an extended program of strong-coupling calculations we have\nrecently computed an extended series expansion of all nontrivial\ntwo-point Green's functions\n\\begin{equation}\nG(x) = \\left<{\\vec s}(0)\\cdot{\\vec s}(x)\\right>\n\\end{equation}\nfor the nearest-neighbor lattice formulation of two-dimensional \n${\\rm O}(N)$ $\\sigma$ models on the square, triangular, and honeycomb\nlattices, respectively up to 21st, 15th, and 30th order in the\nstrong-coupling expansion parameter $\\beta$. A complete presentation\nof our strong-coupling computations for ${\\rm O}(N)$ $\\sigma$ models\nin two and three dimensions will appear in a forthcoming paper.\nA preliminary report of our calculations can be found in \nRef.~\\cite{lattice95}.\n\nThe relevance of a better understanding of 2-$d$ ${\\rm O}(N)$ $\\sigma$\nmodels cannot be overestimated. They appear in condensed matter\nliterature as prototype models for critical phenomena that are\nessentially restricted to two-dimensional layers, including some\ninstances of high-$T_c$ superconductivity. Moreover, they can be\nemployed as model field theories sharing some of the most peculiar\nfeatures of four-dimensional gauge theories, such as asymptotic\nfreedom and spontaneous mass generation. This last statement must\nhowever be qualified, since the above-mentioned properties, according\nto common lore, are possessed only by those 2-$d$ ${\\rm O}(N)$ models\nsuch that $N>2$.\n\nWe focus here on these asymptotically free models, analyzing their\nstrong-coupling expansion in order to extract information that may be\nrelevant to the description of their continuum limit\n($\\beta\\to\\infty$), assuming $\\beta_c=\\infty$ to be the only\nsingularity on the real axis. 
This hypothesis is favored by all\nnumerical evidence as well as by the successful application of the\nextrapolation techniques that we shall discuss in the present paper.\nThe analysis of our strong-coupling series for \nmodels with $N\\geq 2$ is presented in Ref.~\\cite{Nm2}.\n\nIt is obviously quite hard to imagine that strong-coupling techniques\nmay be really accurate in describing the divergent behavior of such\nquantities as the correlation length and the magnetic susceptibility.\nNevertheless, as our calculations will explicitly confirm, the\nstrong-coupling analysis may provide quite accurate continuum-limit\nestimates when applied directly to dimensionless,\nrenormalization-group invariant ratios of physical quantities. Two\nbasic ideas will make this statement more convincing.\n\n(i) For any dimensionless, renormalization-group invariant ratio\n$R(\\beta)$, when $\\beta$ is sufficiently large we may expect a\nbehavior \n\\begin{equation}\nR(\\beta)-R^*\\sim {1\\over \\xi^2(\\beta)},\n\\label{scalR}\n\\end{equation}\nwhere $R^*$ is the fixed point (continuum) value and $\\xi$ is the\n(diverging) correlation length. Hence a reasonable estimate of $R^*$\nmay be obtained at the values of $\\beta$ corresponding to large but\nfinite correlation lengths, where the function $R(\\beta)$ flattens.\nThis is essentially the same idea underlying Monte Carlo studies of\nasymptotically free theories, based on the identification of the\nso-called scaling region.\n\n(ii) On physical grounds, it is understandable that $\\beta$ is not\nnecessarily the most convenient variable to parameterize phenomena\noccuring around $\\beta=\\infty$. An interesting alternative is based\non the observation that the strong-coupling series of the internal\nenergy\n\\begin{equation}\nE = \\beta + O(\\beta^3)\n\\end{equation}\nmay be inverted to give $\\beta$ as a series in $E$. 
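As an illustration of this inversion, the sketch below performs the series reversion by fixed-point iteration on truncated power series with exact rational coefficients. It is a toy example: the cubic coefficient $c=1/4$ is made up and is not the actual strong-coupling coefficient of any of the models considered here.

```python
from fractions import Fraction

def mul(a, b, n):
    """Product of two truncated power series (coefficient lists, index = power),
    keeping only powers below n."""
    out = [Fraction(0)] * n
    for i, ai in enumerate(a):
        if ai:
            for j, bj in enumerate(b):
                if i + j < n:
                    out[i + j] += ai * bj
    return out

N = 7                 # keep terms up to E^6
c = Fraction(1, 4)    # made-up coefficient in E(beta) = beta + c*beta^3
# Rewrite E = beta + c*beta^3 as beta = E - c*beta^3 and iterate;
# each pass fixes at least two more orders of the inverse series.
beta = [Fraction(0), Fraction(1)] + [Fraction(0)] * (N - 2)   # beta ~ E
for _ in range(4):
    cube = mul(mul(beta, beta, N), beta, N)
    beta = [Fraction(0), Fraction(1)] + [-c * cube[k] for k in range(2, N)]
# beta(E) = E - E^3/4 + 3*E^5/16 + O(E^7)
```

Composing back, $\beta(E)+c\,\beta(E)^3=E$ up to the truncation order, and substituting such a series into any other strong-coupling expansion re-expresses it in powers of $E$.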
This series may\nbe substituted into other strong-coupling expansions, obtaining\nexpressions for physical quantities as power series in $E$. It might\nnow be easier to reach the continuum limit, since it now occurs at a\nfinite value of the expansion variable, i.e., $E\\to1$.\n\nWe hope to convince the reader that, by exploiting these ideas,\nstate-of-the-art strong-coupling calculations can be made at least as\naccurate as the best Monte Carlo simulations presently available,\nwhen applied to dimensionless renormalization-group invariant quantities.\n\nWe must stress that the analysis of the strong-coupling series\ncalculated on different lattices offers a possibility of testing\nuniversality, and, on the other side, once universality is assumed, it\nrepresents a further check for possible systematic errors and allows\ntheir quantitative estimate; this estimate is usually a difficult task\nin strong-coupling extrapolation methods such as those based on Pad\\'e\napproximants and their generalizations.\n\nOur physical intuition of the behavior of ${\\rm O}(N)$ models is\nstrongly guided by our knowledge of their large-$N$ behavior, and by\nthe evidence of a very weak dependence on $N$ of the dimensionless\nratios. In order to extend our understanding to those lattices that\nhave not till now received a systematic treatment, and also in order\nto establish a benchmark for the strong-coupling analysis, we decided\nto start our presentation with a detailed comparative study of the\nlarge-$N$ limit of various lattices, in the nearest-neighbor\nformulation. 
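For reference, the Padé-approximant resummation mentioned above can be sketched in a few lines. This is a generic $[L/M]$ construction from series coefficients in exact rational arithmetic, checked here against the exponential series; it is not the authors' actual extrapolation code:

```python
from fractions import Fraction

def solve(A, b):
    """Gaussian elimination over exact fractions (small dense systems)."""
    n = len(b)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(n):
        piv = next(r for r in range(col, n) if M[r][col] != 0)
        M[col], M[piv] = M[piv], M[col]
        for r in range(n):
            if r != col and M[r][col] != 0:
                f = M[r][col] / M[col][col]
                M[r] = [x - f * y for x, y in zip(M[r], M[col])]
    return [M[i][n] / M[i][i] for i in range(n)]

def pade(c, L, M):
    """[L/M] Pade approximant from series coefficients c[0..L+M].
    Returns (numerator, denominator) coefficient lists, denominator[0] = 1."""
    # denominator from the linear system sum_j den[j] * c[L+k-j] = 0, k = 1..M
    A = [[c[L + k - j] if 0 <= L + k - j < len(c) else Fraction(0)
          for j in range(1, M + 1)] for k in range(1, M + 1)]
    rhs = [-c[L + k] for k in range(1, M + 1)]
    den = [Fraction(1)] + solve(A, rhs)
    num = [sum(den[j] * c[i - j] for j in range(min(i, M) + 1)) for i in range(L + 1)]
    return num, den

# check on exp(x) = 1 + x + x^2/2 + ...: the [1/1] Pade is (1 + x/2)/(1 - x/2)
num, den = pade([Fraction(1), Fraction(1), Fraction(1, 2)], 1, 1)
```

The rational form can then be evaluated beyond the convergence radius of the truncated series, which is the point of the resummation.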
To the best of our knowledge, only the large-$N$\nsolution on the square lattice was already known explicitly\n\\cite{sqNi}.\n\nThe paper is organized as follows:\n\nIn Sec.~\\ref{secNi} we present the large-$N$ limit solution of ${\\rm\n O}(N)$ $\\sigma$ models on the square, triangular and honeycomb\nlattices, in the nearest-neighbor formulation, calculating several\nphysical quantities and showing explicitly the expected universality\nproperties. The triangular- and honeycomb-lattice results are\noriginal, and possess some intrinsic reasons of interest. However,\nreaders willing to focus on square-lattice results are advised to jump\nto Sec.~\\ref{SCA} after reading Subs.~\\ref{secse} and \\ref{secsqNi},\nwhere the notation is fixed.\n\nSec.~\\ref{SCA} is devoted to a detailed analysis of the available\nstrong-coupling series of $G(x)$ and other physical quantities on the\nsquare, triangular, and honeycomb lattices. Most of the results we\nshall show there concern the $N=3$ model. The basic motivation for\nthis choice lies in the observation that all dependence in $N$ is\nmonotonic between 3 and $\\infty$; hence the discussion of higher-$N$\nresults would be only a boring repetition of the considerations\npresented here. The reader not interested in the analysis of\ntriangular and honeycomb lattices may skip most of the discussion, by\nfocusing on Subs.~\\ref{scsq}, where further definitions are introduced\nand the square-lattice series are analyzed, and on Subs.~\\ref{concl},\nwhere all conclusions are drawn.\n\nApps.~\\ref{apptr} and \\ref{appex} provide the derivation and the\ntechnical details of the large-$N$ calculations on the triangular and\nhoneycomb lattices respectively. 
We present as well the calculation\nof the $\\Lambda$-parameters.\n\nApp.~\\ref{singNinf} is a study of the complex temperature\nsingularities of the $N=\\infty$ partition functions on the \ntriangular and honeycomb lattices.\n\nIn Apps.~\\ref{appscsq}, \\ref{appsctr} and \\ref{appscex} we present,\nfor selected values of $N$, the strong-coupling series of some\nrelevant quantities on the square, triangular, and honeycomb lattice\nrespectively.\n\n\n\\section{The large-$\\protect\\bbox{N}$ limit of lattice \n$\\protect\\bbox{{\\rm O}(N)}$ $\\protect\\bbox{\\sigma}$ models}\n\\label{secNi}\n\n\n\\subsection{The large-$\\protect\\bbox{N}$ saddle point equation}\n\\label{secse}\n\nThe nearest-neighbor lattice formulations on square, triangular and\nhoneycomb lattices are defined by the action\n\\begin{equation}\nS_L= -N\\beta\\sum_{\\rm links} {\\vec s}_{x_l}\\cdot {\\vec s}_{x_r},\n\\qquad {\\vec s}_x\\cdot {\\vec s}_x = 1,\n\\label{lattaction}\n\\end{equation}\nwhere $\\vec s$ is a $N$-component vector, the sum is performed over\nall links of the lattice and $x_l,x_r$ indicate the sites at the ends\nof each link. The coordination number is $c=4,6,3$ respectively for\nthe square, triangular and honeycomb lattice. The lattice spacing\n$a$, which represents the length unit, is defined to be the length of\na link. 
The volume per site is then $v_s=1,\\sqrt{3}\/2, 3\\sqrt{3}\/4$\n(in unit of $a^2$) respectively for the square, triangular, and\nhoneycomb lattice.\n\nStraightforward calculations show that the correct continuum\nlimit of ${\\rm O}(N)$ $\\sigma$ models, \n\\begin{equation}\nS= {N\\over 2t} \\int d^2x\\, \\partial_\\mu {\\vec s}(x)\\cdot\n\\partial_\\mu {\\vec s}(x),\n\\qquad {\\vec s}(x)\\cdot {\\vec s}(x) = 1,\n\\label{contaction}\n\\end{equation}\nis obtained by identifying \n\\begin{equation}\nt={1\\over \\beta}, \\ {1\\over \\sqrt{3}\\beta},\\ \n{\\sqrt{3}\\over \\beta},\n\\label{temp}\n\\end{equation}\nrespectively for the square, triangular and honeycomb lattice.\nNotice that \n\\begin{equation}\n\\lambda\\equiv t\\beta = {4v_s\\over c}\n\\label{tbeta}\n\\end{equation}\nis the distance between nearest-neighbor\nsites of the dual lattice in unit of the lattice spacing $a$.\n\nWhen the number of field components $N$ per site goes to infinity,\none can use a saddle point equation to evaluate the partition \nfunction. 
Replacing the constraint $\\vec s_x^{\\,2}=1$\nby a Fourier integral over \na conjugate variable $\\alpha_x$, we write the partition\nfunction as\n\\begin{eqnarray}\nZ&&\\propto \\int \\prod_x d{\\vec s}_x \\,\\delta( \\vec s_x^{\\,2}-1)\\,\n\\exp N\\beta \\sum_{\\rm links} {\\vec s}_{x_l}\\cdot {\\vec s}_{x_r}\n\\nonumber \\\\\n&&\\propto\\int \\prod_x d\\phi_x d\\alpha_x \\,\\exp \nN\\left[ \\sum_x i{\\alpha_x\\over 2}\\left( 1 - \\phi_x^2\\right)\n-{\\beta\\over 2}\\sum_{\\rm links} \\left( \n\\phi_{x_l}-\\phi_{x_r}\\right)^2\\right].\n\\label{fp}\n\\end{eqnarray}\nIntegrating out the $\\phi$ variables we arrive at\nthe expression \n\\begin{equation}\nZ\\propto \\int d\\alpha_x \\,\\exp {N\\over 2}\n\\left( \\sum_x i\\alpha_x - {\\rm Tr}\\,\\ln R\\right),\n\\label{fp2}\n\\end{equation}\nwhere \n\\begin{equation}\nR_{xy}= -{1\\over t} \\Delta_{xy} + i\\alpha_x\\delta_{xy},\n\\label{R}\n\\end{equation}\nand $\\Delta_{xy}$ is a generalized Laplacian operator, such\nthat \n\\begin{equation}\n\\lambda\\, \\sum_{\\rm links} \\left(\\phi_{x_l}-\\phi_{x_r}\\right)^2\n= -\\sum_{x,y} \\phi_x \\Delta_{xy} \\phi_y.\n\\label{Q}\n\\end{equation}\n\nThe large-$N$ limit solution is obtained from the variational \nequation with respect to $\\alpha_x$.\nLooking for a translation invariant solution we set\n\\begin{equation}\ni\\alpha_x={v_s\\over t}\\,z.\n\\label{costalp}\n\\end{equation}\nThe matrix $R$ then becomes\n\\begin{equation}\nR_{xy}= {1\\over t} \\left[ -\\Delta_{xy}+zv_s\\delta_{xy}\\right],\n\\label{R2}\n\\end{equation}\nand the saddle point equation is written as\n\\begin{equation}\n1=\\lim_{N_s\\rightarrow\\infty}\n{1\\over N_s} {\\rm Tr}\\,R^{-1},\n\\label{spe}\n\\end{equation}\nwhere $N_s$ is the number of sites.\n\nThe large-$N$ fundamental two-point Green's function is\nobtained by\n\\begin{equation}\nG(x-y)= R^{-1}_{xy}.\n\\label{NiGx}\n\\end{equation}\n\nIn order to calculate the trace of $R^{-1}$, the easiest\nprocedure consists in Fourier transforming the 
operator\n$R$. Such a transformation is straightforward on lattices,\nsuch as the square and triangular lattices, whose sites\nare related by a translation group, and in these cases it\nyields the diagonalization of the matrix $R_{xy}$.\nThe honeycomb lattice, not possessing a full translation\nsymmetry, presents some complications. \nIn this case a partial diagonalization of $R_{xy}$ can be \nachieved following the procedure outlined in Ref.~\\cite{SCUN2}.\n\n\n\\subsection{The square lattice}\n\\label{secsqNi}\n\nTurning to momentum space, the variational equation\nbecomes\n\\begin{equation}\n{1\\over t}=\\beta =\\int_{-\\pi}^\\pi {d^2 k\\over (2\\pi)^2}\n{1\\over \\widehat{k}^2+z}={1\\over 2\\pi} \n\\rho_{\\rm s}(z) K\\left( \\rho_{\\rm s}(z) \\right), \n\\label{sesq}\n\\end{equation}\nwhere \n\\begin{equation}\n\\rho_{\\rm s}(z)=\\left(1 + {1\\over 4}z\\right)^{-1},\n\\label{rhos}\n\\end{equation}\nand $K$ is the complete elliptic integral of the first kind.\n\nLet us define the moments of $G(x)$\n\\begin{equation}\nm_{2j}\\equiv\\sum_x (x^2)^j \\,G(x).\n\\label{momgx}\n\\end{equation}\nStraightforward calculations lead to the following\nresults\n\\begin{equation}\n\\chi\\equiv m_0 = {t\\over z},\n\\label{chisqin}\n\\end{equation}\n\\begin{equation}\n\\xi_{G}^2 \\equiv M_{G}^{-2} \\equiv \n{m_2\\over 4\\chi}= {1\\over z},\n\\label{xigsqin}\n\\end{equation}\n\\begin{equation}\n u\\equiv {m_2^2\\over \\chi m_4}=\n{1\\over 4}\\left( 1 + {z\\over 16}\\right)^{-1}.\n\\label{omsqin}\n\\end{equation}\nNotice that in the large-$N$ limit the renormalization constant of the\nfundamental field is $Z=t$. 
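The square-lattice saddle point equation can be inverted numerically for $z$ at a given $\beta$. The sketch below is an illustration (not the authors' code): it evaluates the complete elliptic integral $K$ through the arithmetic-geometric mean and solves the equation by bisection; at large $\beta$ the resulting mass gap $M_G=\sqrt{z}$ approaches the asymptotic-scaling form $4\sqrt{2}\,e^{-2\pi\beta}$ quoted later in this subsection.

```python
import math

def ellipk(k):
    """Complete elliptic integral of the first kind, K(k), via the
    arithmetic-geometric mean: K(k) = pi / (2 * agm(1, sqrt(1 - k^2)))."""
    a, b = 1.0, math.sqrt(1.0 - k * k)
    while abs(a - b) > 1e-15:
        a, b = 0.5 * (a + b), math.sqrt(a * b)
    return math.pi / (2.0 * a)

def beta_of_z(z):
    """Right-hand side of the square-lattice saddle-point equation."""
    rho = 1.0 / (1.0 + 0.25 * z)
    return rho * ellipk(rho) / (2.0 * math.pi)

def z_of_beta(beta, lo=1e-12, hi=1e3):
    """Invert beta_of_z by bisection on a log scale (it decreases with z)."""
    for _ in range(200):
        mid = math.sqrt(lo * hi)
        if beta_of_z(mid) > beta:
            lo = mid          # beta_of_z(mid) still too large: z must grow
        else:
            hi = mid
    return math.sqrt(lo * hi)

z = z_of_beta(1.0)
mass_gap = math.sqrt(z)                                   # M_G = sqrt(z)
asym = 4.0 * math.sqrt(2.0) * math.exp(-2.0 * math.pi)    # asymptotic scaling at beta = 1
```

Already at $\beta=1$ the bisection result agrees with the asymptotic-scaling expression to better than one percent.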
$u$ is a renormalization-group invariant\nquantity.\n\nThe mass-gap should be extracted from the long distance\nbehavior of the two-point Green's function, which is also\nrelated to the imaginary momentum singularity of the \nFourier transform of $G(x)$.\nIn the absence of strict rotation invariance, one may actually\ndefine different estimators of the mass-gap having \nthe same continuum limit.\nOn the square lattice one may consider $\\mu_{\\rm s}$ and \n$\\mu_{\\rm d}$ obtained respectively from the equations\n\\begin{eqnarray}\n&&\\tilde{G}^{-1}(p_1=i\\mu_{\\rm s},p_2=0)=0,\\nonumber \\\\\n&&\\tilde{G}^{-1}\\left(p_1=i{\\mu_{\\rm d}\\over\\sqrt{2}},\np_2=i{\\mu_{\\rm d}\\over\\sqrt{2}}\\right) = 0.\n\\label{msmd}\n\\end{eqnarray}\n$\\mu_{\\rm s}$ and $\\mu_{\\rm d}$ \ndetermine respectively the long\ndistance behavior of the side and diagonal wall-wall\ncorrelations constructed with $G(x)$. \nIn generalized Gaussian models, such as the large-$N$ limit\nof ${\\rm O}(N)$ models, it turns out to be convenient to define\nthe following quantities \n\\begin{eqnarray}\n&&M_{\\rm s}^2= 2\\left( {\\rm cosh} \n\\mu_{\\rm s} - 1\\right),\\nonumber \\\\\n&&M_{\\rm d}^2= 4\\left( {\\rm cosh} \n{\\mu_{\\rm d}\\over \\sqrt{2}} -1\\right).\n\\label{MsMd}\n\\end{eqnarray}\nIn the continuum limit \n\\begin{equation}\n{M_{\\rm s}\\over \\mu_{\\rm s}}\\,,{M_{\\rm d}\n\\over \\mu_{\\rm d}}\\rightarrow 1,\n\\label{msmd2}\n\\end{equation}\ntherefore $M_{\\rm s}$ and $M_{\\rm d}$ may also be used\nas estimators of the mass-gap. 
\n\nIn the large-$N$ limit \n\\begin{equation}\nM_{\\rm s}^2 = M_{\\rm d}^2 = z=M_{G}^2.\n\\label{msmdmg}\n\\end{equation}\n\nThe rotational invariance of $G(x)$ at large distance,\n$d\\gg\\xi$, is checked by the ratios $\\mu_{\\rm s}\/\\mu_{\\rm d}$.\nUsing the above results one can evaluate the scaling violation terms:\n\\begin{equation}\n{\\mu_{\\rm s}\\over \\mu_{\\rm d}}=\n{ \n\\ln\\left( {1\\over 2}\\sqrt{z} + \\sqrt{ 1 + \n{1\\over 4}z}\\right)\\over\n\\sqrt{2}\\ln\\left( {1\\over 2\\sqrt{2}}\\sqrt{z} + \\sqrt{ 1 + \n{1\\over 8}z}\\right)}\n = 1 - {1\\over 48}z + {71\\over 23040}z^2+\nO\\left(z^3\\right).\n\\label{rotviol}\n\\end{equation}\n \nAnother test of scaling is provided by the ratio\n\\begin{equation}\n{\\mu_{\\rm s}\\over M_{G}}\n= {2\\over \\sqrt{z}}\n\\ln\\left( {\\sqrt{z}\\over 2} + \\sqrt{ 1 + \n{z\\over 4}}\\right)=\n1 - {1\\over 24}z + {3\\over 640}z^2+\nO\\left(z^3\\right).\n\\label{scalviol}\n\\end{equation}\n\nThe internal energy can be easily calculated obtaining\n\\begin{equation}\nE \\equiv \\langle {\\vec s}_x\\cdot {\\vec s}_{x+\\mu} \\rangle =\nR^{-1}_{x,x+\\mu}=\n1\\,-\\,{1\\over 4\\beta}\\,+\\,{z\\over 4}.\n\\label{energysq}\n\\end{equation}\nTherefore\n\\begin{equation}\n{1\\over 2}\\sum_\\mu\n\\langle ({\\vec s}_{x+\\mu}-{\\vec s}_x)^2 \\rangle =\n{1\\over 2\\beta}\\,-\\,{z\\over 2},\n\\label{condlatt}\n\\end{equation}\nwhere the term proportional to $z$ is related to the condensate $T$ of\nthe trace of the energy-momentum tensor~\\cite{CRcond}\n\\begin{equation}\n{\\beta(t)\\over 2t^2}\n\\partial_\\mu {\\vec s}(x)\\cdot\\partial_\\mu {\\vec s}(x).\n\\label{temtr}\n\\end{equation}\nIn the large-$N$ limit\n\\begin{equation}\n\\beta(t)=-{1\\over 2\\pi}t^2,\n\\label{beta}\n\\end{equation}\ntherefore from the expression (\\ref{energysq}) we deduce\n\\begin{equation}\n{T\\over M_{G}^2}={1\\over 4\\pi}.\n\\label{condsq}\n\\end{equation}\n\nAnother interesting quantity which can be evaluated in the large-$N$\nlimit is the zero-momentum 
four-point renormalized coupling constant,\ndefined by\n\\begin{equation}\ng_r = -{\\chi_4\\over \\chi^2\\xi_{G}^2}\n\\label{gr0}\n\\end{equation}\nwhere\n\\begin{equation}\n\\chi_4 = \\sum_{x,y,z} \\langle {\\vec s}_0\\cdot {\\vec s}_x \n\\, {\\vec s}_y\\cdot {\\vec s}_z \\rangle_c.\n\\label{chi4}\n\\end{equation}\n$g_r$ is zero in the large-$N$ limit, where the theory is Gaussian-like\nand thus $\\chi_4=0$. \nIts value in the continuum limit\n\\begin{equation}\ng_r^* = {8\\pi\\over N}+ O\\left({1\\over N^2}\\right)\n\\label{gr1}\n\\end{equation}\ncan also be evaluated in the large-$N$ expansion of\nthe continuum formulation of the ${\\rm O}(N)$ models~\\cite{gr}.\nOn the square lattice, by using the saddle point equation we find\n\\begin{equation}\nNg_r = -2 {\\partial \\ln z\\over \\partial \\beta},\n\\label{gr2}\n\\end{equation}\nwhich can be made more explicit by writing\n\\begin{equation}\nNg_r = 4\\pi{1+\\rho_{\\rm s}\\over \\rho_{\\rm s} E(\\rho_{\\rm s})} =\n8\\pi\\left[1 + {z\\over 8}\\left(\\ln {z\\over 32}\\,+\\,2\\right) \n+ O(z^2)\\right],\n\\label{gr3}\n\\end{equation}\nwhere $E$ is the complete elliptic integral of the second kind.\n\nAll the above results can be expressed as functions of $\\beta$\nby solving the saddle point equation.\nConcerning asymptotic scaling, and therefore \nsolving the saddle point equation at large $\\beta$, one finds\n\\begin{equation}\nM_{G}\\simeq 4\\sqrt{2}\\,\\exp \\left(-{2\\pi\\over t}\\right).\n\\label{asysq}\n\\end{equation}\n\nThe analytic structure of the various observables\nhas been investigated in Ref.~\\cite{BCMO}.\nThe complex $\\beta$-singularities are square-root branch points;\nindeed, quantities like $\\chi$ and $\\xi_G^2$\nbehave as \n\\begin{equation}\nA(\\beta)+B(\\beta) \\sqrt{\\beta-\\beta_s}\n\\end{equation}\naround a singular point $\\beta_s$, where $A(\\beta)$ and $B(\\beta)$ are\nregular in the neighborhood of $\\beta_s$. 
The singularities closest\nto the origin are located at $\\bar{\\beta}=0.32162\\,(\\pm 1\\pm i)$.\nSuch singularities determine the convergence radius of the\nstrong-coupling expansion, which is therefore $\\beta_r=0.45484$,\ncorresponding to a correlation length $\\xi_{G}=3.17160$.\n\n\n\\subsection{The triangular lattice}\n\\label{sectrNi}\n\nOn the triangular lattice, using the results of App.~\\ref{apptr}, \nthe saddle point equation can be written as\n\\begin{equation}\n{1\\over t}=\\sqrt{3}\\beta = \\int^\\pi_{-\\pi} {dk_1\\over 2\\pi}\n\\int^{2\\pi\/\\sqrt{3}}_{-2\\pi\/\\sqrt{3}}\n{dk_2\\over 2\\pi} {1\\over \\Delta(k)+z}\n\\label{setr}\n\\end{equation}\nwhere \n\\begin{equation}\n\\Delta(k)=4\\left[ 1 - {1\\over 3}\\left(\n\\cos k_1+2\\cos {k_1\\over 2}\\cos {\\sqrt{3}k_2\\over 2}\\right)\\right]\n\\label{deltatr}\n\\end{equation}\nand the momentum integration is performed over the Brillouin\nzone corresponding to a triangular lattice.\nBy rather straightforward calculations (making also use\nof some of the formulas of Ref.~\\cite{Gradshteyn})\nthe saddle point equation can be written as\n\\begin{equation}\n{1\\over t}=\\sqrt{3}\\beta =\n{1\\over 2\\pi} \\left( 1 + {z\\over 6}\\right)^{-1\/4}\\,\\rho_{\\rm t}(z)\n\\,K(\\rho_{\\rm t}(z)),\n\\label{setr2}\n\\end{equation}\nwhere\n\\begin{equation}\n \\rho_{\\rm t}(z)= \\left( 1+{z\\over 6}\\right)^{1\/4}\\,\n\\left[ {1\\over 2} + {z\\over 8} + {1\\over 2}\n\\left( 1 + {z\\over 6}\\right)^{1\/2} \\right]^{-1\/2}\\, \n\\left[ {5\\over 2} + {3z\\over 8} - {3\\over 2}\n\\left( 1 + {z\\over 6}\\right)^{1\/2} \\right]^{-1\/2}. 
\n\\label{setr3}\n\\end{equation}\n\nUsing the results of App.~\\ref{apptr} one can find\n\\begin{equation}\n\\chi={t\\over v_s z}={2\\over 3\\beta z},\n\\label{chitrin}\n\\end{equation}\n\\begin{equation}\n \\xi_{G}^2 \\equiv M_{G}^{-2} = {1\\over z}\\, ,\n\\label{xitrin}\n\\end{equation}\n\\begin{equation}\nu\\equiv {m_2^2\\over \\chi m_4}\n={1\\over 4}\\left( 1 + {z\\over 16}\\right)^{-1} .\n\\label{omtrin}\n\\end{equation}\n\nAn estimator of the mass-gap $\\mu_{\\rm t}$ can be extracted from the\nlong distance behavior of the \nwall-wall correlation function defined in\nEq.~(\\ref{walldeftr}); indeed, for $x\\gg 1$\n\\begin{equation}\nG_{\\rm t}^{(\\rm w)}(x)\\propto e^{-\\mu_{\\rm t} x}.\n\\label{ldgtr}\n\\end{equation}\nIn the large-$N$ limit one finds\n\\begin{equation}\nM^2_{\\rm t}\\equiv{8\\over 3}\\left( {\\rm cosh} {\\sqrt{3}\\over 2}\n\\mu_{\\rm t} -1\\right)=z=M^2_{G}.\n\\label{Mtr}\n\\end{equation}\nA test of scaling is provided by the ratio \n\\begin{equation}\n{\\mu_{\\rm t}\\over M_{G}}=\n{2\\over \\sqrt{3z}}\n{\\rm Arccosh}\\,\\left[ 1 + {3\\over 8}z \\right]\n = 1-{1\\over 32}z + {27\\over 10240} z^2\n+O\\left( z^3\\right),\n\\label{scalvioltr}\n\\end{equation}\nwhere scaling violations are of the same order as those \nfound on the square lattice for the corresponding\nquantity, cf.\\ Eq.~(\\ref{scalviol}).\n\nThe internal energy is given by the following\nexpression\n\\begin{equation}\nE=\\langle {\\vec s}_{x_l}\\cdot {\\vec s}_{x_r}\\rangle =\n1 - {1\\over 6\\beta} + {z\\over 4} ,\n\\label{etr}\n\\end{equation}\nleading again to the result (\\ref{condsq}) \nfor the condensate of the trace of the\nenergy-momentum tensor, in agreement with universality. 
\n\nWe calculated $g_r$ on the triangular lattice, finding \nthe following expression (in the derivation we made use of the\nsaddle point equation (\\ref{setr}))\n\\begin{equation}\nNg_r = - {2\\over \\sqrt{3}} {\\partial \\ln z\\over \\partial \\beta},\n\\label{gr2t}\n\\end{equation}\nwhich can be written in a more explicit form using \nEq.~(\\ref{setr2}): \n\\begin{eqnarray}\n Ng_r&=&4\\pi\\left( 1 + {1\\over 6}z\\right)^{1\/4}\n{1\\over z}\\left[ {E(\\rho_{\\rm t})\\over 1-\\rho_{\\rm t}^2} \n{\\partial \\rho_{\\rm t}\\over \\partial z}- \n{1\\over 24}\\left(1+{1\\over 6}z\\right)^{-1}\\rho_{\\rm t} \n K(\\rho_{\\rm t})\\right]^{-1}\n\\nonumber \\\\\n&=&\n8\\pi\\left[1 + {z\\over 8}\\left(\\ln {z\\over 48}\\,+\\,{11\\over 6}\\right) \n+ O(z^2)\\right],\n\\label{gr3t}\n\\end{eqnarray}\nwhere the continuum value of $Ng_r$, obtained for $z\\rightarrow 0$, is\nin agreement with the results (\\ref{gr1}) and (\\ref{gr3}).\n\nIn the weak coupling region $t\\rightarrow 0$ the\nsaddle point equation leads to the asymptotic\nscaling formula\n\\begin{equation}\nM_{G}\\simeq 4\\sqrt{3} \\exp \\left( \n-{ 2\\pi\\over t}\\right).\n\\label{asytr}\n\\end{equation}\nThe equations (\\ref{asysq}) and\n(\\ref{asytr}) are in agreement with the large-$N$ limit of the \nratio of the $\\Lambda$-parameters of the square \nand triangular lattice formulations\ncalculated in App.~\\ref{apptr}, cf.\\ Eq.~(\\ref{ratioltr2}),\nusing perturbation theory.\n\nWe have investigated the analytic structure in the complex\n$\\beta$-plane. Details of this study are presented in\nApp.~\\ref{singNinf}. As on the square lattice, the singularities are\nsquare-root branch points. 
Those closest to the origin are placed at\n$\\bar{\\beta}= 0.206711\\pm \\,i\\,0.181627$, leading to a convergence\nradius for the strong-coupling expansion $\\beta_r=0.275169$, which\ncorresponds to a correlation length $\\xi_{G}=2.98925$.\n\n\n\\subsection{The honeycomb lattice}\n\\label{iNhl}\n\nThe analysis of models defined on the honeycomb lattice presents a few\nsubtleties caused by the fact that, unlike the square and triangular lattices,\nnot all sites are related by a translation, which does not allow a\nstraightforward definition of a Fourier transform. Nevertheless,\nobserving that sites at even distance in the number of lattice links\nform triangular lattices, one can define a Fourier-like transformation\nthat partially diagonalizes the Gaussian propagator (up to $2\\times 2$\nmatrices)~\\cite{SCUN2}. In this section we present the relevant\nresults; some details of their derivation are reported in\nApp.~\\ref{appex}.\n\nUsing the expression of $R^{-1}$ of Eq.~(\\ref{greengex}) \nwe write the saddle point equation in the following form\n\\begin{equation}\n{1\\over t}={\\beta\\over\\sqrt{3}}=\n\\int^{{2\\over3}\\pi}_{-{2\\over3}\\pi} {dk_1\\over 2\\pi}\n\\int^{\\pi\/\\sqrt{3}}_{-\\pi\/\\sqrt{3}}\n{dk_2\\over 2\\pi} {1+{1\\over 4}z\\over \n\\Delta(k)+z\\left(1+{1\\over 8}z\\right)}\n\\label{seex}\n\\end{equation}\nwhere \n\\begin{equation}\n\\Delta(k)={8\\over 9}\\left[ 2 - \\cos {\\sqrt{3}\\over 2}k_2\n\\left( \\cos {3\\over 2}k_1 + \\cos {\\sqrt{3}\\over 2}k_2\\right) \n\\right],\n\\label{deltaex}\n\\end{equation}\nand integrating over the momentum we arrive at\n\\begin{equation}\n{1\\over t}={\\beta\\over\\sqrt{3}}=\n{1\\over 2\\pi} \\left( 1 + {z\\over 4}\\right)^{1\/2}\n\\rho_{\\rm h}(z) \\,K(\\rho_{\\rm h}(z)),\n\\label{seex2}\n\\end{equation}\nwhere\n\\begin{equation}\n\\rho_{\\rm h}(z)= \\left( 1 + {z\\over 4}\\right)^{1\/2} \n\\left( 1 + {3z\\over 8}\\right)^{-3\/2} \n\\left( 1 + {z\\over 8}\\right)^{-1\/2}. 
\n\\label{seex3}\n\\end{equation}\n\nFrom Eq.~(\\ref{greengex}) we also derive\n\\begin{equation}\n\\chi ={t\\over v_s z}={4\\over 3\\beta z},\n\\label{chihoin}\n\\end{equation}\n\\begin{equation}\n\\xi_{G}^2 \\equiv M^{-2}_{G}=\n{1\\over z},\n\\label{xihoin}\n\\end{equation}\n\\begin{equation}\nu ={1\\over 4}\\left( 1 + {z\\over 16}\\right)^{-1}.\n\\label{omhoin}\n\\end{equation}\n\nThe two orthogonal wall-wall correlation functions \n$G^{(\\rm w)}_{\\rm v}(x)$ and $G^{(\\rm w)}_{\\rm h}(x)$ defined in\nEqs.~(\\ref{g1}) and (\\ref{g2}) allow one to define two estimators of\nthe mass-gap from their long distance behavior\n\\begin{eqnarray}\nG^{(\\rm w)}_{\\rm v}(x)\\propto e^{-\\mu_{\\rm v} x},\\nonumber\\\\ \nG^{(\\rm w)}_{\\rm h}(x)\\propto e^{-\\mu_{\\rm h} x},\n\\label{ldbg}\n\\end{eqnarray}\nwhere $x$ is the distance between the two walls in \nunit of the lattice spacing.\nIn the continuum limit $\\mu_{\\rm v}=\\mu_{\\rm h}$ \nand they both reproduce\nthe physical mass propagating in the fundamental channel.\nAs on the square and triangular lattices, it is convenient\nto define the quantities\n\\begin{eqnarray}\n&&M_{\\rm v}^2 = {8\\over 9}\\left( {\\rm cosh} {3\n\\mu_{\\rm v}\\over 2} - 1\\right),\\nonumber \\\\\n&&M_{\\rm h}^2 = {8\\over 3}\\left( {\\rm cosh} \n{\\sqrt{3}\\mu_{\\rm h}\\over 2} -1\\right),\n\\label{M1M2}\n\\end{eqnarray}\nwhich, in the continuum limit, \nare also estimators of the mass gap. 
\nIn the large-$N$ limit one finds\n\\begin{eqnarray}\n&&M_{\\rm v}^2 = z\\left( 1+ {z\\over 8}\\right),\\nonumber \\\\\n&&M_{\\rm h}^2=z.\n\\label{M1M22}\n\\end{eqnarray}\nNotice that in the continuum large-$N$ limit the result \n\\begin{equation}\n{M\\over M_{G}}=1,\n\\label{momg}\n\\end{equation}\nwhere $M$ is any mass-gap\nestimator, is found for all lattice formulations considered.\n\nOn the honeycomb lattice the maximal violation of full rotational\nsymmetry occurs for directions differing by a $\\pi\/6$ angle, and\ntherefore, taking into account its discrete rotational symmetry, also\nby a $\\pi\/2$ angle. So a good test of rotation invariance of $G(x)$ at\nlarge distance is provided by the ratio $\\mu_{\\rm v}\/\\mu_{\\rm h}$:\n\\begin{equation}\n{\\mu_{\\rm v}\\over \\mu_{\\rm h}}=\n{\n{\\rm Arccosh}\\,\\left[ 1 + {9\\over 8}z \\left( 1 + {1\\over 8}\nz\\right)\\right]\n\\over\n\\sqrt{3}{\\rm Arccosh}\\,\\left[ 1 + {3\\over 8}z \\right]}\n= 1+{1\\over 640} z^2\n+O\\left( z^3\\right).\n\\label{rotscalho}\n\\end{equation}\nAs expected from the better rotational symmetry of the honeycomb\nlattice, rotation invariance is set earlier than for the square\nlattice, indeed the $O(z)$ scaling violation is absent.\n\nA test of scaling is provided by the ratio \n\\begin{equation}\n{\\mu_{\\rm h}\\over M_{G}}=\n{2\\over \\sqrt{3z}}\n{\\rm Arccosh}\\,\\left[ 1 + {3\\over 8}z \\right]\n= 1-{1\\over 32}z + {9\\over 10240} z^2\n+O\\left( z^3\\right),\n\\label{scalviolho}\n\\end{equation}\nwhere scaling violations are of the same order of those \nfound on the square lattice for the corresponding\nquantity, cfr.\\ Eq.~(\\ref{scalviol}).\n\nThe internal energy is given by\n\\begin{equation}\nE=1 - {1\\over 3\\beta} + {z\\over 4}\n=1 - {1\\over 3\\beta} + {M_{G}^2\\over 4},\n\\label{eneex}\n\\end{equation}\nwhere the term proportional to $M_{G}^2$\nverifies again universality.\n\nIn the weak coupling region $t\\rightarrow 0$ the\nsaddle point equation leads to the 
asymptotic\nscaling formula\n\begin{equation}\nM_{G}\simeq 4 \exp \left( \n-{ 2\pi\over t}\right).\n\label{asyex}\n\end{equation}\nThe equations (\ref{asysq}) and\n(\ref{asyex}) are in agreement with the large-$N$ limit of the \nratio of the $\Lambda$-parameters of the square \nand honeycomb lattice formulations\ncalculated in App.~\ref{appex}, cfr.\ Eq.~(\ref{ratiolho2}),\nusing perturbation theory.\n\nIn Fig.~\ref{asyiN} we compare asymptotic scaling from\nthe various lattices considered, plotting the ratio\nbetween $M_{G}$ and the corresponding\nasymptotic formula (cfr.\ Eqs.~(\ref{asysq}),\n(\ref{asytr}) and (\ref{asyex})).\nNotice that in the large-$N$ limit corrections to \nasymptotic scaling are $O(M^2_{G})$, since corrections\n$O(1\/\ln M_{G})$ are suppressed by a factor $1\/N$.\n\nWe have investigated the analytic structure in the complex \ntemperature-plane of the $N=\infty$ model on the honeycomb lattice\n(details are reported in App.~\ref{singNinf}).\nAs on the square and triangular lattices,\nthe singularities are square-root branch points, and those closest to the\norigin are placed on the imaginary axis\nat $\bar{\beta}=\pm i 0.362095$. \nThe convergence radius for the strong-coupling expansion\nis associated with a quite small correlation length:\n$\xi_{G}=1.00002$.\n\n\n\section{Continuum results from strong coupling}\n\label{SCA}\n\n\n\subsection{Analysis of the series}\n\label{analysis}\n\nIn this section we analyze the strong-coupling series of some of the\nphysical quantities which can be extracted from the two-point\nfundamental Green's function. We especially consider dimensionless\nrenormalization-group invariant ratios, whose value in the scaling\nregion, i.e., their asymptotic value for $\beta\rightarrow \infty$,\nconcerns the continuum physics. 
Some strong-coupling series for\nselected values of $N$ are reported in Apps.~\ref{appscsq},\n\ref{appsctr} and \ref{appscex} for the square,\ntriangular, and honeycomb lattices, respectively. The series in the energy are\nobtained by inverting the strong-coupling series of the energy\n$E=\beta+O(\beta^3)$ and substituting into the original series in\n$\beta$.\n\nOur analysis of the series of dimensionless renormalization \ngroup invariant ratios of physical quantities,\nsuch as those defined in the previous section, is \nbased on Pad\'e approximant (PA) techniques.\nFor a review of resummation techniques cfr.\ \nRef.~\cite{Guttmann}.\n\nPA's are expected to converge well to meromorphic\nanalytic functions. More flexibility is achieved by applying\nthe PA analysis to the logarithmic derivative \n(Dlog-PA analysis), thereby enlarging the class\nof functions which can be reproduced to those having\nbranch-point singularities.\nThe accuracy and the convergence of the PA's \ndepend on how well the function considered, \nor its logarithmic derivative, can be reproduced by a \nmeromorphic analytic function, and may change when considering\ndifferent representations of the same quantity.\nBy comparing the results from different series representations\nof the same quantity one may check for possible\nsystematic errors in the resummation procedure employed.\n\nIn our analysis we constructed $[l\/m]$ PA's and Dlog-PA's\nof both the series in $\beta$ and in the energy.\nHere $l$ and $m$ are the orders of the polynomials\nin the numerator and in the denominator, respectively,\nof the ratio forming the $[l\/m]$ PA of the series at hand, \nor of its logarithmic derivative (Dlog-PA).\nWhile $[l\/m]$ PA's provide the quantity at hand \ndirectly, in a Dlog-PA analysis one gets \na $[l\/m]$ approximant by reconstructing the original quantity\nfrom the $[l\/m]$ PA of its logarithmic derivative,\ni.e., a $[l\/m]$ Dlog-PA of the series $A(x)=\sum_{i=0}^\infty a_i x^i$\nis 
obtained by\n\begin{equation}\nA_{l\/m}(x) = a_0\exp \int_0^x \ndx' \, {\rm Dlog}_{l\/m} A(x'),\n\label{appA}\n\end{equation}\nwhere ${\rm Dlog}_{l\/m} A(x)$ indicates the $[l\/m]$ PA\nof the logarithmic derivative of $A(x)$.\n\nWe recall that a $[l\/m]$ PA uses $n=l+m$ terms of the series,\nwhile a $[l\/m]$ Dlog-PA requires $n=l+m+1$ terms.\nContinuum estimates are then obtained by evaluating the approximants\nof the energy series at $E=1$, and those of the $\beta$\nseries at a value of $\beta$ corresponding to a reasonably\nlarge correlation length. \n\nAs final estimates we take the average of the results from the\nquasi-diagonal (i.e., with $l\simeq m$) PA's using all available terms\nof the series. The errors we will display are just indicative, and\ngive an idea of the spread of the results coming from different PA's.\nThey are the square root of the variance of the results around the\nestimate, computed using also quasi-diagonal PA's constructed from shorter\nseries. Such errors do not always provide a reliable estimate of the\nuncertainty, which may be underestimated especially when the structure\nof the function (or of its logarithmic derivative) is not well\napproximated by a meromorphic analytic function. 
In such cases a more\nreliable estimate of the real uncertainty should come from the\ncomparison of results from the analysis of different series\nrepresenting the same quantity, which in general are not expected to\nhave the same analytic structure.\n\nIn the remainder of this section we present \nthe main results obtained from our strong-coupling analysis.\nMost of them will concern the $N=3$ case.\n\n\n\subsection{The square lattice}\n\label{scsq}\n\nOn the square lattice we have calculated the two-point Green's\nfunction up to $O(\beta^{21})$, from which we have extracted\nstrong-coupling series of the quantities $E$, $\chi$, $\xi_{G}^2$,\n$u$, $M_{\rm s}^2$, $M_{\rm d}^2$, already introduced in\nSec.~\ref{secsqNi}, and of the ratios $r\equiv M_{\rm s}^2\/M_{\rm\n d}^2$, $s\equiv M_{\rm s}^2\/M_{G}^2$. Some of the above series for\nselected values of $N$ are reported in App.~\ref{appscsq}. Our\nstrong-coupling series represent a considerable extension of the 14th\norder calculations of Ref.~\cite{Luscher}, performed by means of a\nlinked cluster expansion, which have been re-elaborated and analyzed in\nRef.~\cite{Butera}. We also mention recent works where the linked\ncluster expansion technique has been further developed and\ncalculations of series up to 18th order~\cite{Reisz} and 19th\norder~\cite{Butera2} for $d=2,3,4$ have been announced.\n\nIn order to investigate the analytic structure in the complex\n$\beta$-plane we have performed a study of the singularities of the\nDlog-PA's of the strong-coupling series of $\chi$ and $\xi_{G}^2$. As\nexpected from asymptotic freedom, no indication of the presence of a\ncritical point at a finite real value of $\beta$ emerges from the\nstrong-coupling analysis of $N\geq 3$ models, confirming earlier\nstrong-coupling studies~\cite{Butera}. 
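The singularity analysis used here can be sketched in a few lines: build the $[l\/m]$ PA of the log-derivative series and read off the singularities from the zeros of its denominator. The code below is a minimal illustration (not the analysis code used for this work), applied to a test series with square-root branch points placed at $\beta = 0.59 \pm 0.16\,i$, mimicking the $N=3$ singularity structure:

```python
import numpy as np

def pade(a, l, m):
    """[l/m] Pade approximant from series coefficients a[0..l+m].
    Returns numerator p and denominator q in ascending powers, q[0] = 1."""
    a = np.asarray(a, dtype=float)
    A = np.array([[a[l + k - j] if l + k - j >= 0 else 0.0
                   for j in range(1, m + 1)] for k in range(1, m + 1)])
    q = np.concatenate(([1.0], np.linalg.solve(A, -a[l + 1:l + m + 1])))
    p = np.array([sum(q[j] * a[i - j] for j in range(min(i, m) + 1))
                  for i in range(l + 1)])
    return p, q

def series_div(num, den, nterms):
    """Taylor coefficients of num(x)/den(x) through order nterms-1 (den[0] != 0)."""
    work, out = list(num) + [0.0] * nterms, []
    for i in range(nterms):
        c = work[i] / den[0]
        out.append(c)
        for j in range(1, min(len(den), nterms - i)):
            work[i + j] -= c * den[j]
    return out

# Test series with square-root branch points at beta = 0.59 +/- 0.16i:
# f = D^(-1/2), with D a real quadratic vanishing at those points.
a0, b0, gamma = 0.59, 0.16, 0.5
D = [1.0, -2.0 * a0 / (a0**2 + b0**2), 1.0 / (a0**2 + b0**2)]
# The log-derivative f'/f = -gamma D'/D is rational, so a [1/2] PA is exact:
dlog = series_div([-gamma * D[1], -2.0 * gamma * D[2]], D, 4)
p, q = pade(dlog, 1, 2)
singularities = np.roots(q[::-1])   # zeros of the PA denominator
```

Here the branch points are recovered essentially exactly, because $f'\/f$ is itself a $[1\/2]$ rational function; for a genuine strong-coupling series, the denominator zeros of quasi-diagonal Dlog-PA's only approximate the singularity locations.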
The singularities closest to\nthe origin, emerging from the Dlog-PA analysis of $\\chi$ and\n$\\xi_{G}^2$, are located at a pair of complex conjugate points, rather\nclose to the real axis in the $N=3$ case (where $\\bar{\\beta}\\simeq\n0.59\\pm i 0.16$) and moving, when increasing $N$, toward the\n$N=\\infty$ limiting points $\\bar{\\beta}=0.32162\\,(1\\pm i)$. In Table\n\\ref{zeroes} such singularities are reported for some values of $N$.\nThe singularity closest to the origin determines the convergence\nradius of the corresponding strong-coupling series. For example for\n$N=3$ the strong-coupling convergence radius turns out to be\n$\\beta_r\\simeq 0.61$, which corresponds to a quite large correlation\nlength $\\xi\\simeq 65$. We recall that the partition function on the\nsquare lattice has the symmetry $\\beta \\rightarrow -\\beta$, which must\nbe also realized in the locations of its complex singularities.\n\nBy rotation invariance the ratio \n$r\\equiv M_{\\rm s}^2\/M_{\\rm d}^2$ should go to one\nin the continuum limit. Therefore the analysis of such ratio\nshould be considered as a test of the procedure employed \nto estimate continuum physical quantities. \nIn the large-$N$ limit $r=1$ at all values of $\\beta$.\nThis is not anymore true \nat finite $N$, where the strong-coupling series of \n$M_{\\rm s}^2$ and $M_{\\rm d}^2$ differ from each other, \nas shown in App.~\\ref{appscsq}.\nFrom $G(x)$ up to $O(\\beta^{21})$ we could calculate\nthe ratio $r$ up to $O(\\beta^{14})$. \nThe results of our analysis of the series of $r$ for $N=3$\nare summarized in Table \\ref{sqr}. 
There we report\nthe values of the PA's and Dlog-PA's of the $E$-series at $E=1$, \nand of those of the $\\beta$-series at $\\beta=0.55$,\nwhich corresponds to a reasonably large \ncorrelation length $\\xi\\simeq 25$.\nWe considered PA's and Dlog-PA's with \n$l+m\\geq 11$ and $m\\geq l\\geq 5$.\nThe most precise determinations of $r^*$,\nthe value of $r$ at the continuum limit, come from Dlog-PA's,\nwhose final estimates are $r^*=1.0000(12)$ from the $E$-approximants,\nand $r^*=1.0002(6)$ from the $\\beta$-approximants (at $\\beta=0.55$).\nThe precision of these results is remarkable.\n\nFor all $N\\geq 3$ the violation of rotation invariance in the large\ndistance behavior of $G(x)$, monitored by the ratio \n$\\mu_{\\rm s}\/\\mu_{\\rm d}$, turns out quantitatively very close to that\nat $N=\\infty$ when considered as function of $\\xi_G$ (in a plot the\n$N=3$ curve of $\\mu_{\\rm s}\/\\mu_{\\rm d}$ versus $\\xi_{G}$ as obtained\nfrom the strong-coupling analysis would be hardly distinguishable from\nthe exact $N=\\infty$ one). $\\mu_{\\rm s}\/\\mu_{\\rm d}$ is one within\nabout one per mille already at $\\xi\\simeq 4$.\n\nCalculating a few more components of $G(x)$ at larger orders\n(i.e., those involved by the wall-wall correlation\nfunction at distance 6 and 7 respectively up to $O(\\beta^{22})$\nand $O(\\beta^{23})$),\nwe computed the ratio \n\\begin{equation}\ns\\equiv {M_{\\rm s}^2\\over M_{G}^2}\n\\label{sdef}\n\\end{equation}\nup to $O(\\beta^{16})$, by applying the technique described in\nRefs.~\\cite{SCUN1,RV}. We recall that at $N=\\infty$ we found $s=1$\nindependently of $\\beta$. No exact results are known about the\ncontinuum limit $s^*$ of the ratio $s$, except for its large-$N$\nlimit: $s^*=1$. Both large-$N$ and Monte Carlo estimates indicate a\nvalue very close to one. 
From a $1\/N$ expansion~\\cite{Flyv,CR}:\n\\begin{equation}\ns^*= 1 - {0.006450\\over N} + O\\left( {1\\over N^2}\\right).\n\\label{largeNs}\n\\end{equation}\nMonte Carlo simulations at $N=3$~\\cite{Meyer}\ngave $s=0.9988(16)$ at $\\beta={1.7\/3}=0.5666...$ ($\\xi\\simeq 35$), and\n$s=0.9982(18)$ at $\\beta=0.6$ ($\\xi\\simeq 65$),\nleading to the estimate $s^*=0.9985(12)$.\n\nIn Table \\ref{sqs} we report, for $N=3$, the values of PA's and\nDlog-PA's of the energy and $\\beta$ series of $s$ respectively at\n$E=1$ and at $\\beta=0.55$. We considered PA's and Dlog-PA's with\n$l+m\\geq 13$ and $m\\geq l\\geq 5$. Combining PA and Dlog-PA results,\nour final estimates are $s^*=0.998(3)$ from the $E$-approximants, and\n$s^*=0.998(1)$ from the $\\beta$ approximants evaluated at\n$\\beta=0.55$, in full agreement with the estimates from the $1\/N$\nexpansion and Monte Carlo simulations. With increasing $N$, the\ncentral estimate of $s^*$ tends to be closer to one.\n\nThe scaling-violation pattern of the quantity $\\mu_{\\rm s}\/M_{G}$ for\n$N=3$ is similar to the pattern for $N=\\infty$ (cfr.\\ \nEq.~(\\ref{scalviol})), i.e., it is stable within a few per\nmille for $\\xi\\gtrsim 5$.\n\nAnother dimensionless renormalization-group invariant quantity we have\nconsidered is $u\\equiv m_2^2\/(\\chi m_4)$, whose large-$N$ limit has\nbeen calculated in the previous section, cfr.\\ Eq.~(\\ref{omsqin}). \nAt finite $N$ its continuum limit $u^*$ is not known. 
\nFrom the expression of the self-energy calculated up to\n$O(1\/N)$~\cite{Flyv,CR,CRselfenergy}, one can obtain\n\begin{equation}\nu^*= {1\over 4}\left[ 1 - {0.006198\over N} \n + O\left( {1\over N^2}\right) \right].\n\label{largeNu}\n\end{equation}\nIt is interesting to notice that the $O(1\/N)$ correction in\nEqs.~(\ref{largeNs}) and (\ref{largeNu}) is very small.\n\nAt $N=3$ the analysis of the $O(\beta^{21})$ strong-coupling series of\n$u$ detected a simple pole close to the origin at $\beta_0=-0.085545$\nfor the $\beta$-series, and at $E_0=-0.086418$ for the energy series,\ncorresponding to $M^2_{G}=-16.000$, which, within the precision of our\nstrong-coupling estimate, is also the location of the pole in the\ncorresponding $N=\infty$ expression (\ref{omsqin}). Being a simple\npole, this singularity can be perfectly reproduced by a standard PA\nanalysis, and indeed we found PA's to be slightly more stable than\nDlog-PA's in the analysis of $u$. The results concerning $N=3$,\nreported in Table \ref{sqom} (for PA's with $l+m\geq 16$ and $m\geq\nl\geq 8$), lead to the estimates $u^*=0.2498(6)$ from the energy\nanalysis, and $u^*=0.2499(5)$ from the $\beta$ analysis (at\n$\beta=0.55$). 
The agreement with the large-$N$ formula (\ref{largeNu})\nis satisfactory.\nIn Fig.~\ref{figomsq} the curve $u(E)$ as obtained\nfrom the $[10\/10]$ PA and the exact curve $u(E)$ at $N=\infty$ (cfr.\ \nEq.~(\ref{omsqin})) are plotted, showing almost no differences.\n\nIn Table \ref{sum} we give a\nsummary of the determinations of $r^*$, $s^*$, and $u^*$ \nfrom PA's and Dlog-PA's of the energy and $\beta$-series.\n\nWe mention that we also tried to analyze series in the variable\n\begin{equation}\nz={I_{N\/2}(N\beta)\over I_{N\/2-1}(N\beta)},\n\label{chcoeff}\n\end{equation}\nwhich is the character coefficient of the fundamental representation.\nAs for the $E$-series, the continuum limit should be reached at a\nfinite value $z\rightarrow 1$, and estimates of $r^*$, $s^*$ and $u^*$\nmay be obtained by evaluating the approximants of the corresponding\n$z$-series at $z=1$. We obtained results much less precise than those\nfrom the analysis of the $E$-series. Possibly because of the\nthermodynamical meaning of the internal energy, resummations by PA's\nand Dlog-PA's of the $E$-series turn out to be much more effective,\nproviding rather precise results even at the continuum limit $E=1$.\n\nThe strong-coupling approach turns out to be less effective for the\npurpose of checking asymptotic scaling. In Table \ref{mc}, we\ncompare, for $N=3,4,8$, $\xi_{G}$ as obtained from the plain \n21st order series of $\xi_{G}^2$ and from its Dlog-PA's with\nsome Monte Carlo results available in the literature. Resummation\nby integral approximants~\cite{IA} provides results substantially\nequivalent to those of Dlog-PA's. For $N=3$ Dlog-PA's follow\nMonte Carlo data reasonably well up to about the convergence radius\n$\beta_r\simeq 0.6$ of the strong-coupling expansion, but they fail\nbeyond $\beta_r$. On the other hand it is well known that for $N=3$\nthe asymptotic scaling regime sets in at larger\n$\beta$-values~\cite{CEPS}. 
More sophisticated analysis can be found\nin Refs.~\\cite{Butera,Bonnier}, but they do not seem to lead to a\nconclusive result about the asymptotic freedom prediction in the\n$O(3)$ $\\sigma$ model. At larger $N$, the convergence radius\ndecreases, but on the other hand the asymptotic scaling regime should\nbe reached earlier. At $N=4$ and $N=8$ the 21st order plain\nseries of $\\xi_{G}^2$ provides already quite good estimates of $\\xi_G$\nwithin the convergence radius when compared with Monte Carlo results.\nAgain Pad\\'e-type resummation fails for $\\beta>\\beta_r$. We mention\nthat at $N=4$ the convergence radius $\\beta_r\\simeq 0.60$ corresponds\nto $\\xi_G\\simeq 25$, and at $N=8$ $\\beta_r\\simeq 0.55$ corresponds to\n$\\xi_G\\simeq 8$.\n\nIn order to check asymptotic scaling we consider the ratio\n$\\Lambda_{\\rm s}\/\\Lambda_{2l}$, where\n$\\Lambda_{\\rm s}$ is the effective $\\Lambda$-parameter which\ncan be extracted by\n\\begin{equation}\n\\Lambda_{\\rm s}\\equiv\\left( {\\Lambda_{\\rm s}\\over M}\\right)\nM={M\\over R_{\\rm s}},\n\\label{effla}\n\\end{equation}\nwhere $M$ is \nan estimator of the mass-gap,\n$R_{\\rm s}$ is the mass-$\\Lambda$ parameter ratio\nin the square lattice nearest-neighbor formulation~\\cite{Hasenfratz}\n\\begin{equation}\nR_{\\rm s}= R_{\\overline{\\rm MS}}\\times\n\\left( {\\Lambda_{\\overline{\\rm MS}}\\over \n\\Lambda_{\\rm s}}\\right)=\\left( {8\\over e}\\right)^{1\\over N-2}\n{1\\over \\Gamma\\left( 1 + {1\\over N-2}\\right)}\\times\n\\sqrt{32} \\exp\\left[ {\\pi\\over 2(N-2)}\\right],\n\\label{RL}\n\\end{equation}\nand $\\Lambda_{2l}$ is the corresponding two-loop formula\n\\begin{equation}\n\\Lambda_{2l}= \\left( {2\\pi N \\over N-2}\\beta\\right)^{1\\over N-2}\n\\exp \\left( -{2\\pi N\\over N-2}\\beta\\right).\n\\label{twoloopla}\n\\end{equation}\nThe ratio $\\Lambda_{\\rm s}\/\\Lambda_{2l}$ should go to one in the\ncontinuum limit, according to asymptotic scaling. 
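These formulas are straightforward to evaluate; the sketch below (with illustrative inputs for the mass-gap estimate, not values from this work) computes $R_{\rm s}$, $\Lambda_{2l}$, and the ratio $\Lambda_{\rm s}\/\Lambda_{2l}$ of Eq.~(\ref{effla}) for given $N$, $\beta$, and $M$:

```python
import math

def R_s(N):
    """Mass-Lambda ratio of Eq. (RL) for the square-lattice
    nearest-neighbor formulation."""
    e = 1.0 / (N - 2)
    return ((8.0 / math.e) ** e / math.gamma(1.0 + e)
            * math.sqrt(32.0) * math.exp(math.pi * e / 2.0))

def Lambda_2l(N, beta):
    """Two-loop formula of Eq. (twoloopla)."""
    c = 2.0 * math.pi * N / (N - 2)
    return (c * beta) ** (1.0 / (N - 2)) * math.exp(-c * beta)

def scaling_ratio(N, beta, M):
    """Lambda_s / Lambda_2l, cf. Eq. (effla); should approach 1 in the
    continuum limit.  M is a mass-gap estimate at this beta."""
    return M / R_s(N) / Lambda_2l(N, beta)
```

For instance, $R_{\rm s}\simeq 80.1$ at $N=3$; feeding in a strong-coupling or Monte Carlo estimate of $M$ at a given $\beta$ then yields one point of the asymptotic-scaling curves discussed below.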
The available\nseries of $M_G^2$ are longer than any series of the mass-gap\nestimators; therefore, neglecting the very small difference between\n$M_{G}$ and $M$, for which formula (\ref{RL})\nholds (we have seen that for $N\geq 3$ $(M_{G}-M)\/M\lesssim\n10^{-3}$ in the continuum limit), we use $M_{G}$ as an estimator of $M$. In Fig.~\ref{asysc} we\nplot $\Lambda_{\rm s}\/ \Lambda_{2l}$ for various values of $N$,\n$N=3,4,8$, and for comparison the exact curve for $N=\infty$. As\nalready noted in Ref.~\cite{Wolff} by a Monte Carlo study, for\n$N=3,4$ at $\xi\simeq 10$ the asymptotic scaling regime is still far\n(about 50\% off at $N=3$ and $15\%$ at $N=4$), while for $N=8$ it is\nverified for $\xi\gtrsim 4$ within a few per cent; notice that the\nconvergence radius $\beta_r\simeq 0.55$ corresponds to $\xi\simeq 8$.\nIn any case, with increasing $N$, the curves of $\Lambda_{\rm s}\/ \Lambda_{2l}$\nclearly approach the exact $N=\infty$ limit.\n \n\n\subsection{The triangular lattice}\n\label{sctr}\n\nOn the triangular lattice we have calculated the two-point Green's\nfunction up to $O(\beta^{15})$, from which we have extracted\nstrong-coupling series of the quantities $E$, $\chi$, $\xi_{G}^2$,\n$u$, $M_{\rm t}^2$, already introduced in Sec.~\ref{sectrNi}, and of\nthe ratio $s\equiv M_{\rm t}^2\/M_{G}^2$. Some of the above series\nfor $N=3$ are reported in App.~\ref{appsctr}.\n\nAs for ${\rm O}(N)$ $\sigma$ models on the square lattice, \nno indication of the presence\nof a critical point at a finite real value of $\beta$ emerges\nfrom the strong-coupling analysis for $N\geq 3$. 
\nBy a Dlog-PA analysis of the $O(\beta^{15})$\nstrong-coupling series of $\chi$ and $\xi_{G}^2$ at $N=3$, \nwe found that the singularities closest to the origin are at \n$\bar{\beta}\simeq 0.358\pm i 0.085$, giving rise to a convergence\nradius $\beta_r\simeq 0.37$, which should correspond to a rather\nlarge correlation length: $\xi_{G}\simeq 70$.\nWith increasing $N$ such singularities move toward their\n$N=\infty$ limit $\bar{\beta}=0.206711\pm\,i\,0.181627$.\nSome details of this analysis are given in Table \ref{zeroes}.\n\nIn our analysis of dimensionless quantities we considered, as on the\nsquare lattice, both the series in the energy and in $\beta$. The\nestimates concerning the continuum limit are obtained by evaluating\nthe approximants of the energy series at $E=1$, and those of the\n$\beta$-series at a $\beta$ associated with a reasonably large\ncorrelation length. For $N=3$ we chose $\beta=0.33$, whose\ncorresponding correlation length should be $\xi\simeq 22$, according\nto a strong-coupling estimate.\n\nCalculating a few more components of $G(x)$ at larger orders (i.e.,\nthose involved by the wall-wall correlation function at distance\n$\sqrt{3}\/2 \times 5$ up to $O(\beta^{16})$), we computed the\nratio $s\equiv M_{\rm t}^2\/M_{G}^2$ up to $O(\beta^{11})$\n\cite{SCUN1,RV}. For $N=3$ the analysis of the strong-coupling series\nof $s$ (some details are given in Table \ref{trs}) leads to the\nestimate $s^*=0.998(3)$ from the energy approach, and $s^*=0.998(1)$\nfrom evaluating the approximants at $\beta=0.33$ (we considered PA's and\nDlog-PA's with $l+m\geq 8$ and $m\geq l\geq 4$). Such results are in\nperfect agreement with those found for the square lattice.\n\nPA's and Dlog-PA's (with \n$l+m\geq 11$ and $m\geq l\geq 5$)\nof the strong-coupling series of $u$ expressed \nin terms of the energy, evaluated at $E=1$, lead to the estimate\n$u^*=0.249(1)$ at $N=3$. 
The analysis of the series in $\beta$\ngives $u^*=0.2502(4)$.\nAgain universality is satisfied. \n\nA summary of the results on the triangular lattice can be\nfound in Table \ref{sum}.\n\nAs on the square lattice we checked asymptotic scaling by looking at\nthe ratio $\Lambda_{\rm t}\/ \Lambda_{2l}$, where $\Lambda_{\rm t}$ is\nthe effective $\Lambda$-parameter on the triangular lattice, defined\nin analogy with Eq.~(\ref{effla}). Besides\nthe formulas concerning asymptotic scaling given for the square\nlattice case, cfr.\ Eqs.~(\ref{effla}-\ref{twoloopla}), we need here\nthe $\Lambda$-parameter ratio $\Lambda_{\rm t}\/\Lambda_{\rm s}$\ncalculated in App.~\ref{apptr}, cfr.\ Eq.~(\ref{ratioltr2}). We again\nused $M_G$ as an approximate estimator of the mass-gap $M$.\nFig.~\ref{asysctr} shows curves of $\Lambda_{\rm t}\/ \Lambda_{2l}$ for\nvarious values of $N$, $N=3,4,8$, and for comparison the exact curve\nfor $N=\infty$. Such results are similar to those found on the square\nlattice: for $N=3,4$ the asymptotic scaling regime is still far at\n$\xi_G\simeq 10$, but it is verified within a few per cent at $N=8$,\nwhere the correlation length corresponding to the strong-coupling\nconvergence radius is $\xi\simeq 8$.\n\n\n\subsection{The honeycomb lattice}\n\label{scho}\n\nOn the honeycomb lattice we have calculated \nthe two-point Green's function \nup to $O(\beta^{30})$, from which we extracted \nstrong-coupling series of the quantities $E$, $\chi$, \n$\xi_{G}^2$, $u$, $M_{\rm v}^2$, $M_{\rm h}^2$, \nalready introduced in Sec.~\ref{iNhl},\nand of the ratios $r\equiv M_{\rm v}^2\/M_{\rm h}^2$,\n$s\equiv M_{\rm h}^2\/M_{G}^2$.\nSome of the above series for $N=3$ are reported \nin App.~\ref{appscex}.\n\nAt $N=3$ a Dlog-PA analysis of the $O(\beta^{30})$\nstrong-coupling series of $\chi$ and $\xi_{G}^2$ \ndetected two pairs of complex conjugate singularities,\none on the imaginary axis at $\bar{\beta}\simeq\pm i0.460$, \nquite 
close to the origin, and the other \nat $\bar{\beta}\simeq 0.93\pm i 0.29$.\nThe singularity on the imaginary axis\nleads to a rather small convergence radius in terms\nof correlation length; indeed, at $\beta\simeq 0.46$ \nwe estimate $\xi\simeq 2.6$. \nAt $N=4$ we found $\bar{\beta}\simeq\pm i0.444$, \nand $\bar{\beta}\simeq 0.88\pm i 0.41$.\nAt larger $N$ the singularities closest\nto the origin converge toward the $N=\infty$ value\n$\bar{\beta}=\pm i \,0.362095$.\nNotice that, as on the square lattice, the partition function on the\nhoneycomb lattice enjoys the symmetry $\beta\rightarrow -\beta$.\n\nAgain we analyzed both the series in the energy and in $\beta$.\nThe estimates concerning the continuum limit are obtained by evaluating\nthe approximants of the energy series at $E=1$, and \nthose of the $\beta$-series at $\beta=0.85$ for the $N=3$ case,\nwhich should correspond to $\xi\simeq 22$.\n\nBy rotation invariance the ratio \n$r\equiv M_{\rm v}^2\/M_{\rm h}^2$ should go to one\nin the continuum limit. \nFrom $G(x)$ up to $O(\beta^{30})$ we extracted\nthe ratio $r$ up to $O(\beta^{20})$. \nAgain PA's and Dlog-PA's \nof the energy series evaluated at $E=1$\nand of the $\beta$-series evaluated at $\beta=0.85$\n(some details are given in Table \ref{hor})\ngive the correct result in the continuum limit: \n$r^*=1.00(1)$ and $r^*=1.001(1)$, respectively, at $N=3$\n(we considered PA's and Dlog-PA's with \n$l+m\geq 16$ and $m\geq l\geq 7$).\n\nCalculating a few more components of $G(x)$ at larger orders (i.e.,\nthose involved by $G^{({\rm w})}_{\rm h}(x)$, defined in\nEq.~(\ref{g2}), at distances $x=\sqrt{3}\/2 \times 9$ and\n$x=\sqrt{3}\/2 \times 10$ respectively at $O(\beta^{34})$ and\n$O(\beta^{35})$), we computed the ratio $s\equiv M_{\rm h}^2\/M_{G}^2$\nup to $O(\beta^{25})$ \cite{SCUN1,RV}. 
For $N=3$ the analysis of the\nstrong-coupling series of $s$ gives $s^*=0.999(3)$ from the\n$E$-approximants and $s^*=0.9987(5)$ from the $\beta$-approximants\nevaluated at $\beta=0.85$ (some details are given in Table \ref{hos}),\nin agreement with the result found on the other lattices. We\nconsidered PA's and Dlog-PA's with $l+m\geq 22$, $m\geq l\geq 10$.\n\nThe analysis of the energy series of $u$ confirms universality:\nPA's and Dlog-PA's (with $l\leq m$,\n$l+m\geq 26$, $l\geq 12$) of the energy series\nevaluated at $E=1$ give $u^*=0.249(3)$,\nand those of the $\beta$-series at $\beta=0.85$ give $u^*=0.2491(3)$.\nAs for the square lattice, the curve $u(E)$ obtained \nfrom the PA's at $N=3$ and the exact curve $u(E)$ \nat $N=\infty$, cfr.\ Eq.~(\ref{omhoin}), \nwould be hardly distinguishable if plotted together.\n\nAs noted above, the convergence radius $\beta_r$ is small in terms of\ncorrelation length for all values of $N$: it goes from $\xi\simeq 1.0$\nat $N=\infty$ to $\xi\simeq 2.6$ at $N=3$. Nevertheless, in this case\nDlog-PA resummations seem to give reasonable estimates of $\xi_G$ even\nbeyond $\beta_r$ (apparently up to about the next-nearest singularity\nto the origin). In Fig.~\ref{asyscho} we show curves of \n$\Lambda_{\rm h}\/ \Lambda_{2l}$, where $\Lambda_{\rm h}$ is the\neffective $\Lambda$-parameter on the honeycomb lattice, for various\nvalues of $N$, $N=3,4,8$, and for comparison the exact curve for\n$N=\infty$. 
The necessary ratio of $\Lambda$-parameters has been\ncalculated in App.~\ref{appex}, cfr.\ Eqs.~(\ref{ratiolho}) and\n(\ref{ratiolho2}).\n\n\n\subsection{Conclusions}\n\label{concl}\n\nWe have shown that quite accurate continuum-limit estimates of\ndimensionless renormalization-group invariant quantities, such as $s$\nand $u$ (cfr.\ Eqs.~(\ref{sdef}) and (\ref{omsqin})), can be obtained\nby analyzing their strong-coupling series and applying resummation\ntechniques both in the inverse temperature variable $\beta$ and in the\nenergy variable $E$. In particular, in order to get continuum\nestimates from the analysis of the energy series, we evaluated the\ncorresponding PA's and Dlog-PA's at $E=1$, i.e., at the continuum\nlimit. This idea was already applied to the calculation of the\ncontinuum limit of the zero-momentum four-point coupling $g_r$,\nobtaining accurate results~\cite{gr}. These results look very\npromising in view of a possible application of such strong-coupling\nanalysis to four-dimensional gauge theories.\n\nThe summary in Table \ref{sum} of our $N=3$ strong-coupling results\nfor the continuum values $r^*$, $s^*$ and $u^*$,\nfor all lattices we have considered, shows that\nuniversality is verified within a precision of a few per mille, leading\nto the final estimates $s^*\simeq 0.9985$ and $u^*\simeq 0.2495$\nwith an uncertainty of about one per mille.\nThe comparison with the exact $N=\infty$ results, $s^*=1$ and\n$u^*=1\/4$, shows that quantities like $s^*$ and $u^*$, which describe the\nsmall-momentum universal behavior of $\widetilde{G}(p)$ in the\ncontinuum limit, change very little and apparently monotonically from\n$N=3$ to $N=\infty$, suggesting that\nat $N=3$ $\widetilde{G}(p)$ is essentially Gaussian at small momentum.\n\nLet us make this statement more precise.\nIn the critical region\none can expand the dimensionless \nrenormalization-group invariant function\n\begin{equation}\nL(p^2\/M_G^2)\equiv 
{\widetilde{G}(0)\over \widetilde{G}(p)}\n\label{elle}\n\end{equation}\naround $y\equiv p^2\/M_G^2=0$, writing\n\begin{eqnarray}\nL(y)&=&1 + y + l(y)\nonumber \\\nl(y)&=&\sum_{i=2}^\infty c_i y^i.\n\label{lexp}\n\end{eqnarray}\n$l(y)$ parameterizes the difference from a generalized Gaussian\npropagator. One can easily relate the coefficients $c_i$ of the\nexpansion (\ref{lexp}) to dimensionless renormalization-group\ninvariant ratios involving the moments $m_{2j}$ of $G(x)$.\n\nIt is worth observing that\n\begin{equation}\nu^* = {1\over 4 (1 - c_2)}.\n\label{uc2}\n\end{equation} \nIn the large-$N$ limit the function $l(y)$ is\nsuppressed by a factor $1\/N$.\nMoreover, the coefficients of its low-momentum expansion are very small.\nThey can be derived from the $1\/N$ expansion of the \nself-energy~\cite{Flyv,CR,CRselfenergy}. \nTo leading order in the $1\/N$ expansion one finds \n\begin{eqnarray}\nc_{2}&\simeq&-{0.00619816...\over N},\nonumber \\ \nc_{3}&\simeq&{0.00023845...\over N},\nonumber \\\nc_{4}&\simeq&-{0.00001344...\over N},\nonumber \\ \nc_{5}&\simeq&{0.00000090...\over N},\n\label{c1N}\n\end{eqnarray}\netc. 
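As a quick numerical check (not part of the paper's analysis), inserting the leading-order value of $c_2$ into Eq.~(\ref{uc2}) at $N=3$ reproduces both the $O(1\/N)$ formula (\ref{largeNu}) and the strong-coupling estimates of $u^*$ quoted above:

```python
# Leading-order 1/N value of c_2 from Eq. (c1N), inserted into Eq. (uc2):
# u* = 1 / (4 (1 - c_2)).  Check at N = 3.
N = 3
c2 = -0.00619816 / N
u_star = 1.0 / (4.0 * (1.0 - c2))            # ~ 0.24948
u_star_largeN = 0.25 * (1.0 - 0.006198 / N)  # Eq. (largeNu), to O(1/N)
```

The two numbers agree with each other to $O(1\/N^2)$, and both are compatible with the strong-coupling estimate $u^*\simeq 0.2495$.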
For sufficiently large $N$ we then expect
\begin{equation}
c_i\ll c_2\ll 1 \;\;\;\;\;\;\;\;\;{\rm for}\;\;\;\; i\geq 3.
\label{crel}
\end{equation}
As a consequence, since
the zero of $L(y)$ closest
to the origin is $y_0=-s^*$, the value of $s^*$
is essentially fixed by the term proportional to
$(p^2)^2$ in the inverse propagator, through the approximate relation
\begin{equation}
s^*-1\simeq c_2 \simeq 4 u^* - 1 .
\label{s4u}
\end{equation}
Indeed, in the large-$N$ limit one finds from Eqs.~(\ref{largeNs}) and
(\ref{largeNu})
\begin{equation}
s^*-4u^*={-0.000252\over N}+O\left( {1\over N^2}\right),
\end{equation}
where the coefficient of the $1/N$ term is much
smaller than those of $s^*$ and $u^*$.

From this large-$N$ analysis one expects
that even at $N=3$ the function $l(y)$ is small in a relatively large
region around $y=0$.
This is confirmed by the strong-coupling estimate of $u^*$,
which, using Eq.~(\ref{uc2}), leads to $c_2 \simeq -0.002$.
Furthermore, the comparison of the estimates of $s^*$ and $u^*$
shows that $s^*-4u^*\simeq 0$ within the precision of our analysis,
consistent with Eq.~(\ref{crel}).
It is interesting to note that similar results have been
obtained for the models with $N\leq 2$,
and in particular for the Ising model, i.e., for $N=1$, where the
strong-coupling analysis turns out to be very precise~\cite{Nm2}.

We can conclude that the two-point Green's function for all $N\geq 3$ is
almost Gaussian in a large region around $p^2=0$, i.e.,
$|p^2/M_G^2|\lesssim 1$, and the small corrections to Gaussian
behavior are essentially determined by the $(p^2)^2$ term in the
expansion of the inverse propagator.

Differences from Gaussian behavior will become important at
sufficiently large momenta, as predicted by simple weak-coupling
calculations supplemented by a renormalization-group resummation.
Indeed, the asymptotic behavior of $G(x)$ for $x\ll 1/M$
(where $M$ is
the mass gap) turns out to be
\begin{equation}
G(x) \sim \left(\ln{1\over xM}\right)^{\gamma_1/b_0}, \qquad
{\gamma_1\over b_0} = {N-1\over N-2} \,;
\end{equation}
$b_0$ and $\gamma_1$ are the first coefficients of the
$\beta$-function and of the anomalous dimension of the fundamental
field $\vec s$, respectively. Let us recall that a free Gaussian
Green's function behaves like $\ln (1/x)$.
Important differences are
present in other Green's functions even at small momentum, as shown by
the analysis of the four-point zero-momentum renormalized coupling,
whose definition involves the zero-momentum four-point correlation
function (\ref{chi4})~\cite{gr}.
However, monotonicity in $N$ seems to be a persistent feature.

Our strong-coupling calculations also allow a check of asymptotic
scaling for a relatively large range of correlation lengths. For all
lattices considered, the ratio between the effective
$\Lambda$-parameter extracted from the mass gap and its two-loop
approximation, $\Lambda/\Lambda_{2l}$, when considered as a function of
$\xi_G$, shows similar patterns as $N$ changes. Confirming earlier
Monte Carlo studies, large discrepancies from asymptotic scaling are
observed for $N=3$ in the range of correlation lengths we could
reliably investigate, i.e., $\xi\lesssim 50$.
At $N=8$ and for all
lattices considered, asymptotic scaling within a few per cent is
verified for $\xi\gtrsim 4$, and as $N$ increases the ratio
$\Lambda/\Lambda_{2l}$ approaches its $N=\infty$ limit smoothly.

\acknowledgments

It is a pleasure to thank B.~Alles
for useful and stimulating discussions.

\section{Quantum integrable systems}

The {\bf 6-vertex model} is a celebrated model of {\bf statistical physics} introduced by Pauling in 1935, which in particular makes it possible
to describe the ice crystal (see \cite{BaxterBook}). It is realized on a lattice in which each
vertex is connected to 4 other vertices. A state of the system is an orientation
of the edges such that exactly 2 arrows point into each vertex (Figure 1).
The arrows represent the orientation of the water molecules of the crystal
relative to one another.
\nIl y a 6 configurations\npossibles \\`a chaque sommet (Figure 2), ce qui justifie l'appellation de ce mod\\`ele.\n~\\begin{center}\n\\begin{figure}\\label{orientation}\n \\hspace{4.5cm} \\epsfig{file=orientation.eps,width=0.3\n \\linewidth}\n \\caption{Une orientation d'un r\\'eseau (mod\\`ele \\`a 6 sommets).}\n\\end{figure}\n\\end{center}\n\\begin{center}\n\\begin{figure}\\label{6vertex}\n \\hspace{1cm} \\epsfig{file=6vertex.eps,width=0.8\\linewidth}\n \\caption{6 configurations possibles \\`a chaque sommet.}\n\\end{figure}\n\\end{center}\n\n\nL'\\'etude du mod\\`ele de la glace est fortement li\\'ee \\`a celle d'un autre mod\\`ele, cette fois-ci en {\\bf physique statistique quantique}, appel\\'e {\\bf mod\\`ele $XXZ$} de Spin $1\/2$, dit de Heisenberg quantique (1928). Il s'agit d'une variante en physique quantique du mod\\`ele d'Ising (1925) (voir \\cite{JM}), qui mod\\'elise des cha\\^ines de spins magn\\'etiques quantiques\nayant deux \\'etats classiques, haut ou bas (Figure 3).\n\\begin{center}\n\\begin{figure}\\label{spin}\n \\hspace{4.5cm} \\epsfig{file=spin2.eps,width=0.3\n \\linewidth\n }\n \\caption{Etats d'un Spin $1\/2$ (haut ou bas).}\n\\end{figure}\n\\end{center}\n\nCes deux mod\\`eles, mod\\`ele \\`a 6 sommets et mod\\`ele $XXZ$, figurent parmi les plus \\'etudi\\'es\nen physiques statistique et quantique. Les structures math\\'ematiques qui les sous-tendent sont tr\\`es proches.\nEn d\\'epit de leur formulation assez \\'el\\'ementaire,\nils sont extr\\^emement riches et leur analyse a une tr\\`es longue histoire.\n\nEn physique statistique (quantique), le comportement du syst\\`eme est contr\\^ol\\'e par la {\\bf fonction de partition} $\\mathcal{Z}$ \\footnote{En physique statistique, la fonction de partition s'exprime comme la somme $\\sum_j \\text{exp}(-E_j\/(k_BT))$ sur tous les \\'etats $j$ du syst\\`eme, o\\`u $E_j$ est l'\\'energie de l'\\'etat $j$, $T$ est la temp\\'erature du syst\\`eme et $k_B$ la constante de Boltzmann. 
En physique quantique, la somme est remplac\\'ee par une trace $\\text{Tr}_W(\\text{exp}(-E\/(k_BT)))$ o\\`u $E$ est l'op\\'erateur \\og hamiltonien\\fg{} qui agit sur l'espace $W$ des \\'etats quantiques du syst\\`eme.}, qui permet d'obtenir les grandeurs mesurables\\footnote{Une grandeur mesurable $Q$ est obtenue comme moyenne pond\\'er\\'ee sur les \\'etats $\\frac{\\sum_j \\text{exp}(-E_j\/(k_BT)) Q_j}{\\mathcal{Z}}$ des valeurs $Q_j$ sur chaque \\'etat $j$.}. Cette fonction\n$\\mathcal{Z}$ est tr\\`es difficile \\`a calculer en g\\'en\\'eral. La m\\'ethode de la {\\bf matrice de transfert}\nest un proc\\'ed\\'e pour tenter de la d\\'eterminer : il s'agit d'\\'ecrire $\\mathcal{Z}$ comme trace\nd'un op\\'erateur $\\mathcal{T}$ (la matrice de transfert) agissant sur l'{\\bf espace des \\'etats} $W$ : \n$$\\mathcal{Z} = \\on{Tr}_W (\\mathcal{T}^M).$$\nIci $M$ est un entier associ\\'e \\`a la taille du r\\'eseau du mod\\`ele.\nAinsi, pour trouver $\\mathcal{Z}$, il suffit d'obtenir les valeurs propres $\\lambda_j$ de $\\mathcal{T}$ :\n$$\\mathcal{Z} = \\sum_j \\lambda_j^M.$$\nLe spectre $\\{ \\lambda_j \\}_j$ de $\\mathcal{T}$ est appel\\'e le {\\bf spectre du syst\\`eme quantique}.\n\nDans un c\\'el\\`ebre article s\\'eminal de 1971, inspir\\'e notamment par les travaux de Bethe (1931), Baxter \\cite{Baxter} a compl\\`etement r\\'esolu ce probl\\`eme\\footnote{Baxter a introduit la m\\'ethode puissante des \\og Q-op\\'erateurs\\fg{} qui lui a \\'egalement permis de r\\'esoudre le mod\\`ele \\og \\`a 8 sommets\\fg, plus complexe. Le mod\\`ele \\`a 6 sommets avait aussi \\'et\\'e r\\'esolu par d'autres m\\'ethodes, notamment dans les travaux de Lieb et Sutherland (1967).}. 
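Before turning to Baxter's formula, the two elementary ingredients above, the ice rule of Figure 2 and the identity $\mathcal{Z} = \on{Tr}_W(\mathcal{T}^M) = \sum_j \lambda_j^M$, can be illustrated by a toy computation (not from the article; the matrix below is random, not a physical transfer matrix):

```python
from itertools import product
import numpy as np

# Ice rule: of the 4 edges at a vertex, exactly 2 arrows point in and 2 point out
# (+1 = arrow toward the vertex, -1 = away from it).
configs = [c for c in product((+1, -1), repeat=4) if sum(c) == 0]
assert len(configs) == 6   # the 6 vertex configurations of Figure 2

# Partition function via a transfer matrix: for ANY matrix T,
# Tr(T^M) equals the sum of the M-th powers of its eigenvalues.
rng = np.random.default_rng(0)
T = rng.random((4, 4))     # toy "transfer matrix" on a 4-dimensional space of states
M = 5                      # lattice extent
Z_trace = np.trace(np.linalg.matrix_power(T, M))
Z_eigen = np.sum(np.linalg.eigvals(T) ** M)
assert abs(Z_trace - Z_eigen) < 1e-8
```

The second assertion is exactly the reduction of the computation of $\mathcal{Z}$ to the spectrum of $\mathcal{T}$ used throughout what follows.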
Through a very precise study, Baxter showed in particular
that the eigenvalues $\lambda_j$ of $\mathcal{T}$ have a quite remarkable structure: they
can be written in the form
\begin{equation} \label{relB}
\lambda_j = A(z) \frac{Q_j(zq^2)}{Q_j(z)} + D(z) \frac{Q_j(zq^{-2})}{Q_j(z)},
\end{equation}
where $z,q\in\CC^*$ are parameters of the model (the spectral and quantum parameters, respectively), and
$A(z)$ and $D(z)$ are ``universal'' functions (in the sense that they do not depend on the eigenvalue
$\lambda_j$). The function $Q_j(z)$ depends on the eigenvalue, but it is a polynomial.
Relation (\ref{relB}) is the famous {\bf Baxter relation} (or ``Baxter $TQ$ relation'').
The polynomials $Q_j$ are called {\bf Baxter polynomials}.

\medskip

The following questions then arise naturally:

- Is there an explanation for the existence of the Baxter relation?

- Does an analogous expression in terms of polynomials describe the spectrum of other quantum systems?

\medskip

A conjecture formulated in 1998 by Frenkel-Reshetikhin \cite{Fre} asserts that the second question should have a positive answer. Since one cannot hope to carry out in general the detailed computation of Baxter known for the $XXZ$ model, it is by answering the first question that we can prove this conjecture. To do so, let us study the underlying mathematical, algebraic structures of the theory.

\section{Quantum groups and their representations}

{\bf Quantum groups} gradually emerged during the 1970s, in particular in the work of the Leningrad school, as the natural mathematical framework for studying transfer matrices.
Drinfeld \cite{Dri} and Jimbo \cite{J} independently discovered a uniform algebraic formulation in terms of {\bf Hopf algebras}. This is one of the results cited for Drinfeld's Fields Medal in 1990.

To introduce the Drinfeld-Jimbo quantum groups, let us first consider a very classical object, a finite-dimensional complex (simple) {\bf Lie algebra}. This is a finite-dimensional vector space $\mathfrak{g}$ equipped with a {\bf Lie bracket}, that is, an antisymmetric bilinear map
$$[,]:\mathfrak{g}\times \mathfrak{g}\rightarrow \mathfrak{g}$$
satisfying the Jacobi identity
$$[x,[y,z]] + [y,[z,x]] + [z,[x,y]] = 0\text{ for all }x,y,z\in\mathfrak{g}.$$
The simplest example, which is nevertheless nontrivial since it corresponds to the $XXZ$ model, is the Lie algebra $\mathfrak{g} = sl_2$: the space of complex $2\times 2$ matrices of trace zero, equipped with the natural bracket
$$[M,N] = MN - NM,$$
under which it is clearly stable.
For the linear generators
$$E = \begin{pmatrix}0&1\\0&0\end{pmatrix}\text{ , }F = \begin{pmatrix}0&0\\1&0\end{pmatrix}
\text{ , }H = \begin{pmatrix}1&0\\0&-1\end{pmatrix},$$
one has for instance the relation
\begin{equation}\label{crochet}[E,F] = H.\end{equation}

These Lie algebras have natural infinite-dimensional analogues, the {\bf loop algebras}
$$\hat{\Glie} = \Glie \otimes \CC[t^{\pm 1}],$$
with the Lie bracket defined by
$$[x\otimes f(t),y\otimes g(t)] = [x,y]\otimes (fg)(t)\text{ for $x,y\in\Glie$ and $f(t),g(t)\in \CC[t^{\pm 1}]$},$$
which amounts to replacing the field $\CC$ by the ring of complex Laurent polynomials
$$\CC[t^{\pm 1}] = \left\{\sum_{N\leq i\leq M} a_i t^i| N,M\in\ZZ, a_i\in\CC\right\}.$$
These algebras are quotients of {\bf affine Kac-Moody algebras}, which have algebraic properties similar to those of finite-dimensional simple Lie algebras (in particular a presentation analogous to Serre's presentation of $\Glie$, as shown by Kac (1968) and Moody (1969); see \cite{ka}). They have been studied intensively for their various applications in mathematics and mathematical physics.

Now, in order to study the quantum systems we are interested in, these classical Lie algebras must be {\bf quantized}, that is, deformed taking into account the quantum parameter
$$q = \text{exp}(h)\in\CC^*,$$
where $h$ is an analogue of Planck's constant ($q$ will indeed be identified with the quantum parameter of relation (\ref{relB})).
One recovers the classical structures as $h\rightarrow 0$, hence $q\rightarrow 1$.
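As a quick sanity check (not from the article), relation (\ref{crochet}) and the other $sl_2$ bracket relations for the generators $E$, $F$, $H$ above can be verified numerically; a minimal sketch:

```python
import numpy as np

# The sl_2 generators written above
E = np.array([[0, 1], [0, 0]])
F = np.array([[0, 0], [1, 0]])
H = np.array([[1, 0], [0, -1]])

def bracket(M, N):
    """Lie bracket [M, N] = MN - NM."""
    return M @ N - N @ M

assert np.array_equal(bracket(E, F), H)        # [E, F] = H, relation (crochet)
assert np.array_equal(bracket(H, E), 2 * E)    # [H, E] = 2E
assert np.array_equal(bracket(H, F), -2 * F)   # [H, F] = -2F

# The Jacobi identity on the three generators
J = bracket(E, bracket(F, H)) + bracket(F, bracket(H, E)) + bracket(H, bracket(E, F))
assert np.array_equal(J, np.zeros((2, 2)))
```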
{\it In what follows, we assume that $q$ is not a root of unity.}

Although no such quantization of the Lie algebras $\Glie$ or $\hat{\Glie}$ themselves is known, Drinfeld and Jimbo discovered that there exists a natural quantization of their respective {\bf enveloping algebras} $\mathcal{U}(\Glie)$ and $\mathcal{U}(\hat{\Glie})$ (universal algebras defined from the Lie algebras, for instance by replacing the brackets $[x,y]$ in Serre's presentation by the algebraic expressions $xy - yx$ in the algebra). One then obtains the quantum groups $\mathcal{U}_q(\Glie)$ and $\mathcal{U}_q(\hat{\Glie})$, which depend\footnote{They can be defined as algebras over $\CC[[h]]$.} on the quantum parameter $q$; see \cite{CP}.

For example, in $\mathcal{U}_q(sl_2)$ relation (\ref{crochet}) becomes
$$[E,F] = \frac{e^{hH} - e^{-hH}}{q - q^{-1}},$$
which indeed tends to $H$ as $h$ tends to $0$.

The case of the {\bf quantum affine algebras} $\mathcal{U}_q(\hat{\Glie})$ is particularly remarkable, since Drinfeld \cite{Dri2} proved\footnote{The proof was later made precise by Beck and then by Damiani.} that they can not only be obtained as quantizations of $\mathcal{U}(\hat{\Glie})$, but also, by another procedure, as {\bf affinizations} of the quantum group $\mathcal{U}_q(\Glie)$. This is the {\bf Drinfeld realization} of quantum affine algebras.
This can be stated in the following ``commutative'' diagram:
$$\xymatrix{ &\hat{\Glie}\ar@{-->}[dr]^{\text{Quantization}}&
\\ \Glie \ar@{-->}[ur]^{\text{Affinization}}\ar@{-->}[dr]_{\text{Quantization}}& & \U_q(\hat{\Glie})
\\ & \U_q(\Glie)\ar@{-->}[ur]_{\text{Quantum affinization}}& }$$
This theorem, which amounts to giving two isomorphic presentations of $\mathcal{U}_q(\hat{\Glie})$, is a quantum analogue of the classical theorem of Kac and Moody. It is a good indication of the importance of these algebras from an algebraic point of view.

The quantum affine algebras $\U_q(\hat{\Glie})$ in fact have a much richer structure: they are Hopf algebras. In particular, they are equipped with a {\bf comultiplication} (an operation dual to the multiplication), that is, an algebra morphism
\begin{equation}\label{coproduit}\Delta : \U_q(\hat{\Glie}) \rightarrow \U_q(\hat{\Glie})\otimes \U_q(\hat{\Glie}).\end{equation}
Most importantly, $\U_q(\hat{\Glie})$ possesses a {\bf universal $R$-matrix}, that is, a (canonical) element of the tensor square\footnote{More precisely, of a slight completion of the tensor square.}
$$\mathcal{R}(z) \in (\U_q(\hat{\Glie})\otimes \U_q(\hat{\Glie}))[[z]]$$
which is in particular a solution of the {\bf quantum Yang-Baxter equation}:
$$\mathcal{R}_{12}(z)\mathcal{R}_{13}(zw)\mathcal{R}_{23}(w) = \mathcal{R}_{23}(w)
\mathcal{R}_{13}(zw) \mathcal{R}_{12}(z).$$
The formal parameters $z$ and $w$ are called {\bf spectral parameters}.
This equation takes values in the tensor cube
$$(\U_q(\hat{\Glie}))^{\otimes 3}[[z, w]].$$
The indices indicate in which factors the terms of the universal $R$-matrix are placed:
$$\mathcal{R}_{12}(z) = \mathcal{R}(z)\otimes 1\text{ , }\mathcal{R}_{23}(z) = 1 \otimes \mathcal{R}(z)...$$
This is a highly nontrivial equation, related to braid moves. Indeed, in Figure 4 one recovers the equation by reading from bottom to top and multiplying
by a factor $\mathcal{R}_{\alpha\beta}$ with indices $(\alpha,\beta)$ whenever strand $\alpha$ crosses strand $\beta$. This is why the representation theory of quantum groups produces invariants in low-dimensional topology (notably the Jones polynomials of knots). Together with the construction by Lusztig and Kashiwara of canonical bases of representations of classical Lie algebras, this was historically one of the first great successes of the theory of quantum groups. We will not discuss these topics here, in order to concentrate on the applications to quantum systems.
 \begin{center}
\begin{figure}\label{tresse}
 \hspace{4cm}
 {\epsfig{file=tresse.eps,width=.4\linewidth}
}
 \caption{The Yang-Baxter equation}
\end{figure}
\end{center}

To describe solutions of the quantum Yang-Baxter equation, one can specialize to finite-dimensional {\bf representations} of $\U_q(\hat{\Glie})$.
A (linear) representation of $\U_q(\hat{\Glie})$ is a (here complex) vector space $V$ equipped with an algebra morphism
$$\rho_V : \U_q(\hat{\Glie}) \rightarrow \text{End}(V).$$
In other words, the algebra $\U_q(\hat{\Glie})$ acts on the space $V$ by linear operators.

The study of representations is a vast field, central in mathematics, called {\bf representation theory}.
In arithmetic, for example, representations
of Galois groups play a crucial role. They are also essential in the very formulation of the principles of quantum physics, which involve representations
of the algebra of observables.

One naturally defines the {\bf direct sum of representations} $(V,\rho_V)$ and $(V',\rho_{V'})$ via the map $\rho_{V\oplus V'} = \rho_V + \rho_{V'}$ with values
in $\text{End}(V \oplus V')$.

The {\bf simple representations}, that is, those with no proper subrepresentation (subspace stable under the action of the algebra), are particularly important, as we shall see in our study.
They are the ``elementary building blocks'' of representation theory. For example, every finite-dimensional representation of $\U_q(\Glie)$ is {\bf semisimple}, that is, isomorphic to a direct
sum of simple representations\footnote{This result, proved by M. Rosso and G. Lusztig, is a quantum analogue of the classical theorem of Weyl asserting that every finite-dimensional representation of $\U(\Glie)$ is semisimple.}.
This is no longer the case\footnote{However, every finite-dimensional representation $V$ of $\U_q(\hat{\Glie})$ admits a Jordan-H\"older filtration by subrepresentations $V_0 = V\supset V_1\supset V_2 \cdots \supset V_N = \{0\}$ with the $V_i/V_{i+1}$ simple.} for the quantum affine algebra $\U_q(\hat{\Glie})$.


Since $\U_q(\hat{\Glie})$ is equipped with a coproduct (\ref{coproduit}), for two representations $(V,\rho_V)$ and $(V',\rho_{V'})$ the tensor product $V\otimes V'$
is again a representation, via
$$\rho_{V\otimes V'} = (\rho_V\otimes \rho_{V'})\circ \Delta : \U_q(\hat{\Glie})\rightarrow \text{End}(V)\otimes \text{End}(V') = \text{End}(V\otimes V').$$
This action on a {\bf tensor product of representations} will be useful later. Independently, one can also let the universal $R$-matrix
act directly on a tensor square $V\otimes V$, for $V$ a finite-dimensional representation of $\U_q(\hat{\Glie})$: one can indeed consider the image of the universal $R$-matrix in $\text{End}(V^{\otimes 2})(z)$:
$$\mathcal{R}_{V,V}(z) = (\rho_V\otimes \rho_V)(\mathcal{R}(z))\in \text{End}(V)^{\otimes 2}[[z]] = \text{End}(V^{\otimes 2})[[z]].$$
One again obtains a solution of the quantum Yang-Baxter equation, called an {\bf $R$-matrix}, but now in the finite-dimensional algebra $\text{End}(V^{\otimes 2})[[z]]$.

For example, in the case $\Glie = sl_2$, the quantum affine algebra $\U_q(\hat{sl_2})$ has a $2$-dimensional representation, called the {\bf fundamental representation}
and denoted $V_1$. Through the procedure described above, it produces the following $R$-matrix\footnote{The explicit solution of the Yang-Baxter equation given here is the ``normalized'' $R$-matrix, obtained by multiplying $\mathcal{R}_{V_1, V_1}(z)$ by a suitable scalar function of $z$. One can observe that its coefficients are meromorphic functions of $z$. This is a general phenomenon; see \cite{efk}.} in $\text{End}(V_1^{\otimes 2})[[z]]$, with $V_1^{\otimes 2}$ of dimension $4$:
$$\begin{pmatrix}1&0&0&0\\ 0&\frac{q^{-1}(z-1)}{z-q^{-2}}&\frac{z(1 - q^{-2})}{z-q^{-2}}&0\\0&\frac{1-q^{-2}}{z - q^{-2}}&\frac{q^{-1}(z - 1)}{z - q^{-2}}&0\\0&0&0&1\end{pmatrix}.$$
This is the $R$-matrix associated with the $XXZ$ model. But the theory of quantum groups produces many others, as one varies the Lie algebra $\Glie$ or the representation $V$.
They correspond to as many quantum systems.

The {\bf transfer matrix} $\mathcal{T}_V(z)$ is then defined by taking the partial trace over the representation, that is,
\begin{equation}\label{transfer}
\mathcal{T}_V(z) = ((\on{Tr}_V \circ \rho_V) \otimes \on{id})({\mathcal{R}(z)})\in \U_q(\hat{\Glie})[[z]].
\end{equation}
The representation $V$ used to build the transfer matrix $\mathcal{T}_V(z)$ is called the {\bf auxiliary space}.
As a consequence of the Yang-Baxter equation, the transfer matrices commute: for another representation $V'$ one has
$$\mathcal{T}_V(z)\mathcal{T}_{V'}(z') = \mathcal{T}_{V'}(z')\mathcal{T}_V(z)\text{ in }\U_q(\hat{\Glie})[[z,z']].$$
Thus the coefficients $\mathcal{T}_V[N]$ of the transfer matrices, defined by
$$\mathcal{T}_V(z) = \sum_{N\geq 0}z^N \mathcal{T}_V[N],$$
generate a commutative subalgebra of $\U_q(\hat{\Glie})$.

Let us now fix another finite-dimensional representation $W$ of $\U_q(\hat{\Glie})$, called the {\bf space of states}. The coefficients $\mathcal{T}_V[N]$ of the transfer matrices thus act on $W$ as a large commuting family of operators.
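As a concrete check (not from the article), one can verify numerically that the explicit $4\times 4$ $XXZ$ $R$-matrix displayed above does satisfy the quantum Yang-Baxter equation in $\text{End}((\CC^2)^{\otimes 3})$; the test values of $z$, $w$, $q$ below are arbitrary generic choices:

```python
import numpy as np

def R(z, q):
    """The normalized R-matrix of the XXZ model displayed above."""
    d = z - q**-2
    b = q**-1 * (z - 1) / d      # diagonal weight
    c = z * (1 - q**-2) / d      # upper off-diagonal weight
    ct = (1 - q**-2) / d         # lower off-diagonal weight
    return np.array([[1, 0,  0, 0],
                     [0, b,  c, 0],
                     [0, ct, b, 0],
                     [0, 0,  0, 1]], dtype=complex)

I2 = np.eye(2)
# Permutation of the two factors of C^2 (x) C^2, used to build R_13
P = np.array([[1, 0, 0, 0],
              [0, 0, 1, 0],
              [0, 1, 0, 0],
              [0, 0, 0, 1]], dtype=complex)

def R12(z, q): return np.kron(R(z, q), I2)
def R23(z, q): return np.kron(I2, R(z, q))
def R13(z, q):
    P23 = np.kron(I2, P)       # conjugating R12 by the swap of factors 2,3
    return P23 @ R12(z, q) @ P23

z, w, q = 0.7, 1.9, 1.3        # arbitrary generic test values
lhs = R12(z, q) @ R13(z * w, q) @ R23(w, q)
rhs = R23(w, q) @ R13(z * w, q) @ R12(z, q)
assert np.max(np.abs(lhs - rhs)) < 1e-12
```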
Thus it makes sense to speak of the eigenvalues of the transfer matrices $\mathcal{T}_V(z)$ on $W$.

In the particular case of the $XXZ$ model, recall that ${\mathfrak g} = sl_2$ and $V = V_1$ is a $2$-dimensional fundamental representation.
The space of states $W$ is a tensor product of $2$-dimensional fundamental representations, and the image of the operator $\mathcal{T}_{V_1}(z)$
in $\text{End}(W)[[z]]$ is indeed Baxter's transfer matrix. Baxter's results thus give the structure of the spectrum of $\mathcal{T}_{V_1}(z)$
on $W$ in this case.

\medskip

What can be said in general?

\section{The quantum spectrum conjecture}

In 1998 \cite{Fre}, E. Frenkel and N. Reshetikhin proposed a new approach aimed at generalizing Baxter's formulas.

To this end, they introduced the {\bf $q$-character} $\chi_q(V)$ of a finite-dimensional representation $V$ of $\U_q(\hat{\Glie})$. It is a Laurent polynomial
with integer coefficients in indeterminates $Y_{i,a}$ ($1\leq i\leq n$, $a\in\CC^*$):
$$\chi_q(V) \in \ZZ[Y_{i,a}^{\pm 1}]_{1\leq i\leq n, a\in\CC^*}.$$
The integer $n$ here is the {\bf rank} of the Lie algebra $\Glie$; for instance it indeed equals $n$ for $\Glie = sl_{n+1}$. The definition of the $q$-character of $V$ relies
on a decomposition of $V$ into Jordan subspaces\footnote{For a commuting family of operators on $V$, obtained from the Drinfeld realization of $\U_q(\hat{\mathfrak{g}})$ and in general distinct from the coefficients of the transfer matrices.} $V_m$ parameterized by monomials $m$ in the variables $Y_{i,a}^{\pm 1}$:
$$V = \bigoplus_m V_m.$$
The $q$-character encodes the dimensions of this decomposition.
It is defined by
$$\chi_q(V) = \sum_m \text{dim}(V_m)\, m.$$
Thus the coefficients of $\chi_q(V)$ are in fact positive, and their sum is the dimension of $V$.

For example, for $\Glie = sl_2$ and $V = V_1$ the $2$-dimensional fundamental representation,
\begin{equation}\label{qcar}
\chi_q(V) = Y_{1,q^{-1}} + Y_{1,q}^{-1}.
\end{equation}
In this case there are thus two $1$-dimensional Jordan subspaces, associated with the respective monomials $Y_{1,q^{-1}}$ and $Y_{1,q}^{-1}$:
$$V = V_{Y_{1,q^{-1}}} \oplus V_{Y_{1,q}^{-1}}.$$

The {\bf quantum spectrum conjecture} of Frenkel and Reshetikhin \cite{Fre} predicts\footnote{In particular cases, an analogous conjecture had been formulated by N. Reshetikhin \cite{R3}; V. Bazhanov and N. Reshetikhin
\cite{BR}; and A. Kuniba and J. Suzuki \cite{KS}.} that for a given finite-dimensional representation $V$,
the eigenvalues $\lambda_j$ of $\mathcal{T}_V(z)$ on a simple representation\footnote{More generally, $W$ may be a tensor product of simple representations.} $W$ are obtained in the following way:
in the $q$-character $\chi_q(V)$ of $V$, each formal variable $Y_{i,a}$ is replaced by\footnote{To simplify the exposition, we assume in what follows that $\Glie$
is simply laced (this is the case in particular for the Lie algebras $sl_{n+1}$).}
$$F_{i}(az)\, q^{\text{deg}(Q_{i,j})}\, \frac{Q_{i,j}(zaq^{-1})}{Q_{i,j}(zaq)},$$
where $F_{i}(z)$ is a universal function, in the sense that it does not depend on the eigenvalue $\lambda_j$, and $Q_{i,j}(z)$ depends on the eigenvalue $\lambda_j$ but is a polynomial.
It is the analogue of the Baxter polynomial.

Note that it is indeed the $q$-character {\it of the auxiliary space} $V$ that is used to write the formula for the spectrum of the transfer matrix {\it on the space of states} $W$.

In the particular case of the $XXZ$ model, from (\ref{qcar}) one obtains the formula
$$\lambda_j = F_{1}(zq^{-1}) q^{\text{deg}(Q_{1,j})} \frac{Q_{1,j}(zq^{-2})}{Q_{1,j}(z)} + (F_1(zq))^{-1} q^{-\text{deg}(Q_{1,j})} \frac{Q_{1,j}(zq^2)}{Q_{1,j}(z)}.$$
Thus the conjecture is indeed compatible with Baxter's formula (\ref{relB}), upon identifying
$$A(z) = (D(zq^2))^{-1} = (F_1(zq))^{-1}q^{-\text{deg}(Q_{1,j})}.$$
One can work out, for example, the case where the space of states $W\simeq V_1$ is of dimension $2$. There are then $2$ eigenvalues $\lambda_0$ and $\lambda_1$. The universal function is
$$F_1(z) = q^{1/2}\text{exp}\left( \sum_{r > 0} \frac{z^r (q^{-r} - q^r)}{r(q^r + q^{-r})}\right),$$
and the Baxter polynomials are
$$Q_{1,0}(z) = 1\text{ and }Q_{1,1}(z) = 1 - z(1 + q + q^2).$$
One thus obtains the spectrum
$$\lambda_0 = F_1(zq^{-1})\left(1 + q^{-3}\frac{1 - z^{-1}}{1 - z^{-1}q^{-2}}\right),$$
\begin{equation}\label{bethe}\lambda_1 = F_1(zq^{-1}) \left(q\frac{1-z(1+q^{-1} + q^{-2})}{1 - z(1 + q + q^2)} + q^{-4}\frac{(1 - z^{-1})(1 - z(q^2 + q^3+ q^4))}{(1 - z^{-1}q^{-2})(1 - z(1 + q + q^2))}\right).\end{equation}

In general the formula can have more than two terms.
For example, in the case of a certain $3$-dimensional fundamental representation $V$ of $\U_q(\hat{sl_3})$, the $q$-character is
\begin{equation}\label{sl3}\chi_q(V) = Y_{1,q^{-1}} + Y_{1,q}^{-1}Y_{2,1} + Y_{2,q^2}^{-1},\end{equation}
and the formula for the spectrum is
$$F_1(zq^{-1}) q^{\text{deg}(Q_{1,j})}\frac{Q_{1,j}(zq^{-2})}{Q_{1,j}(z)} +\frac{F_2(z) q^{\text{deg}(Q_{2,j})}}{F_1(zq)q^{\text{deg}(Q_{1,j})}} \frac{Q_{1,j}(zq^2)Q_{2,j}(zq^{-1})}{Q_{1,j}(z)Q_{2,j}(zq)} + \frac{q^{-\text{deg}(Q_{2,j})}}{(F_2(zq^2))^{-1}} \frac{Q_{2,j}(zq^3)}{Q_{2,j}(zq)}.$$

Note that in general the simple finite-dimensional representations $V$ can have a ``very large'' dimension. For example, H. Nakajima obtained (with the help of a supercomputer, building on \cite{Nak}) that in the case of the exceptional Lie algebra of type $E_8$, one of the fundamental representations has a $q$-character with $6899079264$ monomials, requiring a file of $180$ GB to be written down. There are thus as many terms in the corresponding Baxter formula. And the fundamental representations are the simple representations of lowest dimensions.

Approaching this conjecture through explicit computation is therefore out of the question in general. Moreover, even though the simple finite-dimensional representations of $\U_q(\hat{\Glie})$ have been studied intensively over the last twenty-five years, no general formula is known for their $q$-characters, nor in fact even for their dimensions.

New structures are therefore needed to attack the quantum spectrum conjecture.

Our proof with E. Frenkel \cite{FH} of the quantum spectrum conjecture thus relies on
new ingredients, of which we give a brief overview in the following sections.

\section{Prefundamental representations}

The general idea of the proof is to interpret the $Q_i$ themselves as eigenvalues
of new transfer matrices, built not from finite-dimensional representations $V$, but from infinite-dimensional representations called {\bf prefundamental representations}
$L_{i,a}^+$, where $1\leq i\leq n$ and $a\in\CC^*$.

We had previously constructed these prefundamental representations with M. Jimbo \cite{HJ} in a somewhat different context. They are not representations of the whole algebra $\U_q({\hat{\mathfrak g}})$, but of a certain subalgebra, the {\bf Borel subalgebra}
$$\U_q(\hat{\mathfrak{b}})\subset \U_q({\hat{\mathfrak g}}).$$
This poses no problem, however, for constructing the transfer matrix $\mathcal{T}_{i,a}(z)$ associated with the prefundamental representation $L_{i,a}$ by formula (\ref{transfer}), since it is known that precisely the ``left'' part of the universal $R$-matrix (the one to which $\rho_{L_{i,a}}$ is applied) lies in the Borel subalgebra\footnote{One cannot, however, take the trace over an infinite-dimensional space. One uses a natural grading of $L_{i,a}$ by finite-dimensional spaces (the weight spaces). Thus, in what follows, the traces, transfer matrices, etc.\ are ``twisted'' by this grading.}:
$$\mathcal{T}_{i,a}(z) = ((\on{Tr}_{L_{i,a}} \circ \rho_{L_{i,a}}) \otimes \on{id})({\mathcal{R}(z)})\in \U_q(\hat{\Glie})[[z]].$$
It is then not difficult to show, using a certain automorphism of $\U_q(\hat{\mathfrak{b}})$, that
$$\mathcal{T}_{i,a}(z) = \mathcal{T}_i(az)\text{, where }\mathcal{T}_i(z) = \mathcal{T}_{i,1}(z).$$

For the case of the $XXZ$ model, that is, for $\Glie = sl_2$, V. Bazhanov, S. Lukyanov and A. Zamolodchikov had already constructed ``by hand'' a prefundamental representation (called the $q$-oscillator representation) and the associated transfer matrix (called Baxter's $Q$-operator) in the important article \cite{BLZ}.

To establish the existence of prefundamental representations in general \cite{HJ}, once again one cannot proceed by explicit computation: the crucial point is to consider inductive systems\footnote{The inclusions $L_k\subset L_{k+1}$, constructed using tensor products of subspaces \cite{h3}, are not compatible with the action of the whole of $\U_q(\hat{\Glie})$, but with that of a subalgebra $\U_q^+(\hat{\mathfrak{b}})$ of $\U_q(\hat{\mathfrak{b}})$.} of simple finite-dimensional representations $L_k$ (the Kirillov-Reshetikhin representations), of strictly increasing dimension with $k\geq 0$, and to determine in what sense the action of the Borel subalgebra $\U_q(\hat{\mathfrak{b}})$ ``converges'' on the inductive limit $L_\infty$, which is infinite-dimensional:
$$L_0\subset L_1\subset L_2\subset \cdots \subset L_k\subset L_{k+1}\subset \cdots \subset L_{\infty}.$$
This is thus an asymptotic construction of the prefundamental representations.

Using certain filtrations of the prefundamental representation $L_{i,a}$, we establish that indeed, up to a universal scalar factor $f_i(z)$, the associated transfer matrix $\mathcal{T}_i(z)$ acts on the space of states $W$ by a polynomial operator:
$$\rho_W(\mathcal{T}_i(z)) \in f_i(z) \times (\text{End}(W))[z].$$
It is not difficult to write an explicit formula for the universal scalar function $f_i(z)\in\CC[[z]]$ (it depends only on $V$ and on $W$). It is much more delicate to obtain information on the polynomial part
$$(f_i(z))^{-1}\rho_W(\mathcal{T}_i(z)) \in (\text{End}(W))[z].$$

Just as the usual transfer matrices commute, one has
$$\mathcal{T}_i(z)\mathcal{T}_i(z') = \mathcal{T}_i(z')\mathcal{T}_i(z),$$
and hence one obtains a commuting family $\mathcal{T}_i[m]$ by writing
$$\mathcal{T}_i(z) = \sum_{m\geq 0}\mathcal{T}_i[m]z^m.$$
Using simultaneous triangularization, this commutativity implies that the eigenvalues on $W$ of $(f_i(z))^{-1}\mathcal{T}_i(z)$ are themselves also polynomials.

\section{Grothendieck ring and Baxter relations}

Finally, one must show that the eigenvalues of the transfer matrix $\mathcal{T}_V(z)$
are expressed, as predicted by the conjecture, in terms of the eigenvalues of the $\mathcal{T}_i(z)$
according to the $q$-character of $V$. In other words, replacing in $\chi_q(V)$ each
variable $Y_{i,a}$ by the quotient\footnote{This quotient must in fact be multiplied by the transfer matrix of a $1$-dimensional representation, which we omit in what follows to simplify the exposition.}
$$\mathcal{T}_i(azq^{-1})/\mathcal{T}_i(azq),$$
does one obtain the transfer matrix $\mathcal{T}_V(z)$?

In the case ${\mathfrak g} = \wh{sl}_2$, with $V$ the $2$-dimensional representation of the $XXZ$ model, a computation \cite{BLZ} gives the result.
On a bien : \n$$\\mathcal{T}_V(z) = \\frac{\\mathcal{T}_1(zq^{-1})}{\\mathcal{T}_1(zq)} + \\frac{\\mathcal{T}_1(zq^3)}{\\mathcal{T}_1(zq)}.$$\n\nEn g\\'en\\'eral, nous proposons d'utiliser la {\\bf cat\\'egorie $\\mathcal{O}$} que nous avons d\\'efinie avec M. Jimbo \\cite{HJ}. Il s'agit d'une {\\bf cat\\'egorie mono\\\" idale} (stable par produits tensoriels) de repr\\'esentations de l'alg\\`ebre de Borel $\\U_q(\\hat{\\mathfrak{b}})$, contenant les repr\\'esentations de dimension finie ainsi que les repr\\'esentations pr\\'efondamentales. \nNous {\\bf cat\\'egorifions} les relations de Baxter g\\'en\\'eralis\\'ees, c'est-\\`a-dire que nous les exprimons en termes de la cat\\'egorie $\\mathcal{O}$. Pour ce faire, on peut d\\'efinir l'{\\bf anneau de Grothendieck} $K(\\mathcal{O})$ de cette cat\\'egorie. En tant que groupe, il s'agit du groupe libre engendr\\'e par les classes d'isomorphismes de repr\\'esentations simples :\n$$K(\\mathcal{O}) = \\bigoplus_{[V]\\text{ Classe d'un simple dans }\\mathcal{O}.} \\ZZ [V].$$\nAlors tout objet (non n\\'ecessairement simple) de $\\mathcal{O}$ a une image dans $K(\\mathcal{O})$ en imposant la relation\n$$[V''] = [V] + [V']$$\nsi on a une suite exacte dans la cat\\'egorie\n$$0\\rightarrow V \\rightarrow V''\\rightarrow V'\\rightarrow 0.$$\nOn peut alors munir $K(\\mathcal{O})$ d'une structure d'anneau par la relation\n$$[V\\otimes V'] = [V][V']$$\npour des objets $V$, $V'$ de la cat\\'egorie $\\mathcal{O}$.\n\nUn des th\\'eor\\`emes principaux de \\cite{FH} est qu'en rempla\\c cant dans $\\chi_q(V)$ chaque\nvariable $Y_{i,a}$ par le quotient \n$$\\frac{[L_{i,aq^{-1}}]}{[L_{i,aq}]},$$ \nen repla\\c cant $\\chi_q(V)$ par $[V]$ puis en \\og chassant\\fg{} les d\\'enominateurs, on obtient une relation dans l'anneau de Grothendieck $K(\\mathcal{O})$.\n\nPar exemple, dans notre cas favori du mod\\`ele $XXZ$, on obtient\n$$[V] = \\frac{[L_{1,q^{-1}}]}{[L_{1,q}]} + \\frac{[L_{1,q^3}]}{[L_{1,q}]}$$\nqui donne la relation de Baxter 
cat\\'egorifi\\'ee dans l'anneau de Grothendieck\n$$[V][L_{1,q}] = [V\\otimes L_{1,q}] = [L_{1,q^{-1}}] + [L_{1,q^3}].$$\nEn g\\'en\\'eral on obtient des relations avec plus de termes, comme dans l'exemple pour $\\Glie = sl_3$ ci-dessus pour lequel la formule (\\ref{sl3}) donne\n$$[V\\otimes L_{1,1}\\otimes L_{2,q}] = [L_{1,q^{-2}}\\otimes L_{2,q}] + [L_{1,q^2}\\otimes L_{2,q^{-1}}] + [L_{2,q^3}\\otimes L_{1,1}].$$\nMaintenant, \\og prendre la matrice de transfert\\fg{} est additif et multiplicatif, c'est \\`a dire qu'on a un morphisme d'anneau\\footnote{On peut montrer que ce morphisme d'anneau est injectif et donc que l'anneau de Grothendieck $K(\\mathcal{O})$ est commutatif (bien que la cat\\'egorie ne soit pas tress\\'ee, c'est-\\`a-dire que $V\\otimes V'$ et $V'\\otimes V$ ne sont pas isomorphes en g\\'en\\'eral). En fait, l'application de $q$-caract\\`eres $[V]\\mapsto \\chi_q(V)$ elle-m\\^eme peut \\^etre prolong\\'ee en un morphisme d'anneau injectif sur $K(\\mathcal{O})$.}\n$$\\mathcal{T} : K(\\mathcal{O})\\rightarrow \\mathcal{U}_q(\\hat{\\Glie})[[z]]\\text{ , }[V]\\mapsto \\mathcal{T}_V(z).$$\nAinsi, les relations de Baxter g\\'en\\'eralis\\'ees dans l'anneau de Grothendieck $K(\\mathcal{O})$ impliquent les relations voulues entre les matrices de transfert. La conjecture du spectre quantique est donc d\\'emontr\\'ee.\n\n\\begin{center}*\\end{center}\n\nPour conclure, les formules pour les valeurs propres des matrices de transfert en terme\ndes polyn\\^omes $Q_{i,j}$ impliquent des \\'equations entre les racines de ces polyn\\^omes pour garantir que les p\\^oles\napparents se simplifient (par exemple dans l'\\'equation (\\ref{bethe}), $(1 + q + q^2)^{-1}$ n'est en fait pas un p\\^ole de $\\lambda_1$). Dans le cas du mod\\`ele $XXZ$ ce sont les fameuses \\'equations de l'Ansatz de Bethe.\nCes consid\\'erations ont men\\'e N. Reshetikhin \\cite{R3} \\`a formuler ces \\'equations dans le cas g\\'en\\'eral\n (voir aussi \\cite{BR,KS, F:icmp}). 
The proof of the quantum spectrum conjecture provides an explanation of, and a uniform approach to, these formulas. There is now another important open conjecture: the existence of a bijection between all the eigenvalues and the solutions of the Bethe Ansatz equations (the completeness conjecture).\n\n\\medskip\n\n{\\bf Acknowledgments}: I wish to thank E. Ghys for having encouraged me to write this article, E. Frenkel and M. Jimbo for our collaboration, and finally J. Dumont, C. Hernandez, P. Zinn-Justin and the Images de Math\\'ematiques team for their remarks on a preliminary version of this text.\n\n\n\\backmatter\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}\n{"text":"\\section*{Introduction}\nBuilding large-scale quantum computers is still a challenging task due to a plethora of engineering obstacles \\cite{eng}. One prominent challenge is the intrinsic noise. In fact, implementing scalable and reliable quantum computers requires implementing quantum gates with sufficiently low error rates. There has been substantial progress in characterizing noise in a quantum system \\cite{noise3,noise2,noise1} and in building error-correcting schemes that can detect and correct certain types of errors \\cite{qec1,qec2,qec3}.\\\\\nNumerous protocols have been constructed to characterize the noise in quantum devices. Many of these protocols fail to achieve one of the following desiderata: scalability to large-scale quantum computers and efficient characterization of the noise. Quantum Process Tomography \\cite{QPT} is a protocol that can give a complete description of the dynamics of a quantum black box; however, it is not scalable to large-scale quantum systems. Randomized Benchmarking (RB) is another protocol that is typically used to estimate the error rate of some set of quantum gates \\cite{ScalableNoise, RBQG}. 
Although RB is a scalable protocol in principle, it can only measure a single error rate that is used to approximate the average gate infidelity, thus providing an incomplete description of noise. Various other protocols based on the RB protocol are able to characterize the correlations of noise between the different qubits; however, these protocols lack scalability \\cite{ScalableNoise,SimultaneousRB,ThreeRB}.\nQuantum Error Mitigation (QEM) \\cite{QEM} is a recently emerging field that aims to improve the accuracy of near-term quantum computational tasks. Whereas Quantum Error Correction (QEC) \\cite{AQEC1,AQEC2} necessitates additional qubits to encode a quantum state in a multiqubit entangled state, QEM does not demand any additional quantum resources. It is considered an excellent alternative for enhancing the performance of Noisy Intermediate-Scale Quantum (NISQ) computing \\cite{nisq}. QEM protocols include zero-noise Richardson extrapolation of results from a collection of experiments of varying noise \\cite{Extrapolation}, probabilistic error cancellation through sampling from a set of quantum circuits generated from a target circuit mixed with Pauli gates from an optimized distribution \n\\cite{pem,ProbabilisticMitigation}, and exploiting state-dependent bias through invert-and-measure techniques to map the predicted state to the strongest state \\cite{bias}.\nMeasurement Error Mitigation (MEM) is another QEM protocol that models the noise in a quantum circuit as a measurement noise matrix $\\bm{E}_{meas}$ applied to the ideal output of the circuit. 
The columns of $\\bm{E}_{meas}$ are the probability distributions obtained through preparing and immediately measuring all possible $2^n$ basis input states \\cite{QiskitTextbook}.\\\\\nRecently, the authors in \\cite{Nature} developed a protocol based on RB that relies on the concept of a Gibbs Random Field (GRF) to completely and efficiently estimate the error rates of the Pauli Channel and detect correlated errors between the qubits in a quantum computer. Their effort paves the way to enable quantum error correction and\/or mitigation schemes. Herein, we refer to their efficient learning protocol as the EL protocol. \nIn this paper, we build upon the EL protocol and decompose the average noise of a quantum circuit of specific depth into State Preparation and Measurement (SPAM) error and average gate error.\nWe propose a linear-algebra-based protocol and proof to efficiently construct and model the average behavior of noise in a quantum system for any desired circuit depth without having to run a large number of quantum circuits on the quantum computer or simulator. We then rely on this model to mitigate the noisy output of the quantum device. For an $n$-qubit quantum system, the average behavior of the noise can be well approximated as a special form of a Pauli Channel \\cite{Knill2005,Wallman_2016,Ware_2021}. \nA Pauli channel $\\varepsilon$ acts on an $n$-qubit state $\\boldsymbol{\\rho}$ to produce\n\\begin{equation}\n\\varepsilon(\\bm{\\rho})=\\sum_{i}p_i\\bm{P}_{i}\\bm{\\rho}\\bm{P}_{i}\n\\label{equation 1}\n\\end{equation}\nwhere $p_i$ is an error rate associated with the Pauli operator $\\bm{P}_{i}$. 
The $p_i$'s form a probability distribution $(\\sum_{i}p_{i}=1)$, and are related to the eigenvalues, $\\boldsymbol{\\lambda}$, of the Pauli Channel defined as \n\\begin{equation}\n\\lambda_i=2^{-n}Tr(\\bm{P}_{i}\\varepsilon(\\bm{P}_{i}))\n\\label{Equation 2}\n\\end{equation}\nThus, when a state $\\bm{\\rho}$ is subjected to the noisy\nchannel $\\varepsilon$, $p_i$ describes the probability of a multiqubit Pauli error $\\bm{P}_{i}$ affecting the system, while $\\lambda_i$ describes how faithfully a given multispin Pauli operator is transmitted. $\\bm{p}$ and $\\bm{\\lambda}$ are related by Walsh-Hadamard transform where \n\n\n\\begin{equation}\n\\bm{\\lambda}=\\bm{Wp}\n\\label{Equation 3}\n\\end{equation}\n\nWhile RB only estimates the average value of all $\\lambda_i$ of the Pauli Channel, the EL protocol estimates the individual $\\lambda_i$. A complete characterization of the Pauli channel requires learning more than the eigenvalues or error rates associated with single-qubit Pauli operators such as $\\bm{\\sigma}_{z}^{(1)}$ or $\\bm{\\sigma}_{x}^{(3)}$; it requires learning all of the noise correlations in the system, that is, also learning the eigenvalues and error rates associated with multiqubit Pauli operators such as $\\bm{\\sigma}_{z}^{(1)} \\otimes \\mathbf{1}^{(2)} \\otimes \\bm{\\sigma}_{x}^{(3)}$ and how they vary compared to the ones obtained under independent local noise. Estimating these correlations is essential for performing optimal QEC and\/or QEM. 
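The Walsh-Hadamard relation $\bm{\lambda}=\bm{Wp}$ of Equation \ref{Equation 3} can be sketched numerically. The following is a minimal sketch (not the EL protocol itself); the error-rate vector is an arbitrary illustrative choice, not measured data:

```python
import numpy as np

def walsh_hadamard(n):
    # Unnormalized Walsh-Hadamard matrix: W[i, j] = (-1)^<i,j>, where
    # <i,j> is the bitwise dot product of the binary indices i and j.
    H2 = np.array([[1, 1], [1, -1]])
    W = np.array([[1]])
    for _ in range(n):
        W = np.kron(W, H2)
    return W

n = 2
# Illustrative error rates p_i for a bit-flip error of the form binary(i).
p = np.array([0.90, 0.05, 0.04, 0.01])
W = walsh_hadamard(n)
lam = W @ p                       # Equation (3): lambda = W p
p_back = np.linalg.inv(W) @ lam   # inverse transform recovers p
```

Since the rows of $\bm{W}$ take values $\pm 1$ and the $p_i$ sum to one, the first eigenvalue is always $1$ and all eigenvalues lie in $[-1,1]$.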
However, these correlations increase exponentially as the number of qubits increases, so having an efficient noise characterization protocol is crucial to direct the error mitigation efforts to capture the critical noise correlations.\\\\\nOur method relies on the error rates vector $\\bm{p}$ of the Pauli-Channel to decompose the average behavior of noise for circuits of depth $m$ into two noise components: a SPAM error matrix denoted by the matrix $\\bm{N}$ and a depth-dependent component comprising an average gate error matrix denoted by the matrix $\\bm{M}$. We evaluate our model for the average noise by predicting the average probability distribution for circuits of depth $m$ and computing the distance between this predicted distribution and the empirically obtained one. Finally, we use our proposed decomposition to mitigate noisy outputs of random circuits and compare our mitigation protocol with the MEM protocol \\cite{QiskitTextbook}. We applied our noise characterization and mitigation protocols on the following IBM Q 5-qubit quantum computers: Manila, Lima, and Belem \\cite{IBM}.\n\\section*{Results}\n\\subsection*{Proposed Protocol Theory}\nThe ideal output probability distribution of an $n$-qubit quantum circuit with depth $m$ is perturbed by the SPAM and the average gate errors. Our aim is to construct a comprehensive linear-algebraic model that takes into account both these errors for an arbitrary depth $m$. Matrix algebra can then be employed to mitigate the noise as follows: \n\\begin{equation}\n \\bm{C}_{ideal}= \\bm{Q}_{m}^{-1}\\bm{C}_{noisy}\n\\end{equation}\nwhere $\\bm{Q}_{m}$ is the characterized noise matrix for circuits of depth $m$, and $\\bm{C}_{ideal}$ and $\\bm{C}_{noisy}$ are the ideal and noisy outputs of a given circuit of depth $m$, respectively.\nThe straightforward approach would be to construct $\\bm{Q}_m$ from empirical simulations in a similar fashion to the $\\bm{E}_{meas}$ noise matrix that was characterized in the MEM scheme. 
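As a sketch of this mitigation step (the noise matrix and distributions below are hypothetical, chosen only for illustration), it is numerically preferable to solve the linear system rather than form $\bm{Q}_{m}^{-1}$ explicitly:

```python
import numpy as np

# Hypothetical characterized noise matrix Q_m for a 2-qubit system:
# column j is the average noisy output distribution for basis input |j>.
Q_m = np.array([[0.85, 0.05, 0.05, 0.02],
                [0.06, 0.86, 0.03, 0.04],
                [0.05, 0.04, 0.87, 0.04],
                [0.04, 0.05, 0.05, 0.90]])

C_ideal = np.array([0.5, 0.0, 0.5, 0.0])   # illustrative ideal output
C_noisy = Q_m @ C_ideal                     # what the device would report

# Mitigation: solve Q_m x = C_noisy instead of inverting Q_m.
C_mitigated = np.linalg.solve(Q_m, C_noisy)
```

In practice the solved vector may have small negative entries and, as described later in the text, would be projected back onto the probability simplex.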
The columns of $\\bm{Q}_{m}$ comprise the empirical average probability distributions for basis input states $\\ket{\\bm{in}}\\in \\{ \\ket{\\bm{0}},\\,\\ket{\\bm{1}},\\,\\dots,\\,\\ket{\\bm{2^n-1}}\\}$, denoted by $\\hat{\\bm{q}}(m,\\ket{\\bm{in}})$, where $\\hat{\\bm{q}}(m,\\ket{\\bm{in}})$ are obtained through sampling a number of depth $m$ circuits to incorporate the average gate and SPAM errors.\n\\begin{equation}\n\\bm{Q}_m=\n\\begin{bmatrix}\n \\hat{\\bm{q}}(m,\\ket{\\bm{0}}) & \\hat{\\bm{q}}(m,\\ket{\\bm{1}}) & \\hdots &\\hat{\\bm{q}}(m,\\ket{\\bm{2^n-1}})\n\\end{bmatrix}\n\\label{eq:Q}\n\\end{equation}\nBuilding $\\bm{Q}_m$ through empirical simulations, however, can be expensive, especially when the circuit depth is large. Herein, we propose a method for an efficient estimation of $\\bm{Q}_m$ where the individual probability distributions $\\hat{\\bm{q}}(m,\\ket{\\bm{in}})$ are estimated as follows:\n\\begin{equation}\n \\bm{q}'(m, \\ket{\\bm{in}}) = \\bm{N}_{in}\\bm{M}_{in}^m\\ket{\\bm{in}}\n \\label{eq:qhat}\n\\end{equation}\nwhere $\\bm{N}_{in}$ and $\\bm{M}_{in}$ are input-specific matrices that represent the SPAM error matrix and average gate error for input $\\ket{\\bm{in}}$, respectively. Both $\\bm{M}_{in}$ and $\\bm{N}_{in}$ are extracted empirically using random circuits from a set of small circuit depths $T$ and then used in mitigating the outputs for circuits with higher depths. We first show the construction of $\\bm{N}_{0}$ and $\\bm{M}_{0}$.\\\\\nThe construction of $\\bm{N}_{0}$ and $\\bm{M}_{0}$ proceeds by estimating the error rates vector $\\bm{p}$ associated with the Pauli Channel based on the assumption in Equation \\ref{equation 1} for the average behavior of the noisy quantum device at hand using the EL protocol. The protocol proceeds by constructing $K$ random identity circuits of depth $m \\in T$ \\cite{SimultaneousRB, Nature}. 
Each circuit is constructed by initializing the qubits to the all-zeros state $\\ket{\\bm{0}}$ followed by choosing a random sequence $s \\in S_{m}$, the set of all length $m$ sequences of one-qubit Clifford gates applied independently on each qubit, followed by an inverse gate for the chosen sequence to ensure an identity circuit. It then estimates the resulting empirical probability distribution $\\hat{\\bm{q}}(m,\\ket{\\bm{0}})$ by averaging over all the empirical probability distributions $\\hat{\\bm{q}}(m,s,\\ket{\\bm{0}})$ for the constructed random identity circuits of depth $m$, that is, \n\\begin{equation}\n \\hat{\\bm{q}}(m,\\ket{\\bm{0}})= \\frac{1}{K}\\sum \\hat{\\bm{q}}(m,s,\\ket{\\bm{0}})\n \\label{Equation 4}\n\\end{equation}\n$\\hat{\\bm{q}}(m,\\ket{\\bm{0}})$ is a vector with $2^n$ entries each corresponding to the possible observed outcome. A Walsh-Hadamard transform is then applied on each $\\hat{\\bm{q}}(m,\\ket{\\bm{0}})$ to obtain\n\\begin{equation}\n \\bm{\\Lambda}(m)=\\bm{W}\\hat{\\bm{q}}(m,\\ket{\\bm{0})}\n \\label{Equation 5}\n\\end{equation}\nEach parameter $\\Lambda_{i}(m)$ in $\\bm{\\Lambda}(m)$ is fitted to the model \n\\begin{equation}\n \\Lambda_{i}(m)=A_{i}\\lambda_{i}^{m}\n \\label{Equation 6}\n\\end{equation}\nwhere $A_i$ is a constant that absorbs SPAM errors and \nthe vector $\\bm{\\lambda}$ of all fitted parameters $\\lambda_i$ is a SPAM-free estimate to the eigenvalues of the Pauli Channel defined in Equation \\ref{Equation 2}. Notice that we can rewrite Equation \\ref{Equation 6} as\n\\begin{equation}\n \\bm{\\Lambda}(m)=\\bm{A\\lambda}^{m}\n \\label{Equation 7}\n\\end{equation}\nwhere $\\bm{A}$ is a diagonal matrix where the diagonal entries are $A_i$ and $\\bm{\\lambda}^m$ is an element-wise exponentiation of a vector. 
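The fit of Equation \ref{Equation 6} can be sketched as a log-linear regression; the SPAM constant and eigenvalue below are synthetic, illustrative values, not quantities measured on a device:

```python
import numpy as np

# Fit Lambda_i(m) = A_i * lambda_i**m via least squares on log Lambda_i(m):
# log Lambda_i(m) = log A_i + m * log lambda_i is linear in the depth m.
A_true, lam_true = 0.97, 0.995          # assumed ground truth (synthetic)
depths = np.arange(1, 21)
Lam = A_true * lam_true**depths         # noiseless synthetic data

slope, intercept = np.polyfit(depths, np.log(Lam), 1)
lam_fit, A_fit = np.exp(slope), np.exp(intercept)
```

On real data each $\Lambda_i(m)$ carries shot noise, so a weighted or robust fit may be preferable to plain least squares on the logarithm.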
An inverse Walsh-Hadamard Transform is then applied to $\\bm{\\lambda}$ to get the error rate vector $\\bm{p}$ of the Pauli Channel as\n\\begin{equation}\n \\bm{p}=\\bm{W}^{-1}\\bm{\\lambda}\n \\label{Equation 8}\n\\end{equation}\n$\\bm{p}$ is then projected onto a probability simplex to ensure $\\sum_{i}p_i=1$. The GRF model introduced by the EL protocol makes the estimation of $\\bm{p}$ scalable as the number of qubits increases. The GRF model assumes the noise correlations are bounded between a number of neighboring qubits depending on the architecture of the quantum computer at hand, thus decreasing the number of noise correlations to be estimated.\\\\\nThe final outcome $\\bm{p}$ of the EL protocol represents the SPAM-free probability distribution of the average noise in the quantum computer. Each element $p_i \\in \\bm{p}$ corresponds to the probability of an error of the form $binary(i)$ on an input state $\\ket{\\bm{0}}$. For example, for a 5-qubit quantum computer, $p_0$ corresponds to the probability of no bit flips on the input state, i.e., an error of the form $IIIII$, $p_1$ to the error of the form $IIIIX$, $p_2$ to the error of the form $IIIXI$, etc\u2026\\\\\nIn order to proceed with the proof for our proposed decomposition of Equation (\\ref{eq:qhat}) for input state $\\ket{\\bm{0}}$, we first state the following lemma (the detailed proof of the lemma can be found in Section I of the supplementary):\n\\begin{lemma} \nLet $\\bm{\\lambda}$ and $\\bm{p}$ be the respective eigenvalues and error rates of a Pauli Channel with $n$ qubits; then $\\bm{\\lambda}^m=\\bm{WM}^m\\ket{\\bm{0}}$\nwhere $\\bm{M}$ is a $2^n \\times 2^n$ matrix such that $M_{ij}=p_{i\\oplus j}$ ($i \\oplus j$ is the bitwise exclusive-OR operator).\n\\label{Lemma}\n\\end{lemma}\n\\noindent Using Lemma \\ref{Lemma} and Equations \\ref{Equation 5} and \\ref{Equation 7}, $\\hat{\\bm{q}}(m,\\ket{\\bm{0}})$ can be estimated as\n\\begin{equation}\n 
\\bm{q}'(m,\\ket{\\bm{0}})=\\bm{W}^{-1}\\bm{AWM}^{m}\\ket{\\bm{0}}\n \\label{Equation 11}\n\\end{equation}\nThe transition matrix $\\bm{M}=\\bm{M}_{0}$ represents the average error per gate, while the matrix $\\bm{N}=\\bm{W}^{-1}\\bm{AW}=\\bm{N}_{0}$ represents the SPAM errors for an input state $\\ket{\\bm{0}}$. Notice that the average noise for depth $m$ circuits on an input state $\\ket{\\bm{0}}$ behaves as a sequence of $m$ average noise gates $\\bm{M}_{0}$ followed by SPAM errors $\\bm{N}_{0}$.\\\\\nThe construction of $\\bm{N}_{in}$ and $\\bm{M}_{in}$ for input state $\\ket{\\bm{in}}$ proceeds similarly to that of $\\bm{N}_{0}$ and $\\bm{M}_{0}$; however, a permutation of $\\hat{\\bm{q}}(m,\\ket{\\bm{in}})$ is required before applying a Walsh-Hadamard transform to ensure that each element $p_{i}(\\ket{\\bm{in}})$ in the input-specific error rate vector $\\bm{p}(\\ket{\\bm{in}})$ corresponds to the probability of an error of the form $binary(i)$ on an input state $\\ket{\\bm{in}}$. This permutation is done by applying an input-specific permutation matrix $\\bm{\\pi}_{in}$ on $\\hat{\\bm{q}}(m,\\ket{\\bm{in}})$ $\\forall m$, where $\\pi_{in_{ij}}=1$ if $i\\oplus j=in$ and $0$ otherwise. \n\n\\subsection*{Experiments}\nIn this section, we evaluate the accuracy of the model in Equation $\\ref{Equation 11}$ in predicting the average probability output, $\\hat{\\bm{q}}(m,\\ket{\\bm{0}})$, for identity circuits of higher depths by estimating $\\bm{A}_0$ and $\\bm{p}(\\ket{\\bm{0}})$ using only simulations of lower-depth identity circuits. Denote by $\\bm{q}'(m,\\ket{\\bm{0}})$ the predicted average probability distribution obtained using Equation \\ref{Equation 11}. 
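Lemma \ref{Lemma} can be checked numerically on a small system; the error-rate vector below is an arbitrary illustrative choice:

```python
import numpy as np

def walsh_hadamard(n):
    # Unnormalized Walsh-Hadamard matrix W (tensor power of the 2x2 H).
    H2 = np.array([[1, 1], [1, -1]])
    W = np.array([[1]])
    for _ in range(n):
        W = np.kron(W, H2)
    return W

n, m = 2, 5
p = np.array([0.90, 0.05, 0.03, 0.02])   # illustrative error rates, sum = 1
W = walsh_hadamard(n)
lam = W @ p                               # eigenvalues (Equation 3)

# M_ij = p_{i XOR j}; |0> is the first computational basis vector.
M = np.array([[p[i ^ j] for j in range(2**n)] for i in range(2**n)])
e0 = np.zeros(2**n)
e0[0] = 1.0

lhs = lam**m                              # elementwise m-th power
rhs = W @ np.linalg.matrix_power(M, m) @ e0
```

The check works because $\bm{M}$ is a group circulant over $(\mathbb{Z}_2)^n$ and is therefore diagonalized by the Walsh-Hadamard transform, so $\bm{WM}^m\ket{\bm{0}}$ reduces to the elementwise power $\bm{\lambda}^m$.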
We select a \\textit{training set of depths} $T=\\{1,\\,2,\\,\\dots,\\,m_{max}\\}$ to estimate $\\bm{A}_0$ and $\\bm{p}$ using the EL protocol followed by the construction of the average gate error matrix $\\bm{M}_{0}$ and SPAM error matrix $\\bm{N}_{0}$ where $M_{0_{ij}}=p_{i \\oplus j}(\\ket{\\bm{0}})$ and $\\bm{N}_{0}=\\bm{W}^{-1}\\bm{A}_{0}\\bm{W}$. A new \\textit{testing set of depths} $T'=\\{m_{max}+1,\\,m_{max}+2,\\,\\dots,\\,100\\}$ is then selected where we compute the \\textit{Jensen-Shannon Divergence} ($JSD$) between $\\hat{\\bm{q}}(m',\\ket{\\bm{0}})$ and $\\bm{q}'(m',\\ket{\\bm{0}})$ $\\forall m'\\in T'$. The $JSD$ measures the similarity between the two probability distributions \\cite{JSD}. The lower the $JSD$, the closer the two distributions are. More information about the $JSD$ can be found in Section II in the supplementary. Figure \\ref{VaryingTrainingDepth} presents the computed $JSD$ for different quantum computers while varying $m_{max}$. Figure \\ref{TestErrorBars} presents the average and standard deviation for the test $JSD$ values for the different quantum computers. 
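A minimal implementation of the $JSD$ used in this comparison (natural-logarithm convention; a small $\epsilon$ guards against empty bins — both choices are assumptions of this sketch) could read:

```python
import numpy as np

def jsd(p, q, eps=1e-12):
    # Jensen-Shannon divergence between distributions p and q,
    # JSD(p, q) = (KL(p || m) + KL(q || m)) / 2 with m = (p + q) / 2.
    p, q = np.asarray(p, float), np.asarray(q, float)
    m = 0.5 * (p + q)
    kl = lambda a, b: np.sum(a * np.log((a + eps) / (b + eps)))
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)
```

With this convention the $JSD$ is symmetric, vanishes for identical distributions, and is bounded above by $\ln 2$ for disjoint ones.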
The average test $JSD$ varies between $0.024$ and $0.056$ for the different $m_{max}$ values with lower average $JSD$ values noted for high $m$ for $m_{max}=80$ as indicated in Figure \\ref{TestErrorBars}b.\n\\begin{figure}[h]\n\\begin{subfigure}[t]{0.32\\textwidth}\n \\centering\n \\includegraphics[width=\\columnwidth]{Manila}\n \\caption{IBM Q Manila}\n \\label{Manila}\n\\end{subfigure}\n\\begin{subfigure}[t]{0.32\\textwidth}\n \\centering\n \\includegraphics[width=\\columnwidth]{Lima}\n \\caption{IBM Q Lima}\n \\label{Lima}\n\\end{subfigure}\n\\begin{subfigure}[t]{0.32\\textwidth}\n \\centering\n \\includegraphics[width=\\columnwidth]{Belem}\n \\caption{IBM Q Belem}\n \\label{Belem}\n\\end{subfigure}\n\\caption{$JSD(\\hat{\\bm{q}}(m,\\ket{\\bm{0}}),\\,\\bm{q}'(m,\\ket{\\bm{0}}))$ for training sets of depths $T$ and testing sets of depths $T'$ with variable maximum training depth $m_{max}\\in\\{20,\\,50,\\,80\\}$ on different IBM Q 5-qubit quantum computers.}\n\\label{VaryingTrainingDepth}\n\\end{figure}\n\\FloatBarrier\n\\begin{figure}[h]\n \\centering\n \\begin{subfigure}[t]{0.48\\textwidth}\n \\centering\n \\includegraphics[width=\\columnwidth]{TestErrorBars.png}\n \\caption{}\n \\end{subfigure}\n \\begin{subfigure}[t]{0.48\\textwidth}\n \\centering\n \\includegraphics[width=\\columnwidth]{TestErrorBars80_100.png}\n \\caption{}\n \\end{subfigure}\n \\caption{The average and standard deviation of $JSD(\\hat{\\bm{q}}(m,\\ket{\\bm{0}}),\\,\\bm{q}'(m,\\ket{\\bm{0}}))$; (a) over all depths $m \\in [m_{max}+1,100]$ and (b) over depths $m \\in [80,100]$ while varying the maximum training depth $m_{max}$ on different IBM Q 5-qubit quantum computers.}\n \\label{TestErrorBars}\n\\end{figure}\n\\FloatBarrier\n\\noindent We rely on $\\bm{q}'(m, \\ket{\\bm{in}})$ to construct and evaluate the mitigation power of $\\bm{Q}_m$ for different depths. 
We first select a \\textit{training set of depths} $T=\\{1,\\,20,\\,40,\\,60,\\,80,\\,100\\}$ to estimate $\\bm{A}_{in}$ and $\\bm{p}(\\ket{\\bm{in}})$ for each input state $\\ket{\\bm{in}}$ using the EL protocol, followed by the construction of $\\bm{M}_{in}$ using $\\bm{p}(\\ket{\\bm{in}})$ and $\\bm{N}_{in}=\\bm{W}^{-1}\\bm{A}_{in}\\bm{W}$. We then estimate $\\hat{\\bm{q}}(m,\\ket{\\bm{in}})$ as $\\bm{q}'(m,\\ket{\\bm{in}})$ for all inputs using Equation \\ref{eq:qhat} in order to construct $\\bm{Q}_{m}$ using Equation \\ref{eq:Q}. We then choose a new \\textit{testing set of depths} $T'=\\{10,\\,30,\\,50,\\,70,\\,90\\}$ so that $\\bm{Q}_m$ is used in mitigating the outputs for circuits of depth $m \\in T'$, where for a given identity circuit of depth $m$ with input $\\ket{\\bm{in}}$ and sequence $s$ of gates, the mitigated output $\\hat{\\bm{q}}(m,s,\\ket{\\bm{in}})_{mit}$ is obtained as \n\\begin{equation}\n \\hat{\\bm{q}}(m,s,\\ket{\\bm{in}})_{mit}=\\bm{Q}_{m}^{-1}\\hat{\\bm{q}}(m,s,\\ket{\\bm{in}})\n\\end{equation}\n$\\hat{\\bm{q}}(m,s,\\ket{\\bm{in}})_{mit}$ is projected onto a probability simplex to ensure a probability distribution. The $JSD$ between $\\hat{\\bm{q}}(m,s,\\ket{\\bm{in}})_{mit}$ and the ideal output $\\ket{\\bm{in}}$ is computed and then averaged over all input states and all random circuits of depth $m$. We also compare our proposed mitigation protocol using $\\bm{Q}_m$ with the MEM scheme (Figure \\ref{Mitigation}). We report up to an 88\\% improvement in the $JSD$ value for the proposed approach compared to the unmitigated approach, and up to a 69\\% improvement compared to the MEM approach. Note that for the results presented here, we rely on the average SPAM-free error rate, $\\bm{p}_{avg}=\\frac{1}{2^n}\\sum_{in=0}^{2^n-1}{\\bm{p}(\\ket{\\bm{in}})}$, to construct $\\bm{M}_{in}=\\bm{M}_{avg}$ for all inputs. We compare the results using $\\bm{p}_{avg}$ and $\\bm{p}(\\ket{\\bm{in}})$ in the supplementary Section V. 
$\\bm{N}_{in}$ remains input-specific. Further elaborations on the results are presented in supplementary Section VI. \n\\begin{figure}[h]\n\\centering\n\\begin{subfigure}[h]{0.32\\textwidth}\n \\centering\n \\includegraphics[width=\\columnwidth]{AveManilaAllInputs.png}\n \\caption{IBM Q Manila}\n \\label{LimaMitigation}\n\\end{subfigure}\n\\begin{subfigure}[h]{0.32\\textwidth}\n \\centering\n \\includegraphics[width=\\columnwidth]{AveLimaAllInputs.png}\n \\caption{IBM Q Lima}\n \\label{AthensMitigation}\n\\end{subfigure}\n\\begin{subfigure}[h]{0.32\\textwidth}\n \\centering\n \\includegraphics[width=\\columnwidth]{AveBelemAllInputs.png}\n \\caption{IBM Q Belem}\n \\label{belem Mitigation}\n\\end{subfigure}\n\\caption{Average $JSD$ between the ideal output $\\ket{\\bm{in}}$ and each of the unmitigated output $\\hat{\\bm{q}}(m,s,\\ket{\\bm{in}})$, the output mitigated by the MEM protocol, and the output mitigated by our proposed noise model, for each depth $m$ on IBM Q 5-qubit quantum computers.}\n\\label{Mitigation}\n\\end{figure}\n\\FloatBarrier\n\\subsection*{Complexity}\nIn the estimation of $\\bm{M}_{in}$ and $\\bm{N}_{in}$ for each input state $\\ket{\\bm{in}}$ using the EL protocol, $K$ random circuits are generated for each depth $1\\le m \\le m_{max}$, where the EL protocol requires $O(2^{2n})$ operations for the Walsh-Hadamard transform, which can be reduced to $O(n2^n)$ using the fast Walsh-Hadamard transform. Thus, the overall complexity of the construction of $\\bm{M}_{in}$ and $\\bm{N}_{in}$ for all input states is $O(m_{max}Kn^22^{n})$. Furthermore, the GRF model factors the error rates vector into a product of $f\\sim O(n)$ factors, depending on the architecture of the quantum computer, where each factor depends on a subset of adjacent qubits of cardinality $N$, where $A_{in}$ and $M_{in}$ correspond to the matrices constructed when running the Gibbs Random Field protocol on each input. 
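A standard in-place fast Walsh-Hadamard transform, which brings the cost of one transform of a length-$2^n$ vector down to $O(n2^n)$ butterfly operations, can be sketched as follows:

```python
import numpy as np

def fwht(a):
    # Fast Walsh-Hadamard transform (unnormalized) of a length-2^n vector,
    # computed with log2(len(a)) butterfly passes over the array.
    a = np.array(a, float)
    h = 1
    while h < len(a):
        for i in range(0, len(a), 2 * h):
            for j in range(i, i + h):
                x, y = a[j], a[j + h]
                a[j], a[j + h] = x + y, x - y
        h *= 2
    return a
```

The result agrees with multiplying by the dense Walsh-Hadamard matrix, while avoiding the $O(2^{2n})$ cost of forming that matrix explicitly.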
Notice that the protocol can be carried out by applying suitable permutations mapping any $\\ket{in}$ to $\\ket{0}$.\n\nFigures \\ref{lambdas} and \\ref{As} show the variation of the entries of the $\\lambda$ and $A$ vectors as a function of the input, respectively.\n\n\\begin{figure}[h]\n\\centering\n\\includegraphics[height=10cm,width=\\columnwidth]{lambdas}\n\\caption{Variation of the entries of the $\\lambda$ vector as a function of the input on the Athens quantum computer}\n\\label{lambdas}\n\\end{figure}\n\\FloatBarrier\n\n\\begin{figure}[h]\n\\centering\n\\includegraphics[height=10cm,width=\\columnwidth]{As}\n\\caption{Variation of the entries of the $A$ vector as a function of the input on the Athens quantum computer}\n\\label{As}\n\\end{figure}\n\\FloatBarrier\n\n\\section{Conclusion}\nabc.\n\n\n\\bibliographystyle{IEEEtran}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}\n{"text":"\\section{Introduction}\nThe understanding of turbulence is one of the main unsolved problems\nof classical physics, despite more than 250 years of\nintense investigation initiated by D. Bernoulli and L. Euler.\n\nIn the stochastic approach to turbulence~\\cite{Monin},~\\cite{Frish}\nthe turbulent cascade is\nconsidered a stochastic process, described by the probability\ndistribution $P(\\lambda,v)$, where $\\lambda$ and $v$ are the\nappropriately scaled length and velocity increment, respectively.\nRecently~\\cite{Friedrich} R.Friedrich and J.Peinke presented\nexperimental evidence that\nthe probability\ndensity function $P(\\lambda,v)$ obeys a Fokker-Planck\nequation (FPE)~\\cite{Risken} (see fig.1 and fig.2\nin~\\cite{Friedrich}):\n\\begin{equation}\n\\frac{\\partial P(\\lambda,v)}{\\partial \\lambda} =\n\\left[ -\\frac{\\partial}{\\partial v} D^1(\\lambda,v)\n+ \\frac{\\partial^2}{\\partial v^2} D^2(\\lambda,v) \\right]\nP(\\lambda,v),\n\\label{FPP}\n\\end{equation}\nwhere the drift and diffusion coefficients\n$D^1$ and $D^2$, respectively, are derived by analysis of experimental\ndata 
of a fluid dynamical experiment (see fig.3 in~\\cite{Friedrich}).\n\nIn their paper Friedrich and Peinke consider the application\nof the FPE to obtain the Kolmogorov scaling under the simplifying\nassumptions that $D^1$ and $D^2$ are $\\lambda$-independent,\n$D^1$ is linear in $v$ and $D^2$ is quadratic in $v$:\n$$\nD^1= -a\\, v, \\qquad a>0; \\qquad\nD^2 = c\\, v^2,\\qquad c>0 \\,.\n$$\n(In the notation of~\\cite{Friedrich}:\n$ a \\equiv \\gamma $ and $ c \\equiv Q $.)\n\nHere we will consider a more realistic situation\n(see fig.3 in~\\cite{Friedrich})\nof $\\lambda$-dependent $D^1$ and $D^2$:\n\\begin{equation}\nD^1= -a(\\lambda)\\, v, \\qquad a(\\lambda)>0 ;\\qquad\nD^2 = c(\\lambda)\\, v^2,\\qquad c(\\lambda)>0 .\n\\label{D}\n\\end{equation}\nThus the FPE~(\\ref{FPP}) takes the form:\n\\begin{equation}\n\\frac{\\partial P}{\\partial \\lambda} = b_0(\\lambda)P(\\lambda,v)+\nb_1(\\lambda) v \\frac{\\partial P}{\\partial v} +\nc(\\lambda)\\left( v \\frac{\\partial}{\\partial v}\\right)^2 P(\\lambda,v),\n\\label{P}\n\\end{equation}\nwhere\n\\begin{equation}\nb_0(\\lambda)= a(\\lambda)+2 c(\\lambda),\n\\quad b_1(\\lambda) = a(\\lambda) + 3 c(\\lambda) .\n\\label{b}\n\\end{equation}\n\n\\begin{center}\n\\section{Exact Solution of the Cauchy Problem for the\nEq.~(\\ref{P})}\n\\end{center}\nIn this section we will find the solution\n$P(\\lambda,v)$\n of the Cauchy problem\nfor the Eq.~(\\ref{P}) with the initial condition\n\\begin{equation}\nP(0,v) = \\varphi (v).\n\\label{In}\n\\end{equation}\nAccording to \\cite{Monin}$-\\!$~\\cite{Friedrich}, when the probability\ndensity function is known, one may derive all properties\nof the turbulent cascade considered as a stochastic process.\n\nFor the solution of the problem (\\ref{P}),~(\\ref{In}) we\nshall use the approach\nof M.~Suzuki~\\cite{Suzuki} to the FPE (see also~\\cite{Donkov}),\nbased on the disentangling techniques of R.~Feynman~\\cite{Feynman}\n and the operational methods developed in the 
functional\nanalysis, in particular in the theory of pseudodifferential equations\nwith partial derivatives~\\cite{Hoermander}$-\\!$~\\cite{Maslov}.\n\nIn the spirit of the operational methods using the\npseudodifferential operators, we can write the solution\nof the Cauchy problem (\\ref{P}),~(\\ref{In}) in the form\n\\begin{equation}\nP(\\lambda, v) =\n\\left(exp_+ \\int_0^{\\lambda} \\left[ b_0(s)+b_1(s)v\n\\frac{\\partial}{\\partial v}\n+ c(s)\\left(v \\frac{\\partial}{\\partial v}\\right)^2 \\right] {\\rm d}s\n\\right) \\varphi(v) ,\n\\label{form}\n\\end{equation}\nwhere the symbol $\\;\\;exp_+\\;\\;$ designates the V.~Volterra ordered exponential\n\\begin{equation}\nexp_+ \\int_0^{\\lambda} \\hat C(s) {\\rm d}s =\n\\hat 1 + \\lim_{n\\to\\infty} \\sum_{k=1}^n\\int_0^{\\lambda}{\\rm d}\\lambda_1\n\\int_0^{\\lambda_1}{\\rm d}\\lambda_2 \\dots \\int_0^{\\lambda_{k-1}}\n{\\rm d}\\lambda_{k}\n\\hat C(\\lambda_1) \\hat C(\\lambda_2) \\dots \\hat C(\\lambda_{k}).\n\\label{exp}\n\\end{equation}\n\nThe linearity of the integral and the explicit form of the operators\nin~(\\ref{form}) permit us to write the solution $P(\\lambda,v)$ in terms\nof a usual, not ordered, operator-valued exponential\n\\begin{equation}\nP(\\lambda,v) = {\\rm e}^{\\beta_0 (\\lambda)}\\,\n{\\rm e}^{\\beta_1(\\lambda)v\\frac{\\partial}{\\partial v} +\n\\gamma (\\lambda)\\left(v\\frac{\\partial}{\\partial v}\\right)^2}\n\\varphi(v) ,\n\\label{expP}\n\\end{equation}\nwhere for convenience we have denoted\n\\begin{equation}\n\\beta_j(\\lambda) = \\int_0^{\\lambda}b_j(s){\\rm d}s,\\;\\; (j=0,1);\n\\qquad \\gamma(\\lambda) = \\int_0^{\\lambda}c(s) {\\rm d}s .\n\\label{beta}\n\\end{equation}\nConsequently (from now on \"$'$\" means $\\frac{\\rm d}{{\\rm d}\\lambda} $ ) :\n\\begin{equation}\n\\beta_j(0)=0,\\;\\;\\; {\\beta}'_j(\\lambda) =b_j(\\lambda),\\;\\;\\; (j=0,1);\n\\qquad \\gamma(0)=0,\\;\\;\\; {\\gamma}'(\\lambda)=c(\\lambda).\n\\label{betaPR}\n\\end{equation}\n\nSince the operators \\qquad\n$\\hat A \\equiv 
\\beta_1(\\lambda) v \\frac{\\partial}{\\partial v}$ \\quad and \\quad\n$\\hat B \\equiv \\gamma (\\lambda)\\left(v \\frac{\\partial}{\\partial v}\\right)^2$\\qquad\ncommute, $[\\hat A , \\,\\hat B] = 0$,\nfrom Eq.~(\\ref{expP}) we have\n\n\\begin{equation}\nP(\\lambda,v)=\n{\\rm e}^{\\beta_0(\\lambda)}\\,\n{\\rm e}^{\\beta_1(\\lambda) v\\frac{\\partial}{\\partial v}} \\,\n{\\rm e}^{\\gamma(\\lambda)\\left(v\\frac{\\partial}{\\partial v}\\right)^2}\n\\varphi(v).\n\\label{Form}\n\\end{equation}\n\nTherefore, taking into account the formulae\n\\begin{equation}\n{\\rm e}^{\\beta_1(\\lambda)v\\frac{\\partial}{\\partial v}}f(v)\n= f\\left(v{\\rm e}^{\\beta_1(\\lambda)}\\right)\n\\label{F1}\n\\end{equation}\nand\n$$\n\\,\\,{\\rm e}^{\\gamma(\\lambda)\\left(v\\frac{\\partial}{\\partial v}\\right)^2}g(v)\n=\\frac{1}{\\sqrt{4\\pi\\gamma(\\lambda)}}\\int_{-\\infty}^{\\infty}\n{\\rm e}^{-\\frac{s^2}{4\\gamma(\\lambda)}} g\\left(v{\\rm e}^\n{-s}\\right){\\rm d}s\n$$\n\\begin{equation}\n\\qquad\\qquad\\qquad=\\frac{1}{\\sqrt{4\\pi\\gamma(\\lambda)}}\n\\int_{-\\infty}^{\\infty}\n{\\rm e}^{-\\frac{\\left(\\ln v-y\\right)^2}{4\\gamma(\\lambda)}}\ng\\left({\\rm e}^\n{y}\\right){\\rm d}y,\n\\label{F2}\n\\end{equation}\nwe obtain the following expression for the exact solution of the\nCauchy problem (\\ref{P}),~(\\ref{In})\n$$\nP(\\lambda,v)\n=\\frac{{\\rm e}^{\\beta_0(\\lambda)}}\n{\\sqrt{4\\pi\\gamma(\\lambda)}}\\int_{-\\infty}^{\\infty}\n{\\rm e}^{-\\frac{s^2}{4\\gamma(\\lambda)}} \\varphi\\left(v{\\rm e}^\n{\\beta_1(\\lambda)-s}\\right){\\rm d}s\n$$\n\\begin{equation}\n\\label{end}\n\\qquad\\qquad=\\frac{{\\rm e}^{\\beta_0(\\lambda)}}\n{\\sqrt{4\\pi\\gamma(\\lambda)}}\\int_{-\\infty}^{\\infty}\n{\\rm e}^{-\\frac{\\left(\\ln v+\\beta_1(\\lambda)-y\\right)^2}\n{4\\gamma(\\lambda)}}\\varphi\\left({\\rm e}^{y}\\right){\\rm d}y,\n\\end{equation}\nwhere $\\beta_0(\\lambda), \\beta_1(\\lambda)$ and $\\gamma(\\lambda)$\nare defined in~(\\ref{beta}).\n\nSubstituting the 
expression~(\\ref{end})\ninto Eqs.~(\\ref{P}) and~(\\ref{In}), and using Eq.~(\\ref{betaPR}),\none can see immediately that $P(\\lambda,v)$ is a solution of the\nproblem (\\ref{P}),~(\\ref{In}) and, according to the Cauchy theorem,\nit is the only classical solution of this problem.\n\n\\section{Concluding remarks}\n\\begin{itemize}\n\\item The exact solution of the Cauchy problem (\\ref{P}),~(\\ref{In})\nis obtained using the algebraic method we have described.\nEq.~(\\ref{P}) is a generalization\nof the equation used by R.~Friedrich and J.~Peinke\n(see Section 1)\nin their description of a turbulent cascade by a\nFokker-Planck equation\nwith coefficients derived by a detailed analysis of\nexperimental data\nof a fluid dynamical experiment.\n\\item If\nthe probability distribution function\n$P(\\lambda,v)$ is known, then\none may derive the properties of\na given stochastic process, in our case\nthe turbulent cascade \\cite{Monin}~$-\\!$~\\cite{Friedrich}.\n\\item For a more realistic description of the turbulent cascade\nby a FPE,\nit would be desirable to use for\n$D^1(\\lambda, v)$ and $D^2(\\lambda ,v)$ in Eq.~(\\ref{FPP})\nmore general expressions than those in Eq.~(\\ref{D}),\nfor instance:\\\\\n $ D^1(\\lambda,v) = a_1(\\lambda) - a(\\lambda)v,\\,\\,\\, a(\\lambda)>0$\\quad\nand \\quad\n$D^2(\\lambda,v) = c_1(\\lambda) + c(\\lambda) v^2 $.\n\n\\end{itemize}\n\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\n\\label{section1}\n\nPathology analysis based on microscopic images is a critical task in medical image computing. In recent years, deep learning on digitalized pathology slides has facilitated the progress of automating many diagnostic tasks, offering the potential to increase accuracy and improve review efficiency. 
Limited by computational resources, deep learning-based approaches on whole slide pathology images (WSIs) usually train convolutional neural networks (CNNs) on patches extracted from WSIs and aggregate the patch-level predictions to obtain a slide-level representation, which is further used to identify cancer metastases and stage cancer \\cite{Wang2016DeepLF}. Such a patch-based CNN approach has been shown to surpass pathologists in various diagnostic tasks \\cite{Liu2017DetectingCM}.\n\nOff-the-shelf CNNs have been shown to be able to accurately classify or segment pathology images into different diagnostic types in recent studies \\cite{huang2018improving, veeling2018rotation}. However, \nmost of these methods are weak in interpretability, especially for clinicians, due to a lack of evidence supporting the decision of interest. During diagnosis, a pathologist often inspects abnormal structures (e.g., large nucleus, hypercellularity) as the evidence for determining whether the glimpsed patch is cancerous. For CAD systems, learning to pinpoint the discriminative evidence can provide precise visual assistance for clinicians. Strong supervision-based feature localization methods require a large number of pathology images annotated at the pixel level or object level, which are very costly and time-consuming to obtain and can be biased by the experiences of the observers. In this paper, we propose a weakly supervised learning (WSL) method that can learn to localize the discriminative evidence for the class-of-interest on pathology images from weakly labeled (i.e., image-level) training data. 
Our contributions include: \ni) proposing a new CNN architecture with multi-branch attention modules and a deep supervision mechanism, to address the difficulty of localizing discrete and small objects in pathology images, \nii) formulating a generalizable approach that leverages gradient-weighted class activation maps and saliency maps in a complementary way to provide accurate evidence localization, \niii) designing a new attention module which allows capturing spatial attention from various contexts, \niv) quantitatively and visually evaluating WSL methods on large-scale histopathology datasets, and \nv) constructing a new dataset (HPLOC) based on Camelyon16 for effectively evaluating evidence localization performance on histopathology images.\n\n\n\n\\textbf{Related Work.} Recent studies have demonstrated that CNNs can learn to localize discriminative features even when trained on image-level annotations \\cite{zhou2016learning}. However, these methods are evaluated on natural image datasets (e.g., PASCAL), where the objects of interest are usually large and distinct in color and shape. In contrast, objects in pathology images are usually small and less distinct in morphology between different classes. A few recent studies investigated WSL approaches on medical images, including lung nodule detection and placental ultrasound images \\cite{feng2017discriminative}. These methods employ GAP-based class activation maps and require CNNs ending with global average pooling, which degrades the performance of CNNs as a side effect \\cite{zhou2016learning}. \n\n\\section{Methods}\nThe overview of the framework is shown in Fig.{~\\ref{fig:fig1}}. \nThe model is trained to predict the cancer score for a given image, indicating the presence of cancer metastasis.\nIn the test phase, besides giving a binary classification, the model generates a cancerous evidence localization map and performs localization. 
\n\n \\afterpage{\n \\begin{figure}[t]\n\n \t\\centering\n\n \t\\begin{tabular}{cc}\n \t\t\\includegraphics[scale=0.5]{.\/fw.pdf} &\n \n \t\n\n \t\t\\includegraphics[scale=0.5]{.\/blk.pdf} \\\\%width=\\linewidth\n\n \t\n \t\\end{tabular}\n \t\\caption{ \\textbf{Left:} Framework overview of the proposed WSL method. The underline (e.g., \/1, 16) denotes stride and number of channels. \\textbf{Right:} A building block for the multi-branch attention based residual module (MA-ResModule). \n \t\n \t} \n \t\\label{fig:fig1}\n \\end{figure}\n}\n \n \\subsection{Cancerous Evidence Localization Networks (CELNet)}\n \\label{subsection_2_3}\n\n\n Given that the objects of interest are relatively small and discrete, a moderate number of convolutional layers is sufficient for encoding locally discriminative features. As discussed in Section \\ref{section1}, instances on pathology images are similar in morphology and can be densely distributed; hence, the model should avoid over-downsampling in order to pinpoint the cancerous evidence among the densely distributed instances. The proposed CELNet starts with a $3\\times 3$ convolution head followed by 3 Multi-branch Attention-based Residual Modules (MA-ResModules)\n \\footnote{A densely connected module is not employed considering it is comparatively speed-inefficient for WSI applications due to its dense tensor concatenation. } \n \\cite{he2016deep}. Each MA-ResModule is composed of 3 consecutive building blocks integrated with the proposed attention module (MAM) as shown in Fig.{~\\ref{fig:fig1}} (Right). We use $3\\times 3$ convolution with a stride of 2 for downsampling in residual connections instead of $1\\times 1$ convolution to reduce information loss. 
Batch normalization and ReLU are applied after each convolution layer for regularization and non-linearity.\n \n \n \\subsubsection{Multi-branch Attention Module (MAM)}\n To eliminate the effect of background contents and focus on representing the cancerous evidence (which can be sparse), we employ an attention mechanism. \n Improving on the Convolutional Block Attention Module (CBAM), which extracts channel attention and spatial attention of an input feature map in a squeeze and excitation manner, we propose a multi-branch attention module. MAM can better approximate the importance of each location on the feature map by looking at its context at different scales. \n Given a squeezed feature map $F_{sq} $ generated by the channel attention module, we derive a 2D spatial attention map $A_s$ by $ A_s = \\sigma(\\sum_{k'} f^{k' \\times k'} (F_{sq}) ),$\n where $f^{k' \\times k'}$ represents a convolution operation with kernel size of $k' \\times k'$, and $\\sigma$ denotes the sigmoid function. We set $k' \\in \\{3, 5, 7\\}$ in our experiments, corresponding to 3 branches. Hereby, the feature map $F_{sq} $ is refined by element-wise multiplication with the spatial attention map $A_s$.\n \nMAM is conceptually simple but effective in improving detection and localization performance as demonstrated in our experiments.\n \n\n\n\n \\subsubsection{Deep Supervision}\n Deep supervision \\cite{lee2015deeply} is employed to empower the intermediate layers to learn class-discriminative representations, for building the cancer activation map at a higher resolution. We achieve this by adding two companion output layers to the last two MA-ResModules, as shown in Fig. \\ref{fig:fig1}. Global max pooling (GMP) is applied to search for the best discriminative features spatially, while global average pooling (GAP) is applied to encourage the network to identify all discriminative parts on the image. 
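As an illustrative NumPy sketch (not the trained implementation), the multi-branch spatial attention $ A_s = \\sigma(\\sum_{k'} f^{k' \\times k'} (F_{sq}) )$ described above can be written as follows, with random placeholder kernels standing in for the learned filters:

```python
import numpy as np

def conv2d_same(x, kernel):
    # plain single-channel 2D filtering with zero padding, so the
    # output keeps the spatial size of the input
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    xp = np.pad(x, ((ph, ph), (pw, pw)))
    out = np.empty_like(x, dtype=float)
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            out[i, j] = np.sum(xp[i:i + kh, j:j + kw] * kernel)
    return out

def multi_branch_attention(F_sq, kernels):
    # A_s = sigma( sum_{k'} f^{k' x k'}(F_sq) ); the squeezed feature
    # map is then refined by element-wise multiplication with A_s
    s = sum(conv2d_same(F_sq, k) for k in kernels)
    A_s = 1.0 / (1.0 + np.exp(-s))          # sigmoid
    return F_sq * A_s, A_s

rng = np.random.default_rng(0)
F_sq = rng.standard_normal((8, 8))                       # squeezed feature map
kernels = [0.1 * rng.standard_normal((k, k)) for k in (3, 5, 7)]  # 3 branches
refined, A_s = multi_branch_attention(F_sq, kernels)
```

The three kernel sizes match the branches $k' \\in \\{3, 5, 7\\}$; everything else (input size, kernel values) is a placeholder for illustration only.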
Each companion output layer applies GAP and GMP on the input feature map and concatenates the resulting vectors. The cancer score of the input image is derived by concatenating the outputs of the two companion layers followed by a fully convolutional layer (i.e., kernel size $1 \\times 1$) with a sigmoid activation. \n CELNet enjoys efficient inference when applied to test WSIs, as it is fully convolutional and avoids repetitive computation for the overlapping part between neighboring patches. \n \n \n\\subsection{Cancerous Evidence Localization Map (CELM) }\n\\subsubsection{Cancer Activation Map (CAM)}\nGiven an image $I \\in \\mathbb{R}^{H \\times W \\times 3}$, let $y^c = S_c(I)$ represent the cancer score function governed by the trained CELNet (before sigmoid layer). \nA cancer-class activation map $M^c$ shows the importance of each region on the image to the diagnostic value. For a target layer $l$, the CAM $M^c_l$ is derived by taking the weighted sum of feature maps $F_l$ with the weights \\{$\\alpha_{k,l}^c$ \\}, where $\\alpha_{k,l}^c$ represents the importance of $k^{th}$ feature plane. The weights $\\alpha_{k,l}^c$ are computed as $\\alpha_{k,l}^c = Avg_{i,j}( \\frac{\\partial y^c}{\\partial F_l^k(i,j)} )$, i.e., spatially averaging the gradients of cancer score $y^c$ with respect to the $k^{th}$ feature plane $F_l^k$, which is achieved by back propagation (see Fig.\\ref{fig:fig1}). Thus, the CAM of layer $l$ can be derived by $ M_l^c = ReLU(\\sum_k \\alpha_{k,l}^c F_l^k)$, where ReLU is applied to exclude the features with negative influence on the class of interest \\cite{selvaraju2017grad}. \n\nWe derive two CAMs, $M^c_2$ and $M^c_3$ from the last layer of the second and the third residual module on CELNet respectively (i.e., CAM2 and CAM3 in Fig.\\ref{fig:fig1}). 
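To make the weighted-sum step concrete, here is a minimal NumPy sketch of building $M^c_l$ from the feature planes and their gradients (a simplified stand-in for the full back-propagation pipeline; the feature maps and gradients below are random placeholders):

```python
import numpy as np

def cancer_activation_map(features, grads):
    # features, grads: (K, H, W) arrays holding the feature planes F_l^k
    # of the target layer and the gradients d y^c / d F_l^k from backprop.
    # alpha_k = spatial average of the gradients,
    # M^c    = ReLU( sum_k alpha_k * F^k )
    alphas = grads.mean(axis=(1, 2))                 # (K,)
    cam = np.tensordot(alphas, features, axes=1)     # (H, W)
    return np.maximum(cam, 0.0)                      # ReLU keeps positive evidence

rng = np.random.default_rng(1)
F = rng.standard_normal((16, 12, 12))   # placeholder feature maps
G = rng.standard_normal((16, 12, 12))   # placeholder gradients
M_c = cancer_activation_map(F, G)
```

In the actual model the gradients come from differentiating the cancer score $y^c$ with respect to the chosen layer, which this sketch does not reproduce.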
CAM3 can represent discriminative regions for identifying a cancer class at a relatively low resolution, while CAM2 enjoys a higher resolution and is still class-discriminative under deep supervision. \n\n\\subsubsection{Cancer Saliency Map (CSM)}\nIn contrast with CAM, the cancer-class saliency map shows the contribution of each pixel site to the cancer score $y^c$. This can be approximated by the derivative of a linear function $S^c(I) \\approx w^TI + b$. Thus the pixel contribution is computed as $ w = \\frac{\\partial S^c(I)}{\\partial I} $. Different from \\cite{Simonyan2013DeepIC}, we derive $w$ by guided back-propagation \\cite{springenberg2014striving} to prevent backward flow of negative gradients. \nFor an RGB image, to obtain its cancer saliency map $M^s \\in \\mathbb{R}^{H \\times W\\times 1} $ from $w \\in \\mathbb{R}^{H \\times W \\times 3} $, we first normalize $w$ to the $[0,1]$ range, followed by greyscale conversion and Gaussian smoothing, instead of simply taking the maximum magnitude of $w$ as proposed in \\cite{Simonyan2013DeepIC}. Thus, the resulting cancer saliency map (see Fig.{~\\ref{fig:vis}} (b)) is far less noisy and more focused on class-related objects than the original one proposed in \\cite{Simonyan2013DeepIC}.\n\n\n\n\\subsubsection{Complementary Fusion}\nThe generated CAMs coarsely display discriminative regions for identifying a cancer class (see Fig.\\ref{fig:vis} (c)), while the CSM is fine-grained, sensitive and represents pixelated contributions for the identification (see Fig.\\ref{fig:vis} (b)). To combine their merits for precise cancerous evidence localization, we propose a complementary fusion method. 
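A minimal sketch of the CSM post-processing chain (normalization, greyscale conversion, Gaussian smoothing); the channel-mean greyscale conversion and the small separable kernel below are our simplifying assumptions, not the exact pipeline:

```python
import numpy as np

def gaussian_kernel1d(sigma=1.0, radius=3):
    # normalized 1D Gaussian used for separable smoothing
    x = np.arange(-radius, radius + 1, dtype=float)
    k = np.exp(-0.5 * (x / sigma) ** 2)
    return k / k.sum()

def smooth_separable(img, k):
    # separable Gaussian smoothing with edge padding: rows, then columns
    pad = len(k) // 2
    p = np.pad(img, ((0, 0), (pad, pad)), mode='edge')
    img = np.array([np.convolve(row, k, mode='valid') for row in p])
    p = np.pad(img, ((pad, pad), (0, 0)), mode='edge')
    return np.array([np.convolve(col, k, mode='valid') for col in p.T]).T

def cancer_saliency_map(w):
    # w: (H, W, 3) guided-backprop gradients; normalize to [0, 1],
    # convert to greyscale (channel mean), then Gaussian-smooth
    w = (w - w.min()) / (w.max() - w.min() + 1e-12)
    return smooth_separable(w.mean(axis=2), gaussian_kernel1d())

rng = np.random.default_rng(2)
w = rng.standard_normal((16, 16, 3))    # placeholder gradient tensor
M_s = cancer_saliency_map(w)
```

Because the kernel is non-negative and sums to one, the smoothed map stays inside the normalized $[0,1]$ range.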
First, CAM3 and CAM2 are combined to obtain a unified cancer activation map $M^c \\in \\mathbb{R}^{H \\times W\\times 1}$ as $M^c = \\alpha f_u(M^c_3) + (1- \\alpha) f_u(M^c_2)$, where $f_u$ denotes a upsampling function by bilinear interpolation, and the coefficient $\\alpha$ in range [0,1] is confirmed by validation.\nThe CELM is derived by complementarily fusing CSM and CAM as $M = \\beta (M^c \\odot M^s) + (1 - \\beta) M^c$,\nwhere $\\odot$ denotes element-wise product, and the coefficient $\\beta$ captures the reliability of the point-wise multiplication of CAM and CSM, and the value of $\\beta$ is estimated by cross-validation in experiments. \n\n\n\n\n\\section{Experiments \\& Results}\nWe first evaluate the detection performance of the proposed model as for clinical requirements, followed by evidence localization evaluations. \n\n\\subsection{Datasets and Experimental Setup}\nThe detection performance of the proposed method is validated on two benchmark datasets, PCam\\cite{veeling2018rotation} and Camelyon16 \\footnote{https:\/\/camelyon16.grand-challenge.org}. \n\n\\textbf{PCam:} The PCam dataset contains 327,680 lymph node histopathology images of size $96 \\times 96$ with binary class labels indicating the presence of cancer metastasis, split into 75\\% for training, 12.5\\% for validation, and 12.5\\% for testing as originally proposed. The class distribution in each split is balanced (1:1). For a fair comparison, following \\cite{veeling2018rotation}, we perform image augmentation by random 90-degree rotations and horizontal flipping during training.\n\n\\textbf{Camelyon16:} The Camelyon16 dataset includes 270 H\\&E stained WSIs (160 normal and 110 cancerous cases) for training and 129 WSIs held out for testing (80 normal and 49 cancerous cases) with average image size about $65000 \\times 45000$, where regions with cancer metastasis are delineated in cancerous slides. 
To apply our CELNet on WSIs, we follow the pipeline proposed in \\cite{Liu2017DetectingCM}, including WSI pre-processing, patch sampling and augmentation, heatmap generation, and slide-level detection tasks. For slide-level classification, we take the maximum tumor score among all patches as the final slide-level prediction. For tumor region localization, we apply a non-maximum suppression algorithm on the tumor probability map aggregated from patch predictions to iteratively extract tumor region coordinates. We work on the WSI data at 10$\\times$ resolution instead of 40$\\times$, given the available computation resources.\n\n\tIn our experiments, all models are trained using binary cross-entropy loss with L2 regularization of $10^{-5}$ to improve model generalizability, and optimized by SGD with Nesterov momentum of 0.9 with a batch size of 64 for 100 epochs. The learning rate is initialized with $10^{-4}$ and is halved at 50 and 75 epochs. We select model weights with minimum validation loss for test evaluation. \n\n\\subsection{Classification Results}\t\n\nAs Tbl.{~\\ref{table1}} shows, CELNet consistently outperforms ResNet, DenseNet, and P4M-DenseNet \\cite{veeling2018rotation} in histopathologic cancer detection on the PCam dataset. \n\n P4M-DenseNet uses fewer parameters due to parameter sharing in the p4m-equivariance. \n\n As auxiliary experiments, we perform ablation studies and visual analysis. From Tbl.{~\\ref{table1}}, we observe that our attention module brings a 1.77\\% accuracy gain, which is larger than the gain brought by CBAM \\cite{woo2018cbam}. Both the CAM and CELM on CELNet are mainly activated for the cancerous regions (see Fig.\\ref{fig:vis} (c) and (d)). These subfigures indicate that CELNet is effective in extracting discriminative evidence for histopathologic classification. 
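The iterative extraction of tumor region coordinates from the aggregated probability map can be sketched as follows; the threshold and suppression radius here are illustrative placeholders, not the values used in our pipeline:

```python
import numpy as np

def extract_tumor_coords(heatmap, threshold=0.5, radius=2):
    # repeatedly pick the most probable tumor location and suppress
    # its neighbourhood until no score exceeds the threshold
    h = heatmap.copy()
    coords = []
    while h.max() > threshold:
        i, j = np.unravel_index(np.argmax(h), h.shape)
        coords.append((int(i), int(j), float(h[i, j])))
        h[max(0, i - radius):i + radius + 1,
          max(0, j - radius):j + radius + 1] = 0.0
    return coords

hm = np.zeros((10, 10))
hm[2, 3] = 0.9
hm[2, 4] = 0.8   # within the suppression radius of (2, 3), so dropped
hm[8, 8] = 0.7
peaks = extract_tumor_coords(hm)   # [(2, 3, 0.9), (8, 8, 0.7)]
```

Each returned tuple is a candidate tumor region centre with its score; nearby weaker responses are suppressed so one region yields one coordinate.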
\n \n\\begin{table}[h]\n\t\\centering\n\t\\floatbox[{\\capbeside\\thisfloatsetup{capbesideposition={left,top} }}]{table}[\\FBwidth]\n\t\t{\n\t\t\\caption{Quantitative comparisons on the PCam test set. P4M-DenseNet \\cite{veeling2018rotation}: current SoTA method for the PCam benchmark, CELNet: our method, $^{-}$: removal of the proposed multi-branch attention module, +CBAM: integration with convolutional block attention module \\cite{woo2018cbam}. \n\t\t}}\n\t\t{\n\t\t\t\\begin{tabular}{l c c c }\n\t\t\t\t\\toprule\n\t\t\t\tMethods\t\t\t\t\t\t\t& Acc \t\t& AUC \t& \\#Params \\\\\n\t\t\t\t\\midrule \n\t\t\t\tResNet18 \\cite{he2016deep}\t\t\t& \t 88.73\t\t\t&\t95.36\t\t& 11.2M\t\\\\\n\t\t\t\tDenseNet \\cite{veeling2018rotation}\t\t\t\t&\t 87.20 & 94.60\t& 902K \\\\\n\t\t\t\tP4M-DenseNet & 89.80 \t&\t96.30\t& 119K \\\\\n\t\t\t\t\\midrule \n\t\t\t\tCELNet\t\t \t\t\t &\t \\textbf{91.87}\t& \\textbf{97.72} & 297K\t \\\\\n\t\t\t\tCELNet$^{-}$ \t \t\t\t&\t90.10 \t\t& 96.45\t& 292K\t\t\\\\\n\t\t\t\tCELNet$^{-}$ +CBAM &\t 90.86 \t& 97.17 & \t 296K\t\\\\\n\t\t\t\t\\bottomrule\n\t\t\t\\end{tabular}\n\t\t\\label{table1}\n\t\t}\n\\end{table}\n\n\nOn slide-level detection tasks, as shown in Tbl.\\ref{table2}, our CELNet based approach achieves higher classification performance (1.7\\%) in terms of AUC than the baseline method \\cite{Liu2017DetectingCM}, and outperforms previous state-of-the-art methods in slide-level tumor localization performance in terms of FROC score. The results illustrate that instead of using off-the-shelve CNNs as the core patch-level model for histopathologic slide detection, adopting CELNet can potentially bring larger performance gain. CELNet is more parameter-efficient as shown in Tbl.\\ref{table1} and testing a slide on Camelyon16 takes about 2 minutes on a Nvidia 1080Ti GPU. 
\n\\begin{table}[h]\n\t\\centering\n\t\\floatbox[{\\capbeside\\thisfloatsetup{capbesideposition={left,top} }}]{table}[\\FBwidth]\n\n\t{\\caption{Quantitative comparisons of slide-level classification performance (AUC) and slide-level tumor localization performance (FROC) on the Camelyon16 test set. *: The Challenge Winner uses $40\\times$ resolution while results of other methods are based on $10\\times$. \n\t}}\n\t{\n\t\t\\begin{tabular}{l c c }\n\t\t\t\\toprule\n\t\t\tMethods\t\t\t\t\t\t\t& AUC \t\t& FROC \t \\\\\n\t\t\t\\midrule \n\t\t\tP4M-DenseNet\t\t\t& \t -\t\t\t\t\t\t\t\t\t\t\t\t&\t84.0 \t\t\t\\\\\n\t\t\tLiu \\cite{Liu2017DetectingCM} & 96.5 \t\t\t\t\t\t&\t79.3 \t \\\\\n\t\t\tChallenge Winner$^*$ \\cite{Wang2016DeepLF} & \\textbf{99.4} \t&\t80.7\t \\\\\n\t\t\tPathologist \t\t\t\t&\t \t\t\t96.6 \t\t\t\t\t\t\t& 73.3\t\\\\\n\t\t\tCELNet\t\t \t\t\t &\t 97.2 & \\textbf{84.8} \t \\\\\n\t\t\t\\bottomrule\n\t\t\\end{tabular}\n\t\t\\label{table2}\n\t}\n\\end{table}\n\n\n\\subsection{Weakly Supervised Localization and Results}\nGiven that the trained CELNet can precisely classify a pathology image, here we aim to investigate its performance in localizing the supporting evidence based on the proposed CELM. To achieve this, based on Camelyon16, we first construct a dataset with region-level annotations for cancer metastasis, namely HPLOC, \nand develop the metrics for measuring localization performance on HPLOC. \n\n\\textbf{HPLOC:} The HPLOC dataset contains 20,000 images of size $96 \\times 96$ with segmentation masks for cancerous region. Each image is sampled from the test set of Camelyon16 and contains both cancerous regions and normal tissue in the glimpse, which harbors the high quality of the Camelyon16 dataset.\n\n\n\\textbf{Metrics:} To perform localization, we generate segmentation masks from CELM\/CAM\/CSM by thresholding and smoothing (see Fig.\\ref{fig:vis} (e)). 
If a segmentation mask intersects with the cancerous region by at least 75\\%\n\\footnote{The annotated contour in Camelyon16 is usually enlarged to surround all tumors. }\n, it is defined as a true positive. Otherwise, if a segmentation mask intersects with the normal region by at least 75\\%, it is considered as a false positive. Thus, we can use precision and recall score to quantitatively assess the localization performance of different WSL methods, where the results are summarized in Tbl.\\ref{table3}. \n\n\\begin{table}[h]\n\t\\centering\n\t\\floatbox[{\\capbeside\\thisfloatsetup{capbesideposition={left,top} }}]{table}[\\FBwidth]\n\t{\\caption{Quantitative comparisons for different weakly supervised localization methods on the HPLOC dataset. Ours: CELNet + CELM. MAM and DS are short for multi-branch attention module and deep supervision respectively. \n\t}}\n\t{\n\t\t\\begin{tabular}{l c c }\n\t\t\t\\toprule\n\t\t\tMethods\t\t\t\t& Precision & Recall \\\\\n\t\t\t\\midrule \n\t\t\tResNet18 + Backprop \\cite{Simonyan2013DeepIC} \t\t& 79.8\t& 85.5 \\\\ \n\t\t\tResNet18 + GradCAM \\cite{selvaraju2017grad}\t\t \t& \t 85.6\t& 82.4 \\\\\n\t\t\t\\midrule \n\t\t\tOurs\t\t \t\t&\t \\textbf{91.6}\t& 87.3 \t \\\\\n\t\t\tOurs w\/o MAM\t \t\t&\t88.1 \t\t\t\t& 85.6\t\\\\\n\t\t\tOurs w\/o DS\t\t&\t 90.5 \t\t& \t\\textbf{87.7}\t\\\\\n\t\t\tCELNet + GradCAM &\t 91.0\t\t& \t85.4\t\\\\\n\t\t\t\\bottomrule\n\t\t\\end{tabular}\n\t\t\\label{table3}\n\t}\n\\end{table}\n\n\n\\begin{figure}[t\n\t\\centering\n\n\t\\begin{tabular}{cccccc}\n\n\n\n\n\n\n\n\n\n\t\t\\includegraphics[scale=0.5]{2a.png} &\n\t\t\\includegraphics[scale=0.5]{2e_sm.png} &\n\t\t\\includegraphics[scale=0.5]{2f_cam.png} &\n\t\t\\includegraphics[scale=0.5]{2b_celm.png} &\n\n\t\t\\includegraphics[scale=0.5]{2c_loc_on_img.png} &\n\t\t\\includegraphics[scale=0.5]{2d_mask.png}\n \\\\\n\t\t(a) Input & (b) CSM & (c) CAM & (d) CELM & (e) Localization & (f) GT \\\\\n\t\\end{tabular}\n\n\t\\caption{ Evidence localization 
results of our WSL method on the HPLOC dataset.\n\t\t (a) Input glimpse, (b) Cancer Saliency Map, (c) Cancer Activation Map, (d) CELM: Cancerous Evidence Localization Map, (e) Localization results based on CELM, where the localized evidence is highlighted for providing visual assistance, (f) GT: ground truth, white masks represent tumor regions and the black represents normal tissue.} \n\n\t\\label{fig:vis}\n\n\\end{figure}\n\n\nWe observe that our WSL method based on CELNet and CELM consistently performs better than the back propagation-based approach \\cite{Simonyan2013DeepIC} and the class activation map-based approach \\cite{selvaraju2017grad}. Note that we used ResNet18 \\cite{he2016deep} as the backbone for the compared methods because it achieves better classification performance and provides higher resolution for GradCAM (12$\\times$12) as compared to DenseNet (3 $\\times$ 3) \\cite{veeling2018rotation}. \nWe perform ablation studies to further evaluate the key components of our method in Tbl.\\ref{table3}. We observe the effectiveness of the proposed multi-branch attention module in increasing the localization accuracy. \nThe deep supervision mechanism effectively improves the precision in localization despite a slightly lower recall score, which can be caused by the regularization effect on the intermediate layers, that is, encouraging the learning of discriminative features for classification but also potentially discouraging the learning of some low-level histological patterns. \nWe observe that using CELM can improve the recall score and precision, which indicates that CELM allows better discovery of cancerous evidence than using GradCAM. \nWe present the visualization results in Fig.~{\\ref{fig:vis}}; the cancerous evidence \nappears as large nuclei and hypercellularity in the images, which are precisely captured by the CELM. 
Fig.\\ref{fig:vis}(e) visualizes the localization results by overlaying the segmentation mask generated from CELM onto the input image, which demonstrates the effectiveness of our WSL method in localizing cancerous evidence.\n\n\n\\section{Discussion \\& Conclusions}\nIn this paper, we have proposed a generalizable method for localizing cancerous evidence on histopathology images. \nUnlike the conventional feature-based approaches, the proposed method does not rely on specific feature descriptors but learns discriminative features for localization from the data. \nTo the best of our knowledge, investigating weakly supervised CNNs for cancerous evidence localization and quantitatively evaluating them on large datasets have not been performed on histopathology images. \nExperimental results show that our proposed method can achieve competitive classification performance on histopathologic cancer detection, and more importantly, provide reliable and accurate cancerous evidence localization using weakly labeled training data, which reduces the burden of annotations. We believe that such an extendable method can have a great impact on detection-based studies in microscopy images and help improve the accuracy and interpretability of current deep learning-based pathology analysis systems. \n\n\\bibliographystyle{splncs04}\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nIn continuum thermodynamics, the constitutive theories are based, besides some general invariance principles, on the second law of thermodynamics, which states that in every admissible process the entropy production has to be non-negative \\cite{Truesdell}.\n\nA rigorous procedure for the exploitation of the entropy principle was first developed in 1963 by Coleman and Noll \\cite{Coleman-Noll}, and later by Coleman and Mizel \\cite{Coleman-Mizel}. 
In both papers, the authors assumed the entropy imbalance \nin the classical form, namely the Clausius-Duhem inequality; in this inequality, the entropy flux is taken as the ratio between the heat flux and the absolute temperature. Later on, M\\\"uller \\cite{Muller} proposed an extension of the entropy inequality, allowing a more general expression for the entropy flux, thus obtaining the thermodynamic compatibility for wider classes of materials. A slightly different approach has been applied by other authors who accepted the classical Clausius-Duhem inequality, but proposed a more general form of the local balance of energy \\cite{Gurtin,Dunn-Serrin,Dunn}. Furthermore, in 1972, Liu \\cite{Liu} developed a different procedure for the analysis of the entropy principle, based on the method of Lagrange multipliers.\n\nIn all these papers, the basic assumption is that the second law of thermodynamics restricts the constitutive equations and not the thermodynamic processes. Hence, the constitutive relations are required to be such that the entropy inequality be satisfied for all solutions of the thermodynamic field equations. This assumption is purely mathematical and from a physical point of view may have two\ndifferent interpretations \\cite{Muschik-Ehrentraut}:\n\\begin{itemize}\n\\item all solutions of the balance equations have to satisfy the second law;\n\\item there are solutions of the balance equations which satisfy the second law, and other ones which do not. \n\\end{itemize}\n\nThe first interpretation requires that the constitutive equations must be assigned in such a way that the entropy inequality is satisfied along arbitrary processes, whereas the second one means that we have to exclude from the set of solutions of the balance equations those which are not physically achievable, since they do not satisfy the second law of thermodynamics. 
In \\cite{Muschik-Ehrentraut}, the authors\nproposed a way to choose between the two statements through an \\emph{amendment to the second law}, by making explicit the nearly self-evident, but never precisely formulated, postulate that there are no reversible processes directed towards non-equilibrium. \nBy means of this amendment, they were able to prove that, necessarily, the second law of thermodynamics restricts the constitutive equations and not the processes. Such a result justifies, from the physical point of view, the approach to the exploitation of the second law \nthrough the Coleman-Noll and Liu procedures (see also \\cite{Muschik,Muschik-Papenfuss-Ehrentraut}).\n\nIn a series of papers \\cite{Cimmelli-2007,CST-JMP-2009,CST-PRSA-2010,COT-JMP-2011},\nthe two classical Coleman-Noll and Liu procedures have been extended in order to use as constraints in\nthe entropy inequality both the balance equations and their gradient extensions up to a suitable finite order. \nThis approach, successfully used in many applications of physical interest (see, for instance, \n\\cite{CST-JNET-2010,COP-Elasticity-2011,COP-IJNLM-2013,COP-CMT-2015,OPR-2016,CGOP-miscele-2020}), \nproved essential in ensuring the compatibility of non-local constitutive relations with the second law of thermodynamics \nwithout modifying \\emph{a priori} the entropy inequality or the energy balance through the introduction of extra-terms.\nIn particular, the extended Liu technique requires adding to the entropy inequality a linear combination of the field equations and of the spatial gradients of the latter (in the following they are called extended equations), up to the order of the gradients entering the state space. The coefficients of this linear combination are the Lagrange multipliers and depend upon the state variables only.
Thus, the number of independent constraints to be taken into account is never less than that of the unknown elements entering the constitutive equations as independent state variables. \n\nIn this paper, we start by discussing the extended Liu procedure from an abstract mathematical point of view, \nand then apply the results to some\nphysical instances of continua with non-local constitutive equations. In fact, we consider a system of first order balance laws in one space dimension sufficiently general to contain the equations governing the thermodynamical processes occurring in a continuous medium. We assume that this system involves some functions whose constitutive equations are allowed to be non-local, \\emph{i.e.}, we admit the possibility that the \\emph{state space} includes the gradients of (not necessarily all) the field variables up to the order $r\\ge 1$. The compatibility of the constitutive equations with an entropy-like inequality is then discussed. Once we expand the derivatives in the\nentropy-like inequality and impose as constraints the balance equations and some of their gradient extensions,\na set of sufficient conditions, such that the entropy-like inequality is not violated, is derived. \nFurthermore, the results are applied to two physical instances of continua: \nthe first example is concerned with a fluid with a scalar internal variable and constitutive equations with first order \nnon-localities, the second one with a fluid whose state space includes also the second order spatial derivative of the mass density (Korteweg fluid \\cite{Korteweg}).\n\nThe plan of the paper is the following. In Section~\\ref{sec:balance}, we define the problem mathematically, and introduce the notation that\nwill be used throughout the paper. In Section~\\ref{sec:liu}, we discuss the extended Liu procedure and state the theorem providing the sufficient conditions in order that the entropy-like inequality be satisfied for all thermodynamical processes.
In Section~\\ref{sec:applications},\nwe give two non-trivial applications of the procedure in meaningful physical situations, and solve the thermodynamical conditions providing explicitly a solution for the constitutive equations. Finally, Section~\\ref{sec:conclusions} contains some concluding remarks.\n\n\\section{Balance equations for a continuous medium}\n\\label{sec:balance}\nPhysical laws describing the mechanical as well as the thermodynamical \nproperties of continuous media are usually expressed in terms of balances of some physical quantities (mass, linear and angular momentum, energy, etc.). Here, for simplicity, we consider the case of one-dimensional continuous media and postpone to a forthcoming paper the multi-dimensional case. \n\nLet \n\\[\n\\mathbf{u}\\equiv(u_1(t,x),\\ldots,u_n(t,x))\n\\] \nbe the vector of $n$ field variables, depending on time $t$ and space $x$, describing a one-dimensional continuum. \n\nIt is well known that the general governing equations of a continuum are underdetermined since they involve some constitutive functions that specify the particular continuum we are dealing with. \nThe various constitutive quantities (for instance, the Cauchy stress tensor or the heat flux) depend on the so called \\emph{state variables}: the state variables include (some of) the field variables when a local constitutive theory is adopted, or, when non-local constitutive theories are considered, also the spatial derivatives up to a finite order $r$ of some field variables. In the following, we shall be concerned with\na non-local constitutive theory. 
\nIn fact, in our framework, the state variables will be the elements of the set \n\\[\n\\displaystyle \\mathcal{Z}=\\bigcup_{k=0}^r \\mathcal{Z}^{(k)},\n\\]\nwhere\n\\[\n\\begin{aligned}\n&\\mathcal{Z}^{(0)}\\subseteq \\{u_1,\\ldots,u_n\\}\\equiv \\mathcal{U}^{(0)},\\\\\n&\\mathcal{Z}^{(k)}\\subseteq \\left\\{\\frac{\\partial^k u_1}{\\partial x^k},\\ldots,\\frac{\\partial^k u_n}{\\partial x^k}\\right\\}\\equiv \\mathcal{U}^{(k)}, \\qquad k=1,\\ldots,r,\n\\end{aligned}\n\\]\nwhere $r\\ge 1$.\nFinally, let us denote with $\\mathbf{z}$ the vector whose $N$ components belong to the set $\\mathcal{Z}$ of state variables, and $\\mathbf{z}^\\star$ the vector whose $N^\\star$ components belong to the set $\\mathcal{Z}^\\star=\\mathcal{U}^{(0)}\\bigcup \\mathcal{Z}$.\n\nIn local form, the thermomechanical description of a continuum leads to consider a system of partial differential equations having the form\n\\begin{equation}\n\\label{generalbalance}\n\\mathcal{E}_i\\equiv\n\\frac{D\\Phi_i(\\mathbf{u})}{D t}+\\frac{D \\left(\\Psi_i(\\mathbf{u})+\\chi_i(\\mathbf{z}^\\star)\\right)}{D x}-\\Gamma_i(\\mathbf{z}^\\star)=0, \\qquad i=1,\\ldots, n,\n\\end{equation}\nwhere the differential operators $D\/Dt$ and $D\/Dx$, acting on a composite function $F$ depending on $t$ and $x$ through some quantities $w_1(t,x),\\ldots, w_n(t,x)$\n(in the paper, these variables are the field variables or the elements of the state space), by chain rule, are defined as follows\n\\[\n\\frac{D F}{Dt}=\\sum_{j=1}^n\\frac{\\partial F}{\\partial w_j}\\frac{\\partial w_j}{\\partial t},\\qquad\n\\frac{D F}{Dx}=\\sum_{j=1}^n\\frac{\\partial F}{\\partial w_j}\\frac{\\partial w_j}{\\partial x}.\n\\]\nThese operators do not correspond to total derivatives, and we introduce them in order to avoid confusion when stating Theorem 1, see below.\nIn the equations (\\ref{generalbalance}),\n\\begin{itemize}\n\\item $\\Phi_i(\\mathbf{u})$ are some densities, depending at most on the field variables;\n\\item the fluxes 
are split as sums of functions $\\Psi_i(\\mathbf{u})$ depending at most on the field variables, and \n$\\chi_i(\\mathbf{z}^\\star)$ depending at most on the field variables and the state variables;\n\\item $\\Gamma_i(\\mathbf{z}^\\star)$ are the production terms depending at most on the field variables and the state variables. \n\\end{itemize}\n\nIt is well known that in general the constitutive functions have to satisfy some universal principles \n(invariance with respect to rigid motions, time translation, \nscale changes of fundamental quantities, Galilei or Lorentz transformations, etc.). In such a framework, general representation theorems for isotropic scalar, vectorial or tensorial constitutive equations have to be taken into account \\cite{Wang1,Wang2,Smith1,Smith2,Smith3}. \nAdditional constraints are imposed by the second law of thermodynamics, which requires that the admissible processes must be such that the entropy production is non-negative.\n\nTherefore, in our general framework, we have to exploit the compatibility of constitutive relations with an entropy-like inequality, assumed in the form\n\\begin{equation}\\label{entropyineq1}\n\\frac{D s(\\mathbf{z})}{D t}+\\frac{D(v s(\\mathbf{z}) +J_s(\\mathbf{z}))}{D x}\\ge 0,\n\\end{equation}\nwhere $s$ and $J_s$, which are functions depending on the state variables, represent the entropy and the entropy flux, respectively, and $v$ the velocity. 
In the case where the velocity does not appear among the field variables (for instance, in the case of a model of rigid heat conductors), \nthe entropy-like inequality is assumed to be\n\\begin{equation}\\label{entropyineq2}\n\\frac{D s(\\mathbf{z})}{D t}+\\frac{D J_s(\\mathbf{z})}{D x}\\ge 0.\n\\end{equation}\n\nIn what follows, we will use the entropy-like inequality in the form \\eqref{entropyineq1}; \nin the cases where the velocity does not belong to the field variables, the corresponding results are obtained simply by setting $v=0$.\n\nFrom the analysis of the entropy inequality, some restrictions on the form of constitutive \nequations can be obtained.\nClassically, the restrictions placed by the entropy principle on the constitutive functions are found by using the Coleman-Noll procedure \\cite{Coleman-Noll,Coleman-Mizel}, or the Liu one \\cite{Liu}.\nBoth procedures have to be extended in order to manage non-local constitutive equations. \nThe entropy principle imposes that the inequality \\eqref{entropyineq1} must be satisfied for arbitrary \nthermodynamic processes \\cite{CJRV-2014,JCL-2010}. To find a set of conditions which are at least sufficient for the \nfulfilment of such a constraint, we apply an extended Liu procedure recently developed in a series of papers \n\\cite{Cimmelli-2007,CST-JMP-2009,CST-PRSA-2010,COT-JMP-2011}, \nincorporating new restrictions consistent with higher order non-local constitutive \ntheories. In fact, in order to exploit the second law, we use as constraints both the balance equations for the unknown fields and their extended equations up to the order of the \nderivatives entering the state space.\n\nSimple mathematical considerations may clarify the necessity of imposing as additional constraints in the \nentropy inequality the gradients of the balance equations when dealing with non-local constitutive equations. 
\n\nThe thermodynamic processes are solutions of the balance equations, and, if these solutions are smooth enough, \nare trivially solutions of their differential consequences (see also \\cite{Rogolino-Cimmelli-2019}). \nSince the entropy inequality \\eqref{entropyineq1} has to be satisfied in arbitrary smooth processes, it is \nnatural, from a mathematical point of view, to use the differential consequences of the equations governing those \nprocesses as constraints for such an inequality. \nThe next Section will be devoted to the description of the extended Liu procedure in this general framework.\n \n\\section{Extended Liu procedure}\n\\label{sec:liu}\nIn this Section, we introduce a general scheme in order to apply the extended Liu technique in the \ncase of $r$-th order ($r\\ge 1$) non-local constitutive equations.\n\nWe consider this rather general case essentially for two reasons. The first reason is to have a unified framework good enough to be applied to different models with non-local constitutive equations of arbitrary order. The second one is of computational nature. In fact, in dealing with applied problems, both the derivation of the thermodynamic restrictions arising from the entropy inequality and, whenever possible, a more or less explicit \ncharacterization of the constitutive equations are relevant. Since the thermodynamic restrictions may have a lengthy expression, it is convenient to use a computer algebra system for their possible solution. In order to have a flexible computer algebra package that can be used in many different cases, a general approach proves useful, if not necessary. In fact, we developed some general routines in the CAS Reduce \\cite{Reduce} that implement the algorithm at the core of the extended Liu approach. \n\nFirst of all, we need to compute the spatial derivatives of the fundamental balance equations.
\nFrom the system \\eqref{generalbalance}, developing the first order time and space derivatives, one has:\n\\begin{equation}\n\\mathcal{E}_i\\equiv \\frac{\\partial \\Phi_i}{\\partial u_j}\\frac{\\partial u_j}{\\partial t}+\\frac{\\partial \\Psi_i}{\\partial u_j}\\frac{\\partial u_j}{\\partial x}+\n\\frac{\\partial \\chi_i}{\\partial z^\\star_{\\alpha}}\\frac{\\partial z^\\star_\\alpha}{\\partial x}-\\Gamma_i=0,\n\\end{equation}\nwhere the Einstein summation convention over repeated indices is used. \nIn order to write in general the $m$-th order spatial derivative of the balance laws, let us recall \n\\cite{Mishkov} a formula giving an expression for the $m$-th derivative of a composite function when the argument is a vector with an arbitrary number of components. This formula is a generalization of the well known Fa\\`a di Bruno's formula \\cite{FaadiBruno,Roman}.\n\nThe following theorem is no more than a simple rewriting of the main result contained in \\cite{Mishkov}.\n\\begin{theorem}\nLet $\\mathbf{w}=(w_1(t,x),\\ldots,w_s(t,x))$ be a vector, and $F(\\mathbf{w}(t,x))$ a composite function for which all the needed derivatives are defined, then\n\\begin{equation}\n\\label{faabruno}\n\\begin{aligned}\n&\\frac{D^m F(\\mathbf{w}(t,x))}{D x^m}=\n\\sum_{J_0}\\sum_{J_1}\\cdots\\sum_{J_m}\n\\frac{m!}{\\prod_{i=1}^m(i!)^{k_i}\\prod_{i=1}^m\\prod_{j=1}^s q_{ij}!}\\times\\\\\n&\\qquad\\times \\frac{\\partial^k F}{\\partial w_1^{p_1}\\partial w_2^{p_2}\\cdots\\partial w_s^{p_s}}\n\\prod_{i=1}^m \\left(\\frac{\\partial^i w_1}{\\partial x^i}\\right)^{q_{i1}}\n\\left(\\frac{\\partial^i w_2}{\\partial x^i}\\right)^{q_{i2}}\\cdots \n\\left(\\frac{\\partial^i w_s}{\\partial x^i}\\right)^{q_{is}},\n\\end{aligned}\n\\end{equation}\nwhere the various sums are over all nonnegative integer solutions of the Diophantine equations\n\\begin{equation*}\n\\begin{aligned}\n&\\sum_{J_0} \\rightarrow k_1+2k_2+\\ldots +mk_m = m,\\\\\n&\\sum_{J_1} \\rightarrow q_{11}+q_{12}+\\ldots +q_{1s} = 
k_1,\\\\\n&\\sum_{J_2} \\rightarrow q_{21}+q_{22}+\\ldots +q_{2s} = k_2,\\\\\n&\\ldots\\\\\n&\\sum_{J_m} \\rightarrow q_{m1}+q_{m2}+\\ldots +q_{ms} = k_m,\n\\end{aligned}\n\\end{equation*}\nwhere\n\\begin{equation*}\n\\begin{aligned}\n&p_j=q_{1j}+q_{2j}+\\ldots+q_{mj}, \\qquad j=1,\\ldots,s,\\\\\n&k=p_1+p_2+\\ldots+p_s=k_1+k_2+\\ldots+k_m. \\qquad \\square\n\\end{aligned}\n\\end{equation*}\n\\end{theorem}\n\nBy introducing the $m$-th order $(m=1,\\ldots,r)$ spatial derivatives of the balance laws, we get\n\\begin{equation}\n\\label{estese}\n\\begin{aligned}\n&\\frac{D^m \\mathcal{E}_i}{D x^m}\\equiv\\sum_{h=0}^m\\binom{m}{h}\\left[\\frac{D^{h}}{D x^{h}}\\left(\\frac{\\partial \\Phi_i(\\mathbf{u})}{\\partial u_j}\\right)\\frac{\\partial^{m-h+1}u_j}{\\partial t \\partial x^{m-h}} \\right.\\\\\n&\\left.+\\frac{D^{h}}{D x^{h}}\\left(\\frac{\\partial \\Psi_i(\\mathbf{u})}{\\partial u_j}\\right)\\frac{\\partial^{m-h+1}u_j}{\\partial x^{m-h+1}}+\\frac{D^h}{D x^h}\n\\left(\\frac{\\partial \\chi_i(\\mathbf{z}^\\star)}{\\partial z^\\star_\\alpha}\\right)\\frac{\\partial^{m-h+1}z^\\star_\\alpha}{\\partial x^{m-h+1}}\\right]\\\\\n&-\\frac{D^m \\Gamma_i(\\mathbf{z}^\\star)}{D x^{m}}=0.\n\\end{aligned}\n\\end{equation}\nIn order to take into account in the entropy inequality the restrictions determined by the field equations and their spatial derivatives, let us introduce the Lagrange multipliers $\\Lambda^{(k)}_i$\n$(i=1,\\ldots,n,\\; k=0,\\ldots,r)$ associated with \n$\\displaystyle\\frac{D^k\\mathcal{E}_i}{D x^k}$.
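The nested sums over solutions of the Diophantine equations in \\eqref{faabruno} can be enumerated mechanically. The following Python sketch (an illustration of ours, not part of the authors' Reduce routines; all names are ours) lists the coefficients and multi-indices $q_{ij}$ of the terms of the $m$-th spatial derivative of $F(w_1,\\ldots,w_s)$:

```python
from itertools import product
from math import factorial

def faa_di_bruno_terms(m, s):
    """Enumerate the terms of D^m F(w_1,...,w_s)/Dx^m following the
    multivariate Faa di Bruno formula: each term is returned as a pair
    (coefficient, Q), where Q[i-1][j-1] = q_{ij} is the exponent of the
    factor (d^i w_j / dx^i)**q_{ij}."""
    terms = []
    # outer Diophantine equation (sum J_0): k_1 + 2 k_2 + ... + m k_m = m
    for k in product(range(m + 1), repeat=m):
        if sum((i + 1) * k[i] for i in range(m)) != m:
            continue
        # inner equations (sums J_1,...,J_m): q_{i1} + ... + q_{is} = k_i
        rows = [[q for q in product(range(ki + 1), repeat=s) if sum(q) == ki]
                for ki in k]
        for Q in product(*rows):
            denom = 1
            for i in range(m):
                denom *= factorial(i + 1) ** k[i]
                for qij in Q[i]:
                    denom *= factorial(qij)
            terms.append((factorial(m) // denom, Q))
    return terms

# s = 1, m = 3 reproduces the classical Faa di Bruno expansion
# F''' w'^3 + 3 F'' w' w'' + F' w''' (coefficients 1, 3, 1)
print(sorted(c for c, _ in faa_di_bruno_terms(3, 1)))  # -> [1, 1, 3]
```

For $s=1$ the coefficients of the $m$-th derivative sum to the Bell number $B_m$ ($B_3=5$, $B_4=15$), which gives a quick consistency check of the enumeration.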
Therefore, the entropy inequality \\eqref{entropyineq1} becomes\n\\begin{equation}\\label{dis}\n\\frac{\\partial s(\\mathbf{z})}{\\partial z_\\alpha}\\frac{\\partial z_\\alpha}{\\partial t}+v\\frac{\\partial s(\\mathbf{z})}{\\partial z_\\alpha}\\frac{\\partial z_\\alpha}{\\partial x}+s(\\mathbf{z})\\frac{\\partial v}{\\partial x}+\\frac{\\partial J_s(\\mathbf{z})}{\\partial z_\\alpha}\\frac{\\partial z_\\alpha}{\\partial x}-\\sum_{i=1}^n\\sum_{k=0}^r\n\\Lambda_i^{(k)}\\frac{D^k \\mathcal{E}_i}{D x^k}\\geq 0.\n\\end{equation}\n\nBy expanding the derivatives in \\eqref{dis}, and using formulae \\eqref{faabruno} and \\eqref{estese}, a straightforward though tedious computation provides a very long expression that turns out to be a polynomial in some derivatives of field variables not belonging to the state space with coefficients depending at most on the field and state variables. This polynomial must be non-negative! Once \\eqref{dis} has been expanded, we may distinguish the derivatives of field variables therein appearing,\nand not entering the state space, in two different classes:\n\\begin{itemize}\n\\item \\emph{highest derivatives}: time derivatives of the field variables, time derivatives of the spatial derivatives (up to the order $r$) of the field variables, and spatial derivatives of highest order: it is easily ascertained that these highest derivatives appear linearly with coefficients depending on the field and state variables;\n\\item \\emph{higher derivatives}: spatial derivatives whose order is not maximal but higher than that of the derivatives entering the state space: it is easily recognized that these higher derivatives appear in powers of degree up to $r+1$, with coefficients depending on the field and state variables.\n\\end{itemize}\n\nLet us define the following sets:\n\\[\n\\begin{aligned}\n&\\widehat{\\mathcal{Z}}^{(k)}=\\left\\{ w \\in \\mathcal{Z}^{(k)} \n\\;:\\; \\frac{\\partial^h w}{\\partial x^h}\\notin \\mathcal{Z},\\; 
h=1,\\ldots,r-k\\right\\},\\quad k=0,\\ldots,r-1,\\\\\n&\\widehat{\\mathcal{Z}}^{(r)}\\equiv \\mathcal{Z}^{(r)},\n\\end{aligned}\n\\]\nand\n\\[\n\\begin{aligned}\n&\\widehat{\\mathcal{Z}}=\\bigcup_{k=0}^r \\widehat{\\mathcal{Z}}^{(k)}.\n\\end{aligned}\n\\]\nThe highest derivatives are the time derivatives of the field variables and of their spatial derivatives up to the order $r$, together with the $(r+1)$th order spatial derivatives of the elements belonging to the set $\\widehat{\\mathcal{Z}}$.\n\nTherefore, denoting by $\\boldsymbol\\zeta$ the vector whose components $\\zeta_i$ are the highest derivatives, and by \n$\\boldsymbol\\eta$ the vector whose components $\\eta_j$ are the higher derivatives, \nthe entropy inequality \\eqref{dis} can be cast in the following compact form:\n\\begin{equation}\n\\begin{aligned}\nA_i(\\mathbf{z}^\\star) \\zeta_i&+B^{(r+1)}_{j_1\\ldots j_{r+1}}(\\mathbf{z}^\\star)\\eta_{j_1}\\cdots \\eta_{j_{r+1}}+\nB^{(r)}_{j_1\\ldots j_{r}}(\\mathbf{z}^\\star)\\eta_{j_1}\\cdots \\eta_{j_{r}}\\\\\n&+\\ldots+B^{(2)}_{j_1j_{2}}(\\mathbf{z}^\\star)\\eta_{j_1}\\eta_{j_{2}}+B^{(1)}_{j_1}(\\mathbf{z}^\\star)\\eta_{j_1}+B^{(0)}(\\mathbf{z}^\\star)\\ge 0,\n\\end{aligned}\n\\end{equation}\nwhere the coefficients $A_i$, $B^{(r+1)}_{j_1\\ldots j_{r+1}}$, $B^{(r)}_{j_1\\ldots j_{r}}$,\\ldots,\n$B^{(2)}_{j_1j_{2}}$, $B^{(1)}_{j_1}$ and $B^{(0)}$ may depend upon the field variables and the elements entering the state space.\nThis inequality must be satisfied for every thermodynamical process. \n\nFirst, let us observe that nothing prevents having a thermodynamic process where $B^{(0)}=0$.\nMoreover, since we used in the entropy inequality all the constraints imposed by\nthe field equations together with their spatial derivatives, the highest and higher derivatives may assume arbitrary values. \nConsequently, we may give a set of conditions that are sufficient in order that the inequality \\eqref{dis} be fulfilled for every thermodynamical process.
These sufficient conditions provide constraints on the constitutive equations.\n\n\\begin{theorem}\\label{theorem2}\nLet $\\boldsymbol\\zeta=(\\zeta_1,\\ldots,\\zeta_p)$ be the vector of highest derivatives, and $\\boldsymbol\\eta=(\\eta_1,\\ldots,\\eta_q)$ the vector of higher derivatives. \nLet \n\\begin{itemize}\n\\item $A_i(\\mathbf{z}^\\star)$ be $p$ functions of $\\mathbf{z}^\\star$;\n\\item $B^{(k)}_{j_1\\ldots j_{k}}(\\mathbf{z}^\\star)$, with $k=1,\\ldots,r+1$, be $\\binom{q-k+1}{k}$ functions of $\\mathbf{z}^\\star$; \n\\item $B^{(0)}(\\mathbf{z}^\\star)$ be a function of $\\mathbf{z}^\\star$.\n\\end{itemize}\nThe inequality\n\\begin{equation}\n\\begin{aligned}\nA_i(\\mathbf{z}^\\star) \\zeta_i&+B^{(r+1)}_{j_1\\ldots j_{r+1}}(\\mathbf{z}^\\star)\\eta_{j_1}\\cdots \\eta_{j_{r+1}}+\nB^{(r)}_{j_1\\ldots j_{r}}(\\mathbf{z}^\\star)\\eta_{j_1}\\cdots \\eta_{j_{r}}\\\\\n&+\\ldots+B^{(2)}_{j_1j_{2}}(\\mathbf{z}^\\star)\\eta_{j_1}\\eta_{j_{2}}+B^{(1)}_{j_1}(\\mathbf{z}^\\star)\\eta_{j_1}+B^{(0)}(\\mathbf{z}^\\star)\\ge 0,\n\\end{aligned}\n\\end{equation}\nholds for arbitrary vectors $\\boldsymbol\\zeta$ and $\\boldsymbol\\eta$ if \n\\begin{enumerate}\n\\item $A_i=0$;\n\\item $B^{(2k-1)}_{j_1\\ldots j_{2k-1}}=0$, $k=1,\\ldots,\\lfloor{\\frac{r+2}{2}}\\rfloor$\\footnote{$\\lfloor{x}\\rfloor$ denotes the greatest integer not exceeding $x$.};\n\\item $B^{(2k)}_{j_1\\ldots j_{2k}}\\eta_{j_1}\\cdots \\eta_{j_{2k}}$, $k=1,\\ldots,\\lfloor{\\frac{r+1}{2}}\\rfloor$,\n is nonnegative for all $\\boldsymbol\\eta$;\n\\item $B^{(0)}\\ge 0$.
$\\qquad \\square$\n\\end{enumerate}\n\n\\end{theorem}\n\nDue to Theorem~\\ref{theorem2}, by imposing that the coefficients of $u_{j,t}$, $u_{j,t x}$, \\ldots, $u_{j,t\\underbrace{x\\ldots x}_{r}}$, where the indices ${(\\cdot)}_{,t}$ and ${(\\cdot)}_{,x}$ denote the\npartial derivatives with respect to time and space, respectively,\nvanish, we obtain:\n\\begin{equation}\n\\begin{aligned}\n&\\frac{\\partial s}{\\partial u_j}-\\sum_{i=1}^n\\sum_{k=0}^r\\Lambda_i^{(k)}\\frac{D^k}{Dx^k}\\left(\\frac{\\partial\\Phi_i}{\\partial u_j}\\right)=0,\\\\\n&\\frac{\\partial s}{\\partial u_{j,x}}-\\sum_{i=1}^n\\sum_{k=1}^r \\binom{k}{k-1}\\Lambda_i^{(k)}\\frac{D^{k-1}}{Dx^{k-1}}\\left(\\frac{\\partial\\Phi_i}{\\partial u_j}\\right)=0,\\\\\n&\\frac{\\partial s}{\\partial u_{j,xx}}-\\sum_{i=1}^n\\sum_{k=2}^r \\binom{k}{k-2}\\Lambda_i^{(k)}\\frac{D^{k-2}}{Dx^{k-2}}\\left(\\frac{\\partial\\Phi_i}{\\partial u_j}\\right)=0,\\\\\n&\\ldots\\\\\n&\\frac{\\partial s}{\\partial u_{j,\\underbrace{x\\ldots x}_{r}}}-\\sum_{i=1}^n\\Lambda_i^{(r)}\\frac{\\partial\\Phi_i}{\\partial u_j}=0,\n\\end{aligned}\n\\end{equation}\nwhich allow the determination of the Lagrange multipliers. The vanishing of the coefficients of the remaining highest derivatives, \n\\begin{equation}\n\\sum_{i=1}^n\\Lambda_i^{(r)}\\left(\\frac{\\partial\\Psi_i}{\\partial \\widehat{z}_\\alpha}+\\frac{\\partial\\chi_i}{\\partial \\widehat{z}_\\alpha}\\right)=0,\\qquad \\widehat{z}_\\alpha\\in\\widehat{\\mathcal{Z}},\n\\end{equation}\nprovides conditions to be used together with the constraints coming from \nthe arbitrariness of the higher derivatives to restrict the constitutive equations.
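To see why the coefficients multiplying the highest derivatives linearly must vanish, consider a toy instance of the inequality of Theorem~\\ref{theorem2} with a single highest derivative $\\zeta$ and a single higher derivative $\\eta$ (so $r=1$); the numerical values below are arbitrary choices of ours, not taken from the paper:

```python
def toy_lhs(A, B2, B1, B0, zeta, eta):
    # toy instance (r = 1) of the inequality in Theorem 2:
    #   A*zeta + B2*eta**2 + B1*eta + B0 >= 0
    return A * zeta + B2 * eta**2 + B1 * eta + B0

# a nonzero linear coefficient A lets an arbitrary zeta violate the sign
assert toy_lhs(A=1.0, B2=1.0, B1=0.0, B0=1.0, zeta=-100.0, eta=0.0) < 0

# likewise, the odd coefficient B1 must vanish (here B2 = 0)
assert toy_lhs(A=0.0, B2=0.0, B1=1.0, B0=1.0, zeta=0.0, eta=-100.0) < 0

# under the sufficient conditions A = B1 = 0, B2 >= 0, B0 >= 0,
# the left-hand side is nonnegative whatever (zeta, eta) are
assert all(toy_lhs(0.0, 2.0, 0.0, 0.5, z, e) >= 0
           for z in (-1e6, 0.0, 1e6) for e in (-1e3, 0.0, 1e3))
```

Note that the conditions of the theorem are only sufficient: for instance, $B^{(2)}\\eta^2+B^{(1)}\\eta+B^{(0)}$ can also be nonnegative with $B^{(1)}\\ne 0$, provided its discriminant is nonpositive.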
After all these restrictions have been derived, the residual entropy inequality, say\n\\begin{equation} \nB^{(0)}\\ge 0,\n\\end{equation}\nremains, providing further constraints.\n\nIt is evident that the general restrictions here derived cannot be discussed\nfrom a physical point of view, but they are essential in writing the computer algebra program that almost automatically computes the restrictions placed by the second law.\n\nIn the next Section, we provide some examples of physical interest where the procedure here described can be applied, and we discuss the physical meaning of the results.\n\n\\section{Applications}\n\\label{sec:applications}\nHere, we consider two physical examples of fluids whose constitutive equations involve first or second order non-localities, \\emph{i.e.}, special instances of higher grade fluids. \nIn modern terminology, a fluid is said to be of grade $(r+1)$ \n\\cite{Dunn-Serrin,Truesdell_Noll,Dunn-Rajagopal,Gouin2019} if the constitutive quantities are allowed to depend on gradients of order $r$. In recent years, these higher grade \nfluids have been employed, for instance, to model capillarity effects \n\\cite{Gouin1985a,Gouin1985b}, or to analyze the structure of liquid-vapor phase transitions \nunder both static\n\\cite{Aifantis-Serrin-1,Aifantis-Serrin-2} and dynamic \\cite{Slemrod-1,Slemrod-2} conditions. \n\nAs observed in \\cite{Dunn-Serrin,Dunn}, these fluids are, in general, incompatible with the restrictions placed by the second law of thermodynamics. In order to find a remedy to such an incompatibility, these authors proposed a generalization of the classical local balance of energy by postulating the existence of a rate of supply of mechanical energy, the so-called interstitial working; in such a framework, the entropy flux has the classical form as the ratio between the heat flux and the absolute temperature.
Nevertheless, the same authors \\cite{Dunn-Serrin} remarked that the interstitial working can be removed but at the cost of introducing an entropy extra-flux \\cite{Muller} in order to satisfy the second law of thermodynamics. \n\nIn the applications we consider below, the assumptions we make consist in taking the local energy balance in the classical form without including any extra-term; moreover, we write the entropy inequality without specializing the form of the entropy flux: as will be seen, the expression of entropy flux will arise as a consequence of the extended procedure when solving the constraints placed by the second law.\n\n\\subsection{Fluid of grade 2 with a scalar internal variable}\nLet us consider a fluid of grade 2 whose description involves, in addition to the basic fields of mass density, velocity and internal energy, an internal variable. The latter may describe an additional internal degree of freedom of the material, for instance representative of a suitable scalar microstructure \\cite{Capriz,OS-2008} or another extensive property.\n\nThe governing equations we consider read\n\\begin{equation}\n\\label{fluid-grade-2}\n\\begin{aligned}\n&\\mathcal{E}_1\\equiv\\frac{D\\rho}{D t} + \\frac{D (\\rho v)}{D x}=0, \\\\\n&\\mathcal{E}_2\\equiv\\frac{D(\\rho v)}{D t} + \\frac{D (\\rho v^2 - T)}{D x}=0,\\\\\n&\\mathcal{E}_3\\equiv\\frac{D}{D t}\\left(\\rho\\varepsilon+\\rho \\frac{v^2}{2}\\right) +\\frac{D }{D x}\\left(\\rho v\\varepsilon+\\rho \\frac{v^3}{2}-Tv+ q\\right)=0,\\\\\n&\\mathcal{E}_4\\equiv\\frac{D(\\rho\\gamma)}{Dt}+\\frac{D(\\rho v\\gamma+\\phi)}{Dx}=0,\n\\end{aligned}\n\\end{equation}\nwhere $\\rho$ is the mass density, $v$ the velocity, $\\varepsilon$ the internal energy per unit mass, and $\\gamma$ an internal state variable;\nmoreover, the Cauchy stress $T$, the heat flux $q$, and the flux $\\phi$ of internal variable must be assigned by \nmeans of suitable constitutive equations such that for every admissible process the entropy 
inequality \n\\begin{equation}\n\\rho\\left(\\frac{Ds}{D t}+v\\frac{Ds}{Dx}\\right)+ \\frac{DJ_s}{Dx} \\ge 0\n\\end{equation}\nbe satisfied, where $s$ and $J_s$ (to be assigned as constitutive quantities) denote the specific entropy and the entropy flux, respectively.\n\nWe assume that the state space is spanned by\n\\begin{equation}\n\\mathcal{Z}=\\{\\rho,\\varepsilon,\\gamma,\\rho_{,x},v_{,x},\\varepsilon_{,x},\\gamma_{,x}\\}.\n\\end{equation}\n\nAs shown in the previous Section, the exploitation of the second law of thermodynamics is here performed by taking into \naccount the constraints \nimposed on the thermodynamic processes by the balance equations and their first order extensions; these \nconstraints are imposed by introducing some Lagrange multipliers.\nTherefore, the entropy inequality becomes\n\\begin{equation}\n\\label{entropyconstrained}\n\\begin{aligned}\n&\\rho\\left(\\frac{Ds}{D t}+v\\frac{Ds}{Dx}\\right)+ \\frac{DJ_s}{Dx} \\\\\n&\\quad- \\Lambda^{(0)}_1 \\mathcal{E}_1- \\Lambda^{(0)}_2 \\mathcal{E}_2- \\Lambda^{(0)}_3 \\mathcal{E}_3\n- \\Lambda^{(0)}_4 \\mathcal{E}_4\\\\\n&\\quad-\\Lambda^{(1)}_1\\frac{D\\mathcal{E}_1}{Dx}-\\Lambda^{(1)}_2\\frac{D\\mathcal{E}_2}{Dx}\n-\\Lambda^{(1)}_3\\frac{D\\mathcal{E}_3}{Dx}-\\Lambda^{(1)}_4\\frac{D\\mathcal{E}_4}{Dx} \\geq 0.\n\\end{aligned}\n\\end{equation}\nFor the sake of clarity, we present the details of the computation we are required to do in applying the extended Liu procedure.
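Before writing out the full expression, the mechanics of the expansion can be illustrated on a drastically reduced toy problem. The following sympy sketch (ours, not the authors' Reduce routines) keeps only the mass balance $\\mathcal{E}_1$, a reduced state space $\\{\\rho,\\rho_{,x}\\}$, and the abstract entropy-density form \\eqref{entropyineq1} of the inequality; collecting the coefficients of the time derivatives reproduces the corresponding relations of the general scheme, $\\Lambda^{(0)}_1=\\partial s\/\\partial\\rho$ and $\\Lambda^{(1)}_1=\\partial s\/\\partial\\rho_{,x}$:

```python
import sympy as sp

# field symbols and the derivative symbols produced by the expansion
rho, v = sp.symbols('rho v')
rho_t, rho_x, rho_tx, rho_xx = sp.symbols('rho_t rho_x rho_tx rho_xx')
v_t, v_x, v_tx, v_xx = sp.symbols('v_t v_x v_tx v_xx')

# the chain-rule operators D/Dt and D/Dx of Section 2, realized as
# lookup tables sending each quantity to its t- or x-derivative symbol
d_t = {rho: rho_t, v: v_t, rho_x: rho_tx, v_x: v_tx}
d_x = {rho: rho_x, v: v_x, rho_x: rho_xx, v_x: v_xx, rho_t: rho_tx}

def Dt(F):
    return sum(sp.diff(F, w) * dw for w, dw in d_t.items())

def Dx(F):
    return sum(sp.diff(F, w) * dw for w, dw in d_x.items())

# entropy and entropy flux on the reduced state space {rho, rho_x}
s = sp.Function('s')(rho, rho_x)
Js = sp.Function('J_s')(rho, rho_x)
L0, L1 = sp.symbols('Lambda_0 Lambda_1')   # Lagrange multipliers

E1 = Dt(rho) + Dx(rho * v)                 # mass balance
ineq = sp.expand(Dt(s) + Dx(v * s + Js) - L0 * E1 - L1 * Dx(E1))

# the coefficients of the highest derivatives rho_t and rho_tx must
# vanish, which fixes the two Lagrange multipliers
assert sp.simplify(ineq.coeff(rho_t) - sp.diff(s, rho) + L0) == 0
assert sp.simplify(ineq.coeff(rho_tx) - sp.diff(s, rho_x) + L1) == 0
```

In the full computation, the same bookkeeping is carried out for all four balances \\eqref{fluid-grade-2} and for all the remaining highest and higher derivatives, which is precisely the part delegated to the computer algebra routines.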
\n\nExpanding the derivatives in the entropy inequality \\eqref{entropyconstrained}, we obtain a very long expression, say\n\\begin{align*}\n&\\left(\\rho\\frac{\\partial s}{\\partial\\rho}-\\Lambda^{(0)}_1\\right)\\rho_{,t}-\\left(\\rho\\Lambda^{(0)}_2+\\rho_{,x}\\Lambda^{(1)}_2\\right)v_{,t}+\\left(\\rho\\frac{\\partial s}{\\partial\\varepsilon}-\\rho\\Lambda^{(0)}_3-\\rho_{,x}\\Lambda^{(1)}_3\\right)\\varepsilon_{,t}\\allowdisplaybreaks\\\\\n&\\quad +\\left(\\rho\\frac{\\partial s}{\\partial\\gamma}-\\rho\\Lambda^{(0)}_4-\\rho_{,x}\\Lambda^{(1)}_4\\right)\\gamma_{,t}\n+\\left(\\rho\\frac{\\partial s}{\\partial\\rho_{,x}}-\\Lambda^{(1)}_1\\right)\\rho_{,tx}\\allowdisplaybreaks\\\\\n&\\quad+\\rho\\left(\\frac{\\partial s}{\\partial v_{,x}}-\\Lambda^{(1)}_2\\right)v_{,tx}\n+\\rho\\left(\\frac{\\partial s}{\\partial \\varepsilon_{,x}}-\\Lambda^{(1)}_3\\right)\\varepsilon_{,tx}\n+\\rho\\left(\\frac{\\partial s}{\\partial \\gamma_{,x}}-\\Lambda^{(1)}_4\\right)\\gamma_{,tx}\\allowdisplaybreaks\\\\\n&\\quad+\\left(\\frac{\\partial T}{\\partial\\rho_{,x}}\\Lambda^{(1)}_2- \\frac{\\partial q}{\\partial\\rho_{,x}}\\Lambda^{(1)}_3 \n- \\frac{\\partial\\phi}{\\partial\\rho_{,x}}\\Lambda^{(1)}_4 \\right)\\rho_{,xxx}\\allowdisplaybreaks\\\\\n&\\quad+\\left(\\frac{\\partial T}{\\partial v_{,x}}\\Lambda^{(1)}_2- \\frac{\\partial q}{\\partial v_{,x}}\\Lambda^{(1)}_3 \n- \\frac{\\partial\\phi}{\\partial v_{,x}}\\Lambda^{(1)}_4 \\right)v_{,xxx}\\allowdisplaybreaks\\\\\n&\\quad +\\left(\\frac{\\partial T}{\\partial \\varepsilon_{,x}}\\Lambda^{(1)}_2 - \\frac{\\partial q}{\\partial \\varepsilon_{,x}}\\Lambda^{(1)}_3 \n- \\frac{\\partial\\phi}{\\partial \\varepsilon_{,x}}\\Lambda^{(1)}_4\\right)\\varepsilon_{,xxx}\\allowdisplaybreaks\\\\\n&\\quad +\\left(\\frac{\\partial T}{\\partial \\gamma_{,x}}\\Lambda^{(1)}_2- \\frac{\\partial q}{\\partial \\gamma_{,x}}\\Lambda^{(1)}_3 \n- \\frac{\\partial\\phi}{\\partial \\gamma_{,x}}\\Lambda^{(1)}_4 
\\right)\\gamma_{,xxx}\\allowdisplaybreaks\\\\\n&\\quad+\\left(\\frac{\\partial ^{2}T}{\\partial \\rho_{,x}^{2}} \\Lambda^{(1)}_2 \n- \\frac{\\partial ^{2}q}{\\partial \\rho_{,x}^{2}} \\Lambda^{(1)}_3\n-\\frac{\\partial ^{2}\\phi }{\\partial \\rho_{,x}^{2}} \\Lambda^{(1)}_4 \n\\right) \\rho_{,xx}^{2}\\allowdisplaybreaks\\\\\n&\\quad+2 \\left(\\frac{\\partial ^{2}T}{\\partial \\rho_{,x}\\partial v_{,x}} \\Lambda^{(1)}_2 \n- \\frac{\\partial ^{2}q}{\\partial \\rho_{,x}\\partial v_{,x}} \\Lambda^{(1)}_3\n-\\frac{\\partial ^{2}\\phi }{\\partial \\rho_{,x}\\partial v_{,x}} \\Lambda^{(1)}_4\\right) \\rho_{,xx} v_{,xx}\\allowdisplaybreaks\\\\ \n&\\quad+2\\left( \\frac{\\partial ^{2}T}{\\partial \\varepsilon _{,x}\\partial \\rho_{,x}} \\Lambda^{(1)}_2 \n- \\frac{\\partial ^{2}q}{\\partial \\varepsilon _{,x}\\partial \\rho_{,x}} \\Lambda^{(1)}_3\n- \\frac{\\partial ^{2}\\phi }{\\partial \\varepsilon _{,x}\\partial \\rho_{,x}} \\Lambda^{(1)}_4 \\right) \\rho_{,xx} \\varepsilon _{,xx} \\allowdisplaybreaks\\\\\n&\\quad+2 \\left( \\frac{\\partial ^{2}T}{\\partial \\gamma_{,x}\\partial \\rho_{,x}} \\Lambda^{(1)}_2 \n-\\frac{\\partial ^{2}q}{\\partial \\gamma_{,x}\\partial \\rho_{,x}} \\Lambda^{(1)}_3 \n- \\frac{\\partial ^{2}\\phi }{\\partial \\gamma_{,x}\\partial \\rho_{,x}} \\Lambda^{(1)}_4\\right) \\rho_{,xx}\\gamma_{,xx} \\allowdisplaybreaks\\\\\n&\\quad+\\left(\\frac{\\partial ^{2}T}{\\partial v_{,x}^{2}} \\Lambda^{(1)}_2 \n- \\frac{\\partial ^{2}q}{\\partial v_{,x}^{2}} \\Lambda^{(1)}_3 \n-\\frac{\\partial ^{2}\\phi }{\\partial v_{,x}^{2}} \\Lambda^{(1)}_4\\right) v_{,xx}^{2}\\allowdisplaybreaks\\\\\n&\\quad+2\\left( \\frac{\\partial ^{2}T}{\\partial \\varepsilon _{,x}\\partial v_{,x}} \\Lambda^{(1)}_2 \n- \\frac{\\partial ^{2}q}{\\partial \\varepsilon _{,x}\\partial v_{,x}} \\Lambda^{(1)}_3 \n- \\frac{\\partial ^{2}\\phi }{\\partial \\varepsilon _{,x}\\partial v_{,x}} \\Lambda^{(1)}_4\\right) v_{,xx} \\varepsilon _{,xx} \\allowdisplaybreaks\\\\\n&\\quad+2\\left( 
\\frac{\\partial ^{2}T}{\\partial \\gamma_{,x}\\partial v_{,x}} \\Lambda^{(1)}_2 \n- \\frac{\\partial ^{2}q}{\\partial \\gamma_{,x}\\partial v_{,x}} \\Lambda^{(1)}_3 \n- \\frac{\\partial ^{2}\\phi }{\\partial \\gamma_{,x}\\partial v_{,x}}\\Lambda^{(1)}_4\\right) v_{,xx}\\gamma_{,xx} \\allowdisplaybreaks\\\\\n&\\quad+ \\left(\\frac{\\partial ^{2}T}{\\partial \\varepsilon _{,x}^{2}} \\Lambda^{(1)}_2\n- \\frac{\\partial ^{2}q}{ \\partial \\varepsilon _{,x}^{2}} \\Lambda^{(1)}_3 \n-\\frac{\\partial ^{2}\\phi }{\\partial \\varepsilon _{,x}^{2}} \\Lambda^{(1)}_4\\right) \\varepsilon _{,xx}^{2} \\allowdisplaybreaks\\\\\n&\\quad+2 \\left(\\frac{\\partial ^{2}T}{\\partial \\varepsilon _{,x}\\partial \\gamma_{,x}}\\Lambda^{(1)}_2 \n- \\frac{\\partial ^{2}q}{\\partial \\varepsilon _{,x}\\partial \\gamma_{,x}} \\Lambda^{(1)}_3 \n- \\frac{\\partial ^{2}\\phi }{\\partial \\varepsilon _{,x}\\partial \\gamma_{,x}}\\Lambda^{(1)}_4 \\right) \\varepsilon _{,xx} \\gamma_{,xx}\n \\allowdisplaybreaks\\\\\n&\\quad+\\left(\\frac{\\partial ^{2}T}{\\partial \\gamma_{,x}^{2}} \\Lambda^{(1)}_2\n- \\frac{\\partial ^{2}q}{\\partial \\gamma_{,x}^{2}} \\Lambda^{(1)}_3\n-\\frac{\\partial ^{2}\\phi }{\\partial \\gamma_{,x}^{2}} \\Lambda^{(1)}_4\\right) \\gamma_{,xx}^{2} \\allowdisplaybreaks\\\\\n&\\quad+\\left(\\frac{\\partial s}{\\partial \\rho_{,x}} \\rho v+\\frac{\\partial J_s}{\\partial \\rho_{,x}} +\\frac{\\partial T}{\\partial \\rho_{,x}} \\Lambda^{(0)}_2- \\frac{\\partial q}{\\partial \\rho_{,x}} \\Lambda^{(0)}_3-\\frac{\\partial \\phi }{\\partial \\rho_{,x}} \\Lambda^{(0)}_4\\right.\\allowdisplaybreaks\\\\\n&\\qquad-\\Lambda^{(1)}_1 v+\\Lambda^{(1)}_2\\left(\\frac{\\partial T}{\\partial \\rho } +2 \\frac{\\partial ^{2}T}{\\partial \\rho \\partial \\rho_{,x}} \\rho_{,x} +2 \\frac{\\partial ^{2}T}{\\partial \\varepsilon \\partial \\rho_{,x}} \\varepsilon _{,x} \n+2 \\frac{\\partial ^{2}T}{\\partial \\gamma\\partial \\rho_{,x}} \\gamma_{,x}\\right) \\allowdisplaybreaks \\\\\n&\\qquad 
+\\Lambda^{(1)}_3\\left(\\frac{\\partial T}{\\partial \\rho_{,x}} v_{,x}-2 \\frac{\\partial ^{2}q}{\\partial \\varepsilon \\partial \\rho_{,x}} \\varepsilon _{,x} -2 \\frac{\\partial ^{2}q}{\\partial \\gamma\\partial \\rho_{,x}} \\gamma_{,x} \n-2 \\frac{\\partial ^{2}q}{\\partial \\rho \\partial \\rho_{,x}} \\rho_{,x} \n-\\frac{\\partial q}{\\partial \\rho } \\right) \\allowdisplaybreaks\\\\\n&\\qquad\\left.+\\Lambda^{(1)}_4\\left( -2 \\frac{\\partial ^{2}\\phi }{ \\partial \\varepsilon \\partial \\rho_{,x}} \\varepsilon _{,x} \n-2\\frac{\\partial ^{2}\\phi }{\\partial \\gamma\\partial \\rho_{,x}} \\gamma_{,x} \n-2\\frac{\\partial ^{2}\\phi }{\\partial \\rho \\partial \\rho_{,x}} \\rho_{,x} -\\frac{\\partial \\phi }{\\partial \\rho } \\right)\n\\right) \\rho_{,xx} \\allowdisplaybreaks\\\\\n&\\quad+ \\left(\\frac{\\partial s}{\\partial v_{,x}} \\rho v \n+\\frac{\\partial J_s}{\\partial v_{,x}} \n+\\frac{\\partial T}{\\partial v_{,x}} \\Lambda^{(0)}_2 \n- \\frac{\\partial q}{\\partial v_{,x}} \\Lambda^{(0)}_3 \n-\\frac{\\partial \\phi }{\\partial v_{,x}} \\Lambda^{(0)}_4\\right.\\allowdisplaybreaks\\\\\n&\\qquad+\\left(-\\Lambda^{(1)}_1 \\rho \n-\\Lambda^{(1)}_2 \\left(\\rho v \n+2 \\frac{\\partial ^{2}T}{\\partial \\varepsilon \\partial v_{,x}} \\varepsilon _{,x} \n+2 \\frac{\\partial ^{2}T}{\\partial \\gamma\\partial v_{,x}} \\gamma_{,x} \n+2 \\frac{\\partial ^{2}T}{\\partial \\rho \\partial v_{,x}} \\rho_{,x}\\right)\\right.\\allowdisplaybreaks\\\\\n&\\qquad+\\Lambda^{(1)}_3 \\left(T +\\frac{\\partial T}{\\partial v_{,x}} v_{,x}\n-2 \\frac{\\partial ^{2}q}{\\partial \\rho \\partial v_{,x}} \\rho_{,x} \n-2 \\frac{\\partial ^{2}q}{\\partial \\varepsilon \\partial v_{,x}} \\varepsilon _{,x} \n-2 \\frac{\\partial ^{2}q}{\\partial \\gamma\\partial v_{,x}} \\gamma_{,x} \\right) \\allowdisplaybreaks\\\\\n&\\qquad\\left.+\\Lambda^{(1)}_4\\left(-2\\frac{\\partial ^{2}\\phi }{\\partial \\rho \\partial v_{,x}}\\rho_{,x}\n-2 \\frac{\\partial ^{2}\\phi }{ \\partial 
\\varepsilon \\partial v_{,x}} \\varepsilon _{,x} \n-2\\frac{\\partial ^{2}\\phi }{\\partial \\gamma\\partial v_{,x}} \\gamma_{,x} \\right) \\right) v_{,xx}\\allowdisplaybreaks\\\\\n&\\quad+ \\left(\\frac{\\partial s}{\\partial \\varepsilon _{,x}} \\rho v\n+\\frac{\\partial J_s}{\\partial \\varepsilon _{,x}} \n+\\frac{\\partial T}{\\partial \\varepsilon _{,x}} \\Lambda^{(0)}_2\n -\\frac{\\partial q}{\\partial \\varepsilon _{,x}} \\Lambda^{(0)}_3\n-\\frac{\\partial \\phi }{\\partial \\varepsilon _{,x}} \\Lambda^{(0)}_4\\right.\\allowdisplaybreaks\\\\\n&\\qquad+\\Lambda^{(1)}_2\\left(2 \\frac{\\partial ^{2}T}{\\partial \\varepsilon \\partial \\varepsilon _{,x}} \\varepsilon _{,x} \n+ \\frac{\\partial T}{\\partial \\varepsilon } \n+2 \\frac{\\partial ^{2}T}{\\partial \\varepsilon _{,x}\\partial \\gamma} \\gamma_{,x} \n+2 \\frac{\\partial ^{2}T}{\\partial \\varepsilon _{,x}\\partial \\rho } \\rho_{,x}\\right)\\allowdisplaybreaks\\\\\n&\\qquad+\\Lambda^{(1)}_3\\left(-2\\frac{\\partial ^{2}q}{\\partial \\varepsilon \\partial \\varepsilon _{,x}} \\varepsilon _{,x} \n+\\frac{\\partial T}{\\partial \\varepsilon _{,x}} v_{,x}\n- \\rho v\n-\\frac{\\partial q}{\\partial \\varepsilon } \n-2 \\frac{\\partial ^{2}q}{\\partial \\varepsilon _{,x}\\partial \\gamma} \\gamma_{,x} \n-2 \\frac{\\partial ^{2}q}{\\partial \\varepsilon _{,x}\\partial \\rho } \\rho_{,x}\\right) \\allowdisplaybreaks\\\\\n&\\qquad\\left.+\\Lambda^{(1)}_4\\left(-2 \\frac{\\partial ^{2}\\phi }{\\partial \\varepsilon \\partial \\varepsilon _{,x}} \\varepsilon _{,x} \n-\\frac{\\partial \\phi }{\\partial \\varepsilon } \n-2 \\frac{\\partial ^{2}\\phi }{\\partial \\varepsilon _{,x}\\partial \\gamma} \\gamma_{,x} \n-2 \\frac{\\partial ^{2}\\phi }{\\partial \\varepsilon _{,x}\\partial \\rho } \\rho_{,x}\\right)\\right) \\varepsilon _{,xx} \\allowdisplaybreaks\\\\ \n&\\quad+ \\left(\\frac{\\partial s}{\\partial \\gamma_{,x}} \\rho v\n+\\frac{\\partial J_s}{\\partial \\gamma_{,x}} \n+\\frac{\\partial T}{\\partial 
\\gamma_{,x}} \\Lambda^{(0)}_2\n- \\frac{\\partial q}{\\partial \\gamma_{,x}} \\Lambda^{(0)}_3\n-\\frac{\\partial \\phi }{\\partial \\gamma_{,x}} \\Lambda^{(0)}_4\\right.\\allowdisplaybreaks\\\\\n&\\qquad+\\Lambda^{(1)}_2\\left(2 \\frac{\\partial ^{2}T}{\\partial \\varepsilon \\partial \\gamma_{,x}} \\varepsilon _{,x} \n+2 \\frac{\\partial ^{2}T}{\\partial \\gamma\\partial \\gamma_{,x}} \\gamma_{,x} \n+\\frac{\\partial T}{ \\partial \\gamma} \n+2 \\frac{\\partial ^{2}T}{\\partial \\gamma_{,x}\\partial \\rho } \\rho_{,x}\\right) \\allowdisplaybreaks\\\\\n&\\qquad+\\Lambda^{(1)}_3\\left(-2 \\frac{\\partial ^{2}q}{\\partial \\varepsilon \\partial \\gamma_{,x}} \\varepsilon _{,x} \n-2 \\frac{\\partial ^{2}q}{\\partial \\gamma\\partial \\gamma_{,x}} \\gamma_{,x} \n-\\frac{\\partial q}{\\partial \\gamma} \n-2 \\frac{\\partial ^{2}q}{\\partial \\gamma_{,x}\\partial \\rho } \\rho_{,x}\n+\\frac{\\partial T}{\\partial \\gamma_{,x}} v_{,x}\\right)\\allowdisplaybreaks\\\\\n&\\qquad\\left.+\\Lambda^{(1)}_4 \\left(-2 \\frac{\\partial ^{2}\\phi }{\\partial \\varepsilon \\partial \\gamma_{,x}} \\varepsilon _{,x} \n-2\\frac{\\partial ^{2}\\phi }{\\partial \\gamma\\partial \\gamma_{,x}} \\gamma_{,x} \n-\\frac{\\partial \\phi }{\\partial \\gamma} \n-2 \\frac{\\partial ^{2}\\phi }{\\partial \\gamma_{,x}\\partial \\rho } \\rho_{,x}\n-\\rho v\\right)\\right)\\gamma_{,xx}\\allowdisplaybreaks\\\\\n& \\rho v\\left(\\frac{\\partial s}{\\partial \\rho }\\rho_{,x} \n+ \\frac{\\partial s}{\\partial \\varepsilon } \\varepsilon _{,x} \n+ \\frac{\\partial s}{\\partial \\gamma} \\gamma_{,x}\\right)\n+\\frac{\\partial J_s}{\\partial \\rho } \\rho_{,x}\n+\\frac{\\partial J_s}{ \\partial \\varepsilon } \\varepsilon _{,x}\n+\\frac{\\partial J_s}{ \\partial \\gamma} \\gamma_{,x}\\allowdisplaybreaks\\\\\n&\\quad-\\Lambda^{(0)}_1 \\left(v\\rho_{,x}+\\rho v_{,x}\\right)\n-\\Lambda^{(0)}_2 \\left(\\rho v v_{,x}-\\frac{\\partial T}{\\partial \\rho } \\rho_{,x}\n- \\frac{\\partial T}{\\partial \\varepsilon } 
\\varepsilon _{,x} \n-\\frac{\\partial T}{ \\partial \\gamma} \\gamma_{,x} \\right)\\allowdisplaybreaks\\\\\n&\\quad-\\Lambda^{(0)}_3\\left(\\rho v\\varepsilon _{,x} -T v_{,x}+ \\frac{\\partial q}{\\partial \\rho } \\rho_{,x}\n+\\frac{\\partial q}{ \\partial \\varepsilon } \\varepsilon _{,x} \n+\\frac{\\partial q}{\\partial \\gamma} \\gamma_{,x}\\right)\\allowdisplaybreaks\\\\\n&\\quad-\\Lambda^{(0)}_4 \\left(\\rho v\\gamma_{,x} +\\frac{\\partial \\phi }{\\partial \\rho } \\rho_{,x} \n+\\frac{\\partial \\phi }{\\partial \\varepsilon } \\varepsilon _{,x} \n+\\frac{\\partial \\phi }{\\partial \\gamma} \\gamma_{,x} \\right)\n-2 \\Lambda^{(1)}_1 \\rho_{,x} v_{,x}\\allowdisplaybreaks\\\\\n&\\quad-\\Lambda^{(1)}_2\\left(v\\rho_{,x} v_{,x}+\\rho v_{,x}^{2}\n-\\frac{\\partial ^{2}T}{\\partial \\rho ^{2}} \\rho_{,x}^{2}\n-2 \\frac{\\partial ^{2}T}{ \\partial \\rho \\partial \\varepsilon} \\rho_{,x} \\varepsilon _{,x} \n-2 \\frac{\\partial ^{2}T}{\\partial \\rho \\partial \\gamma} \\rho_{,x} \\gamma_{,x} \\right.\\allowdisplaybreaks\\\\\n&\\qquad\\left.- \\frac{\\partial ^{2}T}{\\partial \\varepsilon ^{2}} \\varepsilon _{,x}^{2} \n-2 \\frac{\\partial ^{2}T}{\\partial \\varepsilon \\partial \\gamma} \\varepsilon _{,x} \\gamma_{,x} \n-\\frac{\\partial ^{2}T}{ \\partial \\gamma^{2}} \\gamma_{,x}^{2} \\right)\\allowdisplaybreaks\\\\\n&\\quad-\\Lambda^{(1)}_3\\left(\n( v\\rho_{,x} +\\rho v_{,x})\\varepsilon _{,x} \n-\\left(\\frac{\\partial T}{\\partial \\rho } \\rho_{,x} \n+ \\frac{\\partial T}{\\partial \\varepsilon } \\varepsilon _{,x} \n+\\frac{\\partial T}{\\partial \\gamma} \\gamma_{,x} \\right) v_{,x}\\right.\\allowdisplaybreaks\\\\\n&\\qquad\\left.+\\frac{\\partial ^{2}q}{\\partial \\rho ^{2}} \\rho_{,x}^{2}\n+2 \\frac{\\partial ^{2}q}{\\partial \\rho\\partial \\varepsilon} \\rho_{,x} \\varepsilon _{,x} \n+2 \\frac{\\partial ^{2}q}{\\partial \\rho\\partial\\gamma} \\rho_{,x}\\gamma_{,x} \\right.\\allowdisplaybreaks\\\\\n&\\qquad\\left.+\\frac{\\partial ^{2}q}{\\partial 
\\varepsilon ^{2}} \\varepsilon _{,x}^{2}\n+2\\frac{\\partial ^{2}q}{\\partial \\varepsilon \\partial \\gamma} \\varepsilon _{,x} \\gamma_{,x} \n+\\frac{\\partial ^{2}q}{\\partial \\gamma^{2}} \\gamma_{,x}^{2} \\right)\\allowdisplaybreaks\\\\\n&\\quad-\\Lambda^{(1)}_4\\left((v \\rho_{,x}+\\rho v_{,x})\\gamma_{,x} \n+\\frac{\\partial ^{2}\\phi }{\\partial \\rho ^{2}} \\rho_{,x}^{2}\n+2 \\frac{\\partial ^{2}\\phi }{\\partial \\rho \\partial \\varepsilon} \\rho_{,x} \\varepsilon _{,x} \\right.\\allowdisplaybreaks\\\\\n&\\qquad\\left.+2\\frac{\\partial ^{2}\\phi }{\\partial \\rho \\partial \\gamma} \\rho_{,x} \\gamma_{,x} \n+\\frac{\\partial ^{2}\\phi }{ \\partial \\varepsilon ^{2}} \\varepsilon _{,x}^{2} \n+2 \\frac{\\partial ^{2}\\phi }{ \\partial \\varepsilon \\partial \\gamma} \\varepsilon _{,x} \\gamma_{,x} \n+\\frac{\\partial ^{2}\\phi }{\\partial \\gamma^{2}} \\gamma_{,x}^{2} \\right)\\ge 0,\n\\end{align*}\nwhere we can distinguish the \\emph{highest derivatives}, say\n\\[\n\\{\\rho_{,t},v_{,t},\\varepsilon_{,t},\\gamma_{,t},\\rho_{,tx},v_{,tx},\\varepsilon_{,tx},\\gamma_{,tx},\\rho_{,xxx},v_{,xxx},\\varepsilon_{,xxx},\\gamma_{,xxx}\\},\n\\]\nand the \\emph{higher derivatives}, say\n\\[\n\\{\\rho_{,xx},v_{,xx},\\varepsilon_{,xx},\\gamma_{,xx}\\}.\n\\]\nAs expected, the entropy inequality is linear in the highest derivatives and quadratic in the higher ones; \nthe coefficients are at most functions of the field and the state variables.\nBy annihilating the coefficients of the highest derivatives, we determine the Lagrange multipliers, say\n\\begin{equation}\n\\begin{aligned}\n&\\Lambda_1^{(0)}=\\rho\\frac{\\partial s}{\\partial\\rho}, \\qquad \n&&\\Lambda_2^{(0)}=-\\frac{\\rho_{,x}}{\\rho}\\frac{\\partial s}{\\partial v_{,x}},\\\\\n&\\Lambda_3^{(0)}=\\frac{\\partial s}{\\partial\\varepsilon}-\\frac{\\rho_{,x}}{\\rho}\\frac{\\partial s}{\\partial\\varepsilon_{,x}}, \\qquad \n&&\\Lambda_4^{(0)}=\\frac{\\partial 
s}{\\partial\\gamma}-\\frac{\\rho_{,x}}{\\rho}\\frac{\\partial s}{\\partial\\gamma_{,x}},\\\\\n&\\Lambda_1^{(1)}=\\rho\\frac{\\partial s}{\\partial \\rho_{,x}},\\qquad\n&&\\Lambda_2^{(1)}=\\frac{\\partial s}{\\partial v_{,x}},\\\\\n&\\Lambda_3^{(1)}=\\frac{\\partial s}{\\partial \\varepsilon_{,x}},\\qquad\n&&\\Lambda_4^{(1)}=\\frac{\\partial s}{\\partial \\gamma_{,x}},\n\\end{aligned}\n\\end{equation}\nas well as the following restrictions involving the entropy, the Cauchy stress tensor, the heat flux and the flux of internal variable:\n\\begin{equation}\n\\begin{aligned}\n&\\frac{\\partial s}{\\partial v_{,x}}\\frac{\\partial T}{\\partial \\rho_{,x}}-\n\\frac{\\partial s}{\\partial \\varepsilon_{,x}}\\frac{\\partial q}{\\partial \\rho_{,x}}-\n\\frac{\\partial s}{\\partial \\gamma_{,x}}\\frac{\\partial \\phi}{\\partial \\rho_{,x}}=0,\\\\\n&\\frac{\\partial s}{\\partial v_{,x}}\\frac{\\partial T}{\\partial v_{,x}}-\n\\frac{\\partial s}{\\partial \\varepsilon_{,x}}\\frac{\\partial q}{\\partial v_{,x}}-\n\\frac{\\partial s}{\\partial \\gamma_{,x}}\\frac{\\partial \\phi}{\\partial v_{,x}}=0,\\\\\n&\\frac{\\partial s}{\\partial v_{,x}}\\frac{\\partial T}{\\partial \\varepsilon_{,x}}-\n\\frac{\\partial s}{\\partial \\varepsilon_{,x}}\\frac{\\partial q}{\\partial \\varepsilon_{,x}}-\n\\frac{\\partial s}{\\partial \\gamma_{,x}}\\frac{\\partial \\phi}{\\partial \\varepsilon_{,x}}=0,\\\\\n&\\frac{\\partial s}{\\partial v_{,x}}\\frac{\\partial T}{\\partial \\gamma_{,x}}-\n\\frac{\\partial s}{\\partial \\varepsilon_{,x}}\\frac{\\partial q}{\\partial \\gamma_{,x}}-\n\\frac{\\partial s}{\\partial \\gamma_{,x}}\\frac{\\partial \\phi}{\\partial \\gamma_{,x}}=0.\n\\end{aligned}\n\\end{equation}\nThe latter, joined with the conditions obtained by annihilating the coefficients of the linear terms in the higher derivatives, provide the restrictions on the constitutive functions. 
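The elimination of the Lagrange multipliers is a purely algebraic step: the constrained inequality is polynomial in the highest derivatives, so their coefficients can be collected and annihilated mechanically. The SymPy sketch below illustrates this step on a hypothetical two-variable toy expression (not the system above); all symbols in it are purely illustrative.

```python
import sympy as sp

# Toy illustration of the Liu step (hypothetical example, not the system
# above): an expression linear in the "highest derivatives" X, Y can be
# nonnegative for arbitrary X, Y only if their coefficients vanish; this
# determines the multipliers Lambda1, Lambda2 and leaves a residual
# inequality in the remaining quantities.
X, Y, L1, L2, a, b, c = sp.symbols('X Y Lambda1 Lambda2 a b c')
expr = sp.expand((a - L1)*X + (b - c*L2)*Y + L1*L2)

# Annihilate the coefficients of the highest derivatives.
sols = sp.solve([expr.coeff(X), expr.coeff(Y)], [L1, L2], dict=True)[0]

# The surviving (reduced) expression must be nonnegative.
residual = sp.simplify(expr.subs(sols))
print(sols, residual)   # Lambda1 = a, Lambda2 = b/c; residual = a*b/c
```

In the actual computation the same collect-and-annihilate pattern is applied to the full expression above, which is why a computer algebra system (Reduce, in the paper) is essential.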
These conditions can be solved, and we are able to provide an explicit solution that provably satisfies the remaining restrictions, which are expressed as inequalities.\nTo proceed further, we start by taking the specific entropy as the sum of an equilibrium term and a non-equilibrium part expressed as a \nquadratic form in the gradients entering the state space; this quadratic part must be negative semidefinite in order to satisfy the principle of maximum entropy at equilibrium. By using some routines written in the Computer Algebra System Reduce \\cite{Reduce}, we obtain the following solution to all the thermodynamic restrictions:\n\\begin{equation}\\label{entr_1}\n\\begin{aligned}\ns&=s_0(\\rho,\\varepsilon)+s_1(\\rho,\\gamma)\\rho_{,x}^2,\\\\\nT&=\\rho^2\\frac{\\partial s_0}{\\partial \\rho}\\left(\\frac{\\partial s_0}{\\partial\\varepsilon}\\right)^{-1}\n+\\tau_1(\\rho,\\varepsilon,\\gamma)v_{,x}\\\\\n&+\\rho^2\\left(\\frac{\\partial s_0}{\\partial\\varepsilon}\\frac{\\partial s_1}{\\partial\\gamma}\\right)^{-1}\n\\left(s_1\\frac{\\partial^2 s_1}{\\partial\\rho\\partial\\gamma}-\\frac{\\partial s_1}{\\partial\\rho}\\frac{\\partial s_1}{\\partial\\gamma}\\right)\\rho_{,x}^2\\\\\n&+\\rho^2\\left(\\frac{\\partial s_0}{\\partial\\varepsilon}\\frac{\\partial s_1}{\\partial\\gamma}\\right)^{-1}\n\\left(s_1\\frac{\\partial^2 s_1}{\\partial\\gamma^2}-2\\left(\\frac{\\partial s_1}{\\partial\\gamma}\\right)^2\\right)\\rho_{,x}\\gamma_{,x},\\\\\nq&= q_1(\\rho,\\varepsilon,\\gamma)\\varepsilon_{,x}+q_2(\\rho,\\varepsilon,\\gamma)\\rho_{,x}+q_3(\\rho,\\varepsilon,\\gamma)v_{,x},\\\\\n\\phi&=\\left(2\\frac{\\partial s_1}{\\partial\\gamma}\\rho_{,x}\\right)^{-1}\n\\left(q_2\\frac{\\partial s_0}{\\partial \\varepsilon}+f(\\rho,\\varepsilon,\\gamma)+2k\\frac{\\partial s_1}{\\partial\\gamma}\\rho_{,x} -2\\rho^2s_1v_{,x}\\right),\\\\\nJ_s&=q\\frac{\\partial s_0}{\\partial\\varepsilon}-\\frac{1}{2}\\left(q_2\\frac{\\partial s_0}{\\partial 
\\varepsilon}+f(\\rho,\\varepsilon,\\gamma)-2\\rho^2s_1 v_{,x}\\right)\\rho_{,x},\n\\end{aligned}\n\\end{equation}\nwhere $k$ is an arbitrary constant and the function $s_1(\\rho,\\gamma)$ must be negative in order that the principle of maximum entropy at equilibrium be satisfied.\nMoreover, the reduced entropy inequality becomes \n\\begin{equation}\n\\label{reduced1}\n\\begin{aligned}\n&(q_1\\varepsilon_{,x}+q_2\\rho_{,x}+q_3v_{,x})\\left(\\frac{\\partial^2 s_0}{\\partial \\rho\\partial\\varepsilon}\\rho_{,x}+\\frac{\\partial^2 s_0}{\\partial\\varepsilon^2}\\varepsilon_{,x}\\right)+\\tau_1\\frac{\\partial s_0}{\\partial\\varepsilon} v_{,x}^2\\\\\n&-\\left(\\frac{\\partial s_1}{\\partial\\gamma}\\right)^{\\frac{1}{2}}\\left(\\frac{\\partial g(\\rho,\\varepsilon)}{\\partial \\rho}\\rho_{,x}+\\frac{\\partial g(\\rho,\\varepsilon)}{\\partial \\varepsilon}\\varepsilon_{,x}\\right)\\rho_{,x}\\geq 0,\n\\end{aligned}\n\\end{equation}\nalong with the constraint\n\\begin{equation}\nq_2\\frac{\\partial s_0}{\\partial\\varepsilon}+f(\\rho,\\varepsilon,\\gamma)-\\left(\\frac{\\partial s_1}{\\partial\\gamma}\\right)^{\\frac{1}{2}}g(\\rho,\\varepsilon)=0,\n\\end{equation}\nwhere $f(\\rho,\\varepsilon,\\gamma)$ and $g(\\rho,\\varepsilon)$ are arbitrary functions of their arguments.\n\nThe inequality \\eqref{reduced1} is satisfied if and only if the following conditions hold true:\n\\begin{equation}\n\\label{diffconstrgen}\n\\begin{aligned}\n&q_1\\frac{\\partial^2 s_0}{\\partial\\varepsilon^2}\\geq 0,\\qquad\\tau_1\\frac{\\partial s_0}{\\partial\\varepsilon} \\geq 0,\\\\\n&q_2\\frac{\\partial^2 s_0}{\\partial \\rho\\partial\\varepsilon}-\\left(\\frac{\\partial s_1}{\\partial\\gamma}\\right)^{\\frac{1}{2}}\\frac{\\partial g}{\\partial \\rho}\\geq 0,\\qquad q_3^2\\frac{\\partial^2 s_0}{\\partial\\varepsilon^2}-4\\frac{\\partial s_0}{\\partial\\varepsilon}\\tau_1 q_1\\geq 0,\\\\\n&\\left(q_3^2\\frac{\\partial^2 s_0}{\\partial\\rho \\partial\\varepsilon}-4\\frac{\\partial s_0}{\\partial\\varepsilon}\\tau_1 
q_2\\right)\\frac{\\partial^2 s_0}{\\partial\\rho \\partial\\varepsilon}+4\\left(\\frac{\\partial s_1}{\\partial\\gamma}\\right)^{\\frac{1}{2}}\\frac{\\partial s_0}{\\partial\\varepsilon}\\frac{\\partial g}{\\partial \\rho}\\tau_1\\leq 0,\\\\\n&\\left(q_2\\frac{\\partial^2 s_0}{\\partial\\varepsilon^2}-q_1\\frac{\\partial^2 s_0}{\\partial\\rho \\partial\\varepsilon}\\right)^2+4\\left(\\frac{\\partial s_1}{\\partial\\gamma}\\right)^{\\frac{1}{2}}\\frac{\\partial^2 s_0}{\\partial\\varepsilon^2}\\frac{\\partial g}{\\partial\\rho} q_1+\\frac{\\partial s_1}{\\partial\\gamma}\\left(\\frac{\\partial g}{\\partial\\varepsilon}\\right)^2\\\\\n&-2\\left(\\frac{\\partial s_1}{\\partial\\gamma}\\right)^{\\frac{1}{2}}\\left(q_2\\frac{\\partial^2 s_0}{\\partial\\varepsilon^2}+q_1\\frac{\\partial^2 s_0}{\\partial\\rho \\partial\\varepsilon}\\right)\n\\frac{\\partial g}{\\partial\\varepsilon}\\leq 0,\\\\\n&\\left(q_2\\frac{\\partial^2 s_0}{\\partial\\varepsilon^2}-q_1\\frac{\\partial^2 s_0}{\\partial\\rho \\partial\\varepsilon}\\right)^2\\frac{\\partial s_0}{\\partial\\varepsilon}\\tau_1-\\left(\\frac{\\partial s_1}{\\partial\\gamma}\\right)^{\\frac{1}{2}}\\left(q_3^2\\frac{\\partial^2 s_0}{\\partial\\varepsilon^2}-4\\frac{\\partial s_0}{\\partial\\varepsilon}\\tau_1 q_1\\right)\\frac{\\partial^2 s_0}{\\partial\\varepsilon^2}\\frac{\\partial g}{\\partial\\rho}\\\\\n&-\\left(\\frac{\\partial s_1}{\\partial\\gamma}\\right)^{\\frac{1}{2}}\\left(q_3^2\\frac{\\partial^2 s_0}{\\partial \\rho\\partial\\varepsilon}\\frac{\\partial^2 s_0}{\\partial\\varepsilon^2}-2\\left(q_2\\frac{\\partial^2 s_0}{\\partial\\varepsilon^2}+q_1\\frac{\\partial^2 s_0}{\\partial \\rho\\partial\\varepsilon}\\right)\\frac{\\partial s_0}{\\partial \\varepsilon}\\tau_1\\right)\\frac{\\partial g}{\\partial \\varepsilon}\\\\\n&+\\frac{\\partial s_1}{\\partial \\gamma}\\frac{\\partial s_0}{\\partial\\varepsilon}\\left(\\frac{\\partial g}{\\partial\\varepsilon}\\right)^2 \\tau_1\\leq 
0.\n\\end{aligned}\n\\end{equation}\n\nThe solution so recovered contains some degrees of freedom that can be fixed in order to model specific physical situations. The constitutive equation \\eqref{entr_1}$_1$ can be interpreted as an extension of the equilibrium constitutive equation to non-equilibrium situations \\cite{Muschik-Ehrentraut}.\nIn what follows, we limit ourselves to discussing in detail the case where $g(\\rho,\\varepsilon)=0$, whereupon the constraints \n\\eqref{diffconstrgen} simplify as\n\\begin{equation}\n\\label{lastrestr1}\n\\begin{aligned}\n&q_2\\frac{\\partial^2 s_0}{\\partial \\rho \\partial\\varepsilon}\\geq 0,\\qquad q_1\\frac{\\partial^2 s_0}{\\partial\\varepsilon^2}\\geq 0,\\qquad \\tau_1\\frac{\\partial s_0}{\\partial\\varepsilon}\\geq 0,\\\\\n&\\frac{\\partial^2 s_0}{\\partial \\rho\\partial\\varepsilon}\\left(4\\frac{\\partial s_0}{\\partial\\varepsilon}\\tau_1 q_2-q_3^2\\frac{\\partial^2 s_0}{\\partial \\rho\\partial\\varepsilon}\\right)\\geq 0,\\\\\n&\\frac{\\partial^2 s_0}{\\partial\\varepsilon^2}\\left(4\\frac{\\partial s_0}{\\partial\\varepsilon}\\tau_1 q_1-q_3^2\\frac{\\partial^2 s_0}{\\partial\\varepsilon^2}\\right)\\geq 0,\n\\end{aligned}\n\\end{equation}\ntogether with \n\\begin{equation}\n\\label{relationq1q2}\nq_2\\frac{\\partial^2 s_0}{\\partial\\varepsilon^2}-q_1\\frac{\\partial^2 s_0}{\\partial \\rho\\partial\\varepsilon}=0.\n\\end{equation}\n\nSome comments about the constitutive relations so characterized are in order. \nIn equilibrium situations, in which the gradients of the field variables (except at most the density $\\rho$) vanish, let us define the absolute temperature $\\theta$ by the classical thermodynamical relation $\\displaystyle \\frac{1}{\\theta}=\\frac{\\partial s_0}{\\partial\\varepsilon}$. 
\nUnder the hypothesis of invertibility of \n$\\theta$ with respect to $\\varepsilon$, which is guaranteed by the positivity of the specific heat \n$\\displaystyle c = \\frac{\\partial \\varepsilon}{\\partial\\theta}$, the internal energy $\\varepsilon$ can be\nexpressed as a function of the arguments $\\rho$ and $\\theta$. Thus, differentiating with respect to $\\rho$ the\ncondition\n\\[\n\\frac{\\partial s_0(\\rho,\\varepsilon(\\rho,\\theta))}{\\partial \\varepsilon}-\\frac{1}{\\theta}=0,\n\\]\nwe get\n\\begin{equation}\n\\label{sign}\n\\frac{\\partial^2 s_0}{\\partial\\rho\\partial\\varepsilon}+\\frac{\\partial^2 s_0}{\\partial \\varepsilon^2}\\frac{\\partial\\varepsilon}{\\partial\\rho}=0,\n\\end{equation}\nwhich, used in \\eqref{relationq1q2}, provides\n\\begin{equation}\nq_2=-q_1\\frac{\\partial\\varepsilon}{\\partial\\rho}.\n\\end{equation}\n\nThus, the heat flux reduces to\n\\begin{equation}\nq = q_1\\frac{\\partial\\varepsilon}{\\partial\\theta}\\theta_{,x}+q_3v_{,x},\n\\end{equation}\n\\emph{i.e.}, when $q_3=0$, we have the classical Fourier law of heat conduction. \n\nSince \n\\[\n\\frac{\\partial s_0}{\\partial\\varepsilon}=\\frac{1}{\\theta}>0, \\qquad\\frac{\\partial^2 s_0}{\\partial\\varepsilon^2}=-\\frac{1}{\\theta^2}\\frac{\\partial\\theta}{\\partial\\varepsilon}< 0,\n\\]\nwe have $\\tau_1\\geq 0$, and, as physics prescribes, $q_1\\leq 0$.\nAs far as $q_2$ is concerned, its sign is the same as that of $\\displaystyle\\frac{\\partial\\varepsilon}{\\partial\\rho}$, which in turn, from \\eqref{sign}, has the same sign as \n$\\displaystyle \\frac{\\partial^2 s_0}{\\partial\\rho\\partial\\varepsilon}$; thus, due to $\\displaystyle\\frac{\\partial\\varepsilon}{\\partial\\rho}\\geq 0$, we have $q_2\\geq 0$. 
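The inversion argument above is easy to reproduce with a computer algebra system (the authors used Reduce). As an illustration, the following SymPy sketch verifies identity \\eqref{sign}, and hence $q_2=-q_1\\,\\partial\\varepsilon/\\partial\\rho$, for a hypothetical van der Waals-like equilibrium entropy $s_0$; this particular choice of $s_0$ is only an example, not a constitutive assumption of the paper.

```python
import sympy as sp

rho, eps, theta = sp.symbols('rho varepsilon theta', positive=True)
a, c_v, R = sp.symbols('a c_v R', positive=True)

# Hypothetical van der Waals-like equilibrium entropy (illustration only).
s0 = c_v*sp.log(eps + a*rho) - R*sp.log(rho)

# Invert 1/theta = ds0/deps to express eps as a function of (rho, theta).
eps_of = sp.solve(sp.Eq(sp.diff(s0, eps), 1/theta), eps)[0]   # c_v*theta - a*rho

# Identity (sign): d^2 s0/(drho deps) + (d^2 s0/deps^2) * deps/drho = 0,
# which, combined with (relationq1q2), gives q2 = -q1 * deps/drho.
lhs = sp.diff(s0, rho, eps) + sp.diff(s0, eps, 2)*sp.diff(eps_of, rho)
print(sp.simplify(lhs))   # 0
```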
\nFinally, the last two inequalities in \\eqref{lastrestr1} provide\n\\begin{equation}\n\\label{onlyone}\nq_3^2\\leq 4 \\left(\\frac{\\partial^2 s_0}{\\partial \\varepsilon^2}\\right)^{-1}\\frac{\\partial s_0}{\\partial\\varepsilon}q_1\\tau_1.\n\\end{equation}\nLast but not least, it is worth observing that the entropy flux contains the classical term $\\displaystyle \\frac{q}{\\theta}$ and an additional term (an extra-flux) that arises from the application of the procedure, without the need of postulating it at the outset. Moreover, at equilibrium the flux $\\phi$ of the internal variable $\\gamma$ reduces to a constant, and\nthe stress tensor $T$ assumes the classical local form if the mass density is constant. \n\n\\subsection{Korteweg fluids}\nHere, we consider the case of a fluid of grade 3 \\cite{Truesdell_Noll}; in fact, we include in the state space the second spatial derivative of the mass density \\cite{Dunn-Serrin}. For this class of fluids, Korteweg \n\\cite{Korteweg} proposed that the Cauchy stress tensor be given by a constitutive equation of the form\n\\begin{equation} \n\\label{kort}\nT_{ij}=\\left(-p+\\sum_{k=1}^3\\left(\\alpha_1\\frac{\\partial^2\\rho}{\\partial x_k^2}+\\alpha_2\\frac{\\partial\\rho}{\\partial x_k}\n\\frac{\\partial\\rho}{\\partial x_k}\\right)\\right)\\delta_{ij}+\\alpha_3\\frac{\\partial\\rho}{\\partial x_i}\n\\frac{\\partial\\rho}{\\partial x_j}+\\alpha_4\\frac{\\partial^2\\rho}{\\partial x_i\\partial x_j},\n\\end{equation}\nwhere $\\rho$ denotes the mass density, $p$ the pressure of the fluid, and $\\alpha_i$, $i=1,\\ldots,4$, \nsuitable material functions depending on density and temperature. These fluids received moderate attention in the \nliterature after the pioneering paper by Dunn and Serrin \\cite{Dunn-Serrin}, where the compatibility with the basic tenets \nof rational continuum thermodynamics \\cite{Truesdell} was extensively studied. 
They have also been studied in \n\\cite{CST-JMP-2009,COT-JMP-2011,CST-JNET-2010,COP-Elasticity-2011} \nby means of an extended Liu procedure, and by \nHeida and M\\'alek \\cite{Heida-Malek} following a different methodology. \n\nLimiting ourselves to the one-dimensional case, the balance equations read \n\\begin{equation}\n\\label{korteweg}\n\\begin{aligned}\n&\\mathcal{E}_1\\equiv\\frac{D\\rho}{D t} + \\frac{D (\\rho v)}{D x}=0, \\\\\n&\\mathcal{E}_2\\equiv\\frac{D(\\rho v)}{D t} + \\frac{D (\\rho v^2 - T)}{D x}=0,\\\\\n&\\mathcal{E}_3\\equiv\\frac{D}{D t}\\left(\\rho\\varepsilon+\\rho \\frac{v^2}{2}\\right) +\\frac{D }{D x}\\left(\\rho v\\varepsilon+\\rho \\frac{v^3}{2}-Tv+ q\\right)=0,\n\\end{aligned}\n\\end{equation}\nwhere $\\rho$ is the mass density, $v$ the velocity, and $\\varepsilon$ the internal energy per unit mass;\nmoreover, the stress $T$ and the heat flux $q$ must be assigned by means of constitutive equations.\nThe constitutive equations must be such that for every admissible process the entropy inequality \n\\begin{equation}\n\\rho\\left(\\frac{Ds}{D t}+v\\frac{Ds}{Dx}\\right)+ \\frac{DJ_s}{Dx} \\ge 0,\n\\end{equation}\nwhere $s$ is the specific entropy and $J_s$ the entropy flux, is satisfied.\n\nLet us assume that the state space is spanned by\n\\begin{equation}\n\\mathcal{Z}\\equiv\\{\\rho, \\varepsilon, \\rho_{,x}, \\varepsilon_{,x}, v_{,x}, \\rho_{,xx}\\}.\n\\end{equation}\n\nIn this case, the exploitation of the second law of thermodynamics requires that we take into account the constraints \nimposed on the thermodynamic processes by the balance equations together with their first and second order spatial derivatives; \nnevertheless, we observe that, since the only second order spatial derivative belonging to the state space is $\\rho_{,xx}$, the only second order extension we need to use as a constraint is $\\displaystyle\\frac{D^2\\mathcal{E}_1}{Dx^2}$.\nThus, the entropy inequality 
reads:\n\\begin{equation}\n\\label{entropyconstrainedkorteweg}\n\\begin{aligned}\n&\\rho\\left(\\frac{Ds}{D t}+v\\frac{Ds}{Dx}\\right)+ \\frac{DJ_s}{Dx} \\\\\n&\\quad- \\Lambda^{(0)}_1 \\mathcal{E}_1- \\Lambda^{(0)}_2 \\mathcal{E}_2- \\Lambda^{(0)}_3 \\mathcal{E}_3\\\\\n&\\quad-\\Lambda^{(1)}_1\\frac{D\\mathcal{E}_1}{Dx}-\\Lambda^{(1)}_2\\frac{D\\mathcal{E}_2}{Dx}\n-\\Lambda^{(1)}_3\\frac{D\\mathcal{E}_3}{Dx}-\\Lambda^{(2)}_1\\frac{D^2\\mathcal{E}_1}{Dx^2} \\geq 0.\n\\end{aligned}\n\\end{equation}\n\nBy expanding the derivatives in \\eqref{entropyconstrainedkorteweg}, we obtain a huge expression that we do not report here; \nit turns out to be linear in the highest derivatives, say\n\\[\n\\{\\rho_{,t},v_{,t},\\varepsilon_{,t},\\rho_{,tx},v_{,tx},\\varepsilon_{,tx},\\rho_{,txx},v_{,txx},\\varepsilon_{,txx},v_{,xxxx},\\varepsilon_{,xxxx},\\rho_{,xxxxx}\\},\n\\]\nand cubic in the higher derivatives, say\n\\[\n\\{v_{,xx},\\varepsilon_{,xx},\\rho_{,xxx},v_{,xxx},\\varepsilon_{,xxx},\\rho_{,xxxx}\\}.\n\\]\n\nTo proceed further, let us write the specific entropy as the sum of the equilibrium part defined for homogeneous states and\na negative semidefinite quadratic form (in order to satisfy the principle of maximum entropy at equilibrium) in the gradients appearing in the state space. 
At this stage we do not specify the form of the entropy flux.\n\nThe restrictions imposed by the entropy inequality can be solved and provide the following solution:\n\\begin{equation}\n\\begin{aligned}\\label{entr_gen}\n&s = s_0(\\rho,\\varepsilon)+s_1(\\rho)\\rho_{,x}^2,\\\\\n&q = q_1(\\rho,\\varepsilon)\\varepsilon_{,x}+q_2(\\rho,\\varepsilon)\\rho_{,x}+q_3(\\rho,\\varepsilon)v_{,x},\\\\\n&T = \\rho^2\\left(\\frac{\\partial s_0}{\\partial \\varepsilon}\\right)^{-1}\\left(\\frac{\\partial s_0}{\\partial \\rho}-\\frac{\\partial s_1}{\\partial \\rho}\\rho_{,x}^2-2 s_1\\rho_{,xx}\\right)+\\tau_1(\\rho,\\varepsilon)v_{,x},\n\\end{aligned}\n\\end{equation}\nwhere\n\\begin{equation}\n\\label{constrkdv}\n\\begin{aligned}\n&q_1\\le 0, \\qquad q_2\\ge 0, \\qquad \\tau_1\\ge 0,\\\\\n&q_2\\frac{\\partial^2 s_0}{\\partial\\varepsilon^2}-q_1\\frac{\\partial^2 s_0}{\\partial\\rho\\partial\\varepsilon}=0,\\\\\n&4\\frac{\\partial s_0}{\\partial\\varepsilon}\\tau_1 q_2-q_3^2\\frac{\\partial^2 s_0}{\\partial\\rho\\partial\\varepsilon}\\geq 0,\\\\\n&4\\frac{\\partial s_0}{\\partial\\varepsilon}\\tau_1 q_1-q_3^2\\frac{\\partial^2 s_0}{\\partial\\varepsilon^2}\\leq 0,\n\\end{aligned} \n\\end{equation}\nand $s_1\\leq 0$ in order that the principle of maximum entropy at equilibrium be satisfied.\nFinally, the entropy flux turns out to be\n\\begin{equation}\n\\begin{aligned}\\label{flux_entr}\nJ_s&=\\frac{\\partial s_0}{\\partial\\varepsilon}\\left(q_1\\varepsilon_{,x}+q_2\\rho_{,x}+q_3v_{,x}\\right)+2\\rho^2 s_1\\rho_{,x}v_{,x}\\\\\n&=\\frac{q}{\\theta}+2\\rho^2 s_1\\rho_{,x}v_{,x},\n\\end{aligned}\n\\end{equation}\nwhere $\\theta$, defined as usual as\n\\begin{equation}\n\\frac{1}{\\theta}=\\frac{\\partial s_0}{\\partial\\varepsilon},\n\\end{equation}\nis the absolute temperature.\n\nLet us observe that, in the expression \\eqref{flux_entr} of the entropy flux, the second contribution represents the entropy extra-flux \\cite{Muller}, related to the gradients of density and 
velocity.\nMoreover, a reasoning similar to that in the previous subsection shows that $\\displaystyle q_2=-q_1\\frac{\\partial\\varepsilon}{\\partial\\rho}$ so that, choosing $q_3=0$, we recover the classical Fourier law for the heat flux. Also, the last two inequalities \nin \\eqref{constrkdv} provide relation \\eqref{onlyone}.\nFinally, if we consider the classical case, \\emph{i.e.}, $s = s_0(\\rho,\\varepsilon)$, the stress tensor $T$ is expressed in local form, and the classical constitutive equation for the entropy flux $\\displaystyle J_s=\\frac{q}{\\theta}$ is recovered.\n\nTherefore, in order to obtain a constitutive equation for the stress tensor containing first and second order derivatives of the mass density, we need to assume that $s$ depends on the gradient of $\\rho$, and the entropy flux involves an extra-flux.\nAs a last remark, a stress tensor depending on the gradients of the density can be obtained only if we use the extended\nprocedure for the exploitation of the entropy inequality \\cite{CGOP-miscele-2020}.\n\n\\section{Conclusions}\n\\label{sec:conclusions}\nIn this paper, we discussed the extended Liu procedure in order to investigate the restrictions placed by an entropy inequality\non the constitutive equations of a continuum whose state space contains spatial derivatives of the unknown fields. The analysis is\nperformed first from a purely mathematical viewpoint by considering a system of balance laws sufficiently general to contain\nthe governing equations of continua, and sufficient conditions for the fulfilment of an entropy-like inequality are derived. Then, the results are specialized to two physical cases with first or second order non-local \nconstitutive equations. 
Remarkably, in the application of the procedure we do not modify the energy balance equation with the inclusion of extra-terms (like the interstitial working), and we do not prescribe \\emph{a priori} the form of the entropy flux, whose expression arises as a result of\nthe method; in the applications we considered, the procedure provides an entropy flux decomposed into the classical term and an extra-flux.\n\nWe limited ourselves to one-dimensional models, and in the considered physical applications we were able to solve the\nconstraints imposed by the exploitation of the entropy inequality, thus determining an explicit expression of the constitutive functions\nby assuming an expansion of the specific entropy at first order in the squared gradients of the field variables entering the state space.\n\nThe procedure in principle allows us to fix the constitutive equations (for stress tensor, heat flux, \\ldots) according to experiments, and to determine the form of the entropy flux algorithmically. \nThe procedure requires a huge amount of computation, increasing with the order of non-localities;\nhowever, such computations can be almost automatically carried out by using a Computer Algebra System.\n\nWork is in progress on the application of the extended Liu procedure to multi-dimensional continuous media, where\nother general principles of representation theory \\cite{Smith3} of vectorial and tensorial quantities need to be considered. \n\n\\section*{Acknowledgments}\nThe authors acknowledge partial support by G.N.F.M. of ``Istituto Nazionale di Alta Matematica'' and the University of Messina. The authors gratefully thank Dr. 
Luca Amata for drawing their attention to the paper \\cite{Mishkov}.\n \\medskip\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} diff --git a/data_all_eng_slimpj/shuffled/split2/finalzzbhhd b/data_all_eng_slimpj/shuffled/split2/finalzzbhhd new file mode 100644 index 0000000000000000000000000000000000000000..76ed727034a35cf0491c3fc2df5cf94cde9464c1 --- /dev/null +++ b/data_all_eng_slimpj/shuffled/split2/finalzzbhhd @@ -0,0 +1,5 @@ +{"text":"\\chapter{Introduction} \n\n\n\n\nLet $(M, \\omega)$ be a closed symplectic manifold. The group $\\Symp(M, \\omega)$ of symplectomorphisms, equipped with the standard $C^\\infty$-topology, is an infinite dimensional Fr\u00e9chet Lie group. A fundamental theorem of Banyaga~\\cite{Banyaga-Isomorphism} states that the isomorphism type of $\\Symp(M,\\omega)$ as a discrete group determines the conformal symplectomorphism type of $(M,\\omega)$. However, the ways topological properties of $\\Symp(M,\\omega)$ relate to the geometry of $(M,\\omega)$ remain largely unknown. To measure our level of ignorance, it suffices to observe that the following two questions are almost completely open in all dimensions $2n \\geq 4$:\n\n\\begin{enumerate}[label=\\textbf{Q\\arabic*.}]\n\\item At the most basic level, how does the homotopy type of $\\Symp(M,\\omega)$ depend on $\\omega\\,$? \n\\item Similarly, suppose $(M,\\omega)$ admits a symplectic or Hamiltonian action by a compact Lie group $G$ (possibly finite), viewed here as a Lie subgroup $G\\subset\\Symp(M,\\omega)$. What are the homotopy types of the centralizer $C(G)$ and of the normalizer $N(G)$ of $G$ in $\\Symp(M,\\omega)\\,$? Can we understand the inclusions \n\\[G\\hookrightarrow \\Symp(M,\\omega)\\quad \\text{~and~}\\quad C(G)\\hookrightarrow N(G)\\hookrightarrow \\Symp(M,\\omega)\\]\nfrom a homotopy theoretic point of view?\n\n\\end{enumerate}\n\n\n\nRegarding the first question, most efforts have been devoted to the study of symplectomorphism groups of rational $4$-manifolds. 
Following the seminal work of M. Gromov~\\cite{Gr} who showed that the group of compactly supported symplectomorphisms of $\\mathbb{R}^4$ is contractible, the homotopical properties of the group of symplectomorphisms of $\\mathbb{C}P^2$, $S^2 \\times S^2$ and of the $k$-fold symplectic blow-ups $\\mathbb{C}P^2\\#k\\overline{\\mathbb{C}P}^2$, $k\\leq 5$, were studied in several papers such as \\cite{abreu}, \\cite{AGK}, \\cite{MR1775741}, \\cite{P-com}, \\cite{AG}, \\cite{AP}, \\cite{AE}, and~\\cite{LiLiWu2022}. In particular, for $\\mathbb{C}P^2$, $S^2 \\times S^2$, and $\\mathbb{C}P^2\\#k\\overline{\\mathbb{C}P^2}$, $k\\leq 3$, the rational homotopy type of $\\Symp(M,\\omega)$ can be described precisely in terms of the cohomology class $[\\omega]$. For $k\\geq 4$, partial results are known, mostly for $\\pi_0(\\Symp(M,\\omega))$ and $\\pi_1(\\Symp(M,\\omega))$.\nIt is worth pointing out that all these results rely on special properties of $J$-holomorphic curves in symplectic $4$-manifolds and, as such, do not generalize readily to higher dimensions.\\\\ \n\nThe second question can be partially answered in the case of Hamiltonian toric actions using moment map techniques, see~\\cite{P-MaxTori}. If a torus $\\mathbb{T}^n\\subset\\Ham(M^{2n},\\omega)$ acts effectively on $(M^{2n},\\omega)$ with moment map $\\mu:M^{2n}\\to\\mathbb{R}^n$, then \n\n\\begin{enumerate}\n\\item the centralizer $C(\\mathbb{T}^n)$ is equal to the group of all symplectomorphisms $\\phi$ that preserve the moment map, that is, such that $\\mu \\circ \\phi=\\mu$.\n\\item $C(\\mathbb{T}^n)$ is a maximal torus in $\\Symp(M^{2n},\\omega)$, that is, a maximal, connected, and abelian subgroup of $\\Symp(M^{2n},\\omega)$. In particular, since toric manifolds are simply-connected, $C(\\mathbb{T}^n) \\subset\\Ham(M^{2n},\\omega)$. \n\\item $C(\\mathbb{T}^n)$ deformation retracts onto $\\mathbb{T}^{n}$. 
In particular, the homotopy type of the centralizer of a toric action is independent of the action.\n\\item Moreover, the Weyl group $W(\\mathbb{T}^n):=N(\\mathbb{T}^n)\/C(\\mathbb{T}^n)$ is always finite\\footnote{Furthermore, as for maximal tori in compact Lie groups, it can be shown that the number of conjugacy classes of toric centralizers is finite, and that each $C(\\mathbb{T}^n)$ is flat and totally geodesic in $\\Symp(M^{2n},\\omega)$ for the $L^2$ metric.}.\n\\end{enumerate}\nOn the other hand, even for toric actions, very little is known about the homotopy theoretic properties of the inclusions $G\\hookrightarrow\\Symp(M,\\omega)$ or $C(G)\\hookrightarrow N(G)\\hookrightarrow\\Symp(M,\\omega)$. Two noteworthy exceptions are i) the works of McDuff-Slimowitz~\\cite{McDS} and McDuff-Tolman~\\cite{McDT}, who show that under some rather mild conditions, Hamiltonian $G$ actions induce injective maps $\\pi_1(G)\\hookrightarrow\\pi_1(\\Ham(M^{2n},\\omega))$, and ii) the results on symplectomorphism groups of rational surfaces mentioned above that, as a byproduct, allow one to understand the subrings of $\\pi_*(\\Symp(M,\\omega))$ or $H_*(\\Symp(M,\\omega))$ that the various Hamiltonian actions on $(M,\\omega)$ generate.\\\\\n\n\n\nIn this paper we combine pseudo-holomorphic curve techniques with moment map techniques to determine the homotopy type of equivariant symplectomorphisms of $S^2 \\times S^2$ and $\\CP^2\\# \\overline{\\CP^2}$ in the presence of a Hamiltonian circle action. Our main results are Theorems~\\ref{full_homo} and~\\ref{full_homo_CCC}, which give the full homotopy type of the centralizer $C(S^1)\\subset\\Symp(M,\\omega)$ for all choices of symplectic forms and of Hamiltonian circle actions on these two $4$-manifolds. Contrary to the toric case, the homotopy type of the centralizer $C(S^1)$ is not constant and depends, essentially, on whether the circle action extends to a single toric action or to two distinct toric actions. 
In the former case, apart from a few exceptional circle actions that must be treated separately, $C(S^1)$ retracts onto the unique torus $\\mathbb{T}^2$ the circle extends to. In the latter case, the two toric actions $\\mathbb{T}^2_1$, $\\mathbb{T}^2_2$ that extend the circle action do not commute, even up to homotopy. We show that the Pontryagin products of the generators of $H_1(\\mathbb{T}^2_1)$ and $H_1(\\mathbb{T}^2_2)$ generate a subalgebra $P^{alg}\\subset H_*(C(S^1))$ that contains classes of arbitrarily large degrees. In particular, $C(S^1)$ does not have the homotopy type of a finite dimensional $H$-space. Moreover, looking at the action of the centralizer $C(S^1)$ on the space of invariant almost complex structures, we prove that there is an equality of Pontryagin rings $P^{alg}= H_*(C(S^1))$. This homology equivalence implies that $C(S^1)$ has the homotopy type of a pushout $\\mathbb{T}^2_1 \\leftarrow S^1\\to \\mathbb{T}^2_2$ in the category of topological groups. Finally, when viewed as a bare topological space, we show that $C(S^1)$ is homotopy equivalent to the product $\\Omega S^3 \\times S^1 \\times S^1 \\times S^1$.\\\\\n\nIt seems likely that our results on centralizers of Hamiltonian circle actions could be proven using moment map techniques alone. The main advantage of introducing pseudo-holomorphic curve techniques is that our setting generalises to actions of any compact, possibly finite, abelian group $A\\subset\\Ham(M,\\omega)$. For instance, in the companion paper~\\cite{Zn_symp}, we use the same framework to determine the homotopy type of the centralizers of most finite cyclic subgroups $\\mathbb{Z}_n$ acting on $(S^2 \\times S^2,\\omega_\\lambda)$ and $(\\CP^2\\# \\overline{\\CP^2},\\omega_\\lambda)$ by Hamiltonian diffeomorphisms.\\\\\n\nIn an effort to make the paper understandable to both equivariant geometers and symplectic topologists, we include more details than would be needed in a document aimed at either audience alone. 
The paper is structured as follows:\\\\\n\nIn Chapter 2, we recall T. Delzant's equivariant classification of toric actions in terms of moment polytopes, and Y. Karshon's equivariant classification of Hamiltonian circle actions on $4$-manifolds in terms of labelled graphs. We then determine all possible toric extensions, up to equivalence, of an arbitrary Hamiltonian circle action on $S^2 \\times S^2$ and $\\CP^2\\# \\overline{\\CP^2}$.\\\\ \n\nThe crux of the paper lies in Chapters 3 and 4, in which we adapt the framework of \\cite{MR1775741} to study groups of equivariant symplectomorphisms in the presence of an $S^1$ action. In particular, we show that the space of invariant almost complex structures $\\mathcal{J}^{S^1}_{\\om_\\lambda}$ decomposes into disjoint strata, each of them homotopy equivalent to an orbit of the centralizer, with stabilizer homotopy equivalent to the group of $S^1$ equivariant K\\\"ahler isometries (see Theorems~\\ref{homogenous} and \\ref{homog}). In Chapter 3, we show that the number of invariant strata in the decomposition corresponds to the number of toric extensions of the given circle action (Proposition~\\ref{prop:ToricExtensionCorrespondance}). In particular, we prove that $\\mathcal{J}^{S^1}_{\\om_\\lambda}$ decomposes into either one or two strata, and that the latter case occurs only for an exceptional family of circle actions on $S^2 \\times S^2$ and $\\CP^2\\# \\overline{\\CP^2}$ (Corollaries~\\ref{cor:lambda=1_IntersectingOnlyOneStratum}, \\ref{cor:IntersectingTwoStrata}, and \\ref{cor:IntersectingOnlyOneStratum}). In Chapter 4, using techniques similar to the ones developed in~\\cite{AG}, we compute the homotopy type of $\\Symp^{S^1}(S^2 \\times S^2,\\omega_\\lambda)$ for all Hamiltonian circle actions on $(S^2 \\times S^2,\\omega_\\lambda)$. \n\\\\\n\nIn Chapter 5, we prove $S^1$ equivariant analogues of some technical lemmas of~\\cite{AGK} involving deformation theory. 
We use these results to prove that, in the case where the circle action admits two distinct toric extensions, the stratum with positive codimension in $\\mathcal{J}^{S^1}_{\\om_\\lambda}$ is always of codimension two. \n\\\\\n\nFinally, in Chapter 6, we carry out a similar analysis on the manifold $\\CP^2\\# \\overline{\\CP^2}$ and obtain the homotopy type of $\\Symp^{S^1}(\\CP^2\\# \\overline{\\CP^2},\\omega_\\lambda)$ for all Hamiltonian circle actions on $(\\CP^2\\# \\overline{\\CP^2},\\omega_\\lambda)$. \n\\\\\n\n\n\n\n\n\n\\chapter{Hamiltonian torus actions on ruled $4$-manifolds}\n\n\\section{Preliminaries on Hamiltonian actions}\nLet $G$ be a compact Lie group, possibly finite, acting effectively and symplectically on a symplectic manifold $(M,\\omega)$. Every such action $\\rho:G\\times M\\to M$ induces an injective homomorphism $G\\hookrightarrow\\Symp(M,\\omega)$ and, in particular, defines a subgroup $G\\subset\\Symp(M,\\omega)$. Let $\\mathfrak{g}$ denote the Lie algebra of $G$ and $\\mathfrak{g}^*$ its dual. Given $Y\\in \\mathfrak{g}$, we denote by $\\overline{Y}$ the corresponding fundamental vector field on~$M$. \n\n\\begin{defn}\\label{def:HamiltonianActions} The action of $G$ is called Hamiltonian if\n\\begin{itemize}\n\\item $G$ is finite and the induced homomorphism $G\\to\\Symp(M,\\omega)$ takes values in the subgroup of Hamiltonian diffeomorphisms $\\Ham(M,\\omega)$, or\n\\item $\\dim G\\geq 1$ and there exists a moment map, that is, a smooth map $\\mu:M \\to \\mathfrak{g}^*$ such that \n\\begin{enumerate}\n \\item $d\\mu_p(X)(Y) = \\omega_p(X,\\overline{Y}_p)$ for all $p\\in M$, $X \\in T_pM$, and $Y \\in \\mathfrak{g}$, and \n \\item the map $\\mu$ is equivariant with respect to the $G$ action on $M$ and the coadjoint action on~$\\mathfrak{g}^*$.\n\\end{enumerate}\nIn particular, $G\\subset\\Ham(M,\\omega)\\subset\\Symp(M,\\omega)$. 
\n\\end{itemize} \n\\end{defn}\n\n\\begin{remark}\\label{rmk:UniquenessMomentMap}\nWe note that in this definition, the moment map $\\mu:M\\to\\mathfrak{g}^{*}$ is an auxiliary structure whose sole existence is required. There is no claim about its unicity. For instance, the moment map of a Hamiltonian torus action is only defined up to adding a constant $c\\in\\mathfrak{t}^*$. However, there are situations where the choice of a specific moment map is needed. For this reason, we will always distinguish between the structure $(M,\\omega,\\rho)$ given by a Hamiltonian action alone, from the structure $(M,\\omega,\\rho,\\mu)$ in which a moment map $\\mu$ is chosen.\n\\end{remark}\n\n\\begin{defn}\\label{defn:EquivalenceActions}\nWe will say that two actions $\\rho_{i}:G\\to\\Symp(M,\\omega)$ are symplectically equivalent if the corresponding subgroups $\\rho_{i}(G)$ belong to the same conjugacy class with respect to the action of $\\Symp(M,\\omega)$.\n\\end{defn}\n\nGiven a symplectic action $\\rho:G\\times M\\to M$, let $C(G)$ be the centralizer of the corresponding subgroup $G\\subset\\Symp(M,\\omega)$, that is,\n\\[C(G)=\\{\\phi\\in\\Symp(M,\\omega)~|~\\phi\\circ g\\circ \\phi^{-1} = g,~\\forall g\\in G\\}\\]\nand let $N(G)$ be its normalizer, namely,\n\\[N(G)=\\{\\phi\\in\\Symp(M,\\omega)~|~\\phi\\circ G\\circ \\phi^{-1} = G\\}\\]\nClearly, equivalent actions have isomorphic centralizers and normalizers. \n\nIn this paper, we will investigate the homotopy type of the centralizers $C(T)$ of Hamiltonian torus actions on some $4$-manifolds. We start by giving a simple characterization of symplectomorphisms commuting with a Hamiltonian torus action.\n\\begin{lemma}\\label{lemma:CharacterizationCentralizer}\nLet $(M, \\omega)$ be a compact symplectic manifold equipped with a Hamiltonian torus action $\\rho:T\\times M\\to M$ with moment map $\\mu$. 
Then $\\phi \\in C(T)$ if, and only if, $\\mu \\circ \\phi = \\mu$ and $\\phi \\in \\Symp(M,\\omega)$.\n\\end{lemma}\n\\begin{proof}\n$(\\Leftarrow)$ Suppose $\\phi \\in \\Symp(M,\\omega)$ satisfies $\\mu \\circ \\phi = \\mu$. Let $X \\in\\mathfrak{t}$ and let $\\overline{X}$ denote the associated fundamental vector field. We then have\n\\[\\omega(d\\phi^{-1}(\\overline{X}),Y) = \\phi^*\\omega(d\\phi^{-1}(\\overline{X}), Y) = \\omega(\\overline{X}, d\\phi(Y)) = d\\mu(d\\phi(Y))=d\\mu(Y) = \\omega(\\overline{X}, Y)\\]\nfor any vector field $Y$, which implies $d\\phi(\\overline{X}) = \\overline{X}$ for all $X \\in \\mathfrak{t}$. Consequently, $\\phi$ commutes with the action.\n\\\\\n\n$(\\Rightarrow)$ If the action $\\rho$ has moment map $\\mu$, then $\\mu \\circ \\phi^{-1}$ is a moment map for the conjugate action $\\phi^{-1} \\circ \\rho \\circ \\phi$. If $\\phi^{-1} \\circ \\rho \\circ \\phi = \\rho$, this implies $\\mu \\circ \\phi^{-1} = \\mu + C$ for some constant $C\\in\\mathfrak{t}^*$. Since the moment images $\\mu(M)$ and $(\\mu\\circ \\phi^{-1})(M)$ are compact and coincide, this constant must be $0$. \n\\end{proof}\n\nThe image of the moment map of a torus action has special convexity properties.\n\n\\begin{thm}(Atiyah-Guillemin-Sternberg)\nLet $(M,\\omega)$ be a compact symplectic manifold on which a torus $\\mathbb{T}^d$ is acting with moment map $\\mu$. Then the image of $\\mu$ is a convex polytope in $\\mathfrak{t}^*$ whose vertices are images of the fixed points of the torus action.\n\\end{thm}\n\nAs usual, we call an effective Hamiltonian torus action \\emph{toric} if the dimension of the acting torus is half the dimension of the manifold $M$. By a theorem of Delzant, the moment map image $\\Delta:=\\mu(M)$ of a toric action determines the symplectic manifold $(M,\\omega)$, the action $\\rho$, and the moment map $\\mu$ up to equivariant symplectomorphisms. 
\n\\begin{thm}(Delzant \\cite{Delzant}) \nLet $(M_{i},\\omega_{i},\\rho_{i},\\mu_{i})$, $i=1,2$, be two toric manifolds of dimension $2n$ with moment maps $\\mu_{i}:M_{i}\\to\\mathfrak{t}^{*}$. If the two moment polytopes $\\mu_{i}(M_{i})$ coincide, then there exists a $\\mathbb{T}^{n}$-equivariant symplectomorphism $\\phi:(M_{1},\\omega_{1})\\to (M_{2},\\omega_{2})$ such that $\\phi^{*}\\mu_{2}=\\mu_{1}$. Conversely, the moment map images of two equivariantly symplectomorphic toric structures $(M_{i},\\omega_{i},\\rho_{i},\\mu_{i})$, $i=1,2$, coincide.\n\\end{thm}\nChoose an identification $\\mathfrak{t}\\simeq\\mathbb{R}^{n}$ such that $\\ker(\\exp:\\mathfrak{t}\\to \\mathbb{T}^{n})\\simeq \\mathbb{Z}^{n}$. The moment polytopes of toric actions on manifolds of dimension $2n$ -- called Delzant polytopes of dimension $n$ -- are completely characterized by the following three properties, see~\\cite{Delzant}: \n\\begin{itemize}\n\\item (simplicity) there are exactly $n$ edges meeting at each vertex; \n\\item (rationality) the edges meeting at any vertex $p$ are of the form $ p + t u_{i} $, $ t \\in [0,\\ell_i] $, where $ \\ell_i \\in \\mathbb{R}$ and $ u_{i} \\in \\mathbb{Z}^{n} $;\n\\item (smoothness) for each vertex $p$, the corresponding vectors $u_{1},\\ldots,u_{n}$ can be chosen to be a basis of $\\mathbb{Z}^{n}$ over $\\mathbb{Z}$.\n\\end{itemize}\nThis provides a purely combinatorial description of toric structures. 
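The three conditions above can be checked mechanically on a candidate polygon. The following sketch (an illustration only, not part of the classification; it assumes integer vertex coordinates, so rationality is automatic and simplicity holds for any polygon in dimension 2) verifies smoothness for a trapezoid of the Hirzebruch type considered below, with vertices $(0,0)$, $(3,0)$, $(1,1)$, $(0,1)$:

```python
from math import gcd

def primitive(v):
    # scale a non-zero integer vector to a primitive one
    g = gcd(abs(v[0]), abs(v[1]))
    return (v[0] // g, v[1] // g)

def is_delzant_2d(vertices):
    """Check the Delzant smoothness condition at each vertex of a
    polygon given as a cyclically ordered list of integer vertices:
    the primitive edge directions must form a Z-basis of Z^2."""
    n = len(vertices)
    for i, p in enumerate(vertices):
        q, r = vertices[i - 1], vertices[(i + 1) % n]
        u1 = primitive((q[0] - p[0], q[1] - p[1]))
        u2 = primitive((r[0] - p[0], r[1] - p[1]))
        if abs(u1[0] * u2[1] - u1[1] * u2[0]) != 1:  # not a Z-basis
            return False
    return True

# Hirzebruch-type trapezoid (vertices in cyclic order): Delzant
print(is_delzant_2d([(0, 0), (3, 0), (1, 1), (0, 1)]))  # True
# A triangle failing smoothness at the vertex (0, 1)
print(is_delzant_2d([(0, 0), (2, 0), (0, 1)]))          # False
```

The determinant test is exactly the smoothness condition: the chosen primitive vectors $u_1$, $u_2$ generate $\mathbb{Z}^2$ iff their determinant is $\pm 1$.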
Finally, if we are only interested in the subgroup $\\mathbb{T}^n\\subset\\Ham(M^{2n},\\omega)$ associated to a toric action, that is, if we disregard both the pa\\-ra\\-me\\-tri\\-za\\-tion $\\mathbb{T}^n\\hookrightarrow\\Ham(M^{2n},\\omega)$ and the moment map, Delzant's classification theorem yields a bijection\n\\begin{gather*}\n\\{\\text{Inequivalent toric actions on~} 2n\\text{-manifolds}\\}\\\\\n\\updownarrow\\\\\n\\{\\text{Delzant polytopes in~}\\mathbb{R}^n \\text{~up to~} \\AGL(n;\\mathbb{Z}) \\text{~action}\\}\n\\end{gather*}\nAs the next proposition shows, the centralizer and normalizer of symplectic toric actions are easy to describe in terms of moment maps. In particular, the homotopy type of $C(T)$ does not depend on the toric structure.\n\\begin{prop}\\label{prop:CentralizersToricActions}\nConsider an effective toric action $\\rho:\\mathbb{T}^{n}\\to\\Symp(M^{2n},\\omega)$\nwith moment map $\\mu:M\\to\\mathfrak{t}^{*}$. Denote by $T$ the corresponding torus\nin $\\Symp(M,\\omega)$, by $C(T)$ its centralizer, by $N(T)$ its normalizer, and by $W(T) =N(T)\/C(T)$ its Weyl group.\n\\begin{enumerate}\n\\item The centralizer $C(T)$ deformation retracts onto $\\mathbb{T}^{n}$. In particular, $C(T)\\subset\\Ham(M,\\omega)$.\n\\item The normalizer $N(T)$ is equal to the group of all symplectomorphisms $\\psi$ such that $\\mu\\circ\\psi = \\Lambda\\circ\\mu$ for some $\\Lambda\\in\\AGL(n;\\mathbb{Z})$. In particular, the Weyl group $W(T)$ is finite.\n\\end{enumerate}\n\\end{prop}\n\\begin{proof}\nThis is a slightly stronger version of Proposition 3.21 in~\\cite{P-MaxTori}, and only the first statement requires some additional justification. By Lemma~\\ref{lemma:CharacterizationCentralizer}, $\\phi\\in C(T)$ iff $\\phi\\in\\Symp(M,\\omega)$ and $\\mu=\\mu\\circ\\phi$. Since the action is toric, each fiber of the moment map consists of a single orbit. 
It follows that there is a unique map $g_{\\phi}:\\mu(M)=\\Delta\\to\\mathbb{T}^{n}$ such that $\\phi$ can be written as\n\\[\\phi(m) = g_{\\phi}(\\mu(m))\\cdot m\\]\nPick a base point $b\\in\\Delta$ and consider the evaluation fibration\n\\begin{align*}\nC_{b}(T)\\to C(T)&\\to \\mathbb{T}^{n}\\\\\n\\phantom{C_{b}(T)\\to} \\phi&\\mapsto g_{\\phi}(b) \n\\end{align*} \nwhose fiber is the subgroup\n\\[C_{b}(T)=\\{\\phi\\in C(T)~|~\\phi=\\id\\text{~on~}\\mu^{-1}(b)\\}\\]\nConsider $\\phi\\in C_{b}(T)$. Since $\\Delta$ is contractible, the function $g_{\\phi}$ has a unique lift $\\tilde{g}_{\\phi}:\\Delta\\to\\mathfrak{t}$ such that $\\tilde{g}_{\\phi}(b)=0$, and $g_{\\phi}=\\exp\\circ\\tilde{g}_{\\phi}$. Setting $\\tilde{g}_{\\phi,s}=s\\tilde{g}_{\\phi}$ for $s\\in[0,1]$, we obtain a retraction of $C_{b}(T)$ onto $\\{\\id\\}$ through diffeomorphisms $\\phi_{s}$ leaving $\\mu$ invariant and such that $\\phi_{s}=\\id$ on $\\mu^{-1}(b)$. Applying an equivariant Moser isotopy to the family $\\phi_{s}$, this retraction is seen to be homotopic to a retraction in $C_{b}(T)$. This shows that $C_{b}(T)$ is contractible, which in turn implies the result.\n\\end{proof}\nFor Hamiltonian torus actions $T^{d}\\to\\Ham(M^{2n},\\omega)$ for which the torus $T^{d}$ has dimension $d k$, then we can endow $\\mathbb{C}P^1 \\times \\mathbb{C}P^2$ with the symplectic form $(\\lambda -k) \\sigma_1 \\oplus \\sigma_2$, where $\\sigma_1$ is the standard symplectic form on $\\mathbb{C}P^1$ with area 1 and $\\sigma_2$ is the standard symplectic form on $\\mathbb{C}P^2$ such that $\\sigma_2(L) =1$, where $L$ is the class of the line in $\\mathbb{C}P^2$. Restricting this symplectic form to $W_m$ makes it a symplectic manifold. Similarly, when $m=2k+1$ is odd, choose $\\lambda > k+1$; then we can analogously endow the ambient space with the form $(\\lambda - (k+1)) \\sigma_1 \\oplus \\sigma_2$. 
With these choices of symplectic forms, the Lalonde-McDuff classification theorem~\\ref{L-McD-Classification} implies that $W_m$ is symplectomorphic to $(S^2 \\times S^2,\\omega_\\lambda)$ if $m$ is even and to $(\\mathbb{C}P^2 \\# \\overline{\\mathbb{C}P^2},\\omega_\\lambda)$ when $m$ is odd.\\\\\n\nGiven an integer $m\\geq 0$, the torus $\\mathbb{T}^2$ acts on $\\mathbb{C}P^1 \\times \\mathbb{C}P^2$ by setting\n\\[\n \\left(u,v\\right) \\cdot \\left(\\left[x_1,x_2\\right],\\left[y_1,y_2, y_3\\right]\\right) = \\left(\\left[ux_1,x_2\\right],\\left[u^my_1,y_2,vy_3\\right]\\right)\n\\]\nThis action leaves $W_m$ invariant and preserves both the complex and the symplectic structures. Its restriction to $W_m$ defines a toric action that we denote $\\mathbb{T}^2_m$. \n\nWhen $m$ is even, the image of the moment map is the polytope of Figure~\\ref{hirz}\n\\begin{figure}[H]\n\\centering \n\\begin{tikzpicture}\n\\node[left] at (0,2) {$Q=(0,1)$};\n\\node[left] at (0,0) {$P=(0,0)$};\n\\node[right] at (4,2) {$R= (\\lambda - \\frac{m}{2} ,1)$};\n\\node[right] at (6,0) {$S=(\\lambda + \\frac{m}{2} ,0)$};\n\\node[above] at (2,2) {$D_m=B-\\frac{m}{2}F$};\n\\node[right] at (5.15,1) {$F$};\n\\node[left] at (0,1) {$F$};\n\\node[below] at (3,0) {$B+ \\frac{m}{2}F$};\n\\draw (0,2) -- (4,2) ;\n\\draw (0,0) -- (0,2) ;\n\\draw (0,0) -- (6,0) ;\n\\draw (4,2) -- (6,0) ;\n\\end{tikzpicture}\n\\caption{Even Hirzebruch polygon}\n\\label{hirz}\n\\end{figure}\n\n\\noindent where the labels along the edges refer to the homology classes of the $\\mathbb{T}^2$ invariant spheres in $S^2 \\times S^2$, and where the vertices $P$,$Q$,$R$,$S$ are the fixed points of the torus action. 
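As a quick consistency check on the labelling, the horizontal edge lengths of the trapezoid equal the $\omega_\lambda$-areas of the classes written along the edges, using $\omega_\lambda(B)=\lambda$ and $\omega_\lambda(F)=1$. A minimal numerical sketch, with sample values $\lambda=3$, $m=2$ chosen only for illustration:

```python
lam, m = 3.0, 2      # sample values: m even, lambda > m/2
# Vertices of the even Hirzebruch polygon, as in the figure
P, Q = (0.0, 0.0), (0.0, 1.0)
R, S = (lam - m / 2, 1.0), (lam + m / 2, 0.0)

def area(b, f):
    # omega_lambda(b*B + f*F), with omega(B) = lam and omega(F) = 1
    return b * lam + f

assert R[0] - Q[0] == area(1, -m / 2)  # top edge, class B - (m/2)F
assert S[0] - P[0] == area(1, +m / 2)  # bottom edge, class B + (m/2)F
assert Q[1] - P[1] == area(0, 1)       # left edge, class F
print(R[0] - Q[0], S[0] - P[0])        # 2.0 4.0
```

The same bookkeeping works in the odd case after replacing $m/2$ by $(m\pm1)/2$ as in the odd Hirzebruch polygon.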
\nSimilarly, when $m$ is odd, the moment map image is given in Figure~\\ref{fig:OddHirzebruch}\n\\begin{figure}[H]\n\\centering \n\\label{hirz0}\n\\begin{tikzpicture}\n\\node[left] at (0,2) {$Q=(0,1)$};\n\\node[left] at (0,0) {$P=(0,0)$};\n\\node[right] at (4,2) {$R= (\\lambda - \\frac{m+1}{2} ,1)$};\n\\node[right] at (6,0) {$S=(\\lambda + \\frac{m-1}{2} ,0)$};\n\\node[above] at (2,2) {$D_m=B-\\frac{m+1}{2}F$};\n\\node[right] at (5.15,1) {$F$};\n\\node[left] at (0,1) {$F$};\n\\node[below] at (3,0) {$B+ \\frac{m-1}{2}F$};\n\\draw (0,2) -- (4,2) ;\n\\draw (0,0) -- (0,2) ;\n\\draw (0,0) -- (6,0) ;\n\\draw (4,2) -- (6,0) ;\n\\end{tikzpicture}\n \\caption{Odd Hirzebruch polygon}\\label{fig:OddHirzebruch}\n\\end{figure}\n\\noindent where $B$ now refers to the homology class of a line $L$ in $\\mathbb{C}P^2 \\# \\overline{\\mathbb{C}P^2}$, $E$ is the class of the exceptional divisor, and $F$ is the fiber class $L - E$.\\\\\n\nWe define the zero-section $s_0$ to be\n\\begin{align*}\n s_0: \\mathbb{C}P^1 &\\to W_m \\\\\n [x_1;x_2] &\\mapsto \\{[x_1,x_2], [0;0;1]\\}\n\\end{align*}\nand the section at infinity $s_\\infty$ to be \n\\begin{align*}\n s_\\infty: \\mathbb{C}P^1 &\\to W_m \\\\\n [x_1;x_2] &\\mapsto \\{[x_1,x_2], [x^m_1;x^m_2;0]\\}\n\\end{align*}\nThe image of $s_0$ is an invariant and holomorphic sphere homologous to either $D_m:=B-\\frac{m}{2}F$ in $S^2 \\times S^2$ or to $D_m:=B- \\frac{m+1}{2}F$ in $\\CP^2\\# \\overline{\\CP^2}$, depending on the parity of $m$. Similarly, $s_\\infty$ defines an invariant and holomorphic sphere that represents either $B + \\frac{m}{2}F$ in $S^2 \\times S^2$ or $B+\\frac{m+1}{2}F$ in $\\CP^2\\# \\overline{\\CP^2}$. 
Finally, the homology class $F$ can be represented by an invariant fibre such as $\\{[1,0], [y_1,0,y_3]\\}$.\\\\\n\nIt is well known (see~\\cite{Audin}) that a Delzant polygon $\\Delta\\subset\\mathbb{R}^{2}$ with $e\\geq 3$ edges defines a toric $4$-manifold $M_{\\Delta}$ whose second Betti number is $b_{2}(M_{\\Delta})=e-2$. It follows that the moment polytope of any toric action on either $S^2 \\times S^2$ or $\\CP^2\\# \\overline{\\CP^2}$ is a quadrilateral. It is easy to see that any Delzant quadrilateral is equivalent to one of the above Hirzebruch trapezoids up to $\\AGL(2,\\mathbb{Z})$ action and up to rescaling. It follows from Delzant's classification that any toric action on $S^2 \\times S^2$ and $\\CP^2\\# \\overline{\\CP^2}$ is equivalent to an action of the above form. In particular, the equivalence classes of toric actions on $S^2 \\times S^2$ and $\\CP^2\\# \\overline{\\CP^2}$ are completely characterized by the existence of invariant spheres in specific homology classes.\n\n\n\n\\begin{lemma}\\label{lemma_torus_action_-vecurve}\nUp to equivalence, the toric action $\\mathbb{T}^2_m$, $m\\geq 1$, is characterised by the existence of an invariant, embedded, symplectic sphere $C_m$ with self intersection $-m$. The toric action $\\mathbb{T}^2_{0}$ is characterized by the existence of invariant, embedded, symplectic spheres representing the classes $B$ and~$F$.\\qed\n\\end{lemma}\n\n\n\\begin{lemma}\\label{lemma_number_of_torus_actions}\nWrite $\\lambda=\\ell+\\delta$ with $\\ell$ an integer and $0<\\delta\\leq 1$. 
Then, up to symplectomorphisms and reparametrizations, \n\\begin{itemize}\n\\item if $\\lambda \\geq 1$, there are exactly $\\ell+1$ inequivalent toric actions on $(S^2 \\times S^2,\\omega_\\lambda)$ given by the even Hirzebruch actions $\\mathbb{T}^2_{2k}$ with $0\\leq k\\leq \\ell$, and\n\\item if $\\lambda >1$, there are exactly $\\ell$ inequivalent toric actions on $(\\CP^2\\# \\overline{\\CP^2},\\omega_\\lambda)$ given by the odd Hirzebruch actions $\\mathbb{T}^2_{2k+1}$ with $0\\leq k\\leq \\ell-1$.\n\\end{itemize}\n\\end{lemma}\n\n\\begin{proof}\nWrite $m=2k$ or $m=2k+1$ with $k\\geq 0$. As seen above, any toric action on $S^2 \\times S^2$ and $\\CP^2\\# \\overline{\\CP^2}$ is $\\mathbb{T}^2$-equivariantly symplectomorphic to one of the actions $\\mathbb{T}^2_m$. The invariant symplectic sphere $D_k$ in class $B-kF$ on $S^2 \\times S^2$ or $L - (k+1)F$ in $\\CP^2\\# \\overline{\\CP^2}$ must have positive area, that is, \n\\[0<\\omega_\\lambda(B-kF) = \\lambda - k = \\ell+\\delta-k\\quad\\text{~or~}\\quad 0<\\omega_\\lambda(L-(k+1)F) =\\lambda-(k+1) = \\ell+\\delta-(k+1) \\]\nThe result follows.\n\\end{proof}\n\n\\begin{prop}\\label{prop:NormalizersHirzebruch}\nThe Weyl group $W(\\mathbb{T}^2_{m})=N(\\mathbb{T}^2_{m})\/C(\\mathbb{T}^2_{m})$ is isomorphic to\n\\[W(\\mathbb{T}^2_{m})\\simeq\n\\begin{cases}\n\\text{the dihedral group~} D_{8}, & \\text{~when~} m=0 \\text{~and~} \\lambda=1;\\\\\n\\text{the dihedral group~} D_{2}, & \\text{~when~} m=0 \\text{~and~} \\lambda>1;\\\\\n\\text{the dihedral group~} D_{1}\\simeq\\mathbb{Z}_{2},& \\text{~when~} m\\geq 1.\n\\end{cases}\\]\n\\end{prop}\n\\begin{proof}\nThis follows directly from Proposition~\\ref{prop:CentralizersToricActions} and from the description of Hirzebruch trapezoids. Indeed, for $m=0$ and $\\lambda=1$, the moment map polygon is a unit square whose symmetries can be realized by elements of $\\AGL(2,\\mathbb{Z})$. 
The same holds for $m=0$ and $\\lambda>1$, with the only difference that the moment polygon is now a rectangle with sides of lengths $1$ and $\\lambda$. Finally, for $m\\geq 1$, if we write $m=2k$ or $m=2k+1$, the only non-trivial element of $\\AGL(2,\\mathbb{Z})$ that leaves the standard Hirzebruch trapezoid invariant is the affine map\n\\[\\Lambda(x) \n=\\begin{pmatrix}-1 & -m\\\\ 0 & 1\\end{pmatrix}x\n+\\begin{pmatrix}\\lambda+k\\\\ 0\\end{pmatrix}\\]\nwhich is an element of order 2.\n\\end{proof}\n\n\\subsection{Hamiltonian circle actions on \\texorpdfstring{$S^2 \\times S^2$ and $\\CP^2\\# \\overline{\\CP^2}$}{S2xS2 and CP2\\#CP2}}\nA list of all possible Hamiltonian circle actions on $S^2 \\times S^2$ or $\\CP^2\\# \\overline{\\CP^2}$ comes easily from an extension theorem due to Y. Karshon.\n\\begin{thm}[Karshon~\\cite{finTori}, Theorem 1]\nAny symplectic $S^1$ action on $(S^2 \\times S^2,\\omega_\\lambda)$ and $(\\CP^2\\# \\overline{\\CP^2},\\omega_\\lambda)$ extends to a Hamiltonian toric action. \n\\end{thm} \n\nConsequently, every symplectic $S^1$ action on $S^2 \\times S^2$ and $\\mathbb{C}P^2 \\# \\overline{\\mathbb{C}P^2}$ is given by an embedding \n\\begin{align*}\n S^1 &\\hookrightarrow \\mathbb{T}^2_m \\\\\n t &\\mapsto (t^a, t^b)\n\\end{align*}\nwhich is itself characterized by a unique triple of numbers $(a,b;m)\\in \\mathbb{Z} \\times \\mathbb{Z} \\times \\mathbb{Z}_{\\geq 0}$. Since we are only interested in effective actions, this translates numerically into the condition $\\gcd(a,b) = 1$. We shall always assume this unless otherwise stated. \n\n\n\nIn order to construct the graphs of the circle actions $S^1(a,b;m)$, we claim that it is enough to compute the isotropy weights at every fixed point. Indeed, given a Hirzebruch trapezoid, the pre-image of a vertex under the moment map is a toric fixed point, and the pre-image of an edge is an invariant embedded two-sphere connecting two fixed points. 
When we view the space as a Hamiltonian $S^1$-space, such a two-sphere is either fixed by the action, or is a $\\mathbb{Z}_k$ sphere, or has trivial global isotropy. These three possibilities are completely determined by the weights of the $S^{1}(a,b;m)$ action at its two fixed points. We can thus construct the graphs starting from the fixed points and adding edges according to the weights. If there is a fixed surface, its area label can be read from the Hirzebruch trapezoid. Finally, the moment map labels can be added starting with the minimal vertex (characterised by having two positive weights), which we label $\\mu=0$, and then using Lemma~\\ref{Lemma:SymplecticArea} to compute the moment labels for the remaining interior fixed points, and for the maximal fixed point (characterised by having two negative weights).\n\nNow, for the Hirzebruch actions $\\mathbb{T}^2_{m}=(S^{1}\\times S^{1})_{m}$, each $S^{1}$ factor defines two weights at each of the four fixed points $P,Q,R,S$. These weights are listed in Table~\\ref{table_weights}. Further, if the $\\mathbb{T}^2_m$ action has weights $\\{\\alpha_1,\\beta_{1}\\}$ and $\\{\\alpha_{2},\\beta_2\\}$ at a fixed point $x$, then the restricted $S^1(a,b;m)$ action has weights\n\\[\\left\\{a\\alpha_1 + b\\alpha_{2}, a\\beta_{1} + b\\beta_2 \\right\\} \\]\nat $x$. 
This gives the weights of the $S^{1}(a,b;m)$ actions at the fixed points $P,Q,R,S$.\n\\begin{table}[h]\n\\begin{center}\n\\begin{tabular}{ |p{2cm}||p{4cm}|p{6cm}| }\n \\hline\n Vertex & Weights for $\\mathbb{T}^2_m$ action & Weights for the $S^1(a,b;m)$ action \\\\\n \\hline\n P & $\\big\\{\\{1,0\\},\\{0,1\\}\\big\\}$ & $\\{a,b\\}$ \\rule{0pt}{10pt}\\\\\n Q & $\\big\\{\\{1,0\\},\\{0,-1\\}\\big\\}$ & $\\{a,-b\\}$ \\rule{0pt}{10pt}\\\\\n R & $\\big\\{\\{-1,m\\},\\{0,-1\\}\\big\\}$ & $\\{-a,am-b\\}$ \\rule{0pt}{10pt}\\\\\n S & $\\big\\{\\{-1,-m\\},\\{0,1\\}\\big\\}$ & $\\{-a,-am+b\\}$ \\rule{0pt}{10pt}\\\\\n \\hline\n\\end{tabular}\n\\end{center}\n\\caption{Weights of $\\mathbb{T}^2_{m}$ and $S^{1}(a,b;m)$ actions}\n\\label{table_weights}\n\\end{table}\n\n\n\nIn Figures~\\ref{fig:GraphsWithFixedSurfaces} and~\\ref{fig:GraphsIsolatedFixedPoints}, we present the graphs of circle actions $S^{1}(a,b;m)$ on $(S^2 \\times S^2,\\omega_\\lambda)$ and $(\\CP^2\\# \\overline{\\CP^2},\\omega_\\lambda)$. As before, we write $m=2k$ or $m=2k+1$, and in order to deal with even and odd Hirzebruch surfaces simultaneously, we introduce the symbol $\\epsilon_{m} := m\\mod 2$. Each label $\\mu$ represents the value of the moment map normalized by setting $\\min \\mu=0$, while $A$ is the area label of a fixed surface. All fixed surfaces have genus 0. Note that when the isotropy label on an edge is 1, the circle action on the corresponding invariant sphere has no isotropy, and we erase that edge from the graph. Note also that the identification of the fixed points with the vertices $P,Q,R,S$ of the Hirzebruch trapezoid is there for convenience only and is not part of the data. \n\n\n\n\nFor actions with fixed surfaces, one of the weights $a$, $b$, or $am-b$ must be zero. Since $\\gcd(a,b)=1$, the possible triples are $(\\pm1,0;m)$, $(0,\\pm1;m)$, and $(\\pm1,\\pm m;m)$.
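The restriction rule above is mechanical, so Table~\ref{table_weights} can be double-checked by a short computation. The following Python sketch (function names are ours, not from the text) reproduces the weights of $S^{1}(a,b;m)$ at $P,Q,R,S$ and detects the triples carrying fixed surfaces:

```python
from math import gcd

def torus_weights(m):
    # Weights of the two circle factors of T^2_m at the fixed points
    # P, Q, R, S, copied from the table: ((alpha1, beta1), (alpha2, beta2)).
    return {
        "P": ((1, 0), (0, 1)),
        "Q": ((1, 0), (0, -1)),
        "R": ((-1, m), (0, -1)),
        "S": ((-1, -m), (0, 1)),
    }

def circle_weights(a, b, m):
    # Restrict along t -> (t^a, t^b): the weight pairs {alpha1, beta1} and
    # {alpha2, beta2} restrict to {a*alpha1 + b*alpha2, a*beta1 + b*beta2}.
    assert gcd(a, b) == 1, "effectiveness requires gcd(a, b) = 1"
    return {
        v: (a * a1 + b * a2, a * b1 + b * b2)
        for v, ((a1, b1), (a2, b2)) in torus_weights(m).items()
    }

def has_fixed_surface(a, b, m):
    # A fixed surface occurs exactly when some weight vanishes,
    # i.e. when a = 0, b = 0, or am - b = 0.
    return any(0 in w for w in circle_weights(a, b, m).values())
```

For instance, `circle_weights(1, 2, 3)` returns $\{a,b\}=\{1,2\}$ at $P$ and $\{-a,am-b\}=\{-1,1\}$ at $R$, matching the table, while `has_fixed_surface` is true precisely for the triples $(\pm1,0;m)$, $(0,\pm1;m)$, and $(\pm1,\pm m;m)$ listed above.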
\n\\begin{figure}[H]\n\\centering\n\\subcaptionbox{Graph of $S^{1}(1,0;m)$\n[.45\\linewidth]\n{\\begin{tikzpicture}[scale=0.85, every node\/.style={scale=0.85}]\n\\draw [fill] (0,0) ellipse (0.3cm and 0.1cm);\n\\draw [fill] (0,1.2) ellipse (0.1cm and 0.1cm);\n\\draw [fill] (0,2.8) ellipse (0.1cm and 0.1cm);\n\\draw (0,1.2) -- (0,2.8);\n\\node[right] at (0,2) {m};\n\\node[right] at (0,1.2) {R};\n\\node[right] at (0,2.8) {S};\n\\node[right] at (0.5,2.8){$\\mu= \\lambda +k$};\n\\node[right] at (0.5,0){$\\mu=0$};\n\\node[left] at (-0.4,0) {$A= 1$};\n\\node[right] at (0.5,1.2) {$\\mu=\\lambda - k - \\epsilon_{m}$};\n\\end{tikzpicture}}\n\n\\subcaptionbox{Graph of $S^{1}(-1,0;m)$\n[.45\\linewidth]\n{\\begin{tikzpicture}[scale=0.85, every node\/.style={scale=0.85}]\n\\draw [fill] (0,0) ellipse (0.3cm and 0.1cm);\n\\draw [fill] (0,-1.2) ellipse (0.1cm and 0.1cm);\n\\draw [fill] (0,-2.8) ellipse (0.1cm and 0.1cm);\n\\draw (0,-1.2) -- (0,-2.8);\n\\node[right] at (0,-2.2) {m};\n\\node[right] at (0,-2.8) {S};\n\\node[right] at (0,-1.2) {R};\n\\node[right] at (0.5,0){$\\mu= \\lambda +k$};\n\\node[right] at (0.5,-2.8){$\\mu=0$};\n\\node[left] at (-0.5,0) {$A= 1$};\n\\node[right] at (0.5,-1.2) {$\\mu=m$};\n\\end{tikzpicture}}\n\n\\subcaptionbox{Graph of $S^{1}(0,1;m)$\n[.45\\linewidth]\n{\\begin{tikzpicture}[scale=0.85, every node\/.style={scale=0.85}]\n\\draw [fill] (0,0) ellipse (0.3cm and 0.1cm); \n\\draw [fill] (0,2.8) ellipse (0.3cm and 0.1cm); \n\\node[left] at (-0.3,2.8) {$\\mu = 1$};\n\\node[left] at (-0.3,0){$\\mu=0$};\n\\node[right] at (0.3,2.8){$A= \\lambda-k-\\epsilon_{m}$};\n\\node[right] at (0.3,0){$A= \\lambda+k$};\n\\node[above] at (0,2.8) {\\rule{0em}{3em}}; \n\\end{tikzpicture}}\n\n\\subcaptionbox{Graph of $S^{1}(0,-1;m)$\n[.45\\linewidth]\n{\\begin{tikzpicture}[scale=0.85, every node\/.style={scale=0.85}]\n\\draw [fill] (0,0) ellipse (0.3cm and 0.1cm); \n\\draw [fill] (0,2.8) ellipse (0.3cm and 0.1cm); \n\\node[left] at (-0.3,2.8) {$\\mu = 1$};\n\\node[left] at 
(-0.3,0){$\\mu=0$};\n\\node[right] at (0.3,2.8){$A= \\lambda+k$};\n\\node[right] at (0.3,0){$A= \\lambda-k-\\epsilon_{m}$};\n\\end{tikzpicture}}\n\\subcaptionbox{Graph of $S^{1}(1,m;m)$\n[.45\\linewidth]\n{\\begin{tikzpicture}[scale=0.85, every node\/.style={scale=0.85}]\n\\draw [fill] (0,0) ellipse (0.3cm and 0.1cm);\n\\draw [fill] (0,-2) ellipse (0.1cm and 0.1cm);\n\\draw [fill] (0,-3.5) ellipse (0.1cm and 0.1cm);\n\\draw (0,-2) -- (0,-3.5);\n\\node[above] at (0,0) {\\rule{0em}{3em}}; \n\\node[right] at (0,-2.75) {m};\n\\node[right] at (0,-3.5) {P};\n\\node[right] at (0,-2) {Q};\n\\node[right] at (0.5,0){$\\mu= \\lambda +k$};\n\\node[right] at (0.5,-3.5){$\\mu=0$};\n\\node[left] at (-0.5,0) {$A= 1$};\n\\node[right] at (0.5,-2) {$\\mu=m$};\n\\end{tikzpicture}}\n\\subcaptionbox{Graph of $S^{1}(-1,-m;m)$\n[.45\\linewidth]\n{\\begin{tikzpicture}[scale=0.85, every node\/.style={scale=0.85}]\n\\draw [fill] (0,-3.5) ellipse (0.3cm and 0.1cm);\n\\draw [fill] (0,-2) ellipse (0.1cm and 0.1cm);\n\\draw [fill] (0,0) ellipse (0.1cm and 0.1cm);\n\\draw (0,0) -- (0,-2);\n\\node[right] at (0,-1) {m};\n\\node[right] at (0,0) {P};\n\\node[right] at (0,-2) {Q};\n\\node[right] at (0.5,0){$\\mu= \\lambda +k$};\n\\node[right] at (0.5,-3.5){$\\mu=0$};\n\\node[left] at (-0.5,-3.5) {$A= 1$};\n\\node[right] at (0.5,-2) {$\\mu=\\lambda - k-\\epsilon_{m}$};\n\\end{tikzpicture}}\n\\caption{Graphs of circle actions with non-isolated fixed points}\\label{fig:GraphsWithFixedSurfaces} \n\\end{figure}\n\n\nFor actions whose fixed points are isolated, none of the weights are zero. 
The graphs then depend on the signs of $a$, $b$, and $am-b$, as these signs determine which fixed point is minimal and which one is maximal.\n\n\\begin{figure}[H]\n\\centering\n~\n\n\\subcaptionbox*{}{} \n\\setcounter{subfigure}{6} \n\\subcaptionbox{When $a>0,~b>0$, and $am-b>0$\\label{fig:a>0|b>0|am-b>0}}\n[.45\\linewidth]\n{\\begin{tikzpicture}[scale=0.9, every node\/.style={scale=0.9}]\n\\draw [fill] (0,0) ellipse (0.1cm and 0.1cm);\n\\draw [fill] (1,-1.5) ellipse (0.1cm and 0.1cm);\n\\draw [fill] (0,-4) ellipse (0.1cm and 0.1cm);\n\\draw [fill] (-1,-2.5) ellipse (0.1cm and 0.1cm);\n\\draw (0,0) -- (1,-1.5);\n\\draw (1,-1.5) -- (-1,-2.5);\n\\draw (0,0) .. controls (-4,-1.5) and (-2,-3.5) .. (0,-4);\n\\draw (-1,-2.5) -- (0,-4);\n\\node[right] at (-0.35,-3.2) {$b$};\n\\node[right] at (- 0.4,-1.8) {$a$};\n\\node[left] at (-2.2,-1.5) {$a$};\n\\node[right] at (0.7,-0.5) {$am-b$};\n\\node[above] at (0,0.2) {$\\mu = a(\\lambda + k)$};\n\\node[right] at (-0.6,-2.5) {$\\mu = b$};\n\\node[right] at (1,-1.8) {$\\mu = b + a(\\lambda - k-\\epsilon_{m}) $};\n\\node[below] at (0, -4.2) {$\\mu = 0$};\n\\node[left] at (-1,-2.5) {Q};\n\\node[left] at (0.9,-1.4) {R};\n\\node[right] at (0,0) {S};\n\\node[right] at (0,-4) {P};\n\\end{tikzpicture}}\n\\subcaptionbox{When $a>0,~b>0$ and $am-b<0$\n}\n[.45\\linewidth]\n{\\begin{tikzpicture}[scale=0.9, every node\/.style={scale=0.9}]\n\\draw [fill] (0,0) ellipse (0.1cm and 0.1cm); \n\\draw [fill] (1,-2) ellipse (0.1cm and 0.1cm); \n\\draw [fill] (0,-4) ellipse (0.1cm and 0.1cm); \n\\draw [fill] (-1,-3) ellipse (0.1cm and 0.1cm); \n\\draw (0,0) -- (1,-2);\n\\draw (-1,-3) -- (0,-4);\n\\draw (0,0) -- (-1,-3) ;\n\\draw (1,-2) -- (0,-4);\n\\node[right] at (-0.9,-3) {Q};\n\\node[left] at (0.9,-2) {S};\n\\node[right] at (0,0) {R};\n\\node[above] at (0,0.2) {$\\mu = b + a (\\lambda -k-\\epsilon_{m})$};\n\\node[right] at (1.3,-2) {$\\mu = a(\\lambda + k) $};\n\\node[left] at (-1.4,-3) {$\\mu = b$};\n\\node[right] at (0,-4) {P};\n\\node[right] at 
(-1,-3.7) {$b$};\n\\node[right] at (0.5, -3) {$a$};\n\\node[left] at (-0.5,-1.5) {$a$};\n\\node[right] at (0.7,-1) {$b-am$};\n\\node[below] at (0,-4.2) {$\\mu = 0$};\n\\end{tikzpicture}}\n\\subcaptionbox{When $a>0,~b<0$\\label{fig:a>0|b<0}}\n[.45\\linewidth]\n{\\begin{tikzpicture}[scale=0.9, every node\/.style={scale=0.9}]\n\\draw [fill] (0,0) ellipse (0.1cm and 0.1cm);\n\\draw [fill] (1,-3) ellipse (0.1cm and 0.1cm);\n\\draw [fill] (0,-4) ellipse (0.1cm and 0.1cm);\n\\draw [fill] (-1,-2) ellipse (0.1cm and 0.1cm);\n\\draw (0,0) -- (1,-3);\n\\draw (-1,-2) -- (0,-4);\n\\draw (0,0) -- (-1,-2) ;\n\\draw (1,-3) -- (0,-4);\n\\node[above] at (0,1) {\\rule{0em}{1em}};\n\\node[above] at (0,0.2) {$\\mu = a(\\lambda +k)-b$};\n\\node[right] at (1.4,-3) {$\\mu = -b $};\n\\node[left] at (-1.4,-2) {$\\mu = a(\\lambda-k-\\epsilon_{m})$};\n\\node[below] at (0,-4.2) {$\\mu = 0$};\n\\node[right] at (-1,-3.2) {$a$};\n\\node[right] at (0.5, -3.6) {$-b$};\n\\node[left] at (-0.7,-1.0) {$am-b$};\n\\node[right] at (0.6,-1.3) {$a$};\n\\node[left] at (0.9,-3) {P};\n\\node[right] at (-1,-2) {R};\n\\node[right] at (0,0) {S};\n\\node[right] at (0,-4) {Q};\n\\end{tikzpicture}}\n\\subcaptionbox{When $a<0,~b>0$\\label{fig:a<0|b>0}}\n[.45\\linewidth]\n{\\begin{tikzpicture}[scale=0.9, every node\/.style={scale=0.9}]\n\\draw [fill] (0,0) ellipse (0.1cm and 0.1cm);\n\\draw [fill] (1,-3) ellipse (0.1cm and 0.1cm);\n\\draw [fill] (0,-4) ellipse (0.1cm and 0.1cm);\n\\draw [fill] (-1,-2) ellipse (0.1cm and 0.1cm);\n\\draw (0,0) -- (1,-3);\n\\draw (-1,-2) -- (0,-4);\n\\draw (0,0) -- (-1,-2) ;\n\\draw (1,-3) -- (0,-4);\n\\node[above] at (0,1) {\\rule{0em}{1em}}; \n\\node[above] at (0,0.2) {$\\mu = b - a (\\lambda +k)$};\n\\node[right] at (1.4,-3) {$\\mu = b-am $};\n\\node[left] at (-1.4,-2) {$\\mu = -a(\\lambda+k)$};\n\\node[below] at (0,-4.2) {$\\mu = 0$};\n\\node[right] at (-1.2,-3.2) {$-a$};\n\\node[right] at (0.5, -3.6) {$b-am$};\n\\node[left] at (-0.7,-1.0) {$b$};\n\\node[right] at (0.6,-1.3) 
{$-a$};\n\\node[left] at (0.9,-3) {R};\n\\node[right] at (-1,-2) {P};\n\\node[right] at (0,0) {Q};\n\\node[right] at (0,-4) {S};\n\\end{tikzpicture}}\n\\subcaptionbox{When $a<0,~b<0$, and $am-b>0$\\label{fig:a<0|b<0|am-b>0}}\n[.45\\linewidth]\n{\\begin{tikzpicture}[scale=0.9, every node\/.style={scale=0.9}]\n\\draw [fill] (0,0) ellipse (0.1cm and 0.1cm);\n\\draw [fill] (1,-3) ellipse (0.1cm and 0.1cm);\n\\draw [fill] (0,-4) ellipse (0.1cm and 0.1cm);\n\\draw [fill] (-1,-2) ellipse (0.1cm and 0.1cm);\n\\draw (0,0) -- (1,-3);\n\\draw (-1,-2) -- (0,-4);\n\\draw (0,0) -- (-1,-2) ;\n\\draw (1,-3) -- (0,-4);\n\\node[above] at (0,1) {\\rule{0em}{1em}}; \n\\node[above] at (0,0.2) {$\\mu = -b - a (\\lambda -k-\\epsilon_{m})$};\n\\node[right] at (1.4,-3) {$\\mu = am-b $};\n\\node[left] at (-1.4,-2) {$\\mu = -a(\\lambda-k-\\epsilon_{m})$};\n\\node[below] at (0,-4.2) {$\\mu = 0$};\n\\node[right] at (-1.2,-3.2) {$-a$};\n\\node[right] at (0.5, -3.6) {$am-b$};\n\\node[left] at (-0.7,-1.0) {$-b$};\n\\node[right] at (0.6,-1.3) {$-a$};\n\\node[left] at (0.9,-3) {S};\n\\node[right] at (-1,-2) {Q};\n\\node[right] at (0,0) {P};\n\\node[right] at (0,-4) {R};\n\\end{tikzpicture}}\n\\subcaptionbox{When $a<0,~b<0$, and $am-b<0$\\label{fig:a<0|b<0|am-b<0}}\n[.45\\linewidth]\n{\\begin{tikzpicture}[scale=0.9, every node\/.style={scale=0.9}]\n\\draw [fill] (0,0) ellipse (0.1cm and 0.1cm);\n\\draw [fill] (1,-1.5) ellipse (0.1cm and 0.1cm);\n\\draw [fill] (0,-4) ellipse (0.1cm and 0.1cm);\n\\draw [fill] (-1,-2.5) ellipse (0.1cm and 0.1cm);\n\\draw (0,0) -- (1,-1.5);\n\\draw (1,-1.5) -- (-1,-2.5);\n\\draw (0,0) .. controls (-4,-1.5) and (-2,-3.5) .. 
(0,-4);\n\\draw (-1,-2.5) -- (0,-4);\n\\node[right] at (-0.35,-3.2) {$b-am$};\n\\node[right] at (- 0.4,-1.8) {$-a$};\n\\node[left] at (-2.2,-1.5) {$-a$};\n\\node[right] at (0.7,-0.5) {$-b$};\n\\node[above] at (0,0.2) {$\\mu = -a(\\lambda + k)$};\n\\node[right] at (-0.6,-2.5) {$\\mu = b-am$};\n\\node[right] at (1,-1.8) {$\\mu = b-a(\\lambda + k-\\epsilon_{m}) $};\n\\node[below] at (0, -4.2) {$\\mu = 0$};\n\\node[right] at (0,0) {P};\n\\node[left] at (0.9,-1.4) {Q};\n\\node[left] at (-1,-2.5) {R};\n\\node[right] at (0,-4) {S};\n\\end{tikzpicture}}\n\\caption{Graphs of circle actions whose fixed points are all isolated}\\label{fig:GraphsIsolatedFixedPoints}\n\\end{figure}\n\n\n\\section{Equivalent circle actions}\n\nA single equivalence class of Hamiltonian circle actions on $S^2 \\times S^2$ or $\\CP^2\\# \\overline{\\CP^2}$ may be represented by more than one triple $(a,b;m)$, and this happens whenever the associated labelled graphs are the same up to the action of $\\AGL(1,\\mathbb{Z})$ on the moment map labels. \n\n\\subsection{Equivalent circle actions in \\texorpdfstring{$\\mathbb{T}^2_{m}$}{Tm}}\nThe action $S^{1}(-a,-b;m)$ is equivalent to $S^{1}(a,b;m)$ since they only differ by a reparametrization of the circle. It follows that in \nFigure~\\ref{fig:GraphsWithFixedSurfaces}, the graphs (A), (C), and (E) are $\\AGL(1,\\mathbb{Z})$-equivalent, respectively, to the graphs (B), (D), and (F). Similarly, the graphs (G), (H), and (I) in Figure~\\ref{fig:GraphsIsolatedFixedPoints} are $\\AGL(1,\\mathbb{Z})$-equivalent, respectively, to the graphs (L), (K), and (J). Other examples of equivalent subcircles in $\\mathbb{T}^2_{m}$ are obtained by letting the normalizer $N(\\mathbb{T}^2_{m})$ act on the circle subgroups of $\\mathbb{T}^2_{m}$. 
For instance, $S^{1}(a,b;m)$ is conjugate to $S^{1}(-a,b-am;m)$ by the adjoint action of the element $\\Lambda\\in N(\\mathbb{T}^2_{m})$ of Proposition~\\ref{prop:NormalizersHirzebruch}.\n\n\\begin{prop}\\label{prop:EquivalentSubcircles}\nThe only subcircles of $\\mathbb{T}^2_{m}$ that are equivalent to $S^{1}(a,b;m)$ are the elements of the orbit of $S^{1}(a,b;m)$ under the action of the Weyl group $W(\\mathbb{T}^2_{m})$ and their reparametrizations. \n\\end{prop}\n\\begin{proof}\nWe have to show that if $S^{1}(a,b;m)$ and $S^{1}(c,d;m)$ are in the same conjugacy class, then there is an element of $N(\\mathbb{T}^2_{m})$ taking one circle to the other. As conjugation of actions corresponds to uniform translation of the moment map labels, this will follow from a systematic inspection of the possible labelled graphs.\n\nThe first cases to consider are actions $S^{1}(a,b;m)$ whose fixed points are all isolated, which means that the weights $a$, $b$, and $am-b$ are all non-zero. Two equivalent graphs arising from normalized moment maps must have the same moment map values at the maximum, and their respective weights at the maximum and at the minimum must also coincide.
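This conjugation can be checked directly. With the convention (ours) that the linear part $M=\begin{pmatrix}-1 & -m\\ 0 & 1\end{pmatrix}$ of $\Lambda$ acts on a subcircle $(a,b)$ through its transpose, a two-line computation recovers $S^{1}(-a,b-am;m)$ and reflects the fact that $\Lambda$ is an involution:

```python
def conjugate_circle(a, b, m):
    # Transpose of the linear part M = [[-1, -m], [0, 1]] of Lambda applied
    # to the column vector (a, b): sends S^1(a, b; m) to S^1(-a, b - a*m; m).
    return (-a, b - a * m)
```

Applying `conjugate_circle` twice returns the original pair $(a,b)$, as expected of an element of order 2.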
\n\n\\begin{table}[H]\n\\begin{tabular}{|l||c|c|c|}\n\\hline\nType of graph & Weights at max & Weights at min & Moment label at max\\\\\n\\hline\n\\hline\nG: $a>0$, $b>0$, $am-b>0$ & $-a$, $b-am$ & $a$, $b$ & $a(\\lambda+k)$\\\\\n\\hline\nH: $a>0$, $b>0$, $am-b<0$ & $-a$, $am-b$ & $a$, $b$ & $b+a(\\lambda-k-\\epsilon_{m})$ \\\\\n\\hline\nI: $a>0$, $b<0$ & $-a$, $b-am$ & $a$, $-b$ & $a(\\lambda+k)-b$\\\\\n\\hline\nJ: $a<0$, $b>0$ & $a$, $-b$ & $-a$, $b-am$ & $b-a(\\lambda+k)$\\\\\n\\hline\nK: $a<0$, $b<0$, $am-b>0$ & $a$, $b$ & $-a$, $am-b$ & $-b-a(\\lambda-k-\\epsilon_{m})$\\\\\n\\hline\nL: $a<0$, $b<0$, $am-b<0$ & $a$, $b$ & $-a$, $b-am$ & $-a(\\lambda+k)$\\\\\n\\hline\n\\end{tabular}\n\\caption{Weights and moment map values for graphs (G) -- (L)}\\label{table:WeightsMomentMapValuesGL}\n\\end{table}\nWe start by assuming $a>0$, $b>0$, and $m>0$, which implies that the graph of $S^{1}(a,b;m)$ is of type (G) or (H).\n\nAssume the graph of $S^{1}(a,b;m)$ is of type (G), that is, $am-b>0$, and suppose $S^{1}(c,d;m)$, $(c,d)\\neq (a,b)$, is in the same conjugacy class.\n\n\\begin{itemize}\n\\item Suppose the graph of $S^{1}(c,d;m)$ is also of type (G). Looking at the weights at the minimum, we see that $(c,d)=(b,a)$ is the only non-trivial possibility. Looking at the moment map values at the maximum, we conclude that $a(\\lambda+k)=b(\\lambda+k)$, that is, $a=b$, contradicting the fact that $(a,b)\\neq(c,d)$.\n\\item Suppose the graph of $S^{1}(c,d;m)$ is of type (H). Looking again at the weights at the minimum, we see that $c=b$ and $d=a$, with $a\\neq b$ as before. The sets of weights at the maximum must also be equal, that is, $\\{-a,b-am\\}=\\{-c, cm-d\\}=\\{-b,bm-a\\}$, which implies that $a=a-bm$, contradicting the fact that $bm\\neq 0$.\n\\item Suppose the graph of $S^{1}(c,d;m)$ is of type (I), which implies $c>0$, $d<0$. Looking at the weights at the minimum, we must have either $(c,d)=(a,-b)$ or $(c,d)=(b,-a)$.
In the former case, looking at the moment map values at the maximum, we conclude that $a(\\lambda+k)=c(\\lambda+k)-d=a(\\lambda+k)+b$, that is, $b=0$, which is a contradiction. In the latter case, the weights at the maximum become $\\{-c, d-cm\\} = \\{-b, -a-bm\\}$, and we must have $\\{-b, -a-bm\\}=\\{-a, b-am\\}$ as sets. Since $a+bm\\neq a$, we must have $b=a$, which implies $a=b=1$, $c=1$, and $d=-1$. The moment map values at the maximum are then $a(\\lambda+k)=(\\lambda+k)$ and $c(\\lambda+k)-d=(\\lambda+k)+1$, which are not equal. \n\\item Suppose the graph of $S^{1}(c,d;m)$ is of type (J), which implies $c<0$, $d>0$. Looking at the weights at the minimum, we must have $\\{a,b\\}=\\{-c,d-cm\\}$ as sets. So, either $(c,d)=(-a,b-am)$ or $(c,d)=(-b,a-bm)$. In the first case, $b-am<0$, which is impossible at the minimum. In the second case, looking at the moment map values at the maximum, we conclude that $a(\\lambda+k)=d-c(\\lambda+k)=a-bm+b(\\lambda+k)=a+b(\\lambda-k)$. This implies $a(\\lambda+k-1)\/(\\lambda+k)=b$. As $a$ and $b$ are non-zero integers, this is impossible.\n\n\\item Suppose the graph of $S^{1}(c,d;m)$ is of type (K), which implies $c<0$, $d<0$, and $cm-d>0$. Comparing the weights at the minimum, we see that either $(c,d)=(-a, -am-b)$ or $(c,d)=(-b,-bm-a)$ with $a\\neq b$. In the former case, comparing the weights at the maximum yields $\\{-a,b-am\\}=\\{c,d\\}=\\{-a,-am-b\\}$. This implies $b=0$, which is excluded. In the latter case, we must have $a\\neq b$ and $\\{-a,b-am\\}=\\{c,d\\}=\\{-b,-bm-a\\}$, which again implies $b=0$.\n\\item Finally, suppose the graph of $S^{1}(c,d;m)$ is of type (L), which implies $c<0$, $d<0$, and $cm-d<0$. Comparing the weights at the minimum, we see that either $(c,d)=(-a, b-am)$ or $(c,d)=(-b,a-bm)$. In the former case, we get the conjugate circle $S^{1}(-a,b-am;m)$. In the latter case, the moment map values at the maximum give $a(\\lambda+k)=-c(\\lambda+k)=b(\\lambda+k)$, that is, $a=b$.
Then $(c,d)=(-a, b-am)$ and $S^{1}(c,d;m)=S^{1}(-a,b-am;m)$ as before.\n\\end{itemize}\nWe conclude that the only circle action $S^{1}(c,d;m)$ conjugate to an action $S^{1}(a,b;m)$ of type (G) is $S^{1}(-a,b-am;m)$.\n\nAssume now that the graph of $S^{1}(a,b;m)$ is of type (H), that is, $a>0$, $b>0$, and $am-b<0$. By transitivity, we already know that an action $S^{1}(c,d;m)$ in the same conjugacy class cannot be of type (G) or (L).\n\n\\begin{itemize}\n\\item Suppose the graph of $S^{1}(c,d;m)$ is also of type (H). Comparing the weights at the minimum, we see that we must have $(c,d)=(b,a)$. Looking at the weights at the maximum, we must have $\\{-a,am-b\\}=\\{-c,cm-d\\}$ as sets. Since $(c,d)\\neq(a,b)$, the only possibility is $a=d-cm=a-bm$, which implies $b=0$.\n\\item Suppose the graph of $S^{1}(c,d;m)$ is of type (I). Comparing the weights at the minimum, we see that we must have either $(c,d)=(a,-b)$ or $(c,d)=(b,-a)$ with $a\\neq b$. In the first case, comparing the weights at the maximum, we should have $\\{-a,am-b\\}=\\{-c,d-cm\\}=\\{-a,-b-am\\}$. This implies $a=0$, which is excluded. Similarly, in the second case, we get $\\{-a,am-b\\}=\\{-c,d-cm\\}=\\{-b,-a-bm\\}$ with $a\\neq b$. This again implies $a=0$.\n\\item Suppose the graph of $S^{1}(c,d;m)$ is of type (J). Looking at the weights at the minimum, we must have $\\{a,b\\}=\\{-c,d-cm\\}$ as sets. So, either $(c,d)=(-a,b-am)$ or $(c,d)=(-b,a-bm)$. In the first case, we obtain the conjugate action $S^{1}(-a,b-am;m)$. In the second case, looking at the weights at the maximum, we must have $\\{-a,am-b\\}=\\{c,-d\\}=\\{-b, bm-a\\}$. If $a=b=1$, this gives us the conjugate action $S^{1}(-1,1-m;m)$. If $a=a-bm$, then $b=0$, which is excluded.\n\\item Finally, suppose the graph of $S^{1}(c,d;m)$ is of type (K). Looking at the weights at the minimum, we must have $\\{a,b\\}=\\{-c,cm-d\\}$ as sets. So, either $(c,d)=(-a,-b-am)$ or $(c,d)=(-b,-a-bm)$.
Looking at the weights at the maximum, we must also have $\\{-a,am-b\\}=\\{c,d\\}$. If $(c,d)=(-a,-b-am)$, then $am-b=-am-b$, which implies $a=0$. Instead, if $(c,d)=(-b,-a-bm)$, then either $a=b=1$, which is equivalent to the previous case, or $a=a+bm$, which forces $b=0$.\n\\end{itemize}\nWe conclude that $S^{1}(-a,b-am;m)$ is the only subcircle of $\\mathbb{T}^2_{m}$ conjugate to an action $S^{1}(a,b;m)$ of type (H).\n\n\nAssume now that the graph of $S^{1}(a,b;m)$ is of type (I), that is, $a>0$ and $b<0$. By transitivity, we already know that an action $S^{1}(c,d;m)$ in the same conjugacy class cannot be of type (G), (H), (J), or (L). By symmetry of Table~\\ref{table:WeightsMomentMapValuesGL} under a change of sign of the pair $(a,b)$, and comparing with actions of types (H) and (J), we see that the only action $S^{1}(c,d;m)$ of type (K) conjugate to $S^{1}(a,b;m)$ is $S^{1}(-a,b-am;m)$. \n\n\\begin{itemize}\n\\item Suppose the graph of $S^{1}(c,d;m)$ is also of type (I). Comparing the weights at the minimum, we see that we must have $(c,d)=(-b,-a)$. Looking at the weights at the maximum, we must also have $\\{-a,b-am\\}=\\{-c, d-cm\\}=\\{b,-a+bm\\}$. If $b=-a$, then $(a,b)=(1,-1)=(c,d)$. If $b=b-am$, then we must have $a=0$, which is impossible.\n\n\\end{itemize}\nThis shows that for an action $S^{1}(a,b;m)$ of type (I), the only other action $S^{1}(c,d;m)$ in the same conjugacy class is $S^{1}(-a,b-am;m)$.\n\nThis concludes the proof of the proposition for actions $S^{1}(a,b;m)$ whose fixed points are all isolated, in the case $m>0$. When $m=0$ and $\\lambda>1$, the graphs (G) and (L) no longer exist, and the four remaining graphs (H)--(K) only differ by the action of $D_{2}$ on the pair $(a,b)$, that is, by a change of sign of $a$ or $b$. When $\\lambda=1$, we can also interchange $a$ and $b$ to obtain an equivalent graph, which defines an action of $D_{4}$.
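The case analysis above lends itself to a brute-force sanity check: encode the three invariants of a normalized labelled graph recorded in Table~\ref{table:WeightsMomentMapValuesGL} and search for all pairs $(c,d)$ whose invariants match. A Python sketch (the encoding and the sample values $m=3$, $\lambda=5/2$ are ours):

```python
from fractions import Fraction
from math import gcd

def invariants(a, b, m, lam):
    # The three invariants used in the proof, read off from the table above:
    # weights at the maximum, weights at the minimum, and the moment map
    # value at the maximum.  Assumes isolated fixed points, i.e. a, b, and
    # am - b are all nonzero.
    k, eps = divmod(m, 2)  # m = 2k + eps_m
    assert a != 0 and b != 0 and a * m - b != 0
    if a > 0 and b > 0 and a * m - b > 0:        # type (G)
        mx, mn, mu = {-a, b - a * m}, {a, b}, a * (lam + k)
    elif a > 0 and b > 0:                        # type (H)
        mx, mn, mu = {-a, a * m - b}, {a, b}, b + a * (lam - k - eps)
    elif a > 0:                                  # type (I)
        mx, mn, mu = {-a, b - a * m}, {a, -b}, a * (lam + k) - b
    elif b > 0:                                  # type (J)
        mx, mn, mu = {a, -b}, {-a, b - a * m}, b - a * (lam + k)
    elif a * m - b > 0:                          # type (K)
        mx, mn, mu = {a, b}, {-a, a * m - b}, -b - a * (lam - k - eps)
    else:                                        # type (L)
        mx, mn, mu = {a, b}, {-a, b - a * m}, -a * (lam + k)
    return frozenset(mx), frozenset(mn), mu

def matching_circles(a, b, m, lam, bound=12):
    # All effective circles S^1(c, d; m) with isolated fixed points whose
    # graph invariants agree with those of S^1(a, b; m).
    target = invariants(a, b, m, lam)
    return sorted(
        (c, d)
        for c in range(-bound, bound + 1)
        for d in range(-bound, bound + 1)
        if c != 0 and d != 0 and c * m - d != 0 and gcd(c, d) == 1
        and invariants(c, d, m, lam) == target
    )
```

With these sample values, `matching_circles(1, 2, 3, Fraction(5, 2))` finds exactly $(1,2)$ and its conjugate $(-1,-1)=(-1,\,2-3)$, in line with the proposition.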
Finally, the case of actions with non-isolated surfaces involves the graphs (A)--(F) and is simpler. The details are left to the reader. \n\\end{proof}\n\n\n\n\\subsection{Toric extensions of circle actions}\n\\begin{defn}\nWe shall say a circle action $S^1(a,b; m)$ \\emph{extends} to a toric action $\\mathbb{T}^2_{n}$ if it is $S^1$-equivariantly symplectomorphic to a circle action of the form $S^1(c,d; n)$.\n\\end{defn}\n\nIn this section, we determine all possible toric extensions of $S^{1}(a,b;m)$. We begin with the exceptional case $\\lambda=1$.\n\\begin{prop}\nConsider $(S^2 \\times S^2,\\omega_\\lambda)$ with $\\lambda=1$. Then the only Hamiltonian circle actions are of the form $S^1(a,b; 0)$. In particular, they can only extend to the torus $\\mathbb{T}^2_0$. \n\\end{prop}\n\\begin{proof}\nThis follows directly from Theorem~\\ref{lemma_number_of_torus_actions}.\n\\end{proof}\n\n\n\nBy Lemma~\\ref{lemma_torus_action_-vecurve}, a toric extension of $S^{1}(a,b;m)$ to $\\mathbb{T}^2_{n}$, $n\\geq 1$, implies the existence of an invariant sphere of self-intersection $-n$ and of positive symplectic area. These two numerical invariants can be read from labelled graphs using Lemma~\\ref{weight} and Lemma~\\ref{Lemma:SymplecticArea}. As we will see, this imposes enough conditions on the triple $(a,b;m)$ to determine all possible embeddings $S^{1}(a,b;m)\\hookrightarrow \\mathbb{T}^2_{n}$, $n\\neq m$.\n\n\n\n\\begin{prop}\\label{prop:AtMostTwoExtensions}\nConsider a Hamiltonian circle action $S^1(a,b;m)$ with $\\lambda >1$. 
Under the following numerical conditions on $a,b,m,\\lambda$, the circle action only extends to the toric action~$\\mathbb{T}^2_m$:\n\\begin{itemize}\n \\item when $a\\neq\\pm 1$;\n \\item when $b=0$ or $b=am$;\n \\item when $a = \\pm 1$ and $2 \\lambda \\leq |2b-am|+\\epsilon_{m}$.\n\\end{itemize}\nIn all other cases, the circle action may extend to at most two inequivalent toric actions, namely,\n\\begin{itemize}\n \\item when $a=\\pm 1$, $2\\lambda > |2b-am|+\\epsilon_{m}$, and $b \\not\\in \\{0,am\\}$, the circle action $S^1(a,b;m)$ extends to the toric action $\\mathbb{T}^2_{m}$ and, possibly, to $\\mathbb{T}^2_{|2b-am|}$, but to no others.\n\n\\end{itemize}\n\\end{prop}\n\\begin{proof}\nSuppose $S^{1}(a,b;m)$ is equivariantly symplectomorphic to $S^{1}(c,d;n)$ for some $n\\neq m$. Note that we necessarily have $m \\equiv n \\pmod{2}$. By assumption, the two actions share the same normalized labelled graph.\n\nWe first consider an action $S^{1}(a,b;m)$ whose fixed points are all isolated. For the moment, let us also assume that $m\\neq 0$ and $n\\neq 0$. We observe that if the $\\mathbb{T}^2_{n}$ invariant curve $C_{n}$ of self-intersection $-n$ has non-trivial isotropy, then it must appear as some edge in the graph of $S^{1}(a,b;m)$. As the edge connecting the vertices $Q$ and $R$ is the only one that corresponds to an invariant sphere of negative self-intersection, namely $-m$, we conclude that $m=n$. This contradiction shows that $C_{n}$ must have trivial isotropy. By symmetry, the same is true of the invariant curve $C_{m}$ of self-intersection $-m$. It follows that $a=\\pm 1$ and $c=\\pm 1$. \n\nAssume $a=1$. Because there are no fixed surfaces, we know that $b\\neq 0$ and $m-b\\neq 0$. Figure~\\ref{fig:PossibleGraphsToricExtensions} shows the three possible graphs for $S^{1}(1,b;m)$, which are then of types (G), (H), or (I). The dashed edges represent the possible locations of the $\\mathbb{T}^2_{n}$ invariant curves $C_{n}$ and $C_{-n}$.
We can compute the self-intersection of $C_{n}$ and $C_{-n}$ by applying Lemma~\\ref{weight} to the normal bundle of these invariant spheres, and using the configurations of weights shown in Figure~\\ref{fig:Self-Intersection}. The self-intersection of the curve between $P$ and $R$ is $2b-m$, while the self-intersection of the curve between $Q$ and $S$ is $m-2b$. Set $n=|2b-m|$. For the toric action $\\mathbb{T}^2_{n}$ to exist, we must also have $0<2\\omega_\\lambda (C_{n}) = 2\\lambda-|2b-m|-\\epsilon_{m}$. We conclude that the action $S^{1}(1,b;m)$, $m\\neq 0$, may extend to $\\mathbb{T}^2_{|2b-m|}$ whenever $2\\lambda>|2b-m|+\\epsilon_{m}$, and that this is the only other possible toric extension.\n\n\n\\begin{figure}\n\\centering\n~\n\\subcaptionbox*{}{}\n\\setcounter{subfigure}{6}\n\\subcaptionbox{When $a=1,~b>0$, and $m-b>0$\\label{fig:a=1|b>0|m-b>0}}\n[.45\\linewidth]\n{\\begin{tikzpicture}[scale=0.9, every node\/.style={scale=0.9}]\n\\draw [fill] (0,0) ellipse (0.1cm and 0.1cm);\n\\draw [fill] (1,-1.5) ellipse (0.1cm and 0.1cm);\n\\draw [fill] (0,-4) ellipse (0.1cm and 0.1cm);\n\\draw [fill] (-1,-2.5) ellipse (0.1cm and 0.1cm);\n\\draw (0,0) -- (1,-1.5);\n\\draw [dashed, blue] (0,0) -- (-1,-2.5);\n\\draw [dashed, blue] (1,-1.5) -- (0,-4);\n\\draw (-1,-2.5) -- (0,-4);\n\\node[left] at (-0.6,-3.2) {$b$};\n\\node[right] at (0.7,-0.5) {$m-b$};\n\\node[above] at (0,0.2) {$\\mu = (\\lambda + k)$};\n\\node[right] at (-0.6,-2.5) {$\\mu = b$};\n\\node[right] at (1,-1.8) {$\\mu = b + (\\lambda - k-\\epsilon_{m}) $};\n\\node[below] at (0, -4.2) {$\\mu = 0$};\n\\node[left] at (-1,-2.5) {Q};\n\\node[left] at (0.9,-1.4) {R};\n\\node[right] at (0,0) {S};\n\\node[right] at (0,-4) {P};\n\\end{tikzpicture}}\n\\subcaptionbox{When $a=1,~b>0$ and $m-b<0$\n}\n[.45\\linewidth]\n{\\begin{tikzpicture}[scale=0.9, every node\/.style={scale=0.9}]\n\\draw [fill] (0,0) ellipse (0.1cm and 0.1cm); \n\\draw [fill] (1,-2) ellipse (0.1cm and 0.1cm); \n\\draw [fill] (0,-4) ellipse (0.1cm and 
0.1cm); \n\\draw [fill] (-1,-3) ellipse (0.1cm and 0.1cm); \n\\draw [dashed, blue] (0,0) .. controls (-4,-1.5) and (-2,-3.5) .. (0,-4);\n\\draw [dashed, blue] (1,-2) -- (-1,-3);\n\\draw (0,0) -- (1,-2);\n\\draw (-1,-3) -- (0,-4);\n\\node[right] at (0,-4) {P};\n\\node[left] at (-1,-3) {Q};\n\\node[left] at (0.9,-2) {S};\n\\node[right] at (0,0) {R};\n\\node[above] at (0,0.2) {$\\mu = b + (\\lambda -k-\\epsilon_{m})$};\n\\node[right] at (1.2,-2) {$\\mu = (\\lambda + k) $};\n\\node[right] at (-0.7,-3) {$\\mu = b$};\n\\node[right] at (-0.4,-3.5) {$b$};\n\\node[right] at (0.7,-1) {$b-m$};\n\\node[below] at (0,-4.2) {$\\mu = 0$};\n\\end{tikzpicture}\n}\n\\subcaptionbox{When $a=1,~b<0$\\label{fig:a=1|b>0}}\n[.45\\linewidth]\n{\\begin{tikzpicture}[scale=0.9, every node\/.style={scale=0.9}]\n\\draw [fill] (0,0) ellipse (0.1cm and 0.1cm);\n\\draw [fill] (1,-3) ellipse (0.1cm and 0.1cm);\n\\draw [fill] (0,-4) ellipse (0.1cm and 0.1cm);\n\\draw [fill] (-1,-2) ellipse (0.1cm and 0.1cm);\n\\draw [dashed, blue] (0,0) .. controls (-4,-1.5) and (-2,-3.5) .. 
(0,-4);\n\\draw [dashed, blue] (-1,-2) -- (1,-3);\n\\draw (0,0) -- (-1,-2) ;\n\\draw (1,-3) -- (0,-4);\n\\node[above] at (0,1) {\\rule{0em}{1em}};\n\\node[above] at (0,0.2) {$\\mu = (\\lambda +k)-b$};\n\\node[right] at (1.4,-3) {$\\mu = -b $};\n\\node[right] at (-0.8,-1.9) {$\\mu = (\\lambda-k-\\epsilon_{m})$};\n\\node[below] at (0,-4.2) {$\\mu = 0$};\n\\node[right] at (0.5, -3.6) {$-b$};\n\\node[right] at (-0.4,-1.0) {$m-b$};\n\\node[right] at (1,-3) {P};\n\\node[left] at (-1,-2) {R};\n\\node[right] at (0,0) {S};\n\\node[right] at (0,-4) {Q};\n\\end{tikzpicture}}\n\\caption{Graphs of $S^{1}(1,b;m)$ with possible locations of $C_{n}$ and $C_{-n}$}\n\\label{fig:PossibleGraphsToricExtensions}\n\\end{figure}\n\n\n\\begin{figure}\n\\centering\n \n\\begin{tikzpicture}\n\\draw [fill] (0,0) ellipse (0.1cm and 0.1cm); \n\\draw [fill] (4,0) ellipse (0.1cm and 0.1cm); \n\\draw [fill] (7,0) ellipse (0.1cm and 0.1cm); \n\\draw [fill] (11,0) ellipse (0.1cm and 0.1cm);\n\\draw [dashed, blue] (0,0) -- (4,0); \n\\draw [dashed, blue] (7,0) -- (11,0);\n\\draw (0,-0.5) -- (0,0.5);\n\\draw (4,-0.5) -- (4,0.5);\n\\draw (7,-0.5) -- (7,0.5);\n\\draw (11,-0.5) -- (11,0.5);\n\\node[left] at (0,0) {$1\\,$}; \n\\node[above] at (0,0.5) {$b$}; \n\\node[right] at (4,0) {$\\,-1$}; \n\\node[above] at (4,0.5) {$m-b$}; \n\\node[right] at (0,-0.25) {$P$};\n\\node[left] at (4,-0.25) {$R$};\n\\node[above,blue] at (2,0) {$1$};\n\\node[left] at (7,0) {$1\\,$}; \n\\node[above] at (7,0.5) {$-b$}; \n\\node[right] at (11,0) {$\\,-1$};\n\\node[above] at (11,0.5) {$b-m$}; \n\\node[right] at (7,-0.25) {$Q$};\n\\node[left] at (11,-0.25) {$S$};\n\\node[above,blue] at (9,0) {$1$};\n\\end{tikzpicture}\n\\caption{Configurations of weights along $C_{\\pm n}$ when $a=1$}\n\\label{fig:Self-Intersection}\n\\end{figure}\n\nWhen $a=-1$, the same arguments apply to $S^{1}(-1,b;m)$, whose possible graphs are now of types (J), (K), and (L). The self-intersections of the curves $C_{\\pm n}$ are now $\\pm |2b+m|$.
We conclude that the action $S^{1}(-1,b;m)$ may extend to $\\mathbb{T}^2_{|2b+m|}$ whenever $m\\neq 0$ and $2\\lambda>|2b+m|+\\epsilon_{m}$.\n\nIn the special case $m=0$, any invariant sphere appearing in the graph of $S^{1}(a,b;0)$ has zero self-intersection. If $S^{1}(a,b;0)$ extends to $\\mathbb{T}^2_{n}$ with $n\\geq 2$, it again follows that the invariant curve $C_{n}$ of self-intersection $-n$ must have trivial isotropy. Consequently, $c=\\pm 1$, and at least one of the weights $a$ or $b$ must be $\\pm1$. If $a=1$, the possible graphs of $S^{1}(1,b;0)$ are the graphs (H) and (I) of Figure~\\ref{fig:PossibleGraphsToricExtensions}. Then, the same argument as before shows that the self-intersections of the invariant curves are $0$ and $\\pm 2b$. Consequently, the action $S^{1}(1,b;0)$ may extend to $\\mathbb{T}^2_{|2b|}$ provided $2\\lambda>|2b|$. Similarly, when $a=-1$, the action $S^{1}(-1,b;0)$ may extend to $\\mathbb{T}^2_{|2b|}$ whenever $2\\lambda>|2b|$.\n\nWhen $m=0$ and $b=1$, the possible self-intersections of the invariant curves are $0$ and $\\pm 2a$. However, the two possible graphs are now of types (H) and (J). In both cases, the area of the tentative invariant curve $C_{\\pm n}$ connecting the two interior fixed points is negative. It follows that it is impossible to have $m=0$ and $b=1$. Similarly, if $b=-1$, the possible graphs are of types (I) and (K) and, as before, the area of the tentative invariant curve $C_{\\pm n}$ connecting the two interior fixed points is negative. Consequently, we cannot have $m=0$ and $b=\\pm 1$.
This concludes the proof of the statement for actions whose fixed points are all isolated.\n\n\\begin{figure}\n\\centering\n~\n\\subcaptionbox*{}{}\n\\setcounter{subfigure}{7}\n\\subcaptionbox{When $m=0$, $a>0,~b=1$}\n[.45\\linewidth]\n{\\begin{tikzpicture}[scale=0.9, every node\/.style={scale=0.9}]\n\\draw [fill] (0,0) ellipse (0.1cm and 0.1cm); \n\\draw [fill] (1,-2) ellipse (0.1cm and 0.1cm); \n\\draw [fill] (0,-4) ellipse (0.1cm and 0.1cm); \n\\draw [fill] (-1,-3) ellipse (0.1cm and 0.1cm); \n\\draw [dashed, blue] (0,0) .. controls (-4,-1.5) and (-2,-3.5) .. (0,-4);\n\\draw[dashed, blue] (-1,-3) -- (1,-2);\n\\draw (0,0) -- (-1,-3) ;\n\\draw (1,-2) -- (0,-4);\n\\node[left] at (-1.1,-3) {Q};\n\\node[left] at (0.9,-1.9) {S};\n\\node[right] at (0,0) {R};\n\\node[above] at (0,0.2) {$\\mu = 1 + a (\\lambda -k-\\epsilon_{m})$};\n\\node[right] at (1.2,-2) {$\\mu = a(\\lambda + k) $};\n\\node[right] at (-0.9,-3) {$\\mu = 1$};\n\\node[right] at (0,-4) {P};\n\\node[right] at (0.5, -3) {$a$};\n\\node[left] at (-0.5,-1.5) {$a$};\n\\node[below] at (0,-4.2) {$\\mu = 0$};\n\\end{tikzpicture}}\n\\setcounter{subfigure}{9}\n\\subcaptionbox{When $m=0$, $a<0,~b=1$\\label{fig:a<0|b=1}}\n[.45\\linewidth]\n{\\begin{tikzpicture}[scale=0.9, every node\/.style={scale=0.9}]\n\\draw [fill] (0,0) ellipse (0.1cm and 0.1cm);\n\\draw [fill] (1,-3) ellipse (0.1cm and 0.1cm);\n\\draw [fill] (0,-4) ellipse (0.1cm and 0.1cm);\n\\draw [fill] (-1,-2) ellipse (0.1cm and 0.1cm);\n\\draw (0,0) -- (1,-3);\n\\draw (-1,-2) -- (0,-4);\n\\draw [dashed, blue] (0,0) .. controls (-4,-1.5) and (-2,-3.5) .. 
(0,-4);\n\\draw[dashed, blue] (-1,-2) -- (1,-3);\n\\node[above] at (0,1) {\\rule{0em}{1em}};\n\\node[above] at (0,0.2) {$\\mu = 1 - a (\\lambda +k)$};\n\\node[right] at (1.4,-3) {$\\mu = 1 $};\n\\node[above] at (-1,-2) {$\\mu = -a(\\lambda+k)$};\n\\node[below] at (0,-4.2) {$\\mu = 0$};\n\\node[right] at (-1.2,-3.2) {$-a$};\n\\node[right] at (0.6,-1.3) {$-a$};\n\\node[left] at (0.9,-3) {R};\n\\node[left] at (-1,-2.2) {P};\n\\node[right] at (0,0) {Q};\n\\node[right] at (0,-4) {S};\n\\end{tikzpicture}}\n\\caption{Graphs of $S^{1}(a,1;0)$ with impossible locations of $C_{n}$ and $C_{-n}$}\n\\label{fig:ImpossibleGraphsToricExtensions}\n\\end{figure}\n\nNow suppose $S^{1}(a,b;m)$ has non-isolated fixed points, that is, $a=0$, or $b=0$, or $b=am$. As the moment map values of graphs (A), (B), (E), and (F) depend only on $m$, it is impossible to get the same graphs from another action $S^{1}(c,d;n)$ with $n\\neq m$. The same remark applies to the area labels of graphs (C) and (D), showing that $S^{1}(a,b;m)$ only extends to $\\mathbb{T}^2_{m}$.\n\\end{proof}\n\n\nIt remains to investigate whether the action $S^{1}(\\pm 1,b;m)$ extends to $\\mathbb{T}^2_{|2b-am|}$ when $2\\lambda > |2b-am|+\\epsilon_{m}$ and $b \\not\\in \\{0,am\\}$. A straightforward but lengthy comparison of the possible graphs of $S^{1}(\\pm1,b;m)$ and $S^{1}(\\pm 1,d;|2b-am|)$ shows that this is always the case and yields the equivalences stated in the following two corollaries. The graphs giving the first equivalence are shown in Figure~\\ref{fig:ExampleTwoToricExtensions}. The other cases are left to the reader.\n\n\\begin{cor} \\label{cor:CircleExtensionsWith_a=1}\nConsider the action $S^1(1,b;m)$ and suppose $2\\lambda > |2b-m|+\\epsilon_{m}$. 
Then under the following numerical conditions on $b$ and $m$, the $S^1(1,b;m)$ action extends to the toric action $\\mathbb{T}^2_{|2b-m|}$ and is equivariantly symplectomorphic to the following subcircle in $\\mathbb{T}^2_{|2b-m|}$\\,:\n\\begin{enumerate}\n \\item if $b>0$ and $b>m$, then $S^1(1,b; m)$ is equivalent to $S^1(1,b; |2b-m|)$;\n \\item if $b>0$, $m>b$, and $2b-m < 0$, then $S^1(1,b; m)$ is equivalent to $S^1(1,-b; |2b-m|)$;\n \\item if $b>0$, $m>b$, and $2b-m > 0$, then $S^1(1,b; m)$ is equivalent to $S^1(1,b;|2b-m|)$;\n \\item finally, if $b<0$, then $S^1(1,b; m)$ is equivalent to $S^1(1,-b;|2b-m|)$.\\qed\n\\end{enumerate}\n\\end{cor}\n\n\n\\begin{cor}\\label{cor:CircleExtensionsWith_a=-1}\nConsider the $S^1$ action $S^1(-1,b;m)$ on $(S^2 \\times S^2,\\omega_\\lambda)$ and suppose $2\\lambda > |2b+m|+\\epsilon_{m}$. Then under the following numerical conditions on $b$ and $m$, the $S^1(-1,b;m)$ action extends to the toric action $\\mathbb{T}^2_{|2b+m|}$ and is equivariantly symplectomorphic to the following subcircle in $\\mathbb{T}^2_{|2b+m|}$\\,:\n\\begin{enumerate}\n\\item if $b<0$ and $m>-2b$, then $S^1(-1,b;m)$ is equivalent to $S^1(-1,-b; |2b+m|)$;\n\\item if $b<0$, $m>-b$, and $-2b>m$, then $S^1(-1,b;m)$ is equivalent to \n$S^1(-1,b; |2b+m|)$;\n\\item if $b<0$ and $-b>m$, then $S^1(-1,b;m)$ is equivalent to $S^1(-1,b; |2b+m|)$;\n\\item if $b>0$, then $S^1(-1,b; m)$ is equivalent to $S^1(-1,-b; |2b+m|)$.\\qed\n\\end{enumerate}\n\\end{cor}\n\n\\begin{figure}[H]\n\\centering\n\\subcaptionbox*{$S^1(1,b;m)$ of type (H)\\label{fig:ExtendedGraph1}}\n[.45\\linewidth]\n{\\begin{tikzpicture}[scale=0.9, every node\/.style={scale=0.9}]\n\\draw [fill] (0,0) ellipse (0.1cm and 0.1cm); \n\\draw [fill] (1,-2) ellipse (0.1cm and 0.1cm); \n\\draw [fill] (0,-4) ellipse (0.1cm and 0.1cm); \n\\draw [fill] (-1,-3) ellipse (0.1cm and 0.1cm); \n\\draw (0,0) -- (1,-2);\n\\draw (-1,-3) -- (0,-4);\n\\draw[dashed, blue] (0,0) -- (-1,-3) ;\n\\draw[dashed, blue] (1,-2) -- 
(0,-4);\n\\node[right] at (-0.9,-3) {Q};\n\\node[left] at (0.9,-2) {S};\n\\node[right] at (0,0) {R};\n\\node[above] at (0,0.2) {$\\mu = b + (\\lambda-k-\\epsilon_{k})$};\n\\node[right] at (1.3,-2) {$\\mu = (\\lambda+k) $};\n\\node[left] at (-1.4,-3) {$\\mu = b$};\n\\node[right] at (0,-4) {P};\n\\node[right] at (-1,-3.7) {$b$};\n\\node[right] at (0.5, -3) {$a=1$};\n\\node[left] at (-0.5,-1.5) {$a=1$};\n\\node[right] at (0.7,-1) {$b-m$};\n\\node[below] at (0,-4.2) {$\\mu = 0$};\n\\end{tikzpicture}\n}\n~\n\\subcaptionbox*{$S^1(1,b;2b-m)$ of type (G)\\label{sndstrata}}\n[.45\\linewidth]\n{\\begin{tikzpicture}[scale=0.9, every node\/.style={scale=0.9}]\n\\draw [fill] (0,0) ellipse (0.1cm and 0.1cm);\n\\draw [fill] (1,-1.5) ellipse (0.1cm and 0.1cm);\n\\draw [fill] (0,-4) ellipse (0.1cm and 0.1cm);\n\\draw [fill] (-1,-2.5) ellipse (0.1cm and 0.1cm);\n\\draw (0,0) -- (1,-1.5);\n\\draw[dashed, blue] (1,-1.5) -- (-1,-2.5);\n\\draw[dashed, blue] (0,0) .. controls (-4,-1.5) and (-2,-3.5) .. (0,-4);\n\\draw (-1,-2.5) -- (0,-4);\n\\node[right] at (-0.35,-3.2) {$b$};\n\\node[right] at (- 1,-1.8) {$a=1$};\n\\node[left] at (-2.2,-1.5) {$a=1$};\n\\node[right] at (0.7,-0.5) {$(2b-m)-b=b-m$};\n\\node[above] at (0,0.2) {$\\mu = \\lambda+\\frac{(2b-m)-\\epsilon_{k}}{2}= b+\\lambda - k-\\epsilon_{k}$};\n\\node[right] at (-0.6,-2.5) {$\\mu = b$};\n\\node[right] at (0.6,-1.8) {$\\mu=b+\\lambda-\\frac{(2b-m)-\\epsilon_{k}}{2}-\\epsilon_{k}$};\n\\node[right] at (0.91,-2.25) {$=\\lambda+k$};\n\\node[below] at (0, -4.2) {$\\mu = 0$};\n\\node[left] at (-1,-2.5) {Q};\n\\node[left] at (0.9,-1.4) {R};\n\\node[right] at (0,0) {S};\n\\node[right] at (0,-4) {P};\n\\end{tikzpicture}}\n\\caption{The equivalence $S^{1}(1,b;m)\\sim S^{1}(1,b;2b-m)$ when $b>m$}\n\\label{fig:ExampleTwoToricExtensions}\n\\end{figure}\n\n\n\n\n\n\\chapter{Action of \\texorpdfstring{$\\Symp^{S^1}(S^2 \\times S^2,\\omega_\\lambda)$}{Symp(S2xS2)} on 
\\texorpdfstring{$\\mathcal{J}^{S^1}_{\\om_\\lambda}$}{J\\hat{}S1}}\\label{ChapterActionOfSymp}\nIn this chapter we show that the space of $S^1$ invariant compatible almost complex structures $\\mathcal{J}^{S^1}_{\\om_\\lambda}$ can be decomposed into strata, each of which is homotopy equivalent to a homogeneous space under the action of the equivariant symplectomorphism group. \n\n\n\n\n\n\n\\section{\\texorpdfstring{$J$}{J}-Holomorphic Preliminaries}\n\nWe first recall a few facts about compatible almost complex structures and associated $J$-holomorphic curves.\n\n\n\\begin{defn}[Compatible almost complex structures] \nAn almost complex structure $J$ on a symplectic manifold $(M,\\omega)$ is said to be compatible with $\\omega$ if $\\omega(u,Ju)>0$ for all non-zero $u\\in T_* M$ and $\\omega(Ju, Jv)=\\omega(u,v)$ for all $u,v\\in T_* M$.\n\\end{defn}\n\n\\begin{lemma}\nThe space $\\mathcal{J}_\\omega = \\mathcal{J}(M,\\omega)$ of all compatible almost complex structures on a symplectic manifold $(M,\\omega)$ is non-empty and contractible.\n\\end{lemma}\n\n\\begin{defn}[$J$-holomorphic spheres]\nLet $(M,\\omega)$ be a symplectic manifold endowed with a compatible almost complex structure $J$. A rational $J$-holomorphic map, also called a parametrized $J$-holomorphic sphere, is a $C^\\infty$ map\n\\[\nu: ({S}^2, j) \\longrightarrow (M,\\omega,J)\n\\]\nsatisfying the Cauchy-Riemann equation\n\\[\n\\bar\\partial_J(u)=\\frac{1}{2}(du\\circ j - J\\circ du) = 0\n\\]\nwhere $j$ is the usual complex structure on the sphere. The image of a $J$-holomorphic rational map is called a rational $J$-holomorphic curve or simply a $J$-curve.\n\\end{defn}\n\n\n\n\\begin{defn}[Multi-covered and simple maps]\nWe say that a $J$-holomorphic map $u:\\mathbb{C}P^1 \\longrightarrow (M,J)$ is multi-covered if $u = {u}^\\prime \\circ f$, where $f:\\mathbb{C}P^1 \\to \\mathbb{C}P^1$ is a holomorphic map of degree greater than one and where $u':\\mathbb{C}P^1 \\to (M,J)$ is a $J$-holomorphic map. 
We call a $J$-holomorphic map simple if it is not multi-covered.\n\\end{defn}\n\n\\begin{remark}\nWe usually assume that a $J$-holomorphic map is somewhere injective, meaning that there exists $z\\in {S}^2$ such that $du_z \\neq 0$ and $u^{-1}u(z) = z$. In particular, somewhere injective maps do not factor through multiple covers $h:S^2\\to S^2$. \n\\end{remark}\n\n\\begin{defn}[Moduli spaces of $J$-holomorphic maps or curves]\nLet $(M,\\omega)$ be a symplectic manifold and let $J \\in \\mathcal{J}_\\omega$. Given $A \\in H_2(M, \\mathbb{Z})$ we denote by $\\widetilde{\\mathcal{M}}(A,J)$ the space of all $J$-holomorphic, somewhere injective maps representing the homology class $A$. The Möbius group $G =\\PSL(2,\\mathbb{C})$ acts freely on this space by reparametrization and the quotient space $\\mathcal{M}(A,J):=\\widetilde{\\mathcal{M}}(A,J)\/G$ is called the moduli space of (unparametrised) $J$-curves in class $A$.\n\\end{defn}\n\nIn dimension 4, the geometric properties of $J$-holomorphic curves are, to a large extent, controlled by homological data. As a result, many properties of complex algebraic curves in complex algebraic surfaces extend to $J$-holomorphic curves in $4$-dimensional symplectic manifolds. Below we list the key properties of $J$-holomorphic curves we will be relying on.\n\n\\begin{thm}[Positivity]\\label{thm_Positivity}Let $(M,\\omega)$ be a 4-dimensional symplectic manifold. If a homology class $A \\in H_2(M,\\mathbb{Z})$ is represented by a nonconstant $J$-curve for some $J \\in \\mathcal{J}_\\omega$, then $\\omega(A) > 0$.\n\\end{thm}\n\n\\begin{thm}[Fredholm property and automatic regularity]\\label{thm_Regularity}Let $(M,\\omega)$ be a 4-dimensional symplectic manifold. 
\nThen the universal moduli space\n\\[\n\\widetilde{\\mathcal{M}}(A,\\mathcal{J}_{\\omega}) := \\bigcup_{J \\in {\\mathcal{J}_{\\omega}}} \\widetilde{\\mathcal{M}}(A,J)\n\\]\nwith the $C^l$-topology ($l \\geq 2$) is a smooth Banach manifold and the projection map\n\\[\n\\pi_A: \\widetilde{\\mathcal{M}}(A,\\mathcal{J}_{\\omega}) \\longrightarrow \\mathcal{J}_{\\omega}\n\\]\nis a Fredholm map of index $2(c_1(A) + 2)$ where $c_1 \\in H^2(M,\\mathbb{Z})$ is the first Chern class of $(TM, J)$ (note that the Chern class is independent of the choice of $J \\in \\mathcal{J}_{\\omega}$). An almost complex structure is said to be regular for the class $A$ if it is a regular value for the projection $\\pi_A$. If this is the case, then the moduli spaces $\\widetilde{\\mathcal{M}}(A,J)$ and $\\mathcal{M}(A,J)$ are smooth manifolds of dimensions $2(c_1(A)+2)$ and $2(c_1(A)-1)$ respectively. The set of regular values $J \\in \\mathcal{J}_\\omega$ is a subset of second category and is denoted by $\\mathcal{J}_{\\omega}^{\\textrm{reg}}(A)$. If $J \\in \\mathcal{J}_\\omega$ is integrable and $S$ is an embedded $J$-holomorphic sphere with self-intersection number $[S]\\cdot [S] \\geq -1$, then $J$ is regular for the class $[S]$. In dimension $4$, the same conclusion holds without the integrability assumption.\n\\end{thm}\n\n\\begin{defn}[Cusp Curves] \\label{defn_cusp}\nLet $(M,\\omega)$ be a symplectic manifold. Let $J \\in \\mathcal{J}_\\omega$. A $J$-holomorphic cusp curve $C$ is a connected finite union of $J$-holomorphic curves \n\\[C = C_1 \\cup C_2 \\cup \\ldots \\cup C_k\\] where $C_i = u_i(\\mathbb{C}P^1)$ and $u_i: \\mathbb{C}P^1 \\to (M,J)$ is a (possibly multi-covered) $J$-holomorphic map.\n\\end{defn}\n\n\\begin{thm}[Gromov's compactness theorem]\\label{thm_Compactness} Let $(M,\\omega)$ be a compact symplectic manifold. 
Let $J_i \\in \\mathcal{J}_{\\omega}$ be a sequence converging to $J$ in the $C^\\infty$ topology and let $S_i$ be $J_i$-holomorphic spheres of bounded symplectic area $\\omega(S_i)$. Then there is a subsequence of the $S_i$ which converges weakly to a $J$-holomorphic curve or cusp-curve $S$. In particular, if all the $S_i$'s belong to the class $A$, then $S$ also belongs to the class $A$, and any cusp curve defines a homological decomposition $A=\\sum_i A_i$ such that $\\omega(A_i)>0$.\n\\end{thm}\n\n\\begin{thm}[Positivity of intersections]\\label{thm_PositivityIntersections} Let $J \\in \\J_{\\om_\\lambda}$ and $A$, $B$ be two distinct $J$-holomorphic curves in a $4$-dimensional manifold. Then they intersect at only finitely many points and each point contributes positively to the intersection multiplicity $[A]\\cdot[B]$. Moreover, $[A]\\cdot[B]=1$ iff the curves intersect transversally at exactly one point, while $[A]\\cdot[B]= 0$ iff the curves are disjoint.\n\\end{thm}\n\nAs a corollary of positivity of intersections, we have the following result in the presence of a group action. \n\\begin{cor}\\label{cor_pos}\nLet $(M,\\omega)$ be a symplectic 4-manifold and let $G$ be a compact Lie group acting symplectically on $M$. Suppose that $G$ acts trivially on homology. Let $\\mathcal{J}^{G}$ denote the space of $G$-invariant $\\omega$-tame (or compatible) almost complex structures and let $C$ be a $J$-holomorphic curve for some $J \\in \\mathcal{J}^G$. \nThen,\n\\begin{enumerate}\n \\item if $C$ has negative self-intersection, then $g \\cdot C = C$ for all $g \\in G$;\n \\item if $C$ has zero self-intersection, then $g \\cdot C = C$ or $g\\cdot C \\cap C = \\emptyset$ for all $g \\in G$.\n\\end{enumerate} \n\\end{cor}\n\n\\begin{thm}[Adjunction formula]\\label{thm_Adjunction} Let $u: ({S}^2, j) \\longrightarrow (M^4,J)$ be a somewhere injective $J$-holomorphic map representing the homology class $A$ in a $4$-dimensional manifold. 
Define the virtual genus of $A$ as\n\\[\ng_v(A) = 1+\\frac{1}{2}([A]\\cdot[A]- c_1(A))\n\\]\nwhere $c_1(A) = \\langle c_1(TM,J), A \\rangle$. Then $g_v(A)\\geq 0$ with equality if, and only if, the map $u$ is an embedding.\n\\end{thm}\n\n\n\\section{\\texorpdfstring{$J$}{J}-holomorphic spheres in \\texorpdfstring{$S^2 \\times S^2$ and $\\CP^2\\# \\overline{\\CP^2}$}{S2xS2 and CP2\\#CP2}}\n\nIn this section, we will show how the existence of certain $J$-holomorphic spheres in ruled $4$-manifolds induces a natural partition of the space $\\mathcal{J}_\\omega$. We shall state all the relevant results, and refer the reader to Chapter 9 of \\cite{McD} for more details.\n\n\\begin{cor}\\label{FSimple}\nLet $\\lambda = l + \\delta$ where $l \\in \\mathbb{N}$ and $0<\\delta\\leq 1$.\nThen we have \n\\begin{enumerate}\n \\item Any $J$-holomorphic representative of the class $F$ is a simple curve.\n \\item The only $J$-holomorphic decompositions of the class $B$ are of the form $B = (B-kF) + kF$, where $0\\leq k\\leq l$. In this decomposition, the $J$-holomorphic representative of the class $(B-kF)$ is an embedded sphere, while the class $kF$ may be represented by a collection of (possibly multiply covered) spheres representing multiples of the class $F$.\\qed\n\\end{enumerate}\n\\end{cor}\n\n\n\n\\begin{prop}\\label{prop_FIndecomposable}\nThe moduli space $\\widetilde{\\mathcal{M}}(F,J)$ is non-empty for all $J \\in \\mathcal{J}_\\omega$. 
In particular, for every compatible almost complex structure $J\\in\\mathcal{J}_{\\omega_\\lambda}$, and for any given point $p\\in S^2\\times S^2$, there is a unique embedded $J$-holomorphic sphere representing the class $F$ passing through~$p$.\\qed\n\\end{prop}\n\n\nWe also have an analogous result for $(\\CP^2\\# \\overline{\\CP^2},\\omega_\\lambda)$.\n\\begin{thm}\nGiven any point $p \\in \\CP^2\\# \\overline{\\CP^2}$, and any $J \\in \\mathcal{J}^{S^1}_{\\om_\\lambda}$, there is a $J$-holomorphic curve in the class $F:=L-E$ passing through $p$. \\qed\n\\end{thm}\n\nThe following theorem due to Abreu and McDuff~\\cite{MR1775741} tells us about the decomposition of the space of compatible almost complex structures on $(S^2 \\times S^2,\\omega_\\lambda)$ into finitely many strata. \n\\begin{thm} \\label{strata}\nLet $\\mathcal{J}_{\\omega_\\lambda}$ denote the space of all compatible almost complex structures (not necessarily invariant) for the form $\\omega_\\lambda$ on $S^2 \\times S^2$. Then the space $\\mathcal{J}_{\\omega_\\lambda}$ admits a finite decomposition into disjoint Fréchet manifolds of finite codimensions\n\\[\n\\mathcal{J}_{\\omega_\\lambda} = U_0 \\sqcup U_2 \\sqcup U_4 \\sqcup \\ldots \\sqcup U_{2n}\n\\]\nwhere $2n= \\lceil 2\\lambda \\rceil -1$ and $\\lceil \\lambda \\rceil$ is the unique integer $l$ such that $l < \\lambda \\leq l+1$ and where\n\\[\nU_{2i} := \\left\\{ J \\in \\mathcal{J}_{\\omega_\\lambda}~|~ D_{2i}=B-iF \\in H_2(S^2 \\times S^2,\\mathbb{Z})\\text{~is represented by a $J$-holomorphic sphere}\\right\\}\n\\]\n\\end{thm}\n\nA completely analogous description holds true for $\\CP^2\\# \\overline{\\CP^2}$.\n\n\\begin{thm}\\label{strata-CCC}\nLet $\\mathcal{J}_{\\omega_\\lambda}$ denote the space of all compatible almost complex structures (not necessarily invariant) for the form $\\omega_\\lambda$. Then the space $\\mathcal{J}_{\\omega_\\lambda}$ admits a finite decomposition into disjoint Fréchet manifolds of finite 
codimensions\n\\[\n\\mathcal{J}_{\\omega_\\lambda} = U_1 \\sqcup U_3 \\sqcup U_5 \\sqcup \\ldots \\sqcup U_{2n-1}\n\\]\nwhere $2n = \\lceil 2\\lambda \\rceil -1$, and $\\lceil \\lambda \\rceil$ is the unique integer $l$ such that $l < \\lambda \\leq l+1$ and where\n\\[ \nU_{2i-1} := \\left\\{ J \\in \\mathcal{J}_{\\omega_\\lambda}~|~ D_{2i-1}=B-iF \\in H_2(\\CP^2\\# \\overline{\\CP^2},\\mathbb{Z})\\text{~is represented by a $J$-holomorphic sphere}\\right\\}\n\\]\n\\end{thm}\n\n\\begin{remark}\nWe label the strata in $S^2 \\times S^2$ and $\\CP^2\\# \\overline{\\CP^2}$ by the homological self-intersection of the classes $D_s$.\n\n\\end{remark}\n\n\\begin{remark}\\label{remark-starta-toric}\nWe note that for both $S^2 \\times S^2$ and $\\CP^2\\# \\overline{\\CP^2}$, there is a canonical integrable almost complex structure $J_s$ in the stratum $U_s$ coming from realizing $S^2 \\times S^2$ and $\\CP^2\\# \\overline{\\CP^2}$ as the $s^{\\text{th}}$ Hirzebruch surface $W_s$ of Section 1.2. Further recall that associated to each $J_s$ we have a unique $J_s$-holomorphic Hamiltonian toric action $\\mathbb{T}_s$. Thus the set of possible equivalence classes of toric actions on $S^2 \\times S^2$ and $\\CP^2\\# \\overline{\\CP^2}$ is in one-to-one correspondence with the strata in the decomposition of $\\mathcal{J}_{\\omega_\\lambda}$. This fact will be crucial in our later analysis of centralizer subgroups. \n\\end{remark}\n\n\n\n\\section{Intersection of \\texorpdfstring{$\\mathcal{J}^{S^1}_{\\om_\\lambda}$}{J\\hat{}S1} with the strata}\\label{section:intersection}\nIn this section we shall answer the question as to which strata the space of invariant almost complex structures for a given action on $(S^2 \\times S^2,\\omega_\\lambda)$ or $(\\CP^2\\# \\overline{\\CP^2}, \\omega_\\lambda)$ intersects.\\\\\n\nIn what follows, we will use the following simple observation several times. Let $(M,\\omega)$ be a simply connected symplectic $4$-manifold. 
There is a left-exact sequence\n\\begin{equation}\n1 \\to \\Symp_h(M,\\omega) \\to \\Symp(M,\\omega) \\to \\Aut_{c_1,\\omega}\\left(H_2(M,\\mathbb{Z})\\right)\\label{Sequence:ActionOnHomology}\n\\end{equation}\nwhere $\\Symp_h(M,\\omega)$ is the subgroup of symplectomorphisms acting trivially on homology, and where $\\Aut_{c_1,\\omega}\\left(H_2(M,\\mathbb{Z})\\right)$ is the group of automorphisms of $H_2(M,\\mathbb{Z})$ that preserve the intersection product and the Poincaré duals of the cohomology classes $c_1(TM)$ and $[\\omega]$. This latter group is trivial for $(\\CP^2\\# \\overline{\\CP^2},\\omega_\\lambda)$ with $\\lambda \\geq 1$ and for $(S^2 \\times S^2,\\omega_\\lambda)$ with $\\lambda > 1$. In the case of $(S^2 \\times S^2,\\omega_\\lambda)$ with $\\lambda = 1$, the group $\\Aut_{c_1,\\omega}\\left(H_2(M,\\mathbb{Z})\\right)$ is equal to $\\mathbb{Z}_2$ and is generated by the automorphism induced by the symplectomorphism that swaps the two $S^2$ factors. Consequently, for any symplectically ruled rational surface, the above sequence is also right-exact and splits.\n\n \n\n\\begin{lemma}\\label{lemma:ActionOnHomology} We have the following equalities among symplectomorphism groups:\n\\begin{itemize}\n\\item $\\Symp(S^2 \\times S^2,\\omega_\\lambda) = \\Symp_h(S^2 \\times S^2,\\omega_\\lambda)\\ltimes\\mathbb{Z}_2$ when $\\lambda=1$,\n\\item $\\Symp(S^2 \\times S^2,\\omega_\\lambda) = \\Symp_h(S^2 \\times S^2,\\omega_\\lambda)$ when $\\lambda>1$, and\n\\item $\\Symp_h(\\CP^2\\# \\overline{\\CP^2},\\omega_\\lambda) = \\Symp(\\CP^2\\# \\overline{\\CP^2},\\omega_\\lambda)$ for all $\\lambda\\geq 1$.\n\\end{itemize}\\qed\n\\end{lemma}\n\n\n\\begin{prop}\\label{prop:ToricExtensionCorrespondance}\nFix a circle action $S^1(a,b;m)$. Then the space of invariant almost complex structures $\\mathcal{J}^{S^1}_{\\om_\\lambda}$ intersects the stratum $U_n$ iff the circle action $S^1(a,b; m)$ extends to the toric action $\\mathbb{T}^2_n$. 
\n\\end{prop}\n\\begin{proof}\nIf $\\lambda = 1$, the result is immediate as the only toric action is $\\mathbb{T}^2_0$. \n\nLet $\\lambda > 1$ and suppose the action $S^1(a,b;m)$ extends to $\\mathbb{T}^2_n$. This means that there exists a circle action $S^1(c,d; n) \\subset \\mathbb{T}^2_n$ which is equivariantly symplectomorphic to the action $S^1(a,b;m)$ via some symplectomorphism $\\phi$. Let $J_{n}$ denote the standard complex structure in the stratum $U_n$. By Lemma \\ref{lemma:ActionOnHomology}, when $\\lambda > 1$, the group $\\Symp_h(S^2 \\times S^2,\\omega_\\lambda)$ is equal to $\\Symp(S^2 \\times S^2,\\omega_\\lambda)$ so that $\\phi$ preserves homology. Consequently, $\\phi^*J_{n}$ belongs to the stratum $U_n$ and is invariant with respect to the $S^1(a,b;m)$ action, showing that the space $\\mathcal{J}^{S^1}_{\\om_\\lambda}$ intersects the stratum $U_n$.\n\nConversely, suppose that $J \\in \\mathcal{J}^{S^1}_{\\om_\\lambda} \\cap U_n$. If $n\\geq 1$, then there exists an invariant, embedded, symplectic sphere $C_{n}$ of self-intersection $-n$. Arguing as in Proposition~\\ref{prop:AtMostTwoExtensions} and Corollaries~\\ref{cor:CircleExtensionsWith_a=1} and~\\ref{cor:CircleExtensionsWith_a=-1}, we conclude that $S^1(a,b;m)$ must extend to the torus $\\mathbb{T}^2_n$. If $n=0$, there exist invariant spheres representing the homology classes $B$ and $F$ passing through a common fixed point. Again, this implies that $S^1(a,b;m)$ extends to the torus $\\mathbb{T}^2_0$. 
Alternatively, we can adapt the proofs of Lemma~\\ref{lemma:EvaluationFibrationConfigurations} in the present document and of Proposition~4.7 in~\\cite{Liat} to show that if an invariant $J$ is in the stratum $U_n$, then the circle action $S^1(a,b; m)$ extends to the toric action $\\mathbb{T}^2_n$.\n\\end{proof}\n\n\nThe following corollaries immediately follow from Proposition~\\ref{prop:AtMostTwoExtensions} and Corollaries~\\ref{cor:CircleExtensionsWith_a=1} and~\\ref{cor:CircleExtensionsWith_a=-1}.\n\n\\begin{cor}\\label{cor:lambda=1_IntersectingOnlyOneStratum}\nSuppose $\\lambda=1$. Then the space $\\mathcal{J}_{\\omega_1}$ of compatible almost complex structures on $(S^2 \\times S^2,\\omega_1)$ consists of a single stratum. In particular, any Hamiltonian circle action extends to the toric action $\\mathbb{T}^2_0$ and the subspace $\\mathcal{J}^{S^1}_{\\om_\\lambda}$ of $S^1$ invariant almost-complex structures is contractible. \\qed\n\\end{cor}\n\n\\begin{cor}\\label{cor:IntersectingOnlyOneStratum}\nSuppose $\\lambda >1$. Consider an $S^1(a,b;m)$ action on $(S^2 \\times S^2,\\omega_\\lambda)$ or $(\\CP^2\\# \\overline{\\CP^2},\\omega_\\lambda)$ depending on whether $m$ is even or odd. Under the following numerical conditions on $a,b,m,\\lambda$, the space $\\mathcal{J}^{S^1}_{\\om_\\lambda}$ intersects only the stratum $U_{m}$:\n\\begin{itemize}\n \\item when $a\\neq\\pm 1$;\n \\item when $b=0$ or $b=am$;\n \\item when $a = \\pm 1$ and $2 \\lambda \\leq |2b-m|+\\epsilon_{m}$.\\qed\n\\end{itemize}\n\\end{cor}\n\n\\begin{cor} \\label{cor:IntersectingTwoStrata}\nConsider the $S^1(\\pm 1,b;m)$ action on $(S^2 \\times S^2,\\omega_\\lambda)$ or $(\\CP^2\\# \\overline{\\CP^2},\\omega_\\lambda)$. Then $\\mathcal{J}^{S^1}_{\\om_\\lambda}$ intersects exactly two strata. 
More precisely,\n\\begin{enumerate}\n \\item if $a=1$, $b \\notin\\{0,m\\}$, $\\lambda >1$, and $2\\lambda > |2b-m|+\\epsilon_{m}$, then the space of $S^1(1,b;m)$-equivariant almost complex structures $\\mathcal{J}^{S^1}_{\\om_\\lambda}$ intersects the two strata $U_m$ and $U_{|2b-m|}$;\n \\item if $a=-1$, $b \\notin\\{0,-m\\}$, $\\lambda >1$, and $2\\lambda > |2b+m|+\\epsilon_{m}$, then the space of $S^1(-1,b;m)$-equivariant almost complex structures $\\mathcal{J}^{S^1}_{\\om_\\lambda}$ intersects the two strata $U_m$ and $U_{|2b+m|}$.\\qed\n\\end{enumerate} \n\\end{cor}\n\n\n\n\n\n\n\\section{Symplectic actions of compact abelian groups on~\\texorpdfstring{$\\mathbb{R}^4$}{R4}}\n\nIn order to study the action of the equivariant symplectomorphism group $\\Symp_h^{S^1}(S^2 \\times S^2,\\omega_\\lambda)$ on each invariant stratum $\\mathcal{J}^{S^1}_{\\om_\\lambda} \\cap U_k$, we will need to understand the equivariant topology of linearised symplectic actions. The main result is Theorem~\\ref{thm:EqGr}, which is an equivariant version of Gromov's Theorem on the contractibility of the group of compactly supported symplectomorphisms of star-shaped domains of $\\mathbb{R}^{4}$. This relies on Lemma~\\ref{lemma:remove} and Proposition~\\ref{prop:linearize}, which were proven by W. Chen in the unpublished manuscript \\cite{ChenUnpub}. For completeness, we shall reproduce their proofs here.\n\nLet $G$ be a compact group acting effectively and symplectically on $ \\mathbb{C}^2 = \\mathbb{R}^4$ with the symplectic form $\\omega_0:= dx_1 \\wedge dy_1 + dx_2 \\wedge dy_2$. We say it acts linearly if it acts as a subgroup of $\\U(2) \\subset \\Sp(4)$.\n\n\\begin{lemma}[Chen]\\label{lemma:remove}\nLet $G$ be a compact group acting linearly on $(\\mathbb{R}^4, \\omega_0)$. Suppose $V$ is a $G$-invariant, compact, star-shaped neighbourhood of $0$. 
Let $f: \\mathbb{R}^4 \\setminus V \\to \\mathbb{R}^4$ be a $G$-equivariant symplectic embedding which is the identity near infinity. Then, for every $G$-invariant neighbourhood $W$ of $V$, there exists a $G$-equivariant symplectomorphism $g:\\mathbb{R}^4 \\to \\mathbb{R}^4$ such that $g|_{\\mathbb{R}^4\\setminus W} = f$. \n\\end{lemma}\n\n\\begin{proof}\nAs $0 \\in int(V)$ and as $f$ is the identity near infinity, there exists $T>0$ such that $f(Tx) = Tx$ for all $x \\in \\mathbb{R}^4 \\setminus V$. Define $f_t(x) = \\frac{f(tx)}{t}$ for $1 \\leq t \\leq T$. As the $G$ action is linear, we observe that $f_t$ is equivariant for all $t \\in [1,T]$. By construction, $f_1 = f$, $f_T = \\id$ and $f_t^*\\omega_0 = \\omega_0$ for all $t$. Thus there are compact sets $V_t = f_t(V)$ and open neighbourhoods $W_t= f_t(W)$ of $V_t$ such that the restrictions $f_t: \\mathbb{R}^4 \\setminus V \\to \\mathbb{R}^4 \\setminus V_t$ and $f_t: \\mathbb{R}^4 \\setminus W \\to \\mathbb{R}^4 \\setminus W_t$ are diffeomorphisms. As $G$ acts linearly, each of the sets $W_t$ and $V_t$ is $G$-invariant. \n\nDefine $X_t$ as the vector field that satisfies $\\frac{d}{dt} f_t = X_t \\circ f_t$ and consider the one-form $\\alpha_t = i_{X_t}\\omega_0$. As $f_t$ is $G$-equivariant and symplectic, both $X_t$ and $\\alpha_t$ are $G$-invariant. Let $H_t: \\mathbb{R}^4 \\setminus V_t \\to \\mathbb{R}$ be a one parameter family of Hamiltonians that are $G$-invariant and that satisfy $\\alpha_t= dH_t$. Since $f_t$ is the identity near infinity, $H_t$ is constant near infinity, and we can take this constant to be $0$. \n\nFinally, we can take a family of $G$-invariant bump functions $\\rho_t: \\mathbb{R}^4 \\to [0,1]$ such that $\\rho_t \\equiv 0$ in a neighbourhood of $V_t$ and $\\rho_t \\equiv 1$ on $\\mathbb{R}^4 \\setminus W_t$. Then the Hamiltonian $\\rho_t H_t: \\mathbb{R}^4 \\to \\mathbb{R}$ is defined on the whole $\\mathbb{R}^4$ and is also $G$-invariant. 
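The existence of the invariant primitives $H_t$ follows from a standard argument which we sketch for the reader's convenience. Since $f_t^*\\omega_0 = \\omega_0$ for all $t$, Cartan's formula gives\n\\[\n0 = \\frac{d}{dt}\\, f_t^*\\omega_0 = f_t^*\\left(\\mathcal{L}_{X_t}\\omega_0\\right) = f_t^*\\left(d\\, i_{X_t}\\omega_0\\right) = f_t^*\\, d\\alpha_t\n\\]\nso that each $\\alpha_t$ is closed. As the complement of a compact star-shaped set in $\\mathbb{R}^4$ is simply connected, we can write $\\alpha_t = dH_t$ for some smooth family of Hamiltonians $H_t$. Averaging with respect to the Haar measure, $H_t \\mapsto \\int_G H_t \\circ g \\, dg$, then produces $G$-invariant primitives, since the invariance of $\\alpha_t$ implies that $d(H_t \\circ g) = \\alpha_t$ for every $g \\in G$. 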
The Hamiltonian isotopy $g_t$ generated by $\\rho_t H_t$ is $G$-equivariant for all $1\\leq t\\leq T$ and satisfies the properties $g_T = \\id$ and $g_1|_{\\mathbb{R}^4 \\setminus W} = f$. Consequently, $g_1$ is the symplectomorphism $g$ we were looking for.\n\\end{proof}\n\nAny unitary representation of a compact abelian group $G$ on $\\mathbb{C}^2$ induces a splitting into eigenspaces $\\mathbb{C}^2 = \\mathbb{C}_1 \\oplus \\mathbb{C}_2$. This simple fact turns out to be essential in our treatment of the equivariant Gromov's Theorem. Consequently, from now on, we assume that the group $G$ is abelian.\n\n\\begin{prop}[Chen]\\label{prop:linearize}\nLet $V\\subset\\mathbb{R}^{4}$ be a compact star-shaped neighbourhood of $\\{0\\}$ and let $\\omega$ be a symplectic form on $\\mathbb{R}^{4}$ such that $\\omega = \\omega_0$ outside some smaller open neighbourhood of the origin $U\\subset V$. Let $G$ be a compact abelian group acting on $(V,\\omega)$ via symplectomorphisms that are linear near the boundary of $V$. Then the $G$ action is conjugate to a linear symplectic action of $G$ on $(V,\\omega_0)$ by a diffeomorphism $\\Phi$ which is the identity near the boundary and which satisfies $\\Phi^*\\omega = \\omega_0$.\n\\end{prop}\n\n\\begin{proof}\nIdentify $\\mathbb{R}^4$ with $\\mathbb{C}^2$. The linear action near $\\partial V$ extends to a unitary action on $\\mathbb{R}^4$. As $G$ is abelian, this linear action splits into two eigenspaces $\\mathbb{C}_{1} \\oplus \\mathbb{C}_{2}$. We can then compactify $\\mathbb{C}^{2}$ along these eigenspaces to obtain $\\mathbb{C}P^{1}\\times \\mathbb{C}P^{1}$ (diffeomorphic to $S^2 \\times S^2$) equipped with two symplectic forms $\\tilde\\omega$ and $\\tilde\\omega_{0}$ induced from $\\omega$ and $\\omega_{0}$. 
The compactified space inherits two actions of $G$, namely, $\\rho: G \\hookrightarrow \\Symp(S^2 \\times S^2,\\tilde\\omega)$ coming from extending the $G$ action on $\\mathbb{R}^{4}$ and $\\rho_{\\lin}:G \\hookrightarrow \\Symp(S^2 \\times S^2,\\tilde\\omega_0)$ which extends the linear action near $\\partial V$. \n\\\\\n\n\n\nBy construction, there exists a star-shaped subset $V_1 \\subset V$ such that both actions $\\rho$ and $\\rho_{\\lin}$ agree on $\\mathbb{C}^2 \\setminus V_1$. Note that the point at infinity $p:=(\\infty,\\infty)$ is a fixed point for both the action $\\rho$ and $\\rho_{\\lin}$. Choose a $\\tilde\\omega$-compatible $G$-invariant almost complex structure $J$ on $S^2 \\times S^2$ which is standard in some neighborhood $X_\\epsilon:= (S^2 \\times D_\\epsilon) \\cup (D_\\epsilon \\times S^2)$ of the wedge $(S^2\\times\\{\\infty\\})\\vee (\\{\\infty\\}\\times S^2)$ (where $D_\\epsilon$ is a small disk of radius $\\epsilon$ at $\\infty$). As $(S^2\\times\\{\\infty\\})$ and $(\\{\\infty\\}\\times S^2)$ are $J$-holomorphic spheres representing the classes $B$ and $F$ respectively, we conclude that $J$ belongs to the stratum $U_{0}$ so that there exist foliations $\\mathcal{B}_J$ and $\\mathcal{F}_J$ by embedded $J$-holomorphic spheres in the classes $B$ and $F$. Given any $q=(z,w) \\in S^2 \\times S^2$, let $u_w$ denote the unique curve in the class $B$ passing through $(0,w)$ and, similarly, let $v_z$ be the curve in class $F$ passing through $(z,0)$. We can define a self-map of $S^2 \\times S^2$ by setting\n\\begin{align*}\n\\Psi_J: S^2 \\times S^2 &\\longrightarrow S^2 \\times S^2 \\\\\n(z,w) &\\longmapsto u_w \\cap v_z\n\\end{align*}\nAs explained in~\\cite{McD}, Chapter 9, this map $\\Psi_J$ is a diffeomorphism. Moreover, $\\Psi_J$ is equivariant with respect to the linear action $\\rho_{\\lin}$ on the domain and the action $\\rho$ on the codomain. 
Furthermore, as $J$ is the standard complex structure in a neighbourhood $X_\\epsilon$ of the wedge, $\\Psi_J$ is the identity near the base point $p$. We now modify $\\Psi_J$ in order to make it the identity in the neighbourhood $X_\\epsilon$. As $J$ is the standard complex structure on $X_\\epsilon$, we can write\n\\[\n\\Psi_J (z,w)= \n\\begin{cases} \n(z,\\phi_2(z,w)) & \\text{~if~} z \\in D_\\epsilon \\subset S^2\\\\\n(\\phi_1(z,w),w) & \\text{~if~} w \\in D_\\epsilon \\subset S^2\n\\end{cases}\n\\]\nwhere $\\phi_1(z,0)=z$ and $\\phi_2(0,w)=w$ for all $z,w \\in S^2$. Choose a $G$-equivariant (for the $G$ action on $\\{\\infty\\}\\times S^2$) smooth map $\\beta_1:S^2 \\longrightarrow S^2$ such that $\\beta_1(z) = z$ for all $z \\in D_\\epsilon$ and $\\beta_1 =\\infty$ in a neighbourhood $D_\\delta$ contained in $D_\\epsilon$ and such that $\\det(d\\beta_{1}(z)) \\geq 0 ~\\forall ~z \\in S^2$.\nSimilarly define a $G$-equivariant map (for the $G$ action on $S^2\\times\\{\\infty\\}$) $\\beta_2: S^2 \\to S^2$ satisfying the analogous conditions. Then we define a modification of $\\Psi_{J}$ by setting\n\\[\n\\Psi^{\\prime}_J(z,w) = \n\\begin{cases}\n\\Psi_J(z,w) &\\text{~if~} (z,w) \\in (S^2 \\times S^2) \\setminus X_\\epsilon \\\\\n(z,\\phi_2(\\beta_1(z),w)) &\\text{~if~} z \\in D_\\epsilon \\\\\n(\\phi_1(z,\\beta_2(w)),w) &\\text{~if~} w \\in D_\\epsilon\n\\end{cases}\n\\]\nThis modification $\\Psi^{\\prime}_J$ is the identity in a smaller neighbourhood $X_\\delta:=(S^2 \\times D_\\delta) \\cup (D_\\delta \\times S^2)\\subset X_{\\epsilon} $.\\\\\n\nThe submanifolds $\\{z\\} \\times S^2$ and $S^2 \\times \\{w\\}$ for all $z,w \\in S^2$ are symplectic for the form ${\\Psi'_{\\!J}}^* \\tilde\\omega $ and hence $\\tilde\\omega_0 \\wedge {\\Psi'_{\\!J}}^* \\tilde\\omega>0$. Thus the path $\\omega_t:= t\\tilde\\omega_0 +(1-t){\\Psi'_{\\!J}}^* \\tilde\\omega$ is a path of non-degenerate symplectic forms for all $t \\in [0,1]$. 
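For completeness, let us sketch the equivariant Moser argument used in the next step. The forms $\\tilde\\omega_0$ and ${\\Psi'_{\\!J}}^* \\tilde\\omega$ agree on $X_\\delta$, which contains the spheres $S^2\\times\\{\\infty\\}$ and $\\{\\infty\\}\\times S^2$ generating $H_2(S^2 \\times S^2,\\mathbb{Z})$, so the closed form $\\tilde\\omega_0 - {\\Psi'_{\\!J}}^* \\tilde\\omega$ has trivial periods and is therefore exact, say\n\\[\n\\tilde\\omega_0 - {\\Psi'_{\\!J}}^* \\tilde\\omega = d\\sigma.\n\\]\nSince both forms are invariant under the linear $G$ action $\\rho_{\\lin}$, averaging $\\sigma \\mapsto \\int_G g^*\\sigma \\, dg$ with respect to the Haar measure allows us to choose $\\sigma$ to be $G$-invariant. The Moser vector fields $Y_t$ defined by $i_{Y_t}\\omega_t = -\\sigma$ are then $G$-invariant, so their flow consists of $G$-equivariant diffeomorphisms, and the time-one map provides the required equivariant diffeomorphism; with a little more care one arranges $\\sigma$ to vanish on $X_\\delta$, so that the isotopy is stationary there. 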
We can then apply an equivariant Moser isotopy to get an equivariant diffeomorphism $\\alpha$ of $S^2 \\times S^2$ such that $\\alpha^* {\\Psi'_{\\!J}}^* \\tilde\\omega = \\tilde\\omega_0$. Further, as ${\\Psi'_{\\!J}}^* \\tilde\\omega= \\tilde \\omega_0$ on $X_\\delta$, the restriction of $\\alpha$ to $X_\\delta$ is the identity. We then set $\\tilde \\Psi_J:= \\Psi'_{\\!J} \\circ \\alpha$.\n\\\\\n\nThe restriction $\\tilde\\Psi_J|_{\\mathbb{C}^2}: \\mathbb{C}^2 \\to \\mathbb{C}^2$ is $G$-equivariant with respect to the action $\\rho_{\\lin}$ on the domain and $\\rho$ on the codomain. As noted before, there exists a star-shaped subset $V_1 \\subset V$ such that the actions $\\rho$ and $\\rho_{\\lin}$ agree on $\\mathbb{C}^2 \\setminus V_1$, and we can choose $V_1$ such that $0 \\in int(V_1)$. We now apply Theorem \\ref{lemma:remove} to the map $f:={\\tilde\\Psi_J}^{-1}|_{\\mathbb{C}^2 \\setminus V_1}$, where we choose $W$ in Theorem \\ref{lemma:remove} to be a $G$-invariant open subset of $V$ which contains $V_1$. Let $g:\\mathbb{C}^2 \\to \\mathbb{C}^2$ be an equivariant symplectomorphism as in Theorem \\ref{lemma:remove} such that $g|_{\\mathbb{R}^4\\setminus W} = f$. Then the map $\\Phi:= \\left(\\tilde\\Psi_J \\circ g\\right)\\Big|_V: V \\to V$ is the identity near the boundary, satisfies $\\Phi^*\\omega = \\omega_0$, and is $G$-equivariant, where the action of $G$ on the domain of $\\Phi$ is the linear action $\\rho_{\\lin}$ while on the range of $\\Phi$ it is the $G$ action $\\rho$ on $V$ that we started out with. \nThus $\\Phi$ is the required equivariant symplectomorphism that linearizes the given $G$ action and takes the form $\\omega$ to $\\omega_0$.\n\\end{proof}\n\nConsider a polydisk $D^2 \\times D^2$ whose product structure is compatible with the eigenspace decomposition $\\mathbb{R}^{4}=\\mathbb{C}^{2}=\\mathbb{C}_{1}\\oplus \\mathbb{C}_{2}$. 
Let $\\omega$ be a symplectic form such that $\\omega = \\omega_0$ outside of some smaller polydisk of the form $D_r \\times D_r \\subset D^2 \\times D^2$ for some radius $r$. \n\n\\begin{thm}[Equivariant Gromov's Theorem for Polydisks]\\label{thm:EquivariantGromovForPolydisks}\nLet $G$ be an abelian group. Let $\\omega$ be a symplectic form on $D^2 \\times D^2$ which is equal to $\\omega_0$ near the boundary. Let $G$ act symplectically on $(D^2 \\times D^2 ,\\omega)$ and suppose the action is linear near the boundary. Then the group $\\Symp_c^{G}(D^2\\times D^2, \\omega)$ of equivariant symplectomorphisms that are equal to the identity near the boundary of $D^2 \\times D^2$ is non-empty and contractible.\n\\end{thm}\n\\begin{proof}\nAs the $G$ action outside of $D_r \\times D_r \\subset \\mathbb{R}^4$ is linear, we can extend this $G$ action to the whole of $\\mathbb{R}^4$. Compactifying each factor $\\mathbb{C}_{i}$ to an $S^2$, this $G$ action then extends to a symplectic action on $S^2 \\times S^2$ with respect to the form $\\tilde\\omega$ induced by $\\omega$. \n\\\\\n\nBy Proposition~\\ref{prop:linearize} we can conjugate our $G$ action by a symplectomorphism which is the identity near the boundary to get a linear $G$ action on the whole of $D^2 \\times D^2$. As any two conjugate topological subgroups are homeomorphic, we shall just study the homotopy type of the compactly supported equivariant symplectomorphism group $\\Symp_{c,\\lin}^G(D^2 \\times D^2,\\omega)$ for the linear $G$ action.\n\\\\\n\nLet $\\mathcal{J}^G_{\\omega}$ be the non-empty and contractible space of all equivariant almost complex structures on $D^2 \\times D^2$ that are compatible with $\\omega$ and are the standard split almost complex structure $J_0$ outside of $D_r \\times D_r$. As these almost complex structures are standard outside of $D_r \\times D_r$, they extend to $S^2 \\times S^2$ and are compatible with $\\tilde\\omega$. 
Further, once we pick the base point $p=(\\infty,\\infty)\\in S^2\\times S^2$ and identify $D^2\\times D^2$ with the complement of a standard neighborhood $X_\\epsilon:= (S^2 \\times D_\\epsilon) \\cup (D_\\epsilon \\times S^2)$ of the wedge $(S^2\\times\\{\\infty\\})\\vee (\\{\\infty\\}\\times S^2)\\subset S^2\\times S^2$ (note that the wedge point $(\\infty,\\infty)$ is a fixed point for the extended action of $G$ on $S^2 \\times S^2$), any element $J\\in \\mathcal{J}^G_{\\omega}$ extends to an equivariant almost complex structure on $S^2 \\times S^2$ which is the standard product complex structure on the neighbourhood $X_\\epsilon$ of the wedge $(S^2\\times\\{\\infty\\})\\vee (\\{\\infty\\}\\times S^2)\\subset S^2\\times S^2$. Conversely, any such equivariant almost complex structure compatible with $\\tilde\\omega$ that is standard in some neighbourhood of the wedge $(S^2\\times\\{\\infty\\})\\vee (\\{\\infty\\}\\times S^2)\\subset S^2\\times S^2$ gives us an element of $\\mathcal{J}^G_{\\omega}$.\n\\\\\n\nIn order to show that $\\Symp_{c,\\lin}^G(D^2 \\times D^2,\\omega)$ is contractible, we shall prove that it is homotopy equivalent to the contractible space $\\mathcal{J}^G_{\\omega}$. \n\\\\\n\n\n\n\n\nDefine the map $\\tilde \\Psi_J$ as in the proof of Proposition~\\ref{prop:linearize}. Thus we have a map \n\\begin{align*}\n\\tau: \\mathcal{J}^G_\\omega &\\longrightarrow \\Symp_{c,\\lin}^G(D^2 \\times D^2,\\omega) \\\\ \nJ &\\longmapsto \\tilde\\Psi_J \n\\end{align*}\n\nTo prove that $\\tau$ is a\nhomotopy equivalence we construct a homotopy inverse as follows. 
Fix a $J' \\in \\mathcal{J}^G_\\omega$; the homotopy inverse $\\beta$ is then defined as\n\n\\begin{align*}\n\\beta: \\Symp_{c,\\lin}^G(D^2 \\times D^2,\\omega) &\\longrightarrow \\mathcal{J}^G_\\omega \\\\ \n\\phi &\\longmapsto \\phi_{*}J'\n\\end{align*}\n\nBy construction, $\\tau(\\beta(\\phi)) = \\tilde\\Psi_{\\phi_* J'}$ is isotopic to $\\phi$ through compactly supported equivariant symplectomorphisms, so $\\tau \\circ \\beta$ is homotopic to the identity, while $\\beta \\circ \\tau$ is homotopic to the identity because $\\mathcal{J}^G_\\omega$ is contractible.\n\\end{proof}\n\nWe shall repeatedly use the following theorem in our analysis of the homotopy type of the equivariant symplectomorphism groups of $S^2 \\times S^2$.\n\n\n\\begin{thm}(Equivariant Gromov's Theorem)\\label{thm:EqGr}\nLet $(V,\\omega)$ be a compact star-shaped symplectic domain of $\\mathbb{R}^4$ such that $0 \\in int(V)$, and let $\\omega$ be such that $\\omega = \\omega_0$ near the boundary of $V$. Let $G$ be a compact abelian group that acts symplectically on $(V,\\omega)$, sends the boundary to itself, and acts linearly near the boundary. Then the space of equivariant symplectomorphisms that act as the identity near the boundary, denoted by $\\Symp_c^G(V,\\omega)$, is non-empty and contractible.\n\\end{thm}\n\n\\begin{proof}\nBy Proposition~\\ref{prop:linearize} we can conjugate our $G$ action by a symplectomorphism which is the identity near the boundary to get a linear $G$ action on the whole of $V$, and such that it takes the form $\\omega$ to $\\omega_0$. As the homotopy types of two conjugate equivariant symplectomorphism groups are the same (they are in fact homeomorphic), we shall just study the homotopy type of the compactly supported equivariant symplectomorphism group for the linear $G$ action on $(V,\\omega_0)$. 
We denote this group by $\\Symp_{c,\\lin}^G(V,\\omega_0)$.\n\\\\\n\nChoose real numbers $r>0$ and $T>1$ such that $\\frac{1}{T}V \\subset D_r \\times D_r \\subset int(V)$, where $D_r \\times D_r$ denotes the polydisk of radius $r$,\nand consider the family of maps $F_t : \\Symp_{c,\\lin}^G(V,\\omega_0) \\to \\Symp_{c,\\lin}^G(V,\\omega_0)$ for $1 \\leq t \\leq T$ \ndefined by $F_t(\\phi)(x) = \\frac{\\phi(tx)}{t} $ for all $x \\in V$.\n\\\\\n\nThen $F_t$ is $G$ equivariant for all $1 \\leq t \\leq T$, $F_1(\\phi) = \\phi$ for all $\\phi \\in \\Symp_{c,\\lin}^G(V,\\omega_0)$, $F_t(\\id) = \\id$ for all $t$, and $F_T \\left(\\Symp_{c,\\lin}^G(V,\\omega_0)\\right) \\subset \\Symp_{c,\\lin}^G(D_r \\times D_r,\\omega_0)$.\n\\\\\n\nThe proof of Theorem \\ref{thm:EquivariantGromovForPolydisks} tells us that $\\Symp_{c,\\lin}^G(D_r \\times D_r,\\omega_0)$ is contractible, so that the inclusion $i:\\Symp_{c,\\lin}^G(D_r \\times D_r,\\omega_0) \\hookrightarrow \\Symp_{c,\\lin}^G(V,\\omega_0)$ is null-homotopic. Hence we can fix a homotopy $\\alpha_t$ for $T \\leq t \\leq T+1$ such that $\\alpha_T = i$ and $\\alpha_{T+1}(\\phi) = \\id$ for all $\\phi \\in \\Symp_{c,\\lin}^G(D_r \\times D_r,\\omega_0)$. 
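Note that each $F_t(\\phi)$ is indeed symplectic for $\\omega_0$ (a one-line check): writing $\\mu_t(x) = tx$ for the dilation, so that $F_t(\\phi) = \\mu_t^{-1} \\circ \\phi \\circ \\mu_t$ and $\\mu_t^* \\omega_0 = t^2 \\omega_0$, one computes\n\\[\n\\bigl(F_t(\\phi)\\bigr)^* \\omega_0 = \\mu_t^* \\,\\phi^* \\,(\\mu_t^{-1})^* \\omega_0 = \\mu_t^* \\,\\phi^* \\bigl(t^{-2}\\omega_0\\bigr) = t^{-2}\\, \\mu_t^* \\omega_0 = \\omega_0.\n\\]\nMoreover, $F_t(\\phi)$ is equivariant because the linear $G$ action commutes with dilations, and it is supported in $\\frac{1}{t}V$, hence compactly supported in $V$.\n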
Then the concatenation \n$$\\Tilde{F_t}:=\\begin{cases}\nF_t & ~~1 \\leq t\\leq T\\\\\n\\alpha_t \\circ F_T & ~~ T\\leq t \\leq T+1\n\\end{cases}$$ \ngives us a deformation retraction of $\\Symp_{c,\\lin}^G(V,\\omega_0)$ onto $\\id$, and hence $\\Symp_{c}^G(V,\\omega)$ is contractible.\n\\end{proof}\n\n\\begin{remark}\nTo our knowledge, it is not known whether an equivariant version of Gromov's Theorem holds true for non-abelian compact groups.\n\\end{remark}\n\n\\section{Homotopical description of \\texorpdfstring{$\\mathcal{J}^{S^1}_{\\om_\\lambda} \\cap U_k$}{J\\hat{}S1\\_k}} \nWe now consider the action of the group of equivariant symplectomorphisms $\\Symp_{h}^{S^1}(S^2 \\times S^2,\\omega_\\lambda)$ on the contractible space $\\mathcal{J}^{S^1}_{\\om_\\lambda}$ of invariant, compatible, almost-complex structures, and we investigate the orbit-type stratification of this action up to homotopy. In the case of the nontrivial bundle $\\CP^2\\# \\overline{\\CP^2}$, the analysis of the action of the centralizer on the stratification will be postponed to Section~\\ref{Chapter-CCC}.\n\n\n\\subsection{Notation}\\label{not}\n\n\nWe shall use the following notation in the rest of the document. Let $M$ denote either of the manifolds $S^2 \\times S^2$ or $\\CP^2\\# \\overline{\\CP^2}$. Let $G$ be a compact abelian group acting symplectically on $(M,\\omega_\\lambda)$, and let $p_0$ be a fixed point of the group action. Given a $G$-invariant symplectic curve $C$ and an $\\omega_\\lambda$-orthogonal $G$-invariant sphere $\\overline{F}$ in the homology class $F$ that intersects $C$ at the point $p_0$, we define the following spaces: \n\n\\begin{itemize}\n\\item $N(C)$:= The symplectic normal bundle to the symplectic submanifold $C$. 
\n\\item $\\Symp^{G}_h(M,\\omega_\\lambda)$ := The group of $G$ equivariant symplectomorphisms of $(M,\\omega_\\lambda)$ that act trivially on homology.\n \\item $\\Stab^{G}(C)$ := The group of all $\\phi \\in \\Symp^{G}_h(M,\\omega_\\lambda)$ such that $\\phi(C) = C$, that is, such that $\\phi$ \\emph{stabilises} $C$ but does not necessarily act as the identity on $C$.\n \\item $\\Fix^{G}(C)$ := The group of all $\\phi \\in \\Symp^{G}_h(M,\\omega_\\lambda)$ such that $\\phi|_C = \\id$, that is, such that $\\phi$ \\emph{fixes $C$ pointwise}. \n \\item $\\Fix^{G}(N(C))$:= The group of all $\\phi \\in \\Symp^{G}_h(M,\\omega_\\lambda)$ such that $\\phi|_C = \\id$ and $d\\phi|_{N(C)}: N(C) \\to N(C)$ is the identity on $N(C)$.\n \\item $\\Gauge1^{G}(N(C))$:= The group of $G$-equivariant symplectic gauge automorphisms of the symplectic normal bundle of $C$.\n \\item $\\Gauge1^{G}(N(C \\vee \\overline{F}))$ := The group of $G$-equivariant symplectic gauge automorphisms of the symplectic normal bundle of the crossing divisor $C \\vee \\overline{F}$ that are the identity in a neighbourhood of the wedge point. \n \\item $\\mathcal{S}^{G}_{K}$ := The space of unparametrized $G$-invariant symplectically embedded spheres in the homology class $K$. 
\n \\item $\\mathcal{S}^{G}_{K,p_0}$:= The space of unparametrized $G$-invariant symplectically embedded spheres in the homology class $K$ passing through $p_0$.\n \\item $\\mathcal{J}_{\\omega_\\lambda}^{G}(C)$ := The space of $G$-equivariant $\\omega_\\lambda$ compatible almost complex structures such that the curve $C$ is holomorphic.\n \\item $\\Symp^{G}(C)$:= The space of all $G$-equivariant symplectomorphisms of the curve $C$.\n \\item $\\Fix^{G}(N(C \\vee {\\overline{F}}))$ := The space of all $G$-equivariant symplectomorphisms that are the identity in a neighbourhood of $C \\vee {\\overline{F}}$.\n \\item $\\Symp^{G}({\\overline{F}}, N(p_0))$ := The group of $G$-equivariant symplectomorphisms of the sphere $\\overline{F}$ that are the identity on an open subset of $\\overline{F}$ around $p_0$. \n \\item $\\overline{\\mathcal{S}^{G}_{F,p_0}}$:= The space of unparametrized $G$-invariant symplectic spheres in the homology class $F$ that are equal to a fixed curve ${\\overline{F}}$ in a neighbourhood of $p_0$.\n \\item $\\Symp^{G}_{p_0,h}(M,\\omega_\\lambda)$:= The group of all $\\phi \\in \\Symp^{G}_{h}(M,\\omega_\\lambda)$ fixing $p_0$.\n \\item $\\Stab^{G}_{p_0}(C)$:= The group of all $\\phi \\in \\Stab^G(C)$ such that $\\phi(p_0) = p_0$. \n\\end{itemize}\n\nAll the above spaces are equipped with the $C^\\infty$ topology.\n\n\\subsection{Case 1: \\texorpdfstring{$\\Symp^{S^1}_h(S^2 \\times S^2,\\omega_\\lambda)$}{Symp(S2xS2)} action on \\texorpdfstring{$\\mathcal{J}^{S^1}_{\\om_\\lambda} \\cap U_{2s}$}{J\\hat{}S1\\_2k} with \\texorpdfstring{$s \\neq 0$}{s>0}}\\label{section:ActionOnU_2k}\n\nLet $\\lambda > 1$ and consider an $S^1$-action $S^1(a,b;m)$ on $(S^2 \\times S^2,\\omega_\\lambda)$ with $m=2k$. Let ${\\mathcal{S}^{S^1}_{D_{2s}}}$ denote the space of all $S^1$-invariant symplectically embedded spheres in the class $D_{2s}=B-sF$. 
We shall assume that ${\\mathcal{S}^{S^1}_{D_{2s}}}$ is non-empty which, by Theorems~\\ref{cor:IntersectingOnlyOneStratum} and~\\ref{cor:IntersectingTwoStrata}, means that $2s=m$ or $2s=|2b\\pm m|$ depending on $a$. \n\nWe first show that ${\\mathcal{S}^{S^1}_{D_{2s}}}$ is a homogeneous space under the natural action of $\\Symp^{S^1}_h(S^2 \\times S^2,\\omega_\\lambda)$. To this end, we observe that among the two fixed points on the curve in class $D_{2s}$, Table~\\ref{table_weights} shows that at least one fixed point, say $p$, has weights $(w_1,w_2)$ with $w_1\\neq w_2$. It follows that if an invariant curve in the class of the fiber $F$ intersects a curve in the class $D_{2s}$ at $p$, then the two curves intersect $\\omega_\\lambda$-orthogonally. Let $\\mathcal{C}(D_{2s}\\vee F,p)^{S^1}$ be the space of invariant configurations made of curves in classes $D_{2s}$ and $F$ intersecting orthogonally at $p$. \n\n\\begin{lemma}\\label{lemma:EvaluationFibrationConfigurations} \nThe evaluation map at a standard configuration $\\overline{D}_{2s}\\vee \\overline{F}$ through $p$\n\\[\\Symp^{S^1}_h(S^2 \\times S^2,\\omega_\\lambda) \\twoheadrightarrow \\mathcal{C}(D_{2s}\\vee F,p)^{S^1}\\]\nis a Serre fibration.\n\\end{lemma}\n\\begin{proof}\nWe first show that the action is transitive. Given an invariant configuration $C\\vee A \\in \\mathcal{C}(D_{2s}\\vee F,p)^{S^1}$, \nthe equivariant symplectic neighbourhood theorem implies that we can find an invariant neighbourhood $V$ of $C \\vee A$, an invariant neighbourhood $V'$ of $\\overline{D} \\cup \\overline{F}$, and an equivariant symplectomorphism $\\alpha:V \\to V'$. We claim that $\\alpha$ can be extended to an ambient diffeomorphism $\\beta$ of $S^2 \\times S^2$. Assume this for the moment. 
By construction, the pullback form $\\omega_\\beta:=\\beta^*\\omega_\\lambda$ is invariant under the conjugate action $\\beta^{-1}\\rho\\beta$.\n\nWe observe that the complement of the standard configuration $\\overline{D} \\cup \\overline{F}$ in $W_{2s}$ is symplectomorphic to $\\mathbb{C}^2$ with the symplectic form $\\omega_f:=\\frac{1}{2\\pi}\\partial\\bar\\partial f$ where $f=\\log\\left(\\left(1+||w||^2\\right)^{\\lambda}\\left(1+||w||^{4k}+||z||^2\\right)\\right)$, see~\\cite[Lemma~3.5]{abreu}. Under this identification, the standard $\\mathbb{T}^2_{2s}$ action on $W_{2s}$ becomes linear on $\\mathbb{C}^2$. It follows that near infinity, the form $\\omega_\\beta$ is equal to $\\omega_f$ and the action $\\beta^{-1}\\rho\\beta$ is linear. By Proposition~\\ref{prop:linearize}, there is an equivariant symplectomorphism $\\gamma$ that is equal to the identity near infinity, and that identifies $(\\mathbb{C}^2,\\omega_f, \\rho)$ with $(\\mathbb{C}^2,\\beta^*\\omega_\\lambda,\\beta^{-1}\\rho\\beta)$. By construction, the equivariant symplectomorphism $\\phi := \\gamma \\circ \\beta$ takes the configuration $C\\vee A$ to $\\overline{D}\\vee\\overline{F}$. \n\nIt remains to see that the local diffeomorphism $\\alpha$ can be extended to an ambient diffeomorphism $\\beta$ of $S^2 \\times S^2$. By the Isotopy Extension Theorem (see~\\cite[Theorem~1.4, p.180]{Hi}), it suffices to show that any two configurations of embedded spheres in the classes $D_{2s}$ and $F$ intersecting transversely and positively are isotopic. In turn, this follows from the fact that any two $F$-foliations corresponding to two almost-complex structures $J$ and $J'$ are diffeotopic, and that any two sections of the product $S^2 \\times S^2$ are diffeotopic through sections if and only if they are homotopic. 
This shows that $\\Symp^{S^1}_h(S^2 \\times S^2,\\omega_\\lambda)$ acts transitively on $\\mathcal{C}(D_{2s}\\vee F,p)^{S^1}$.\n\nTo prove the homotopy lifting property, consider any family of maps $\\gamma: D^n \\times [0,1] \\rightarrow \\mathcal{C}(D_{2s}\\vee F,p)^{S^1}$ from an $n$-dimensional disk $D^n$ to $\\mathcal{C}(D_{2s}\\vee F,p)^{S^1}$, and choose a lift $\\overline{\\gamma_0}:D^n \\rightarrow \\Symp^{S^1}_h(S^2 \\times S^2,\\omega_\\lambda)$ of $\\gamma_0$. Since the complement of a configuration is contractible, the equivariant version of Banyaga's Extension Theorem for families implies that there exists a lift $\\overline{\\gamma}: D^n \\times [0,1] \\rightarrow \\Symp^{S^1}_h(S^2 \\times S^2,\\omega_\\lambda) $ extending $\\overline{\\gamma_0}$. \n\\[\n\\begin{tikzcd}\nD^n \\times \\{0\\} \\arrow[d,hookrightarrow] \\arrow[r,\"\\overline{\\gamma_0}\"] &\\Symp^{S^1}_h(S^2 \\times S^2,\\omega_\\lambda) \\arrow[d,\"\\theta\"] \\\\\n D^n \\times [0,1]\\arrow[r,\"\\gamma\"] \\arrow[ur,dashrightarrow,\"\\exists ~ \\overline{\\gamma}\"] &\\mathcal{C}(D_{2s}\\vee F,p)^{S^1}\n\\end{tikzcd}\n\\]\nAlternatively, one can apply the equivariant Gromov-Auroux Lemma~\\ref{Au} to show the existence of the lift $\\overline{\\gamma}$. In both cases, this concludes the proof.\n\\end{proof}\n\n\\begin{cor}\\label{trans} \nFix the action $S^1(a,b;m)$. Then $\\Symp^{S^1}_h(S^2 \\times S^2,\\omega_\\lambda)$ acts transitively on ${\\mathcal{S}^{S^1}_{D_{2s}}}$.\\qed\n\\end{cor}\n\n\n\n\n\\begin{lemma}\\label{first}\nLet $\\overline{D}$ be an invariant symplectic sphere in the class $B - sF$ for which $\\mathcal{S}^{S^1}_{D_{2s}}$ is nonempty. 
Then the evaluation map\n\\begin{align*}\n \\theta: \\Symp^{S^1}_h(S^2 \\times S^2,\\omega_\\lambda) &\\twoheadrightarrow {\\mathcal{S}^{S^1}_{D_{2s}}} \\\\\n \\phi &\\mapsto \\phi(\\overline{D})\n\\end{align*}\nis a Serre fibration with fibre over $\\overline{D}$ given by\n\\[\\Stab(\\overline{D}):= \\left\\{ \\phi \\in \\Symp^{S^1}(S^2 \\times S^2,\\omega_\\lambda)~|~ \\phi(\\overline{D}) = \\overline{D}\\right\\}\\]\n\\end{lemma}\n\\begin{proof}\nThe action is transitive by Corollary~\\ref{trans}. The homotopy lifting property follows from Lemma~\\ref{Au} as in the proof of Lemma~\\ref{lemma:EvaluationFibrationConfigurations}. Alternatively, one can also note that the action map factors through the restriction map \n\\[\\mathcal{C}(D_{2s}\\vee F,p)^{S^1}\\to\\mathcal{S}^{S^1}_{D_{2s}}\\]\nwhich is itself a fibration. To see this, note that the restriction map fits into a commuting diagram\n\\[\n\\begin{tikzcd}\n\\mathcal{J}^{S^1}_{\\om_\\lambda}\\arrow[r,\"f_1\"]\\arrow[rd,\"f_2\"] & \\mathcal{C}(D_{2s}\\vee F,p)^{S^1} \\arrow[d]\\\\\n& \\mathcal{S}^{S^1}_{D_{2s}}\n\\end{tikzcd}\n\\]\nwhere the maps $f_1$ and $f_2$ are fibrations. Observe that the map $f_1$ is well defined because the weights at the chosen fixed point $p$ are not equal. Hence, for any choice of invariant almost complex structure $J \\in \\mathcal{J}^{S^1}_{\\om_\\lambda}$, the unique invariant $J$-holomorphic curve $C$ in class $D_{2s}$ intersects the unique invariant $J$-holomorphic fiber $\\omega_\\lambda$-orthogonally at~$p$. \n\\end{proof}\n\n\n\n\n\n\\begin{remark}\nAs both $\\Symp^{S^1}_h(S^2 \\times S^2,\\omega_\\lambda)$ and $\\mathcal{S}^{S^1}_{D_{2s}}$ can be shown to be CW-complexes, we see from Theorem 1 in \\cite{Serrefib} (with proof corrected in \\cite{error}) that a Serre fibration in which the total space and base space are both CW complexes is necessarily a Hurewicz fibration. 
Thus the map $\\theta: \\Symp^{S^1}_h(S^2 \\times S^2,\\omega_\\lambda) \\twoheadrightarrow {\\mathcal{S}^{S^1}_{D_{2s}}}$ is in fact a Hurewicz fibration, and hence the fibre over any $D \\in \\mathcal{S}^{S^1}_{D_{2s}}$ is homotopy equivalent to $\\Stab(\\overline{D})$. \n\\end{remark}\n\nThe homotopy type of $\\Symp^{S^1}_h(S^2 \\times S^2,\\omega_\\lambda)$ is related to the stratum $\\mathcal{J}^{S^1}_{\\om_\\lambda} \\cap U_{2s}$ through the following sequence of fibrations. We use the symbol ``$\\simeq$\" to mean ``weakly homotopy equivalent\" throughout the rest of the document. In the fibrations below, we use the notation established in Section~\\ref{not}.\n\n\n\\[\\Stab^{S^1}(\\overline{D}) \\longrightarrow \\Symp^{S^1}_{h}(S^2 \\times S^2,\\omega_\\lambda) \\longtwoheadrightarrow {\\mathcal{S}^{S^1}_{D_{2s}}} \\mathbin{\\textcolor{blue}{\\xrightarrow{\\text{~~~$\\simeq$~~~}}}} \\mathcal{J}^{S^1}_{\\om_\\lambda} \\cap U_{2s}\\rule{0em}{2em}\\]\n\n\\[\\Fix^{S^1}(\\overline{D}) \\longrightarrow \\Stab^{S^1}(\\overline{D}) \\longtwoheadrightarrow \\Symp^{S^1}(\\overline{D}) \\mathbin{\\textcolor{blue}{\\xrightarrow{\\text{~~~$\\simeq$~~~}}}} S^1 ~\\text{or}~ \\SO(3)\\rule{0em}{2em}\\]\n\n\\[\\Fix^{S^1} (N(\\overline{D})) \\longrightarrow \\Fix^{S^1}(\\overline{D}) \\longtwoheadrightarrow \\Gauge1^{S^1}(N(\\overline{D})) \\mathbin{\\textcolor{blue}{\\xrightarrow{\\text{~~~$\\simeq$~~~}}}} S^1\\rule{0em}{2em}\\]\n \n\\[\\Stab^{S^1}(\\overline{F}) \\cap \\Fix^{S^1}(N(\\overline{D})) \\longrightarrow \\Fix^{S^1}(N(\\overline{D})) \\longtwoheadrightarrow \\overline{\\mathcal{S}^{S^1}_{F,p_0}} \\mathbin{\\textcolor{blue}{\\xrightarrow{\\text{~~~$\\simeq$~~~}}}} \\mathcal{J}^{S^1}(\\overline{D})\\simeq \\{*\\}\\rule{0em}{2em}\\]\n \n\\[\\Fix^{S^1}(\\overline{F}) \\longrightarrow \\Stab^{S^1}(\\overline{F}) \\cap \\Fix^{S^1}(N(\\overline{D})) \\longtwoheadrightarrow \\Symp^{S^1}(\\overline{F}, N(p_0)) 
\\mathbin{\\textcolor{blue}{\\xrightarrow{\\text{~~~$\\simeq$~~~}}}} \\left\\{*\\right\\}\\rule{0em}{2em}\\]\n\n\\[\\left\\{*\\right\\} \\mathbin{\\textcolor{blue}{\\xleftarrow{\\text{~~~$\\simeq$~~~}}}} \\Fix^{S^1}(N(\\overline{D} \\vee \\overline{F})) \\longrightarrow \\Fix^{S^1}(\\overline{F}) \\longtwoheadrightarrow \\Gauge1^{S^1}(N(\\overline{D} \\vee \\overline{F})) \\mathbin{\\textcolor{blue}{\\xrightarrow{\\text{~~~$\\simeq$~~~}}}} \\left\\{*\\right\\}\\rule{0em}{2em}\\]\nHere $\\overline{D} \\in {\\mathcal{S}^{S^1}_{D_{2s}}}$ is the given invariant curve and $\\overline{F}$ denotes the unique curve in class $F$ intersecting $\\overline{D}$ $\\omega_\\lambda$-orthogonally at $p_0$; the moment map images of these two invariant curves in the $2s^{\\,\\text{th}}$ Hirzebruch surface $W_{2s}$ are depicted below in red. In the second fibration, the group $\\Symp^{S^1}(\\overline{D})$ is homotopy equivalent to $\\SO(3)$ when the $S^1$ action fixes the curve $\\overline{D}$ pointwise. Otherwise, it is homotopy equivalent to~$S^1$. \n\\begin{figure}[H]\n\\centering\n\\begin{tikzpicture}\n\\draw[red] (0,1) -- (3,1) ;\n\\draw (0,1) -- (0,0) ;\n\\draw (0,0) -- (4,0) ;\n\\draw[red] (3,1) -- (4,0) ;\n\\node[above] at (1.5, 1.0) {$\\overline{D}$};\n\\node[right] at (3.55, 0.75) {$\\overline{F}$};\n\\node[above] at (3,1) {$p_0$};\n\\end{tikzpicture}\n\\caption{The isolated fixed point $p_0$}\\label{fig:IsolatedFixedPoint}\n\\end{figure}\n\n\nAssuming the homotopy equivalence in the first fibration, we immediately get\n\\begin{thm}\nConsider the $S^1(a,b;m)$ action on $(S^2 \\times S^2,\\omega_\\lambda)$ with $\\lambda >1$. 
If $ \\mathcal{J}^{S^1}_{\\om_\\lambda} \\cap U_{2s} \\neq \\emptyset$, then $\\Symp^{S^1}_h(S^2 \\times S^2,\\omega_\\lambda)\/\\Stab^{S^1}(\\overline{D}) \\simeq \\mathcal{J}^{S^1}_{\\om_\\lambda} \\cap U_{2s}$.\n\\end{thm}\nFurthermore, tracking down the various homotopy equivalences in the other fibrations, we will prove that the equivariant stabilizer of the curve $\\overline{D}$, namely $\\Stab^{S^1}(\\overline{D})$, is homotopy equivalent to the equivariant stabilizer of the corresponding complex structure under the natural action of $\\Symp^{S^1}_h(S^2 \\times S^2,\\omega_\\lambda)$. More precisely, \n\\begin{itemize}\n \\item $\\Stab^{S^1}(\\overline{D}) \\simeq \\mathbb{T}^2_{2s}$ when $(a,b) \\neq (0,\\pm1)$;\n \\item $\\Stab^{S^1}(\\overline{D}) \\simeq \\SO(3) \\times S^1$ when $(a,b) = (0,\\pm1)$.\n\\end{itemize}\n \n\n\nWe shall now justify each of the homotopy equivalences in the above fibrations. \n\n\n\n\nNow that we know that the action map\n\\begin{align*}\n \\Stab^{S^1}(\\overline{D}) \\to \\Symp^{S^1}_h(S^2 \\times S^2,\\omega_\\lambda) &\\twoheadrightarrow {\\mathcal{S}^{S^1}_{D_{2s}}}\n\\end{align*}\nis a fibration, we show that ${\\mathcal{S}^{S^1}_{D_{2s}}}$ is weakly homotopy equivalent to $\\mathcal{J}^{S^1}_{\\om_\\lambda} \\cap U_{2s}$. \n\\begin{lemma}\\label{first2}\nThe natural map $\\alpha: \\mathcal{J}^{S^1}_{\\om_\\lambda} \\cap U_{2s} \\to {\\mathcal{S}^{S^1}_{D_{2s}}}$ defined by sending an almost complex structure $J \\in \\mathcal{J}^{S^1}_{\\om_\\lambda} \\cap U_{2s}$ to the unique $J$-holomorphic curve in class $D_{2s}$ is a weak homotopy equivalence. \n\\end{lemma}\n\\begin{proof}\nTo show that $\\alpha$ is a weak homotopy equivalence, we first show that the map is a Serre fibration. To do so, consider an arbitrary element ${D} \\in {\\mathcal{S}^{S^1}_{D_{2s}}}$. 
As in the proof of Lemma \\ref{first}, it suffices to show that given a family of maps $\\gamma: D^n \\times [0,1] \\to \\mathcal{S}^{S^1}_{D_{2s}}$ from an $n$-dimensional disk $D^n$ such that $\\gamma_0(0) = {D}$, together with a lift $\\gamma_0^\\prime:D^n \\to \\mathcal{J}^{S^1}_{\\om_\\lambda} \\cap U_{2s}$ of $\\gamma_0$, there exists a lift $\\gamma^\\prime: D^n \\times [0,1] \\to \\mathcal{J}^{S^1}_{\\om_\\lambda} \\cap U_{2s}$ of $\\gamma$ extending $\\gamma_0^\\prime$.\n\\[\n\\begin{tikzcd}\n D^n \\times \\{0\\} \\arrow[d,hookrightarrow] \\arrow[r,\"\\gamma_0^\\prime\"] &\\mathcal{J}^{S^1}_{\\om_\\lambda} \\cap U_{2s} \\arrow[d,\"\\alpha\"] \\\\\n D^n \\times [0,1] \\arrow[r,\"\\gamma\"] \\arrow[ur,dashrightarrow,\"\\exists ~ \\gamma^\\prime\"] &\\mathcal{S}^{S^1}_{D_{2s}}\n\\end{tikzcd}\n\\]\nAs in the proof of Lemma \\ref{first}, there exists a lift $\\overline{\\gamma}: D^n \\times [0,1] \\to \\Symp^{S^1}_h(S^2 \\times S^2,\\omega_\\lambda)$ of $\\gamma$. Define $\\gamma^\\prime(s,t) := \\left(\\overline{\\gamma}(s,t) \\circ \\overline{\\gamma}(s,0)^{-1}\\right)_{*}\\gamma_0^\\prime(s)$. This defines a lift $\\gamma^\\prime$ of $\\gamma$ extending $\\gamma_0^\\prime$. Hence $\\alpha$ is a fibration with contractible fibres, and we get the required result that $\\mathcal{J}^{S^1}_{\\om_\\lambda} \\cap U_{2s} \\simeq {\\mathcal{S}^{S^1}_{D_{2s}}}$.\n\\end{proof}\n\n\n\n\n\n\n\\begin{lemma}{\\label{stab}}\nThe restriction map $\\Stab^{S^1}(\\overline{D}) \\twoheadrightarrow \\Symp^{S^1}(\\overline{D})$ is a fibration. \n\\end{lemma}\n\n\\begin{proof}\nTo show that the restriction map is a fibration we use Theorem \\ref{palais}, in which we set $X= \\Symp^{S^1}(\\overline{D})$, $G=\\Stab^{S^1}(\\overline{D})$ and the action is given by \n\\begin{align*}\n G \\times X &\\to X \\\\\n (\\phi, \\psi) &\\mapsto \\phi|_{\\overline{D}} \\circ \\psi\n\\end{align*}\nHence, in order to show that the restriction map $r :\\Stab^{S^1}(\\overline{D}) \\to \\Symp^{S^1}(\\overline{D})$ is a fibration, we only need to show that the action described above admits local cross sections. 
In fact, it suffices to produce local cross sections near the identity and to show that $\\Stab^{S^1}(\\overline{D})$ acts transitively on $\\Symp^{S^1}(\\overline{D})$: by Theorem \\ref{palais}, $r$ is then a fibration near the identity, and since $r$ is equivariant with respect to the action of $\\Stab^{S^1}(\\overline{D})$, it is a fibration globally.\n\\\\\n\nLet $\\alpha:N(\\overline{D}) \\to U$ be an equivariant diffeomorphism between the symplectic normal bundle $N(\\overline{D})$ and a neighbourhood $U$ of $\\overline{D}$. As $\\Symp^{S^1}(\\overline{D})$ is locally contractible (this can be seen, for example, by noticing that the proof of Proposition 3.3.14 in \\cite{MS} can be made equivariant), we can find a neighbourhood $V$ of $\\id$ and a fixed retraction $\\beta_t$ of the neighbourhood $V$ onto the identity. Hence, given any $\\psi \\in V$, we get a one parameter family $\\beta_t(\\psi)$ of symplectomorphisms. As $\\pi_1(\\overline{D}) = 0$, the isotopy $\\beta_t(\\psi)$ is Hamiltonian and is generated by a family of functions $H_t$. Let $\\pi:N(\\overline{D}) \\to \\overline{D}$ be the projection of the normal bundle, and define $\\Tilde{H_t}:= (H_t \\circ \\pi) \\circ \\alpha^{-1}$, an invariant function on $U$. Fix an invariant bump function $\\rho$ with support in $U$ that is equal to $1$ in a small neighbourhood of $\\overline{D}$; then $\\rho \\Tilde{H_t}$ is an invariant function, and the equivariant symplectomorphism $\\Tilde{\\psi}$ it generates belongs to $\\Stab^{S^1}(\\overline{D})$ and extends $\\psi$. Note that if we fix the neighbourhood $U$, the bump function, and the retraction of the neighbourhood in $\\Symp^{S^1}(\\overline{D})$, then this procedure gives us a lift to $\\Stab^{S^1}(\\overline{D})$, defined for $\\psi$ near $\\id$ in $\\Symp^{S^1}(\\overline{D})$. By Theorem \\ref{palais} this shows $r$ is a fibration. 
\n\\end{proof}\n\n\nThe above proof also shows that the action is transitive: given $\\gamma \\in \\Symp^{S^1}(\\overline{D})$, the procedure above constructs an element $\\tilde\\gamma \\in \\Stab^{S^1}(\\overline{D})$ whose restriction to $\\overline{D}$ is $\\gamma$.\n\n\n\n\\vspace{3mm}\n\n\\begin{lemma}\n$\\Symp^{S^1}(\\overline{D})$ is homotopy equivalent to $\\SO(3)$ for the circle actions $S^1(0,\\pm 1;m)$, and homotopy equivalent to $S^1$ for all other circle actions.\n\\end{lemma}\n\n\\begin{proof}\nConsider the circle action induced on $\\overline{D}$. The actions $S^1(0,\\pm 1;m)$ fix $\\overline{D}$ pointwise; hence $\\Symp^{S^1}(\\overline{D}) = \\Symp(\\overline{D})$, and by Smale's theorem $\\Symp(\\overline{D})$ is homotopy equivalent to $\\SO(3)$. \n\\\\\n\nAll other circle actions do not fix the curve $\\overline{D}$ pointwise, and for these we consider the following two subcases. First, assume that the induced action is effective, and let $\\mu: \\overline{D} \\to \\mathbb{R}$ be its moment map. Then, as explained in the proof of Proposition \\ref{prop:CentralizersToricActions}, we have $\\Symp^{S^1}(\\overline{D}) \\simeq C^\\infty(\\mu(\\overline{D}),S^1)$, where $C^\\infty(\\mu(\\overline{D}),S^1)$ denotes the space of smooth maps from the image of the moment map to $S^1$. 
As the image of the moment map is an interval, and as the space of smooth maps from an interval to $S^1$ is homotopy equivalent to $S^1$, we get the required result that $ \\Symp^{S^1}(\\overline{D}) \\simeq S^1$.\n\\\\\n\n\nFinally, if the induced symplectic $S^1$ action on $\\overline{D}$ is not effective and has stabilizer $\\mathbb{Z}_k$, then the quotient action of $S^1\/\\mathbb{Z}_k \\cong S^1$ is effective, and the space of symplectomorphisms equivariant with respect to this quotient effective action is the same as the space of symplectomorphisms equivariant with respect to the non-effective $S^1$ action. Thus $\\Symp^{S^1}(\\overline{D}) \\simeq S^1$ in this case as well.\n\\end{proof}\n\n\n\n\n\n\n\\begin{lemma}\\label{gauge}\nThe map\n\\begin{align*} \n \\alpha: \\Fix^{S^1}(\\overline{D}) &\\twoheadrightarrow \\Gauge1^{S^1}(N(\\overline{D})) \\\\\n \\phi &\\mapsto d\\phi|_{N(\\overline{D})}\n\\end{align*}\n is a Serre fibration with fibre homotopy equivalent to $\\Fix^{S^1}(N(\\overline{D}))$. The base space $\\Gauge1^{S^1}(N(\\overline{D}))$ is homotopy equivalent to $S^1$. \n\\end{lemma}\n\n\\begin{proof}\nThe fact that $\\Gauge1^{S^1}(N(\\overline{D})) \\simeq S^1$ is explained in Appendix A. Thus we only need to prove that the restriction of the derivative to the normal bundle is indeed a fibration and that its fibre is homotopy equivalent to $\\Fix^{S^1}(N(\\overline{D}))$.\n\\\\\n\nConsider the action \n \n\\begin{align*}\n \\Fix^{S^1}(\\overline{D}) \\times \\Gauge1^{S^1}(N(\\overline{D})) &\\to\\Gauge1^{S^1}(N(\\overline{D})) \\\\\n (\\phi, \\psi) &\\mapsto d\\phi|_{N(\\overline{D})} \\circ \\psi\n\\end{align*}\n\n\nAgain by Theorem \\ref{palais}, it suffices to show that there is a local section to the above action. Such a local section is produced by Lemma \\ref{EqSymN}.\n\nThe fibre is a priori given by all equivariant symplectomorphisms that act as the identity on the normal bundle of $\\overline{D}$. The claim is that this is in fact homotopy equivalent to the space $\\Fix^{S^1}(N(\\overline{D}))$. 
This follows from Lemma \\ref{germ}.\n\\end{proof}\n\nLet $\\overline{\\mathcal{S}^{S^1}_{F,p_0}}$ be the space of unparametrized $S^1$-invariant symplectic spheres in the homology class $F$ that are equal to a fixed invariant curve ${\\overline{F}}$ in a neighbourhood of $p_0$.\n\n\\begin{lemma}\nThe map \\begin{align*}\n \\Fix^{S^1}(N(\\overline{D})) &\\to \\overline{\\mathcal{S}^{S^1}_{F,p_0}} \\\\\n \\phi &\\mapsto \\phi(\\overline{F})\n \\end{align*} \nis a fibration and $\\overline{\\mathcal{S}^{S^1}_{F,p_0}} \\simeq \\mathcal{J}^{S^1}_{\\om_\\lambda}(\\overline{D}) \\simeq \\{*\\}$.\n\\end{lemma}\n\n\\begin{proof}\n The proof is exactly the same as that of Corollary \\ref{trans}. We only note that if $F' \\in \\overline{\\mathcal{S}^{S^1}_{F,p_0}}$, then the map $\\phi$ constructed in the proof of Corollary~\\ref{trans} belongs to $\\Fix^{S^1}(N(\\overline{D}))$. Thus $\\Fix^{S^1}(N(\\overline{D}))$ acts transitively on $\\overline{\\mathcal{S}^{S^1}_{F,p_0}}$, and hence the action map induces a fibration. \n \\\\\n \n \\textit{Proof that $\\mathcal{J}^{S^1}_{\\om_\\lambda}(\\overline{D}) \\simeq \\{*\\}$}:\n This follows from the equivariant version of the standard argument: the space is homeomorphic to the space of equivariant compatible metrics, and this space of metrics is contractible. \n \\\\\n \n\\textit{Proof that $\\overline{\\mathcal{S}^{S^1}_{F,p_0}} \\simeq \\mathcal{J}^{S^1}_{\\om_\\lambda}(\\overline{D}) \\simeq \\{*\\}$}:\n Let $\\mathcal{S}^\\perp_{F,p_0}$ denote the space of all $S^1$-invariant symplectically embedded spheres $S$ in class $F$ such that $S \\cap \\overline{D} = p_0$ and such that $S$ and $\\overline{D}$ intersect $\\omega_\\lambda$-orthogonally at $p_0$. By Lemma \\ref{transverse}, for every $S \\in \\mathcal{S}^\\perp_{F,p_0}$ there exists a $J \\in \\mathcal{J}^{S^1}_{\\om_\\lambda}(\\overline{D})$ such that the configuration $S \\vee \\overline{D}$ is $J$-holomorphic. 
We now have the following fibration\n \\begin{equation*}\n \\mathcal{J}^{S^1}_{\\om_\\lambda}(\\overline{D}) \\longtwoheadrightarrow \\mathcal{S}^\\perp_{F,p_0}\n \\end{equation*}\nwhere the map $\\gamma: \\mathcal{J}^{S^1}_{\\om_\\lambda}(\\overline{D}) \\to \\mathcal{S}^\\perp_{F,p_0}$ sends $J \\in \\mathcal{J}^{S^1}_{\\om_\\lambda}(\\overline{D})$ to the corresponding curve in class $F$ passing through $p_0$. Note that the above map is well defined because the fixed point $p_0$ was chosen so that the weights at $p_0$ are distinct; hence any $S^1$-invariant curve in class $F$ passing through $p_0$ must be $\\omega_\\lambda$-orthogonal to $\\overline{D}$. We now show that $\\gamma$ is a homotopy equivalence. To do so, we consider the following commutative diagram\n \n\\[ \n\\begin{tikzcd}\n &T \\arrow[d,\"\\pi_2\"] \\arrow[r,\"\\pi_1\"] & \\mathcal{S}^\\perp_{F,p_0} \\\\\n &\\mathcal{J}^{S^1}_{\\om_\\lambda}(\\overline{D}) \\arrow[ur, \"\\gamma\"]\n\\end{tikzcd} \n \\]\n\nwhere $T:= \\left\\{ (A,J) \\in \\mathcal{S}^\\perp_{F,p_0} \\times \\mathcal{J}^{S^1}_{\\om_\\lambda}(\\overline{D})~|~ A \\text{ is } J\\text{-holomorphic} \\right\\}$. Both maps $\\pi_1$ and $\\pi_2$ are fibrations (this can be argued as in Lemma \\ref{first}) with contractible fibres. As the diagram commutes, the map $\\gamma$ must be a homotopy equivalence. The proof then follows by showing that $\\overline{\\mathcal{S}^{S^1}_{F,p_0}} \\simeq \\mathcal{S}^\\perp_{F,p_0}$, which is a consequence of Theorem~\\ref{gmpf}.\n\\end{proof}\n\n\n\\begin{lemma}\nThe map\n \\begin{align*}\n \\Stab^{S^1}(\\overline{F}) \\cap \\Fix^{S^1}(N(\\overline{D})) &\\to \\Symp^{S^1}(\\overline{F}, N(p_0)) \\\\\n \\phi &\\mapsto \\phi|_{\\overline{F}}\n \\end{align*}\n \n is a fibration and $\\Symp^{S^1}(\\overline{F}, N(p_0)) \\simeq \\{*\\}$.\n \\end{lemma}\n \n \\begin{proof}\n The fact that this is a fibration follows from applying the proof of Lemma \\ref{stab} mutatis mutandis. 
The proof that $\\Symp^{S^1}(\\overline{F}, N(p_0)) \\simeq \\{*\\}$ is similar to that of Lemma \\ref{stab}: the space $\\Symp^{S^1}(\\overline{F}, N(p_0))$ is homotopy equivalent to the space of maps from the interval $[0,1]$ to $S^1$ that are the identity near $0$, and this space of maps is contractible, giving the result. \n \\end{proof}\n\n\n\n\\begin{lemma}\\label{ngauge}\nThe map\n\\begin{align*}\n \\Fix^{S^1}(\\overline{F}) &\\to \\Gauge1^{S^1}(N(\\overline{D} \\vee \\overline{F})) \\\\\n \\phi &\\mapsto d\\phi|_{N(\\overline{D} \\vee \\overline{F})}\n\\end{align*}\nis a fibration, the base satisfies $\\Gauge1^{S^1}(N(\\overline{D} \\vee \\overline{F})) \\simeq \\{*\\}$, and the fibre satisfies $\\Fix^{S^1}(N(\\overline{D} \\vee \\overline{F})) \\simeq \\{*\\}$.\n\\end{lemma}\n\n\\begin{proof}\nThe proof that this is a fibration is similar to the proof of Lemma \\ref{gauge}. The fact that $\\Gauge1^{S^1}(N(\\overline{D} \\vee \\overline{F})) \\simeq \\{*\\}$ follows from Lemma \\ref{Gauge(N(D))}. The fact that $\\Fix^{S^1}(N(\\overline{D} \\vee \\overline{F})) \\simeq \\{*\\}$ follows from Theorem \\ref{thm:EqGr}.\n\\end{proof}\n\n\nPutting all the fibrations together gives the following theorem.\n\\begin{thm}\\label{homogenous}\nConsider the $S^1(a,b;m)$ action on $(S^2 \\times S^2,\\omega_\\lambda)$ with $\\lambda >1$. 
If $\\mathcal{J}^{S^1}_{\\om_\\lambda} \\cap U_{2s} \\neq \\emptyset$, then we have the following homotopy equivalences:\n\\begin{enumerate}\n \\item when $(a,b) \\neq (0,\\pm1)$, we have $\\Symp^{S^1}_h(S^2 \\times S^2,\\omega_\\lambda)\/\\mathbb{T}^2_{2s} \\simeq \\mathcal{J}^{S^1}_{\\om_\\lambda} \\cap U_{2s}$;\n \\item when $(a,b) = (0,\\pm1)$, we have $\\Symp^{S^1}_h(S^2 \\times S^2,\\omega_\\lambda)\/(SO(3) \\times S^1) \\simeq \\mathcal{J}^{S^1}_{\\om_\\lambda} \\cap U_{2s}$.\n\\end{enumerate} \n\\end{thm} \n\\begin{proof}\n\n\nWhen $(a,b) \\neq (0,\\pm 1)$ we have a commutative diagram of fibrations\n\\[\n\\begin{tikzcd}\n &\\Fix^{S^1}(\\overline{D}) \\arrow[r] &\\Stab^{S^1}(\\overline{D}) \\arrow[r,twoheadrightarrow] &\\Symp^{S^1}(\\overline{D}) \\\\\n &S^1 \\arrow[u,hookrightarrow] \\arrow[r] &\\mathbb{T}^2_{2s} \\arrow[u,hookrightarrow] \\arrow[r] &S^1 \\arrow[u,hookrightarrow]\n\\end{tikzcd}\n\\]\nwhile in the case $(a,b) = (0,\\pm1)$, we have the diagram\n\\[\n\\begin{tikzcd}\n &\\Fix^{S^1}(\\overline{D}) \\arrow[r] &\\Stab^{S^1}(\\overline{D}) \\arrow[r,twoheadrightarrow] &\\Symp^{S^1}(\\overline{D}) \\\\\n &S^1 \\arrow[u,hookrightarrow] \\arrow[r] & S^1 \\times SO(3) \\arrow[u,hookrightarrow] \\arrow[r] &SO(3) \\arrow[u,hookrightarrow]\n\\end{tikzcd}\n\\]\nFrom the discussion above, in both diagrams the leftmost and the rightmost vertical arrows are homotopy equivalences. As the diagrams commute, comparing the long exact sequences of homotopy groups of the corresponding fibrations and applying the five lemma shows that the middle inclusion, $\\mathbb{T}^2_{2s} \\hookrightarrow \\Stab^{S^1}(\\overline{D})$ or $\\left(S^1 \\times SO(3)\\right) \\hookrightarrow \\Stab^{S^1}(\\overline{D})$ respectively, is also a homotopy equivalence. This gives us the required result.\n\\end{proof}\n\n\\begin{remark}\\label{ActionOnJsom}\nLet $J_{2s}$ be the standard complex structure on $W_{2s}$. 
We note that for the action $S^1(0,\\pm1;m)$ the stabiliser of $J_{2s}$ under the natural action of $\\Symp^{S^1}_h(S^2 \\times S^2,\\omega_\\lambda)$ on $\\mathcal{J}^{S^1}_{\\om_\\lambda} \\cap U_{2s}$ is the group of K\\\"ahler isometries $S^1 \\times \\SO(3)$. For all other circle actions $S^1(a,b;m)$ with $(a,b) \\neq (0,\\pm1)$, the stabiliser of $J_{2s}$ is the maximal torus $\\mathbb{T}^2_{2s}\\subset S^1 \\times \\SO(3)$.\n\\end{remark}\n\n\n\n\n\n\\subsection{Case 2: \\texorpdfstring{$\\Symp^{S^1}_h(S^2 \\times S^2,\\omega_\\lambda)$}{Symp(S2xS2)} action on \\texorpdfstring{$\\mathcal{J}^{S^1}_{\\om_\\lambda} \\cap U_0$}{J\\hat{}S1\\_0}}\\label{IntwithU0}\nIn order to describe the action of $\\Symp^{S^1}_h(S^2 \\times S^2,\\omega_\\lambda)$ on the open stratum $\\mathcal{J}^{S^1}_{\\om_\\lambda} \\cap U_0$, we need to slightly modify the setting introduced in the previous section. The main difference comes from the fact that for an almost-complex structure $J\\in\\mathcal{J}^{S^1}_{\\om_\\lambda}\\cap U_0$, there is no invariant curve of negative self-intersection representing a class $B-kF$, $k\\geq 1$. Instead, each such $J$ determines a regular 2-dimensional foliation of $J$-holomorphic curves in the class $B$. Consequently, there is no natural map between the stratum $\\mathcal{J}^{S^1}_{\\om_\\lambda} \\cap U_0$ and the space $\\mathcal{S}^{S^1}_{B}$ of invariant curves in the class $B$. However, once we choose a fixed point $p_0$, given any $J \\in \\mathcal{J}^{S^1}_{\\om_\\lambda} \\cap U_0$, there is a unique invariant $J$-holomorphic curve in the class $B$ passing through $p_0$. This defines a map $\\mathcal{J}^{S^1}_{\\om_\\lambda} \\cap U_0 \\to \\mathcal{S}^{S^1}_{B,p_0}$ that can be used to prove that the space $\\mathcal{J}^{S^1}_{\\om_\\lambda} \\cap U_0$ is homotopy equivalent to an orbit of $\\Symp^{S^1}_h(S^2 \\times S^2,\\omega_\\lambda)$. 
To do so, because the fixed point $p_0$ is not unique, we must also investigate how the group $\\Symp^{S^1}_h(S^2 \\times S^2,\\omega_\\lambda)$ acts on the fixed point set of the circle action. This is done in Lemma~\\ref{lemma:SymphPreservesAnIsolatedFixedPoint}. Before we proceed to prove this lemma, we first describe the action of $\\Symp^{S^1}_h(S^2 \\times S^2,\\omega_\\lambda)$ on $\\mathcal{J}^{S^1}_{\\om_\\lambda} \\cap U_0$. Note that by Theorem~\\ref{cor:IntersectingOnlyOneStratum}, Corollary~\\ref{cor:CircleExtensionsWith_a=1} and Corollary~\\ref{cor:CircleExtensionsWith_a=-1}, the space $\\mathcal{J}^{S^1}_{\\om_\\lambda} \\cap U_0$ is non-empty only for the following circle actions:\n\\begin{itemize}\n \\item $S^1(a,b;0)$, or\n \\item $S^1(1,b;m)$ with $|2b-m|=0$ and $2\\lambda > |2b-m|$, or\n \\item $S^1(-1,b;m)$ with $|2b+m|=0$ and $2\\lambda > |2b+m|$.\n\\end{itemize}\nSecondly, we observe that all these actions have at least one isolated fixed point \\emph{except} the actions of the form\n\\begin{itemize}\n \\item $S^1(\\pm 1,0;0)$ and\n \\item $S^1(0,\\pm 1;0)$.\n\\end{itemize}\n\\subsubsection{Actions with an isolated fixed point} We now consider actions $S^1(a,b;m)$ with an isolated fixed point $p_0$. We can choose $p_0$ to correspond to the vertex $R$ in the Hirzebruch surface $W_m$ shown in Figure~\\ref{hirz}. Given $J\\in\\mathcal{J}^{S^1}_{\\om_\\lambda}\\cap U_0$, there is a unique $J$-holomorphic curve $B_{p_0,J}$ in class $B$ that passes through $p_0$. Because $p_0$ is fixed, $J$ is invariant, and $B\\cdot B=0$, positivity of intersections implies that $B_{p_0,J}$ is $S^1$-invariant. We thus get a well-defined map \n\\[\\mathcal{J}^{S^1}_{\\om_\\lambda} \\cap U_0 \\to \\mathcal{S}^{S^1}_{B,p_0}\\]\nwhere $\\mathcal{S}^{S^1}_{B,p_0}$ denotes the space of invariant, embedded, symplectic spheres representing the class $B$ and containing the point $p_0$. 
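\nTo spell out the invariance claim: for any $t \\in S^1$, the curve $t \\cdot B_{p_0,J}$ is again $J$-holomorphic (as $J$ is invariant), represents the class $B$, and passes through $t \\cdot p_0 = p_0$. If it were distinct from $B_{p_0,J}$, positivity of intersections at the common point $p_0$ would force\n\\[\nB \\cdot B \\geq 1,\n\\]\ncontradicting $B \\cdot B = 0$. Hence $t \\cdot B_{p_0,J} = B_{p_0,J}$ for all $t \\in S^1$, i.e.\\ the curve is $S^1$-invariant.\n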
\n\\begin{lemma}\\label{projection_surjective}\nConsider any $S^1(a,b;m)$ action on $(S^2 \\times S^2,\\omega_\\lambda)$. Let $p_0$ and $p_1$ be two fixed points such that there exists an invariant fibre $\\{*\\} \\times S^2$ passing through them. Then there exists no $S^1$-invariant curve in the class $B-kF$ with $k \\geq 0$ passing through $p_0$ and $p_1$.\n\\end{lemma}\n\\begin{proof}\nSuppose not, and let $\\overline{D_{2s}}$ be an $S^1$-invariant curve in the class $B-kF$ with $k \\geq 0$ passing through $p_0$ and $p_1$. Then the projection onto the first factor\n$$ \\pi_1 : \\overline{D_{2s}} \\rightarrow S^2 \\times \\{0\\} \\subset S^2 \\times S^2$$\nis surjective. Hence the curve $\\overline{D_{2s}}$ passes through a third fixed point $p_2$. As the symplectic $S^1$ action on $\\overline{D_{2s}}$ has three fixed points, it has to fix $\\overline{D_{2s}}$ pointwise. This is a contradiction, as every surface fixed by an $S^1$ action must be either a maximum or a minimum of the moment map, while the fixed points $p_2$, $p_1$ and $p_0$ cannot all have the same moment map value.\n\\end{proof}\n\n\n\n\\begin{lemma}\\label{lemma:SymphPreservesAnIsolatedFixedPoint}\nLet $S^1(a,b;m)$ be a circle action for which the space $\\mathcal{J}^{S^1}_{\\om_\\lambda} \\cap U_0$ is non-empty. Assume there is an isolated fixed point $p_0$ corresponding to the vertex $R$ in Figure~\\ref{hirz}. Then any homology-preserving equivariant symplectomorphism $\\phi\\in \\Symp_h^{S^1} (S^2 \\times S^2,\\omega_\\lambda)$ fixes~$p_0$.\n\\end{lemma}\n\\begin{proof}\n\\emph{Case 1: $\\lambda >1$:}\nBy Lemma~\\ref{lemma:CharacterizationCentralizer} and Corollary~\\ref{cor:ActionPreservesWeights}, any such $\\phi$ must preserve the moment map values and the weights of the fixed points (up to reordering of the tuples). These weights are given in Table~\\ref{table_weights} and the moment map values are given in the graphs~\\ref{fig:GraphsWithFixedSurfaces} and \\ref{fig:GraphsIsolatedFixedPoints}. 
The two conditions on the circle action imply that either $m=0$, $|2b-m|=0$, or $|2b+m|=0$. It is now easy to see that, under any of these three numerical conditions, the weights and moment map values at $R$ differ from those at all other fixed points. The result follows.\n\\\\\n\n\\emph{Case 2: $\\lambda = 1$:}\nIf the action is \\emph{not} of the form $S^1(1,1;0)$ or $S^1(-1,-1;0)$, then an argument similar to Case 1 holds. \nThe only cases left are the actions of the form $S^1(1,1;0)$ and $S^1(-1,-1;0)$ with $\\lambda = 1$. In this case, the homology classes $F$ and $B$ have the same area, and the fixed points $R$ and $Q$ have the same weights (up to reordering of the tuples) and the same moment map values. We again argue by contradiction. Let $\\overline{B}$ denote an invariant curve in class $B$ passing through $R$ and $P$. Suppose $\\phi \\in \\Symp_h^{S^1} (S^2 \\times S^2,\\omega_\\lambda)$ does not fix the point $p_0 = R$. Then $\\phi$ has to take the point $R$ to the point $Q$. Further, by Lemma~\\ref{lemma:CharacterizationCentralizer}, $\\phi$ fixes the maximum and the minimum, and hence $\\phi(P) = P$. As $\\phi$ preserves homology, the curve $\\phi(\\overline{B})$ has homology class $B$ and has to pass through $Q$ and $P$, which contradicts Lemma~\\ref{projection_surjective}. \n\\end{proof}\n\nLet $J_0\\in U_0$ be the complex structure of the Hirzebruch surface $W_0$ and let $B_{p_0}$ be the unique $J_0$-holomorphic curve containing $p_0$ and representing the homology class $B$.\n\\begin{cor}\nLet $S^1(a,b;m)$ be a circle action with an isolated fixed point and for which the structure $J_0\\in U_0$ is invariant. 
Then the group $\\Symp^{S^1}_h(S^2 \\times S^2,\\omega_\\lambda)$ acts transitively on the space $\\mathcal{S}^{S^1}_{B,p_0}$, and the action map \n\\begin{align*}\n\\Symp^{S^1}_{h}(S^2 \\times S^2,\\omega_\\lambda) &\\longtwoheadrightarrow \\mathcal{S}^{S^1}_{B,p_0}\\\\\n\\phi & \\mapsto \\phi(B_{p_0})\n\\end{align*}\nis a Serre fibration.\n\\end{cor}\n\\begin{proof}\n\nSince any element of $\\Symp^{S^1}_h(S^2 \\times S^2,\\omega_\\lambda)$ fixes $p_0$, it follows that this group acts on $\\mathcal{S}^{S^1}_{B,p_0}$. Let $p_1$ be the other fixed point, corresponding to the point $Q$ in Figure~\\ref{hirz}. All curves in $\\mathcal{S}^{S^1}_{B,p_0}$ pass through $p_1$ and $p_0$. Since the weights at one of $p_0$ or $p_1$ are always distinct, we can use the fixed point with distinct weights and proceed as in the proofs of Corollary~\\ref{trans} and Lemma~\\ref{first} to show that the action defines a fibration.\n\\end{proof}\nAs before, we can now show that the stratum $\\mathcal{J}^{S^1}_{\\om_\\lambda} \\cap U_{0}$ is homotopy equivalent to a space of invariant curves.\n\\begin{lemma}\nThe natural map $\\alpha: \\mathcal{J}^{S^1}_{\\om_\\lambda} \\cap U_{0} \\to {\\mathcal{S}^{S^1}_{B,p_0}}$ defined by sending an almost complex structure $J \\in \\mathcal{J}^{S^1}_{\\om_\\lambda} \\cap U_{0}$ to the unique $J$-holomorphic curve in class $B$ passing through $p_0$ is a weak homotopy equivalence. 
\n\\end{lemma}\n\\begin{proof}\nThe argument is identical to the proof of Lemma~\\ref{first2}.\n\\end{proof}\n\nFrom this point on, we can determine the homotopy type of $\\mathcal{J}^{S^1}_{\\om_\\lambda} \\cap U_{0}$ by going through a similar sequence of fibrations and homotopy equivalences as in Section~\\ref{section:ActionOnU_2k}, namely,\n\\[\\Stab^{S^1}(B_{p_0}) \\to \\Symp^{S^1}_{h}(S^2 \\times S^2,\\omega_\\lambda) \\longtwoheadrightarrow \\mathcal{S}^{S^1}_{B,p_0} \\mathbin{\\textcolor{blue}{\\xrightarrow{\\text{~~~$\\simeq$~~~}}}} \\mathcal{J}^{S^1}_{\\om_\\lambda} \\cap U_{0}\\rule{0em}{2em}\\]\n \n\\[\\Fix^{S^1}(B_{p_0}) \\to \\Stab^{S^1}_{p_0}(B_{p_0}) \\longtwoheadrightarrow \\Symp^{S^1}(B_{p_0}) \\mathbin{\\textcolor{blue}{\\xrightarrow{\\text{~~~$\\simeq$~~~}}}} S^1\\rule{0em}{2em}\\]\n\n\\[\\Fix^{S^1} (N(B_{p_0})) \\to \\Fix^{S^1}(B_{p_0}) \\longtwoheadrightarrow \\Gauge1^{S^1}(N(B_{p_0})) \\mathbin{\\textcolor{blue}{\\xrightarrow{\\text{~~~$\\simeq$~~~}}}} S^1\\rule{0em}{2em} \\]\n\n\\[\\Stab^{S^1}(\\overline{F}) \\cap \\Fix^{S^1}(N(B_{p_0})) \\to \\Fix^{S^1}(N(B_{p_0})) \\longtwoheadrightarrow \\overline{\\mathcal{S}^{S^1}_{F,p_0}} \\mathbin{\\textcolor{blue}{\\xrightarrow{\\text{~~~$\\simeq$~~~}}}} \\mathcal{J}^{S^1}(B_{p_0})\\rule{0em}{2em}\\]\n \n\\[\\Fix^{S^1}(\\overline{F}) \\to \\Stab^{S^1}(\\overline{F}) \\cap \\Fix^{S^1}(N(B_{p_0})) \\longtwoheadrightarrow \\Symp^{S^1}(\\overline{F}, N(p_0)) \\mathbin{\\textcolor{blue}{\\xrightarrow{\\text{~~~$\\simeq$~~~}}}} \\left\\{*\\right\\}\\rule{0em}{2em}\\]\n\n\\[\\left\\{*\\right\\} \\mathbin{\\textcolor{blue}{\\xleftarrow{\\text{~~~$\\simeq$~~~}}}} \\Fix^{S^1}(N(B_{p_0} \\vee \\overline{F})) \\to \\Fix^{S^1}(\\overline{F}) \\longtwoheadrightarrow \\Gauge1^{S^1}(N(B_{p_0} \\vee \\overline{F})) \\mathbin{\\textcolor{blue}{\\xrightarrow{\\text{~~~$\\simeq$~~~}}}} \\left\\{*\\right\\}\\rule{0em}{2em}\\]\nwhere $\\overline{\\mathcal{S}^{S^1}_{F,p_0}}$ denotes the space of all symplectically embedded curves 
in the class $F$ that pass through $p_0$ and agree with a standard curve $F_{p_0}$ in a neighbourhood of $p_0$. The proofs that these maps are fibrations, and the proofs of the homotopy equivalences, are exactly the same as before.\nConsequently, we obtain the following homotopical description of $\\mathcal{J}^{S^1}_{\\om_\\lambda} \\cap U_{0}$.\n\\begin{thm}\\label{Thm_isolatedfixedpt}\nConsider one of the following circle actions on $(S^2 \\times S^2,\\omega_\\lambda)$:\n\\begin{itemize}\n \\item $S^1(a,b;0)$ with $(a,b)\\neq (\\pm1,0)$ and $(a,b)\\neq(0,\\pm1)$, or\n \\item $S^1(1,b;m)$ with $|2b-m|=0$ and $2\\lambda > |2b-m|$, or\n \\item $S^1(-1,b;m)$ with $|2b+m|=0$ and $2\\lambda > |2b+m|$.\n\\end{itemize}\nThen the stratum $\\mathcal{J}^{S^1}_{\\om_\\lambda} \\cap U_0$ is non-empty and \n\\[\\Symp^{S^1}_{h,p_0}(S^2 \\times S^2,\\omega_\\lambda)\/ \\mathbb{T}^2_0 \\simeq \\mathcal{J}^{S^1}_{\\om_\\lambda} \\cap U_{0}\\]\n\\end{thm}\\qed\n\n\n\\subsubsection{Actions without isolated fixed points} \n\nWe now turn our attention to the action of the group of equivariant symplectomorphisms $\\Symp^{S^1}_h(S^2 \\times S^2,\\omega_\\lambda)$ on the stratum $\\mathcal{J}^{S^1}_{\\om_\\lambda} \\cap U_{0}$ when the circle action is either\n\\begin{enumerate}\n \\item $S^1(\\pm 1,0;0)$ or\n \\item $S^1(0,\\pm 1;0)$.\n\\end{enumerate}\n\n\nThese actions have no isolated fixed points, and the associated graphs are of the form\n\\begin{figure}[H]\n \\centering\n\\subcaptionbox{Subcase 1: $S^1(\\pm 1,0;0)$ }\n[.45\\linewidth]\n{\\begin{tikzpicture}\n[scale=0.85, every node\/.style={scale=0.85}]\n\\draw [fill] (0,0) ellipse (0.3cm and 0.1cm); \n\\draw [fill] (0,2.8) ellipse (0.3cm and 0.1cm); \n\\node[above] at (0,2.9) {$F_{max}$}; \n\\node[below] at (0,-0.1) {$F_{min}$}; \n\\node[left] at (-0.3,2.8) {$\\mu = \\lambda$};\n\\node[left] at (-0.3,0){$\\mu=0$};\n\\node[right] at (0.3,2.8){$A= 1$};\n\\node[right] at (0.3,0){$A= 
1$};\n\\end{tikzpicture}\n}\n\\quad\n\\subcaptionbox{Subcase 2: $S^1(0,\\pm 1;0)$}\n[.45\\linewidth]\n{\\begin{tikzpicture}\n[scale=0.85, every node\/.style={scale=0.85}]\n\\draw [fill] (0,0) ellipse (0.3cm and 0.1cm); \n\\draw [fill] (0,2.8) ellipse (0.3cm and 0.1cm); \n\\node[above] at (0,2.9) {$B_{max}$}; \n\\node[below] at (0,-0.1) {$B_{min}$}; \n\\node[left] at (-0.3,2.8) {$\\mu = 1$};\n\\node[left] at (-0.3,0){$\\mu=0$};\n\\node[right] at (0.3,2.8){$A= \\lambda$};\n\\node[right] at (0.3,0){$A= \\lambda$};\n\\end{tikzpicture}\n}\n\\end{figure}\\label{subcase2:Fixedsurface}\n\\noindent where $\\mu$ denotes the value of the moment map and $A$ denotes the area of the fixed surface. We notice that there are pointwise fixed curves in the class $F$ for the circle action $S^1(\\pm 1,0;0)$, and pointwise fixed curves in the class $B$ for the action $S^1(0,\\pm 1;0)$. We denote the fixed surface realizing the minimum of the moment map by $F_{min}$ or $B_{min}$ respectively, and the one realizing the maximum by $F_{max}$ or $B_{max}$.\n\\\\\n\nConsider the action $S^1(0,\\pm 1;0)$. By Lemma~\\ref{lemma:CharacterizationCentralizer}, we note that any $\\phi \\in \\Symp^{S^1}_h(S^2 \\times S^2,\\omega_\\lambda)$ must send $B_{max}$ to itself. 
Then, given $p_0 \\in B_{max}$, we define the following sequence of fibrations and homotopy equivalences:\n\\[\\Fix^{S^1}(B_{max}) \\longrightarrow \\Symp^{S^1}_h(S^2 \\times S^2,\\omega_\\lambda) \\longtwoheadrightarrow \\Symp(B_{max}) \\mathbin{\\textcolor{blue}{\\xrightarrow{\\text{~~~$\\simeq$~~~}}}} SO(3)\\rule{0em}{2em}\\]\n\\[\\Stab^{S^1}(F_{p_0}) \\longrightarrow \\Fix^{S^1}(B_{max}) \\longtwoheadrightarrow \\overline{\\mathcal{S}^{S^1}_{F,p_0}} \\mathbin{\\textcolor{blue}{\\xrightarrow{\\text{~~~$\\simeq$~~~}}}} \\mathcal{J}^{S^1}_{\\om_\\lambda} \\simeq \\{*\\}\\rule{0em}{2em}\\]\n\\[\\Fix^{S^1} (F_{p_0}) \\longrightarrow \\Stab^{S^1}(F_{p_0}) \\longtwoheadrightarrow \\Symp^{S^1}(F_{p_0}) \\mathbin{\\textcolor{blue}{\\xrightarrow{\\text{~~~$\\simeq$~~~}}}} S^1\\rule{0em}{2em}\\]\n\\[\\left\\{*\\right\\}\\mathbin{\\textcolor{blue}{\\xleftarrow{\\text{~~~$\\simeq$~~~}}}} \\Fix^{S^1}(N(B_{max} \\vee F_{p_0})) \\longrightarrow \\Fix^{S^1}(F_{p_0})\\longtwoheadrightarrow \\Gauge1^{S^1}(N(B_{max} \\vee F_{p_0}))\n\\mathbin{\\textcolor{blue}{\\xrightarrow{\\text{~~~$\\simeq$~~~}}}} \\left\\{*\\right\\}\\rule{0em}{2em}\\]\n\\\\\n\nFor the other circle action $S^1(\\pm 1,0;0)$, we obtain a similar sequence of fibrations and homotopy equivalences in which $B_{max}$ is replaced by the curve $F_{max}$. As before, putting all the homotopy equivalences together, we obtain the following theorem:\n\\begin{thm}\\label{Thm_fixedsurfaces}\nConsider the following two circle actions on $(S^2 \\times S^2,\\omega_\\lambda)$: \n\\begin{itemize}\n\\item $S^1(\\pm 1,0;0)$ or\n\\item $S^1(0,\\pm 1;0)$.\n\\end{itemize}\nThen there is a homotopy equivalence\n\\[\\Symp^{S^1}_h(S^2 \\times S^2,\\omega_\\lambda)\/(S^1 \\times \\SO(3)) \\simeq \\mathcal{J}^{S^1}_{\\om_\\lambda} \\cap U_0\\]\n\\end{thm}\\qed\n\nFor convenience, we collect together the two main results of this section in the theorem below. 
\n\\begin{thm}\\label{homog}\nConsider the action $S^1(a,b;m)$ on $(S^2 \\times S^2,\\omega_\\lambda)$ such that one of the following holds:\n\\begin{itemize}\n \\item $S^1(a,b;0)$ with $(a,b)\\neq (\\pm1,0)$ and $(a,b)\\neq(0,\\pm1)$, or\n \\item $S^1(1,b;m)$ with $|2b-m|=0$ and $2\\lambda > |2b-m|$, or\n \\item $S^1(-1,b;m)$ with $|2b+m|=0$ and $2\\lambda > |2b+m|$.\n\\end{itemize}\nThen the stratum $\\mathcal{J}^{S^1}_{\\om_\\lambda} \\cap U_0$ is non-empty and \n\\[\\Symp^{S^1}_{h,p_0}(S^2 \\times S^2,\\omega_\\lambda)\/ \\mathbb{T}^2_0 \\simeq \\mathcal{J}^{S^1}_{\\om_\\lambda} \\cap U_{0}\\]\nIf instead the $S^1(a,b;m)$ action satisfies \n\\begin{itemize}\n\\item $(a,b;m) = (\\pm 1,0;0)$ or\n\\item $(a,b;m) = (0,\\pm 1;0)$,\n\\end{itemize} then we have \\[\\Symp^{S^1}_h(S^2 \\times S^2,\\omega_\\lambda)\/(S^1 \\times \\SO(3)) \\simeq \\mathcal{J}^{S^1}_{\\om_\\lambda} \\cap U_0\\]\nand $\\mathcal{J}^{S^1}_{\\om_\\lambda}$ intersects only the stratum $U_0$.\n\\end{thm}\\qed \n\n\n\n\n\n\n\\chapter{The homotopy type of the symplectic centralisers of \\texorpdfstring{$S^1(a,b;m)$}{S1(a,b;m)}}\nGiven any Hamiltonian circle action on $(S^2 \\times S^2,\\omega_\\lambda)$, the two Theorems~\\ref{cor:IntersectingOnlyOneStratum} and~\\ref{cor:IntersectingTwoStrata} give us a complete understanding of which strata the space $\\mathcal{J}^{S^1}_{\\om_\\lambda}$ intersects. Together with Theorems~\\ref{homogenous} and~\\ref{homog}, which describe the strata as homogeneous spaces, this allows us to compute the homotopy type of the group of equivariant symplectomorphisms.\n\n\\section{When \\texorpdfstring{$\\mathcal{J}^{S^1}_{\\om_\\lambda}$}{J\\hat S1} is homotopy equivalent to a single symplectic orbit}\n\n\n\\begin{thm}\\label{table}\nConsider the circle action $S^1(a,b;m)$ on $(S^2 \\times S^2,\\omega_\\lambda)$. 
Under the following numerical conditions on $a,b,m,\\lambda$, the homotopy type of $\\Symp^{S^1}(S^2 \\times S^2,\\omega_\\lambda)$ is given by the table below.\n\\\\\n\n\\noindent{\n\\begin{tabular}{|p{5cm}||p{3.5cm}|p{2cm}|p{3.3cm}|}\n\\hline\n $S^1$ action $(a,b;m)$ & $\\lambda$ &Number of strata $\\mathcal{J}^{S^1}_{\\om_\\lambda}$ intersects &Homotopy type of $\\Symp^{S^1}(S^2 \\times S^2)$\\\\\n\\hline\n {$(0,\\pm 1;m)$ ~~ $m\\neq 0$} &$\\lambda > 1$ & 1 & $S^1 \\times SO(3)$ \\\\\n\\hline\n\\multirow{2}{10em}{$(0,\\pm 1;0)$ or $(\\pm 1,0;0)$} &$\\lambda = 1$ &1 &$S^1 \\times SO(3)$ \\\\\n&$\\lambda >1$ &1 &$S^1 \\times SO(3)$ \\\\\n\\hline\n$(\\pm 1,\\pm 1;0)$&$\\lambda = 1$ &1 & $\\mathbb{T}^2 \\times \\mathbb{Z}_2$ \\\\\n\\hline\n $(\\pm 1,0;m)$ ~~ $m\\neq0$ &$\\lambda >1$ &1 &$\\mathbb{T}^2$\\\\\n \\hline\n$(\\pm 1,\\pm m;m)$ ~~ $m \\neq 0$ & $\\lambda > 1$ &1 & $\\mathbb{T}^2$\\\\\n\\hline\n$(1,b;m)$, $b \\notin \\{0, m\\}$ &$|2b-m| \\geq 2\\lambda \\geq 1$ &1 &$\\mathbb{T}^2$ \\\\\n\\hline\n$(-1,b;m)$, $b \\notin \\{0, -m\\}$ &$|2b+m| \\geq 2\\lambda \\geq 1$ &1 &$\\mathbb{T}^2$ \\\\\n\\hline\nAll other values of $(a,b;m)$ except $(\\pm 1,b;m)$ &$\\forall \\lambda$ &1 &$\\mathbb{T}^2$ \\\\\n\\hline\n\\end{tabular}\n}\n\\end{thm}\n \n \\begin{proof}\n By Theorem~\\ref{cor:IntersectingOnlyOneStratum}, for each of the above $S^1(a,b;m)$ actions, the space of $S^1$-invariant compatible almost complex structures $\\mathcal{J}^{S^1}_{\\om_\\lambda}$ intersects only the stratum $U_m$. Consequently,\n \\[\\Symp_h^{S^1}(S^2 \\times S^2,\\omega_\\lambda)\/\\Stab(J_m) \\simeq \\mathcal{J}^{S^1}_{\\om_\\lambda} \\cap U_m = \\mathcal{J}^{S^1}_{\\om_\\lambda} \\simeq \\{*\\}\\]\n where $\\Stab(J_m)$ denotes the stabiliser of the standard complex structure $J_m \\in U_m$. Thus, for all the actions in the table, we have that $\\Symp_h^{S^1}(S^2 \\times S^2,\\omega_\\lambda) \\simeq \\Stab(J_m)$. 
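\nIn more detail, the passage from this orbit description to the homotopy type of the group uses the fibration\n\\[\\Stab(J_m) \\longrightarrow \\Symp_h^{S^1}(S^2 \\times S^2,\\omega_\\lambda) \\longtwoheadrightarrow \\Symp_h^{S^1}(S^2 \\times S^2,\\omega_\\lambda)\/\\Stab(J_m) \\simeq \\{*\\}\\]\nwhose base is contractible, so the long exact sequence of homotopy groups shows that the inclusion of the fibre $\\Stab(J_m) \\hookrightarrow \\Symp_h^{S^1}(S^2 \\times S^2,\\omega_\\lambda)$ is a weak homotopy equivalence.\n\\\\\n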
For the $S^1$ actions given by the triples $(0,\\pm 1;m)$ and $(\\pm1,0;0)$, or the circle action $S^1(0,\\pm1;0)$ when $\\lambda =1$, Theorems~\\ref{homogenous} and~\\ref{homog} imply that $\\Stab(J_m) \\simeq S^1 \\times \\SO(3)$. For all other $S^1$ actions in the table, the stabilisers are homotopy equivalent to $\\mathbb{T}^2$. \n\\\\\n\nWe now show how to recover the homotopy type of the full group $\\Symp^{S^1}(S^2 \\times S^2,\\omega_\\lambda)$ from the homotopy type of the subgroup $\\Symp_h^{S^1}(S^2 \\times S^2,\\omega_\\lambda)$. When $\\lambda > 1$, we have the equality $\\Symp^{S^1}(S^2 \\times S^2,\\omega_\\lambda) = \\Symp_h^{S^1}(S^2 \\times S^2,\\omega_\\lambda)$, as stated in Lemma~\\ref{lemma:ActionOnHomology}.\n\\\\\n\n\n\nWhen $\\lambda = 1$ and $a\\neq b$, \nthere exist standard $S^1(a,b;m)$-invariant curves in the classes $B$ and $F$ such that the isotropy weight of the action on the curve in class $B$ is $a$ and the isotropy weight of the $S^1$ action on the curve in class $F$ is $b$. Hence, for any equivariant symplectomorphism $\\phi$, Lemma~\\ref{lemma:ActionOnHomology} implies that we must have $\\phi_*[F] = [F]$ and $\\phi_*[B] = [B]$. Consequently, $\\Symp_h^{S^1}(S^2 \\times S^2,\\omega_\\lambda) = \\Symp^{S^1}(S^2 \\times S^2,\\omega_\\lambda)$.\n\\\\\n\nIn the special case when $\\lambda =1$ and $a=b=\\pm 1$, we have an equivariant version of the exact sequence~\\ref{Sequence:ActionOnHomology}\n\\begin{equation*}\n 1 \\longrightarrow \\Symp_h^{S^1}(S^2 \\times S^2,\\omega_\\lambda) \\longrightarrow \\Symp^{S^1}(S^2 \\times S^2,\\omega_\\lambda) \\longrightarrow \\Aut_{c_1,\\omega_\\lambda}(H^2(S^2 \\times S^2)) \\longrightarrow 1\n\\end{equation*}\nwhere $\\Aut_{c_1,\\omega_\\lambda}(H^2(S^2 \\times S^2)) \\cong \\mathbb{Z}_2$. 
The map\n\\begin{align*}\n \\phi: S^2 \\times S^2 &\\to S^2 \\times S^2 \\\\\n (z,w) &\\mapsto (w,z)\n\\end{align*} \nis an $S^1$-equivariant symplectomorphism (for the actions $S^1(1,1;0)$ and $S^1(-1,-1;0)$) and gives a section from $\\mathbb{Z}_2 \\cong \\Aut_{c_1,\\omega_\\lambda}(H^2(S^2 \\times S^2))$ to $\\Symp^{S^1}(S^2 \\times S^2,\\omega_\\lambda)$. Thus we have $\\Symp^{S^1}(S^2 \\times S^2,\\omega_\\lambda) \\cong \\Symp_h^{S^1}(S^2 \\times S^2,\\omega_\\lambda) \\rtimes \\mathbb{Z}_2$. As the underlying topological space of a semidirect product of two topological groups is homeomorphic to the product of the underlying spaces, we have that $\\Symp^{S^1}(S^2 \\times S^2,\\omega_\\lambda) \\cong \\Symp_h^{S^1}(S^2 \\times S^2,\\omega_\\lambda) \\rtimes \\mathbb{Z}_2 \\simeq \\Symp_h^{S^1}(S^2 \\times S^2,\\omega_\\lambda) \\times \\mathbb{Z}_2 \\simeq \\mathbb{T}^2 \\times \\mathbb{Z}_2$. This completes the proof.\n\n\\end{proof}\n\n\\section[The two orbits case]{When \\texorpdfstring{$\\mathcal{J}^{S^1}_{\\om_\\lambda}$}{J\\hat{}S1} is homotopy equivalent to the union of two symplectic orbits}\\label{section:TwoOrbits}\n\n\nTheorem~\\ref{table} gives the homotopy type of the group of equivariant symplectomorphisms for all circle actions on $S^2 \\times S^2$ apart from the following two families of actions:\n\\begin{itemize}\n\\item (i) $a=1$, $b \\notin \\{0, m\\}$, and $2\\lambda > |2b-m|$; or\n\\item (ii) $a=-1$, $b \\notin \\{0, -m\\}$, and $2 \\lambda > |2b+m|$.\n\\end{itemize}\nFor convenience, we will write $m'$ for either $|2b-m|$ or $|2b+m|$, depending on which of the two families we consider. Up to swapping $m$ and $m'$, we will also assume $m'>m$. 
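\nTo make the indexing concrete, consider for instance the action $S^1(1,2;0)$ from family (i): here $m=0$ and\n\\[ m' = |2b-m| = |2\\cdot 2 - 0| = 4,\\]\nso for $2\\lambda > 4$ the space $\\mathcal{J}^{S^1}_{\\om_\\lambda}$ meets exactly the two strata $U_0$ and $U_4$.\n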
The goal of this section is to show that the symplectic centraliser of any of these circle actions is homotopy equivalent to the pushout of the two tori $\\mathbb{T}^2_m$ and $\\mathbb{T}^2_{m'}$ along the common $S^1$ in the category of topological groups.\\\\ \n\nBefore delving into the technicalities, it may be useful to outline the proof, which is an adaptation of the Anjos-Granja argument used in~\\cite{AG} to compute the homotopy type of the full group of symplectomorphisms of $S^2 \\times S^2$ for $1<\\lambda\\leq 2$. The first step is to show that the two inclusions \\[\\mathbb{T}^2_{m} \\hookrightarrow \\Symp_h^{S^1}(S^2 \\times S^2,\\omega_\\lambda)\\quad \\text{~and~}\\quad \\mathbb{T}^2_{m'} \\hookrightarrow \\Symp_h^{S^1}(S^2 \\times S^2,\\omega_\\lambda)\\]\ninduce injective maps in homology. By the Leray-Hirsch theorem, it follows that the cohomology module of the total space of each of the fibrations\n\\[\\mathbb{T}^2_m\\to\\Symp^{S^1}_h(S^2 \\times S^2,\\omega_\\lambda)\\to \\Symp^{S^1}_h(S^2 \\times S^2,\\omega_\\lambda)\/\\mathbb{T}^2_m \\simeq \\mathcal{J}^{S^1}_{\\om_\\lambda}\\cap U_m\\]\n\\[\\mathbb{T}^2_{m'}\\to\\Symp^{S^1}_h(S^2 \\times S^2,\\omega_\\lambda)\\to\\Symp^{S^1}_h(S^2 \\times S^2,\\omega_\\lambda)\/\\mathbb{T}^2_{m'} \\simeq \\mathcal{J}^{S^1}_{\\om_\\lambda}\\cap U_{m'}\\]\nsplits (with coefficients in an arbitrary field $k$). Using the fact that the contractible space of invariant compatible almost-complex structures decomposes as the disjoint union\n\\[\\mathcal{J}^{S^1}_{\\om_\\lambda}=(\\mathcal{J}^{S^1}_{\\om_\\lambda}\\cap U_m)\\sqcup (\\mathcal{J}^{S^1}_{\\om_\\lambda}\\cap U_{m'})\\]\nthe rank of $H^i(\\Symp^{S^1}_h(S^2 \\times S^2,\\omega_\\lambda);k)$ can be computed inductively from Alexander-Eells duality. We then compute the cohomology algebra and the Pontryagin algebra of the pushout\n\\[P = \\pushout(\\mathbb{T}^2_m\\leftarrow S^1\\to \\mathbb{T}^2_{m'})\\]\nin the category of topological groups. 
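\nFor reference, the pushout $P$ is characterised by the usual universal property in the category of topological groups: the square\n\\[\n\\begin{tikzcd}\n &S^1 \\arrow[r] \\arrow[d] &\\mathbb{T}^2_{m} \\arrow[d] \\\\\n &\\mathbb{T}^2_{m'} \\arrow[r] &P\n\\end{tikzcd}\n\\]\ncommutes, and any pair of continuous homomorphisms $\\mathbb{T}^2_m \\to G$ and $\\mathbb{T}^2_{m'} \\to G$ agreeing on $S^1$ factors uniquely through $P$. In particular, the natural map $\\Upsilon$ considered below is the one induced by the two inclusions of the tori into $\\Symp^{S^1}_h(S^2 \\times S^2,\\omega_\\lambda)$.\n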
We then show that the natural map\n\\[\\Upsilon:P\\to \\Symp^{S^1}_h(S^2 \\times S^2,\\omega_\\lambda)\\]\nis a homotopy equivalence in the category of topological groups. We further prove that $P$ is weakly homotopy equivalent, as a topological space, to the product $\\Omega S^{3}\\times S^{1}\\times S^{1}\\times S^{1}$.\n\n\n\n\n\\subsection{Homological injectivity}\nWe first show that the two inclusions $\\mathbb{T}^2_{m} \\hookrightarrow \\Symp_h^{S^1}(S^2 \\times S^2,\\omega_\\lambda)$ and $\\mathbb{T}^2_{m'} \\hookrightarrow \\Symp_h^{S^1}(S^2 \\times S^2,\\omega_\\lambda)$ induce injective maps in homology. As the argument does not depend on $m$, we shall only provide the details for the inclusion $\\mathbb{T}^2_{m} \\hookrightarrow \\Symp_h^{S^1}(S^2 \\times S^2,\\omega_\\lambda)$.\\\\\n\nFix a symplectomorphism $\\phi_m: W_m\\to (S^2 \\times S^2,\\omega_\\lambda)$ compatible with the fibration structures. Let $\\{*\\}$ be the $S^1(1,b,m)$ fixed point $\\left([0,1],[0,0,1]\\right)$ in $W_m$, and let $\\mathcal{E}(W_m, *)$ denote the space of orientation-preserving, pointed homotopy self-equivalences of $(W_m,*)$.
Similarly, define $\\mathcal{E}(S^2, *)$ to be the space of all orientation preserving homotopy self-equivalences of the sphere preserving a base point $\\{*\\}$.\n\\\\\n\nWe now observe that for the above two families of circle actions (i) and (ii), the same argument as in Lemma~\\ref{lemma:SymphPreservesAnIsolatedFixedPoint} shows that any $\\phi\\in\\Symp_h^{S^1}(S^2 \\times S^2,\\omega_\\lambda)= \\Symp_h^{S^1}(W_{m})$ fixes the base point $\\{*\\}$.\\\\\n\nNow, recall that the zero section $s_0$ of $W_m$ is given by\n\\begin{align*}\n s_{0}: S^2 &\\to W_m \\\\\n \\left[z_{0}, z_{1}\\right] &\\mapsto\\left(\\left[z_{0}, z_{1}\\right],[0,0,1]\\right)\n\\end{align*} \nand the projection to the first factor is\n\\begin{align*}\n \\pi_1: W_m &\\to S^2 \\\\\n \\left(\\left[z_{0}, z_{1}\\right],\\left[w_{0}, w_{1}, w_{2}\\right]\\right) &\\mapsto\\left[z_{0}, z_{1}\\right]\n\\end{align*}\nWe define a continuous map $h_1:\\Symp_h^{S^1} (S^2 \\times S^2,\\omega_\\lambda) \\to \\mathcal{E}\\left(S^{2}, *\\right)$ by setting\n\\begin{equation*}\n \\begin{aligned}\nh_1:\\Symp_h^{S^1} (S^2 \\times S^2,\\omega_\\lambda) &\\to \\mathcal{E}\\left(S^{2}, *\\right) \\\\\n\\psi &\\mapsto \\psi_1:= \\pi_{1} \\circ \\psi \\circ s_{0}\n\\end{aligned}\n\\end{equation*}\nSimilarly, using the inclusion of $S^{2}$ as the fiber \n\\begin{align*}\n f: S^2 &\\to W_m \\\\\n \\left[z_{0}, z_{1}\\right] &\\mapsto\\left([0,1],\\left[ 0,z_{0}, z_{1}\\right]\\right)\n\\end{align*}\nand the projection to the second factor $\\pi_{2}: S^2 \\times S^2 \\to S^{2}$, we can define a map \n\\begin{align*}\n h_2: \\Symp_h^{S^1} (S^2 \\times S^2,\\omega_\\lambda) &\\to \\mathcal{E}\\left(S^{2}, *\\right) \\\\\n \\psi &\\mapsto \\psi_2:= \\pi_{2} \\circ \\psi \\circ f\n\\end{align*}\n\nWe thus get a continuous map \n\\begin{align*}\n h: \\Symp_h^{S^1} (S^2 \\times S^2,\\omega_\\lambda) &\\to \\mathcal{E}(S^2, *) \\times \\mathcal{E}(S^2,*) \\\\\n \\psi &\\mapsto \\left(h_1(\\psi),h_2(\\psi)\\right) 
\\\\\n\\end{align*}\n\\begin{lemma} \\label{inj}\nThe inclusion $i_m:\\mathbb{T}^2_m \\hookrightarrow \\Symp_h^{S^1} (S^2 \\times S^2,\\omega_\\lambda) $ induces a map which is injective in homology with coefficients in any field $k$.\n\\end{lemma}\n\\begin{proof}\nAs $\\mathbb{T}^2$ is connected, $i_m: H_0(\\mathbb{T}^2_m; k) \\to H_0(\\Symp_h^{S^1}(S^2 \\times S^2,\\omega_\\lambda); k)$ is injective. To show that the inclusion map induces an injection at the $H_1$ level, we consider the composition $\\alpha: \\mathbb{T}^2_m \\to \\mathcal{E}(S^2,*) \\times \\mathcal{E}(S^2,*)$ given by\n\\[\n\\begin{tikzcd}\n &\\mathbb{T}^2_m \\arrow[r,hookrightarrow] &\\Symp_h^{S^1} (S^2 \\times S^2,\\omega_\\lambda) \\arrow[r,rightarrow,\"h\"] &\\mathcal{E}(S^2, *) \\times \\mathcal{E}(S^2, *) \n\\end{tikzcd}\n\\]\nand show that $\\alpha$ induces a map which is injective in homology. \n\nWe claim that $H_1(\\mathcal{E}(S^2,*);\\mathbb{Z})\\simeq \\mathbb{Z}$. Indeed, the standard action of $\\SO(3)$ on $S^2$ gives rise to a diagram of fibrations\n\\[\n\\begin{tikzcd}\n & \\mathcal{E}(S^2,*)\\arrow[r] &\\mathcal{E}(S^2) \\arrow[r,twoheadrightarrow,\"\\ev\"] &S^2 \\\\\n &S^1= \\SO(2) \\arrow[u,hookrightarrow] \\arrow[r] &\\SO(3) \\arrow[u,hookrightarrow] \\arrow[r,\"\\ev\",twoheadrightarrow] &S^2 \\arrow[u,equal] \\\\\n\\end{tikzcd}\n\\]\n\nwhere the maps $\\ev$ are evaluations at the base point $\\{*\\}$. 
This induces a long exact ladder of homotopy groups\n\\[\n\\begin{tikzcd}\n &\\cdots \\arrow[r] &\\cancelto{\\mathbb{Z}}{\\pi_2(S^2)} \\arrow[r] &\\pi_1(\\mathcal{E}(S^2,*)) \\arrow[r] &\\pi_1(SO(3)) \\times \\cancelto{0}{\\pi_1(\\widetilde{\\Omega^2})} \\arrow[r] &\\cancelto{0}{\\pi_1(S^2)} \\\\\n &\\cdots \\arrow[r] &\\mathbb{Z} \\arrow[u,equal] \\arrow[r]&\\cancelto{\\mathbb{Z}}{\\pi_1(S^1)} \\arrow[r] \\arrow[u,\"\\beta\"] &{\\pi_1(SO(3))}\\arrow[u] \\arrow[r] &\\cancelto{0}{\\pi_1(S^2)} \\arrow[u,equal]\n\\end{tikzcd}\n\\]\nwhere we have used the fact, proven by Hansen in~\\cite{Hans}, that $\\mathcal{E}(S^2) \\simeq SO(3) \\times \\widetilde{\\Omega^2}$, where $\\widetilde{\\Omega^2}$ denotes the universal covering space of the connected component of the double loop space of $S^2$ containing the constant based map, and where the $\\SO(3)$ component is just the inclusion. Consequently, $\\pi_1(\\widetilde{\\Omega^2})= 0$ and the map $\\pi_1(SO(3)) \\to \\pi_1(SO(3)) \\times \\pi_1(\\widetilde{\\Omega^2})$ is an isomorphism. From the commutativity of the middle square, it follows that $\\beta:\\pi_1(S^1) \\to \\pi_1(\\mathcal{E}(S^2, *))$ is also an isomorphism. As the spaces we consider are H-spaces, $\\pi_1$ is abelian and hence $\\pi_1 = H_1$, proving the claim.\\\\\n\nNow, the classes $a$ and $b$ of the subcircles $(0,1)$ and $(1,0)$ form a basis for $H_1(\\mathbb{T}^2_m; k)$. We claim that $\\alpha_*[0,1]$ and $\\alpha_*[1,0]$ generate a subgroup of rank 2. To see this, let us write $\\alpha_*^1$ and $\\alpha_*^2$ for the components of $\\alpha_*$. Then, $\\alpha^1_*[0,1] = 0$ as the circle $(0,1)$ fixes the zero section $\\left([x_1,x_2],[0,0,1]\\right) \\subset W_m$ pointwise, while $\\alpha^2_*[0,1] \\neq 0$ by the reasoning in the previous paragraph. Similarly, $\\alpha^1_*[1,0] \\neq 0$ and $\\alpha^2_*[1,0]= 0$, proving our claim. We conclude that $\\alpha_*$ is injective on $H_1(\\mathbb{T}^2_m; k)$ and, since $\\alpha = h \\circ i_m$, so is $(i_m)_*$. 
\n\\\\\n\nFinally, to show that $i_*$ is injective on $H_2(\\mathbb{T}^2_m;k)$, we will prove the dual statement, namely, that the map $i^*:H^2(\\Symp^{S^1}_h(S^2 \\times S^2,\\omega_\\lambda);k) \\to H^2(\\mathbb{T}^2_m;k)$ is surjective. A generator of $H^2(\\mathbb{T}^2_m;k) \\cong k$ is given by $a \\cup b$. Because $i_*$ is injective at the $H_1$ level, $i^*: H^1(\\Symp^{S^1}_h(S^2 \\times S^2,\\omega_\\lambda);k) \\to H^1(\\mathbb{T}^2_m;k)$ is surjective, hence there exist elements $a^\\prime$, $b^\\prime \\in H^1(\\Symp^{S^1}_h(S^2 \\times S^2,\\omega_\\lambda);k)$ such that $i^*(a^\\prime) = a$ and $i^*(b^\\prime) = b$. Since $i^*(a^\\prime \\cup b^\\prime) = i^*(a^\\prime) \\cup i^*(b^\\prime) = a \\cup b$, it follows that $i^*:H^2(\\Symp^{S^1}_h(S^2 \\times S^2,\\omega_\\lambda);k) \\to H^2(\\mathbb{T}^2_m;k)$ is surjective.\n\\end{proof}\n\\subsection[Cohomology module of the centralizer]{Cohomology module of the centralizer of $S^1(\\pm1,b;m)$}\\label{subsection:CohomologyModule}\nWe are now ready to compute the cohomology module of the centralizer of $S^1(\\pm1,b;m)$ with coefficients in a field $k$. By duality, this is equivalent to determining the homology module.\n\nRecall that the contractible space of invariant compatible almost-complex structures $\\mathcal{J}^{S^1}_{\\om_\\lambda}$ decomposes as the disjoint union\n\\[\\mathcal{J}^{S^1}_{\\om_\\lambda}=(\\mathcal{J}^{S^1}_{\\om_\\lambda}\\cap U_m)\\sqcup (\\mathcal{J}^{S^1}_{\\om_\\lambda}\\cap U_{m'})=:U_{m}^{S^1} \\sqcup U_{m'}^{S^1}\\]\nwhere, for convenience, we set $U_m^{S^1}=\\mathcal{J}^{S^1}_{\\om_\\lambda}\\cap U_m$ and $U_{m'}^{S^1}=\\mathcal{J}^{S^1}_{\\om_\\lambda}\\cap U_{m'}$. 
We will show in Chapter~\\ref{Chapter-codimension} the following two important facts:\n\\begin{itemize}\n\\item the strata $U_{m}^{S^1}$ and $U_{m'}^{S^1}$ are submanifolds of $\\mathcal{J}^{S^1}_{\\om_\\lambda}$ (see Corollary~\\ref{cor:StrataAreSubmanifolds}), and \n\\item the stratum $U_{m}^{S^1}$ is open in $\\mathcal{J}^{S^1}_{\\om_\\lambda}$, while $U_{m'}^{S^1}$ is of codimension $2$ (see Theorem~\\ref{codimension_calc}). \n\\end{itemize}\nIn particular, since removing a submanifold of codimension $2$ does not disconnect a connected manifold, it follows that $U_{m}^{S^1}=\\mathcal{J}^{S^1}_{\\om_\\lambda}-U_{m'}^{S^1}$ is connected. As explained in Appendix~\\ref{Appendix-Alexander-Eells}, Proposition~\\ref{prop:AlexanderEellsGeometric}, the Alexander-Eells duality induces an isomorphism of homology groups\n\\begin{equation}\\label{eq:AlexanderEellsIsomorphismHomology}\n\\lambda_{*}:H_{p}(U_{m'}^{S^1};k)\\to H_{p+1}(U_{m}^{S^1};k)\n\\end{equation}\n\nNow recall that we also have fibrations\n\\begin{align}\\label{eq:TheTwoMainFibrations}\n\\mathbb{T}^2_m\\to\\Symp^{S^1}_h(S^2 \\times S^2,\\omega_\\lambda)\\xrightarrow{p_m} \\Symp^{S^1}_h(S^2 \\times S^2,\\omega_\\lambda)\/\\mathbb{T}^2_m \\simeq U_{m}^{S^1}\\\\\n\\mathbb{T}^2_{m'}\\to\\Symp^{S^1}_h(S^2 \\times S^2,\\omega_\\lambda)\\xrightarrow{p_{m'}}\\Symp^{S^1}_h(S^2 \\times S^2,\\omega_\\lambda)\/\\mathbb{T}^2_{m'} \\simeq U_{m'}^{S^1}\\notag\n\\end{align}\nFrom the first fibration, the connectedness of the open stratum $U_{m}^{S^1}$ implies that the group $\\Symp^{S^1}_h(S^2 \\times S^2,\\omega_\\lambda)$ is connected. 
In turn, the second fibration implies that the codimension 2 stratum $U_{m'}^{S^1}$ is also connected.\nBecause the two inclusions \\[\\mathbb{T}^2_{m} \\hookrightarrow \\Symp_h^{S^1}(S^2 \\times S^2,\\omega_\\lambda)\\quad \\text{~and~}\\quad \\mathbb{T}^2_{m'} \\hookrightarrow \\Symp_h^{S^1}(S^2 \\times S^2,\\omega_\\lambda)\\]\ninduce surjective maps in cohomology, the Leray-Hirsch theorem implies that the cohomology module of $\\Symp^{S^1}_h(S^2 \\times S^2,\\omega_\\lambda)$ splits as\n\\begin{align}\\label{eq:SplittingCohomology}\nH^*(\\Symp^{S^1}_h(S^2 \\times S^2,\\omega_\\lambda),k) \\cong H^*(U_{m}^{S^1};k) \\otimes H^*(\\mathbb{T}^2_m;k)\\\\\nH^*(\\Symp^{S^1}_h(S^2 \\times S^2,\\omega_\\lambda),k) \\cong H^*(U_{m'}^{S^1};k) \\otimes H^*(\\mathbb{T}^2_{m'};k)\\notag\n\\end{align}\nBy duality, we have corresponding splittings in homology, namely,\n\\begin{align}\\label{eq:SplittingHomology}\nH_*(\\Symp^{S^1}_h(S^2 \\times S^2,\\omega_\\lambda),k) \\cong H_*(U_{m}^{S^1};k) \\otimes H_*(\\mathbb{T}^2_m;k)\\\\\nH_*(\\Symp^{S^1}_h(S^2 \\times S^2,\\omega_\\lambda),k) \\cong H_*(U_{m'}^{S^1};k) \\otimes H_*(\\mathbb{T}^2_{m'};k)\\notag\n\\end{align}\nComparing the two splittings degree by degree, it follows that \n\\[H_{p}(U_{m}^{S^1};k)\\simeq H_{p}(U_{m'}^{S^1};k)\\text{~for all~}p\\geq 0\\]\nTogether with the Alexander-Eells isomorphism~(\\ref{eq:AlexanderEellsIsomorphismHomology}) and the connectedness of $U_{m'}^{S^1}$, this implies that\n\\[H_{p}(U_{m}^{S^1};k)\\simeq k\\text{~for all~}p\\geq 0\\]\nUsing the splitting~(\\ref{eq:SplittingHomology}) and dualizing, we can finally compute the cohomology module of $\\Symp^{S^1}_h(S^2 \\times S^2,\\omega_\\lambda)$.\n\n\\begin{thm}{\\label{cohom}} Consider any of the following circle actions:\n\\begin{itemize}\n\\item (i) $a=1$, $b \\neq \\{0, m\\}$, and $2\\lambda > |2b-m|$; or\n\\item (ii) $a=-1$, $b \\neq \\{0, -m\\}$, and $2 \\lambda > |2b+m|$.\n\\end{itemize} Then, the cohomology groups of the symplectic centralizer are\n$$H^p\\left(\\Symp^{S^1}(S^2 \\times 
S^2,\\omega_\\lambda); k\\right) \\simeq \\begin{cases}\nk^4 & p \\geq 2\\\\\nk^3 & p = 1 \\\\\nk & p = 0\\\\\n\\end{cases}$$\nfor any field $k$. In particular, the topological group $\\Symp^{S^1}(S^2 \\times S^2,\\omega_\\lambda)$ is of finite type.\n\\end{thm}\n\n\n\n\\subsection{The homotopy pushout \\texorpdfstring{$T_m\\leftarrow S^1(\\pm1,b;m)\\to T_{m'}$}{the two inclusions}}\nAs explained in Corollary~\\ref{cor:CircleExtensionsWith_a=1} and Corollary~\\ref{cor:CircleExtensionsWith_a=-1}, the circle actions $S^1(\\pm1,b;m)$ we are considering in this section extend to exactly two toric actions $\\mathbb{T}^2_m$ and $\\mathbb{T}^2_{m'}$. Geometrically, this means that the two tori $\\mathbb{T}^2_m$ and $\\mathbb{T}^2_{m'}$ intersect in $\\Symp^{S^1}_h(S^2 \\times S^2,\\omega_\\lambda)$ along the circle $S^1(\\pm1,b;m)$ and, in particular, that we have two inclusions of Lie groups\n\\[\n\\begin{tikzcd}\nS^{1} \\arrow{r}{(1,b')} \\arrow[swap]{d}{(1,b)} & T^{2}_{m'} \\\\\nT^{2}_{m} & \n\\end{tikzcd}\n\\]\nIn this section we consider the homotopy pushout of these two inclusions, namely,\n\\[P:=\\pushout(T_m\\leftarrow S^1\\to T_{m'})\\]\nThis pushout is to be understood in the category of topological groups. As we will show later, the topological group $P$ turns out to be a model for the homotopy type of the centralizer $\\Symp^{S^1}_h(S^2 \\times S^2,\\omega_\\lambda)$.\n\n\\subsubsection*{The Pontryagin algebra of the pushout}\nIn what follows, all $k$-algebras are graded, and the commutator of two elements is given by\n\\[[a,b] = ab - (-1)^{|a|\\cdot|b|}ba\\]\nFor any field $k$, and for any abelian group $A$, the Pontryagin algebra $H_{*}(A;k)$ is isomorphic to the cohomology algebra $H^{*}(A;k)$. It follows that $H_{*}(S^{1})$ is isomorphic to $\\Lambda(t)$, where $t$ is of degree $1$. Similarly, the Pontryagin algebra $H_{*}(T^{2};k)$ is isomorphic to an exterior algebra $\\Lambda(z_{1},z_{2})$ generated by two elements of degree one. 
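Since every generator in play here has degree one, it may be worth spelling out the sign convention: for odd-degree classes the graded commutator above is the anticommutator.

```latex
% For |a| = |b| = 1 the graded commutator specializes to
\[
  [a,b] = ab - (-1)^{1\cdot 1}ba = ab + ba,
\]
% so the relation [z_1, z_2] = 0 in \Lambda(z_1, z_2) says that the
% degree-one generators anticommute: z_1 z_2 = -z_2 z_1.
```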
The pushout diagram of topological groups \n\\[\n\\begin{tikzcd}\nS^{1} \\arrow{r}{(1,b')} \\arrow[swap]{d}{(1,b)} & T^{2}_{m'} \\arrow{d}{} \\\\\nT^{2}_{m} \\arrow{r}{} & P\n\\end{tikzcd}\n\\]\nis homologically free (see Definition~3.1 in~\\cite{AG}). As before, $P$ denotes the pushout in the category of topological groups. By Theorem~3.8 of~\\cite{AG}, the Pontryagin algebra of $P$ is the pushout of $k$ algebras\n\\[\n\\begin{tikzcd}\nH_{*}(S^{1};k) \\arrow{r}{H(1,b')} \\arrow[swap]{d}{H(1,b)} & H_{*}(T^{2}_{m'};k) \\arrow{d}{} \\\\\nH_{*}(T^{2}_{m};k) \\arrow{r}{} & H_{*}(P;k)\n\\end{tikzcd}\n\\]\nwhich is isomorphic to\n\\[\n\\begin{tikzcd}\n\\Lambda(t) \\arrow{r}{(1,b')} \\arrow[swap]{d}{(1,b)} & \\Lambda(y_{1},y_{2}) \\arrow{d}{} \\\\\n\\Lambda(x_{1},x_{2}) \\arrow{r}{} & P^{alg}_{*}\n\\end{tikzcd}\n\\]\nwhere $P^{alg}_{*}\\simeq H_{*}(P;k)$. By the description of the pushout of $k$ algebras as amalgamated products (see \\cite{AG} for more details), the $k$ algebra $P^{alg}_{*}$ can be identified with equivalence classes of finite linear combinations of words in the letters $\\{x_{1},x_{2},y_{1},y_{2}\\}$ under the relations $x_{i}x_{i}=0$, $y_{i}y_{i}=0$, $[x_{1},x_{2}]=0$, $[y_{1},y_{2}]=0$, and $x_{1}+bx_{2}=y_{1}+b'y_{2}$. From the last equality, we can write $y_{1}=(x_{1}+bx_{2})-b'y_{2}$, which means that we can choose, as generators, the elements \\[\\{t=x_{1}+bx_{2}, ~x_{2}, ~y_{2}\\}\\]\nwith the relations $t^{2}=x_{2}^{2}=y_{2}^{2}=0$, $[t,x_{2}]=[t,y_{2}]=0$. The remaining commutator $w=[x_{2},y_{2}]$ is nonzero and commutes with $t$, $x_{2}$ and $y_{2}$. It follows that any word in $t,x_{2},y_{2}$ is equivalent to a linear combination of words of the form\n\\[w^{\\alpha}x_{2}^{\\beta}y_{2}^{\\gamma}t^{\\delta}\\]\nwith $\\alpha\\in\\mathbb{N}\\cup\\{0\\}$, and $\\beta,\\gamma,\\delta\\in\\{0,1\\}$. 
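As a quick sanity check on this additive basis (a verification aside, not part of the argument; the helper name below is ad hoc), one can count the words $w^{\alpha}x_{2}^{\beta}y_{2}^{\gamma}t^{\delta}$ in each total degree, where $w$ has degree $2$ and $x_{2}$, $y_{2}$, $t$ have degree $1$:

```python
from itertools import product

def rank(n):
    """Count basis words w^a x2^b y2^c t^d of total degree n, where
    deg(w) = 2, deg(x2) = deg(y2) = deg(t) = 1, a >= 0, b, c, d in {0, 1}."""
    count = 0
    for b, c, d in product((0, 1), repeat=3):
        rem = n - (b + c + d)  # degree left over for the power of w
        if rem >= 0 and rem % 2 == 0:  # the exponent a = rem / 2 is determined
            count += 1
    return count

print([rank(n) for n in range(6)])  # [1, 3, 4, 4, 4, 4]
```

The output matches the graded ranks stated next: $k$ in degree $0$, $k^{3}$ in degree $1$, and $k^{4}$ in every degree $\geq 2$.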
Hence, there is an isomorphism of graded algebras\n\\[P^{alg}_{*}\\cong \\frac{F(x_{2},y_{2})}{\\langle x_{2}^{2},y_{2}^{2}\\rangle}\\otimes \\Lambda(t)\\]\nwhere $F(x_{2},y_{2})$ denotes the free graded algebra over $k$ generated by the elements $x_{2}$ and $y_{2}$, and where $x_{2},y_{2},t$ are of degree one. In particular,\n\\[P^{alg}_{n}\\simeq\n\\begin{cases}\nk & n=0\\\\\nk^{3} & n=1\\\\\nk^{4} & n\\geq 2\n\\end{cases}\\]\nand the words $w^{\\alpha}x_{2}^{\\beta}y_{2}^{\\gamma}t^{\\delta}$ form an additive basis of the homology module $P^{alg}_{*}$.\n\nBy duality, the cohomology modules $P^{alg,*}$ are $P^{alg,0} \\simeq k$, $P^{alg,1}\\simeq k^3$, and $P^{alg,n}\\simeq k^4$ for all $n\\geq 2$. The algebra structure of $P^{alg,*}$ can be determined as follows. Let $\\hat t$, $\\hat x_{2}$, and $\\hat y_{2}$ be the duals of the generators of degree $1$, and let $\\hat w$ be the dual of the generator $w=[x_2,y_2]$ of degree $2$.\n\\\\\n\nLet us now recall the Hopf-Borel theorem (see~\\cite{McCleary} Theorem 6.36).\n\\begin{thm}(Hopf-Borel)\nLet k be a field of characteristic p where p may be zero or a prime. A connected Hopf algebra $H$ over $k$ is said to be monogenic if $H$ is generated as an algebra by 1 and one homogeneous element $x$ of degree strictly greater than 0. If $H$ is a monogenic Hopf algebra, then\n\\begin{enumerate}\n \\item if $p \\neq 2$ and degree $x$ is odd, then $H \\cong \\Lambda(x)$,\n \\item if $p \\neq 2$ and degree $x$ is even, then $H \\cong k[x] \/\\left\\langle x^{s}\\right\\rangle$ where $s$ is a power of p or is infinite i.e $H \\cong k[x]$,\n \\item if $p=2$, then $H \\cong k[x] \/\\left\\langle x^{s}\\right\\rangle$ where $s$ is a power of 2 or is infinite.\n\\end{enumerate}\n\\end{thm}\n\nAs $P^{alg,*}$ is an associative, graded commutative Hopf algebra of finite type, the Hopf-Borel theorem (see~\\cite{McCleary} Theorem 6.36) implies that $P^{alg,*}$ is a tensor product of monogenic Hopf algebras. 
For a field $k$ of characteristic $p$ different from $2$, including $p=0$, $P^{alg,*}$ contains a subalgebra of the form \n\\[A^*=\\Lambda(\\hat t,\\hat x_2,\\hat y_2)\\otimes k[\\hat w]\/\\langle \\hat w^s\\rangle\\]\nwhere $s$ is a power of $p$ or is infinite. Suppose $s=p^n\\geq 3$ is finite. Then, the rank of $A^i$ would coincide with the rank of $P^{alg,i}$ up to degree $i=2s-1$, and we would have $A^i=0$ for $i\\geq 2s$. Therefore, we would need $4$ more generators of degree $2s$ to account for the rank of $P^{alg,2s}$, and their pairwise products would imply that $\\rk P^{alg,4s}>4$. This contradiction shows that $s$ must be infinite and that the rank of $A^i$ equals the rank of $P^{alg,i}$ for all $i\\geq 0$. Consequently, for a field $k$ of characteristic $p\\neq 2$, the $k$-algebra $P^{alg,*}$ is isomorphic to\n\\[P^{alg,*}\\cong \\Lambda(\\hat t,\\hat x_{2},\\hat y_{2})\\otimes S(\\hat w)\\]\nIn characteristic $p=2$, $P^{alg,*}$ is the tensor product of truncated polynomial algebras $k[z_i]\/z^{s_i}_{i}$ where $s_i$ is a power of $2$. As before, it contains a subalgebra of the form\n\\[A^*=k[\\hat t,\\hat x_2,\\hat y_2]\/ \\langle \\hat t^2,\\hat x_2^2,\\hat y_2^2 \\rangle \\otimes k[\\hat w]\/\\langle \\hat w^s\\rangle\\]\nAgain, assuming $s$ is finite forces the existence of $4$ new generators in degree $2s$ whose products would yield too many generators in degree $4s$. Therefore, in characteristic $p=2$, the cohomology algebra of $P$ is isomorphic to\n\\[P^{alg,*}\\cong k[\\hat t,\\hat x_{2},\\hat y_{2}]\/ \\langle \\hat t^2,\\hat x_2^2,\\hat y_2^2 \\rangle \\otimes k[\\hat w]\\]\nIn characteristic zero, the computation of the cohomology ring yields the minimal model of $H^{*}(P)\\otimes\\mathbb{Q}$. 
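As a cross-check on these ranks, the Poincaré series of $\Lambda(\hat t,\hat x_{2},\hat y_{2})\otimes k[\hat w]$ (three exterior generators of degree one and one polynomial generator of degree two; the same count applies verbatim to the truncated polynomial description in characteristic $2$) reproduces the graded ranks of $P^{alg,*}$:

```latex
\[
  \frac{(1+t)^{3}}{1-t^{2}}
  = \frac{(1+t)^{2}}{1-t}
  = (1+2t+t^{2})\sum_{i\geq 0} t^{i}
  = 1 + 3t + 4t^{2} + 4t^{3} + \cdots
\]
```

in agreement with $P^{alg,0}\simeq k$, $P^{alg,1}\simeq k^{3}$, and $P^{alg,n}\simeq k^{4}$ for $n\geq 2$.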
As $P$ is an H-space, it is a nilpotent space (see Exercise~1.13 in \\cite{dgcRationalHomotopy}), so that the main theorem of dgc rational homotopy theory applies (see \\cite{dgcRationalHomotopy}, Theorem~2.50): the dimension of $\\pi_{p}(P)\\otimes\\mathbb{Q}$ for $p\\geq 2$ is equal to the number of generators of degree $p$ in the minimal model. For $p=1$, as $P$ is a topological group, the dimension of $\\pi_1(P) \\otimes \\mathbb{Q}$ is the same as the rank of $H_1(P;\\mathbb{Q})$. Consequently,\n\\[\\pi_{p}(P)\\otimes\\mathbb{Q}\\simeq\n\\begin{cases}\n\\mathbb{Q} & p=0\\\\\n\\mathbb{Q}^{3} & p=1\\\\\n\\mathbb{Q} & p= 2\\\\\n0 & p\\geq 3\n\\end{cases}\\]\n\n\\subsubsection*{The homotopy type of $P$}\nWe want to better understand the homotopy type of the space $P$. To this end, consider the embeddings\n\\begin{align}\nf_{m}:T^{2}_{m}&\\to S^{1}\\times S^{1}\\times S^{1}\\\\\n(x_{1},x_{2})&\\mapsto (x_{1},x_{2},b'x_{1})\\notag\\\\\n&\\notag\\\\\nf_{m'}:T^{2}_{m'}&\\to S^{1}\\times S^{1}\\times S^{1}\\\\\n(y_{1},y_{2})&\\mapsto (y_{1},by_{1},y_{2})\\notag\n\\end{align}\nThe universal property of pushouts implies that there is a unique map $f_{P}:P\\to S^{1}\\times S^{1}\\times S^{1}$ making the following diagram of classifying spaces commutative\n\\[\n\\begin{tikzcd}\nBS^{1} \\arrow{r}{B(1,b')} \\arrow[swap]{d}{B(1,b)} & BT^{2}_{m'} \\arrow{d}{} \\arrow[bend left=10]{ddr}{Bf_{m'}} & \\\\\nBT^{2}_{m} \\arrow{r}{}\\arrow[bend right=10,swap]{drr}{Bf_{m}} & BP \\arrow[dotted]{rd}{Bf_{P}} & \\\\\n & & BS^{1}\\times BS^{1}\\times BS^{1}\n\\end{tikzcd}\n\\]\nBy Theorem~3.9 of~\\cite{AG}, the homotopy fiber of $Bf_{P}$ is the pushout of the homotopy fibers of the other maps in the diagram. 
To determine this fiber, we first replace the maps in the diagram of groups by homotopy equivalent fibrations\n\\[\n\\begin{tikzcd}[column sep=huge]\n\\mathbb{Z}\\ar[swap]{d}{(1,1,a_{1})} & \\mathbb{Z}\\times\\mathbb{Z} \\ar{d}{(1,a_{1},a_{2})}\\ar[swap]{l}{a_{2}} \\ar{r}{a_{1}}& \\mathbb{Z} \\ar{d}{(1,1,a_{1})}\\\\\nT^{2}_{m}\\times\\mathbb{R} \\ar[swap]{d}{(a_{1},a_{2},b'a_{1}e({a_{3}}))} & S^{1}\\times\\mathbb{R}\\times\\mathbb{R} \\ar[swap]{l}{(a_{1},ba_{1}e({a_{2}}),a_{3})} \\ar{r}{(a_{1},b'a_{1}e({a_{3}}),a_{2})} \\ar{d}{(a_{1},ba_{1}e({a_{2}}),b'a_{1} e({a_{3}}))}& T^{2}_{m'}\\times\\mathbb{R}\\ar{d}{(a_{1},ba_{1}e({a_{3}}),a_{2})}\\\\\nS^{1}\\times S^{1}\\times S^{1}\\ar{r}{=} & S^{1}\\times S^{1}\\times S^{1} &S^{1}\\times S^{1}\\times S^{1}\\ar[swap]{l}{=}\n\\end{tikzcd}\n\\]\n\nwhere $a_{i}$ denote the $i^{\\text{th}}$ coordinate function and $e(a_j) = e^{2\\pi i a_j}$. Applying the classifying space functor, this gives\n\\[\n\\begin{tikzcd}[column sep=normal]\nS^{1}\\ar[swap]{d}{} & S^{1}\\times S^{1}\\ar{d}{}\\ar[swap]{l}{\\text{pr}_{2}} \\ar{r}{\\text{pr}_{1}}& S^{1} \\ar{d}{}\\\\\nBT^{2}_{m} \\ar[swap]{d}{} & BS^{1} \\ar[swap]{l}{} \\ar{r}{} \\ar{d}{}& BT^{2}_{m'}\\ar{d}{}\\\\\nBS^{1}\\times BS^{1}\\times BS^{1}\\ar{r}{=} & BS^{1}\\times BS^{1}\\times BS^{1} &BS^{1}\\times BS^{1}\\times BS^{1}\\ar[swap]{l}{=}\n\\end{tikzcd}\n\\]\nwhich shows that the homotopy fiber of the canonical map $BP\\to BS^{1}\\times BS^{1}\\times BS^{1}$ is homotopy equivalent to\n\\[\\hocolim\\{S^{1}\\xleftarrow{\\text{pr}_{2}}S^{1}\\times S^{1}\\xrightarrow{\\text{pr}_{1}}S^{1}\\}\\simeq S^{1}*S^{1}\\simeq S^{3}\\]\nConsequently, $BP$ is the total space of a fibration\n\\[S^{3}\\to BP\\to BS^{1}\\times BS^{1}\\times BS^{1}\\]\nthat, after looping, becomes\n\\[\n\\begin{tikzcd}[column sep=normal]\n & T^{2}_{m'}\\ar[swap]{d}{j_{m'}} \\ar{rd}{f_{m'}=(a_{1},ba_{1},a_{2})}& \\\\\n\\Omega S^{3}\\ar{r} & P\\ar{r}{f_{P}} & S^{1}\\times S^{1}\\times S^{1}\\\\\n & 
T^{2}_{m}\\ar{u}{j_{m}} \\ar[swap]{ru}{f_{m}=(a_{1},a_{2},b'a_{1})}& \n\\end{tikzcd}\n\\]\nThe map $f_{P}$ admits a section given by \n\\[s(a_{1},a_{2},a_{3})= j_{m'}(a_1, {b'}^{-1}a_3)j_{m}(1,{b}^{-1}a_1^{-1}a_{2})\\]\nIt follows that, as a space, $P$ is weakly homotopy equivalent to the product\n\\[P\\simeq \\Omega S^{3}\\times S^{1}\\times S^{1}\\times S^{1}\\]\nwhich is consistent with the algebraic computations of the previous section.\n\n\\subsection{Homotopy type of \\texorpdfstring{$S^1(\\pm1,b;m)$}{S1(1,b;m)} equivariant symplectomorphisms}\n\nWe are now able to determine the homotopy type of the group $\\Symp_h^{S^1}(S^2 \\times S^2,\\omega_\\lambda)$ for the circle actions\n\\begin{itemize}\n\\item $S^1(1,b,m)$ when $2\\lambda > |2b-m|$, and\n\\item $S^1(-1,b,m)$ when $2\\lambda > |2b+m|$.\n\\end{itemize}\nSince the arguments are identical in the two cases, we will only discuss the first one. Again, in order to keep the notation simple, we write $\\mathbb{T}^2_m$ and $\\mathbb{T}^2_{m'}$ for the two tori the circle extends to, assuming $m'>m$, and we write $(1,b):S^1\\to\\mathbb{T}^2_m$ and $(1,b'):S^1\\to\\mathbb{T}^2_{m'}$ for the two inclusions.\\\\\n \n\nFrom the universal property of pushouts, there is a canonical map \n\\[\\Upsilon:P^{alg}_{*}\\to H_{*}(\\Symp_h^{S^1}(S^2 \\times S^2,\\omega_\\lambda);k)\\]\nmaking the following diagram commutative\n\\[\n\\begin{tikzcd}\n\\Lambda(t) \\arrow{r}{(1,b')} \\arrow[swap]{d}{(1,b)} & \\Lambda(y_{1},y_{2}) \\arrow{d}{} \\arrow[bend left=10]{ddr}{i_{m'}} & \\\\\n\\Lambda(x_{1},x_{2}) \\arrow{r}{}\\arrow[bend right=10,swap]{drr}{i_{m}} & P^{alg}_{*} \\arrow[dotted]{rd}{\\Upsilon} & \\\\\n & & H_{*}(\\Symp_h^{S^1}(S^2 \\times S^2,\\omega_\\lambda);k)\n\\end{tikzcd}\n\\]\n\n\\begin{prop}\nFor every field $k$, the map $\\Upsilon:P^{alg}_{*}\\to H_{*}(\\Symp_h^{S^1}(S^2 \\times S^2,\\omega_\\lambda);k)$ is an isomorphism of $k$-algebras.\n\\end{prop}\n\\begin{proof}\nBy definition, the map $\\Upsilon$ is a 
homomorphism of $k$-algebras. Since $P^{alg}_{i}\\cong H_{i}(\\Symp_h^{S^1}(S^2 \\times S^2,\\omega_\\lambda);k)$ for each $i$, it is sufficient to show that $\\Upsilon$ is surjective.\\\\\n\nLet $R$ be the image of $\\Upsilon$. Since the maps $i_{m}$ and $i_{m'}$ are injective, $R$ is the subring generated by the classes $t,x_{2},y_{2}$ viewed as elements in $H_{*}(\\Symp_h^{S^1}(S^2 \\times S^2,\\omega_\\lambda);k)$. Consider the two fibrations induced by the action maps\n\\begin{align*}\n\\mathbb{T}^2_m\\to\\Symp^{S^1}_h(S^2 \\times S^2,\\omega_\\lambda)\\xrightarrow{p_m} \\Symp^{S^1}_h(S^2 \\times S^2,\\omega_\\lambda)\/\\mathbb{T}^2_m \\simeq U_{m}^{S^1}\\\\\n\\mathbb{T}^2_{m'}\\to\\Symp^{S^1}_h(S^2 \\times S^2,\\omega_\\lambda)\\xrightarrow{p_{m'}}\\Symp^{S^1}_h(S^2 \\times S^2,\\omega_\\lambda)\/\\mathbb{T}^2_{m'} \\simeq U_{m'}^{S^1}\n\\end{align*}\nObserve that $p_{m}(t)=0$, $p_{m}(x_{2})=0$, $p_{m'}(t)=0$, and $p_{m'}(y_{2})=0$. Now suppose there is an element $z\\in H_{*}(\\Symp_h^{S^1}(S^2 \\times S^2,\\omega_\\lambda);k)$, not in $R$, and of minimal degree $d$. Since\n\\begin{multline}\nH_{d}(\\Symp_h^{S^1}(S^2 \\times S^2,\\omega_\\lambda);k)\\cong\\\\\nH_{d}(U_{m}^{S^1};k)\\otimes H_{0}(T^{2}_{m};k)\n~\\oplus~ H_{d-1}(U_{m}^{S^1};k)\\otimes H_{1}(T^{2}_{m};k)\n~\\oplus~ H_{d-2}(U_{m}^{S^1};k)\\otimes H_{2}(T^{2}_{m};k)\n\\end{multline}\nwe would have a decomposition\n\\[z=c_{1}\\otimes\\mathbf{1}~\\oplus~ c_{t}\\otimes t~\\oplus~ c_{x_{2}}\\otimes x_{2}~\\oplus~ c_{T}\\otimes [T^{2}_{m}]\\]\nwith at least one coefficient $c_{j}$ which is not a polynomial in the classes $p_{m}(w)$ and $p_{m}(y_{2})$. Let $c_{\\ell}$ be such a coefficient of minimal degree $d-2\\leq\\ell\\leq d$. The inverse of the Alexander-Eells isomorphism of Proposition~\\ref{prop:AlexanderEellsGeometric}\n\\[\\lambda_{*}^{-1}:H_{p+1}(U_{m}^{S^1})\\to H_{p}(U_{m'}^{S^1})\\]\nwould map $c_{\\ell}$ to a class $c_{\\ell-1}'\\in H_{\\ell-1}(U_{m'}^{S^1};k)$. 
This class could not be a polynomial in $p_{m'}(w)$ and $p_{m'}(x_{2})$ since, otherwise, \n\\[c_{\\ell} = \\lambda_{*}(c_{\\ell-1}')=p_{m}\\big([y_{2}\\otimes c_{\\ell-1}']\\big)\\]\nwould be a polynomial in the classes $p_{m}(w)$ and $p_{m}(y_{2})$. \nIn turn, this class $c_{\\ell-1}'$ would have to be the image of some element in $H_{\\ell-1}(\\Symp_h^{S^1}(S^2 \\times S^2,\\omega_\\lambda);k)$ not in $R$, contradicting the minimality of~$z$.\n\\end{proof}\n\n\\begin{cor}\nThe map $\\Upsilon:P^{alg}_{*}\\to H_{*}(\\Symp_h^{S^1}(S^2 \\times S^2,\\omega_\\lambda);\\mathbb{Z})$ is an isomorphism of Pontryagin algebras over the ring of integers.\n\\end{cor}\n\\begin{proof}\nThis follows from the well-known fact that a map induces isomorphisms on homology with $\\mathbb{Z}$ coefficients if and only if it induces isomorphisms on homology with $\\mathbb{Q}$ and $\\mathbb{Z}_{p}$ coefficients for all primes $p$, see~\\cite{Ha}, Corollary~3A.7~(b).\n\\end{proof}\n\n\\begin{thm}\\label{full_homo}\nThe map $\\Upsilon:P\\to\\Symp_h^{S^1}(S^2 \\times S^2,\\omega_\\lambda)$ is a homotopy equivalence.\n\\end{thm}\n\\begin{proof}\nThe map $\\Upsilon$ induces isomorphisms on integral homology. Because $P$ and $\\Symp_h^{S^1}(S^2 \\times S^2,\\omega_\\lambda)$ are topological groups, it follows that it is a weak equivalence, see~\\cite{Dror-WhiteheadTheorem}, Example~4.2. Because both spaces are homotopy equivalent to CW-complexes, this weak equivalence is a homotopy equivalence (see~\\cite{Ha}, Proposition~4.74).\n\\end{proof}\n\n\\section{Centralizers of Hamiltonian \\texorpdfstring{$S^1$}{circle} actions on \\texorpdfstring{$S^2 \\times S^2$}{the product}}\n\nWe summarise all the results we have obtained in this chapter in the following theorem. \n\\begin{thm} \\label{circle}\nConsider any Hamiltonian circle action $S^1(a,b;m)$ on $(S^2 \\times S^2, \\omega_\\lambda)$. 
The homotopy type of the symplectic stabilizer $\\Symp^{S^1}(S^2 \\times S^2,\\omega_\\lambda)$ is given in the table below:\n\\begin{center}\n\\begin{tabular}{|p{4.5cm}|p{3.5cm}|p{2cm}|p{4cm}|}\n \\hline\n Values of $(a,b ;m)$ & $\\lambda$ &Number of strata that $\\mathcal{J}^{S^1}_{\\om_\\lambda, l}$ intersects &Homotopy type of $\\Symp^{S^1}(S^2 \\times S^2,\\omega_\\lambda)$\\\\\n\\hline\n {$(0,\\pm 1;m)$, $m\\neq 0$} &$\\lambda > 1$ & 1 & $S^1 \\times SO(3)$ \\\\\n\\hline\n\\multirow{2}{10em}{$(0,\\pm 1;0)$ or $(\\pm 1,0;0)$} &$\\lambda = 1$ &1 &$S^1 \\times SO(3)$ \\\\\\cline{2-4}\n&$\\lambda >1$ &1 &$S^1 \\times SO(3)$ \\\\\n\\hline\n$(\\pm 1,\\pm1;0)$&$\\lambda = 1$ &1 & $\\mathbb{T}^2 \\times \\mathbb{Z}_2$ \\\\\n\\hline\n $(\\pm 1,0;m), m\\neq0$ &$\\lambda >1$ &1 &$\\mathbb{T}^2$\\\\\n \\hline\n$(\\pm1,\\pm m;m), m \\neq 0$ &$\\lambda > 1$ &1 & $\\mathbb{T}^2$\\\\\n\\hline\n\\multirow{2}{10em}{$(1,b;m), b \\neq \\{ m,0\\}$} \n&$|2b-m| \\geq2 \\lambda \\geq 1$ &1 &$\\mathbb{T}^2$ \\\\\\cline{2-4}\n&$2 \\lambda >|2b-m| \\geq 0$ &2 &$\\Omega S^3 \\times S^1 \\times S^1 \\times S^1$ \\\\\n\\hline\n\\multirow{2}{10em}{$(-1,b;m), b \\neq \\{ -m,0\\}$} \n&$|2b+m| \\geq2 \\lambda \\geq 1$ &1 &$\\mathbb{T}^2$ \\\\\\cline{2-4}\n&$2 \\lambda >|2b+m| \\geq 0$ &2 &$\\Omega S^3 \\times S^1 \\times S^1 \\times S^1$ \\\\\n \\hline\nAll other values of $(a,b;m)$ &$\\forall \\lambda$ &1 &$\\mathbb{T}^2$ \\\\\n\\hline\n\\end{tabular}\n\\end{center}\nwhere $\\Omega S^3$ denotes the based loop space of $S^3$. \\qed\n\\end{thm}\n\n\n\n\\chapter{Partition of the space of invariant almost-complex structures}\\label{Chapter-codimension}\nIn the previous section, we calculated the homotopy type of the group of $S^1(\\pm1,b;m)$ equivariant symplectomorphisms assuming that the codimension of the invariant stratum $\\mathcal{J}^{S^1}_{\\om_\\lambda} \\cap U_{m^\\prime}$ in $\\mathcal{J}^{S^1}_{\\om_\\lambda}$ is 2. 
In this section, we use deformation theory to show that the invariant stratum $\\mathcal{J}^{S^1}_{\\om_\\lambda} \\cap U_{m^\\prime}$ is a submanifold of $\\mathcal{J}^{S^1}_{\\om_\\lambda}$ and to characterize its normal bundle. We then calculate its codimension. \\\\\n\nWe mimic the techniques in \\cite{AGK} in the equivariant setting. Fix a K\\\"ahler 4-manifold $(M,\\omega,J)$ and an $S^1$ action on $(M,\\omega,J)$ such that $g^*\\omega=\\omega$ and $g^*J= J$ for all $g \\in S^1$.\nThe holomorphic $S^1$ action on the base manifold $M$ induces a natural action on the various tensor spaces such as $T^{1,0}M$ or $\\Omega^{0,k}_{J}(M, TM)$. We write $T^{1,0}M^{S^1}$, $\\Omega^{0,k}_{J}(M, TM)^{S^1}$, to denote the $S^1$ invariant elements of these tensor spaces.\n\n\n\\section{Space of invariant complex structures}\nLet $\\mathcal{J}_l$ be the space of almost complex structures of regularity $C^l$ on $M$, endowed with the $C^l$ topology. Being a space of sections, $\\mathcal{J}_l$ is a smooth Banach manifold. An explicit atlas can be constructed using the Cayley transform, see for instance~\\cite{Smolentsev}. Given $J\\in\\mathcal{J}_l$, let $\\Omega^{0,1}_{J,l}(M,TM) \\subset \\End_l(T M)$ be the space of endomorphisms of the tangent bundle of regularity $C^l$ that anticommute with $J$, that is,\n$$\n\\Omega^{0,1}_{J,l}(M,TM)=\\left\\{A \\in \\End_l(T M) \\mid A J+J A=0\\right\\} \n$$\nThe map $\\phi_{J}: \\Omega^{0,1}_{J,l}(M,TM) \\rightarrow \\mathcal{J}_l$ given by\n$$\n\\phi_{J}(A)= J e^{A}\n$$\nis a local diffeomorphism sending $C^k$ endomorphisms ($k \\geq l$) to $C^k$ almost complex structures. If $J$ is $S^1$ invariant, then $\\phi_J$ gives a bijection between invariant endomorphisms near $0$ in $\\Omega^{0,1}_{J,l}(M,TM)$ and invariant almost complex structures in a neighborhood of $J$. 
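For the reader's convenience, here is the one-line verification, implicit in the discussion above, that $\phi_{J}$ takes values in almost complex structures: from $AJ+JA=0$ one gets $e^{A}J=Je^{-A}$, hence

```latex
\[
  (Je^{A})^{2} = J\,(e^{A}J)\,e^{A} = J\,(Je^{-A})\,e^{A}
  = J^{2} = -\mathrm{id},
\]
```

so $\phi_{J}(A)=Je^{A}$ is again an almost complex structure; invariance of $J$ and $A$ then gives invariance of $Je^{A}$.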
This shows that the space $\\mathcal{J}^{S^1}_l$ of invariant almost complex structures is a Banach submanifold of $\\mathcal{J}_l$ whose tangent space $T_J\\mathcal{J}^{S^1}_l$ at $J$ is naturally identified with the linear subspace $\\Omega^{0,1}_{J,l}(M,TM)^{S^1}$.\\\\\n\n\n\n\nLet $I^{S^1}_l$ denote the space of invariant and integrable almost complex structures of $M$ with regularity $C^l$. We now show that $I^{S^1}_l$ is a Banach submanifold of $\\mathcal{J}^{S^1}_l$. To this end, let $N_J(X,Y) = [X,Y] + J\\left([JX,Y] + [X,JY]\\right) - [JX,JY]$ denote the Nijenhuis tensor with respect to $J$. By the Newlander-Nirenberg theorem, we know that $J \\in \\mathcal{J}^{S^1}_l$ is integrable if and only if $N_J=0$.\n\\\\\n\nConsider the vector bundle $\\Omega^{0,2}_{l-1}(M,TM)^{S^1}$ over $\\mathcal{J}^{S^1}_l$ whose fibre over $J$ is the space $\\Omega^{0,2}_{J,l-1}(M,TM)^{S^1}$ of $S^1$-invariant $(0,2)$ forms of regularity $C^{l-1}$ with values in the holomorphic tangent bundle $TM$. \nThe Nijenhuis tensor can be interpreted as a section $N:\\mathcal{J}_l\\to\\Omega^{0,2}_{l-1}(M,TM)$. This section is equivariant since, for all $g \\in S^1$,\n\\begin{align*}\n\\begin{split}\ng \\cdot N_J(X,Y) &:= g \\cdot [X,Y] + g \\cdot J\\left([JX,Y] + [X,JY]\\right) - g \\cdot [JX,JY] \\\\ \n&= g_* [g_*^{-1}X,g_*^{-1} Y] + g_*J\\left([Jg_*^{-1}X,g_*^{-1}Y] + [g_*^{-1}X,Jg_*^{-1}Y]\\right) - g_* [Jg_*^{-1}X,Jg_*^{-1}Y] \\\\ \n&= [X,Y] + J\\left([JX,Y] + [X,JY]\\right) - [JX,JY] \n\\end{split}\n\\end{align*}\nwhere the last equality follows from the facts that $g_*[X,Y] = [g_*X,g_*Y]$ and that $J$ is invariant. In particular, $N$ takes invariant tensors to invariant tensors, that is,\n\\begin{align*}\n N: \\mathcal{J}^{S^1}_l &\\to \\Omega^{0,2}_{l-1}(M,TM)^{S^1} \\\\\n J &\\mapsto N_J\n\\end{align*}\nTo show that $I^{S^1}_l$ is a Banach submanifold, it suffices to show that the Nijenhuis tensor intersects the 0-section of the bundle transversally.
This is equivalent to showing that, for an integrable $J$, the projection of the derivative to the vertical tangent bundle is surjective. We denote this projection of the derivative of $N$ to the vertical tangent bundle by $\\nabla N$. A priori, $\\nabla N$ depends on a choice of connection on $\\Omega^{0,2}_{l-1}(M,TM)^{S^1}$. However, as shown in Appendix A of \\cite{AGK}, given an arbitrary almost complex structure $J$, we can extend the usual $\\bar\\partial_J$ operator to an operator $\\overline{\\partial}_J:\\Omega^{0,1}_{J,l}(M, TM) \\to \\Omega^{0,2}_{J,l-1}(M, TM)$ so that $\\nabla N_J:=\\nabla N(J)$ is given by the following composition. \n\\[\n\\begin{tikzcd}\n &\\Omega^{0,1}_{J,l}(M, TM)^{S^1} \\arrow[r,\"dN_J\"] \\arrow[rr,bend right=15,\"\\nabla N_J\"]&\\left(\\Omega^{2}_{l-1}(M, TM \\otimes \\mathbb{C})\\right)^{S^1} \\arrow[r,\"\\pi\"] &\\Omega^{0,2}_{J,l-1}(M,TM)^{S^1}\n\\end{tikzcd} \n\\]\n\\noindent where $\\pi$ is the canonical projection of \n\\[\n\\Omega^{2}_{J,l-1}(M, TM \\otimes \\mathbb{C})^{S^1} \n\\cong \\Omega^{2,0}_{J,l-1}(M,TM)^{S^1} \\oplus \\Omega^{1,1}_{J,l-1}(M,TM)^{S^1} \\oplus \\Omega^{0,2}_{J,l-1}(M,TM)^{S^1}\n\\]\nonto the last summand. \n\n\\begin{thm}[\\cite{AGK}, Corollary A.9]\n$\\nabla N(J) = -2 J \\overline \\partial_J$. \\qed\n\\end{thm}\nWe are led to show that $\\overline \\partial_J: \\Omega^{0,1}_{J,l}(M,TM)^{S^1} \\to \\Omega^{0,2}_{J,l-1}(M,TM)^{S^1}$ is surjective. This is trivially true whenever the manifold $M$ is 4-dimensional and $H_J^{0,2}(M,TM)^{S^1} = 0$. \n\n\\begin{lemma}{\\label{Lemma:averaging_surj}}\nConsider a complex manifold $(M,J)$ with a holomorphic $S^1$ action.
Then the averaging map \n\\begin{align*}\n \\rho: H_J^{0,2}(M,TM) &\\rightarrow H_J^{0,2}(M,TM)^{S^1} \\\\\n [\\beta] &\\mapsto \\left[\\int_{S^1}g^*\\beta ~dg\\right]\n\\end{align*}\nis surjective.\n\\end{lemma}\n\\begin{proof}\nThe fact that the above map is well-defined follows by noting that the $\\overline \\partial_J$ operator commutes with the averaging operator. The surjectivity follows as the averaging operator is the identity on invariant forms.\n\\end{proof}\n\nNote that the above lemma holds for any compact group. By the discussion in the previous paragraph and Lemma~\\ref{Lemma:averaging_surj}, we can conclude that $\\overline \\partial_J: \\Omega^{0,1}_{J,l}(M,TM)^{S^1} \\to \\Omega^{0,2}_{J,l-1}(M,TM)^{S^1}$ is surjective for any holomorphic $S^1$ action on a complex 4-manifold satisfying $H_J^{0,2}(M,TM)=0$.\n\n\n\n\\begin{thm}\nLet $(M,J)$ be a $4$-manifold endowed with an integrable complex structure $J$, and with a holomorphic $S^1$ action. Suppose $H_J^{0,2}(M,TM) = 0$. Then the space $I^{S^1}_l$ of invariant complex structures is a Banach submanifold of $\\mathcal{J}^{S^1}_l$ in a neighbourhood of $J$ with tangent space at $J$ identified with $ \\ker \\overline \\partial_J:\\Omega^{0,1}_{J,l}(M,TM)^{S^1} \\to \\Omega^{0,2}_{J,l-1}(M,TM)^{S^1} $. Equivalently,\n\\[\nT_J I^{S^1}_l \\cong \\left(\\im \\overline \\partial_J:\\left(\\Omega^{0,0}_{J,l}(M,TM)\\right)^{S^1} \\to \\Omega^{0,1}_{J,l}(M,TM)^{S^1}\\right) \\oplus H^{0,1}_J(M,TM)^{S^1}.\n\\]\n\\end{thm}\n\nLet us now assume that $M$ is symplectic. Let $\\mathcal{J}^{S^1}_{\\omega,l}$ denote the space of all $S^1$ equivariant \\emph{compatible} almost complex structures of regularity $C^l$ endowed with the $C^l$-topology. Our next goal is to show that under some cohomological restrictions, the space of equivariant integrable \\emph{compatible} almost complex structures of regularity $C^l$, denoted by $I^{S^1}_{\\omega,l}$, is a Banach submanifold of $\\mathcal{J}^{S^1}_{\\omega,l}$.
We first note that given $J \\in \\mathcal{J}^{S^1}_{\\omega,l}$, the equivariant metric \n$h_J (\\cdot, \\cdot) := \\omega(\\cdot, J\\cdot) - i \\omega(\\cdot, \\cdot)$\n induced by the pair $(\\omega, J)$ identifies \n$T_J \\mathcal{J}^{S^1}_{l} = \\Omega^{0,1}_l(M,TM)^{S^1}$ \nwith the space $\\left(T^{0,2}\\right)^{S^1}:= \\left(\\Omega^{0,1}(M) \\otimes \\Omega^{0,1}(M)\\right)^{S^1}$ of complex equivariant $(0,2)$-tensors via the map\n\\begin{align*}\n\\theta: \\Omega^{0,1}_l(M,TM)^{S^1} &\\to \\left(T^{0,2}\\right)^{S^1} \\\\\nA &\\mapsto \\theta(A):= h_J(A \\cdot, \\cdot)\n\\end{align*}\n\nLet us denote by $S\\Omega^{0,1}_{J,l}(M,TM)^{S^1}$ the tangent space $T_J \\mathcal{J}^{S^1}_{\\omega,l} \\subset T_J \\mathcal{J}^{S^1}_{l}$ of the space of all equivariant compatible almost complex structures. More explicitly, this tangent space consists of the elements $A \\in \\Omega^{0,1}_{J,l}(M,TM)^{S^1}$ such that $\\omega(A\\cdot, \\cdot) = - \\omega(\\cdot, A\\cdot)$. Under the above identification, we can check that $S\\Omega^{0,1}_{J,l}(M,TM)^{S^1}$ gets mapped to the space of symmetric $S^1$ invariant $(0,2)$-tensors, which we denote by $\\left(S^{0,2}\\right)^{S^1}$. \n\n\nFurther, the quotient $T_{J} \\mathcal{J}_l^{S^1} \/ T_{J} \\mathcal{J}^{S^1}_{\\om_\\lambda, l}$ may be identified with the space of invariant $(0,2)$-forms on $M$ since\n\\[T_{J} \\mathcal{J}_l^{S^1} \/ T_{J} \\mathcal{J}^{S^1}_{\\om_\\lambda, l} = \\Omega^{0,1}_{J,l}(M,TM)^{S^1} \/ S \\Omega^{0,1}_{J,l}(M,TM)^{S^1} \\cong \\left(T^{0,2}\\right)^{S^1} \/ \\left(S^{0,2}\\right)^{S^1} = \\Omega_{J}^{0,2}(M)^{S^1}.\\]\nAs before, the Nijenhuis tensor defines a map \n\\[N:\\mathcal{J}^{S^1}_{\\omega,l} \\to \\Omega^{0,2}_{l-1}(M,TM)^{S^1}\\]\nwhose zero set is precisely the subspace $I^{S^1}_{\\omega,l}$. We want to show that the derivative $\\nabla N$ is surjective at all $J\\in I^{S^1}_{\\omega,l}$.
As $\\nabla N(J) = -2 J \\overline \\partial_J$, we need to show that $\\overline\\partial_J:S\\Omega^{0,1}_{J,l}(M,TM)^{S^1} \\to \\Omega^{0,2}_{J,l-1}(M,TM)^{S^1}$ is surjective. As $M$ is a 4-manifold, all forms in $\\Omega^{0,2}_{l-1}(M,TM)^{S^1}$ are closed; hence, to show that the restriction of $\\overline\\partial_J$ to $S\\Omega^{0,1}_{J,l}(M,TM)^{S^1}$ is surjective, it suffices to show that the vector space $SH_{J}^{0,2}(TM)^{S^1}$ defined below vanishes.\n\\begin{align*}\nSH_{J}^{0,2}(TM)^{S^1}\n&:= \\frac{\\ker \\overline\\partial:\\Omega^{0,2}_{J,l-1}(M,TM)^{S^1} \\to \\Omega^{0,3}_{J,l-2}(M,TM)^{S^1}}{\\im \\overline\\partial:S\\Omega^{0,1}_{J,l}(M,TM)^{S^1} \\to \\Omega^{0,2}_{J,l-1}(M,TM)^{S^1}}\\\\\n&=\\frac{\\Omega^{0,2}_{J,l-1}(M,TM)^{S^1}}{\\im \\overline\\partial:S\\Omega^{0,1}_{J,l}(M,TM)^{S^1} \\to \\Omega^{0,2}_{J,l-1}(M,TM)^{S^1}}\n\\end{align*}\nAs this condition is not easy to check directly, we consider the following commutative diagram \n\\[\n\\begin{tikzcd}\n &0 \\arrow[d] \\arrow[r] &S\\Omega^{0,1}_{J,l}(M,TM)^{S^1} \\arrow[r, \"\\overline\\partial\"] \\arrow[d] &\\Omega^{0,2}_{J,l-1}(M,TM)^{S^1} \\arrow[d] \\arrow[r] &0 \\\\\n &\\Omega^{0,0}_{J,l+1}(M,TM)^{S^1} \\arrow[r,\"\\overline\\partial\"] \\arrow[d,\"\\alpha\"] &\\Omega^{0,1}_{J,l}(M,TM)^{S^1} \\arrow[r,\"\\overline\\partial\"] \\arrow[d] &\\Omega^{0,2}_{J,l-1}(M,TM)^{S^1} \\arrow[d] \\arrow[r] &0 \\\\\n &\\Omega^{0,1}_{J,l+1}(M)^{S^1} \\arrow[r, \"\\overline\\partial\"] &\\Omega^{0,2}_{J,l}(M)^{S^1} \\arrow[r, \"\\overline\\partial\"] &0 \\arrow[r] &0\n\\end{tikzcd} \n\\]\nwhere the map $S\\Omega^{0,1}_{J,l}(M,TM)^{S^1} \\to \\Omega^{0,1}_{J,l}(M,TM)^{S^1}$ is just the inclusion and where the map $\\Omega^{0,1}_{J,l}(M,TM)^{S^1} \\to \\Omega^{0,2}_{J,l}(M)^{S^1}$ is the quotient \\[\\Omega^{0,1}_{J,l}(M,TM)^{S^1} \\to \\Omega^{0,1}_{J,l}(M,TM)^{S^1}\/S\\Omega^{0,1}_{J,l}(M,TM)^{S^1}\\]\nfollowed by identifying
$\\Omega^{0,1}_{J,l}(M,TM)^{S^1}\/S\\Omega^{0,1}_{J,l}(M,TM)^{S^1}$ with\n$\\Omega^{0,2}_{J,l}(M)^{S^1}$ (see \\cite{AGK}, p.~548 for more details about the identification). The map $\\alpha$ is defined as\n\\begin{align*}\n \\alpha: \\Omega^{0,0}_{J,l+1}(M,TM)^{S^1} &\\to \\Omega^{0,1}_{J,l+1}(M)^{S^1} \\\\\n X &\\mapsto \\alpha(X)(Y):= \\omega(X,JY)-i\\omega(X,Y)\n\\end{align*} \nwhere $J \\in I^{S^1}_{\\omega,l}$ and $X,Y \\in \\Omega^{0,0}_{J,l+1}(M,TM)^{S^1}$. We refer the reader to Appendix B in \\cite{AGK} for the proof of commutativity of the diagram in the non-equivariant case, and we note that it still holds in the equivariant setting due to the fact that $\\overline\\partial_J$ is equivariant. The above diagram gives rise to a long exact sequence in cohomology\n\\\\\n\n\n\\begin{equation}\\label{les}\n\\begin{aligned} 0 \\longrightarrow H_{J}^{0}(T M)^{S^1} & \\longrightarrow cl\\Omega_{J}^{0,1}(M)^{S^1} \\stackrel{\\delta}{\\longrightarrow} cl S\\Omega_{J}^{0,1}(M,TM)^{S^1} \\stackrel{q}\\longrightarrow H_{J}^{0,1}(T M)^{S^1} \\longrightarrow \\\\ & \\longrightarrow H_{J}^{0,2}(M)^{S^1} \\longrightarrow SH_{J}^{0,2}(TM)^{S^1} \\longrightarrow H_{J}^{0,2}(T M)^{S^1} \\longrightarrow 0 \n\\end{aligned}\n\\end{equation}\n\n\n\n\\noindent where $cl\\Omega_{J}^{0,1}(M)^{S^1}$ denotes the kernel of $\\overline\\partial_J$ in $\\Omega_{J}^{0,1}(M)^{S^1}$ and similarly $cl S\\Omega_{J}^{0,1}(M,TM)^{S^1}$ is the kernel of $\\overline\\partial_J: S\\Omega^{0,1}_{J,l}(M,TM)^{S^1} \\to \\Omega^{0,2}_{J,l-1}(M,TM)^{S^1}$. Thus, if we have a 4-manifold $M$ with an $S^1$ invariant compatible integrable almost complex structure $J$ such that $H_J^{0,2}(M) =0$ and $H_J^{0,2}(TM)=0$, then, noting that $\\overline{\\partial}_J$ takes $S^1$ invariant elements to $S^1$ invariant elements, we can conclude that $H_J^{0,2}(M)^{S^1} =0$ and $H_J^{0,2}(M,TM)^{S^1}=0$.
Further, for such a manifold $(M,\\omega,J)$, it follows from equation \\ref{les} that $SH_{J}^{0,2}(TM)^{S^1}=0$, and hence that $I^{S^1}_{\\omega,l}$ is indeed a manifold in a neighbourhood of such a $J$. Thus the conditions $H_J^{0,2}(M) =0$ and $H^{0,2}_J(TM)=0$ give us a simpler criterion for $I^{S^1}_{\\omega,l}$ to be a manifold in a neighbourhood of $J$.\n\\\\\n\nAdditionally, as the averaging operator commutes with the $\\overline{\\partial}_J$ operator, $H_J^{0,2}(M) =0$ implies that $H_J^{0,2}(M)^{S^1} =0$. This tells us that $q:cl S\\Omega_{J}^{0,1}(M,TM)^{S^1} \\to H_{J}^{0,1}(T M)^{S^1}$ is surjective, and hence, by the first isomorphism theorem, $cl S\\Omega_{J}^{0,1}(M,TM)^{S^1}\/\\ker q$ is isomorphic to $H_{J}^{0,1}(T M)^{S^1}$.\nAs $\\ker q = \\im \\delta$ by exactness, the long exact sequence gives us \n\\[\\frac{cl S\\Omega_{J}^{0,1}(M,TM)^{S^1}}{\\ker q} = \\frac{cl S\\Omega_{J}^{0,1}(M,TM)^{S^1}}{\\im \\delta} \\cong H_{J}^{0,1}(M,TM)^{S^1}\\]\nPutting all this together, we obtain the following local description of $I^{S^1}_{\\omega,l}$.\n\\begin{thm}\\label{integrable}\nLet $(M,\\omega,J)$ be a K\\\"ahler 4-manifold with a K\\\"ahler $S^1$ action. Suppose that $H_J^{0,2}(M)=0$ and $H_J^{0,2}(TM)=0$.
Then $I^{S^1}_{\\omega,l}$ is a Banach submanifold of $\\mathcal{J}^{S^1}_{\\omega,l}$ in a neighbourhood of $J$ with tangent space at $J \\in I^{S^1}_{\\omega,l}$ identified with\n\\[\nT_J I^{S^1}_{\\omega,l} = cl S\\Omega_{J}^{0,1}(M,TM)^{S^1} = \\ker \\overline\\partial_J: S\\Omega^{0,1}_{J,l}(M,TM)^{S^1} \\longrightarrow \\Omega^{0,2}_{J,l-1}(M,TM)^{S^1}.\n\\]\nEquivalently,\n\\[T_J I^{S^1}_{\\omega,l} \\cong \\im \\delta \\oplus H_{J}^{0,1}(T M)^{S^1}.\\]\n\\end{thm}\n\n\\begin{prop}\nThe conditions $H_J^{0,2}(M) =0$ and $H^{0,2}_J(M,TM)=0$ are satisfied for all the Hirzebruch surfaces.\n\\end{prop}\n\\begin{proof}\nTo prove $H^{0,2}_J(M,TM)=0$ for all Hirzebruch surfaces, see the computation in Example 6.2(b), p.~312 in \\cite{Ko}. To prove $H_J^{0,2}(M) =0$, we note that the rank of $H_J^{0,2}(M)$ (usually called the geometric genus $p_g$) is a birational invariant. As all Hirzebruch surfaces are birationally equivalent, the result follows from the computation on p.~220 in \\cite{Ko}. \n\\end{proof}\nFinally, we would like to show that the stratum $U_{s,l} \\cap \\mathcal{J}^{S^1}_{\\omega,l}$\nis a Banach submanifold of $\\mathcal{J}^{S^1}_{\\omega,l}$. The most naive method of proving this would be to consider the universal moduli space $\\mathcal{M}(D_s,\\J_{\\om_\\lambda})$ of curves in the class $D_s$ (where $D_s$ is defined to be the class $B -\\frac{s}{2}F$ if $M =S^2 \\times S^2$ or the class $B- \\frac{s+1}{2}F$ if $M= \\CP^2\\# \\overline{\\CP^2}$) and try to prove that the inclusion of $\\mathcal{J}^{S^1}_{\\om_\\lambda, l}$ is transverse to the projection of $\\mathcal{M}(D_s,\\J_{\\om_\\lambda})$ to the space of all compatible almost complex structures of regularity $C^l$:\n\\[\n\\begin{tikzcd}\n&\\mathcal{M}(D_s,\\J_{\\om_\\lambda})\\arrow[d,\"\\pi\"]\\\\\n\\mathcal{J}^{S^1}_{\\om_\\lambda, l} \\arrow[r,\"i\"] &\\J_{\\om_\\lambda}\n\\end{tikzcd}\n\\] \nHowever, this approach is flawed, as the two maps are never transversal.
An alternative method is to define an equivariant universal moduli space $\\mathcal{M}^{S^1}(D_s,\\mathcal{J}^{S^1}_{\\om_\\lambda, l})$ and argue that its image under the projection to $\\mathcal{J}^{S^1}_{\\om_\\lambda, l}$ is a Banach submanifold of $\\mathcal{J}^{S^1}_{\\om_\\lambda, l}$. This is the approach we implement in the following section.\n\n\\section{Construction of Equivariant moduli spaces}\n\nIn this section we construct moduli spaces of $S^1$ invariant $J$-holomorphic maps into $S^2 \\times S^2$ or $\\CP^2\\# \\overline{\\CP^2}$. Recall that $J_m$ is the standard complex structure on the $m^\\text{th}$ Hirzebruch surface $W_m$, where $m=2k$ or $m=2k+1$. Let $D_s$ denote the homology class $B-\\frac{s}{2}F$ in $S^2 \\times S^2$ and let it denote the class $B -\\frac{s+1}{2}F$ in $\\CP^2\\# \\overline{\\CP^2}$. As seen in Chapter~2, there is a $\\mathbb{T}^2_m$ invariant, $J_m$-holomorphic curve $\\overline{D}$ in $W_m$ in the homology class $D_s$. Consider the $S^1(a,b;m)$ action on $(S^2 \\times S^2,\\omega_\\lambda)$ or $(\\CP^2\\# \\overline{\\CP^2},\\omega_\\lambda)$. From the graph for the circle action $S^1(a,b;m)$ we see that $S^1$ acts on $\\overline{D}$ in a non-effective manner with global stabilizer $\\mathbb{Z}_{a}$. The following lemma is useful in our analysis. \n\n\\begin{lemma}\\label{WellDefinedModuli}\nConsider the $S^1(a,b;m)$ action on $(S^2 \\times S^2,\\omega_\\lambda)$ or $(\\CP^2\\# \\overline{\\CP^2},\\omega_\\lambda)$. Let $S$ be any $S^1(a,b;m)$-invariant symplectic embedded sphere in the homology class $D_s$ with $s>0$. Then the $S^1$ action on $S$ has global stabilizer isomorphic to~$\\mathbb{Z}_a$. \n\\end{lemma}\n\n\\begin{proof}\nThis follows from noting that any $S^1$ invariant curve in the class $D_s$ passes through the same set of fixed points as $\\overline{D}$, and hence by \\ref{weight} the global stabilizer is the same.
\n\\end{proof}\n\nThus we can fix an action on a base sphere, namely the standard $S^1$ action that agrees with the action of $S^1$ on $\\overline{D}$, and consider the moduli space of all equivariant maps $u: (S^2,j_0) \\to (M,J)$ for some $J \\in \\mathcal{J}^{S^1}_{\\om_\\lambda, l}$. We define $\\mathcal{M}^{S^1}(D_s,\\mathcal{J}^{S^1}_{\\om_\\lambda, l})$ as follows:\n\\[\n\\begin{aligned}\n\\mathcal{M}^{S^1}(D_s , \\mathcal{J}^{S^1}_{\\om_\\lambda, l}): = \\{(&u,J) ~|~ \\text{$u:S^2 \\to M$ is equivariant, somewhere injective, $J$-holomorphic and} \\\\\n&\\text{represents the class $D_s$}\\}\n\\end{aligned}\n\\]\n\n\n\n\\begin{remark}\nAs we are only interested in the case when $s>0$, the curves in class $D_s$ have negative self-intersection and the adjunction formula tells us that these curves are embedded. Thus all somewhere injective curves in class $D_s$ for $s>0$ are embedded. \n\\end{remark}\nAs in the non-equivariant case, we now wish to prove that this moduli space is a smooth Banach manifold. To prove this we recall the non-equivariant setup of Chapter 3 in \\cite{McD} and reformulate it in the equivariant setting.\n\\\\\n\nWe have a bundle \n\\[\n\\begin{tikzcd}\n &\\mathscr{E}^{S^1}_{q-1,p} \\arrow[d, \"\\pi\"] \\\\\n & B^{S^1}_{q,p}\\times \\mathcal{J}^{S^1}_l \n\\end{tikzcd}\n\\]\n\\noindent where $\\mathscr{E}^{S^1}_{q-1,p}$ is a vector bundle over $B^{S^1}_{q,p}\\times \\mathcal{J}^{S^1}_l $ with fibre over $(u,J)$ consisting of $S^1$ invariant sections of $\\Omega^{0,1}_J(S^2, u^*TM)$ of Sobolev regularity $W^{q-1,p}$, i.e.,\n\\[ \\pi^{-1}(u,J) = \\Gamma(S^2, \\Omega^{0,1}_J(S^2, u^*TM)^{S^1}) \\] and the space $B^{S^1}_{q,p}$ is defined as follows: \\[B^{S^1}_{q,p}:= \\{ u \\in \\left(W^{q,p}(S^2,M)\\right)^{S^1} ~|~ [u]=D_s\\}\\]\nwhere $\\left(W^{q,p}(S^2,M)\\right)^{S^1}$ denotes the space of equivariant maps of Sobolev regularity $W^{q,p}$ from $S^2$ to $M$.
\n\\\n\nWe would like to show that the section $\\mathscr{F}^{S^1}(u,J):= (u,\\overline \\partial_J u) :B_{q,p}^{S^1} \\times \\mathcal{J}^{S^1}_l \\to \\mathscr{E}^{S^1}_{q-1,p}$ (where $\\overline \\partial_J u = \\frac{1}{2}(du + J \\circ du \\circ j_{S^2})$) is transversal to the zero section. Note that $\\left({\\mathscr{F}^{S^1}}\\right)^{-1}(0) = \\mathcal{M}^{S^1}(D_s , \\mathcal{J}^{S^1}_{\\om_\\lambda, l})$, thus giving it a smooth structure as in the non-equivariant case (see Lemma 3.2.1 in \\cite{McD}). \n\\\\\n\n In order to show transversality, we consider the projection of the derivative map $d\\mathscr{F}^{S^1}$ to the vertical tangent bundle and show that this map is surjective at $(u,J)$ when $u$ is a simple equivariant curve. We shall denote this projection by $D\\mathscr{F}^{S^1}$. More explicitly, we need to show that the map \n\n\\[\\begin{aligned}\nD\\mathscr{F}_{u,J}^{S^1}: \\mathscr{W}^{q,p}(S^2, u^*TM)^{S^1} \\times &C^l(M, \\End(TM, J, \\omega))^{S^1} \\\\\n\\to &\\mathscr{W}^{q-1,p}(S^2, \\Omega^{0,1}_J(S^2, u^*TM))^{S^1}\n\\end{aligned}\n\\]\nis surjective. But by Lemma 3.2.1 in \\cite{McD} we know that the analogously defined linearized derivative $D\\mathscr{F}_{u,J}$ in the non-equivariant case \n\\[\\begin{aligned}\nD\\mathscr{F}_{u,J}: \\mathscr{W}^{q,p}(S^2, u^*TM) \\times &C^l(M, \\End(TM, J, \\omega)) \\\\\n\\to &\\mathscr{W}^{q-1,p}\\left(S^2, \\Omega^{0,1}_J\\left(S^2, u^*TM\\right)\\right)\n\\end{aligned}\n\\]\nis surjective. As $J \\in \\mathcal{J}^{S^1}_{\\om_\\lambda}$, the $\\bar\\partial_J$ operator commutes with the averaging operator with respect to the $S^1$ action. Averaging the non-equivariant derivative $D\\mathscr{F}_{u,J}$ over the $S^1$ action then shows that $D\\mathscr{F}_{u,J}^{S^1}$ is surjective as well.\n\n\\begin{thm}\n $\\mathcal{M}^{S^1}(D_s , \\mathcal{J}^{S^1}_{\\om_\\lambda, l})$ is a smooth Banach manifold.
\\qed\n\\end{thm}\n\n\n\nWe now consider the projection map \n\\[\n\\begin{tikzcd}\n &\\mathcal{M}^{S^1}(D_s , \\mathcal{J}^{S^1}_{\\om_\\lambda, l}) \\arrow[d,\"\\pi\"] \\\\\n &\\mathcal{J}^{S^1}_{\\om_\\lambda, l}\n\\end{tikzcd}\n\\]\nTo conclude that the image of $\\pi$ is a submanifold of $\\mathcal{J}^{S^1}_{\\om_\\lambda, l}$, we need the following theorem, whose proof can be found in \\cite{mardsen}.\n\\\\\n\n\n\n\n\\begin{thm}\\label{mrdsn}(Theorem 3.5.18 in \\cite{mardsen})\nLet $f:M \\to N$ be a smooth map between Banach manifolds such that \n\\begin{enumerate}\n \\item $\\ker Tf$ is a sub-bundle of $TM$,\n \\item for each $m \\in M$, $f_*(T_m M)$ is closed and splits in $T_{f(m)}N$, \n \\item $f$ is open or closed onto its image.\n\\end{enumerate} \nThen $f(M)$ is a smooth Banach submanifold of $N$.\n\\end{thm}\n\nA map that satisfies the above conditions is called a sub-immersion. \n\n\\begin{lemma}\\label{subimmersion}\nThe projection map $\\pi: \\mathcal{M}^{S^1}(D_s , \\mathcal{J}^{S^1}_{\\om_\\lambda, l}) \\to \\mathcal{J}^{S^1}_{\\om_\\lambda, l}$ is a sub-immersion.\n\\end{lemma}\n\\begin{proof}\n\nNote that $\\ker d\\pi$ has constant rank, being the tangent space to the orbits of the reparametrization group $\\mathbb{C}^*$, which acts freely on $\\mathcal{M}^{S^1}(D_s , \\mathcal{J}^{S^1}_{\\om_\\lambda, l})$. Hence $\\ker d\\pi$ is a sub-bundle of $T\\mathcal{M}^{S^1}(D_s , \\mathcal{J}^{S^1}_{\\om_\\lambda, l})$.\n\\\\\n\nNow we show that the image of $d\\pi$ is closed in $T_J\\mathcal{J}^{S^1}_{\\om_\\lambda, l}$.
Note first that $T_J\\mathcal{J}^{S^1}_{\\om_\\lambda, l} = S\\Omega_J^{0,1}(M,TM)^{S^1}$, hence $\\pi_* T_{(u,J)} \\mathcal{M}^{S^1}\\left(D_s , \\mathcal{J}^{S^1}_{\\om_\\lambda, l}\\right)$ as a subspace of $S\\Omega_J^{0,1}(M,TM)^{S^1}$ can be described as follows: \n\\begin{multline}\\label{proj_tangent}\n \\pi_*T_{(u,J)} \\mathcal{M}^{S^1}\\left(D_s , \\mathcal{J}^{S^1}_{\\om_\\lambda, l}\\right)=\n \\left\\{ \\alpha \\in S\\Omega_J^{0,1}(M,TM)^{S^1}~| ~ [\\alpha \\circ du \\circ j_{S^2}]= 0 \\in H_{j_{S^2}}^{0,1}(S^2, u^*TM)^{S^1}\n\\right\\}\n\\end{multline}\n\n(This follows from noting that the proof of Proposition 2.8 in \\cite{AGK} goes through under the presence of a compact group action.)\nLet $\\gamma_n \\in \\pi_* T_{(u,J)} \\mathcal{M}^{S^1}\\left(D_s , \\mathcal{J}^{S^1}_{\\om_\\lambda, l}\\right)$, i.e., $\\gamma_n \\in S\\Omega_J^{0,1}(M,TM)^{S^1}= T_J\\mathcal{J}^{S^1}_{\\om_\\lambda, l}$ satisfies $[\\gamma_n \\circ du \\circ j_{S^2}]= 0 \\in H_{j_{S^2}}^{0,1}(S^2, u^*TM)^{S^1}$. Further assume the sequence $\\gamma_n$ converges to $\\gamma$ in $S\\Omega_J^{0,1}(M,TM)^{S^1}$. Then $[\\gamma \\circ du \\circ j_{S^2}] = 0 \\in H_{j_{S^2}}^{0,1}(S^2, u^*TM)^{S^1}$, thus showing that the image of $d\\pi$ is closed in $T\\mathcal{J}^{S^1}_{\\om_\\lambda, l}$.\n\\\\\n\nNext, to show that the image of $d\\pi$ splits in $T\\mathcal{J}^{S^1}_{\\om_\\lambda, l}$, we proceed as follows. We first show that the codimension of the image of $d\\pi$ in $T\\mathcal{J}^{S^1}_{\\om_\\lambda, l}$ is finite, from which it follows that the image of $d\\pi$ splits. Consider the map \n\\begin{align*}\n L: S\\Omega_J^{0,1}(M,TM)^{S^1} &\\rightarrow H_{j_{S^2}}^{0,1}\\left(S^2, u^*TM\\right)^{S^1} \\\\\n \\alpha &\\mapsto [\\alpha \\circ du \\circ j_{S^2}]\n\\end{align*}\n\nBy equation~\\ref{proj_tangent} we see that the kernel of this map is precisely the image of the map $d\\pi$.
As $H_{j_{S^2}}^{0,1}\\left(S^2, u^*TM\\right)^{S^1}$ is finite-dimensional, it follows that the codimension of the image of $d\\pi$ is finite, and hence the image of $d\\pi$ in $T\\mathcal{J}^{S^1}_{\\om_\\lambda, l}$ splits.\n\\\\\n\nFinally, to show that $\\pi$ is open onto its image, we note that \n\\[\\begin{tikzcd}\n &\\mathcal{M}^{S^1}(D_s , \\mathcal{J}^{S^1}_{\\om_\\lambda, l}) \\arrow[dr,\"\\pi\"] \\arrow[d, \"q\"] \\\\\n &\\mathcal{M}^{S^1}(D_s , \\mathcal{J}^{S^1}_{\\om_\\lambda, l})\/\\mathbb{C}^* \\arrow[r,\"h\", \"\\cong\"'] & \\text{im} ~\\pi\n\\end{tikzcd}\n\\]\nwhere the map $h:\\mathcal{M}^{S^1}(D_s , \\mathcal{J}^{S^1}_{\\om_\\lambda, l})\/\\mathbb{C}^* \\to \\text{im} \\pi$ is a homeomorphism. As $q$ is a quotient map for a group action, we have that $q$ is an open map, and as $\\pi = h \\circ q$, we have that $\\pi$ too is an open map, thus showing that $\\pi$ satisfies all the conditions of Theorem~\\ref{mrdsn} and hence is a sub-immersion.\n\\end{proof}\n\n\\begin{cor}\\label{cor:StrataAreSubmanifolds}\n$U_{s,l} \\cap \\mathcal{J}^{S^1}_{\\om_\\lambda, l}$ is a Banach submanifold of $\\mathcal{J}^{S^1}_{\\om_\\lambda, l}$.\n\\end{cor}\n\\begin{proof}\nThis follows from Lemma~\\ref{subimmersion} and from observing that the image of $\\pi$ is $U_{s,l} \\cap \\mathcal{J}^{S^1}_{\\om_\\lambda, l}$.\n\\end{proof}\n\nWe will now describe the normal bundle of $U_{s,l} \\cap \\mathcal{J}^{S^1}_{\\om_\\lambda, l}$ in $\\mathcal{J}^{S^1}_{\\om_\\lambda, l}$ (when $s > 0$) in terms of infinitesimal deformations of complex structures.
To this end, we first find a cohomological condition ensuring that the inclusion of complex integrable structures into $\\mathcal{J}^{S^1}_{\\om_\\lambda, l}$ is transverse to the stratum $U_{s,l} \\cap \\mathcal{J}^{S^1}_{\\om_\\lambda, l}$, in other words, that we have transverse maps\n\\[\n\\begin{tikzcd}\n &\\mathcal{M}^{S^1}(D_s , \\mathcal{J}^{S^1}_{\\om_\\lambda, l}) \\arrow[d,\"\\pi\"] \\\\\n I^{S^1}_{\\omega,l} \\arrow[r,\"i\"] &\\mathcal{J}^{S^1}_{\\om_\\lambda, l} \n\\end{tikzcd}\n\\\\\n\\]\n\\begin{lemma} \\label{compliment}\nLet $(M,\\omega_\\lambda,J_s)$ denote any of the Hirzebruch surfaces $(S^2 \\times S^2,\\omega_\\lambda,J_s)$ or $(\\CP^2\\# \\overline{\\CP^2},\\omega_\\lambda,J_s)$, and let $(u,J_s)\\in \\mathcal{M}^{S^1}(D_s , \\mathcal{J}^{S^1}_{\\om_\\lambda, l})$. Then the induced map $u^*:H_{J_s}^{0,1}\\left(M,TM\\right)^{S^1} \\rightarrow H^{0,1}_{j_{S^2}}\\left(S^2, u^*TM\\right)^{S^1}$ is an isomorphism. \n\\end{lemma}\n\n\\begin{proof}\nFrom Proposition 3.4 in \\cite{AGK} we know that $u^*:H_{J_s}^{0,1}(M,TM) \\rightarrow H_{j_{S^2}}^{0,1}(S^2, u^*TM) $ is an isomorphism. As $u$ is equivariant this indeed gives us that \\[u^*:H_{J_s}^{0,1}\\left(M,TM\\right)^{S^1} \\longrightarrow H_{j_{S^2}}^{0,1}\\left(S^2, u^*TM\\right)^{S^1}\\]\nis also an isomorphism.\n\\end{proof}\n\n\\begin{lemma}\\label{trans_strata}\nLet $(M,\\omega_\\lambda)$ denote either $(S^2 \\times S^2,\\omega_\\lambda)$ or $(\\CP^2\\# \\overline{\\CP^2},\\omega_\\lambda)$. Further let $i: I^{S^1}_{\\omega,l} \\hookrightarrow \\mathcal{J}^{S^1}_{\\om_\\lambda, l}$ denote the inclusion and $\\pi: \\mathcal{M}^{S^1}(D_s , \\mathcal{J}^{S^1}_{\\om_\\lambda, l}) \\to \\mathcal{J}^{S^1}_{\\om_\\lambda, l}$ denote the projection. 
Then,\n\\begin{itemize}\n \\item $i \\pitchfork \\pi$,\n \\item the infinitesimal complement (i.e., the fibre of the normal bundle) of $U_{s,l} \\cap \\mathcal{J}^{S^1}_{\\om_\\lambda, l}$ at $J_s \\in I^{S^1}_{\\omega,l}$ can be identified with $H^{0,1}_{J_s}(M,TM)^{S^1}$.\n\\end{itemize}\n\\end{lemma}\n\n\\begin{proof}Recall from Theorem~\\ref{integrable} that the tangent space of $I^{S^1}_{\\omega,l}$ is given by \n\\[T_{J_s} I^{S^1}_{\\omega,l} = cl S\\Omega_{J_s}^{0,1}(M,TM)^{S^1} := \\ker \\overline\\partial_{J_s}: S\\Omega^{0,1}_{{J_s},l}\\left(M,TM\\right)^{S^1} \\longrightarrow \\Omega^{0,2}_{{J_s},l-1}\\left(M,TM\\right)^{S^1}\\]\n\n\nLet $\\gamma \\in T_{J_s} \\mathcal{J}^{S^1}_{\\om_\\lambda, l} = \\left(S\\Omega^{0,1}_{J_s}(M,TM)\\right)^{S^1}$ and set $\\eta := \\left[\\gamma \\circ du \\circ j_{S^2}\\right] \\in H_{j_{S^2}}^{0,1}(S^2, u^*TM)^{S^1}$. To show that $i \\pitchfork \\pi$, we need to produce $\\beta \\in T_{J_s}I^{S^1}_{\\omega,l} = cl S\\Omega_{J_s}^{0,1}(M,TM)^{S^1}$ such that $[\\left(\\gamma - \\beta\\right) \\circ du \\circ j_{S^2}] = 0$.
To do so, we consider the following commutative diagram.\n\n\n\\[ \n\\stackinset{l}{13ex}{b}{6ex}{%\n\\scalebox{.8}\n{%\n\\begin{tikzcd}[row sep=9ex, column sep = 13ex, ampersand replacement=\\&]\n[\\alpha] \n \\arrow[mapsto]{r}\n \\arrow[mapsto]{d}\n\\& \\left[\\alpha \\circ du \\right]\n \\arrow[mapsto]{d} \\\\\n\\left[\\alpha \\circ J_s\\right]\n \\arrow[mapsto]{r} \n\\& \\begin{array}{c}[\\alpha \\circ du \\circ j_{S^2}] =\\\\ \\left[\\alpha \\circ J_s \\circ du\\right]\\end{array}\n\\end{tikzcd}\n} \n}{%\n\\begin{tikzcd}[row sep = 20ex, column sep = 20ex, ampersand replacement=\\&]\nH_{J_s}^{0,1}(M,TM)^{S^1} \n \\arrow{r}{u^*} \n \\arrow[swap]{d}{J_s^*}\n\\& H_{j_{S^2}}^{0,1}(S^2, u^*TM)^{S^1}\n \\arrow{d}{j_{S^2}^*} \\\\\nH_{J_s}^{0,1}(M,TM)^{S^1} \n \\arrow[swap]{r}{u^*} \n\\& H_{j_{S^2}}^{0,1}(S^2, u^*TM)^{S^1}\n\\end{tikzcd}\n}\n\\]\nwhere all the maps $u^*$, $J_s^*$ and $j_{S^2}^*$ are isomorphisms. Further, we have the equality $\\left[\\alpha \\circ du \\circ j_{S^2}\\right] = \\left[\\alpha \\circ J_s \\circ du\\right]$ as $u$ is $j_{S^2}$-$J_s$ holomorphic. As we know that $H_{J_s}^{0,2}(M)^{S^1} =0$, from the long exact sequence \\ref{les} we see that the quotient map \n\\[ cl S\\Omega_{J_s}^{0,1}(M,TM)^{S^1} \\to H_{J_s}^{0,1}(M,TM)^{S^1}\\] \nis surjective. As both $u^*$ and $J_s^*$ are isomorphisms, there exists $\\beta \\in cl S\\Omega_{J_s}^{0,1}(M,TM)^{S^1} = T_{J_s} I_{\\omega,l}^{S^1}$ such that $\\left[\\beta \\circ J_s \\circ du\\right] = \\left[\\beta \\circ du \\circ j_{S^2}\\right]= \\eta = \\left[\\gamma \\circ du \\circ j_{S^2}\\right]$. Hence we indeed have $\\left[\\left(\\gamma -\\beta\\right) \\circ du \\circ j_{S^2} \\right] = 0 $ as required.\n\\\\\n\n\n\nWe now show that the fibre of the normal bundle of $U_{s,l} \\cap \\mathcal{J}^{S^1}_{\\om_\\lambda, l}$ at $J_s \\in I^{S^1}_{\\omega,l}$ can be identified with $H^{0,1}_{J_s}(M,TM)^{S^1}$.
As seen in the proof of Lemma~\\ref{subimmersion}, we know that there is a map \n\\begin{align*}\n L: S\\Omega_{J_s}^{0,1}(M,TM)^{S^1} &\\rightarrow H_{j_{S^2}}^{0,1}\\left(S^2, u^*TM\\right)^{S^1} \\\\\n \\alpha &\\mapsto [\\alpha \\circ du \\circ j_{S^2}]\n\\end{align*}\n\nAs the quotient map $ cl S\\Omega_{J_s}^{0,1}(M,TM)^{S^1} \\to H_{J_s}^{0,1}(M,TM)^{S^1}$ is surjective and the maps $u^*$ and $j_{S^2}^*$ are isomorphisms, we have that the map $L$ is surjective. As the kernel of $L$ is the image of $d\\pi$, the cokernel of $d\\pi$ can be identified with $H_{j_{S^2}}^{0,1}\\left(S^2, u^*TM\\right)^{S^1} \\cong H_{J_s}^{0,1}(M,TM)^{S^1}$. Hence the fibre of the normal bundle of $U_{s,l} \\cap \\mathcal{J}^{S^1}_{\\om_\\lambda, l}$ at $J_s \\in I^{S^1}_{\\omega,l}$ can be identified with $H^{0,1}_{J_s}(M,TM)^{S^1}$.\n\\end{proof}\n\n\n\n\\section{Isotropy representations}\n\nAs shown in the previous section, the codimension of $U_{s,l} \\cap \\mathcal{J}^{S^1}_{\\om_\\lambda, l}$ inside $\\mathcal{J}^{S^1}_{\\om_\\lambda, l}$ is equal to the dimension of $H_{J_s}^{0,1}(M,TM)^{S^1}$. This can be calculated using Lemma~\\ref{trans_strata}. In this section we only perform the calculation for $S^2 \\times S^2$ and we postpone the discussion of $\\CP^2\\# \\overline{\\CP^2}$ to Section~\\ref{Isom_CCC}.\n\\\\\n\n\\subsection{Even Hirzebruch surfaces and their isometry groups}\n\n\n\nLet $m=2k$. By Theorem 4.2 in \\cite{AGK}, the action of the isometry group $K(2k)\\simeq S^{1}\\times \\SO(3)$ on the space $H_{J}^{0,1}(M,TM)$ of infinitesimal deformations is isomorphic to $\\Det\\otimes \\Sym^{2k-2}$, where $\\Det$ is the standard action of $S^{1}=U(1)$ on $\\mathbb{C}$, and where $\\Sym^{2k-2}(\\mathbb{C}^{2})$ is the representation $\\mathscr{W}_{k-1}$ of $\\SO(3)$ induced by the $(2k-2)$-fold symmetric product of the standard representation of $\\SU(2)$ on $\\mathbb{C}^{2}$.
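As a quick consistency check (our addition, not part of the source; it assumes, as in the description above, that $\Det$ is a one-dimensional character and that $\Sym^{2k-2}(\mathbb{C}^2)$ has the monomial basis $z_1^{2k-2-j}z_2^j$, $j=0,\dots,2k-2$), the dimension of this representation matches that of the deformation space:

```latex
% Det is a character (complex dimension 1); Sym^{2k-2}(C^2) has the
% 2k-1 monomials z_1^{2k-2-j} z_2^j as a basis, so with m = 2k:
\dim_{\mathbb{C}}\bigl(\Det \otimes \Sym^{2k-2}\bigr)
   = 1 \cdot (2k-1)
   = m - 1
   = \dim_{\mathbb{C}} H_{J}^{0,1}(M,TM),
```

in agreement with the identification $H_{J}^{0,1}(M,TM)\cong\mathbb{C}^{m-1}$ from \cite{Ko}.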
We use this fact to calculate the dimension of the $S^1$ invariant subspace of $H_{J}^{0,1}(M,TM)$ and thus obtain the codimension of $U_{m,l} \\cap \\mathcal{J}^{S^1}_{\\om_\\lambda, l}$ inside $\\mathcal{J}^{S^1}_{\\om_\\lambda, l}$. \\\\\n\nFollowing \\cite{AGK}, we construct the Hirzebruch surface $\\mathbb{F}_{2k}$ by K\\\"ahler reduction of $\\mathbb{C}^{4}$ under the action of the torus $T^{2}_{2k}$ defined by\n\\[(s,t)\\cdot z = (s^{2k}tz_{1},tz_{2},sz_{3},sz_{4})\\]\nThe moment map is $\\phi(z)=(2k|z_{1}|^{2}+|z_{3}|^{2}+|z_{4}|^{2}, |z_{1}|^{2}+|z_{2}|^{2})$ and the reduced manifold at level $(\\lambda+k,1)$ is symplectomorphic to $(S^{2}\\times S^{2},\\omega_{\\lambda})$ and biholomorphic to the Hirzebruch surface $\\mathbb{F}_{2k}$. In this model, the projection to the base is given by $[(z_{1},\\ldots,z_{4})]\\mapsto [z_{3}:z_{4}]$, the zero section is $[w_{0}:w_{1}]\\mapsto [(w_{0}^{2k},0,w_{0},w_{1})]$, and a fiber is $[w_{0}:w_{1}]\\mapsto [(w_{0}w_{1}^{2k},w_{0}w_{1},0,w_{1})]$. The torus $T^{2}(2k)=T^{4}\/T^{2}_{2k}$ acts on $\\mathbb{F}_{2k}$. This torus is generated by the elements $[(1,e^{it},1,1)]$ and $[(1,1,e^{is},1)]$, and its moment map is $[(z_{1},z_{2},z_{3},z_{4})]\\mapsto(|z_{2}|^{2},|z_{3}|^{2})$.
The moment polytope $\\Delta(2k)$ is the convex hull of the vertices $(0,0)$, $(1,0)$, $(1,\\lambda+k)$, and $(0,\\lambda-k)$.\n\n\\[\n\\begin{tikzpicture}\n\\draw (0,0) -- (1,0) ;\n\\draw (0,0) -- (0,3) ;\n\\draw (1,0) -- (1,4) ;\n\\draw (0,3) -- (1,4) ;\n\\end{tikzpicture}\n\\]\n\nThe isometry group of $\\mathbb{F}_{2k}$ is \n\\[K(2k) = Z_{\\U(4)}(T^{2}_{2k})\/T^{2}_{2k}=(T^{2}\\times \\U(2))\/T^{2}_{2k}\\simeq S^{1}\\times\\PU(2)\\simeq S^{1}\\times\\SO(3)\\]\nwhere the middle isomorphism is given by\n\\[[(s,t), A]\\mapsto (s^{-1}t\\det A^{k}, [A])\\]\nUnder this isomorphism, an element $[(1,a,b,1)]$ of the torus $T(2k)$ is taken to\n\\[\\left(ab^{k},\\begin{bmatrix}b&0\\\\0&1\\end{bmatrix}\\right)=\\left(b^{k}a,\\begin{bmatrix}b^{1\/2}&0\\\\0&b^{-1\/2}\\end{bmatrix}\\right)\\]\nConsequently, at the level of the Lie algebras of the maximal tori, the map identifying the maximal torus of $K(2k)$, whose Lie algebra is denoted by $\\lie{t}^{2}(2k)$, with the maximal torus $S^1 \\times \\SO(2) \\subset S^1 \\times \\SO(3)$, whose Lie algebra is denoted by $\\lie{t}^{2}$ (where $\\SO(2)$ is identified with the rotations around the $z$-axis), is given by\n\\[\\begin{pmatrix}\n1&k\\\\\n0&1\n\\end{pmatrix}\\]\nThe moment polytope associated to the maximal torus $T^{2}\\subset K(2k)$ is thus the balanced polytope obtained from $\\Delta(2k)$ by applying the inverse transpose $\\begin{pmatrix}1&0\\\\-k&1\\end{pmatrix}$\nand has the following shape\n\\[\n\\begin{tikzpicture}\n\\draw (0,0) -- (1,-1) ;\n\\draw (0,0) -- (0,1) ;\n\\draw (0,1) -- (1,2) ;\n\\draw (1,2) -- (1,-1) ;\n\\end{tikzpicture}\n\\]\n\n\\subsection{Even isotropy representations and codimension calculation}\n\nLet $J_m$ be the standard $S^1$-invariant integrable almost complex structure in the stratum $U_m$, coming from the Hirzebruch surface $W_m$. 
The action of the isometry group $K(2k)\\simeq S^{1}\\times \\SO(3)$ on the space $H_{J_m}^{0,1}(S^2 \\times S^2,T(S^2 \\times S^2)) \\cong \\mathbb{C}^{m-1}$ (see \\cite{Ko} Example 6.2(b)(4), p.309 for more details about how the isomorphism is obtained) of infinitesimal deformations is isomorphic to $\\Det\\otimes \\Sym^{2k-2}$, where $\\Det$ is the standard action of $S^{1}=U(1)$ on $\\mathbb{C}$, and where $\\Sym^{2k-2}(\\mathbb{C}^{2})$ is the representation $\\mathscr{W}_{k-1}$ of $\\SO(3)$ induced by the $(2k-2)$-fold symmetric product of the standard representation of $\\SU(2)$ on $\\mathbb{C}^{2}$ (see Theorem 4.2 in~\\cite{AGK}). We shall denote this $(2k-2)$-fold symmetric product of the standard representation of $\\SU(2)$ on $\\mathbb{C}^{2}$ by $\\mathscr{V}_{2k-2}$. See \\cite{B-tD} for more details about the representation theory of $\\SO(3)$ and $\\SU(2)$.\\\\\n\n\nThe circle subgroup of $\\SO(3)=\\PU(2)=\\U(2)\/\\Delta(S^{1})$\n\\[R(t)=\\begin{pmatrix}1 & 0 & 0\\\\ 0 & \\cos(t) & -\\sin(t)\\\\ 0 & \\sin(t) & \\cos(t)\\end{pmatrix}, ~t\\in[0,2\\pi)\\]\nlifts to \n\\[e(t\/2):=\\begin{pmatrix}e^{it\/2} & 0\\\\ 0 & e^{-it\/2}\\end{pmatrix}\\in\\SU(2)\\]\nAs explained above, the action of $K(2k)$ on $H^{0,1}_{J_m}(S^2 \\times S^2,T^{1,0}_{J_m}(S^2 \\times S^2)) \\cong \\mathbb{C}^{m-1}$ is isomorphic to $\\Det\\otimes\\Sym^{2k-2}$. Hence, to calculate the codimension, we only need to calculate the dimension of the invariant subspace of $H^{0,1}_{J_m}(S^2 \\times S^2,T^{1,0}_{J_m}(S^2 \\times S^2)) \\cong \\mathbb{C}^{m-1}$ under this action. To do so we note that a basis of $\\Sym^{2k-2}$ is given by the homogeneous polynomials $P_{j}=z_{1}^{2k-2-j}z_{2}^{j}$ for $j\\in\\{0,\\ldots,2k-2\\}$. 
The action of $R(t)$ on $P_{j}$ is\n\\[R(t)\\cdot P_{j}=e(t\/2)\\cdot P_{j}=e^{i\\big(2k-2-2j\\big)t\/2}P_{j}=e^{it(k-1-j)}P_{j}\\]\nso that the action of $(e^{is},R(t))\\subset S^{1}\\times\\SO(3)$ on $P_{j}$ is\n\\[\\left(e^{is},R(t)\\right)\\cdot P_{j}=e^{i\\big(s+t(k-1-j)\\big)}P_{j}\\]\nEach $P_{j}$ generates an eigenspace for the action of the maximal torus $T(2k)$. In particular, the circle $S^{1}(a,b;2k)$ acts trivially on $P_{j}$ if, and only if, \n\\[a+b(k-1-j)=(a,b)\\cdot(1,k-1-j)=0\\]\nfor $j\\in\\{0,\\ldots,2k-2\\}$. Equivalently, we must have\n\\[a+bj=(a,b)\\cdot(1,j)=0\\]\nfor $j\\in\\{1-k,\\ldots,k-1\\}$.\n\nHence the dimension of the invariant subspace is given by the number of $j \\in \\{1-k,\\ldots,k-1\\}$ such that $a+bj=0$.\n\\\\\n\n\n\nNote that the above codimension calculation was with respect to the basis of the maximal torus in $K(2k)$. Hence to calculate the codimension for the circle $S^1(1,b;m) \\subset \\mathbb{T}^2_m$, as in our case, we need to transform the basis by multiplication by the matrix $\\begin{pmatrix}\\frac{m}{2}& -1\\\\1&0\\end{pmatrix}$. This change of basis takes the vector $\\begin{pmatrix}1\\\\b\\end{pmatrix}$ in the basis for the standard moment polytope \n\n\\[\n\\begin{tikzpicture}\n\\draw (0,1) -- (3,1) ;\n\\draw (0,1) -- (0,0) ;\n\\draw (0,0) -- (4,0) ;\n\\draw (3,1) -- (4,0) ;\n\n\\end{tikzpicture}\n\\]\n\nto the vector $\\begin{pmatrix}\\frac{m}{2}-b\\\\1\\end{pmatrix} $ in the basis for the balanced polytope (for which we did the above calculations).\n\n\\[\n\\begin{tikzpicture}\n\\draw (0,0) -- (1,-1) ;\n\\draw (0,0) -- (0,1) ;\n\\draw (0,1) -- (1,2) ;\n\\draw (1,2) -- (1,-1) ;\n\n\\end{tikzpicture}\n\\]\n\nTherefore the codimension for the action $S^1(1,b;m)$ is given by the number of $j \\in \\{1-\\frac{m}{2}, \\cdots, \\frac{m}{2}-1\\}$ such that $(\\frac{m}{2} - b) + j = 0$. 
Relabelling $\\frac{m}{2}+j$ as $j^\\prime$, we have that the codimension is given by \nthe number of $j^\\prime \\in \\{1, \\cdots , m-1\\}$ such that $j^\\prime=b$.\n \n\\begin{thm}\\label{codimension_calc}\nGiven the circle action $S^1(1,b;m)$ with $2\\lambda > |2b-m|$ and $b \\notin \\{0, m\\}$, the complex codimension of the stratum $\\mathcal{J}^{S^1}_{\\om_\\lambda} \\cap U_m$ in $\\mathcal{J}^{S^1}_{\\om_\\lambda}$ is given by the number of $j \\in \\{1, \\cdots , m-1\\}$ such that $j=b$.\n\\\\\nSimilarly, for the action $S^1(-1,b;m)$ with $2\\lambda > |2b+m|$ and $b \\notin \\{0, -m\\}$, the complex codimension of the stratum $\\mathcal{J}^{S^1}_{\\om_\\lambda} \\cap U_m$ in $\\mathcal{J}^{S^1}_{\\om_\\lambda}$ is given by the number of $j \\in \\{1, \\cdots , m-1\\}$ such that $j=-b$.\n\\end{thm}\n\\begin{cor}\nFor the circle actions satisfying either\\begin{itemize}\n \\item (i) $a=1$, $b \\notin \\{0, m\\}$, and $2\\lambda > |2b-m|$; or\n\\item (ii) $a=-1$, $b \\notin \\{0, -m\\}$, and $2 \\lambda > |2b+m|$,\n\\end{itemize}the complex codimension of the stratum $\\mathcal{J}^{S^1}_{\\om_\\lambda} \\cap U_m$ in $\\mathcal{J}^{S^1}_{\\om_\\lambda}$ is either 0 or 1.\n\\end{cor} \n\\begin{proof}\nFollows from the calculation and discussion above.\n\\end{proof}\n\n\\begin{remark}\nAt the beginning of the section, we only showed that the space $\\mathcal{J}^{S^1}_{\\om_\\lambda, l} \\cap U_{2k,l}$ is a Banach submanifold. But in order to obtain the topology of the space $\\Symp^{S^1}(S^2 \\times S^2,\\omega_\\lambda)$ with the $C^\\infty$-topology, we require that the space $\\mathcal{J}^{S^1}_{\\om_\\lambda} \\cap U_{2k}$ with the $C^\\infty$ topology is a Fr\\'echet manifold and that the codimension of $\\mathcal{J}^{S^1}_{\\om_\\lambda} \\cap U_{2k}$ in $\\mathcal{J}^{S^1}_{\\om_\\lambda}$ is given by the same formula as in Theorem \\ref{codimension_calc}. 
As this discrepancy exists in the literature even in the non-equivariant case, and as a resolution of this issue is well beyond the scope of this document, we do not attempt to resolve it here. \n\\end{remark}\n\n\n\n\\chapter{Odd Hirzebruch surfaces}\\label{Chapter-CCC}\n\n\\section{Homotopy type of \\texorpdfstring{$\\Symp(\\CP^2\\# \\overline{\\CP^2},\\omega_\\lambda)$}{Symp(CP2\\#CP2)}}\nWe now compute the centralisers for the $S^1$ actions on the odd Hirzebruch surfaces. The theory is closely analogous to the even Hirzebruch case, i.e.\\ $S^2 \\times S^2$; hence we shall only point out the key differences. \n\\\\\n\nRecall that the odd Hirzebruch surface $W_m$ (where $m$ is odd) is the complex submanifold of $ \\mathbb{C}P^1 \\times \\mathbb{C}P^2$ defined by setting\n\\[\nW_m:= \\left\\{ \\left(\\left[x_1,x_2\\right],\\left[y_1,y_2, y_3\\right]\\right) \\in \\mathbb{C}P^1 \\times \\mathbb{C}P^2 ~|~ x_1^{m}y_2 - x_2y_1^{m} = 0 \\right\\}\n\\]\n\nThis manifold is diffeomorphic to $\\mathbb{C}P^2 \\# \\overline{\\mathbb{C}P^2}$. The torus $\\mathbb{T}^2$ acts on $\\mathbb{C}P^1 \\times \\mathbb{C}P^2$ in the following manner. 
\n\n\\[\n \\left(u,v\\right) \\cdot \\left(\\left[x_1,x_2\\right],\\left[y_1,y_2, y_3\\right]\\right) = \\left(\\left[ux_1,x_2\\right],\\left[u^my_1,y_2,vy_3\\right]\\right)\n\\]\n\n(again with $m$ odd) and the moment map image looks like\n\n\\[\n\\begin{tikzpicture}\n\\node[left] at (0,2) {$Q=(0,1)$};\n\\node[left] at (0,0) {$P=(0,0)$};\n\\node[right] at (4,2) {$R= (\\lambda - \\frac{m+1}{2} ,1)$};\n\\node[right] at (6,0) {$S=(\\lambda + \\frac{m-1}{2} ,0)$};\n\\node[above] at (2,2) {$B-\\frac{m+1}{2}F$};\n\\node[right] at (5,1) {$F$};\n\\node[left] at (0,1) {$F$};\n\\node[below] at (3,0) {$B+ \\frac{m-1}{2}F$};\n\\draw (0,2) -- (4,2) ;\n\\draw (0,0) -- (0,2) ;\n\\draw (0,0) -- (6,0) ;\n\\draw (4,2) -- (6,0) ;\n\\end{tikzpicture}\\label{oddHirz}\n\\]\nwhere $B$ now refers to the homology class of a line $L$ in $\\mathbb{C}P^2 \\# \\overline{\\mathbb{C}P^2}$ and $F$ refers to the class $L - E$, where $L$ is the class of the line and $E$ is the class of the exceptional divisor. There is a canonical form, which we also call $\\omega_\\lambda$, on $\\mathbb{C}P^2 \\# \\overline{\\mathbb{C}P^2}$, which has weight $\\lambda$ on $B$ and 1 on $F$. Note that with our convention, $\\lambda$ must be strictly greater than 1 for the curve in class $E$ to have positive symplectic area.\nAs before, all symplectic $S^1$ actions on $\\mathbb{C}P^2 \\# \\overline{\\mathbb{C}P^2}$ extend to toric actions. Hence we only need to consider subcircles of the above family of torus actions. The graphs for the different circles are given in Figures \\ref{fig:GraphsWithFixedSurfaces} and \\ref{fig:GraphsIsolatedFixedPoints}. 
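For instance, the constraint $\\lambda > 1$ can be checked in one line from the weights just given (our own verification): writing the exceptional class as $E = B - F$, its symplectic area is

```latex
\\[
\\omega_\\lambda(E) = \\omega_\\lambda(B) - \\omega_\\lambda(F) = \\lambda - 1,
\\]
```

which is positive precisely when $\\lambda > 1$.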
As explained in Theorem~\\ref{strata-CCC}, we have a stratification of the space of almost complex structures, i.e.\\ the space $\\mathcal{J}_{\\omega_\\lambda}$ of all compatible almost complex structures for the form $\\omega_\\lambda$ decomposes into disjoint Fr\\'echet manifolds of finite codimensions\n\\[\n\\mathcal{J}_{\\omega_\\lambda} = U_1 \\sqcup U_3 \\sqcup U_5 \\sqcup \\ldots \\sqcup U_{2n-1}\n\\]\nwhere \n\\[ \nU_{2i-1} := \\left\\{ J \\in \\mathcal{J}_{\\omega_\\lambda}~|~ D_{2i-1}:= B-iF \\in H_2(\\CP^2\\# \\overline{\\CP^2},\\mathbb{Z})\\text{~is represented by a $J$-holomorphic sphere}\\right\\}.\n\\]\n\n\n\nWe shall now use this stratification to construct fibrations for the action of the equivariant symplectomorphism group $\\Symp^{S^1}_{h}(\\CP^2\\# \\overline{\\CP^2},\\omega_\\lambda)$ on the space $\\mathcal{J}^{S^1}_{\\om_\\lambda} \\cap U_{2s+1}$ of invariant almost complex structures in each stratum. We follow the same notation as in Section~\\ref{not}. The proofs that the following maps are in fact fibrations are exactly the same as the proofs given in Section~\\ref{section:ActionOnU_2k}.\n\n\\[\\Stab^{S^1}(\\overline{D}) \\longrightarrow \\Symp^{S^1}_{h}(\\CP^2\\# \\overline{\\CP^2},\\omega_\\lambda) \\longtwoheadrightarrow {\\mathcal{S}^{S^1}_{D_{s}}} \\mathbin{\\textcolor{blue}{\\xrightarrow{\\text{~~~$\\simeq$~~~}}}} \\mathcal{J}^{S^1} \\cap U_{2s+1}\\rule{0em}{2em}\\]\n\n\\[\\Fix^{S^1}(\\overline{D}) \\longrightarrow \\Stab^{S^1}(\\overline{D}) \\longtwoheadrightarrow \\Symp^{S^1}(\\overline{D}) \\mathbin{\\textcolor{blue}{\\xrightarrow{\\text{~~~$\\simeq$~~~}}}} S^1 ~\\text{or}~ SO(3)\\rule{0em}{2em}\\]\n \n\\[\\Fix^{S^1} (N(\\overline{D})) \\longrightarrow \\Fix^{S^1}(\\overline{D}) \\longtwoheadrightarrow \\Gauge1^{S^1}(N(\\overline{D})) \\mathbin{\\textcolor{blue}{\\xrightarrow{\\text{~~~$\\simeq$~~~}}}} S^1 \\rule{0em}{2em}\\]\n \n\\[\\Stab^{S^1}(\\overline{F}) \\cap \\Fix^{S^1}(N(\\overline{D})) \\longrightarrow \\Fix^{S^1}(N(\\overline{D})) \\longtwoheadrightarrow 
\\overline{\\mathcal{S}^{S^1}_{F,p_0}} \\mathbin{\\textcolor{blue}{\\xrightarrow{\\text{~~~$\\simeq$~~~}}}} \\mathcal{J}^{S^1}(\\overline{D})\\simeq \\{*\\}\\rule{0em}{2em} \\]\n \n\\[\\Fix^{S^1}(\\overline{F}) \\longrightarrow \\Stab^{S^1}(\\overline{F}) \\cap \\Fix^{S^1}(N(\\overline{D})) \\longtwoheadrightarrow \\Symp^{S^1}(\\overline{F}, N(p_0)) \\mathbin{\\textcolor{blue}{\\xrightarrow{\\text{~~~$\\simeq$~~~}}}} \\left\\{*\\right\\}\\rule{0em}{2em}\\]\n \n\\[\\left\\{*\\right\\} \\mathbin{\\textcolor{blue}{\\xleftarrow{\\text{~~~$\\simeq$~~~}}}} \\Fix^{S^1}(N(\\overline{D} \\vee \\overline{F})) \\longrightarrow \\Fix^{S^1}(\\overline{F}) \\longtwoheadrightarrow \\Gauge1^{S^1}(N(\\overline{D} \\vee \\overline{F})) \\mathbin{\\textcolor{blue}{\\xrightarrow{\\text{~~~$\\simeq$~~~}}}} \\left\\{*\\right\\}\\rule{0em}{2em}\\]\n\nWhen the $S^1(a,b;m) \\subset \\mathbb{T}^2_m$ action is of the form $(a,b)= (0,\\pm1)$, we have the commutative diagram\n\\[\n\\begin{tikzcd}\n &\\Fix^{S^1}(\\overline{D}) \\arrow[r] &\\Stab^{S^1}(\\overline{D}) \\arrow[r,twoheadrightarrow] &\\Symp^{S^1}(\\overline{D}) \\\\\n &S^1 \\arrow[u,hookrightarrow] \\arrow[r] &U(2) \\arrow[u,hookrightarrow] \\arrow[r] &SO(3) \\arrow[u,hookrightarrow] \\\\\n\\end{tikzcd}\n\\]\nFor all other actions with $(a,b) \\neq (0,\\pm1)$ we have \n\\[\n\\begin{tikzcd}\n &\\Fix^{S^1}(\\overline{D}) \\arrow[r] &\\Stab^{S^1}(\\overline{D}) \\arrow[r,twoheadrightarrow] &\\Symp^{S^1}(\\overline{D}) \\\\\n &S^1 \\arrow[u,hookrightarrow] \\arrow[r] &\\mathbb{T}^2_{2s+1} \\arrow[u,hookrightarrow] \\arrow[r] &S^1 \\arrow[u,hookrightarrow] \\\\\n\\end{tikzcd}\n\\]\nIn both diagrams, the leftmost and the rightmost arrows are homotopy equivalences. 
As the diagram above commutes, the five lemma implies that the middle inclusions $\\mathbb{T}^2 \\hookrightarrow \\Stab^{S^1}(\\overline{D})$ or $U(2) \\hookrightarrow \\Stab^{S^1}(\\overline{D})$ are also weak homotopy equivalences.\n\n\\begin{thm}\\label{homogenous_CCC}\nConsider the $S^1(a,b;m)$ action on $(\\CP^2\\# \\overline{\\CP^2},\\omega_\\lambda)$ with $\\lambda >1$. If $ \\mathcal{J}^{S^1}_{\\om_\\lambda} \\cap U_{2s+1} \\neq \\emptyset$, then we have the following homotopy equivalences:\n\\begin{enumerate}\n \\item when $(a,b) \\neq (0,\\pm1)$, we have $\\Symp^{S^1}_h(\\CP^2\\# \\overline{\\CP^2},\\omega_\\lambda)\/\\mathbb{T}^2_{2s+1} \\simeq \\mathcal{J}^{S^1}_{\\om_\\lambda} \\cap U_{2s+1}$;\n \\item when $(a,b) = (0,\\pm1)$, we have $\\Symp^{S^1}_h(\\CP^2\\# \\overline{\\CP^2},\\omega_\\lambda)\/U(2) \\simeq \\mathcal{J}^{S^1}_{\\om_\\lambda} \\cap U_{2s+1}$.\n\\end{enumerate} \\qed\n\\end{thm} \n\n\n\nAs in the $S^2 \\times S^2$ case, to understand the homotopy type of the equivariant symplectomorphism group, we next need to understand which strata the space of invariant almost complex structures intersects. 
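As a concrete illustration (our own example, obtained by specialising Corollaries~\\ref{cor:IntersectingOnlyOneStratum} and \\ref{cor:IntersectingTwoStrata}, which are applied in general just below): for the action $S^1(1,1;3)$ we have $|2b-m| = |2-3| = 1$, so the condition $2\\lambda > |2b-m|+1 = 2$ holds for every $\\lambda > 1$, and

```latex
\\[
\\mathcal{J}^{S^1}_{\\om_\\lambda}
  = \\left(\\mathcal{J}^{S^1}_{\\om_\\lambda} \\cap U_3\\right)
    \\sqcup \\left(\\mathcal{J}^{S^1}_{\\om_\\lambda} \\cap U_1\\right),
\\]
```

i.e.\\ the invariant almost complex structures meet exactly the two strata $U_3$ and $U_1$.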
Consider the circle action $S^1(a,b;m)$ on $(\\CP^2\\# \\overline{\\CP^2},\\omega_\\lambda)$ with $\\lambda >1$. Corollaries~\\ref{cor:IntersectingOnlyOneStratum} and \\ref{cor:IntersectingTwoStrata} imply that\n\\begin{enumerate}\n \\item if $a=1$, $b \\notin \\{0,m\\}$ and $2\\lambda > |2b-m|+1$, then the space of $S^1(1,b;m)$-equivariant almost complex structures $\\mathcal{J}^{S^1}_{\\om_\\lambda}$ intersects the two strata $U_m$ and $U_{|2b-m|}$;\n \\item if $a=-1$, $b \\notin \\{0,-m\\}$ and $2\\lambda > |2b+m|+1$, then the space of $S^1(-1,b;m)$-equivariant almost complex structures $\\mathcal{J}^{S^1}_{\\om_\\lambda}$ intersects the two strata $U_m$ and $U_{|2b+m|}$;\n \\item in all other cases $\\mathcal{J}^{S^1}_{\\om_\\lambda}$ intersects only the stratum $U_m$.\n\\end{enumerate} \n\n\n\n\n\nAs before, the homotopy type of the equivariant symplectomorphism group can be easily described whenever $\\mathcal{J}^{S^1}_{\\om_\\lambda}$ intersects only one stratum.\n\n\\begin{thm}\\label{CCC1strata}\nConsider the circle action $S^1(a,b;m)$ on $(\\CP^2\\# \\overline{\\CP^2},\\omega_\\lambda)$. 
Under the following numerical conditions on $a,b,m,\\lambda$, the homotopy type of $\\Symp^{S^1}(\\CP^2\\# \\overline{\\CP^2},\\omega_\\lambda)$ is given by the table below.\n\\noindent{\\begin{center}\n \\begin{tabular}{|p{4.5cm}|p{3.5cm}|p{2cm}|p{4cm}|}\n\n \\hline\n Values of $(a,b ;m)$ & $\\lambda >1$ &Number of strata $\\mathcal{J}^{S^1}_{\\om_\\lambda}$ intersects &Homotopy type of $\\Symp^{S^1}(\\CP^2\\# \\overline{\\CP^2},\\omega_\\lambda)$\\\\\n\\hline\n {$(0,\\pm 1;m)$, $m\\neq 0$} &$\\forall\\lambda $ & 1 & $\\U(2)$ \\\\\n\\hline\n{$(0,\\pm 1;0)$ or $(\\pm 1,0;0)$} \n&$\\forall\\lambda$ &1 &$\\U(2)$ \\\\\n\\hline\n $(\\pm 1,0;m), m\\neq0$ &$\\forall \\lambda $ &1 &$\\mathbb{T}^2$\\\\\n \\hline\n$(\\pm 1,\\pm m;m), m \\neq 0$ &$\\forall \\lambda$ &1 & $\\mathbb{T}^2$\\\\\n\\hline\n{$(1,b;m), b \\notin \\{m,0\\}$} \n&$|2b-m|+1 \\geq2 \\lambda$ &1 &$\\mathbb{T}^2$ \\\\\n\\hline\n{$(-1,b;m), b \\notin \\{ -m,0\\}$} \n&$|2b+m|+1 \\geq2 \\lambda$ &1 &$\\mathbb{T}^2$ \\\\\n\n\\hline\nAll other values of $(a,b;m)$ except $(\\pm 1,b;m)$&$\\forall \\lambda$ &1 &$\\mathbb{T}^2$ \\\\\n\\hline\n\\end{tabular}\n\\end{center}}\n\\qed\n\\end{thm}\n\n\n\nTheorem~\\ref{CCC1strata} gives the homotopy type of the group of equivariant symplectomorphisms for all circle actions apart from the following two exceptional families:\n\\begin{itemize}\n\\item (i) $a=1$, $b \\notin \\{0, m\\}$, and $2\\lambda > |2b-m|+1$; or\n\\item (ii) $a=-1$, $b \\notin \\{0, -m\\}$, and $2 \\lambda > |2b+m|+1$.\n\\end{itemize}\n\nAt this point, we proceed as in Chapter 4. Firstly, we show that the map $\\mathbb{T}^2_m \\hookrightarrow \\Symp^{S^1}_h(\\CP^2\\# \\overline{\\CP^2},\\omega_\\lambda)$ induces a map that is injective in homology. \n\\\\\n\nFix a curve $\\overline{F}$ in the homology class $F$ passing through the fixed points $Q$ and $P$ in Figure~\\ref{oddHirz}. 
Let $\\mathcal{S}^{S^1}_{F,Q}$ denote the space of $S^1$-invariant curves in the class $F$ passing through $Q$ (defined in Figure~\\ref{oddHirz}) and let $\\Symp^{S^1}_h(\\CP^2\\# \\overline{\\CP^2},\\overline{F},\\omega_\\lambda)$ denote the space of $S^1$-equivariant symplectomorphisms of $(\\CP^2\\# \\overline{\\CP^2},\\omega_\\lambda)$ that pointwise fix the curve $\\overline{F}$.\n\\\\\n\nWithout loss of generality, assume that $\\mathcal{J}^{S^1}_{\\om_\\lambda} \\cap U_{|2b-m|}$ is the stratum of positive codimension in $\\mathcal{J}^{S^1}_{\\om_\\lambda}$. As $\\mathcal{J}^{S^1}_{\\om_\\lambda}$ is contractible, $\\mathcal{J}^{S^1}_{\\om_\\lambda} \\cap U_m = \\mathcal{J}^{S^1}_{\\om_\\lambda} \\setminus \\left(\\mathcal{J}^{S^1}_{\\om_\\lambda} \\cap U_{|2b-m|}\\right)$, and the real codimension of $\\mathcal{J}^{S^1}_{\\om_\\lambda} \\cap U_{|2b-m|}$ in $\\mathcal{J}^{S^1}_{\\om_\\lambda}$ is 2 (see Corollary~\\ref{odd_codim}), it follows that $\\mathcal{J}^{S^1}_{\\om_\\lambda} \\cap U_m$ is connected. Further, since $\\Symp^{S^1}_h(\\CP^2\\# \\overline{\\CP^2},\\omega_\\lambda)\/\\mathbb{T}^2_m \\simeq \\mathcal{J}^{S^1}_{\\om_\\lambda} \\cap U_m$ and both $\\mathbb{T}^2_m$ and $\\mathcal{J}^{S^1}_{\\om_\\lambda} \\cap U_m$ are connected, $\\Symp^{S^1}_h(\\CP^2\\# \\overline{\\CP^2},\\omega_\\lambda)$ is connected. As the fixed points of the $S^1(\\pm 1,b;m)$ actions are isolated and as $\\Symp^{S^1}_h(\\CP^2\\# \\overline{\\CP^2},\\omega_\\lambda)$ is connected, any element $\\phi \\in \\Symp^{S^1}_h(\\CP^2\\# \\overline{\\CP^2},\\omega_\\lambda)$ takes each fixed point of the $S^1$ action to itself. Thus the action of $\\Symp^{S^1}_h(\\CP^2\\# \\overline{\\CP^2},\\omega_\\lambda)$ on $\\mathcal{S}^{S^1}_{F,Q}$ is well defined.\n\n\\begin{lemma}\\label{odd_inj}\nThe inclusion $i: \\Symp^{S^1}_h(\\CP^2\\# \\overline{\\CP^2},\\overline{F},\\omega_\\lambda) \\hookrightarrow \\Symp^{S^1}_h(\\CP^2\\# \\overline{\\CP^2},\\omega_\\lambda)$ is a homotopy equivalence. 
\n\\end{lemma}\n\\begin{proof}\nConsider the fibration\n\\begin{equation*}\n \\Symp^{S^1}_h(\\CP^2\\# \\overline{\\CP^2},\\overline{F},\\omega_\\lambda) \\hookrightarrow \\Symp^{S^1}_h(\\CP^2\\# \\overline{\\CP^2},\\omega_\\lambda) \\longtwoheadrightarrow \\mathcal{S}^{S^1}_{F,Q}\n\\end{equation*}\nTo show that the action of $\\Symp^{S^1}_h(\\CP^2\\# \\overline{\\CP^2},\\omega_\\lambda)$ on $\\mathcal{S}^{S^1}_{F,Q}$ is transitive, we note that given $F^\\prime \\in \\mathcal{S}^{S^1}_{F,Q}$, there exists a $J^\\prime \\in \\mathcal{J}^{S^1}_{\\om_\\lambda}$ such that $F^\\prime$ is $J^\\prime$-holomorphic. As $\\mathcal{J}^{S^1}_{\\om_\\lambda}$ is connected, consider a path $J_t$ such that $J_0=J^\\prime$ and $J_1=J_m$, where $J_m$ is the standard complex structure on the $m^\\text{th}$-Hirzebruch surface for which the curve $\\overline{F}$ is holomorphic.\n\\\\\n\nBy Theorem~\\ref{prop_FIndecomposable}, for every $J_t$ we have a family of curves $F_t$ (with $F_0= F^\\prime$ and $F_1 =\\overline{F}$) in class $F$ passing through $Q$, and each such curve is $S^1$-invariant as the $J_t$ are $S^1$-invariant. By Lemma~\\ref{Au} we have a one-parameter family of Hamiltonian symplectomorphisms $\\phi_t \\in \\Symp^{S^1}_h(\\CP^2\\# \\overline{\\CP^2},\\omega_\\lambda)$ such that $\\phi_t(F_0) = F_t$ for all $t$.\n\nThus it suffices to show that $\\mathcal{S}^{S^1}_{F,Q}$ is contractible to complete the proof. To do this, note that we have a fibration\n\\begin{equation*}\n \\mathcal{J}^{S^1}_{\\om_\\lambda}(\\overline{F}) \\to \\mathcal{J}^{S^1}_{\\om_\\lambda} \\to \\mathcal{S}^{S^1}_{F,Q}\n\\end{equation*}\nwhere $\\mathcal{J}^{S^1}_{\\om_\\lambda}(\\overline{F})$ denotes the space of $S^1$-invariant almost complex structures for which the curve $\\overline{F}$ is $J$-holomorphic. 
As both $\\mathcal{J}^{S^1}_{\\om_\\lambda}(\\overline{F})$ and $\\mathcal{J}^{S^1}_{\\om_\\lambda}$ are contractible, $\\mathcal{S}^{S^1}_{F,Q}$ is contractible as well, completing the proof.\n\\end{proof}\n\n Define the following maps just as in the exposition above Theorem~\\ref{inj}; in our case we take the point $*$ to be the point $Q$\nin Figure~\\ref{oddHirz}. The section $s_0$ is given by\n\n \\begin{align*}\n s_{0}: S^2 &\\to W_m \\\\\n \\left[z_{0}, z_{1}\\right] &\\mapsto\\left(\\left[z_{0}, z_{1}\\right],[0,0,1]\\right)\n\\end{align*} \nand the projection to the first factor of $\\mathbb{C}P^1 \\times \\mathbb{C}P^2$ is\n\\begin{align*}\n \\pi_1: W_m &\\to S^2 \\\\\n \\left(\\left[z_{0}, z_{1}\\right],\\left[w_{0}, w_{1}, w_{2}\\right]\\right) &\\mapsto\\left[z_{0}, z_{1}\\right]\n\\end{align*}\nWe define a continuous map $h_1:\\Symp_h^{S^1} (\\CP^2\\# \\overline{\\CP^2},\\omega_\\lambda) \\to \\mathcal{E}\\left(S^{2}, *\\right)$ by setting\n\\begin{equation*}\n \\begin{aligned}\nh_1:\\Symp_h^{S^1} (\\CP^2\\# \\overline{\\CP^2},\\omega_\\lambda) &\\to \\mathcal{E}\\left(S^{2}, *\\right) \\\\\n\\psi &\\mapsto \\psi_1:= \\pi_{1} \\circ \\psi \\circ s_{0}\n\\end{aligned}\n\\end{equation*}\n\nFurther, define the restriction map $r: \\Symp_h^{S^1} (\\CP^2\\# \\overline{\\CP^2},\\overline{F},\\omega_\\lambda) \\to \\mathcal{E}\\left(S^{2}, *\\right)$ by restricting $\\phi \\in \\Symp_h^{S^1} (\\CP^2\\# \\overline{\\CP^2},\\overline{F},\\omega_\\lambda)$ to the fibre $\\overline{F}$.\n\nThus we have a well-defined map \n\\begin{align*}\n h: \\Symp_h^{S^1} (\\CP^2\\# \\overline{\\CP^2},\\overline{F},\\omega_\\lambda) &\\to \\mathcal{E}\\left(S^{2}, *\\right) \\times \\mathcal{E}\\left(S^{2}, *\\right) \\\\\n \\phi &\\mapsto (h_1(\\phi), r(\\phi))\n\\end{align*}\n\n\\begin{thm}\nThe inclusion map $i:\\mathbb{T}^2_m \\hookrightarrow \\Symp^{S^1}_h(\\CP^2\\# \\overline{\\CP^2},\\omega_\\lambda)$ induces a map that is injective in homology.\n\\end{thm}\n\n\\begin{proof} By
Lemma~\\ref{odd_inj}, it suffices to prove that the inclusion $i:\\mathbb{T}^2_m \\hookrightarrow \\Symp^{S^1}_h(\\CP^2\\# \\overline{\\CP^2},\\overline{F},\\omega_\\lambda) $ induces a map that is injective in homology. \n\\\\\n\nComposing with $h$, we have an inclusion $h \\circ i:\\mathbb{T}^2_m \\hookrightarrow \\mathcal{E}\\left(S^{2}, *\\right) \\times \\mathcal{E}\\left(S^{2}, *\\right)$ and it suffices to show that this map induces a map that is injective in homology. The proof of this claim is analogous to the proof of Theorem~\\ref{inj}.\n\\end{proof}\n\n\\begin{remark}\nAs the argument above does not depend on $m$, the same proof also shows that for the family of circle actions given by $S^1(1,b;m)$ with $2\\lambda > |2b-m|+1$, the inclusion of $\\mathbb{T}^2_{|2b-m|}$ into $\\Symp^{S^1}_h(\\CP^2\\# \\overline{\\CP^2},\\omega_\\lambda)$ also induces a map that is injective in homology. Similarly, for the $S^1(-1,b;m)$ actions with $2\\lambda > |2b+m|+1$, the inclusions of $\\mathbb{T}^2_{|2b+m|}$ and $\\mathbb{T}^2_m$ into $\\Symp^{S^1}_h(\\CP^2\\# \\overline{\\CP^2},\\omega_\\lambda)$ also induce maps that are injective in homology.\n\\end{remark}\n\nAs in the $S^2 \\times S^2$ case, we have that $i:\\mathbb{T}^2_m \\hookrightarrow \\Symp^{S^1} (\\CP^2\\# \\overline{\\CP^2},\\omega_\\lambda) $ induces a map which is injective in homology. From our discussion above, we also have that $\\Symp^{S^1}_h(\\CP^2\\# \\overline{\\CP^2},\\omega_\\lambda)\/\\mathbb{T}^2_m \\simeq \\mathcal{J}^{S^1}_{\\om_\\lambda} \\cap U_m$ and $\\Symp^{S^1}_h(\\CP^2\\# \\overline{\\CP^2},\\omega_\\lambda)\/\\mathbb{T}^2_{|2b-m|} \\simeq \\mathcal{J}^{S^1}_{\\om_\\lambda} \\cap U_{|2b-m|}$. 
Let $P := \\mathcal{J}^{S^1}_{\\om_\\lambda} \\cap U_m$ and $Q := \\mathcal{J}^{S^1}_{\\om_\\lambda} \\cap U_{|2b-m|}$. As $i:\\mathbb{T}^2_m \\hookrightarrow \\Symp^{S^1} (\\CP^2\\# \\overline{\\CP^2},\\omega_\\lambda) $ induces a map which is injective in homology, the Leray-Hirsch theorem gives \n\\begin{align*}\n H^*(\\Symp^{S^1}_h(\\CP^2\\# \\overline{\\CP^2},\\omega_\\lambda)) \\cong H^*(P) \\otimes H^*(\\mathbb{T}^2) \\\\\n H^*(\\Symp^{S^1}_h(\\CP^2\\# \\overline{\\CP^2},\\omega_\\lambda)) \\cong H^*(Q) \\otimes H^*(\\mathbb{T}^2)\n\\end{align*}\nAs before, we need to compute the codimension of the stratum $\\mathcal{J}^{S^1}_{\\om_\\lambda} \\cap U_{|2b-m|}$ in $\\mathcal{J}^{S^1}_{\\om_\\lambda}$. The computation in Section~\\ref{Isom_CCC} shows it to be two (see Corollary~\\ref{odd_codim}). \n\\\\\n\nThus we have the following theorem on the ranks of the homology groups of the space of equivariant symplectomorphisms.\n\n\\begin{thm} Consider the following circle actions on $\\CP^2\\# \\overline{\\CP^2}$: \\begin{itemize}\n\\item (i) $a=1$, $b \\notin \\{0, m\\}$, and $2\\lambda > |2b-m|+1$; or\n\\item (ii) $a=-1$, $b \\notin \\{0, -m\\}$, and $2 \\lambda > |2b+m|+1$.\n\\end{itemize} \nThen we have \n$$H^p\\left(\\Symp^{S^1}(\\CP^2\\# \\overline{\\CP^2},\\omega_\\lambda), k\\right) = \\begin{cases}\nk^4 & p \\geq 2\\\\\nk^3 & p =1 \\\\\nk & p=0\\\\\n\\end{cases}$$\nfor any field $k$.\n\\end{thm}\n\nAs the proof of Theorem~\\ref{full_homo} holds verbatim for the $S^1(a,b;m)$ actions on $\\CP^2\\# \\overline{\\CP^2}$ satisfying the conditions\n\\begin{itemize}\n\\item (i) $a=1$, $b \\notin \\{0, m\\}$, and $2\\lambda > |2b-m|+1$; or\n\\item (ii) $a=-1$, $b \\notin \\{0, -m\\}$, and $2 \\lambda > |2b+m|+1$,\n\\end{itemize} \nthe above results give us the homotopy type of the centralizer for all circle actions on $\\CP^2\\# \\overline{\\CP^2}$, which we summarise in the table below.\n\n\\begin{thm}\\label{full_homo_CCC}\nFor the 
$S^1$ action given by the integers $(a,b;m)$ acting on $(\\CP^2\\# \\overline{\\CP^2}, \\omega_\\lambda)$, under the following numerical conditions on $a,b,m,\\lambda$, the homotopy type of $\\Symp^{S^1}(\\CP^2\\# \\overline{\\CP^2},\\omega_\\lambda)$ is given by the table below.\n\\begin{center}\n \\begin{tabular}{|p{4.5cm}|p{3.5cm}|p{2cm}|p{3.75cm}|}\n\n \\hline\n Values of $(a,b ;m)$ & $\\lambda>1$ &Number of strata $\\mathcal{J}^{S^1}_{\\om_\\lambda}$ intersects &Homotopy type of $\\Symp^{S^1}(\\CP^2\\# \\overline{\\CP^2},\\omega_\\lambda)$\\\\\n\\hline\n {$(0,\\pm 1;m)$, $m\\neq 0$} &$\\forall\\lambda$ & 1 & $\\U(2)$ \\\\\n\\hline\n{$(0,\\pm 1;0)$ or $(\\pm 1,0;0)$}\n&$\\forall\\lambda$ &1 &$\\U(2)$ \\\\\n\n\\hline\n $(\\pm 1,0;m), m\\neq0$ &$\\forall\\lambda$ &1 &$\\mathbb{T}^2$\\\\\n \\hline\n$(\\pm 1,\\pm m;m), m \\neq 0$ &$\\forall\\lambda$ &1 & $\\mathbb{T}^2$\\\\\n\\hline\n\\multirow{2}{10em}{$(1,b;m), b \\notin \\{m,0\\}$} \n&$|2b-m|+1 \\geq2 \\lambda$ &1 &$\\mathbb{T}^2$ \\\\\\cline{2-4}\n&$2 \\lambda >|2b-m|+1 $ &2 &$\\Omega S^3 \\times S^1 \\times S^1 \\times S^1$ \\\\\n \\hline\n \\multirow{2}{10em}{$(-1,b;m), b \\notin \\{ -m,0\\}$} \n&$|2b+m|+1 \\geq2 \\lambda $ &1 &$\\mathbb{T}^2$ \\\\\\cline{2-4}\n&$2 \\lambda >|2b+m|+1$ &2 &$\\Omega S^3 \\times S^1 \\times S^1 \\times S^1$ \\\\\n\\hline\nAll other values of $(a,b;m)$ &$\\forall \\lambda$ &1 &$\\mathbb{T}^2$ \\\\\n\\hline\n\\end{tabular}\n\\end{center}\n\\qed\n\\end{thm}\n\n\n\\section{Isometry groups of odd Hirzebruch surfaces}\\label{Isom_CCC}\n\nIn this section we calculate the codimension of the smaller strata in $\\mathcal{J}^{S^1}_{\\om_\\lambda}$. Let $M$ denote the manifold $\\CP^2\\# \\overline{\\CP^2}$. By Theorem~\\ref{trans_strata}, we know that the normal bundle to the stratum $U_{2s+1} \\cap \\mathcal{J}^{S^1}_{\\om_\\lambda}$ can be identified with $H^{0,1}_{J_{2s+1}}(M,T^{1,0}_{J_{2s+1}}M)^{S^1}$. 
Thus, to calculate the codimension of $U_{2s+1} \\cap \\mathcal{J}^{S^1}_{\\om_\\lambda}$ in $\\mathcal{J}^{S^1}_{\\om_\\lambda}$, we need to understand how $S^1$ acts on $H^{0,1}_{J_{2s+1}}(M,T^{1,0}_{J_{2s+1}}M)$ and calculate the dimension of the invariant subspace. We use the convention $m = 2k+1$.\n\\\\\n\n\nThe Hirzebruch surface $\\mathbb{F}_{2k+1}$ is obtained by K\\\"ahler reduction of $\\mathbb{C}^{4}$ under the action of the torus $T^{2}_{2k+1}$ defined by\n\\[(s,t)\\cdot z = (s^{2k+1}tz_{1},tz_{2},sz_{3},sz_{4})\\]\nThe moment map is $\\phi(z)=((2k+1)|z_{1}|^{2}+|z_{3}|^{2}+|z_{4}|^{2}, |z_{1}|^{2}+|z_{2}|^{2})$ and the reduced manifold at level $(\\lambda +k,1)$ is symplectomorphic to $(\\CP^2\\# \\overline{\\CP^2},\\omega_{\\lambda})$ and biholomorphic to the Hirzebruch surface $\\mathbb{F}_{2k+1}$. In this model, the projection to the base is given by $[(z_{1},\\ldots,z_{4})]\\mapsto [z_{3}:z_{4}]$, the zero section is $[w_{0}:w_{1}]\\mapsto [(w_{0}^{2k+1},0,w_{0},w_{1})]$, and a fiber is $[w_{0}:w_{1}]\\mapsto [(w_{0}w_{1}^{2k+1},w_{0}w_{1},0,w_{1})]$. The torus $T^{2}(2k+1)=T^{4}\/T^{2}_{2k+1}$ acts on $\\mathbb{F}_{2k+1}$. This torus is generated by the elements $[(1,e^{it},1,1)]$ and $[(1,1,e^{is},1)]$, and its moment map is $[(z_{1},z_{2},z_{3},z_{4})]\\mapsto(|z_{2}|^{2},|z_{3}|^{2})$. 
The moment polytope $\\Delta(2k+1)$ is the convex hull of the vertices $(0,0)$, $(1,0)$, $(1,\\lambda+k)$, and~$(0,\\lambda-k -1)$.\n\\\\\n\nThe isometry group of $\\mathbb{F}_{2k+1}$ is \n\\[K(2k+1) = Z_{\\U(4)}(T^{2}_{2k+1})\/T^{2}_{2k+1}=(T^{2}\\times \\U(2))\/T^{2}_{2k+1}\\simeq \\U(2)\\]\nwhere the last isomorphism is given by\n\\[[(s,t), A]\\mapsto (s^{-1}t\\det A^{k}) A\\]\nUnder this isomorphism, an element $[(1,a,b,1)]$ of the torus $T(2k+1)$ is taken to\n\\[ab^{k}\\begin{bmatrix}b&0\\\\0&1\\end{bmatrix}\\]\nConsequently, at the Lie algebra level of the maximal tori, the map $\\lie{t}^{2}(2k+1)\\to \\lie{t}^{2}$ is given by\n\\[\\begin{pmatrix}\n1&k+1\\\\\n1&k\n\\end{pmatrix}\\]\nThe moment polytope associated to the maximal torus $T^{2}\\subset K(2k+1)$ is thus the balanced polytope obtained from $\\Delta(2k+1)$ by applying the inverse transpose $\\begin{pmatrix}-k&1\\\\k+1&-1\\end{pmatrix}$.\n\n\\subsection{Odd isotropy representations and codimension calculation}\n\nThe action of the isometry group $K(2k+1)\\simeq \\U(2)$ on the space $H_{J}^{0,1}(M,TM)$ of infinitesimal deformations is isomorphic to $\\Det^{1-k}\\otimes \\Sym^{2k-1}$, where $\\Det$ is the determinant representation of $\\U(2)$ on $\\mathbb{C}$, and where $\\Sym^{k}(\\mathbb{C}^{2})$ is the $k$-fold symmetric product of the standard representation of $\\U(2)$ on $\\mathbb{C}^{2}$. Using the double covering $S^{1}\\times\\SU(2)\\to\\U(2)$, we see that irreducible representations of $\\U(2)$ correspond to irreducible representations of $S^{1}\\times\\SU(2)$ for which $(-1,-\\id)$ acts trivially. If $A_{n}$ denotes the representation $t\\cdot z=t^{n}z$ of $S^{1}$ on $\\mathbb{C}$, and if $V_{k}$ is the $k$-fold symmetric product of the defining representation of $\\SU(2)$ on $\\mathbb{C}^{2}$, then the irreducible representations of $\\U(2)$ are $A_{n}\\otimes V_{k}$ with $n+k$ even. In this notation, we have the identifications $\\Det = A_{2}$, while $\\Sym=A_{1}\\otimes V_{1}$. 
Consequently, $\\Det^{1-k}\\otimes \\Sym^{2k-1} = A_{1}\\otimes V_{2k-1}$.\n\nWith respect to the double covering $S^{1}\\times\\SU(2)\\to\\U(2)$, the maximal torus $T^{2}\\subset\\U(2)$ of diagonal matrices $D_{s,t}:=\\begin{pmatrix}e^{is} & 0\\\\ 0 & e^{it}\\end{pmatrix}$ lifts to\n\\[\\left((\\det D_{s,t})^{1\/2},\\frac{D_{s,t}}{(\\det D_{s,t})^{1\/2}}\\right)\n=\\left(e^{i(s+t)\/2},\\begin{pmatrix}e^{i(s-t)\/2} & 0\\\\ 0 & e^{i(t-s)\/2}\\end{pmatrix}\\right)\\]\nAs explained above, the action of $K(2k+1)$ on $H^{0,1}(\\CP^2\\# \\overline{\\CP^2},T^{1,0}_{J_m}(\\CP^2\\# \\overline{\\CP^2})) \\cong \\mathbb{C}^{m-1}$ is isomorphic to $\\Det^{1-k}\\otimes\\Sym^{2k-1}$. Hence, to calculate the codimension we only need to calculate the dimension of the invariant subspace of the vector space $H^{0,1}(\\CP^2\\# \\overline{\\CP^2},T^{1,0}_{J_m}(\\CP^2\\# \\overline{\\CP^2})) \\cong \\mathbb{C}^{m-1}$ under the $S^1(1,b;m)$ action. To do so, we note that a basis of $\\Sym^{2k-1}$ is given by the homogeneous polynomials $P_{j}=z_{1}^{2k-1-j}z_{2}^{j}$ for $j\\in\\{0,\\ldots,2k-1\\}$. The action of $D_{s,t}$ on $P_{j}$ is\n\\[D_{s,t}\\cdot P_{j}=e^{i\\big((s+t)(1-k)+s(2k-1-j)+tj\\big)}P_{j}\\]\nso that each $P_{j}$ generates an eigenspace for the action of the maximal torus $T^{2}(2k+1)$ generated by $D_{s,t}$. In particular, the circle $S^{1}(a,b;2k+1)$ acts trivially on $P_{j}$ if, and only if, \n\\begin{equation}\\label{formula_CCC}\n (a-b)(k-j)+b=(a,b)\\cdot(k-j,j-k+1)=0.\n\\end{equation}\nThus the codimension (in the balanced basis of the maximal torus of $K(2k+1)$) is given by the number of $j \\in \\{0,\\ldots,2k-1\\}$ satisfying equation~\\ref{formula_CCC}.\n\\\\\n\nNote that just as in the $S^2 \\times S^2$ case, the above codimension calculation was with respect to the basis of the maximal torus in $K(2k+1)$. 
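As a quick sanity check of equation~\\ref{formula_CCC}, consider the case $k=1$, that is, $m=3$. Then $\\Sym^{1}$ has basis $P_{0}=z_{1}$ and $P_{1}=z_{2}$, and setting $k=1$ and $j=0,1$ in the exponent $(s+t)(1-k)+s(2k-1-j)+tj$ gives the weights\n\\[D_{s,t}\\cdot P_{0}=e^{is}P_{0}, \\qquad D_{s,t}\\cdot P_{1}=e^{it}P_{1}\\]\nHence the circle $S^{1}(a,b;3)$ fixes $P_{0}$ if, and only if, $a=0$, and fixes $P_{1}$ if, and only if, $b=0$, which agrees with the quantity $(a-b)(k-j)+b$ of equation~\\ref{formula_CCC} evaluating to $a$ at $j=0$ and to $b$ at $j=1$. 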
Hence to calculate the codimension for the $S^1(1,b;m) \\subset \\mathbb{T}^2_m$ as in our case, we need to transform the basis by multiplication by the matrix $\\begin{pmatrix}\\frac{m-1}{2}+1& -1\\\\\\frac{m-1}{2}&-1\\end{pmatrix}$. Thus the change of basis takes the vector $\\begin{pmatrix}1\\\\b\\end{pmatrix}$ in the basis for the standard moment polytope \n\n\n\\[\n\\begin{tikzpicture}\n\\draw (0,1) -- (3,1) ;\n\\draw (0,1) -- (0,0) ;\n\\draw (0,0) -- (4,0) ;\n\\draw (3,1) -- (4,0) ;\n\n\\end{tikzpicture}\n\\]\n\nto the vector $\\begin{pmatrix}\\frac{m+1}{2}-b\\\\\\frac{m-1}{2}-b\\end{pmatrix}$ in the basis for the balanced polytope. Hence $a$ and $b$ in equation~\\ref{formula_CCC} above need to be replaced by $\\frac{m+1}{2}-b$ and $\\frac{m-1}{2}-b$ respectively to get the correct codimension for the $S^1(1,b;m)$ action. \n\\\\\n\nThus we have the following theorem. \n\n\n\n\\begin{thm}\\label{codimension_calc2}\nGiven the circle action $S^1(1,b;m)$ on $(\\CP^2\\# \\overline{\\CP^2},\\omega_\\lambda)$ with $2\\lambda > |2b-m|+1$ and $b \\notin \\{0, m\\}$, the complex codimension of the stratum $\\mathcal{J}^{S^1}_{\\om_\\lambda} \\cap U_m$ in $\\mathcal{J}^{S^1}_{\\om_\\lambda}$ is given by the number of $j \\in \\{1, \\ldots , m-1\\}$ such that $j=b$.\n\\\\\n\nSimilarly, for the action $S^1(-1,b;m)$ with $2\\lambda > |2b+m|+1$ and $b \\notin \\{0, -m\\}$, the complex codimension of the stratum $\\mathcal{J}^{S^1}_{\\om_\\lambda} \\cap U_m$ in $\\mathcal{J}^{S^1}_{\\om_\\lambda}$ is given by the number of $j \\in \\{1, \\ldots , m-1\\}$ such that $j=-b$.\n\\qed\n\\end{thm}\n\n\\begin{cor}\\label{odd_codim}\nFor the circle actions satisfying either \\begin{itemize}\n \\item (i) $a=1$, $b \\notin \\{0, m\\}$, and $2\\lambda > |2b-m|+1$; or\n\\item (ii) $a=-1$, $b \\notin \\{0, -m\\}$, and $2 \\lambda > |2b+m|+1$,\n\\end{itemize}the complex codimension of the stratum $\\mathcal{J}^{S^1}_{\\om_\\lambda} \\cap U_m$ in $\\mathcal{J}^{S^1}_{\\om_\\lambda}$ is either 0 or 
1.\n\\qed\n\\end{cor}\n\n\n\\chapter{Equivariant Gauge Groups}\n\nIn this chapter we show how to calculate the homotopy type of the equivariant gauge groups that arise in Lemma~\\ref{gauge} and Lemma~\\ref{ngauge}. \n\\\\\n\nLet $P$ be a principal $G$-bundle over $B$, where $G$ is an abelian group that acts on the right. Let $H$ be a Lie group acting on the base space $B$ such that this action lifts to an action on the bundle $P$. We shall write this action as a left action. Note that $H$ need not act effectively on the space $B$. \n\\\\\n\nLet $\\Gauge1^H(P)$ denote the space of all equivariant (with respect to the action of $H$) bundle automorphisms, i.e.\\ equivariant maps $u$ such that the following diagram commutes.\n\\[ \\begin{tikzcd}\n &P \\arrow[d,\"\\pi\"] \\arrow[r, \"u\"] &P \\arrow[dl,\"\\pi\"] \\\\\n &B \n\\end{tikzcd}\n\\]\n\nGiven $u \\in \\Gauge1^H(P)$, define the map \n\\begin{align*}\n \\phi_u : P &\\rightarrow G \\\\\nx &\\mapsto \\phi_u(x)\n\\end{align*}\n\nwhere $\\phi_u(x)$ is defined such that \n$$x \\cdot \\phi_u(x) = u(x)$$\n\nLet us now see how the map $\\phi_u$ behaves under the left action of $H$. We have \n\n$$ u(h \\cdot x) = h\\cdot x \\cdot \\phi_u(h \\cdot x) $$ \n\nBut we already have that \n\n$$u(h \\cdot x) = h \\cdot u(x) = h \\cdot x \\cdot \\phi_u(x)$$\n\nPutting the equalities together and noticing that the $G$-action is free, we get that \n\n$$ \\phi_u(h \\cdot x) = \\phi_u(x) $$ \n\nThat is, the map $\\phi_u$ is invariant under the action of $H$. \n\\\\\n\nAlso, from the definition we can see that $$\\phi_u( x \\cdot g) = g^{-1} \\cdot \\phi_u(x) \\cdot g = \\phi_u(x)$$ \n\nwhere the last equality follows from $G$ being abelian. \n\\\\\n\nDenote by $\\Maps_{H,G}(P,G)$ the space of all $H$- and $G$-invariant smooth functions from $P$ to $G$ equipped with the $C^\\infty$-topology. 
Note that as $G$ is abelian, this space of maps is homeomorphic to the space $\\Inv(B,G)$ of $H$-invariant maps from $B$ to $G$ endowed with the $C^\\infty$-topology. Further, we have that the map\n\n\n\\begin{align*}\n \\rho: \\Gauge1^H (P) &\\rightarrow \\Maps_{H,G}(P,G) \\\\\n u &\\mapsto \\phi_u\n\\end{align*}\nis a homeomorphism (with both spaces endowed with the $C^\\infty$-topology), with the inverse being constructed using the definition $x \\cdot \\phi_u(x) = u(x)$. \n\\\\\n\nThus $\\Gauge1^H(P)$ is homeomorphic to $\\Inv(B,G)$.\n\\\\\n\nLet us now use this to calculate the homotopy type of the gauge groups in the fibrations in Chapter 2. \n\\\\\n\nConsider the rank 2 symplectic normal bundle $N(\\overline{D})$ of $\\overline{D}$. Let us fix an arbitrary equivariant compatible fibrewise almost complex structure $J$ on $N(\\overline{D})$. As this is a rank two bundle, the structure group is $Sp(2)$, which can be reduced to $U(1)= S^1$, and the two bundles are isomorphic. Thus the space of symplectic automorphisms of the original bundle is homeomorphic to the space of symplectic automorphisms of the reduced bundle.\n\\\\\n\nIn our case we have a right group action of $S^1$ on this bundle and we are interested in the equivariant symplectic automorphisms of this bundle. This space is homotopy equivalent to the space of equivariant symplectic automorphisms of the $U(1)$-bundle, as the reduction of the structure group can be done equivariantly. As $U(1) = S^1$, the space of equivariant symplectic automorphisms of the $U(1)$-bundle is the same as $\\Gauge1^{S^1}(P)$, where $P$ is the associated principal bundle. This is homeomorphic to $\\Inv(S^2,S^1)$ by the above discussion.\n\\\\\n\nFinally, note that for any non-trivial $S^1$ action on $S^2$ (possibly non-effective), the quotient space under this action is an interval. 
Hence the space $\\Inv(S^2,S^1)$ is just the space of smooth maps from the interval to $S^1$.\n\\\\\n\nBefore we embark on calculating the homotopy type of $\\Gauge1^{S^1}(N(\\overline{D}))$ (see Section~\\ref{not} for notation), we need the following technical lemma. Note that in all our calculations above we have used the $C^\\infty$-topology on $\\Gauge1^{S^1}(N(\\overline{D}))$ and $\\Inv_{S^1}(S^2,S^1)$. Let $\\Gauge1_c^{S^1}(N(\\overline{D}))$ denote the space of continuous $S^1$-equivariant gauge transformations of the bundle $N(\\overline{D})$ equipped with the $C^0$-topology. Using the same argument as at the beginning of the section, we can show that $\\Gauge1_c^{S^1}(N(\\overline{D}))$ is homotopic to the space $\\Inv_{S^1,c}(S^2,S^1)$ of continuous $S^1$-invariant maps from $S^2$ to $S^1$. Then we have the following.\n\n\n\\begin{lemma}\\label{wolken}\nThe space $\\Gauge1^{S^1}(N(\\overline{D}))$ with the $C^\\infty$-topology is homotopic to the space $\\Gauge1_c^{S^1}(N(\\overline{D}))$ equipped with the $C^0$-topology.\n\\end{lemma}\n\nThe proof of this lemma follows from an equivariant version of the arguments used to prove Theorem 3.2.13 in \\cite{homgauge}. \n\n\\begin{lemma} \\label{Gauge(D)}\n$\\Gauge1^{S^1}(N(\\overline{D})) \\simeq \\Gauge1_c^{S^1}(N(\\overline{D})) \\simeq \\Inv_{S^1,c}(S^2,S^1) \\simeq S^1$\n\\end{lemma}\n\n\\begin{proof}\n$\\Gauge1^{S^1}(N(\\overline{D})) \\simeq \\Gauge1_c^{S^1}(N(\\overline{D})) \\simeq \\Inv_{S^1,c}(S^2,S^1)$ follows from Lemma \\ref{wolken} and the discussion above. \n\\\\\n\nLet $\\Maps(S^2\/S^1,S^1)$ denote the space of continuous maps from $S^2\/S^1$ to $S^1$. Then the space $\\Inv_{S^1,c}(S^2,S^1)$ is homeomorphic to the space $\\Maps(S^2\/S^1,S^1)$. Further, we note that as $S^2\/S^1$ is homeomorphic to an interval $[0,1]$, the space $\\Maps(S^2\/S^1,S^1)$ can be identified with the space of continuous maps from the interval $[0,1]$ to $S^1$, which we denote by $\\Maps([0,1],S^1)$. 
Let $p_0$ be a fixed point for the $S^1$ action on $S^2$. Then we consider the following fibration:\n\n\n\\[\n\\begin{tikzcd}\n &\\left\\{*\\right\\} \\simeq \\Inv_{S^1,c}\\left(\\left(S^2,p_0\\right), \\left(S^1,id\\right)\\right) \\arrow[r,hookrightarrow] &\\Inv_{S^1,c}(S^2,S^1) \\arrow[r,\"ev\"] &S^1 \\\\ \n\\end{tikzcd}\n\\]\n\nwhere the map $ev:\\Inv_{S^1,c}(S^2,S^1) \\rightarrow S^1 $ is the evaluation map at the fixed point $p_0$ (for the $S^1$ action on $S^2$), and the space $\\Inv_{S^1,c}\\left(\\left(S^2,p_0\\right), \\left(S^1,id\\right)\\right)$ is the space of all continuous maps from $S^2$ to $S^1$ that are invariant under the $S^1$ action and send the point $p_0$ to the identity of $S^1$. As above, the space $\\Inv_{S^1,c}\\left(\\left(S^2,p_0\\right), \\left(S^1,id\\right)\\right)$ can be identified with the space of continuous maps from the interval $[0,1]$ to $S^1$ that send $0$ to the identity of $S^1$. This space of pointed maps from $[0,1]$ to $S^1$ is contractible, thus completing the proof.\n\\end{proof}\n\n\\begin{remark}\nWe need the point $p_0$ to be a fixed point so that the evaluation map is surjective.\n\\end{remark}\n\n\\begin{lemma} \\label{Gauge(N(D))}\n$\\Gauge1^{S^1}(N(\\overline{D} \\vee \\overline{F})) \\simeq \\left\\{*\\right\\}$\n\\end{lemma}\n\n\\begin{proof}\nAnalogous to our method above, we get that $\\Gauge1^{S^1}(N(\\overline{D} \\vee \\overline{F}))$ is just the space of continuous maps from the following configuration to $S^1$ that send some neighbourhood of the crossing to the identity in $S^1$.\n\\begin{figure}[H]\n \\centering\n \n \n \n\\begin{tikzpicture}[thick, scale=0.6]\n\\draw (0,-3) -- (0,-1);\n\\draw (0,1) --(0,3);\n\\draw[red] (0,1) --(0,-1);\n\\draw[red] (1,0) --(-1,0);\n\\draw (-1,0) -- (-3,0);\n\\draw (1,0) --(3,0);\n\\draw (10,0) circle (2cm);\n\\draw[->] (4,0.5) -- (7,0.5);\n\\node [right] at (12.2,0) {id};\n\\node[below] at (1.3,0) {$\\overline{D}$};\n\\node[right] at (0,2) {$\\overline{F}$};\n\\draw [fill,red] (12,0) 
circle (0.1cm);\n\\end{tikzpicture}\n\\end{figure}\n\nAnd the space of such maps is indeed contractible thus completing the proof. \n\\end{proof}\n\\begin{comment}\n\n\nWe now want to carry out similar computation but for action of finite abelian groups $\\mathbb{Z}_n$ on the bundle. As discussed before, we have that $\\Gauge1^{Z_n}(N(\\overline{D})) \\simeq \\Inv_{Z_n}(S^2,S^1)$ where $\\Inv_{Z_n}(S^2,S^1)$ denotes the space of $Z_n$ invariant maps from $S^2$ to $S^1$. Further let $\\Gauge1_c^{Z_n}(N(\\overline{D}))$ denote the space of continuous $\\mathbb{Z}_n$ equivariant gauge transformations. As in Lemma \\ref{wolken} we have that \n$\\Gauge1^{Z_n}(N(\\overline{D})) \\simeq \\Gauge1_c^{Z_n}(N(\\overline{D}))$. Further, just as in the $S^1$ case we may identify $\\Gauge1_c^{Z_n}(N(\\overline{D}))$ with the space $\\Inv_{Z_n, c}(S^2,S^1)$ of continuous $\\mathbb{Z}_n$ invariant maps from $S^2$ to $S^1$.\n\\\\\n\nPutting all this together we have, \n\n\\begin{lemma}\\label{finitegauge}\n$\\Gauge1^{Z_n}(N(\\overline{D})) \\simeq \\Gauge1_c^{Z_n}(N(\\overline{D})) \\simeq \\Inv_{Z_n, c}(S^2,S^1) \\simeq S^1$\n\\end{lemma}\n\n\\begin{proof}\nThe homotopy equivalences $\\Gauge1^{Z_n}(N(\\overline{D})) \\simeq \\Gauge1_c^{Z_n}(N(\\overline{D})) \\simeq \\Inv_{Z_n, c}(S^2,S^1)$ are all explained above. Thus we only need to show that $\\Inv_{Z_n, c}(S^2,S^1) \\simeq S^1$. In our case we know that the $Z_n$ action on $S^2$ are in fact restrictions of $S^1$ actions on $S^2$, hence they are rotations about fixed points of $S^2$. Note that $\\Inv_{Z_n,c}(S^2,S^1)$ is homeomorphic to the space $\\Maps(S^2\/Z_n, S^1)$ of continuous maps from $S^2\/\\mathbb{Z}_n$ to $S^1$. Further, $S^2\/Z_n$ is homeomorphic to $S^2$ and hence $\\Maps(S^2\/Z_n, S^1) \\cong \\Maps(S^2, S^1)$. 
Finally, we note that $\\Maps(S^2, S^1) \\simeq S^1$ thus completing the result.\n\\end{proof}\n\n\\begin{lemma}\\label{wedgegauge}\n$\\Gauge1^{Z_n}(N(\\overline{D} \\vee \\overline{F})) \\simeq \\left\\{*\\right\\}$\n\\end{lemma}\n\n\\begin{proof}\nAnalogous to the proof of Lemmas \\ref{Gauge(N(D))} and \\ref{finitegauge}, we can identify the group $\\Gauge1_{Z_n}(N(\\overline{D} \\vee \\overline{F}))$ with maps from $S^2 \\vee S^2$ that send a neighbourhood of the wedge point to the identity in $S^1$. The space of such maps is contractible. \n\\end{proof}\n\nFinally, we need to understand the homotopy type of $\\mathbb{Z}_n$ equivariant symplectomorphisms $\\Symp^{\\mathbb{Z}_n}(S^2)$, in our analysis in Chapter 6. \n\\begin{lemma}\\label{Wang}\nConsider a symplectic action of $\\mathbb{Z}_n$ on $S^2$, then the space $\\Symp^{\\mathbb{Z}_n}(S^2)$ is homotopic to $\\SO(3)$, if $\\mathbb{Z}_n$ fixes $S^2$ pointwise, and is homotopic to $S^1$ otherwise.\n\\end{lemma}\n\\begin{proof}\nLet $\\psi \\in \\Symp^{\\mathbb{Z}_n}(S^2)$, consider the graph $\\tilde \\psi$ of $\\psi$ i.e\n\n\\begin{align*}\n \\tilde \\psi: S^2 &\\rightarrow S^2 \\times S^2 \\\\\n z &\\mapsto (z,\\psi(z))\n\\end{align*}\n\nLet $\\SO(3)^{\\mathbb{Z}_n}$ denote the centraliser of $Z_n$ inside $\\SO(3)$. Choose a $\\mathbb{Z}_n$ equivariant metric for the product $\\mathbb{Z}_n$ action on $S^2 \\times S^2$ coming from the $\\mathbb{Z}_n$ action on $S^2$. Then by Theorem C, Corollary C and Corollary 4.1 in \\cite{wang}, the mean curvature flow with respect to this equivariant metric gives us a canonical homotopy of $\\psi$ to an element inside $\\SO(3)^{\\mathbb{Z}_n}$. Further this homotopy is identity on all the elements of $\\SO(3)^{\\mathbb{Z}_n}$. Thus the map we get is in fact a deformation retract of $\\Symp^{\\mathbb{Z}_n}(S^2)$ and $\\SO(3)^{\\mathbb{Z}_n}$. 
Note $\\SO(3)^{\\mathbb{Z}_n} = \\SO(3)~ or~ S^1$ depending on whether $\\mathbb{Z}_n$ is in the centre of $SO(3)$ or not respectively, thus proving the claim.\n\\end{proof}\n\\end{comment}\n\\chapter[Equivariant Gompf argument]{Holomorphic configurations and equivariant Gompf argument}\n\n\n\\begin{thm}\\label{transverse}\nLet $G$ be a compact group. Let $A$ and $B$ be two $G$-invariant symplectic spheres in a 4-dimensional symplectic manifold $(M,\\omega)$ intersecting $\\omega$-orthogonally at a unique fixed point $p$ for the $G$ action. Then there exists an invariant $J \\in \\mathcal{J}_\\omega^{G}$ such that both $A$ and $B$ are $J$-holomorphic. Here $\\mathcal{J}_\\omega^{G}$ denotes the space of $G$-invariant compatible almost complex structures on $M$. \n\\end{thm}\n\n\\begin{proof}\nThe proof follows by mimicking the proof of Lemma A.1 in \\cite{Evans} in the presence of a group action.\n\\iffalse\nAs $A$ and $B$ are symplectic submanifolds of $M$. There exists a compatible(with respect to the restricted form on $A$ and $B$) $G$- invariant almost complex $J_A$ and $J_B$ such that $A$ is $J_A$-holomorphic and $B$ is $J_B$-holomorphic. This gives us invariant metrics $g_A$ and $g_B$ defined on the tangent bundles $TA$ and $TB$ respectively. \n\\\\\n\nIn a neighbourhood of $p$, choose a chart $U_p$ such that $A$ and $B$ are $\\mathbb{R}^2 \\times \\{0\\}$ and $\\{0\\}\\times \\mathbb{R}^2$ respectively. This is possible as $A$ and $B$ are symplectically orthogonal. Define the metric $\\Tilde{g}:= g_A \\oplus g_B$ on $T_p M$. We note that \n\nNow extend $g_A$ and $g_B$ in a neighbourhood of the point $p$ as follows. Pick a smooth chart $U$ such that in that chart the submanifold $A$ is $\\{0\\}\\times \\mathbb{R}^2$ and the submanifold $B$ is $\\mathbb{R}^2 \\times \\{0\\}$. We think of a metric as a section of $TM \\otimes T^*M$ that is positive definite. Further we demand that $U$ is a trivialising chart for the bundle $TM \\otimes T^*M$. 
Then in the neighbourhood $U$ both $g_A$ and $g_B$ are function from $\\mathbb{R}^4$ to $\\mathbb{R}^4$. We define a smooth extension of $g_A$ and $g_B$ to the whole neighbourhood as follows. We denote the extension by $\\Tilde{g}$.\n\n$$ \\Tilde{g}\\left((v_1,v_2,v_3,v_4)\\right):= g_A\\left((0,0,v_3,v_4\\right) + g_B\\left(v_1,v_2,0,0)\\right) - g_A(0,0,0,0)$$\n\\\\\n\nThe restriction $\\Tilde{g}$ to $\\{0\\} \\times \\mathbb{R}^2$ is $g_A(0,0,v_1,v_2) + {g_B(0,0,0,0)} - g_A(0,0,0,0)$. As $g_A(0,0,0,0)= g_B(0,0,0,0)$, we have $g_A(0,0,v_1,v_2) + {g_B(0,0,0,0)} - g_A(0,0,0,0) = g_A(0,0,v_1,v_2) = g_A$. Similarly we can show that the restriction of $\\Tilde{g}$ to $\\mathbb{R}^2 \\times \\{0\\}$ is $g_B$. Thus showing that we can extend $\\Tilde{g}$ smoothly to a neighbourhood $U$ of $p$. Apriori this extension need not be equivariant, we further choose a smaller neighbourhood $\\Tilde{U} \\subset U$ which is invariant under the $G$ action. And we average $\\Tilde{g}$ on $\\Tilde{U}$ under the $G$ action to get a function $g^\\prime$ which is equivariant and extends $g_A$ and $g_B$. \n\\\\\n\nFinally, given any other point $x \\in M$, $x \\neq p$ we can find a $G$ invariant open set $U_x$, such that either $U_x \\cap A \\neq \\emptyset$ or $U_x \\cap B \\neq \\emptyset$ and a $G$-equivariant metric $\\tilde{g}_{U_x}$ such that $\\tilde{g}_{U_x}(a) = g_A$ for all $a \\in U_x cap A$ and $\\tilde{g}_{U_x}(b) = g_B(b)$ for all $b \\in U_x \\cap B$. We then patch together all the above metrics using a $G$-invariant partition of unity for the open cover $\\bigcup_{x\\in M\\setminus p} U_x \\cup \\Tilde{U}$, thus giving us a metric with the required properties.\n\\fi\n\\iffalse\nwhere the co-ordinates is gotten by realizing $T_pM \\cong T_p A \\oplus T_p B$ i.e $(v_1,v_2)$ and $(u_1,u_2) \\in T_p A $ and similarly $(v_3,v_4)$ and $(u_3,u_4) \\in T_p B $. Further we note that $\\Tilde{g}_p$ is $S^1$-invariant. 
\n\\\\\n\nNow consider the equivariant symplectic normal bundles $N(A)$ to $A$ and $N(B)$ which is normal to $B$. At $p$ we see that the fibre symplectic normal bundle $N(A)$ agrees with $T_p B$. We already have a metric defined on $N_p(A) = T_p B$, we define an $S^1$-invariant fibrewise metric $\\Tilde{g}_A$ on $N(A)$ such that $\\Tilde{g}_A$ at the point $p$ agree with $g_B$ at the point $p$. Similarly define $\\Tilde{g}_B$ on $N(B)$ such that $\\Tilde{g}_B$ at the point $p$ agree with $g_A$ at the point $p$.\n\\\\\n\nDefine $h_A:= g_A \\oplus \\Tilde{g}_A$ and $h_B := g_B \\oplus \\Tilde{g}_B$ and define the metric on $TM|_{A \\cup B}$\n\\[\nh_x := \\begin{cases}\n{h_A} ~~ x \\in A \\\\\n{h_B} ~~ x \\in B \\\\\n\\end{cases}\n\\]\n\nFinally extend this metric $h$ to the whole of M, thus giving us an invariant $S^1$-metric $\\Tilde{h}$. Finally consider the almost complex structure corresponding to this metric, thus solving the problem.\n\\vspace{5mm}\n\\fi\n\n\\end{proof}\n\n \n\\begin{thm}[Equivariant Gompf Argument] \\label{gmpf}\nLet $G$ be a compact group. Let $p$ be a fixed point for the action. Let $A$, $B$, and $\\overline{A}$ be $G$-invariant symplectic spheres in a 4-dimensional symplectic manifold $(M,\\omega)$ such that \n\\begin{itemize}\n \\item $A \\cap B = \\{p\\}$ and the intersection at $p$ is transverse,\n \\item $\\overline{A} \\cap B = \\{p\\}$ and the intersection at $p$ is $\\omega$-orthogonal.\n\\end{itemize} \nThen there exists a $G$-equivariant isotopy $A_t$ of $A$ such that $A_t$ intersects $B$ transversely at $p$ for all $t$, $A_1 = \\overline{A}$ in a small neighbourhood of $p$, and the curve $A_1$ agrees with $A$ outside some neighbourhood of $p$.\n\\end{thm}\n\\iffalse\nBefore we embark on the proof of this statement, we make a few observations. 
The first being that due to the symplectic neighbourhood theorem we can in fact work on the symplectic normal bundle and as this is a local problem, we can simplify our hardships by working in a trivialising chart i.e in $\\mathbb{R}^4$. The problem thus reformulated translates to the following one:\n\\\\\n\nGiven an $S^1$ action on $\\mathbb{R}^4$ and two $S^1$ invariant symplectic planes $A$ and $B$. There is an isotopy $\\psi_t \\in \\Ham_c(\\mathbb{R}^4)$ such that $\\psi_t(A)$ intersects $B$ transversely at $0$ for all $t$, and $\\psi_1(A) = B^{\\bot_\\omega}$ in a small neighbourhood around the origin, where $\\omega_0$ is the standard symplectic form on $\\mathbb{R}^4$ and $B^{\\bot_\\omega}$ is the symplectic orthogonal to $B$. Further we may as well assume $B$ to be the two plane in $\\mathbb{R}^4$ given by $(0,0,x,y)$, and $B^{\\bot_\\omega}$ to the given by the plane $(x,y,0,0)$.\n\\\\\n\nNext we observe that given a function $A:=(f,g): \\mathbb{R}^2 \\rightarrow \\mathbb{R}^2$, the graph of $A$ is a symplectic (for the standard form) submanifold of $\\mathbb{R}^4$ iff $\\left\\{f,g\\right\\} > -1$. This can be proven from a direct computation. We now have enough background to embark on the proof of the above theorem.\n\\fi\n\\begin{proof}Since this is a local problem, we can work in a trivialising chart in $\\mathbb{R}^4$ in which the action is linear. Let $B^{\\bot_\\omega}$ be the symplectic orthogonal to $B$. We can assume the image of $B$ to be the two-plane in $\\mathbb{R}^4$ given by $(0,0,x,y)$, and $B^{\\bot_\\omega}$ to be given by the plane $(x,y,0,0)$. As $\\overline{A}$ is $\\omega$-orthogonal to $B$, we can choose the neighbourhood such that $\\overline{A} = B^{\\bot_\\omega}$ near $p$. 
As $A$ is transverse to $B$ at $p$, we can assume its image is given by the graph of a function (which we also call $A$) $A:=(f,g): \\mathbb{R}^2 \\rightarrow \\mathbb{R}^2$.\n\\\\\n\nNext we observe that given a function $A:=(f,g): \\mathbb{R}^2 \\rightarrow \\mathbb{R}^2$, the graph of $A$ is a symplectic (for the standard form) submanifold of $\\mathbb{R}^4$ iff $\\left\\{f,g\\right\\} > -1$. This can be proven by a direct computation. We will construct an isotopy of graphs of functions of the form $A_t:= \\alpha_t(r^2) A$, where $\\alpha_t$ is a bump function depending only on the radius squared (for a fixed $G$-invariant metric) in $\\mathbb{R}^2$, and such that \\begin{itemize}\n \\item $A_0 = A$,\n \\item $A_1 = 0$ near (0,0),\n \\item $A_t = A$ outside of some neighbourhood of the origin,\n \\item $A_t$ is symplectic for all $t$.\n\\end{itemize} \nNote that as $\\alpha_t$ depends only on the radius for a fixed $G$-invariant metric, $A_t$ is also $G$-invariant.\n\\\\\n\n\nDefine $E = g\\left\\{f,r^2\\right\\} + f\\left\\{r^2,g\\right\\}$. Using the fact that $r^2(0,0) = 0$ and $(r^2)^\\prime (0,0) = 0$, we see that $E(0,0) = 0$ and $\\frac{\\partial}{\\partial r}E(0,0) = 0$. Hence there exist $c > 0$, $\\epsilon > 0$ and $u > 0$ such that on the ball of radius $u + \\epsilon$ around the origin $B(0,u+\\epsilon)$ we have $E(x) \\geq -c r^2(x)$. Choose $\\delta$ such that on $B(0,u+\\epsilon)$, $1 + \\left\\{f,g\\right\\} > \\delta > 0$.\n\\\\\n\n\n\nPick $\\alpha: \\mathbb{R} \\rightarrow \\mathbb{R}$ satisfying the following properties:\n\n\n\n \n \\begin{itemize}\n \\item $\\alpha(r^2) = 1$ for $r^2 \\geq u$.\n \\item $\\alpha(r^2) = 0$ for $r^2$ near $0$.\n \n \\item $\\alpha^\\prime(r^2) \\leq \\frac{\\delta}{2cr^2} < \\frac{1 + \\{f,g\\}}{2cr^2}$ \n \\end{itemize} \n \n\nDefine $\\alpha_t:= (1-t) + t \\alpha(r^2)$ and $A_t := \\alpha_t A$. 
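For the reader's convenience, we record the two direct computations behind these observations (here $(x,y)$ denotes our choice of coordinates on $B^{\\bot_\\omega}$). Writing $\\omega_{0} = dx_{1}\\wedge dy_{1} + dx_{2}\\wedge dy_{2}$ and parametrising the graph of $A=(f,g)$ over $B^{\\bot_\\omega}$ by $\\Phi(x,y)=(x,y,f(x,y),g(x,y))$, we get\n\\[\\Phi^{*}\\omega_{0} = dx\\wedge dy + df\\wedge dg = \\big(1+\\left\\{f,g\\right\\}\\big)\\,dx\\wedge dy\\]\nso the graph is symplectic precisely when $1+\\left\\{f,g\\right\\} > 0$. Similarly, as $\\alpha_{t}=\\alpha_{t}(r^{2})$, the Leibniz rule for the Poisson bracket gives\n\\[\\left\\{\\alpha_{t}f,\\alpha_{t}g\\right\\} = \\alpha_{t}^{2}\\left\\{f,g\\right\\} + \\alpha_{t}\\alpha_{t}^{\\prime}\\big(g\\left\\{f,r^{2}\\right\\}+f\\left\\{r^{2},g\\right\\}\\big) = \\alpha_{t}^{2}\\left\\{f,g\\right\\}+\\alpha_{t}\\alpha_{t}^{\\prime}E\\]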
To show that $A_t$ is symplectic for all $0 \\leq t \\leq 1$ we need to check that $\\left\\{\\alpha_t f,\\alpha_t g\\right\\} > -1$ for all $0 \\leq t \\leq 1$. \nIn the neighbourhood $B(u)$ we have\n\\begin{equation*}\n 1 + \\left\\{\\alpha_t f,\\alpha_t g\\right\\} = \\underbrace{1 + \\alpha_t^2 \\{f,g\\}}_{\\geq \\delta} + \\underbrace{\\alpha_t\\alpha_t^\\prime E}_{\\geq \\frac{-\\delta}{2}} \\geq \\frac{\\delta}{2} > 0\n\\end{equation*}\n\nThe inequality $1 + \\alpha_t^2 \\{f,g\\} \\geq \\delta$ follows from the definition of $\\delta$ and from noting that $0 \\leq \\alpha_t \\leq 1$. $\\alpha_t\\alpha_t^\\prime E \\geq \\frac{-\\delta}{2}$ follows from the inequality \n\n\\begin{align*}\n \\alpha_t\\alpha_t^\\prime E &\\geq \\alpha_t\\alpha_t^\\prime (-cr^2) \\\\\n &\\geq -\\alpha_t\\frac{\\delta}{2cr^2} (cr^2) \\\\\n &\\geq -\\alpha_t \\frac{\\delta}{2} \\\\\n &\\geq \\frac{-\\delta}{2}\n\\end{align*}\n\nThus in the neighbourhood $B(u)$ we have the inequality $1 + \\left\\{\\alpha_t f,\\alpha_t g\\right\\} > 0$ for all $t$. Outside of $B(u)$, the derivative $\\alpha_t^\\prime$ is identically 0 and $\\alpha_t \\equiv 1$. Hence $\\alpha_t\\alpha_t^\\prime E \\equiv 0$ outside $B(u)$ and $1 + \\left\\{\\alpha_t f,\\alpha_t g\\right\\} = 1 + \\alpha_t^2 \\{f,g\\} + \\cancelto{0}{\\alpha_t\\alpha_t^\\prime E} = 1 + \\{f,g\\} > 0$ outside of $B(u)$.\n\\\\\n\nFinally we note that $A_1 =0$ in a neighbourhood of $(0,0)$ and it equals $A$ outside the ball of radius $u$ around the origin, thus proving the claim.\n\\end{proof}\n\n\n\n\\chapter[Equivariant Differential Topology]{Equivariant versions of classical results from Differential Topology}\n\n\\begin{comment}\n\n\n\\begin{lemma}[Relative Poincare Lemma](see~\\cite{KM}, Lemma~43.10)\\label{RelativePoincareLemma} Let $M$ be a smooth finite dimensional manifold and let $S\\subset M$ be a closed submanifold. Let $\\omega$ be a closed $(k+1)$-form on $M$ which vanishes on $S$. 
Then there exists a $k$-form $\\sigma$ on an open neighborhood $U$ of $S$ in $M$ such that $d\\sigma=\\omega$ on $U$ and $\\sigma=0$ along $S$. If moreover $\\omega=0$ along $S$, then we may choose $\\sigma$ such that the first derivatives of $\\sigma$ vanish on $S$.\n\\end{lemma}\n\n\n\n\\begin{proof}\nBy restricting to a tubular neighborhood of $S$ in $M$, we may assume that\n$M$ is a smooth vector bundle $p:E\\to S$ and that $i:S\\to E$ is the zero section. We consider $\\mu:\\mathbb{R}\\times E\\to E$, given by $\\mu(t,x)=\\mu_{t}(x)=tx$, then\n$\\mu_{1}=\\id_{E}$ and $\\mu_{0}=i\\circ p:E\\to S\\to E$. Let $V\\in\\mathfrak{X}(E)$ be the vertical vector field $V(x)=vl(x,x)= \\frac{d}{dt}(x+tx)$ whose flow is $\\text{Fl}_{t}^{V}=\\mu_{e^{t}}$. Locally, for $t$ in $(0,1]$ we have\n\\[\\frac{d}{dt}\\mu_{t}^{*}\\omega = \\frac{d}{dt}(\\text{Fl}_{\\log t}^{V})^{*}\\omega =\\frac{1}{t}(\\text{Fl}_{\\log t}^{V})^{*}\\mathcal{L}_{V}\\omega = \\frac{1}{t}\\mu_{t}^{*}(i_{V}d\\omega+di_{V}\\omega)=\\frac{1}{t}d\\mu_{t}^{*}i_{V}\\omega\\]\nFor $x\\in E$ and $X_{1},\\ldots,X_{k}\\in T_{x}E$ we have\n\\begin{align*}\n(\\frac{1}{t}\\mu_{t}^{*}i_{V}\\omega)_{x}(X_{1},\\ldots,X_{k}) &= \\frac{1}{t}(i_{V}\\omega_{tx}(T_{x}\\mu_{t}\\cdot X_{1},\\ldots,T_{x}\\mu_{t}\\cdot X_{k})\\\\\n&= \\frac{1}{t}\\omega_{tx}(V(tx),T_{x}\\mu_{t}\\cdot X_{1},\\ldots,T_{x}\\mu_{t}\\cdot X_{k})\\\\\n&= \\omega_{tx}(vl(tx, tx), T_{x}\\mu_{t}\\cdot X_{1},\\ldots,T_{x}\\mu_{t}\\cdot X_{k})\n\\end{align*}\nSo the $k$-form $\\frac{1}{t}\\mu_{t}^{*}i_{V}\\omega$ is defined and smooth in $(t,x)$ for all $t\\in[0,1]$ and describes a smooth curve in $\\Omega^{k}(E)$. Note that for $x\\in S = 0_{E}$ we have $\\frac{1}{t}\\mu_{t}^{*}i_{V}\\omega=0$, and if $\\omega = 0$ on $T_{S}M$, we also have $0=\\frac{d}{dt}\\mu_{t}^{*}\\omega=\\frac{1}{t}d\\mu_{t}^{*}i_{V}\\omega$, so that all first derivatives of $\\frac{1}{t}\\mu_{t}^{*}i_{V}\\omega$ vanish along $S$. 
\nSince $\\mu_{0}^{*}\\omega = p^{*}i^{*}\\omega = 0$ and $\\mu_{1}^{*}\\omega=\\omega$, we have\n\\begin{align*}\n\\omega \n&=\\mu_{1}^{*}\\omega-\\mu_{0}^{*}\\omega\\\\\n&=\\int_{0}^{1}\\frac{d}{dt}\\mu_{t}^{*}\\omega\\,dt\\\\\n&=\\int_{0}^{1}d(\\frac{1}{t}\\mu_{t}^{*}i_{V}\\omega)\\,dt\\\\\n&=d\\left(\\int_{0}^{1}(\\frac{1}{t}\\mu_{t}^{*}i_{V}\\omega)\\,dt\\right)\\\\\t\n&=d\\sigma\n\\end{align*}\nIf $x\\in S$, we have $\\sigma= 0$, and all first derivatives of $\\sigma$ vanish along $S$ whenever $\\omega = 0$ on $T_{S}M$.\n\\end{proof}\n\n\\begin{remark} \nIf there is a symplectic action of a compact group $G$ acting on $M$ such that $\\omega$ is $G$ invariant and $S$ is $G$-invariant, then we can constuct $\\sigma$ as above such that in addition to the above conditions $\\sigma$ also is $G$-invariant. This is gotten by noting that \n\\begin{align*}\n \\omega = \\int_G \\omega = \\int_G d\\sigma = d \\int_G \\sigma\n\\end{align*}\n\nLet $\\tilde\\sigma:= \\int_G \\sigma$ and hence $d\\tilde\\sigma = \\omega$ and $\\tilde\\sigma$ satisfies all the conditions.\n\\end{remark}\n\n\\begin{lemma}[Moser isotopy] Let $(M,\\omega)$ be a symplectic manifold and let $S\\subset M$ be a submanifold. Suppose that $\\omega_{i}$, $i=0,1$, are closed $2$-forms such that at each point $x\\in S$, the forms $\\omega_{0}$ and $\\omega_{1}$ are equal and non-degenerate on $T_{x}S$. Then there exist open neighborhoods $N_{0}$ and $N_{1}$ of $S$ and a diffeomorphism $\\phi:N_{0}\\to N_{1}$ such that $\\phi^{*}\\omega_{1}=\\omega_{0}$, $\\phi|_{S}=\\id$, and $d\\phi|_{S}=\\id$.\n\\end{lemma}\n\n\n\n\\begin{proof}\nConsider the convex linear combination $\\omega_{t}=\\omega_{0}+t(\\omega_{1}-\\omega_{0})$. Since $\\omega_{0}$ and $\\omega_{1}$ are equal along $S$, there exists a neighborhood $U_{1}$ of $S$ on which $\\omega_{t}$ is non-degenerate for all $t\\in[0,1]$. 
By restricting $U_{1}$ to a possibly smaller neighborhood $U_{2}$, the Relative Poincar\u00e9 Lemma~\\ref{RelativePoincareLemma} implies that there exists a $1$-form $\\sigma$ such that $d\\sigma=(\\omega_{1}-\\omega_{0})$, $\\sigma=0$ on $S$, and all first derivatives of $\\sigma$ vanish along $S$. Define the time-dependent vector field $X_{t}$ on $U_{2}$ by setting\n\\[\\sigma = -i_{X_{t}}\\omega_{t}\\]\nSince $X_{t}=0$ on $S$, by restricting $U_{2}$ to a smaller neighborhood $U_{3}$, we can ensure that the flow $\\psi_{t}$ of $X_{t}$ exists for $t\\in[0,1]$. We then have\n\\[\\frac{d}{dt}\\psi_{t}^{*}\\omega_{t} = \\psi_{t}^{*}\\left(\\frac{d}{dt}\\omega_{t}+\\mathcal{L}_{X_{t}}\\omega_{t}\\right) = \\psi_{t}^{*}\\left(\\frac{d}{dt}\\omega_{t}+di_{X_{t}}\\omega_{t}\\right) = \\psi^{*}(\\omega_{1}-\\omega_{0}-d\\sigma)=0\\]\nso that $\\psi^{*}\\omega_{t}=\\omega_{0}$. Finally, since $\\sigma=0$ on $S$, $\\psi=\\id$ on $S$, and since all first derivatives of $\\sigma$ vanish on $T_{S}M$, $d\\psi=\\id$ on $T_{S}M$.\n\\end{proof}\n\n\n\\begin{remark}\nAs the remark above, when both $\\omega_1$ and $\\omega_2$ are both invariant under a compact group action $G$, and $S$ is an $G$-invariant submanifold, then there is a $G$-equivariant diffeomorphism $\\phi$ that satisfies the conditions as above. \n\\end{remark}\n\\end{comment}\n\n\\begin{lemma}[Equivariant symplectic neighborhood theorem]\\label{EqSymN} \nLet $G$ be a compact group, and let $(M_{i},\\omega_{i})$, $i=0,1$, be two symplectic $G$-manifolds. Let $S_{i}\\subset M_{i}$ be two invariant symplectic submanifolds with invariant symplectic normal bundles $N_{i}$. Suppose that there is an equivariant isomorphism $A:N_{0}\\to N_{1}$ covering an equivariant symplectomorphism $\\phi:S_{0}\\to S_{1}$. 
Then $\\phi$ extends to an equivariant symplectomorphism of neighborhoods $\\Phi:U_{0}\\to U_{1}$ whose derivative along $S_{0}$ is equal to $A$.\n\\end{lemma}\n\\begin{proof}\nWe can extend the isomorphism $A$ to a diffeomorphism of neighborhoods $\\psi:U_{0}\\to U_{1}$ by setting\n\\[\\psi = \\exp\\circ A\\circ \\exp^{-1}\\]\nBy construction, $d\\psi = A$ along $S_{0}$, so that $\\omega_{0}$ and $\\psi^{*}\\omega_{1}$ coincide along $S_{0}$. Applying the $G$-equivariant Moser isotopy lemma gives the result.\n\\end{proof}\n\nLet $(M,\\omega)$ be a symplectic manifold. Let $G$ be a compact Lie group acting symplectically on $M$. Let $S$ be an invariant submanifold under the $G$ action. Let $\\Op(S)$ be an invariant open neighbourhood of $S$. Further define\n\n\\[\\Symp^G_{\\id,N}(M,S)=\\{\\phi\\in\\Symp^G_{0}(M)~|~\\phi|_{S}=\\id,~d\\phi|_{T_{S}M}=\\id\\}\\]\n\\[\\Symp^G_{\\id,\\Op(S)}(M,S)=\\{\\phi\\in\\Symp^G_{0}(M)~|~\\phi=\\id~\\text{near}~S\\}\\]\n\nWe would like to show that $\\Symp^G_{\\id,N}(M,S) =\\Symp^G_{\\id,\\Op(S)}(M,S) $. Before doing so, we need the following lemmas.\n\nFollowing~\\cite{Hi}, we define an invariant tubular neighborhood of an invariant submanifold $\\iota:S\\hookrightarrow M$ as a smooth equivariant embedding $f:E\\hookrightarrow M$ of a vector bundle $\\pi:E\\to S$ such that \n\\begin{enumerate}\n\\item $f|_{S}=\\iota$ after identifying $S$ with the zero section of $\\pi:E\\to S$.\n\\item $f(E)$ is an open neighborhood of $S$.\n\\end{enumerate}\nIn practice, it is often enough to work with the normal bundle $N\\subset T_{S}M$ defined as the orthogonal complement of $TS$ in $T_{S}M$ relative to an equivariant Riemannian structure. (See \\cite{Bredon} p. 
306 for existence of such an invariant tubular neighbourhood.)\n\n\\begin{lemma}[Unicity of tubular neighborhoods]\\label{UnicityTubularNeighborhoods} (See~\\cite{Hi}, Theorem 4.5.3) Let $M$ be a $G$-manifold, let $\\iota:S\\hookrightarrow M$ be an invariant submanifold with normal bundle $N$. Then, \n\\begin{enumerate}\n\\item given any two invariant tubular neighborhoods $f_{i}:N_i \\hookrightarrow M$, $i=0,1$, there is an equivariant gauge transformation $A\\in\\mathcal{G}(N)$ such that $f_{0}$ and $f_{1}\\circ A$ are equivariantly isotopic relative to $S$. \n\\item The space $\\mathcal{T}_{S}$ of all invariant tubular neighborhoods $f:N\\hookrightarrow M$ is homotopy equivalent to the group of equivariant gauge transformations $\\mathcal{G}(N)$.\n\\item The space $\\mathcal{T}_{S,d\\iota}$ of invariant tubular neighborhoods $f:N\\hookrightarrow M$ such that $df|_{S}=d\\iota$ is contractible.\n\\end{enumerate}\n\\end{lemma}\n\\begin{proof}\n(1) We construct an equivariant isotopy $F_{t}$ in two steps. Firstly, given a $G$-invariant smooth function $\\delta:S\\to(0,1]$, let $U_{\\delta}\\subset N$ be the invariant disc bundle\n\\[U_{\\delta}=\\{x\\in N~|~|x|<\\delta(\\pi(x))\\}\\]\nLet $k$ be the rank of the bundle and let $C(G)$ denote the centraliser of $G$ in $\\SO(k)$. Note that $N$ smoothly retracts onto $U_{\\delta}$ through embeddings $G_{t}:N\\to N$ of the form\n\\[G_{t} = (1-t)\\id + th_{\\delta(x)}\\]\nwhere $h_{r}:\\mathbb{R}^{k}\\to D^{k}(r)$ is an equivariant one-parameter family of contracting, $C(G) \\subset \\SO(k)$-invariant diffeomorphisms, restricting to the identity on $D^{k}(r\/2)$ and varying smoothly with $r$. 
Then, choosing an appropriate invariant function $\\delta$, and composing $f_{1}$ with $G_{t}$, we can isotope $f_{1}$ to an embedding $f_{\\delta}=f_{1}G_{1}$ satisfying\n\\begin{equation}\\label{inclusion-assumption}\nf_{\\delta}(N)\\subset f_{0}(N)~\\text{and}~f_{\\delta}=f_{1}~\\text{on}~U_{\\delta\/2}\n\\end{equation}\nso that the map $g=f_{0}^{-1}f_{\\delta}:N\\to N$ is well-defined. Secondly, observe that the map $g$ is equivariantly isotopic to its vertical derivative $A_{f_{\\delta}}=dg^{\\text{vert}}\\in\\mathcal{G}(N)$ along $S$ via the canonical smooth isotopy\n\\[H_{0}(x)=A_{f_{\\delta}}(x),\\quad H_{t}(x)=g(tx)\/t,~0<t\\le 1\\]\nChoose $\\epsilon>0$ so that the $\\epsilon$-disk subbundle $V_{\\epsilon}\\subset N$ is symplectomorphic to a tubular neighborhood $U$ of~$S$. Let $\\iota$ denote both inclusions $U\\hookrightarrow M$ and $V\\hookrightarrow N$.\n\\\\\n\nLet $\\Omega_{S}^{\\mathrm{loc},G}$ be the space of germs of $G$-invariant symplectic forms defined near $S$ and agreeing with $\\omega$ along $T_{S}M$. Given any two germs $[\\omega_{0}]$ and $[\\omega_{1}]$, their linear convex combination $\\omega_{t}=(1-t)\\omega_{0}+t\\omega_{1}$ is non-degenerate in some neighborhood of $S$. Consequently, $\\Omega_{S}^{\\mathrm{loc},G}$ is convex, hence contractible. By the equivariant symplectic neighborhood theorem (Lemma~\\ref{EqSymN}), the group $\\mathcal{G}_{S,\\omega}$ acts transitively on $\\Omega_{S}^{\\mathrm{loc},G}$, giving rise to a fibration\n\\[\\mathcal{G}_{S,\\omega}^{\\mathrm{loc},G}\\stackrel{\\simeq}{\\to}\\mathcal{G}_{S,\\omega}\\to\\Omega_{S}^{\\mathrm{loc},G}\\]\nwhose fiber $\\mathcal{G}_{S,\\omega}^{\\mathrm{loc},G}$ is the group of germs of equivariant diffeomorphisms that are symplectic near $S$. This space is homeomorphic to the space $\\mathcal{E}_{S,\\omega}$ of germs along $S$ of equivariant symplectic embeddings $f:\\Op(S)\\to M$ such that $f|_{S}=\\iota$ and $df|_{S}=d\\iota$. 
By Lemma~\\ref{UnicityTubularNeighborhoods} (3), we know that $\\mathcal{E}_{S,\\omega}$ is contractible, so that $\\mathcal{G}_{S,\\omega}$ and $\\mathcal{G}_{S,\\omega}^{\\mathrm{loc},G}$ are also contractible, thus completing the proof. \n\\end{proof}\n\n\\begin{lemma}\\label{Au} Let $G$ be a compact group acting symplectically on a compact manifold $(M,\\omega)$. Let $W_t$ be a smooth $k$-parameter family of symplectic submanifolds ($t \\in [0,1]^k$), which are invariant under the $G$ action. Then there exists a $k$-parameter family of equivariant Hamiltonian symplectomorphisms $\\phi_{t}: M \\rightarrow M$ such that $\\phi_t(W_0) = W_t$.\n\\end{lemma}\n\n\\begin{proof}\nThe proof follows by mimicking the proof of Proposition 4 in \\cite{Auroux} in the presence of a group action.\n\\end{proof}\n\nFinally, we present the following theorem due to Palais, which we repeatedly use in Chapters 3 and 6 to justify that our maps between infinite-dimensional spaces are fibrations. \n\nLet $X$ be a topological space with an action of a topological group $G$. We say $X$ admits local cross sections at $x_0 \\in X$ if there is a neighbourhood $U$ containing $x_0$ and a map $\\chi: U \\rightarrow G$ such that $\\chi(u) \\cdot x_0 = u$ for all $u \\in U$. We say $X$ admits local cross sections if this is true for all $x_0 \\in X$. \n\n\\begin{thm}[Palais]\\label{palais}\nLet $X$ and $Y$ be topological spaces with an action of a topological group $G$, and let the $G$ action on $X$ admit local cross sections. Then any equivariant map $f$ from $Y$ to $X$ is locally trivial.\n\\end{thm}\n\n\\begin{proof}\nSuppose for every point $x_0 \\in X$ there is a local cross section $\\chi: U \\rightarrow G$, where $U$ is an open neighbourhood of $x_0$. Then we define a local trivialisation of $f$ as follows. 
\n\n\\begin{align*}\n \\rho: U \\times f^{-1}(x_0) &\\rightarrow f^{-1}(U) \\\\\n (u, \\gamma) &\\mapsto \\chi(u) \\cdot \\gamma\n\\end{align*}\n\nAs $f$ is equivariant, we indeed have $f(\\rho(u,\\gamma)) = f( \\chi(u) \\cdot \\gamma) = \\chi(u) \\cdot f(\\gamma) = \\chi(u) \\cdot x_0 = u$, where the last equality follows from the definition of a local cross section. Thus $\\rho$ maps $U \\times f^{-1}(x_0) $ into $ f^{-1}(U)$.\\\\\n\nConversely, there is a map \n\n\\begin{align*}\n \\beta: f^{-1}(U) &\\rightarrow U \\times f^{-1}(x_0) \\\\\n y &\\mapsto \\left(f(y), {\\chi(f(y))}^{-1} \\cdot y\\right)\n\\end{align*}\n\nWe can indeed check that the two maps are inverses of each other. \n\n$\\beta \\circ \\rho (u, \\gamma) = \\beta(\\chi(u) \\cdot \\gamma) = \\left(u, {\\chi(\\chi(u) \\cdot f(\\gamma))}^{-1} \\cdot \\chi(u) \\cdot \\gamma \\right) = (u, {\\chi(u)}^{-1} \\chi(u) \\cdot \\gamma) = (u,\\gamma)$, where we used $f(\\chi(u)\\cdot\\gamma)=\\chi(u)\\cdot f(\\gamma)=\\chi(u)\\cdot x_0=u$.\n\nSimilarly we can check that $\\rho \\circ \\beta = \\id$, thus completing the proof. \n\\end{proof}\n\n\\chapter{Alexander-Eells isomorphism}\\label{Appendix-Alexander-Eells}\n\n\nIn this appendix, we prove the Alexander-Eells isomorphism used in Section~\\ref{subsection:CohomologyModule}. We first recall an isomorphism between the homology of a submanifold $Y\\subset X$ and the homology of its complement $X-Y$ that is reminiscent of the Alexander-Pontryagin duality in the category of oriented, finite dimensional manifolds. This isomorphism, due to J. Eells, exists whenever the submanifold $Y$ is co-oriented and holds, in particular, for infinite dimensional Fr\u00e9chet manifolds. We then give a geometric realization of this isomorphism in the special case where $Y$ and $X-Y$ are orbits of a continuous action $G\\times X\\to X$ satisfying some mild assumptions. We closely follow Eells~\\cite{Eells} p. 125--126.\\\\\n\nLet $X$ be a manifold, possibly infinite dimensional, and let $Y$ be a co-oriented submanifold of positive codimension $p$. 
As explained in~\\cite{Eells}, there exists an isomorphism of singular cohomology groups\n\\[\\phi:H^{i}(Y)\\to H^{i+p}(X, X-Y)\\]\ncalled the Alexander-Eells isomorphism. We define the fundamental class (Thom class) of the pair $(X,Y)$ as $u=\\phi(1)\\in H^{p}(X, X-Y)$.\n\\begin{prop}[Eells, p. 113]\nThe pairing\n\\[\\begin{aligned}\nH^{*}(Y)\\otimes H^{*}(X,X-Y)&\\to H^{*}(X,X-Y)\\\\\ny\\otimes x &\\mapsto y\\cup x\n\\end{aligned}\\]\nmakes $H^{*}(X,X-Y)$ into a free $H^{*}(Y)$-module of rank one, generated by $u$.\n\\end{prop}\n\n \n\nLet $\\phi_{*}:H_{i+p}(X,X-Y)\\to H_{i}(Y)$ be the dual of the Alexander-Eells isomorphism $\\phi^{*}=\\phi$. By definition, we have\n\\[\\phi_{*}(a)=u\\cap a\\]\n\nSuppose a topological group $G$ acts continuously on $X$ (on the left), leaving $Y$ invariant, and in such a way that both $X-Y$ and $Y$ are homotopy equivalent to orbits. We have continuous maps $\\mu:G\\times (X,X-Y)\\to (X,X-Y)$, ~$\\mu:G\\times (X-Y)\\to (X-Y)$, and $\\mu:G\\times Y\\to Y$ inducing $H_{*}(G)$-module structures on $H_{*}(X,X-Y)$, $H_{*}(X-Y)$ and $H_{*}(Y)$. 
We write $\\mu_{*}(c\\otimes a)=c*a$ for the action of $c\\in H_{i}(G)$.\n\n\\begin{lemma}\\label{lemma:AlexanderEellsGmodule}\nIn this situation, the Alexander-Eells isomorphism preserves the $H_{*}(G)$-module structure, that is, the following diagram is commutative:\n\\[\n\\begin{tikzcd}\nH_{*}(G)\\otimes H_{*}(X,X-Y) \\arrow{r}{\\mu_{*}} \\arrow[swap]{d}{1\\otimes\\phi_{*}} & H_{*}(X,X-Y) \\arrow{d}{\\phi_{*}} \\\\\nH_{*}(G)\\otimes H_{*}(Y) \\arrow{r}{\\mu_{*}} & H_{*}(Y)\n\\end{tikzcd}\n\\]\nThus for any $a\\in H_{i+p}(X,X-Y)$, $c\\in H_{i}(G)$, we have $\\phi_{*}(c*a) = c*\\phi_{*}(a)$.\n\\end{lemma}\n\\begin{proof}\nWe first note that if $u$ is the fundamental class of the pair $(X,Y)$, then $\\mu^{*}(u)=1\\otimes u\\in H^{0}(G)\\otimes H^{p}(X,X-Y)$, because $H^{i}(X,X-Y)=0$ for all $i<p$.\n\nEvents used in this analysis were selected by a jet trigger with a threshold of $\\pt > 80$\\GeVc , where the jet $\\pt$ value is\ncorrected for the $\\pt$-dependent calorimeter energy response.\nThe trigger efficiency is defined as the fraction\nof triggered events out of a sample of minimum bias events (described below)\nin bins of offline reconstructed leading-jet $\\pt$.\nThe trigger becomes fully efficient for collisions with a leading particle-flow jet\nwith corrected $\\pt$ greater than 100\\GeVc.\n\nIn addition to the jet data sample, a minimum bias event sample was collected using\ncoincidences between the trigger signals from both the $+z$ and $-z$ sides of either the BSC\nor the HF, which was pre-scaled to record only about 0.1--0.2\\% of the collisions delivered by the LHC.\nIn order to suppress non-collision-related noise, cosmic-ray muons,\nout-of-time triggers, and beam backgrounds, the minimum bias and jet triggers used in this analysis\nwere required to arrive in time with the presence of both colliding ion bunches in the interaction region.\nThe events selected by the jet trigger described above also satisfy all triggers and selections\nimposed for minimum bias events.\n\n\\subsection{Event selection and centrality 
determination}\n\\label{sec:event_selection}\nA sample of inelastic hadronic collisions is selected offline from the triggered events. Contamination from beam-halo events is removed based upon the timing of the $+z$ and $-z$ BSC signals. A reconstructed primary collision vertex based on at least two tracks with transverse momenta above $75$~\\MeVc is required. This requirement removes other beam-related background events (e.g., beam-gas, ultraperipheral collisions) with large HF energy deposits but very few pixel detector hits. The vertex is required to be compatible with the length of the pixel clusters reconstructed in the event, following the standard method used in CMS~\\cite{Khachatryan:2010us}. Finally, an offline HF coincidence is applied, which requires at least three towers on each side of the interaction point in the HF with at least 3~GeV total deposited energy per tower. This event selection, including the minimum bias trigger, has an efficiency of 97\\% with an uncertainty of 3\\% for hadronic inelastic PbPb collisions. This efficiency is\ntaken into account in the centrality determination, and the uncertainty of the efficiency has a negligible effect\non the results of this study.\n\nTable~\\ref{evselcuts} shows the number of events remaining after the various selection criteria are applied. Events with a jet trigger of $\\pt > 80$\\GeVc are selected, followed by the offline event selection for inelastic hadronic collisions (described above). Prior to jet finding on\nthe selected events, a small contamination of\nnoise events from the electromagnetic calorimeter and hadron calorimeter is removed using signal\ntiming, energy distribution, and pulse-shape information \\cite{ref:EGM-10-002,Chatrchyan:2009hy}. 
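The offline requirements just described (the primary-vertex track count and the HF tower coincidence) are simple counting cuts; the sketch below illustrates the logic with thresholds taken from the text. The `Event` record and its field names are hypothetical illustrations, not the CMS event format.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Event:
    # Hypothetical flat event record; not the CMS data format.
    vertex_track_pts: List[float]   # pT (GeV/c) of tracks entering the vertex fit
    hf_plus_energies: List[float]   # per-tower energy (GeV) in HF, +z side
    hf_minus_energies: List[float]  # per-tower energy (GeV) in HF, -z side

def passes_offline_selection(ev: Event) -> bool:
    # Primary vertex: at least two tracks with pT above 75 MeV/c
    if sum(pt > 0.075 for pt in ev.vertex_track_pts) < 2:
        return False
    # HF coincidence: at least three towers with >= 3 GeV on EACH side of the IP
    for towers in (ev.hf_plus_energies, ev.hf_minus_energies):
        if sum(e >= 3.0 for e in towers) < 3:
            return False
    return True
```

Both cuts must pass; an event failing either side of the HF coincidence is rejected regardless of its vertex quality.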
The leading and subleading jets are determined among the jets with pseudorapidity $|\\eta| < 2$, which are reconstructed as described in Section~\\ref{sec:jet_reconstruction}.\nEvents are then selected if the corrected jet \\pt is larger than $120$\\GeVc (corrected\nfor the $\\pt$- and $\\eta$-dependent detector energy response). The subleading jet in the\nevent is required to have a corrected jet $\\pt > 30$\\GeVc.\nThe azimuthal angle between the leading and the subleading jets is required to be at least $2\\pi\/3$.\nFurther jets found in the event, beyond the leading and the subleading ones, are not considered in this analysis.\nIn order to remove events with residual HCAL noise that are missed by the noise-rejection algorithms,\neither the\nleading or subleading jet is required to have at least one track of $\\pt > 4$\\GeVc. For high-\\pt jet events\nthis selection does not introduce any significant bias on the sample and removes only 2\\% of the\nselected dijet events.\n\nThe centrality of the collisions is represented by the number of participating nucleons (\\npart) in a collision, which is correlated with the total transverse energy measured in HF. The minimum bias event sample is divided into constant fractions of total inelastic cross section and for each fraction the average value of \\npart\\ is determined using a Glauber calculation~\\cite{Miller:2007ri}. The dispersion of the \\npart\\ values due to reconstruction effects is based on {\\GEANTfour} simulations of events generated with a multi-phase transport {\\textsc{ampt}} simulation~\\cite{Lin:2004en}.\n\n\n\\begin{table*}[htbp]\n\\begin{center}\n\\caption{The effects of various selections applied to the data sample. In the third column, the\nfractional values are with respect to the line above and in the fourth column they are with respect to the triggered sample. 
The selections are applied in\nsequence.}\n\\label{evselcuts} \\begin{tabular}{|l|r|r|r|}\n\\hline\nSelections & Events remaining & \\% of previous & \\% of triggered \\\\\n\\hline\nJet triggered events ($\\pt^{\\text{corr}}>80$\\GeVc) & 369\\,938 & 100.00 & 100.00 \\\\\nOffline collision selection & 310\\,792 & 84.01 & 84.01 \\\\\nHCAL and ECAL noise rejection & 308\\,453 & 99.25 & 83.38 \\\\\nLeading jet $\\ptlead>120$\\GeVc & 55\\,911 & 18.13 & 15.11 \\\\\nSubleading jet $\\ptsub>30$\\GeVc & 52\\,694 & 94.25 & 14.24 \\\\\n\\dphi $>2\\pi\/3$ & 49\\,993 & 94.87 & 13.51 \\\\\nTrack within a jet & 49\\,054 & 98.12 & 13.26 \\\\\n\\hline\n\\end{tabular}\n\\end{center}\n\\end{table*}\n\n\n\\subsection{Simulated data samples}\n\\label{sec:pythia_samples}\n\n\nIn PbPb collisions there is a high multiplicity of soft particles produced, the PbPb underlying event. It is essential to understand how the jet reconstruction is modified in PbPb collisions at different centralities. This is studied with simulations of dijet events in pp collisions with the \\PYTHIA event generator (version 6.423, tune Z2)~\\cite{bib_pythia},\nmodified for the isospin content of the colliding nuclei. A minimum hard-interaction scale ($\\hat{p}_\\mathrm{T}$) selection of 80\\GeVc\\ is used to increase the number of dijet events produced in the momentum range studied. \\PYTHIA simulations at lower $\\hat{p}_\\mathrm{T}$ (discussed in~\\cite{Cacciari:2011tm}) are also investigated and found to agree with the $\\hat{p}_\\mathrm{T} > 80$\\GeVc\\ results within the uncertainties. 
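As a cross-check, the two fractional columns of Table~\ref{evselcuts} follow from the raw event counts by simple division (counts copied from the table; Python used purely for the arithmetic):

```python
# Event counts after each successive selection (from Table "evselcuts")
counts = [369938, 310792, 308453, 55911, 52694, 49993, 49054]

# "% of previous": each count relative to the preceding selection stage
pct_previous = [100.0] + [round(100 * b / a, 2) for a, b in zip(counts, counts[1:])]
# "% of triggered": each count relative to the jet-triggered sample
pct_triggered = [round(100 * n / counts[0], 2) for n in counts]

assert pct_previous == [100.0, 84.01, 99.25, 18.13, 94.25, 94.87, 98.12]
assert pct_triggered == [100.0, 84.01, 83.38, 15.11, 14.24, 13.51, 13.26]
```

The agreement with the quoted percentages confirms that each column is a straightforward ratio rounded to two decimals.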
To model the PbPb background, minimum bias \\PbPb\\ events are simulated with the \\HYDJET event generator~\\cite{Lokhtin:2005px}, version 1.8. The sample of \\PYTHIA dijet events embedded into \\HYDJET events is denoted \\PYTHYD in this paper.\nThe parameters of \\HYDJET are tuned to reproduce the total particle multiplicities, charged hadron spectra, and elliptic flow at all centralities, and to approximate\nthe underlying event fluctuations seen in data, with differences within the underlying event systematic uncertainty.\n\nThe full detector simulation and analysis chain is used to process both \\PYTHIA dijet events and \\PYTHIA dijet events embedded into \\HYDJET events. The reconstruction of particle flow jets is studied by using the \\PYTHIA generator jet information in comparison to the same fully reconstructed jet in \\PYTHYD, matched in momentum space. The effects of the PbPb underlying event on jet \\pt\\ and position resolution, jet \\pt\\ scale, and jet-finding efficiency are determined as a function of collision centrality and jet \\pt. These effects do not require corrections to the results but contribute to the systematic uncertainties.\n\\section{Results}\n\\label{sec:results}\n\nThe goal of this analysis is to characterize possible modifications of dijet event properties as a function\nof centrality and leading jet transverse momentum in \\PbPb\\ collisions.\nThe analysis is performed in six bins of collision centrality: 0--10\\%, 10--20\\%, 20--30\\%, 30--50\\%, 50--70\\%, and 70--100\\%, the latter being the most peripheral bin. The 0--20\\% most central events are further analyzed in bins of leading jet $\\pt$: 120--150, 150--180, 180--220, 220--260, 260--300, 300--500\\GeVc.\nThroughout the paper, the results obtained\nfrom \\PbPb\\ data are compared to references based on the \\PYTHYD\nsamples described in Section~\\ref{sec:pythia_samples}. 
The subscripts $1$ and $2$ in the kinematical quantities always refer to the leading and subleading jets, respectively.\n\n\n\\subsection{Dijet azimuthal correlations}\n\\label{sec:dphi}\n\n\\begin{figure*}[htbp]\n\\begin{center}\n\\includegraphics[width=\\cmsFigWidth]{dijet_dphi_all_pt_0to20_20120104}\n\\end{center}\n\\caption{Distribution of the angle $\\dphi$ between the leading and subleading jets\nin bins of leading\njet transverse momentum from\n120 $ < \\ptlead < 150$\\GeVc\\ to $\\ptlead > 300$\\GeVc\\ for\n subleading jets of $\\ptsub> 30$\\GeVc.\nResults for 0--20\\% central PbPb events are shown as points while the histogram\nshows the results for\n\\PYTHIA dijets embedded into \\HYDJET PbPb simulated events. The error bars represent the statistical uncertainties.}\n\\label{fig:dphiPt}\n\\end{figure*}\n\nEarlier studies of dijet events in heavy-ion collisions~\\cite{Chatrchyan:2011sx,Aad:2010bu} have shown that the dijet azimuthal correlations persist despite the asymmetry in dijet momenta. This aspect is crucial\nin the interpretation of energy loss observations~\\cite{CasalderreySolana:2011rq}.\nTo understand the momentum dependence of the quenching effects, this study investigates the angular correlation, \\ie, the opening azimuthal angle, \\dphi, between the leading and subleading jets of the events, in bins of leading jet $\\ptlead$.\n\nFor events with 0--20\\% centrality, two features are visible in the \\dphi\\ distributions shown in Fig.~\\ref{fig:dphiPt}: a peaking structure at $\\dphi = \\pi$, and a constant offset from zero in the overall distribution. 
The distribution around the $\\dphi = \\pi$ peak reflects the back-to-back dijet production and although this distribution changes across the various leading-jet \\pt\\ bins, there is no significant difference between PbPb data and the \\PYTHYD sample.\nThis observation confirms the conclusions of earlier studies~\\cite{Chatrchyan:2011sx,Aad:2010bu}, extending the analysis to differential leading-jet \\pt bins.\nThe event fraction that extends to small \\dphi\\ values is likely due to the matching of the leading jet with a random underlying event fluctuation instead of the true subleading jet partner. The difference in the rate of such events between the PbPb data and the \\PYTHYD sample is compatible with the effect of quenching, which makes it easier for a background fluctuation to supersede a genuine low \\pt jet.\nThe fraction of these background events strongly depends on the centrality and leading jet \\pt. For the purposes of the study presented in this paper, the contribution of these background events to the results is subtracted by using the events at small \\dphi.\n\n\\subsection{Dijet momentum balance}\n\\label{sec:asymmetry}\nTo characterize the dijet momentum balance (or imbalance) quantitatively, we use the asymmetry ratio\n\\begin{equation}\n\\label{eq:aj}\nA_J = \\frac{\\ptlead-\\ptsub}{\\ptlead+\\ptsub}~.\n\\end{equation}\nDijets are selected with $\\dphi > 2\\pi\/3$.\nIt is important to note that the subleading jet $\\ptsub > 30$\\GeVc\\ selection imposes\na $\\ptlead$-dependent limit on the magnitude of \\AJ. 
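As a concrete numerical illustration of Eq.~(\ref{eq:aj}) and of this $\ptlead$-dependent limit (a sketch with illustrative values, not analysis code):

```python
def a_j(pt_lead: float, pt_sub: float) -> float:
    # Dijet asymmetry ratio A_J = (pT1 - pT2) / (pT1 + pT2)
    return (pt_lead - pt_sub) / (pt_lead + pt_sub)

# A perfectly balanced dijet has A_J = 0
assert a_j(150.0, 150.0) == 0.0

# The pT2 > 30 GeV/c threshold caps A_J at a pT1-dependent maximum:
# for pT1 = 120 GeV/c the largest reachable value is (120 - 30) / (120 + 30) = 0.6
assert abs(a_j(120.0, 30.0) - 0.6) < 1e-12

# The cap loosens as pT1 grows: for pT1 = 300 GeV/c it is about 0.82
assert a_j(300.0, 30.0) > 0.8
```

This is why the $A_J$ distributions in different $\ptlead$ bins are not directly comparable near their kinematic endpoints.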
The distributions are normalized to\nthe number of selected dijet events.\n\nAs discussed in Section~\\ref{sec:dphi}, the contribution of background fluctuations is estimated from\nthe events with dijets of $\\dphi < \\pi\/3$, and the distributions obtained from these events are subtracted from the results.\nThe estimated fraction of background events, as a function of both leading jet \\pt\\ and centrality, is shown in the bottom row of Fig.~\\ref{fig:SubRate}.\nThe fraction of dijet events in which the subleading jet is found within the acceptance, after the subtraction of\nbackground events, is shown in the top row of Fig.~\\ref{fig:SubRate}. The events in which the subleading jet is not found should be taken into account when comparing the asymmetry distributions, although the bias is negligible for bins of leading jet $\\pt > 180$\\GeVc.\n\n\\begin{figure*}[htb]\n\\begin{center}\n\\includegraphics[width=\\cmsFigWidth]{SubLeadingRate_d20120103}\n\\caption{\nFraction of events with a genuine subleading jet with $\\dphi > 2\\pi\/3$, as a function of leading jet $\\ptlead$ (left)\nand \\npart\\ (right). 
The background due to underlying event\nfluctuations is estimated from $\\dphi < \\pi\/3$ events and subtracted from the number of dijets.\nThe fraction of the estimated background is shown in the bottom panels.\nThe error bars represent the statistical uncertainties.}\n\\label{fig:SubRate}\n\\end{center}\n\\end{figure*}\n\n\n\\begin{figure*}[htbp]\n\\begin{center}\n\\includegraphics[width=\\cmsFigWidth]{dijet_imbalance3_lead120_sub30_all_cent_20120103_subt}\n\\caption{Dijet asymmetry ratio, $A_{J}$, for leading jets of $\\ptlead> 120$\\GeVc\\ and\n subleading jets of $\\ptsub> 30$\\GeVc\\ with a selection of $\\dphi>2\\pi\/3$ between the two jets.\nResults are shown for six bins of collision centrality, corresponding to selections of 70--100\\% to 0--10\\% of the total inelastic cross section.\nResults from data are shown as points, while the histogram shows the results\nfor\n\\PYTHIA dijets embedded into \\HYDJET PbPb simulated events.\nData from pp collisions at 2.76\\TeV are shown as open points in comparison to PbPb results of 70--100\\% centrality.\nThe error bars represent the statistical uncertainties.}\n\\label{fig:JetAsymm}\n\\end{center}\n\\end{figure*}\n\n\\begin{figure*}[htbp]\n\\begin{center}\n\\includegraphics[width=\\cmsFigWidth]{dijet_imbalance3_0to20_pt_20120103_subt}\n\\caption{Dijet asymmetry ratio, $A_{J}$, in bins of leading jet transverse momentum from\n120 $ < \\ptlead < 150$\\GeVc\\ to $\\ptlead > 300$\\GeVc\\ for\n subleading jets of $\\ptsub> 30$\\GeVc\\\nand $\\dphi>2\\pi\/3$ between leading and subleading jets.\nResults for 0--20\\% central PbPb events are shown as points, while the histogram\nshows the results for\n\\PYTHIA dijets embedded into \\HYDJET PbPb simulated events. 
The error bars represent the statistical uncertainties.}\n\\label{fig:JetAsymmPt}\n\\end{center}\n\\end{figure*}\n\n\\begin{figure*}[htbp]\n\\begin{center}\n\\includegraphics[width=\\cmsFigWidth]{dijet_imbalance5_0to20_pt_20120103_subt}\n\\caption{Subleading jet transverse momentum fraction ($\\ptsub\/\\ptlead$), in bins of leading\njet transverse momentum from\n120 $ < \\ptlead < 150$\\GeVc\\ to $\\ptlead > 300$\\GeVc\\ for\n subleading jets of $\\ptsub> 30$\\GeVc\\\nand $\\dphi>2\\pi\/3$ between leading and subleading jets.\nResults for 0--20\\% central PbPb events are shown as points, while the histogram\nshows the results for\n\\PYTHIA dijets embedded into \\HYDJET PbPb simulated events.\nThe arrows show the mean values of the distributions and the error bars represent the statistical uncertainties.}\n\\label{fig:JetFractionPt}\n\\end{center}\n\\end{figure*}\n\nThe centrality dependence of $A_J$ for \\PbPb\\ collisions is shown in\nFig.~\\ref{fig:JetAsymm}, in comparison to results from \\PYTHYD simulations.\nThe most peripheral events are also compared\nto results from \\pp\\ collisions at $\\sqrt{s} = 2.76\\TeV$, where the same jet algorithm is used.\nThis comparison supports the use of the\n\\PYTHYD sample as a reference for the dijet asymmetry, which also takes into account underlying event\neffects when comparing with PbPb data.\nThe shape of the dijet momentum balance distribution experiences a gradual change with collision centrality,\ntowards more imbalance. In contrast, the \\PYTHIA simulations only\nexhibit a modest broadening, even when embedded in the highest multiplicity\n\\PbPb\\ events.\n\nTo study the momentum dependence of the amount of energy loss,\nFig.~\\ref{fig:JetAsymmPt} presents the distributions of $A_J$ in different bins of leading jet \\pt,\nfor 0--20\\% central events. 
One observes a strong evolution in the shape of the distribution across the\nvarious \\pt\\ bins, while a significant difference between PbPb data and \\PYTHYD simulations persists in\neach \\pt\\ bin. The distributions of the \\ptrat\\ ratio, shown in Fig.~\\ref{fig:JetFractionPt}, provide a more intuitive way of quantifying the energy loss.\nBoth the $A_J$ and \\ptrat\\ distributions are affected by the cut on the subleading jet $\\pt$, which should be taken into account in the interpretation of the average value. However, in the bins with leading jet $\\pt> 180\\GeVc$, more than 95\\% of the leading jets are correlated with a subleading jet, indicating that the bias due to dijet selection is very small.\n\n\\subsection{The dependence of dijet momentum imbalance on the \\texorpdfstring{\\pt}{pt} of the leading jet}\n\nThe dependence of the energy loss on the leading jet momentum can be studied using the jet transverse momentum ratio\n$\\ptsub\/\\ptlead$.\nThe mean value of this ratio is presented as a function of $\\ptlead$\nin Fig.~\\ref{deltaPt} for three bins of collision centrality, 50--100\\%, 20--50\\%, and 0--20\\%.\nThe \\PYTHYD simulations are shown as squares and the\nPbPb data are shown as points. Statistical and systematic uncertainties are plotted as error bars and brackets, respectively.\nThe main contributions to the systematic uncertainty in $\\ptsub\/\\ptlead$\nare the uncertainties in the $\\pt$-dependent residual energy scale and the effects of the underlying event on the jet energy resolution.\nEarlier studies of jet-track correlations~\\cite{Chatrchyan:2011sx} have shown that the energy composition of the quenched jets is not significantly modified, which\nputs a constraint on the energy scale uncertainty. 
The uncertainty on the energy scale is derived from\nthree sources: the uncertainty evaluated in the pp studies \\cite{Chatrchyan:2011ds},\nthe energy scale difference in pp data and MC, and the parton-type dependence of the energy scale~\\cite{MattPFlow} in simulations of PbPb events (see Section~\\ref{sec:pythia_samples}). These contributions are added in quadrature to assign the total uncertainty on the jet energy scale. Using this value as a bound, the uncertainty in the $\\ptsub\/\\ptlead$\nresults is then estimated by varying the jet response at low \\pt\\ and at high \\pt\\ independently.\nThe uncertainty on the underlying event effects is estimated from the full\ndifference between \\pp and \\PYTHYD.\nThese effects add up to 6\\% in the most central events.\nFor the low leading-jet \\pt\\ bins, jet reconstruction efficiency also introduces a minor uncertainty on the order of 1\\%.\nUncertainties due to additional misreconstructed jets, calorimeter noise, and the track requirement are negligible compared\nto the dominating sources of uncertainty.\nFor the centrality bins of 50--100\\%, 20--50\\% and 0--20\\%, the sources of systematic uncertainty are summarized in Table~\\ref{RatioSystematics}.\n\n\\begin{figure*}[htbp]\n\\begin{center}\n\\includegraphics[width=\\cmsFigWidth]{deltaPtOverPt5_lead120_sub30_diff_20120103}\n\\caption{\nAverage dijet momentum ratio $\\ptsub\/\\ptlead$ as a function of\nleading jet \\pt for three bins of collision centrality, from peripheral to central collisions,\ncorresponding to selections of 50--100\\%, 20--50\\%, and 0--20\\% of the total inelastic cross section.\nResults for \\PbPb\\ data are shown as points with vertical bars and brackets indicating\nthe statistical and systematic uncertainties, respectively. Results for \\PYTHYD are shown as squares. 
In the 50--100\\% centrality bin,\nresults are also compared with pp data, which are shown as open circles.\nThe difference between the \\PbPb\\ measurement and the \\PYTHYD expectations is shown in the bottom panels. }\n\\label{deltaPt}\n\\end{center}\n\\end{figure*}\n\nAs shown in Fig.~\\ref{deltaPt}, both the PbPb data and the \\PYTHYD samples reveal an increasing trend for the mean value of the\n jet transverse momentum ratio, as a function of the leading jet $\\ptlead$. This can be understood\nby the reduction in the effects of jet splitting and energy resolution as one goes to higher jet momenta.\nHowever, the central \\PbPb\\ data points lie consistently below the \\PYTHYD trend. The difference between the pp data and the \\PYTHYD reference is of the order of the systematic uncertainty of the measurement, whereas the difference between \\PbPb\\ data and the reference is more than twice as large. This difference is related to the parton energy loss, and for central PbPb collisions it is of significant magnitude across the whole \\pt\\ range explored in this study.\n\n\\begin{table}[htbp]\n\\begin{center}\n\\caption{Summary of the \\ptrat\\ systematic uncertainties. The ranges of values represent the\nvariation from low ($\\ptlead<140\\GeVc$) to high\n($\\ptlead>300\\GeVc$) leading jet \\pt. 
}\n\\label{RatioSystematics} \\begin{tabular}{|l|c|c|c|}\n\\hline\nSource & 50--100\\% & 20--50\\% & 0--20\\% \\\\\n\\hline\nUnderlying event & 1\\% & 3\\% & 5\\% \\\\\nJet energy scale & 3\\% &3\\% & 3\\% \\\\\nJet efficiency & 1--0.1\\% & 1--0.1\\% & 1--0.1\\% \\\\\nJet misidentification & $<0.1$\\% & $<0.1$\\% & 1--0.1\\%\\\\\nCalorimeter noise & $<0.1$\\% & $<0.1$\\% & $<0.1$\\% \\\\\nJet identification & $<0.1$\\% & $<0.1$\\% & $<0.1$\\%\\\\\n\\hline\nTotal & 3.5\\% & 4.5\\% & 6\\% \\\\\n\\hline\n\\end{tabular}\n\\end{center}\n\\end{table}\n\n\n\n\n\n\n\n\n\\section{Summary}\n\\label{sec:summary}\n\nDijet production in PbPb collisions at\n$\\rootsNN=2.76\\TeV$ was studied with the CMS detector in a data sample\ncorresponding to an integrated luminosity of \\lum. The \\ak\\ algorithm was\nused to reconstruct jets based on combined tracker and calorimeter\ninformation. Events containing a leading jet with $\\ptlead > 120$\\GeVc\\ and\na subleading jet with $\\ptsub > 30$\\GeVc\\ in the pseudorapidity range\n$|\\eta| < 2$ were analyzed.\nData were compared to \\PYTHYD dijet simulations,\ntuned to reproduce the observed underlying event fluctuations.\nFor the most peripheral collisions, good agreement between data and simulations is\nobserved. For more central collisions, the dijet momentum imbalance in the data\nis significantly larger than seen in the simulation. 
Across the entire range of jet momenta studied, no significant broadening of the dijet angular correlations is observed with respect to the reference distributions.\n\nThe dijet momentum imbalance was studied as a function of the leading jet $\\ptlead$\nfor different centrality ranges in comparison to the \\PYTHYD simulation.\nFor leading jet momenta\n$\\ptlead > 180$\\GeVc\\ the dijet balance distributions are found to be essentially\nunbiased by the subleading jet threshold of $\\ptsub> 30 $\\GeVc.\nFor mid-central (30--50\\%) and more central PbPb event selections, a significantly\nlower average dijet momentum ratio $\\langle \\ptsub\/\\ptlead \\rangle$\nis observed than in the pp data and in the dijet embedded simulations. The downward shift in\n$\\langle \\ptsub\/\\ptlead \\rangle$, with respect to the \\PYTHYD reference, is seen to increase\nmonotonically with increasing collision centrality,\nand to be largely independent of the leading jet $\\ptlead$,\nup to $\\ptlead$ values in excess of 350\\GeVc.\n\nIn summary, the results presented in this paper confirm previous observations based on\na smaller dataset and extend the measurements of jet-quenching effects to wider centrality and\nleading jet transverse momentum ranges, as well as to lower subleading jet transverse momentum.\n\n\n\\section*{Acknowledgments}\n\\hyphenation{Bundes-ministerium Forschungs-gemeinschaft Forschungs-zentren} We congratulate our colleagues in the CERN accelerator departments for the excellent performance of the LHC machine. We thank the technical and administrative staff at CERN and other CMS institutes. 
This work was supported by the Austrian Federal Ministry of Science and Research; the Belgium Fonds de la Recherche Scientifique, and Fonds voor Wetenschappelijk Onderzoek; the Brazilian Funding Agencies (CNPq, CAPES, FAPERJ, and FAPESP); the Bulgarian Ministry of Education and Science; CERN; the Chinese Academy of Sciences, Ministry of Science and Technology, and National Natural Science Foundation of China; the Colombian Funding Agency (COLCIENCIAS); the Croatian Ministry of Science, Education and Sport; the Research Promotion Foundation, Cyprus; the Ministry of Education and Research, Recurrent financing contract SF0690030s09 and European Regional Development Fund, Estonia; the Academy of Finland, Finnish Ministry of Education and Culture, and Helsinki Institute of Physics; the Institut National de Physique Nucl\\'eaire et de Physique des Particules~\/~CNRS, and Commissariat \\`a l'\\'Energie Atomique et aux \\'Energies Alternatives~\/~CEA, France; the Bundesministerium f\\\"ur Bildung und Forschung, Deutsche Forschungsgemeinschaft, and Helmholtz-Gemeinschaft Deutscher Forschungszentren, Germany; the General Secretariat for Research and Technology, Greece; the National Scientific Research Foundation, and National Office for Research and Technology, Hungary; the Department of Atomic Energy and the Department of Science and Technology, India; the Institute for Studies in Theoretical Physics and Mathematics, Iran; the Science Foundation, Ireland; the Istituto Nazionale di Fisica Nucleare, Italy; the Korean Ministry of Education, Science and Technology and the World Class University program of NRF, Korea; the Lithuanian Academy of Sciences; the Mexican Funding Agencies (CINVESTAV, CONACYT, SEP, and UASLP-FAI); the Ministry of Science and Innovation, New Zealand; the Pakistan Atomic Energy Commission; the Ministry of Science and Higher Education and the National Science Centre, Poland; the Funda\\c{c}\\~ao para a Ci\\^encia e a Tecnologia, Portugal; JINR (Armenia, 
Belarus, Georgia, Ukraine, Uzbekistan); the Ministry of Education and Science of the Russian Federation, the Federal Agency of Atomic Energy of the Russian Federation, Russian Academy of Sciences, and the Russian Foundation for Basic Research; the Ministry of Science and Technological Development of Serbia; the Ministerio de Ciencia e Innovaci\\'on, and Programa Consolider-Ingenio 2010, Spain; the Swiss Funding Agencies (ETH Board, ETH Zurich, PSI, SNF, UniZH, Canton Zurich, and SER); the National Science Council, Taipei; the Scientific and Technical Research Council of Turkey, and Turkish Atomic Energy Authority; the Science and Technology Facilities Council, UK; the US Department of Energy, and the US National Science Foundation.\n\nIndividuals have received support from the Marie-Curie programme and the European Research Council (European Union); the Leventis Foundation; the A. P. Sloan Foundation; the Alexander von Humboldt Foundation; the Belgian Federal Science Policy Office; the Fonds pour la Formation \\`a la Recherche dans l'Industrie et dans l'Agriculture (FRIA-Belgium); the Agentschap voor Innovatie door Wetenschap en Technologie (IWT-Belgium); the Council of Science and Industrial Research, India; and the HOMING PLUS programme of Foundation for Polish Science, cofinanced from European Union, Regional Development Fund.\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nRecently, assistive robotics became an increasingly popular research field as it provides use-case applications for solutions from a wide range of disciplines. Whereas special-purposed service robots such as autonomous cleaning systems or lawn mowers already found their way into many households, more general-task directed robotic assistants still entail challenges for the safe and reliable use in unconstrained environments. 
In general, these systems are aimed at providing various services in the household and workspace of humans, with tasks assigned through direct, human-oriented interaction. This requires the assistive robot to extend the purely geometric perception that suffices for cleaning tasks to a more general semantic understanding of its environment. In particular, a common task for a service robot is to find and bring back a specific kind of object. In order to execute such a command, the robot must be able to interact with the human, understand their needs, and subsequently navigate to and find the designated object autonomously.\n\n\n\\begin{figure}[t]\n\t\\centering\n\t\\includegraphics[width=\\linewidth]{info3.png}\n\t%\n\t\\caption{Example image of the TIAGo robot in front of a mapped office environment. Multiple objects on the desk, such as a screen, cup, book, mouse, and keyboard, are additionally incorporated as projected spheres into the point cloud map.}\n\t\t\\label{info}\n\\end{figure}\nIn this paper we focus on the perceptual semantics for processing meaning-oriented information about objects and humans.\nMultiple approaches \\cite{hobbit, martinez, lee} have been presented targeting the detection and recognition of semantic data for mobile robots. Common methods harness model-based approaches in which each object is identified by its shape or color using handcrafted descriptors. Given that the target object has a unique appearance, these methods can be very performant for recognition tasks. On the other hand, low image resolution, small objects, an ordinary appearance, and multiple instances of similar-looking objects can impair the detection and recognition performance of such systems. In addition, the target models have to be taught beforehand with training images showing the object from multiple different perspectives.\n\nWe tackle this challenge by using neural networks and combining them with geometric constraints provided by the robot. 
Our system can be divided into two separate modules. The first one targets the enrichment of geometric maps with semantic information by detecting and localizing objects in a robust manner (Fig. \\ref{info}). Additionally, we make sure to constantly correct and extend currently mapped objects based on updated perception data from the robot. This also makes our system suitable for localization methods that provide retrospective optimization capabilities. The procedure is carried out on the fly, in real time, while the robot is exploring its surroundings, making it a well-suited add-on for geometry-based mapping algorithms. \nThe second module detects and analyzes potential human interaction partners, for which we additionally aim to predict cooperation willingness. This provides a first step toward proactive interaction with humans, while at the same time semantic objects can be incorporated ad hoc into the resulting tasks.\nWe implemented our system on the TIAGo, a mobile humanoid robot platform.\nThe rest of this paper is organized as follows: Section \\ref{sec:system} describes our proposed system. In Section \\ref{implementation}, we give details of the current state of implementation. Section \\ref{conclusion} concludes and gives an insight into our future work.\n\\begin{figure*}[th]\n\t\\centering\n\t\\includegraphics[width=\\linewidth]{system3.pdf}\n\t\n\n\t\\caption{Overview of the proposed method. We use RGB-D images to feed our processing pipelines for object registration and human condition estimation. The object registration consists of 2D object detections that are mapped and associated to 3D objects. The human condition estimation starts with a multi-task detection process predicting persons and faces, which are in a subsequent step further analyzed for interaction willingness. }\n\t\\label{system}\n\\end{figure*}\n\n\\section{system components} \n\\label{sec:system}\nOur system (Fig. 
\\ref{system}) consists of two main pipelines for extracting and evaluating semantic information from visual data. The first one is dedicated to detecting and registering volumetric objects. The second one classifies and estimates the attention-related behavior of humans. In the following subsections, we describe each component in detail.\n\n\\subsection{Object registration} \nThe first step of the object registration is the classification of image regions in the image stream provided by the camera in the head of the robot. For this task, deep neural networks have proven to reliably detect a wide range of different types of objects. Popular architectures are the RCNN model family \\cite{rcnn, FasterRCNN} and the YOLO model family \\cite{yolo, yolo9000}. Due to its outstanding speed we chose YOLOv4 \\cite{yolov4} for our framework. The network is pretrained on the COCO database~\\cite{coco} and able to predict up to 80 different classes. However, when testing it outside of the database we experienced a noticeably decreased accuracy rate and recurring false positive detections. Accordingly, unfiltered processing of the provided predictions would result in falsely labeled map parts in later stages. To overcome this problem, we apply an adapted Intersection over Union (IOU) tracker~\\cite{iou} that associates similarly located and equally labeled bounding boxes from consecutive images. Associated detections form a so-called \\textit{track}, where the confidence of a true positive object classification increases with the track's length.\nAfter reaching a specific track length threshold, the 2D detection is projected into 3D space using the intrinsic camera parameters and the extrinsics provided by the robot's localization algorithm. For this purpose, the depth estimation from the camera's active stereo module is used together with the RGB images to generate metrically scaled point clouds. 
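The track-based filtering of detections described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the IOU threshold, the confirmation length, and the greedy same-label matching are all illustrative assumptions.

```python
def iou(a, b):
    """Intersection over Union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter > 0 else 0.0

class Track:
    def __init__(self, box, label):
        self.box, self.label, self.length = box, label, 1

def update_tracks(tracks, detections, iou_thresh=0.5, confirm_len=5):
    """Associate same-label detections of consecutive frames to tracks;
    a detection is trusted only once its track is long enough."""
    for box, label in detections:
        best = max((t for t in tracks if t.label == label),
                   key=lambda t: iou(t.box, box), default=None)
        if best is not None and iou(best.box, box) >= iou_thresh:
            best.box, best.length = box, best.length + 1
        else:
            tracks.append(Track(box, label))
    return [t for t in tracks if t.length >= confirm_len]
```

A detection seen in only one frame (a typical false positive) never reaches the confirmation length and is therefore never projected into the map.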
Depending on the bounding box sizes, we then cut out object cuboids that are afterwards projected into the robot's 3D point cloud map. While this method for registering semantic objects can be performed very efficiently, it is combined with supportive solutions to overcome the following challenges: \n \\linebreak\n\\subsubsection{Object Recognition}\nWhen the robot is moving around, it may detect objects in the image stream that have already been registered to the map. Processing them a second time would lead to multiple mapped instances of the same physical object and must therefore be prevented. Consequently, the object mapping must be able to recognize objects that have been seen and registered before. In general, 3D object recognition has been extensively researched in recent years, but it commonly requires huge computational effort. In addition, objects from the same class can resemble each other so closely that taking unique visual fingerprints fails. We approach the issue by comparing the projected localization of a new object \\textit{candidate} with the localization of earlier registered objects of the same class. If the candidate significantly overlaps with its counterpart, it is likely that both belong to the same origin. This approach is implemented using a \\textit{nearest neighbor} association process, where we compare the average distance between the entire point clouds instead of their centroids. This way, we incorporate the object's size in the association decision, as point clouds of large objects can have centroids that are wide apart while the clouds themselves heavily overlap. \n \\linebreak\n\\subsubsection{Object Optimization}\nThe probabilistic localization of the robot involves uncertainties and will only ever approximate its real pose. This effect is further amplified when navigating in an unknown environment, where a simultaneous mapping process is required. 
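The nearest-neighbor association over whole point clouds described above can be sketched as follows. The symmetric average nearest-point (Chamfer-style) distance and the 10 cm threshold are our illustrative assumptions; the paper does not specify the exact metric or value.

```python
import numpy as np

def cloud_distance(a, b):
    """Symmetric average nearest-neighbour distance between two point
    clouds a (N, 3) and b (M, 3); small when the clouds overlap."""
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    return 0.5 * (d.min(axis=1).mean() + d.min(axis=0).mean())

def same_instance(candidate, registered, label, max_dist=0.1):
    """Return the closest registered object of the same class if the
    cloud distance falls below max_dist (metres), else None."""
    peers = [o for o in registered if o["label"] == label]
    if not peers:
        return None
    best = min(peers, key=lambda o: cloud_distance(candidate, o["cloud"]))
    return best if cloud_distance(candidate, best["cloud"]) < max_dist else None
```

Using the full clouds rather than centroids is what lets a large object (whose centroid may sit far from a partial re-observation) still be recognized as the same instance.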
For localization methods with optimization capabilities, the detection of distinctive or familiar map areas is exploited to re-evaluate the past trajectory and, where appropriate, to correct drifts. Concurrently, we use this mechanism to recalculate the poses of our semantic objects from the corrected robot trajectory. At the same time, we analyze whether this leads to any strong overlaps between their updated point clouds. This gives us the opportunity to even correct mapping errors induced by the erroneous trajectory. Objects with salient overlap are assumed to belong to the same physical instance and are therefore merged, whereby the point clouds are concatenated and the localization and object dimensions are recalculated accordingly. This way, we can not only maintain and correct the perception of semantic objects around us, but with the aid of point cloud merging we are able to build up more complete point cloud appearance models of our objects.\n\\subsection{Human Behavior Estimation}\nTo create optimal preconditions for a successful human-robot interaction, we apply a dedicated processing pipeline to search for and analyze human interaction partners. Instead of using one network for person detection and one for face detection, we utilize a custom neural network that is able to predict person and face bounding boxes simultaneously.\n \\linebreak \n\\subsubsection{Face-Person Detection}\nThe network architecture is based on the SSD approach~\\cite{ssd}, which provides a higher inference rate than two-stage detectors. Moreover, the detectors for persons and faces are combined using multi-task learning, which enables them to share their first feature extraction layers. This boosts both the efficiency and the generalization capabilities. \n\nThe main difficulty in combining the two tasks of face and person detection in a single neural network is that publicly available databases contain ground truth for only one of the two tasks. 
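The object merge performed after a trajectory correction (concatenating the clouds and recomputing localization and dimensions, as described in the object optimization step above) can be sketched as follows; the dictionary layout and the axis-aligned extent are our illustrative assumptions.

```python
import numpy as np

def merge_objects(obj_a, obj_b):
    """Merge two mapped objects judged to be the same physical instance:
    concatenate their point clouds and recompute centroid and extent."""
    cloud = np.vstack([obj_a["cloud"], obj_b["cloud"]])
    return {
        "label": obj_a["label"],
        "cloud": cloud,                              # more complete appearance model
        "centroid": cloud.mean(axis=0),              # recomputed localization
        "extent": cloud.max(axis=0) - cloud.min(axis=0),  # recomputed dimensions
    }
```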
For this purpose, we developed a custom multi-task loss function and designed an architecture consisting of a shared backbone and separate detection layers for each detection task. During training, we alternate between batches of person annotations and batches of face annotations, which are taken from different databases.\nThe prediction for the class without ground truth is assumed to be correct, and only the gradients of the detection layers with existing ground truth information are adjusted. Thus, a completely end-to-end trainable framework is created.\n \nWhile the predicted bounding boxes for persons give us a good estimate of their localization, the face predictions are further used for behavior examination. In particular, we want to know whether the human is interested in an interaction with the robot. A good indication of general interest is the gaze or head pose pointing towards the robot.\nAs gaze estimation is error-prone at low image resolutions, we focus on the head pose to evaluate general interaction willingness.\n \\linebreak\n\\subsubsection{Interaction Willingness Estimation}\nTo estimate the head pose, we first determine the positions of facial landmarks in the face images provided by the face predictor. These facial keypoints are distinctive spots in the face (e.g., the corners of the eyes, the sides and top of the nose) that provide information about the current configuration of the face. We estimate the landmark positions by applying an ensemble of regression trees~\\cite{faciallm}, which has proven to provide very fast and reliable predictions. In the following step, we project these 2D landmarks into 3D space. This is achieved by using a default 3D model that contains the same facial landmarks and aligning it with the previously predicted 2D counterpart. As this represents a minimization problem, we apply the commonly used Levenberg-Marquardt optimization to estimate a matching 3D mask. 
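The alternating-batch training scheme with masked head gradients described above can be illustrated with a toy numerical sketch. This is not the paper's SSD training code: a linear "backbone" and two linear "heads" stand in for the detector, and all dimensions and the learning rate are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the shared-backbone, two-head detector.
W_shared = rng.normal(size=(4, 4)) * 0.1
W_person = rng.normal(size=(4, 1)) * 0.1
W_face = rng.normal(size=(4, 1)) * 0.1

def train_step(x, y, task, lr=0.05):
    """One alternating-batch step: the shared backbone always receives
    gradients, but only the head whose ground truth exists is updated."""
    global W_shared
    h = x @ W_shared
    W_head = W_person if task == "person" else W_face
    err = h @ W_head - y                          # d(MSE)/d(pred), up to a factor
    W_shared -= lr * x.T @ (err @ W_head.T) / len(x)
    W_head -= lr * h.T @ err / len(x)             # in-place: inactive head untouched
    return float((err ** 2).mean())
```

A batch drawn from a person-only database therefore never perturbs the face head, while both tasks keep shaping the shared feature extractor.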
The rotation and translation of the projection with the smallest re-projection error imply the head pose.\n\nThe prediction is performed for every single image that contains a detected face. However, the pose information from a single frame is not sufficient to derive assumptions about an underlying general interaction willingness. Brief views in the direction of the robot can be caused by arbitrary intentions, including behavior estimation (e.g., when the robot is moving) or even casual glances without deeper conscious intentions. We therefore track a human's head pose over multiple images and gain confidence about interaction intentions the longer the person's focus is directed at the robot. We assume that a viewing direction towards the robot held for a duration of 3~ms represents a reliable threshold to determine interaction willingness. However, short distractions are common and result in brief interruptions of the attention towards the robot. We therefore use a solution based on our previous idea \\cite{hempel} and calculate the interaction willingness in a dynamic manner. We apply a progress bar that loads faster when the attention is directed to the robot and unloads more slowly in case of distractions. In this way, showing interaction willingness can be resumed in a natural way even though it has been abandoned for a short time. \n\n\\section{Implementation}\n\\label{implementation}\nWe implemented our system in the form of multiple Robot Operating System (ROS) compatible modules for seamless intercommunication with other ROS components and deployed it on the TIAGo robot. For mapping and localization we use RTAB-Map~\\cite{rtabmap}, as its graph-based approach suits our semantic data update and correction process. The robot's IMU is used as an initial guess to perform a laser-based ICP-SLAM that runs along with other ROS nodes for path planning, collision avoidance, and motion planning on the robot's onboard i7 computer. 
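The dynamic "progress bar" for interaction willingness described in the previous section can be sketched as follows. The fill time and the asymmetric drain rate are illustrative parameters, not the values used in the paper.

```python
def make_attention_estimator(fill_time=3.0, drain_ratio=0.5):
    """Leaky progress bar: it fills while the head pose points at the
    robot and drains more slowly otherwise, so a short distraction does
    not reset the estimate."""
    level = 0.0

    def update(attending, dt):
        nonlocal level
        rate = 1.0 / fill_time
        level += rate * dt if attending else -drain_ratio * rate * dt
        level = min(1.0, max(0.0, level))
        return level          # willingness is assumed once the level reaches 1.0

    return update
```

Because the bar unloads at only a fraction of the fill rate, attention interrupted for a moment resumes from a high level instead of starting over.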
Our image-processing-focused methods are deployed on an external mobile system that is placed on the robot's shoulders. It contains a Quadro RTX 5000, which is able to process the neural networks more efficiently than CPUs.\n\n\n\\section{Conclusion}\n\\label{conclusion}\nIn this work, we addressed the problem of semantically meaningful perception for mobile assistive robots, which constitutes a fundamental requirement for solving complex tasks. \n\nWe propose a neural-network-enhanced approach for successively mapping and maintaining 3D objects in the robot's environment that can run alongside other geometric mapping modules. \nSimilarly, we process humans by utilizing a custom single-shot detector that simultaneously provides person and face predictions in the image stream. The latter are further used to estimate the person's interaction willingness to enable proactive collaboration behavior on the robot's side. All modules are implemented on a mobile humanoid robot platform and are compatible for intercommunication with other ROS modules such as path and motion planning. \nIn future work we will exploit this advantage to incorporate additional text-to-speech and speech recognition modules. First, we will search for potential interaction partners and proactively ask for tasks when interaction willingness is predicted. Afterwards, the tasks can be provided orally, processed, and executed. Typical tasks will be the localization and fetching of specific objects that have either already been mapped or have to be found in a dedicated exploration drive.\nThe behavior will be evaluated in real-world scenarios. 
\n\n\\bibliographystyle{IEEEtranDOI}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nAccording to core accretion models \\citep{mizuno-78}, giant gas\nplanets such as Jupiter and Saturn formed via the accumulation of a\nprotocore of rock and ice, which gained solid material until it reached\nsufficient size to begin accreting the gaseous component of the\nprotosolar nebula. The existence of solid ice in the outer solar\nsystem promotes the rapid growth of the more massive protocores,\nallowing the accretion of large quantities of gas necessary for\nJupiter-sized planets. Giant planets thus have a dense core of rock\nand ice surrounded by a H-He envelope. It is not known, however,\nwhether the initial dense core remains stable following the accretion\nof the H-He outer layer or whether the core erodes into the fluid\nhydrogen-rich layers above \\citep{stevenson-pss-82,guillot-book}.\n\nThe gravitational moments of Jupiter and Saturn, which have been\nmeasured by prior planetary missions and will be determined for\nJupiter with unprecedented accuracy by the upcoming Juno mission, may\nbe used in combination with interior models\n\\citep{militzer-apj-08,guillot-book,hubbard-ass-05,saumon-apj-04,nettelmann} to\nestimate the mass of the present-day core, but it is unclear whether\nthese masses correspond to the primordial core mass. It has been\nsuggested \\citep{guillot-book,saumon-apj-04} that the present-day core\nmass of Jupiter may be too small to explain its formation by core\naccretion within the relatively short lifetime of the protosolar\nnebula \\citep{pollack-icarus-96}, although a more recent Jupiter model\n\\citep{militzer-apj-08} predicted a larger core of 14--18 Earth masses,\nwhich is consistent with core accretion. 
Furthermore, direct\nmeasurements of Jupiter's atmosphere suggest a significant enhancement\nin the concentration of heavy $(Z > 3)$ elements\n\\citep{niemann-science-96}, but it is unknown to what extent this\nshould be attributed to a large flux of late-arriving planetesimals\nversus the upwelling of core material. Determining the extent of core\nerosion is thus a major priority for understanding the interiors of\ngiant planets and the process by which they were formed.\n\nIn this work we focus on water ice, presumed to be a major constituent\nof the core, and consider the question of whether it has significant\nsolubility in fluid metallic hydrogen at conditions corresponding to\nthe core-mantle boundaries of giant gas planets. Water ice is the most\nprevalent of the planetary ices (water, methane and ammonia) which may\nbe assumed to make up the outermost layers of a differentiated\nrock-ice core \\citep{hubbard-science-81}. At the conditions of\ntemperature and pressure prevalent at giant planet cores, water ice is\npredicted \\citep{cavazzoni,french-prb-09} to be in either in a fully\natomic fluid phase in which oxygen and hydrogen migrate freely and\nindependently, or in a superionic phase in which oxygen atoms vibrate\naround defined lattice sites while hydrogen atoms migrate\nfreely. Assuming the existence of a core-mantle boundary at which\nwater ice and the fluid H-He phase are in direct contact, the relevant\nquestion is the extent to which the system may lower its Gibbs free\nenergy by the redistribution of the atoms of the ice phase into the\nfluid hydrogen. 
The extreme pressure and temperature conditions\nprevalent at giant planet core-mantle boundaries (8000--12000K and\n8--18 Mbar for Saturn, 18000--21000K and 35--45 Mbar for Jupiter) are\nnot yet obtainable in the laboratory, thus \\emph{ab initio}\nsimulations provide the best available guide to determining the extent\nof core solubility.\n\n\\section{Theory and Methodology}\n\nWe used density functional molecular dynamics (DFT-MD) calculations\nand coupling constant integration (CCI) techniques to compute the\nGibbs free energy of solvation, $\\Delta G_{sol}$, of H$_2$O in fluid\nmetallic hydrogen, i.e. the change in Gibbs free energy when an H$_2$O\nmolecule is removed from the pure ice phase and dissolved in H. The\nfree energy of solubility is computed from the free energies of three\nsystems: pure ice, pure fluid H, and a mixed system in which the atoms\nof one water molecule are dissolved in $n$ atoms of hydrogen,\n\n\\begin{equation}\n\\Delta G_{sol} = G \\left(\\mbox{O} \\mbox{H}_{n+2}\\right) - \\left[G \\left( {\\mbox{H}_2 \\mbox{O}} \\right) + G\\left( \\mbox{H}_{n} \\right) \\right]\n\\label{deltag}\n\\end{equation}\n\nwhere $G \\left( {\\mbox{H}_2 \\mbox{O}} \\right)$ is the energy per\n$\\mbox{H}_2\\mbox{O}$ stoichiometric unit of the ice phase, and\n$G\\left(\\mbox{H}_{n}\\right)$ is obtained from an appropriately-scaled\nsimulation of 128 H atoms. This quantity becomes more negative as\nsolubility increases. A $\\Delta G_{sol}$ of zero implies a saturation\nconcentration of exactly one H$_2$O to $n$ H. In order to span the\nrange of likely conditions for the core-mantle boundary of Jupiter and\nSaturn, we considered pressures of 10, 20 and 40 Mbar, at a range of\ntemperatures from 2000 to 20000~K.\n\nComputation of free energies from MD simulations is difficult since\nthe entropy term is not directly accessible. 
Here we use a two-step\nCCI approach as previously applied by several authors\n\\citep{alfe-nature-99,morales-pnas-09,wilson-prl-10} to compute free\nenergies. The CCI method provides a general scheme for computing\n$\\Delta F$ between systems governed by potential energy functions\n$U_1$ and $U_2$. We construct an artificial system $U_\\lambda$ whose\nforces are derived from a linear combination of the potential energies\nof the two systems $U_\\lambda = (1-\\lambda)U_1 + \\lambda U_2$. The\ndifference in Helmholtz free energy between the two systems is then\n\n\\begin{equation}\n\\Delta F = \\int_0^1 \\langle U_2 - U_1 \\rangle_\\lambda \\: d \\lambda,\n\\end{equation}\n\nwhere the average is taken over the trajectories governed by the\npotential $U_\\lambda$. We perform two CCIs for each $G$ calculation:\nfirst from the DFT-MD system to a system governed by a classical pair\npotential which we fit to the DFT dynamics of the system via a\nforce-matching approach \\citep{izvekov-jcp-04}, and then from the\nclassical system to a reference system whose free energy is known\nanalytically.\n\n\\subsection{Material phases}\n\nThe material phases in question must be established prior to the Gibbs\nfree energy calculations. At the pressure and temperature conditions\nof interest, hydrogen is a metallic fluid of H atoms in which\nmolecular bonds are not stable. Water dissolved within hydrogen\nlikewise is non-molecular, with a free O atom in an atomic H\nfluid. For water ice, the phase diagram at giant planet core pressures\nis divided into three regimes\n\\citep{cavazzoni,goldman,mattsson-prl-06,french-prb-09}: a\nlow-temperature ($<$~2000~K) crystalline regime, an intermediate\nsuperionic regime in which oxygen atoms vibrate around fixed lattice\nsites while hydrogen atoms migrate freely, and a higher-temperature\nfully fluid regime in which both hydrogen and oxygen atoms are\nmobile. 
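The $\lambda$-integration above reduces, in practice, to a numerical quadrature over the per-$\lambda$ MD averages $\langle U_2 - U_1 \rangle_\lambda$. A minimal sketch using the trapezoidal rule (the quadrature scheme itself is our illustrative choice; the paper states only the number of $\lambda$ points used for each leg):

```python
import numpy as np

def delta_f(lambdas, mean_du):
    """Coupling-constant integration: Delta F = integral over [0, 1] of
    <U2 - U1>_lambda d(lambda), evaluated by the trapezoidal rule over
    the sampled lambda points."""
    lam = np.asarray(lambdas, dtype=float)
    du = np.asarray(mean_du, dtype=float)
    return float(np.sum(0.5 * (du[1:] + du[:-1]) * np.diff(lam)))

# Illustrative use with a made-up linear <U2 - U1>_lambda curve (in eV):
lam = np.linspace(0.0, 1.0, 5)
mean_du = 2.0 - 1.5 * lam
dF = delta_f(lam, mean_du)   # exact for a linear integrand: 2.0 - 0.75 = 1.25 eV
```

Chaining the two legs (DFT-to-classical, then classical-to-reference) sums two such integrals, anchoring the DFT free energy to the analytically known reference.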
The transition between the crystalline and superionic regimes\nhas not yet been studied in detail at high pressures, but our\nsimulations find that it occurs below 2000~K. The transition from\nsuperionic to fully fluid is found to occur in simulation at\ntemperatures ranging from 8000~K at 10 Mbar to 13000~K at 50 Mbar\n\\citep{french-prb-09}. Consequently, our study includes the superionic\nand fully fluid ice phases.\n\nPrevious studies of superionic ice \\citep{cavazzoni,french-prb-09}\nhave used a \\emph{bcc} arrangement of atoms for the oxygen\nsublattice. We found that such a lattice was stable at all points\nstudied in the superionic regime except for 10 Mbar at 2000 and\n3000~K. At these conditions, we found that the oxygen sublattice was\nstable in the \\emph{Pbca} geometry, which we recently reported to be\nthe most stable zero-temperature structure for ice at 10 Mbar\n\\citep{militzer-prl-10}. At 10 Mbar and 5000~K, superionic ice with a\n\\emph{bcc} sublattice was stable but the \\emph{Pbca} was not. The\n\\emph{bcc} oxygen sublattice was found to be stable for 20 and 40 Mbar\npressures at all temperatures studied in the superionic regime.\nAttempts to perform a superionic simulation in the \\emph{Cmcm}\ngeometry reported by \\cite{militzer-prl-10} to be the stable\nzero-temperature structure at these pressures resulted in an unstable\nsystem with a large anisotropic strain. We thus used a \\emph{bcc}\noxygen sublattice for all superionic ice simulations, except the\n2000~K and 3000~K simulations at 10 Mbar, which used the oxygen\nsublattice from the \\emph{Pbca} phase, as indicated in Figure 1. 
While\nwe cannot yet exclude the possibility of the existence of yet another\nsuperionic ice structure, we note that the differences between\ndifferent ice phase energies are found to be on the order 0.1~eV per\nH$_2$O, and thus will not significantly affect our results about the\nstability of ice in giant planet cores.\n\n\\subsection{Computation of Gibbs free energies}\n\nOur goal in this paper is to study the solubility of water ice in\nfluid hydrogen. Due to considerations arising from the entropy of\nmixing, the solubility of one material in another is never zero,\nhowever solubility at trace quantities is not sufficient for core\nerosion. In particular, we wish to know whether solubility is\nthermodynamically favored at concentrations significantly greater\nthan the background concentration of oxygen in the fluid envelope of\nJupiter or Saturn -- this is equal to approximately one O atom to 1000\nH atoms if we assume solar concentrations for the Jovian envelope and\napproximately one part in 300 if we assume a threefold enrichment for\noxygen as observed for most other heavy elements \\citep{mahaffy}. We\nbegin by computing the Gibbs free energies of solubility for\ndissolving H$_2$O in pure H at one part in 125, and generalize later.\n\nThe coupling constant integration approach requires, as an integration\nend point, a reference system whose free energy may be computed\nanalytically. It is important to ensure that the system does not\nundergo a phase change along the integration pathway since this may\ncause numerical difficulties in the integration. For the fluid\nsystems, being pure hydrogen, hydrogen with oxygen, and ice at\ntemperatures above the superionic-to-fluid transition, we used an\nideal atomic gas as the reference system\n\\citep{alfe-nature-99,morales-pnas-09,wilson-prl-10}. 
For superionic\nice, we use as a reference system a combination of an ideal gas system\nfor the hydrogen atoms with an Einstein crystal of oxygen atoms each\noxygen tethered to its ideal lattice site with a harmonic potential of\nspring constant 30~eV\/$\\mbox{\\AA}^2$.\n\nThe DFT-MD simulations in this work used the Vienna Ab Initio\nSimulation Package (VASP) \\citep{vasp} with pseudopotentials of the\nprojector augmented wave type \\citep{paw} and the exchange-correlation\nfunctional of \\citet{pbe}. The pseudopotentials used had a core radius of 0.8~{\\AA} for hydrogen and 1.1~{\\AA} for oxygen. Wavefunctions were expanded in a basis set\nof plane waves with a 900~eV cutoff and the\nBrillouin Zone was sampled with $2 \\times 2 \\times 2$ $k$-points. The\nelectronic temperature effects were taken into account via Fermi-Dirac\nsmearing. A new set of force-matched potentials was fitted for each\npressure-temperature conditions for each stoichiometry. All MD\nsimulations used a 0.2~fs timestep. In the classical potential under\nsuperionic conditions, an additional harmonic potential term was added\nto ensure that oxygen atoms remained in the appropriate lattice sites,\nhowever, we found that in most cases the fitted pair potential alone\nwas sufficient to stabilize the superionic state.\n\nThe first step of the free energy calculations was the determination\nof the appropriate supercell volumes for each system for each set of\npressure and temperature conditions. This was accomplished via\nconstant-pressure MD simulations\n\\citep{hernandez-jcp-01,hernandez-prl-10} with a duration of 1.6~ps\n(0.6~ps for ice). DFT-MD trajectories were then computed in a fixed\ncell geometry in order to fit classical potentials. A run of 0.4~ps\nwas found to be sufficient for fitting suitable potential. We then\nperformed molecular dynamics runs 600 fs long (400 fs for ice) at five\n$\\lambda$ values to integrate between the DFT and classical\nsystems. 
Finally, we performed classical Metropolis Monte Carlo at 24\n$\lambda$ values to integrate from the classical to the reference\nsystem.\n\n\section{Simulation results}\n\nTable I lists the total Gibbs free energies for each simulated system\n(H$_{128}$, OH$_{127}$ and ice) for each set of temperature and\npressure conditions. Ice was confirmed to remain superionic except at\nthe 20~Mbar\/12000~K and 40~Mbar\/20000~K conditions where it was fully\nfluid. The error bars on the computed $G$ values are dominated primarily by the uncertainty in the computed volume at the desired pressure, and secondarily by uncertainty in the $\langle U_{DFT} - U_{classical} \rangle_\lambda$ term in the coupling constant integration. These free energies are combined using Equation \ref{deltag} to\ngive $\Delta G_{sol}$ representing the free energy change associated\nwith removing an H$_2$O from the ice phase and dissolving it in the\nhydrogen fluid at a concentration of one molecule per 125 solvent H\natoms. $\Delta G_{sol}$ increases strongly with temperature, but shows\nonly a weak dependence on pressure within the 10--40 Mbar range under\nconsideration. The $\Delta G$ values were found to be well converged with respect to wavefunction cutoff and $k$-point sampling, to within the available error bars. The effect of the electronic entropy term on $\Delta\nG_{sol}$ was found to be less than 0.1 eV in all cases. From a linear\ninterpolation through the adjacent data points we estimated the\ntemperature at which $\Delta G_{sol}$ passes through zero as 2400~K\n$\pm$ 200~K at 40 Mbar, 2800~K $\pm$ 200~K at 20 Mbar, and 3400~K\n$\pm$ 600~K at 10 Mbar. As shown in Figure 1, this is clearly far\nlower than any reasonable estimate for the core-mantle boundaries for\neither Jupiter or Saturn. 
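The zero-crossing estimate quoted above is a linear interpolation between the two computed $(T, \Delta G_{sol})$ points that bracket the sign change. A minimal sketch, with illustrative bracketing values (not the actual table entries) chosen so the crossing lands at the 2400~K quoted for 40~Mbar:

```python
def zero_crossing(t1, g1, t2, g2):
    """Temperature at which Delta G_sol crosses zero, by linear
    interpolation between two bracketing (T, Delta G) points."""
    return t1 - g1 * (t2 - t1) / (g2 - g1)

# Illustrative bracketing points in (K, eV); not data from the paper.
T0 = zero_crossing(2000.0, -0.4, 3000.0, 0.6)   # -> 2400.0 K
```

The quoted uncertainties would follow from propagating the error bars of the two bracketing $\Delta G_{sol}$ values through the same expression.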
The onset of high solubility occurs within\nthe portion of the phase diagram where ice is superionic, and does not\ncoincide with either the superionic-to-fluid transition or the\ncrystalline-to-superionic transition in ice.\n\nThe total $\\Delta G$ of solubility may be broken down into three\ncomponents: a $\\Delta U$ term from the potential energy, a $P \\Delta\nV$ term from the volume difference, and a $-T \\Delta S$ entropic\nterm. Figure 2 shows this breakdown as a function of temperature for\n20~Mbar. The breakdown for other pressures looks similar. The $\\Delta\nU$ term, representing the difference in chemical binding energy,\nprovides approximately a 2 eV per molecule preference for ice\nformation at all temperatures where ice is superionic, but a much\nsmaller preference for the 12000~K case where ice is fluid. The $PV$\nterm is indistinguishable from zero within the error bars, suggesting\nthat this is not a pressure-driven transition, in contrast to recent\nresults on the partitioning of noble gases between hydrogen and helium\nin giant planet interiors in which volume effects were found to be the\ndominant term \\citep{wilson-prl-10}. The $-T \\Delta S$ term dominates\nthe temperature-dependent behavior, underlining that ice dissolution\nis indeed an entropy-driven process.\n\nGiven the Gibbs free energy of solubility for the insertion of one\nH$_2$O into 125 H atoms we can determine an approximate Gibbs free\nenergy of solubility at other concentrations, by neglecting the\ncontribution of the oxygen-oxygen interaction and including only the\nentropic term arising from the mixing. 
Under these approximations, we\nobtain the expression\n\n\begin{eqnarray}\n\frac{\Delta G_{sol}[m] - \Delta G_{sol}[n]}{k_BT} &=& (m+2) \log \left( \frac{(m+2)V_H + V_O}{(m+2)V_H}\right)\nonumber \\ \n&-& (n+2) \log \left( \frac{(n+2)V_H + V_O}{(n+2)V_H} \right)\nonumber \\\n&-& \log \left( \frac{m+2}{n+2} \right) ,\n\label{conc}\n\end{eqnarray}\n\nwhere $V_H$ and $V_O$ are the effective volumes of each H and O atom\nin the fluid. This approximation becomes invalid as the oxygen-oxygen\ninteraction term in the fluid oxygen phase becomes significant;\nhowever, for our purposes it is sufficient to know that the saturation\nconcentration is significantly higher than the background\nconcentration. If a saturation concentration of oxygen in hydrogen\ndoes indeed exist then this may be expected to have a retarding effect\non the erosion of the core.\n\nWe must also consider the possibility that hydrogen and oxygen could\ndissolve separately from the H$_2$O mixture, leaving behind a\ncondensed phase with stoichiometry other than H$_2$O. We tested two\ncases explicitly, computing the free energies of pure oxygen and\none-to-one HO phases at the Jupiter-like 40 Mbar, 20,000~K set of\nconditions. Both O and HO were found to be in a fully fluid state at these temperature\/pressure conditions. We found that HO had a free energy of solubility of -11.2\neV per oxygen, while pure oxygen had -22.8 eV per oxygen. Comparing to\nthe -8.9 eV per oxygen solubility of H$_2$O, this suggests oxygen-rich\ncondensed phases are less thermodynamically stable than the H$_2$O\nphase, and certainly far less favorable than dissolution of the dense\nphase into metallic hydrogen. We have also neglected the possibility\nof hydrogen-enriched dense phases such as H$_3$O. 
While it is possible that\nsuch phases may be somewhat energetically preferred to H$_2$O, it is\nextremely unlikely that the preference will be strong compared to the\n$8-12$ eV per O unit preference for solubility.\n\nIn Figure 3 we plot the estimated $\Delta G_{sol}$ as a function of\nconcentration for various computed temperatures at a pressure of\n40~Mbar. A negative value of $\Delta G_{sol}$ means that it is\nthermodynamically favorable to dissolve further ice into the hydrogen\nphase at a given hydrogen-phase ice concentration, and the point at\nwhich $\Delta G_{sol}$ is zero is the saturation concentration. For\n2000~K and 3000~K the saturation concentrations are estimated to be on\nthe order of 1:500 and 1:20 respectively, while for higher\ntemperatures the saturation concentration is much higher. Given that\nwe neglect O--O interactions in the fluid hydrogen phase, it is\ndifficult to precisely determine the saturation concentrations for\nhigher temperatures. However, we may safely say that the saturation\nconcentration for temperatures in excess of 3000~K is very much\ngreater than the background oxygen concentration, and hence that\nsolubility of ice into hydrogen at the core-mantle boundaries of\nJupiter and Saturn is expected to be strongly thermodynamically\nfavored.\n\n\section{Discussion}\n\nThe consequences of core erosion for planetary evolution models have\nbeen previously considered by \citet{stevenson-pss-82} and later\n\citet{guillot-book}. The effects of core erosion can potentially be\ndetected either by orbital probes such as Juno or by atmospheric entry\nprobes, since the redistribution of core material throughout the\nplanet will manifest itself both by a smaller core (detectable from\ngravitational moments) and a higher concentration of heavy elements in\nthe atmosphere than would be expected in a planet without core\nerosion. 
Once core material has dissolved into the metallic H layers,\nthe rate at which core material can be redistributed throughout the\nplanet is expected to be limited by double diffusive convection\n\citep{turner,huppert}. Since the higher density of the lower material, due to compositional\ngradients, interferes with the convection\nprocess, convection may be slowed significantly. \citet{guillot-book}\nmodelled the effect of core erosion under the assumption of a fully\nsoluble 30 Mbar core in each planet. Under their assumptions up to 19\nEarth masses could have been redistributed from Jupiter's core but\nonly 2 Earth masses from Saturn's, the difference being Jupiter's\nhigher temperatures. While this prediction is subject to significant\nuncertainty in many aspects of the model, it does suggest that a\nredistribution of a significant fraction of the initial protocore is\npossible, at least in Jupiter. Further refinement of models for the\nupconvection of core material and its observable consequences may be\nfruitful. The effect of core erosion on the heat transport and mass\ndistribution properties of Jupiter and Saturn should also be taken\ninto account in future static models of these planets' interiors.\n\nThese calculations can be expanded in several ways. We have neglected\nthe presence of helium in the hydrogen-rich mantle; however, due to the\nlarge magnitude of $\Delta G_{sol}$ and the chemical inertness of\nhelium we do not expect the presence of helium in the mantle to\nsignificantly affect the solubility behavior. We have explicitly made\nthe assumption that ice and hydrogen are in direct contact, an\nassumption which might fail in one of two ways. If hydrogen and helium\nare immiscible at the base of the atmosphere then the core may make\ndirect contact with a helium-rich layer. 
This, however, is unlikely in\nthe context of the calculations of \citet{morales-pnas-09}, who predict\nhydrogen-helium immiscibility only far away from the cores of Jupiter\nand Saturn. The other possibility is that the ice layers of the core\nmay be gravitationally differentiated, leaving ice beneath layers of\nthe less dense planetary ices methane and ammonia. Since the bonding\nin these ices is similar to that in water ice, one could assume that they\nshow a similar solubility behavior, but this analysis is the subject\nof future work.\n\nWe have also considered only the structure of the present-day\nplanet. As suggested by \citet{slav}, a proper consideration of core\nsolubility must also include solubility during the formation process,\nas dissolution of the icy parts of the core into the accreting\nhydrogen during the formation may result in the amount of ice on the\ncore itself being small by the time the planet reaches its final\nsize. A treatment of the formation processes for Jupiter and Saturn\nusing ice solubilities derived from \emph{ab initio} calculations may\nbe valuable.\n\nOur calculations strongly suggest that icy core components are highly\nsoluble in the fluid mantle under the conditions prevalent at the\ncore-mantle boundaries of Jupiter and Saturn. Since many\nrecently-discovered exoplanets are more massive and hence internally\nhotter than Jupiter, it can be expected that any initial icy cores in\nthese exoplanets will also dissolve. The presence of core erosion may allow models predicting a small present-day Jovian core to be made consistent with the large initial core required by core-accretion formation models; however, models of the interior mass distribution of the planet will need to be revised to take the inhomogeneous composition of the lower layers implied by convection-limited core redistribution into account. 
Improved models which\ninclude core redistribution processes, combined with the\ndata from the Juno probe, may assist in understanding the history and\npresent structure of Jupiter and other planets in our own and in other\nsolar systems.\n\n\n\n\acknowledgments\n\n This work was supported by NASA and NSF. Computational resources were supplied in part by TAC, NCCS and NERSC. We thank D. Stevenson for discussions.\n\n\n\clearpage\n\n\n\bibliographystyle{apj}\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\section{Introduction}\n\nBoth long and short GRB jets have to cross a significant amount of matter (the stellar atmosphere for long GRBs and the merger's ejecta in short ones) before producing the observed $\gamma$-rays. \nThis understanding has led to great interest in jet propagation within surrounding matter and the question was explored both analytically \citep[e.g.,][]{Blandford_Rees1974, Begelman_Cioffi1989, Meszaros_Waxman2001, Matzner2003, Lazzati_Begelman2005, Bromberg2011} and numerically \citep[e.g.,][]{Marti+1995, Marti+1997, Aloy+2000, macfadyen_supernovae_2001, Reynolds+2001, Zhang+2004, Mizuta+2006, Morsony+2007, Wang+2008, Lazzati+2009, Mizuta+2009, Morsony+2010, Nagakura+2011, Lopez+2013, Ito+2015, Lopez+2016, Harrison2018}.\nThis, naturally, raises the possibility that some jets are ``choked\" during their propagation and are unable to break out of the surrounding dense medium. \nThe observed temporal distributions of both long \citep{Bromberg+2012} and short \citep{Moharan_Piran2017} GRBs suggest that this happens in both types of events and there are indications that this also happens in some supernovae \citep{Piran2019}.\n\nIn cases where the jet does not emerge we may still observe the signature of the cocoon that forms. \nFirst, the breakout of the shock driven by the cocoon produces a bright flash. 
\nFor example, a cocoon breakout is most likely the origin of low-luminosity GRBs ({\it ll}GRBs) \citep{Kulkarni1998, macfadyen_supernovae_2001, Tan+2001, campana_association_2006, Wang+2007, waxman_grb_2007, katz_fast_2010, Nakar_Sari2012,Nakar2015}. \nThese types of GRBs are rarely observed; however, when their low luminosity is taken into account, they turn out to be more numerous than regular LGRBs \citep{Soderberg+2006}. \nAnother signature arises from the fast cocoon material that engulfs the star once the hot cocoon material breaks out and spreads. \nSpecifically, this material leads to very broad absorption lines that are visible as long as it is optically thick \citep{Piran2019}. \nSuch lines have been observed in several SNe, some accompanied by {\it ll}GRBs \citep{Galama+1998, Iwamoto+1998, Modjaz+2006, Mazzali+2008, Bufano+2012, Xu+2013, Ashall+2019,Izzo_et_al_2019} and others without \citep{Mazzali+2000, Mazzali+2002, Mazzali+2009}. \nFinally, the cooling emission of the cocoon will also generate a potentially detectable UV-optical transient on a time scale of hours to days \citep{nakar_piran2017}.\n\nThe important signature that helps determine the origin of the broad absorption lines is the energy-velocity distribution of the fast moving material. \nRegular spherical explosions result in a very steep distribution with roughly $\mathrm{d} E(v)\/\mathrm{d} \ln v \propto v^{-5}$ \citep[e.g.,][]{nakar_sari2010}. \nHowever, when a jet is involved in the explosion this distribution is expected to be much shallower with much more energy at high velocities.\nRecently, \cite{eisenberg2022} have shown that when the jet is successful the cocoon generates a unique flat energy-velocity distribution with $\mathrm{d} E\/\mathrm{d} \ln \Gamma\beta \propto {\rm const.}$ over a wide range of velocities from sub to mildly relativistic, where $\beta=v\/c$ and $\Gamma$ is the corresponding Lorentz factor. 
\nThey also found that, when the jet is choked, it leaves a unique signature of a flat energy-velocity distribution. \nHowever, in the case of choked jets the flat distribution covers a range of velocities that is narrower than that of outflows driven by successful jets. \nMotivated by these results we examine here in detail the energy-velocity distributions of different choked jets, focusing on the relation between the properties of the choked jet and the final energy-velocity distribution of the outflow after it becomes homologous.\n\nFor our study we use a large set of 2D relativistic hydrodynamical simulations. \nWe consider explosions that are driven by choked jets in which we vary the opening angle and the engine working time of the jet as well as the structure of the progenitor. \nWe follow the simulations until the entire outflow becomes homologous and examine the relation between these properties and the outflow energy-velocity distribution.\n\nThe paper is structured as follows. \nIn Section \ref{sec: methodology} we describe the numerical procedure adopted for the simulations. \nIn Section \ref{subsec: sim_setup} we describe the code choice and the composite mesh structure adopted, while in Section \ref{subsec: ics} we report in detail the setup for the stellar and interstellar environment and the initial conditions for the relativistic jet. \nNumerical aspects are discussed in two appendixes: a resolution study is described in Appendix \ref{sec: appendix A} and in Appendix~\ref{sec: appendix B} we explore the effect of the numerical smoothing function used for the stellar density profile. \nIn Section \ref{sec: results} we explore the results of our set of simulations.\nWe summarize our findings and consider the implications for observations in Section \ref{sec: conclusions}. 
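The diagnostic at the center of this analysis, $\mathrm{d} E\/\mathrm{d} \ln \Gamma\beta$, can be constructed from simulation output by binning the energy of each fluid cell in equal logarithmic intervals of $\Gamma\beta$. The sketch below is a minimal illustration of that binning; the function name, bin limits, and bin count are our own illustrative choices, not part of the simulation pipeline.

```python
import math

def de_dln_gb(gammabeta, energy, gb_min=1e-2, gb_max=10.0, nbins=30):
    """Histogram of ejecta energy per unit ln(Gamma*beta).

    A flat result (equal energy in every logarithmic bin) is the
    signature dE/d ln(Gamma*beta) = const discussed in the text.
    """
    dlng = math.log(gb_max / gb_min) / nbins
    hist = [0.0] * nbins
    for gb, e in zip(gammabeta, energy):
        if gb_min <= gb < gb_max:
            i = int(math.log(gb / gb_min) / dlng)
            hist[i] += e / dlng   # energy per unit ln(Gamma*beta)
    return hist, dlng

# Example: three cells with Gamma*beta = 0.1, 0.1 and 1.0
hist, dlng = de_dln_gb([0.1, 0.1, 1.0], [1.0, 2.0, 3.0])
```

Summing `hist` times `dlng` recovers the total binned energy, which is a convenient sanity check when post-processing snapshots.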
\n\n\n\n\\section{Methodology}\n\\label{sec: methodology}\n\n\\subsection{Simulation Setup}\n\\label{subsec: sim_setup}\n\nOur simulations are performed using the open source massively parallel multidimensional relativistic magneto-hydrodynamic code {\\textsc{pluto}} (v4.3) \\citep{Mignone2007}. \nThe code uses a finite-volume, shock-capturing scheme designed to integrate a system of conservation laws where the flow quantities are discretized on a logically rectangular computational grid enclosed by a boundary. \nWe use the special relativistic hydrodynamics module in 2D cylindrical coordinates. \nWe perform our calculations using a parabolic reconstruction scheme combined with a third-order Runge-Kutta time stepping. \nWe also force the code to reconstruct the 4-velocity vectors at each time step. \n\nThe 2D simulations enables us to reach high resolution with reasonable computational resources. \n3D simulations carried by \\citet{Harrison2018} suggest a similar generic evolution of the jet for the same parameters. \nThe main difference arises in the morphology of the jet head in 2D simulations that is affected by a plug at the head front. \nThis plug diverts some of the jet elements sideways to dissipated their energy in oblique shocks. \nThis difference is not significant for our purposes.\n\nWe chose the equation of state of the fluid to be ideal and with a constant relativistic polytropic index of $4\/3$. \nThis equation of state is applicable for a relativistic gas (as in the jet) as well to a radiation dominated Newtonian gas, such as the shocked stellar envelope.\n\nTo study the long term evolution of the jet and the cocoon from the star, we use a large grid spanning for several orders of magnitude. \nThis allows us to track the evolution of the system for at least two minutes after the breakout. \nAt that time the entire stellar envelope is shocked by the cocoon and it expands enough to become homologous. 
\nWe use a grid of size $4736 \times 4636$ cells, with the radial cylindrical coordinate\footnote{Throughout the paper $r$ is used for the 2D cylindrical radius while $R$ stands for the 3D radius.} \nextending within the range $r = [0,350] \times 10^{10} \cm$ and the vertical coordinate extending within the range $ z= [0.1 , 360] \times 10^{10} \cm$. \n\nWe use a combination of a uniform and two non-uniform mesh grids in $r-z$ coordinates with a decreasing resolution from the inner region of the simulation box to the outer boundaries. \nThe grid mesh is uniform in the inner part to maintain a high resolution of the jet injection and the formation of the resulting high pressure cocoon.\nThe uniform mesh has $1000 \times 900$ grid points extending in the ranges $r = [0, 1] \times 10^{10} \cm$ and $ z=[0.1, 1] \times 10^{10} \cm$ with a resolution along both coordinates of $\Delta (r,z)_\mathrm{unif.} = 10^{7} \cm $.\nNext to the uniform mesh we placed a stretched mesh with $1278^2$ grid points extending along both coordinates within the range $(r,z) = [1,6] \times 10^{10} \cm$ with a stretching ratio of $\sim1.0018$. \nThe number of grid points for this mesh is chosen such that its initial grid spacing is the same as that of the adjacent uniform mesh, $\Delta (r,z)_\mathrm{s, init} = \Delta(r,z)_\mathrm{unif.} = 10^7 \cm$, and its final grid spacing is $\Delta(r,z)_\mathrm{s, final} = 10^8 \cm $. \nWe cover the remaining grid at larger distances with a logarithmically spaced mesh with $2458^2$ grid points extending within the range $(r,z) = [6, 360] \times 10^{10}\cm$. \nThe number of grid points is chosen such that the grid spacing of the mesh at $(r,z) = 6 \times 10^{10} \cm$ coincides with the resolution of the stretched mesh, such that $\Delta(r,z)_\mathrm{log, init} = \Delta(r,z)_\mathrm{s, final} = 10^8 \cm $. 
\nIn this way we ensure a smooth variation of the resolution, without jumps, across the entire simulation grid.\nA detailed resolution study for these simulations is reported in Appendix \ref{sec: appendix A}.\n\nWe inject the jet along the inner lower $z$ boundary, denoted $z_0$ (see Sec.~\ref{sec:jet}). \nOtherwise, we impose a reflective boundary condition at this boundary as it approximates the equatorial plane of the system. \nWe impose axial-symmetric conditions for the inner vertical boundary. \nBoth outer boundaries are set to outflow.\n\n\n\subsection{Initial conditions}\n\label{subsec: ics}\n\n\subsubsection{The star}\n\nWe approximate the stellar density profile as a continuous power law that mimics the sharp decline of density in radius near the stellar edge\footnote{This profile diverges at the origin but this region does not influence the jet propagation and it is not included in our computational domain.}\n\begin{equation}\n\label{eq: rho_profile}\n \rho(R) = \begin{cases}\n \rho_* \left( \dfrac{R_*}{R} - 1\right)^2 + \rho_0 ,& \mathrm{for} ~ R \leq R_* \ , \\ \n \rho_0, & \mathrm{for} ~ ~ R > R_* \ .\n \end{cases}\n\end{equation}\nHere we choose $\rho_* = 100 ~ \g ~ \cm^{-3}$ and $R_* = 3\times 10^{10} \cm$. \nThe total integrated mass of the star is $M_* = (9 \pi \/ 5) ~ M_\odot$ (see Sec.~\ref{sec:scale} for the scaling of these parameters to other values).\n\nFor this density profile the local slope is $\alpha \equiv -\mathrm{d} \log \rho(R)\/\mathrm{d} \log R = {2}\/{(1-R\/R_*)} $. \nThe slope, $\alpha$, reaches the critical value of 3 for $R = R_*\/3$. Beyond this radius a spherical blast wave accelerates and eventually loses causality. \nWe present our results for this specific density profile. 
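The quoted stellar mass and critical radius follow directly from Eq.~\ref{eq: rho_profile}: inside the star $4\pi R^2 \rho = 4\pi \rho_* (R_* - R)^2$ (neglecting $\rho_0$), so the mass integral gives $M_* = (4\pi\/3)\,\rho_* R_*^3$. A short check, assuming the common rounding $M_\odot = 2\times10^{33}$~g (our assumption; the text does not state the value used):

```python
import math

RHO_STAR, R_STAR = 100.0, 3e10   # g cm^-3, cm (values from the text)
M_SUN = 2e33                     # g, rounded solar mass (our assumption)

# Mass integral of rho_*(R_*/R - 1)^2 from 0 to R_* (rho_0 neglected):
M_star = 4.0 * math.pi / 3.0 * RHO_STAR * R_STAR**3   # = (9 pi / 5) M_sun

def slope_magnitude(R):
    """|d log rho / d log R| for the profile of Eq. (1)."""
    return 2.0 / (1.0 - R / R_STAR)

# The slope magnitude equals 2 at the center and reaches the critical
# value 3 at R = R_STAR / 3, where a spherical blast wave accelerates.
```
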
\nHowever, in Sec.~\ref{sec: diff_profiles} we show that the results for different stellar density profiles (both inner and outer) are qualitatively similar.\n\nSurrounding the star we have an external CSM density of $\rho_0 = 1.67 \times 10^{-21} ~ \g ~\cm^{-3}$.\nThis exact value is unimportant as it is added just to avoid a numerical vacuum. \nThe interaction of the jet or the cocoon outflow with this CSM is insignificant. \nTo avoid numerical artifacts arising from the sudden drop in density at the edge of the star we smooth the density at the outer edge of the star with a power law: \n\begin{equation}\n\label{eq: smooth}\n \rho_\mathrm{smooth} (R) = \rho_\mathrm{s} \left( \dfrac{R}{R_\mathrm{s}} + 1\right)^{-8} \ , \n\end{equation}\nwith $ \rho_\mathrm{s} = 0.05 ~ \g ~ \cm^{-3}$ and a gradient scale of $R_\mathrm{s} = 5 \times 10^8 \cm$. \nWe verified that this arbitrary choice of the smoothing function does not affect our results (see Appendix ~\ref{sec: appendix B}).\nIn order to avoid any initial random motion we set a uniform and low ambient pressure of $P = 3.5 ~ \keV ~ \cm^{-3}$ within the simulation grid. \n\n\subsubsection{The jet}\n\label{sec:jet}\n\nWe inject a collimated jet with a constant luminosity $L_\mathrm{j}$, operating for $t_\mathrm{e}$ so that the total injected energy is $E_{0} = L_\mathrm{j} \times t_\mathrm{e} = 10^{51} \erg$. \nA uniform jet is injected through a nozzle with a velocity in the $z$ direction, with an initial bulk Lorentz factor $\Gamma_{0,\mathrm{j}}$, a density $\rho_\mathrm{j}$, and a specific enthalpy $h_\mathrm{j} \gg 1$. 
\nBeing relativistically hot, the jet spreads quickly to form an initial opening angle $\theta_\mathrm{j} \simeq 1\/(1.4 \Gamma_{0,\mathrm{j}})$ (see details on this injection method in \citealt{Mizuta2013} and \citealt{Harrison2018}).\n\nThe jet is numerically initialized by the injection of density, pressure, and momentum along the $z-$direction through a nozzle parallel to the $z-$axis with a radius $r_\mathrm{j}$ at an initial height $z = z_0$. \nThe head cross section is then $\Sigma_\mathrm{j} = \pi r_\mathrm{j}^2$. \nFor an initial opening angle $\theta_\mathrm{j} > 0.1~\rad$, we set up $r_\mathrm{j} = 10^8~ \cm$, allowing a sufficient mesh coverage over the nozzle, and we set the initial injection height at $z_0 = 10^9~\cm$. \n\nWe consider a constant jet luminosity $L_\mathrm{j}$.\nThis determines the product $\rho_\mathrm{j} h_\mathrm{j}$ as:\n\begin{equation}\n \label{eq: rho_j}\n \rho_\mathrm{j} h_\mathrm{j} = \dfrac{L_\mathrm{j}}{\Sigma_\mathrm{j} \Gamma_{0,\mathrm{j}}^2 c^3} \ .\n\end{equation}\nWe choose $h_\mathrm{j} = 100$. \nThis choice of the enthalpy is arbitrary, as long as $h_\mathrm{j} \gg 1$.\nThe jet's pressure is given by $ P_\mathrm{j} = (h_\mathrm{j} - 1) {\rho_\mathrm{j} c^2}\/{4}$.\n\nWe explored the parameter space by running simulations for different initial values of $L_\mathrm{j} $ at steps of $2.5 \times 10^{50}~\erg~\s^{-1}$ from $2.5 \times 10^{50}~\erg~\s^{-1}$ to $2 \times 10^{51}~\erg~\s^{-1}$ for a total of 9 different luminosities. \nFor each value of the luminosity, we run simulations for a set of different opening angles $\theta_\mathrm{j} = [0.05, 0.1, 0.2, 0.4, 0.6]~ \rad$. \nAs we keep the total jet energy fixed these conditions translate to different engine working times $t_\mathrm{e} = 10^{51}\erg \/ L_\mathrm{j}$. 
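For the canonical jet, the injection quantities follow from Eq.~\ref{eq: rho_j}. The sketch below evaluates $\rho_\mathrm{j} h_\mathrm{j}$ and $P_\mathrm{j}$ for $L_\mathrm{j} = 10^{51}$~erg~s$^{-1}$, $\theta_\mathrm{j} = 0.2$~rad, $r_\mathrm{j} = 10^8$~cm and $h_\mathrm{j} = 100$; the rounded $c = 3\times10^{10}$~cm~s$^{-1}$ is our simplification.

```python
import math

C = 3e10          # speed of light, cm/s (rounded)
L_J = 1e51        # jet luminosity, erg/s
THETA_J = 0.2     # opening angle, rad
R_J = 1e8         # nozzle radius, cm
H_J = 100.0       # specific enthalpy (arbitrary, as long as >> 1)

gamma0 = 1.0 / (1.4 * THETA_J)              # Gamma_{0,j} ~ 1/(1.4 theta_j)
sigma_j = math.pi * R_J**2                  # nozzle cross section, cm^2
rho_h = L_J / (sigma_j * gamma0**2 * C**3)  # rho_j * h_j, Eq. (3), g/cm^3
rho_j = rho_h / H_J                         # injected density, g/cm^3
P_j = (H_J - 1.0) * rho_j * C**2 / 4.0      # jet pressure, erg/cm^3
```

This yields $\rho_\mathrm{j} h_\mathrm{j}$ of order $10^2$~g~cm$^{-3}$, i.e. an injected density of order 1~g~cm$^{-3}$ for $h_\mathrm{j} = 100$.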
\nFor each of the 9 values of the luminosity we run 5 different values of the opening angle for a total of 45 simulations.\n\n\n\subsubsection{Scaling relations}\n\label{sec:scale}\nWhile we consider specific numerical values for the stellar and jet parameters, our solutions can be scaled to other values. \nThe equation of motion of the forward shock, whose speed is $\beta_\head$, is regulated by $\Tilde{L}$ \citep{Matzner2003,Bromberg2011}:\n\begin{equation}\n\label{eq: tilde_L2}\n \Tilde{L} \simeq \dfrac{L_\mathrm{j}}{ \Sigma_\mathrm{j} \rho(R) c^3} \ ,\n\end{equation} \nwith\n\begin{equation}\n\label{eq: beta_h}\n \beta_\head = \dfrac{1}{1+{\Tilde{L}}^{-1\/2}} \ . \n\end{equation}\nThe stellar size, $R_*$, is the scale length of the system. \nUsing the scalings $\Sigma_\mathrm{j} \propto R_*^2$ and $\rho(R\/R_*) \propto \rho_*$ we can express $\tilde{L}$ as\n\begin{equation} \n\Tilde{L} \propto \dfrac{E_0}{t_\mathrm{e} \rho_* R_*^2} \ . \n\end{equation}\nIf we scale the stellar radius as $R_* = \lambda R_*'$ we have to scale the density and the jet luminosity accordingly in order to maintain $\Tilde{L}$ and $\beta_\head$ unchanged. \n\nAs we show later the location where the jet is choked (i.e. where the last element launched by the jet reaches the head) relative to the stellar radius also has to be kept constant. \nThe choking location $z_\mathrm{ch}$ is roughly proportional to the engine time $t_\mathrm{e}$ \citep{Nakar2015}:\n\begin{equation}\n\label{eq: zchoke}\n z_\mathrm{ch} = \int_0^{t_\mathrm{ch}} \beta_\head c \mathrm{d} t \simeq \beta_\head c t_\mathrm{ch} = \dfrac{\beta_\head c }{1-\beta_\head} t_\mathrm{e} \ , \n\end{equation}\nwhere $ t_\mathrm{ch} = {t_\mathrm{e}}\/{(1 - \beta_\head)}$ is the choking time. \nIf $\beta_\head$ is kept constant then any transformation on $R_*$ will leave $z_\mathrm{ch} \/ R_*$ unchanged if $t_\mathrm{e} = \lambda t_\mathrm{e}' \propto R_*$. 
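These relations can be checked numerically. The sketch below chains Eqs.~\ref{eq: tilde_L2}--\ref{eq: zchoke} for purely illustrative input numbers and verifies that the scaling just described ($R_* \to \lambda R_*$, $E_0 \to \eta E_0$, $t_\mathrm{e} \to \lambda t_\mathrm{e}$, hence $L_\mathrm{j} \to (\eta\/\lambda) L_\mathrm{j}$, $\Sigma_\mathrm{j} \to \lambda^2 \Sigma_\mathrm{j}$, $\rho \to \eta\lambda^{-3}\rho$) leaves $\Tilde{L}$ and $z_\mathrm{ch}\/R_*$ unchanged.

```python
C = 3e10   # cm/s (rounded)

def tilde_L(L_j, sigma_j, rho):
    """Dimensionless jet/medium parameter, Eq. (4)."""
    return L_j / (sigma_j * rho * C**3)

def beta_head(tL):
    """Head velocity in units of c, Eq. (5)."""
    return 1.0 / (1.0 + tL**-0.5)

def z_choke(bh, t_e):
    """Choking height, Eq. (6)."""
    return bh * C * t_e / (1.0 - bh)

# Illustrative fiducial numbers (not a specific simulation)
L, sigma, rho, t_e, R_star = 1e51, 3.14e16, 10.0, 1.0, 3e10
tL = tilde_L(L, sigma, rho)
ratio1 = z_choke(beta_head(tL), t_e) / R_star

# Apply the scaling with lambda = 2, eta = 3
lam, eta = 2.0, 3.0
tL2 = tilde_L(L * eta / lam, sigma * lam**2, rho * eta / lam**3)
ratio2 = z_choke(beta_head(tL2), t_e * lam) / (R_star * lam)
# tL2 == tL and ratio2 == ratio1: the dynamics are invariant
```
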
\nThus, scaling $E_0 = \eta E_0'$ and $R_* = \lambda R_*'$ we require that $\rho_* = \eta \lambda^{-3} \rho_*'$.\n\nBecause $M_* \propto \rho_* R_*^3$, when keeping this scaling of $t_\mathrm{e}$ and $R_*$, we can rewrite Eq.~\ref{eq: tilde_L2} as $ \Tilde{L} \propto E_0\/M_*$, which means that since $\Tilde{L}$ is kept constant the typical velocity of the system $v_0 = (2E_0\/M_*)^{1\/2}$ is also conserved under these transformations.\n\nTurning to the jet parameters we recall that the only relevant quantities are $L_\mathrm{j}$, $t_\mathrm{e}$ and $\theta_\mathrm{j}$.\nThe first two determine $E_0$ and the third determines $\Gamma_{0,\mathrm{j}}$. \nThe luminosity, together with the stellar parameters, determines the product $\rho_\mathrm{j} h_\mathrm{j}$, with the condition that $h_\mathrm{j}$, while arbitrary, should be much larger than one. \n\nIn summary, given the physical scales $R_*$, $\rho_*$, $E_0$, $v_0$, $t_\mathrm{e}$ and the scalings $R_* = \lambda R_*'$, $E_0 = \eta E_0'$, the parameters defining the physics of our system, i.e. $\tilde{L}$ and $z_\mathrm{ch}\/R_*$, will not change if $t_\mathrm{e} = \lambda t'_\mathrm{e}$ and $\rho_* = \eta \lambda^{-3} \rho'_*$.\n\n\section{Results}\n\label{sec: results}\n\begin{figure*}\n \centering\n \includegraphics[scale=0.352]{figures\/frame_different_times.pdf}\n \caption{A canonical jet ($\theta = 0.2$ rad; $L_\mathrm{j} = 10^{51} \erg \ \s^{-1}$ and $t_e=1 \s$) launched in a test star with a density profile given by Eq.~\ref{eq: rho_profile}. \n The four panels show, from left to right, the relativistic $\Gamma\beta$ factor, the density $\rho$, the pressure $P$ and a scalar tracer of the ejected jet material (a unitary value implies pure jet material with no mixing). \n All the quantities but the $\Gamma\beta$ factor are normalized to the respective maximum values in order to increase the color contrast. 
The scale for the velocity of our system is dictated by $v_0$. With the above parameters $\beta_0 =v_0\/c = 0.014$ and the relativistic regime begins when $\beta \Gamma\/\beta_0\approx 50$.\n The different rows show the evolution of the jet at $t = t_\mathrm{e} = 1~\s$ (when the engine stops, first row) - the jet is clearly seen here surrounded by a cocoon, $t=1.3~\s$ (choking time, second row) - the jet has disappeared as its tail reached the head, $t=14.2~\s$ (shortly after the breakout, third row), and $t=16.0~\s$ (sideways spreading outside the star, fourth row). \n To enhance the color contrast we change the normalization scale of $\rho\/\rho_\max$ and $P\/P_\max$ in the third and fourth row in order to capture the tenuous material and pressure spilling outside the star. }\n \label{fig: jet_t01}\n\end{figure*}\n\n\n\n\begin{figure*}\n \centering\n \includegraphics[scale=0.35]{figures\/frame_040.pdf}\n \caption{4-panel figure of the breakout of an unchoked jet at $t=t_\mathrm{e} = 4$~s with an opening angle of $\theta_\mathrm{j} = 0.1$ rad with a luminosity of $L_\mathrm{j}=2.5 \times 10^{50}~\erg~\s^{-1}$. \n In this case the jet tail did not catch up with the jet head, resulting in an unchoked breakout, with the head propagating as if the jet engine were still active. We use the same color scale as the fourth row of Fig.~\ref{fig: jet_t01} for comparison.}\n \label{fig: jet_t4}\n\end{figure*}\n\n\begin{figure*}\n \centering\n \includegraphics[scale=0.38]{figures\/spreading_angle_evolution_theta_02_alternate_ratio.pdf}\n \caption{\emph{Left}: A schematic figure of how the aspect ratio $\theta(t)$ is defined (Eq.~\ref{eq: theta_of_t}) superposed on the density map of the jet cocoon while it is still inside the star. \n The dashed line represents the maximal width of the cocoon, while the dashed-dotted line represents the maximal height. 
\n \emph{Centre and right}: Evolution of the aspect ratio of the jet as a function of time (central panel) and of the cocoon\/jet head location $z_\head$ (right panel). \n The square dots represent the choking time of the jet while the triangular dots represent the breakout time of the cocoon. \n In both frames the thick dashed horizontal line represents the condition for an isotropic blast wave ($r_\mathrm{c} = z_\head$), while the horizontal dotted line represents the original opening angle of the launched jet ($0.2~\rad$ for the figure). \n On the right-hand side the dashed-dotted vertical line represents the star edge, located at $R_* = 3 \times 10^{10}~\cm$.}\n \label{fig: spreading_angle}\n\end{figure*}\n\n\begin{figure*}\n \centering\n \includegraphics[scale=0.35]{figures\/curves_divided_by_volume_bo_final.pdf}\n \caption{Classification of the energy-velocity distributions grouped according to their different cocoon volume $V_\bo$ at $t=t_\bo$. \n Each curve is marked by a different shade of color and a triplet of numbers indicating, in order: the opening angle in radians, $t_\mathrm{e}$ in seconds, and $\sqrt{V_*\/V_\bo}$. The dashed black line represents the isotropic case.}\n \label{fig: choking_height}\n\end{figure*}\n\n\begin{figure}\n \centering\n \includegraphics[scale=0.34]{figures\/correlation_volume_cutoff.pdf}\n \caption{Correlation between $\sqrt{V_*\/V_\bo}$ and the cutoff of the energy-velocity distribution for the simulated set of jets for a cutoff value of $0.25$ of the maximum of each differential energy distribution in the plots of Fig.~\ref{fig: choking_height}. \n The red-colored area represents the 1-standard-deviation error of the red fit curve. \n From the fitting formula we see that the distribution cutoff corresponds to 4 times the square root of the volume ratio at the breakout for values of $(\Gamma\beta)_\mathrm{cut}$ above 3. 
\n The black dashed line represents the linear limit for high cutoffs.}\n \\label{fig: volume_cutoff}\n\\end{figure}\n\n\\begin{figure}\n \\centering\n \\includegraphics[scale=0.34]{figures\/V_bo_z_ch.pdf}\n \\caption{Correlation between the cocoon volume at the breakout and the choking height of the jet for different values of the initial opening angle $\\theta_\\mathrm{j}$. \n The gray-shaded region represents the 1$\\sigma$ error for the fits. }\n \\label{fig: v_bo_z_ch}\n\\end{figure}\n\n\\subsection{The jet-cocoon system}\n\\label{subsec: analysis}\n\nWe start analyzing our simulation set by considering a jet with our \\emph{canonical} parameters: a 1-sided luminosity of $L_\\mathrm{j} = 10^{51} \\erg\/\\s$ and $\\theta_\\mathrm{j} = 0.2~\\rad \\simeq 11^\\circ$.\n\nWhile advancing through the stellar atmosphere, the interaction of the relativistic jet with the stellar material results in a forward-reverse shock structure that is called the head of the jet \\citep{Blandford_Rees1974, Begelman_Cioffi1989, Meszaros_Waxman2001, Matzner2003, Lazzati_Begelman2005, Bromberg2011}. \nThe jet head propagation velocity, $\\beta_\\head$, is much lower than the velocity of the jet material before it reaches the head, and for typical GRB jets it is Newtonian. \nThe shock-heated jet and stellar material that enter the head flow sideways because of the high head pressure and form a pressurized cocoon which enshrouds the jet. \nThe contact discontinuity between the materials shocked in the forward and reverse shocks divides the cocoon into inner and outer parts. \nThe inner cocoon is composed of tenuous jet material which has crossed the reverse shock, while the outer cocoon is composed of denser shocked stellar material. \nThe cocoon exerts a pressure on the jet that, if sufficiently high, collimates it, thus reducing its opening angle and consequently reducing the jet cross section compared to the uncollimated jet.
\n\nWithin our chosen stellar structure model the jet head moves at a constant velocity in the inner region of the core, where the local density slope is $\\alpha = 2$. \nIf the jet reaches outer regions where $\\alpha > 2$, it starts accelerating.\n\nThe jet is \\emph{choked} if the engine stops while the jet is propagating within the stellar envelope and the last jet element launched by the engine (\\emph{tail}, hereafter) catches up with the jet head before the latter breaks out of the star.\nIn this case all the engine's energy goes into the cocoon. \nClearly, the choking height (Eq.~\\ref{eq: zchoke}) satisfies $z_\\mathrm{ch} < R_*$. \nOtherwise we define the jet as unchoked or successful. \nThroughout the following analysis we focus mostly on jets choked at various depths inside the star. \nFor comparison we also show the case of an unchoked jet breaking out of the star.\nFor a detailed study on the energy-velocity distribution of stellar explosions which are driven by successful jets see \\cite{eisenberg2022}.\n\nWe divide the evolution of the jet into three different phases: 1) the injection and choking phase, $t < t_\\mathrm{ch}$; 2) the cocoon expansion phase, $t_\\mathrm{ch} < t < t_\\bo$; and 3) the cocoon breakout phase, $t > t_\\bo$. \nThe different phases of a choked jet are shown in Fig.~\\ref{fig: jet_t01}.\n\n\\subsubsection{Injection and Choking: $t \\leq t_\\mathrm{ch}$}\nThe engine operates for $t_\\mathrm{e}$, producing a jet. \nThis is clearly seen in the first row of Fig.~\\ref{fig: jet_t01} in both the rightmost panel showing $\\beta \\Gamma$ and the leftmost one showing that the tracer of the jet material is concentrated mostly within a narrow cylinder along the symmetry axis $z$ with radius $r \\simeq r_\\mathrm{j}$ (dark red color).\nThis behaviour is typical of the collimated regime \\citep{Bromberg2011}.\n\nAfter the jet engine stops, the last jet material launched at the injection nozzle propagates upwards.
\nAt $t_\\mathrm{e}$ the jet head is still unaware that the engine stopped, and it continues to propagate with $\\beta_\\head$ (in this specific simulation $\\beta_\\head \\simeq 0.2$) after the central jet engine is switched off.\nHowever, as $\\beta_\\head < 1$ while the jet material moves at $\\beta_\\mathrm{t} \\simeq 1$, the jet tail catches up with the head.\nOnly then does the information that the engine stopped reach the head, and the reverse shock within the jet disappears. \nThis is the time at which the jet is choked. \n\nAs the head propagates with a velocity of $\\beta_\\mathrm{h} c$ and the tail propagates at $\\beta_\\mathrm{t} \\simeq 1$, we can estimate the time at which the jet tail catches up with the head, $t_\\mathrm{ch}$, as defined in Eq.~\\ref{eq: zchoke}.\nAs long as $t < t_\\mathrm{ch}$ the jet continues to drive the head forward through the stellar atmosphere. \nThe second row of Fig.~\\ref{fig: jet_t01} shows the system at $t = t_\\mathrm{ch}$, roughly 0.3 seconds after the end of the engine activity in this specific simulation. \nOne can see that the very fast jet material around the core has disappeared. At this stage all the jet's energy has been dissipated and given to the surrounding cocoon. \n\n\\subsubsection{Cocoon expansion: $ t_\\mathrm{ch} < t < t_\\bo$}\n\nAfter the jet choking, the cocoon becomes less and less collimated and continues spreading sideways, while the forward shock decelerates when it is deep within the envelope and accelerates as it reaches the steep density gradient near the stellar edge \\citep{Irwin_Nakar_Piran2019}.\nDuring the propagation the inner cocoon transfers energy to the freshly shocked material (via $P\\mathrm{d}V$ work).\n\n\\subsubsection{Breakout: $t> t_\\bo$}\nAfter the breakout the cocoon material spreads both radially and tangentially to engulf the stellar surface, quickly shrouding the breakout point from most observers (see the last row of Fig.~\\ref{fig: jet_t01}).
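The catch-up argument in the choking discussion above can be sanity-checked with a back-of-the-envelope sketch (not the simulation machinery; it assumes a constant head velocity $\\beta_\\head$, a tail moving at $\\beta_\\mathrm{t} \\simeq 1$, and neglects the injection height):

```python
# Back-of-the-envelope catch-up time: the tail, launched at t = t_e from the base,
# moves at beta_t ~ 1 while the head moves at a constant beta_h. Both assumptions
# are simplifications; Eq. (zchoke) in the text is the actual definition.
def choking_time(t_e, beta_h, beta_t=1.0):
    """Time at which the tail position beta_t*(t - t_e) reaches the head position beta_h*t."""
    return beta_t * t_e / (beta_t - beta_h)

t_e = 1.0     # engine working time of the canonical run [s]
beta_h = 0.2  # head velocity quoted above for this simulation
t_ch = choking_time(t_e, beta_h)
print(f"t_ch = {t_ch:.2f} s")  # ~1.25 s, consistent with the choking frame at t = 1.3 s
```

With these numbers the tail catches the head about $0.25~\\s$ after the engine stops, in line with the "roughly 0.3 seconds" quoted above.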
\nThe star is blanketed by the ejecta in a time equal to $t_\\mathrm{wrap} \\simeq \\pi R_* \/(2 v_\\mathrm{bo})$, where $v_\\mathrm{bo}$ is the breakout velocity of the cocoon near the pole.\nThe shock driven by the cocoon also moves tangentially towards the equator at a slower pace, until the entire stellar envelope is shocked at $t_\\mathrm{shock} \\simeq \\pi R_* \/(2 v_\\mathrm{p})$, where $v_\\mathrm{p}$ is the pattern velocity at which the spilled material travels along the stellar surface \\citep{Irwin_et_al2021}. \nShortly after reaching the equator the shocked material propagates almost radially outwards, and it becomes homologous once the outflow reaches $\\sim 2R_*$. \n\n\\subsubsection{Successful jets} \nJets whose engine operates long enough break out of the stellar envelope before the central engine activity ends. \nThese jets are not choked and can preserve an ultra-relativistic velocity once they get out of the star. \nWe show an example of a successful jet in Fig.~\\ref{fig: jet_t4}. \nBecause the jet broke out without being choked, the cocoon structure inside the star is mostly collimated along the vertical axis. \nFrom the first and fourth panels, which show $\\Gamma\\beta$ and the jet tracer respectively, it is evident that the innermost region is still dominated by tenuous, highly relativistic jet material. \nComparing the last row of Fig.~\\ref{fig: jet_t01} and Fig.~\\ref{fig: jet_t4}, we notice that for the same normalized density and normalized pressure scale, the longer duration of the unchoked jet results in the expulsion of denser and faster stellar material with respect to the choked jet.
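For orientation, the blanketing timescale above can be evaluated with the stellar radius used in the simulations; the breakout and pattern velocities below are purely illustrative assumptions, since their values are not quoted in the text:

```python
import math

C = 2.998e10   # speed of light [cm/s]
R_STAR = 3e10  # stellar radius used in the simulations [cm]

def wrap_time(v):
    """t ~ pi * R_* / (2 v): time to cross a quarter of the circumference at velocity v."""
    return math.pi * R_STAR / (2.0 * v)

# v_bo and v_p are NOT quoted in the text; the values here are assumed for illustration.
v_bo = 0.3 * C   # hypothetical breakout velocity of the cocoon near the pole
v_p = 0.05 * C   # hypothetical pattern velocity of the spilled material

print(f"t_wrap  ~ {wrap_time(v_bo):5.1f} s")
print(f"t_shock ~ {wrap_time(v_p):5.1f} s")
```

Since $v_\\mathrm{p} < v_\\mathrm{bo}$, the star is shrouded from most observers well before the entire envelope is shocked, as described above.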
\n\n\\subsection{The spreading angle and the cocoon volume}\n\nTo describe quantitatively the geometry of the jet-cocoon during its propagation within the stellar envelope, we use the aspect ratio, defined as\n\\begin{equation}\n\\label{eq: theta_of_t}\n \\theta(t) = \\dfrac{\\max(r_\\mathrm{c}(t))}{z_\\head(t)}\\ ,\n\\end{equation}\nwhere $r_\\mathrm{c}$ is the cocoon cylindrical radius and $z_\\head$ is the head position. \nFor $\\theta \\ll 1$ the aspect ratio is a good approximation of the cocoon spreading angle.\nThe expanded cocoon at the moment of the breakout is shown in the third row of Fig.~\\ref{fig: jet_t01}. \nThe steep density transition results in an elongation and acceleration of the cocoon and the ejection of low-density material from the star, which rapidly engulfs the star's external layers. \nWe define the breakout angle $\\theta_\\bo$ as the geometric opening angle measured at the breakout time $t=t_\\bo$, namely \n\\begin{equation}\n\\label{eq: theta_bo}\n \\theta_\\bo = \\theta (t_\\bo) = \\dfrac{\\max(r_\\mathrm{c}(t_\\bo))}{R_*}\\ .\n\\end{equation}\n\nThe evolution of $\\theta(t)$ for $\\theta_\\mathrm{j} = 0.2~\\rad$ and several values of $t_\\mathrm{e}$ is reported in Fig.~\\ref{fig: spreading_angle}. \nAt first, immediately after injection, the aspect ratio starts growing. \nThe growth continues until $z_\\head$ is roughly twice the injection radius, $z_0$, at which point the aspect ratio starts decreasing, approaching the point where the cocoon opening angle is comparable to $\\theta_\\mathrm{j}$. \nThis evolution reflects the time it takes the pressure in the cocoon to build up to the point that it starts collimating the jet effectively (see \\citealt{Harrison2018} for details). \nThe evolution of the aspect ratio changes dramatically as soon as the jet is fully choked. \nSince there is no more fresh jet material to drive the head, its velocity drops sharply.
\nAt the same time the cocoon pressure, and thus its sideways expansion, is not affected. \nThe result is that the aspect ratio grows continuously after $t_{\\mathrm{ch}}$. \nThere is a short episode, just before and after the breakout, when the aspect ratio decreases as the head accelerates near the edge of the star. \nSoon after that the aspect ratio starts increasing rapidly, as some of the material that broke out of the star spreads sideways at a speed that is close to the speed of light. \nOne clear property that is seen in the figure is that jets that are choked more deeply have a longer time to expand before they break out, and therefore a deeper choking results in a wider cocoon with a larger volume at the time of breakout. \nAs we show next, this fact has important implications for the energy-velocity distribution of the outflow. \n\nThe volume of the cocoon at breakout, $V_\\bo$, is another parameter that describes the properties of the jet-cocoon system. \nAs the energy of a choked jet is given to the cocoon, for a given energy, the cocoon mass (and hence volume) at breakout corresponds to a typical expansion velocity of the cocoon material. \nAs the volume-averaged density in the shocked cocoon material and the volume-averaged density of the star are roughly the same, we can define a characteristic velocity at the breakout associated with the breakout volume, namely:\n\\begin{equation}\n\\label{eq: volume_bo}\n \\beta_\\bo \\simeq \\beta_0 \\sqrt{\\dfrac{V_*}{V_\\bo}} \\ . \n\\end{equation}\n\n\\subsection{The energy-velocity distribution}\n\nFig.~\\ref{fig: choking_height} depicts the energy-velocity distribution of the entire set of simulations for different values of the engine working time $t_\\mathrm{e}$ and different initial opening angles $\\theta_\\mathrm{j}$ at $t=120~\\s$ (when the outflow is homologous and kinetic energy dominates). \nThe $x$-axis is normalized by $\\beta_0$ and the $y$-axis by $E_0$.
\nEach curve is differentiated by color and labeled by a triplet of numbers describing, respectively, $\\theta_\\mathrm{j}$, $t_\\mathrm{e}$, and $\\sqrt{V_*\/V_\\bo}$. \nWe grouped the different curves according to $\\sqrt{V_*\/V_\\bo}$. \nFor comparison we superpose the energy-velocity distribution of an isotropic spherically symmetric explosion (black dashed line) in all panels. \n\nFig.~\\ref{fig: choking_height} shows, first, that in all cases the energy-velocity distribution exhibits a roughly constant energy per logarithmic scale of $\\Gamma\\beta$ over a range of velocities. \nThe distribution rises quickly before this rough plateau starts and decays sharply after it ends. \nThe rough plateau always starts at $\\beta_0$ and its highest velocity is determined almost entirely by $V_{\\bo}$, with a weak dependence on the jet opening angle. \nTo estimate the highest velocity of the flat part of the distribution we define $\\beta_{\\rm cut}$ as the velocity at which the energy-velocity distribution drops to 1\/4 of its maximum value. \nThis arbitrary definition provides a velocity that is slightly larger than the end of the plateau (e.g., in the spherical case $\\beta_{\\rm cut}=2\\beta_0$).\nFig.~\\ref{fig: volume_cutoff} shows that there is a strong positive correlation between $\\beta_{\\rm cut}$ and $\\sqrt{V_*\/V_{\\bo}}$. \nFor small values of $\\sqrt{V_*\/V_{\\bo}}<3$, where typically $z_{\\mathrm{ch}} \\ll R_*$, we see that $\\beta_{\\rm cut}\\approx \\beta_{\\bo}$. \nHowever, for larger values of $\\sqrt{V_*\/V_{\\bo}}$, where the choking takes place not very deep within the stellar envelope, $\\beta_{\\rm cut} > \\beta_{\\bo}$.
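The cutoff definition can be illustrated with a toy distribution (a sketch with made-up numbers, not simulation data): a flat plateau between $\\beta_0$ and $\\beta_0\\sqrt{V_*\/V_\\bo}$ followed by a steep decay yields a $\\beta_{\\rm cut}$ slightly above the end of the plateau, as stated above.

```python
def beta_cut(gamma_beta, dE_dlog, frac=0.25):
    """Highest proper velocity at which dE/dlog(Gamma beta) still exceeds
    `frac` of its maximum -- the cutoff definition used in the text."""
    peak = max(dE_dlog)
    return max(g for g, e in zip(gamma_beta, dE_dlog) if e >= frac * peak)

# Toy distribution: flat plateau up to beta_0*ratio, then a steep power-law decay.
beta_0, ratio = 0.014, 4.0                      # ratio stands in for sqrt(V_*/V_bo)
plateau_end = beta_0 * ratio
gb = [0.5 * beta_0 * 10 ** (2.2 * i / 399) for i in range(400)]
dE = [1.0 if g < plateau_end else (g / plateau_end) ** -8.0 for g in gb]

print(beta_cut(gb, dE) / beta_0)                # slightly above `ratio`
```

The steeper the decay beyond the plateau, the closer $\\beta_{\\rm cut}$ sits to the plateau's end; the $1\/4$ threshold is the arbitrary choice quoted in the text.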
\nThe origin of the material faster than $\\beta_{\\bo}$ in these cases is the inner cocoon, which retains a significant fraction of its energy at the time of the breakout, and outer cocoon material that is close to the edge of the star, where the forward shock is faster than $\\beta_{\\bo}$.\n\nThe value of $V_*\/V_{\\bo}$ is expected to depend on the jet opening angle and the choking depth. The jet opening angle determines the aspect ratio of the cocoon as long as the head that is pushed ahead by the jet is feeding the cocoon ($\\theta\\approx \\theta_\\mathrm{j}$), while the choking depth determines by how much this aspect ratio increases until the breakout. \nFig.~\\ref{fig: v_bo_z_ch} depicts the correlation between $V_\\bo$ and $z_\\mathrm{ch}$ for different values of the initial $\\theta_\\mathrm{j}$. \nAs expected, $V_\\bo$ is a function of $z_\\mathrm{ch}$ and $\\theta_\\mathrm{j}$. \nA deeper choking height and a wider jet correspond to a larger cocoon volume upon breakout.\n\n\\begin{figure}\n \\centering\n \\includegraphics[scale=0.64]{figures\/scheme.pdf}\n \\caption{A sketch of the division of the stellar volume for the analysis of the distribution of the stellar material. \n The cocoon shape taken at $t=t_\\bo$ is overlaid. We associate a scalar tracer with each of the four sectors: I) internal-axis, II) external-axis, III) internal-equatorial, and IV) external-equatorial. }\n \\label{fig: scheme}\n\\end{figure}\n\n\\begin{figure*}\n \\centering\n \\includegraphics[scale=0.34]{figures\/frame_195.pdf}\n \\caption{Maps of the four tracers I, II, III and IV (from left to right) associated with the four sectors of the stellar material (see Fig.~\\ref{fig: scheme}). \n The maps show the distribution of the stellar material at $t=60~\\s$ resulting from a jet with canonical parameters (see Sec.~\\ref{subsec: analysis}).
}\n \\label{fig: 4_tracers}\n\\end{figure*}\n\n\\subsubsection{The origin of ejecta with different final velocities}\n\n\\begin{figure}\n \\centering\n \\includegraphics[scale=0.51]{figures\/energy_vs_velocity_distribution_tracer3_195_single.pdf}\n \\caption{The energy-velocity distribution of the four different matter tracers associated with the sectors depicted in Fig.~\\ref{fig: 4_tracers}.\n Matter from region II (external along the axis) dominates the highest velocity region. \n Matter from region III (internal equatorial) dominates the low velocity regime. \n Matter from region I (internal along the axis) dominates the intermediate region and the plateau. \n Matter from region IV (external equatorial) is always subdominant.}\n \\label{fig: e_vs_v_tracers}\n\\end{figure}\n\nTo understand the origin of the various components of the outflow, we tracked the distribution of the ejected material using four different scalar tracers associated with four distinct regions of the star. \nThe division into the different regions is determined at the time of the breakout and is shown in Fig.~\\ref{fig: scheme}. \nThe tracers follow the mass in each of these regions (at the time of breakout): I) internal-axis, II) external-axis, III) internal-equatorial, IV) external-equatorial.\n\nFig.~\\ref{fig: 4_tracers} shows the distribution of the stellar material from each of the regions at $t=60~\\s$, roughly $46~\\s$ after the breakout.\nFig.~\\ref{fig: e_vs_v_tracers} shows the energy-velocity distributions of the four sectors.\n\nWe see that the quasi-spherical outflow that leads the ejecta is made only of tenuous material coming from the on-axis, external layers of the progenitor directly above the expanding jet cocoon (region II). \nThis component contains only around $2\\%$ of the total stellar mass but it carries $11\\%$ of the total ejecta energy. \nEvidently, the fastest ejecta is dominated by this sector.
\nThe material associated with the part of the stellar core that is along the axis (region I; first panel in Fig.~\\ref{fig: 4_tracers}) is much more concentrated than that of the external-axis region (II) but much more extended than the two equatorial sectors. \nIt contains $30\\%$ of the total stellar mass and $46\\%$ of the outflow energy. \nThis sector dominates the energy distribution over a wide range of velocities. \nAlmost all the rest of the mass and the energy are contained in the internal-equatorial sector (III), which carries $60\\%$ of the mass and $32\\%$ of the energy. \nIt dominates the energy at low velocities $\\lesssim \\beta_0$. \nFinally, the external-equatorial sector (IV) carries $5\\%$ of the ejecta mass and $11\\%$ of its energy. \nAll its material moves at intermediate velocities and it is subdominant at all velocities. \n\n\\subsection{The effect of the stellar density profile}\n\\label{sec: diff_profiles}\n\n\\begin{figure*}\n \\centering\n \\includegraphics[scale=0.43]{figures\/different_profile.pdf}\n \\caption{\\emph{Left}: Energy-velocity distributions of jet simulations with canonical parameters for four different stellar density profiles. \n The blue line ($\\rho \\propto R^{-2} (R_*-R)^2 $) represents the profile used in most of the previous simulations.
\\emph{Right}: A comparison of the energy distributions of the cases $\\alpha=2,n=3$ and $\\alpha=2.5,n=3$ with those arising from from two jets choked in a star with a canonical density profile at the same heights $z_\\mathrm{ch}\/R_*$, respectively.}\n \\label{fig: density_profiles_s}\n\\end{figure*}\n\n\\begin{center}\n\\begin{table*}\n\\label{tab: different_profiles}\n\\begin{tabular}{|c||ccccc|}\n\\hline \nJets & $t_\\mathrm{e}$ [s] & $\\theta_\\mathrm{j}$ [rad] & $\\rho(r)$ & $z_\\mathrm{ch}\/R_*$ & $t_\\bo$ [s] \\\\ \n\\hline \nCanonical & 1 & 0.2 & $\\propto R^{-2} (R_*-R)^2 $ & 0.21 & 13.9 \\\\ \n$\\alpha2$\\_$n2.5$ & 1 & 0.2 & $\\propto R^{-2} (R_*-R)^{2.5} $ & 0.25 & 11.2 \\\\ \n$\\alpha2$\\_$n3$ & 1 & 0.2 & $\\propto R^{-2} (R_*-R)^3 $ & 0.25 & 9.5 \\\\ \n$\\alpha2.5$\\_$n2$ & 1 & 0.2 & $\\propto R^{-2.5} (R_*-R)^2 $ & 0.28 & 6.8 \\\\ \n$\\alpha2.5$\\_$n3$ & 1 & 0.2 & $\\propto R^{-2.5} (R_*-R)^3 $ & 0.37 & 4.0 \\\\ \n\\hline \nCanonical\\_t1.33 & 1.33 & 0.2 & $\\propto R^{-2} (R_*-R)^2 $ & 0.25 & 11.5 \\\\ \nCanonical\\_t2 & 2 & 0.2 & $\\propto R^{-2} (R_*-R)^2 $ & 0.37 & 8.3 \\\\ \n\\hline\n\\end{tabular}\n\\caption{Properties of the jets injected in different density profiles. The table lists the engine working time $t_\\mathrm{e}$, the initial opening angle $\\theta_\\mathrm{j}$, the density profile $\\rho(r)$ used in the run, the choking height relative to the star radius $z_\\mathrm{ch}\/R_*$, and the breakout time $t_\\bo$.}\n\\end{table*}\n\\end{center}\n\nTo study the effect of different stellar density profiles we consider stellar density profiles that can be written as:\n\\begin{equation}\n\\label{eq: rhogen}\n \\rho (R) = \\rho_*\\left(\\dfrac{R_*}{R}\\right)^\\alpha \\left(1-\\dfrac{R}{R_*}\\right)^n \\ ,\n\\end{equation}\nwhere $n$ is the outer slope at the edge, and $\\alpha$ is the inner slope, with $\\alpha <3$. 
\nThe density profile described by Eq.~\\ref{eq: rho_profile}, which is used through the rest of the paper, is roughly equivalent to the case of $\\alpha=2,~ n=2$ and it will be referred to as the \\emph{canonical profile} hereafter. \nThe profiles that we consider are listed in Table~\\ref{tab: different_profiles}. For each profile we run a simulation with our canonical jet parameters, $\\theta_\\mathrm{j} = 0.2~\\rad$, $L_\\mathrm{j} = 10^{51}~\\erg~\\s^{-1}$, and we inject the jets from the same initial height ($z_0 = 10^9~\\cm$). \n\nFig.~\\ref{fig: density_profiles_s} shows a comparison of the energy-velocity distributions from different stellar profiles. \nFirst, it shows that the distributions are all flat over a range of velocities, implying that the main feature of the outflow from an explosion driven by a choked jet is independent of the exact stellar profile (a similar result was found by \\citealt{eisenberg2022} in the case of explosions that are driven by successful jets). \nWhen looking in more detail, the right-hand side shows two pairs of simulations. \nEach pair shows the results of different stellar profiles with similar $\\theta_\\mathrm{j}$ and $z_\\mathrm{ch}$ (which largely determine $V_{\\bo}$). \nThe distributions found in the two simulations of each pair are very similar, implying that when the cocoon properties are similar the stellar profile has a minor effect on the outflow energy-velocity distribution. \n\nOn the left-hand side of Fig.~\\ref{fig: density_profiles_s} we compare the energy-velocity distributions of jets with the exact same parameters (including $t_\\mathrm{e}$) but different envelope density profiles. \nIt shows that the stellar profile affects the velocity of the head (as was found previously by \\citealt{Bromberg2011,Harrison2018}) and therefore jets with the same properties are choked at different heights when propagating in different density profiles.
\nSince the energy-velocity profile depends strongly on $z_\\mathrm{ch}$, two jets with the same properties that propagate in stars with different density profiles will result in outflows with different energy-velocity distributions, as shown in the right-hand panel of this figure. \n\n\\subsection{The energy-velocity distribution at different viewing angles}\n\\label{sec: profiles_different_angles}\n\n\\begin{figure*}\n \\centering\n \\includegraphics[scale=0.39]{figures\/comparison_same_choking_height_255_four_plots.pdf}\n \\caption{Energy-velocity distributions for six different viewing angles (chosen so that the corresponding wedges have the same volumes). \\emph{Top}: Two jets with the same breakout volume ratio $ (V_* \/ V_\\bo)^{1\/2}= 1.95 $ but different initial opening angles. \n \\emph{Bottom}: Two jets with $ (V_* \/ V_\\bo)^{1\/2} \\sim 4 $ and with the same opening angles as the jets in the top row, respectively. \n The black dashed line represents the isotropic energy-velocity distribution for $1\/6$ of the total volume. The profiles are taken at $t=120~\\s$.}\n \\label{fig: energy_profiles_volumes}\n\\end{figure*}\n\nSince jet driven explosions are aspherical, one expects that the outflow will not be isotropic. \nFig.~\\ref{fig: energy_profiles_volumes} depicts the energy-velocity distribution of four simulations. \nFor each simulation we show the distributions at six different sections, where each section is the sum of the ejecta within a range of polar angles. \nTo see the dependence on the initial conditions we show simulations with two different jet opening angles ($0.2$ and $0.6$ rad) and two different values of the breakout volume ratio $\\sqrt{V_*\/V_\\bo}$. \nAs expected, the outflow is aspherical. \nA common, also expected, property of all four simulations is that the maximal velocity of the outflow is found around the jet axis, at low polar angles. \nThis result was also found for jet-driven explosions of successful jets \\citep{eisenberg2022}.
\nIn the two simulations with the small value of $\\sqrt{V_*\/V_\\bo}$ (i.e., low $z_\\mathrm{ch}$ and\/or wide $\\theta_\\mathrm{j}$; top panels) the energy-velocity distribution of the equatorial outflow ($\\theta \\gtrsim 60^\\circ$) is similar to that of a spherical explosion, with a typical velocity $\\beta_0$. The faster outflow is confined to lower angles. \nIn the two simulations with the large value of $\\sqrt{V_*\/V_\\bo}$ (i.e., high $z_\\mathrm{ch}$ and narrow $\\theta_\\mathrm{j}$; bottom panels) a large range of velocities is seen in all directions, but still faster velocities are observed closer to the jet axis.\n\n\\begin{figure*}\n \\centering\n \\includegraphics[scale=0.42]{figures\/density_profile_175e49_255_with_fit_four_eqvol.pdf}\n \\caption{Density profiles at different viewing angles of two jets with a similar breakout volume ratio $ (V_* \/ V_\\bo)^{1\/2}= 1.95 $ (top row) but different initial opening angles, compared to the density profiles of jets with $ (V_* \/ V_\\bo)^{1\/2} \\sim 4 $ (bottom row) and the same opening angles, respectively. \n The solid thick black line in each panel represents a power-law fit of $\\rho(r) \\propto v^{-5}$, corresponding to a flat distribution of energy per logarithmic bin of the velocity. \n The profiles are taken at $t=120~\\s$.}\n \\label{fig: density_profiles_1}\n\\end{figure*}\n\nFor clarity we present in Fig.~\\ref{fig: density_profiles_1} the radial density distribution profiles, $\\rho(R)$, for different viewing angles of four different simulations. \nThese are the same simulations and the same divisions into angular sections as in Fig.~\\ref{fig: energy_profiles_volumes}.
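The $\\rho \\propto v^{-5}$ fit shown in Fig.~\\ref{fig: density_profiles_1} follows directly from a flat energy-velocity distribution; a brief sketch (not part of the original derivation) for a homologous, sub-relativistic outflow with $R = \\beta c t$ and $\\mathrm{d}m = 4\\pi R^2 \\rho\\, \\mathrm{d}R$:

```latex
% Sketch: a flat dE/dlog(beta) implies rho ~ beta^-5 for a homologous outflow.
\\begin{equation}
  \\frac{\\mathrm{d}E}{\\mathrm{d}\\log\\beta}
  = \\frac{1}{2}\\beta^3 c^2 \\frac{\\mathrm{d}m}{\\mathrm{d}\\beta}
  = \\frac{1}{2}\\beta^3 c^2 \\, 4\\pi (c t)^3 \\beta^2 \\rho
  = 2\\pi c^5 t^3\\, \\beta^5 \\rho \\ ,
\\end{equation}
```

so a constant $\\mathrm{d}E\/\\mathrm{d}\\log\\beta$ at fixed $t$ gives $\\rho \\propto \\beta^{-5}$, the slope of the black fit lines.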
\nThis presentation is often used in studies of SN ejecta; at sub-relativistic velocities $\\rho(R) \\propto \\beta^{-5} \\frac{\\mathrm{d} E}{\\mathrm{d} \\log\\beta}$, so a flat distribution of energy per logarithmic velocity bin corresponds to $\\rho \\propto \\beta^{-5}$.\n\n\n\\section{Conclusions and implications for observations}\n\\label{sec: conclusions}\n\nWe carried out relativistic hydrodynamical simulations in 2D cylindrical coordinates of stellar explosions driven by jets, focusing on configurations of choked relativistic jets and exploring how those can lead to different realizations of the velocity distribution of the outflow in its homologous expansion phase. \nWe followed the evolution of a relativistic jet from the injection deep inside the star to the point where it is choked, and then continued to follow the cocoon as it emerges from the envelope and ultimately unbinds it, up to the point that the outflow becomes homologous. \nWe scrutinized the various stages of the jet inside the star and analyzed what happens during the choking process and the adiabatic cocoon expansion. \nWhile the results are given for a specific set of parameters, we provided scaling relations for the physical parameters of the jet and the star in order to facilitate a dimensionless treatment of the problem.\n\nWe summarize our findings as follows:\n\n\\begin{itemize}\n \\item All jet driven explosions in which the jet is not choked too deep within the star generate an outflow with a unique feature: a significant range of velocities over which the outflow carries a roughly constant amount of energy per logarithmic scale of the proper velocity ($\\Gamma\\beta$). \n This is a universal property of jet driven explosions. \n The main difference between different setups is the range of velocities over which the energy is constant.\n \n \\item The plateau of the energy-velocity distribution starts in all cases at $v_0=\\sqrt{E_0\/M_*}$.
\n The maximal velocity of the plateau depends mostly on the cocoon volume upon breakout and the corresponding velocity is $\\beta_\\bo=\\beta_0\\sqrt{V_*\/V_\\bo}$. \n For $\\sqrt{V_*\/V_\\bo} < 3$ the maximal velocity is comparable to $\\beta_\\bo$, while for larger values of $\\sqrt{V_*\/V_\\bo}$ the maximal velocity is larger than $\\beta_\\bo$ and it can become mildly relativistic. \n \n \\item The volume of the cocoon upon breakout, $V_\\bo$, depends on the choking height, $z_\\mathrm{ch}$, and on the opening angle of the jet upon launching. \n A higher $z_\\mathrm{ch}$ and a narrower opening angle lead to a smaller $V_\\bo$ and thus to an outflow that extends to higher velocities.\n \n \\item The outflow from an explosion driven by a choked jet is not isotropic.\n In general, the material along the poles (that is, along the jet direction) is faster while the material along the equator is slower.\n\\end{itemize}\n\nA spherical explosion accelerates only a negligible fraction of the stellar mass to very high velocities. \nWe have shown here that the situation is drastically different when there is a jet that breaks the symmetry. \nSuch a jet can deposit a significant amount of energy in high velocity matter, even in the case that the jet is choked within the envelope. \nThis excess in high velocity outflow (compared to a spherical explosion) is certainly expected when the entire stellar explosion is driven by a jet, but it is also expected if the jet is accompanied by a simultaneous, more spherical explosion (see e.g., \\citealt{eisenberg2022}).\nIf sufficiently optically thick, such high velocity material surrounding a SN would produce very broad absorption lines (with a typical width corresponding to 0.1-0.2c) in the observed spectrum. \nThese lines will be observed in the early spectra but will disappear later, when this rapidly expanding outer envelope becomes optically thin.
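These numbers can be tied together with illustrative arithmetic only: $\\beta_0 = 0.014$ is the velocity scale of the canonical simulation quoted earlier, and the volume ratios below are assumed values bracketing the simulated range.

```python
beta_0 = 0.014  # v_0 / c for the canonical setup quoted earlier in the text

for ratio in (2.0, 4.0, 8.0):    # ratio stands in for sqrt(V_*/V_bo); assumed values
    beta_bo = beta_0 * ratio     # Eq. (volume_bo): beta_bo = beta_0 * sqrt(V_*/V_bo)
    print(f"sqrt(V_*/V_bo) = {ratio:3.0f}  ->  beta_bo = {beta_bo:.3f} c")
```

Only for the larger ratios does $\\beta_\\bo$ approach the $0.1$-$0.2c$ widths quoted above for the broad absorption features.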
\n\nLines that show an excess of high velocity material have been observed in several SNe \\citep{Galama+1998, Iwamoto+1998, Mazzali+2000, Mazzali+2002, Modjaz+2006, Mazzali+2008, Bufano+2012, Xu+2013, Ashall+2019,Izzo_et_al_2019}.\nOur results show that, as suggested by \\cite{Piran2019}, a choked jet can lead to such high velocity material. \nHowever, we have found that some conditions are needed to observe the corresponding broad absorption lines. \nFirst, the jet must be choked at a sufficiently large distance within the stellar atmosphere. \nThe signature of jets that are choked too deep will not be as significant.\nSecond, as there is less fast moving material in directions far from the jet direction, the fast moving matter will become optically thin earlier in these directions. \nAs the broad absorption lines fade faster in these directions, observers at such viewing angles are less likely to observe the broad absorption line signature. \nThese last two facts imply that we may not observe broad absorption lines in all SNe that harbour relativistic jets. \n\nThe excess in fast material was observed in various types of stripped envelope SNe. \nThese include SNe that are associated with long GRBs, SNe that are associated with {\\it ll}GRBs, and SNe that are not associated with GRBs at all. \nLong GRBs must contain successful relativistic jets. \n{\\it ll}GRBs contain jets which may very well be choked \\citep{Kulkarni1998, macfadyen_supernovae_2001, Tan+2001, campana_association_2006, Wang+2007, waxman_grb_2007, katz_fast_2010, Nakar_Sari2012,Nakar2015}. \nWe do not know if SNe that are not associated with GRBs harbour jets, but if they do, then these jets must be choked ones. \nA previous study by \\cite{eisenberg2022} has shown that successful jets can generate the energy-velocity distribution which is observed in SNe that are associated with long GRBs.
\nOur findings here show that choked jets can explain the energy-velocity distribution seen in SNe that are associated with {\\it ll}GRBs and in SNe that are not associated with any type of GRBs. \nThis provides further support for the interpretation of the ``disappearing'' early very broad absorption lines in some SNe as arising from choked jets. \nThese findings also show that such lines may not be detected in all SNe that harbor choked jets. \nFurther exploration of this model, including estimates of the observed spectra and the fraction of events in which these lines will be observed, will be carried out in future work. \n\n\n\\section*{Acknowledgments}\nWe kindly thank Christopher Irwin for the stimulating discussions and suggestions.\nThis work is supported by the ERC grants TReX (TP and MP) and JetNS and an ISF grant 1995\/21 (EN).\n\\section*{Data Availability}\nThe data underlying this article will be shared on reasonable\nrequest to the corresponding author.\n\n\\bibliographystyle{mnras}\n\n\\section{Introduction}\n\nTopological order\\cite{W8987,W9039,WN9077} is a new kind of order beyond the\nsymmetry breaking orders\\cite{L3726} in gapped quantum systems. Topological\norders are patterns of \\emph{long-range entanglement}\\cite{CGW1038} in\n\\emph{gapped quantum liquids} (GQL)\\cite{ZW1490}.
Based on the unitary modular\ntensor category (UMTC) theory for non-abelian\nstatistics\\cite{MS8977,BakK01,K062}, in \\Ref{KW1458,W150605768}, it is proposed\nthat 2+1D bosonic topological orders are classified by $\\{\\text{UMTC}\\}\\times\n\\{\\text{iTO}_B\\}$, where $\\{\\text{UMTC}\\}$ is the set of UMTCs and\n$\\{\\text{iTO}_B\\}$ is the set of invertible topological orders\n(iTO)\\cite{KW1458,F1478} for 2+1D boson systems. In fact $\\{\\text{iTO}_B\\}=\\Z$\nwhich is generated by the $E_8$ bosonic quantum Hall (QH) state, and a table of\nUMTCs was obtained in \\Ref{RSW0777,W150605768}. Thus, we have a table (and a\nclassification) of 2+1D bosonic topological orders.\n\nIn a recent work\\cite{LW150704673}, we show that 2+1D fermionic topological\norders are classified by $\\{\\mce{\\sRp(Z_2^f)}\\}\\times \\{\\text{iTO}_F\\}$,\nwhere $\\{\\mce{\\sRp(Z_2^f)}\\}$ is the set of non-degenerate unitary\nbraided fusion categories (UBFC) over the symmetric fusion category (SFC)\n$\\sRp(Z_2^f)$ (see Definition \\ref{defMTC}). We also require\n$\\mce{\\sRp(Z_2^f)}$s to have modular extensions.\n$\\{\\text{iTO}_F\\}$ is the set of invertible topological orders for 2+1D fermion\nsystems. In fact $\\{\\text{iTO}_F\\}=\\Z$ which is generated by the $p+\\ii p$\nsuperconductor. In \\Ref{LW150704673} we computed the table for\n$\\mce{\\sRp(Z_2^f)}$s, and obtained a table (and a classification) of\n2+1D fermionic topological orders.\n\nIn \\Ref{LW150704673}, we also point out the importance of modular extensions.\nIf a $\\mce{\\sRp(Z_2^f)}$ does not have a modular extension, it means\nthat the fermion-number-parity symmetry is not on-site (\\ie\nanomalous\\cite{W1313}). On the other hand, if a $\\mce{\\sRp(Z_2^f)}$\ndoes have modular extensions, then the $\\mce{\\sRp(Z_2^f)}$ is\nrealizable by a lattice model of fermions. In this case, a given $\\mce{\\sRp(Z_2^f)}$ may have several\nmodular extensions. 
We found that different modular extensions of\n$\\mce{\\sRp(Z_2^f)}$ contain information about iTO$_F$s.\n\nOur result on fermionic topological orders can be easily generalized to\ndescribe bosonic\/fermionic topological orders with symmetry. This will be the\nmain topic of this paper. (Some of the results were announced in\n\\Ref{LW150704673}.) In this paper, we will consider symmetric GQL phases for\n2+1D bosonic\/fermionic systems. The notion of GQL was defined in \\Ref{ZW1490}.\nThe symmetry group of a GQL is $G$ (for bosonic systems) or $G^f$ (for fermionic\nsystems). If a symmetric GQL has long-range entanglement (as defined in\n\\Ref{CGW1038,ZW1490}), it corresponds to a symmetry enriched topological\n(SET) order\\cite{CGW1038}. If a symmetric GQL has short-range entanglement, it\ncorresponds to a symmetry protected trivial (SPT) order [which is also\nknown as symmetry protected topological (SPT)\norder]\\cite{GW0931,PBT1225,CLW1141,CGL1314,CGL1204}.\n\nIn this paper, we are going to show that 2+1D symmetric GQLs are classified by\n$\\mce{\\cE}$ plus their modular extensions and chiral central charge. In other\nwords, GQLs are labeled by triples $(\\cC,\\cM,c)$, where $\\cC$ is a\n$\\mce{\\cE}$, $\\cM$ a modular extension of $\\cC$, and $c$ the chiral central\ncharge of the edge state. (To be more precise, a modular extension of $\\cC$,\n$\\cM$, is a UMTC with a fully faithful embedding $\\cC \\to \\cM$. In particular,\neven if the UMTC $\\cM$ is fixed, different embeddings correspond to\ndifferent modular extensions.) Here the SFC $\\cE$ is given by $\\cE=\\Rp(G)$ for\nbosonic cases, or $\\cE=\\sRp(G^f)$ for fermionic cases.
In yet another way to\nphrase our result: we find that the structure $\\cE \\hookrightarrow \\cC\n\\hookrightarrow \\cM$ classifies\nthe 2+1D GQLs with symmetry $\\cE$, where $\\hookrightarrow$ represents the\nembeddings and\n$\\cen{\\cE}{\\cM}=\\cC$ (see Definition \\ref{cendef}).\n\nAs a special case of the above result, we find that bosonic 2+1D SPT phases with\nsymmetry $G$ are classified by the modular extensions of $\\Rp(G)$, while\nfermionic 2+1D SPT phases with symmetry $G^f$ are classified by the modular\nextensions of $\\sRp(G^f)$ that have central charge $c=0$.\n\nWe would like to mention that \\Ref{BBC1440} has classified bosonic GQLs with symmetry\n$G$, using $G$-crossed UMTCs. This paper uses a different approach so that we\ncan classify both bosonic and fermionic GQLs with symmetry. We would also like to\nmention that there is a mathematical companion \\Ref{LW160205936} to this paper, where\none can find detailed proofs and explanations of the related mathematical results. \n\n\nThe paper is organized as follows. In Section \\ref{GQLsymm}, we review\nthe notion of topological order and introduce category theory as a theory of\nquasiparticle excitations in a GQL. We will introduce a categorical way to\nview the symmetry as well. In Section \\ref{inv}, we discuss invertible GQLs\nand their classification based on modular extensions. In Sections \\ref{clGQL} and\n\\ref{clGQL2}, we generalize the above results and propose a classification of\nall GQLs. Section \\ref{stack} investigates the stacking operation from\nphysical and mathematical points of view. Section \\ref{howto} describes how to\nnumerically calculate the modular extensions and Section \\ref{examples}\ndiscusses some simple examples.
For readers with a physics background, one way to\nread this paper is to start with Sections \\ref{GQLsymm} and \\ref{clGQL2},\nand then go to Section \\ref{examples} for the examples.\n\n\n\n\\section{Gapped quantum liquids, topological order and symmetry}\n\\label{GQLsymm}\n\n\\subsection{The finite on-site symmetry and symmetric fusion category}\n\nIn this paper, we consider physical systems with an on-site symmetry described\nby a finite group $G$. For fermionic systems, we further require that $G$\ncontains a central $Z_2$ fermion-number-parity subgroup. More precisely, a\nfermionic symmetry group is a pair $(G,f)$, where $G$ is a finite group and $f\\neq\n1$ is an element of $G$ satisfying $f^2=1,fg=gf,\\forall g\\in G$. We denote the\npair $(G,f)$ as $G^f$.\n\nThere is another way to view the on-site symmetries, which is nicer because\nbosonic and fermionic symmetries can be formulated in the same manner.\nConsider a bosonic\/fermionic product state $|\\psi\\ket$ that does not break the\nsymmetry $G$: $U_g|\\psi\\ket=|\\psi\\ket,\\ g\\in G$. Then the new way to view the\nsymmetry is to use the properties of the excitations above the product state to\nencode the information of the symmetry $G$.\n\nThe product state contains only local excitations, which can be created by acting\nwith local operators $O$ on the ground state: $O|\\psi\\ket$. For any group action\n$U_g$, $U_g O|\\psi\\ket=U_g O U_g^\\dag U_g|\\psi\\ket=U_g O U_g^\\dag |\\psi\\ket$ is\nan excited state with the same energy as $O|\\psi\\ket$. Since we assume the\nsymmetry to be on-site, $U_g OU_g^\\dag$ is also a local operator. Therefore,\n$U_g OU_g^\\dag|\\psi\\ket$ and $O|\\psi\\ket$ correspond to degenerate local\nexcitations. We see that local excitations ``locally'' carry group\nrepresentations. In other words, different types of local excitations are\nlabeled by irreducible representations of the symmetry group.
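As a concrete illustration of the last statement, the fusion of local excitations is just the tensor-product decomposition of group representations, which can be computed from characters. The following sketch is our own illustrative example (for the assumed symmetry group $G=S_3$; it is not data taken from this paper):

```python
import numpy as np

# Character table of S_3 -- an illustrative example chosen by us.
# Conjugacy classes: identity (size 1), transpositions (size 3), 3-cycles (size 2).
class_sizes = np.array([1, 3, 2])
chars = {
    "trivial":  np.array([1,  1,  1]),
    "sign":     np.array([1, -1,  1]),
    "standard": np.array([2,  0, -1]),
}
order = class_sizes.sum()  # |S_3| = 6

def fusion_multiplicity(a, b, c):
    # Multiplicity of irrep c inside a (x) b, via character orthogonality
    # (the characters of S_3 are real, so no conjugation is needed).
    return int(round(np.sum(class_sizes * chars[a] * chars[b] * chars[c]) / order))

# The 2-dimensional irrep fuses with itself into all three irreps once:
# standard (x) standard = trivial + sign + standard.
decomp = {c: fusion_multiplicity("standard", "standard", c) for c in chars}
print(decomp)
```

So a pair of these two-dimensional local excitations can be in three distinct ``charge'' channels; it is precisely such fusion data that the next paragraph packages into a symmetric fusion category.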
\n\nBy looking at how the local excitations (more precisely, their group\nrepresentations) fuse and braid with each other, we arrive at the mathematical\nstructure called a symmetric fusion category (SFC). By definition, a SFC is a\nbraided fusion category where all the objects (the excitations) have trivial\nmutual statistics (\\ie centralize each other, see the next section). \nA SFC is automatically a unitary braided fusion category. \n\nIn fact, there are only two kinds of SFCs: one is the representation category of\n$G$, $\\Rp(G)$, with the usual braiding (all representations are bosonic); the\nother is $\\sRp(G^f)$, where an irreducible representation is bosonic if $f$ is\nrepresented trivially ($+1$), and fermionic if $f$ is represented\nnon-trivially ($-1$).\n\nIt turns out that the SFC (or the fusion and braiding properties of the local\nexcitations) fully characterizes the symmetry group. Therefore, it is\nequivalent to say that a finite on-site symmetry is given by a SFC $\\cE$. By Tannaka\nduality, $\\cE$ gives rise to a unique finite group $G$, and by checking the\nbraiding in $\\cE$ we know whether it is bosonic or fermionic. This is the new\nway, the categorical way, to view the symmetry. Such a categorical view of\nbosonic\/fermionic symmetry allows us to obtain a classification of symmetric\ntopological\/SPT orders.\n\n\\subsection{Categorical description of topological excitations with symmetry}\n\nIn symmetric GQLs with topological order (\\ie with long range entanglement),\nthere can be particle-like excitations with local energy density, but they\ncannot be created by local operators. They are known as (non-trivial)\ntopological excitations. Topological excitations do not necessarily carry group\nrepresentations. Nevertheless, we can still study how they fuse and braid with\neach other; so we have a unitary braided fusion category (UBFC) to describe the\nparticle-like excitations.
To proceed, we need the following key definition on\n``centralizers.''\n\\begin{dfn}\n The objects $X,Y$ in a UBFC $\\cC$ are said to \\emph{centralize} (be mutually\n local to) each other if \n \\begin{align}\n c_{Y,X}\\circ c_{X,Y}=\\mathrm{id}_{X\\otimes Y},\n \\end{align}\n where $c_{X,Y}: X\\otimes Y\\cong Y\\otimes X$ is the braiding in $\\cC$.\n\\end{dfn}\n\n Physically, we say that $X$ and $Y$ have trivial mutual statistics.\n\n\\begin{dfn}\n\\label{cendef}\n Given a subcategory $\\cD\\subset \\cC$, its \\emph{centralizer}\n $\\cen{\\cD}{\\cC}$ in $\\cC$\n is the full subcategory of objects in $\\cC$ that centralize all the objects in\n $\\cD$. \n\\end{dfn}\n\nWe may roughly view a category as a ``set'' of particle-like excitations.\nSo the centralizer $\\cen{\\cD}{\\cC}$ is the ``subset'' of particles in $\\cC$\nthat have trivial mutual statistics with all the particles in $\\cD$.\n\n\\begin{dfn}\n\\label{defMTC}\n A UBFC $\\cE$ is a \\emph{symmetric} fusion category if $\\cen{\\cE}{\\cE}=\\cE$.\nA UBFC $\\cC$ with a fully faithful embedding $\\cE\\inj \\cen{\\cC}{\\cC}$ is called\na UBFC over $\\cE$. Moreover, $\\cC$ is called a non-degenerate UBFC over $\\cE$, or\n$\\mce{\\cE}$, if $\\cen{\\cC}{\\cC}=\\cE$.\n\\end{dfn}\n\n\\begin{dfn} \\label{def:hom-bfce}\nTwo UBFCs over $\\cE$, $\\cC$ and $\\cC'$, are equivalent if there is a unitary braided\nequivalence $F:\\cC\\to\\cC'$ that preserves the embeddings, i.e.,\nthe following diagram commutes.\n \\newdir^{ (}{{}*!\/-5pt\/@^{(}}\n\\begin{align}\n\\label{Ceq}\n\\xymatrix{\n \\cE\\ar@{^{ (}->}[r]\\ar@{=}[d]&\\cC\\ar[d]^{F}\n \\\\\n \\cE\\ar@{^{ (}->}[r]&\\cC'\n}\n\\end{align}\nWe denote the category of unitary braided auto-equivalences of $\\cC$ by $\\mathcal{A}\\mathrm{ut}(\\cC)$ and its underlying group by $\\mathrm{Aut}(\\cC)$. \n\\end{dfn}\n\nWe recover the usual definition of\nUMTC when $\\cE$ is trivial, \\ie the category of Hilbert spaces, denoted by\n$\\mathrm{Vec}=\\Rp(\\{1\\})$.
In this case the subscript is omitted.\n\nPhysically, a UBFC $\\cC$ is the collection of all bulk topological excitations\nplus their fusion and braiding data. Requiring $\\cC$ to be a \n$\\mce{\\cE}$ means: (1) the set of local excitations, $\\cE$ (which is the set of\nall the irreducible representations of the symmetry group), is included in\n$\\cC$ as a subcategory; (2) $\\cC$ is anomaly-free, \\ie all the topological\nexcitations (the ones not in $\\cE$) can be detected by mutual\nbraiding\\cite{KW1458}. In other words, every topological excitation must have\nnon-trivial mutual statistics with some excitations. Those excitations that\ncannot be detected by mutual braiding (i.e., $\\cen{\\cC}{\\cC}$) are exactly the\nlocal excitations in $\\cE$. Moreover, we want the symmetry to be on-site\n(gaugeable), which requires the existence of modular extensions (see Definition \\ref{mextdef}). Such an understanding leads to the following\nconjecture:\n\\begin{conj}\n Bulk topological excitations of topological orders with symmetry $\\cE$ are\nclassified by $\\mce{\\cE}$'s that have modular extensions.\n\\end{conj}\n\\noindent\n\nWe would like to remark that $\\mce{\\cE}$'s alone fail to classify\ntopological orders. This is because two different topologically ordered phases\nmay have bulk topological excitations with the same non-abelian statistics (\\ie\ndescribed by the same $\\mce{\\cE}$). However, $\\mce{\\cE}$'s, with modular\nextensions, do classify topological orders up to invertible ones. See the next\nsection for details. The relation between anomaly and modular extension will\nalso be discussed later.\n\n\n\\section{Invertible GQLs and modular extension}\n\\label{inv}\n\n\\subsection{Invertible GQLs}\n\nThere exist non-trivial topologically ordered states that have only trivial\ntopological excitations in the bulk (but non-trivial edge states). They are\n``invertible'' under the stacking operation\\cite{KW1458,F1478}\n(see Section \\ref{stack} for details).
More generally,\nwe define \n\\begin{dfn}\nA GQL is invertible if its bulk topological excitations are all trivial\n(\\ie can all be created by local operators).\n\\end{dfn}\nConsider some invertible GQLs with the same symmetry $\\cE$. The bulk\nexcitations of those invertible GQLs are the same, described by the\nsame SFC $\\cE$. Now the question is: how do we distinguish those invertible GQLs?\n\n\nFirst, we believe that invertible bosonic topological orders with no symmetry\nare generated by the $E_8$ QH state (with central charge $c=8$) via\ntime-reversal and stacking, and form a $\\Z$ group. Stacking with an $E_8$ QH state only\nchanges the central charge by $8$, and does not change the bulk excitations or\nthe symmetry. So the only data we need to know to determine the invertible\nbosonic topological order with no symmetry is the central charge $c$. The\nstory is parallel for invertible fermionic topological orders with no symmetry,\nwhich are believed to be generated by the $p+\\ii p$ superconductor state with\ncentral charge $c=1\/2$.\n\nSecond, invertible bosonic GQLs with symmetry are generated by bosonic SPT\nstates and invertible bosonic topological orders (\\ie $E_8$ states) via\nstacking. We know that the bosonic SPT states with symmetry $G$ are\nclassified by the 3-cocycles in $H^3[G,U(1)]$. Therefore, bosonic invertible\nGQLs with symmetry $G$ are classified by $H^3[G,U(1)]\\times \\Z$ (where $\\Z$\ncorresponds to layers of $E_8$ states).\n\nHowever, this result and this point of view do not generalize naturally to\nfermionic cases or non-invertible GQLs.
Thus, we introduce an equivalent point\nof view, which covers bosonic, fermionic, and non-invertible GQLs in the same\nfashion.\n\n\\subsection{Modular extension}\n\nFirst, we introduce the notion of the modular extension of a $\\mce{\\cE}$:\n\\begin{dfn}\n\\label{mextdef}\n Given a $\\mce{\\cE}$ $\\cC$, its \\emph{modular extension} is a UMTC\n $\\cM$, together with a fully faithful embedding $\\iota_\\cM:\n \\cC\\hookrightarrow\\cM$, such that $\\cen{\\cE}{\\cM}=\\cC$, or equivalently\n $\\dim(\\cM)=\\dim(\\cC)\\dim(\\cE)$.\n\n Two modular extensions $\\cM$ and $\\cM'$ are equivalent if\n there is an equivalence between the UMTCs ${F:\\cM\\to\\cM'}$ that preserves the\n embeddings, i.e., the following diagram commutes.\n \\newdir^{ (}{{}*!\/-5pt\/@^{(}}\n\\begin{align}\n\\label{MEeq}\n\\xymatrix{\n \\cC\\ar@{^{ (}->}[r]\\ar@{=}[d]&\\cM\\ar[d]^{F}\\\\\n \\cC\\ar@{^{ (}->}[r]&\\cM'\n}\n\\end{align}\n We\n denote the set of equivalence classes of modular extensions of $\\cC$ by $\\mathcal{M}_{ext}(\\cC)$.\n\\end{dfn}\n\\begin{rmk}\n Since the total quantum dimension of the modular extensions of a given $\\cC$ is\nfixed, there are only finitely many different modular extensions, due to\n\\Ref{BNRW13}. In principle we can always perform a finite search to exhaust all\nthe modular extensions.\n\\end{rmk}\n\nRemember that $\\cC$ describes the particle-like excitations in our topological\nstate. Some of those excitations are local; they have trivial mutual statistics\nwith all other excitations. Those local excitations form $\\cE \\subset \\cC$.\nThe modular extension $\\cM$ of $\\cC$ is obtained by adding particles that have\nnon-trivial mutual statistics with the local excitations in $\\cE$, so that\nevery particle in $\\cM$ will always have non-trivial mutual statistics with\nsome particles in $\\cM$. Since the particles in $\\cE$ carry ``charges'' (\\ie\nthe irreducible representations of $G$), the added particles correspond to\n``flux'' (\\ie the symmetry twists of $G$).
So the modular extension corresponds\nto gauging\\cite{LG1209} the on-site symmetry $G$. Since we can use the gauged\nsymmetry to detect SPT orders\\cite{HW1339}, we would like to propose the following\nconjecture\n\\begin{conj}\\label{invclassB}\n Invertible bosonic GQLs with symmetry $\\cE=\\Rp(G)$ are classified by\n$(\\cM,c)$ where $\\cM$ is a modular extension of $\\cE$ and $c=0$ mod 8.\n\\end{conj}\n\n\n\\subsection{Classify 2+1D bosonic SPT states}\n\nInvertible bosonic GQLs described by $(\\cM,c)$ include both bosonic SPT states\nand bosonic topological orders. Among those, $(\\cM,c=0)$ classify bosonic SPT\nstates. In other words:\n\\begin{cor}\n2+1D bosonic SPT states with symmetry $G$ are classified by the modular\nextensions of $\\Rp(G)$ (which always have $c=0$ mod 8).\n\\end{cor} \n\nIn \\Ref{CLW1141,CGL1314,CGL1204}, it was shown that 2+1D bosonic SPT states are\nclassified by $H^3[G,U(1)]$. Such a result agrees with our conjecture, due to\nthe following theorem, which follows immediately from results in \\Ref{dgno2007}. \n\n\\begin{thm} \\label{thm:1to1-repG}\nThe modular extensions of $\\Rp(G)$ 1-to-1 correspond to 3-cocycles in $H^3[G,U(1)]$. The central charges of these modular extensions are $c=0$ mod 8.\n\\end{thm}\n\n\\begin{rmk}\nIn Sec.\\,\\ref{sec:mext-repG}, we give a more detailed explanation of the 1-to-1 correspondence in Theorem\\,\\ref{thm:1to1-repG}. Moreover, we will prove a stronger result in Theorem\\,\\ref{thm:spt}. It turns out that the set $\\cM_{ext}(\\Rp(G))$ of modular extensions of $\\Rp(G)$ is naturally equipped with a physical stacking operation such that $\\cM_{ext}(\\Rp(G))$ forms an abelian group, which is isomorphic to the group $H^3[G,U(1)]$. \n\\end{rmk}\n\n\n\n\\begin{rmk}\n$c\/8$ determines the number of layers of the $E_8$ QH states, which is the\ntopological order part of invertible bosonic symmetric GQLs.
In other words\n\\begin{align}\n&\\ \\ \\ \\\n\\{ \\text{invertible bosonic symmetric GQLs} \\} \n\\nonumber\\\\\n&=\n\\{ \\text{bosonic SPT states} \\}\\times \\{ \\text{layers of $E_8$ states} \\}.\n\\end{align}\n\\end{rmk}\n\n\n\n\n\\subsection{Classify 2+1D fermionic SPT states}\n\nThe above approach also applies to the fermionic case. Note that the invertible\nfermionic GQLs with symmetry $G^f$\nhave bulk excitations described by the SFC $\\cE=\\sRp(G^f)$.\nSo we would like to conjecture that\n\\begin{conj}\\label{invclassF}\n Invertible fermionic GQLs with symmetry $G^f$ are classified by\n$(\\cM,c)$, where $\\cM$ is a modular extension of $\\cE=\\sRp(G^f)$,\nand $c$ is the central charge determining the layers of $\\nu=8$ IQH states.\n\\end{conj}\n\\begin{rmk}\nNote that the central charge $c$ mod 8 is determined by $\\cM$, while\n$ (c - \\text{mod}(c,8))\/8$ determines the number of layers of the\n$\\nu=8$ IQH states.\n\\end{rmk}\n\\begin{rmk}\nInvertible fermionic symmetric GQLs include both fermionic SPT states and\nfermionic topological orders. $(\\cM,c)$ with $c=0$ classify fermionic SPT\nstates. \n\\end{rmk} \nIn other words, \n\\begin{cor}\n2+1D fermionic SPT states with symmetry $G^f$ are classified by the $c=0$ modular extensions of\n$\\sRp(G^f)$. \n\\end{cor} \n\\begin{rmk}\nUnlike the bosonic case, in general\n\\begin{align}\n&\\ \\ \\ \\\n\\{ \\text{invertible fermionic symmetric GQLs} \\} \n\\\\\n& \\neq\n\\{ \\text{fermionic SPT states} \\}\\times \\{ \\text{layers of $p+\\ii p$ states} \\}.\n\\nonumber \n\\end{align}\n\\end{rmk}\n\n\nWhen there is no symmetry, the invertible fermionic GQLs become the invertible\nfermionic topological orders, which have bulk excitations described by\n$\\cE=\\sRp(Z_2^f)$. $\\sRp(Z_2^f)$ has 16 modular extensions, with central\ncharges $c=n\/2, n=0,1,2,\\dots,15$. There is only one modular extension with\n$c=0$, which corresponds to the trivial product state.
Thus there is no non-trivial\nfermionic SPT state when there is no symmetry, as expected.\n\nThe modular extensions with $c=n\/2$ correspond to the invertible fermionic\ntopological orders formed by $n$ layers of $p+\\ii p$ states.\nSince the modular extensions can only determine $c$ mod 8,\nin order for the above picture to be consistent, we need to show the following\n\\begin{thm}\\label{pE8}\nThe stacking of 16 layers of $c=1\/2$ $p+\\ii p$ states is equivalent to a $\\nu=8$\nIQH state, which is in turn equivalent to an $E_8$ bosonic QH state stacked with\na trivial fermionic product state.\n\\end{thm}\n\\begin{proof}\nFirst, two layers of $p+\\ii p$ states are equivalent to one layer of the $\\nu=1$ IQH\nstate. Thus, 16 layers of $c=1\/2$ $p+\\ii p$ states are equivalent to a $\\nu=8$ IQH\nstate. To show that the $\\nu=8$ IQH state is equivalent to the $E_8$ bosonic QH state\nstacked with a trivial fermionic product state, we note that the $\\nu=8$ IQH\nstate is described by the $K$-matrix $K_{\\nu=8}=I_{8\\times 8}$, the 8-by-8\nidentity matrix, while the $E_8$ bosonic QH state stacked with a trivial\nfermionic product state is described by the $K$-matrix $K_{E_8\\boxtimes\n\\cF_0}=K_{E_8}\\oplus \\bpm 1 &0 \\\\ 0& -1 \\epm $, where $K_{E_8}$ is the matrix\nthat describes the $E_8$ root lattice.
We also know that two odd\\footnote{An odd\nmatrix is a symmetric integer matrix with at least one of its diagonal elements\nbeing odd.} $K$-matrices $K_1$ and $K_2$ describe the same fermionic topological\norder if, after direct summing with a proper number of $\\bpm 1 &0 \\\\ 0& -1 \\epm\n$'s:\n\\begin{align}\nK_1'&=K_1\\oplus \\bpm 1 &0 \\\\ 0& -1 \\epm \\oplus \\cdots\n\\nonumber\\\\\nK_2'&=K_2\\oplus \\bpm 1 &0 \\\\ 0& -1 \\epm \\oplus \\cdots,\n\\end{align}\n$K_1'$ and $K_2'$ become equivalent, \\ie\n\\begin{align}\n K_1' = U K_2' U^T,\\ \\ \\ \\ U \\in SL(N,\\Z).\n\\end{align}\nNotice that $K_{\\nu=8}\\oplus \\bpm 1 &0 \\\\ 0& -1 \\epm$ and $K_{E_8\\boxtimes\n\\cF_0}$ have the same determinant $-1$ and the same signature. Using the result\nthat odd matrices with $\\pm 1$ determinants are equivalent if they have\nthe same signature, we find that $K_{\\nu=8}\\oplus \\bpm 1 &0 \\\\ 0& -1 \\epm$\nand $K_{E_8\\boxtimes \\cF_0}$ are equivalent. Therefore the $\\nu=8$ IQH state is\nequivalent to the $E_8$ bosonic QH state stacked with a trivial fermionic product\nstate.\n\\end{proof}\n\n\n\\section{A full classification of 2+1D GQLs with symmetry}\n\\label{clGQL}\n\nWe have seen that all invertible GQLs with symmetry $G$ (or $G^f$) have the\nsame kind of bulk excitations, described by $\\Rp(G)$ (or $\\sRp(G^f)$). To\nclassify distinct invertible GQLs that share the same kind of bulk\nexcitations, we need to compute the modular extensions of $\\Rp(G)$ (or\n$\\sRp(G^f)$). This result can be generalized to non-invertible topological\norders.\n\nIn general, the bulk excitations of a 2+1D bosonic\/fermionic SET are described\nby a $\\mce{\\cE}$ $\\cC$. However, there can be many distinct SET orders that\nhave the same kind of bulk excitations described by the same $\\cC$. To\nclassify distinct SET orders that share the same kind of bulk\nexcitations $\\cC$, we need to compute the modular extensions of $\\cC$.
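The determinant-and-signature argument in the proof of Theorem \\ref{pE8} is easy to check numerically. The sketch below is our own check, not part of the paper; the $E_8$ Cartan matrix is written in one standard (Bourbaki) convention, which is an assumption:

```python
import numpy as np

def block_diag(A, B):
    # Direct sum of two square integer matrices.
    n, m = A.shape[0], B.shape[0]
    out = np.zeros((n + m, n + m), dtype=int)
    out[:n, :n] = A
    out[n:, n:] = B
    return out

# E8 Cartan matrix in the Bourbaki labeling (node 2 attached to node 4):
# one standard convention for the E8 root lattice, assumed here.
KE8 = 2 * np.eye(8, dtype=int)
for i, j in [(0, 2), (2, 3), (3, 4), (4, 5), (5, 6), (6, 7), (1, 3)]:
    KE8[i, j] = KE8[j, i] = -1

sigma = np.diag([1, -1])                      # the trivial fermionic block diag(1,-1)
K1 = block_diag(np.eye(8, dtype=int), sigma)  # K_{nu=8} direct-summed with diag(1,-1)
K2 = block_diag(KE8, sigma)                   # K_{E8} direct-summed with diag(1,-1)

def signature(K):
    ev = np.linalg.eigvalsh(K)
    return int(np.sum(ev > 0) - np.sum(ev < 0))

def is_odd(K):
    # "odd" K-matrix: at least one odd diagonal entry
    return bool(np.any(np.diag(K) % 2 == 1))

print(round(np.linalg.det(K1)), round(np.linalg.det(K2)))  # -1 -1
print(signature(K1), signature(K2))                        # 8 8
print(is_odd(K1), is_odd(K2))                              # True True
```

Both 10-by-10 matrices are odd with determinant $-1$ and signature $8$, which is exactly the input needed for the equivalence used in the proof.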
This\nleads to the following\n\\begin{conj}\n\\label{classSET}\n 2+1D GQLs with symmetry $\\cE$ (\\ie the 2+1D SET orders) are classified by $(\\cC,\\cM,c)$,\n where $\\cC$ is a $\\mce{\\cE}$ describing the bulk topological excitations,\n $\\cM$ is a modular extension of $\\cC$ describing the edge state up to\n $E_8$ states, and $c$ is the central charge determining the layers of $E_8$\n states.\n\\end{conj}\n\nLet $\\cM$ be a modular extension of a $\\mce{\\cE}$ $\\cC$. We note that\nall the simple objects (particles) in $\\cC$ are contained in $\\cM$ as\nsimple objects. Assume that the particle labels of $\\cM$ are\n$\\{i,j,\\dots, x, y,\\dots\\}$, where $i,j,\\cdots $ correspond to the particles in\n$\\cC$ and $x,y,\\cdots $ are the additional particles (not in $\\cC$). Physically,\nthe additional particles $x,y,\\cdots $ correspond to the symmetry twists of the\non-site symmetry\\cite{W1447}. The modular extension $\\cM$ describes the\nfusion and the braiding of the original particles $i,j,\\cdots $ with the symmetry\ntwists. In other words, the modular extension $\\cM$ is the resulting\ntopological order after we gauge the on-site symmetry\\cite{LG1209}.\n\nNow, it is clear that the existence of a modular extension is closely related to\nthe symmetry being on-site (\\ie anomaly-free), and hence gaugeable (\\ie allowing\nsymmetry twists). For a non-on-site symmetry (\\ie an anomalous\nsymmetry\\cite{W1313}), the modular extension does not exist since the symmetry\nis not gaugeable (\\ie does not allow symmetry twists). We also have\n\\begin{conj}\n\\label{classSETA}\n 2+1D GQLs with anomalous symmetry\\cite{W1313} $\\cE$ are\nclassified by $\\mce{\\cE}$'s that have no modular extensions.\n\\end{conj}\n\n\nIt is also important to clarify the equivalence relation between the triples\n$(\\cC,\\cM,c)$.
Two triples $(\\cC,\\cM,c)$ and $(\\cC',\\cM',c')$ are equivalent\nif: (1) $c=c'$; (2) there exist braided equivalences $F_\\cC:\\cC\\to\\cC'$ and\n$F_\\cM:\\cM\\to\\cM'$ such that all the embeddings are preserved, i.e., the\nfollowing diagram commutes.\n \\newdir^{ (}{{}*!\/-5pt\/@^{(}}\n\\begin{align}\n\\label{TOeq}\n\\xymatrix{\n \\cE\\ar@{^{ (}->}[r]\\ar@{=}[d]&\\cC\\ar@{^{ (}->}[r]\\ar[d]^{F_\\cC}\n &\\cM\\ar[d]^{F_\\cM}\\\\\n \\cE\\ar@{^{ (}->}[r]&\\cC'\\ar@{^{ (}->}[r]&\\cM'\n}\n\\end{align}\nThe equivalence classes will be in one-to-one\ncorrespondence with GQLs (\\ie SET orders and SPT orders).\n\nNote that the group of automorphisms of a $\\mce{\\cE}$ $\\cC$, denoted by\n$\\mathrm{Aut}(\\cC)$ (recall Definition\\,\\ref{def:hom-bfce}), naturally acts on the\nmodular extensions $\\mathcal{M}_{ext}(\\cC)$ by changing the embeddings, i.e. $F\\in\\mathrm{Aut}(\\cC)$ acts as follows: \n$$\n(\\cC\\hookrightarrow\\cM)\\mapsto (\\cC\\xrightarrow{F}\\cC\\hookrightarrow \\cM).\n$$ \nFor a fixed $\\cC$, the above equivalence relation amounts to saying that GQLs with bulk excitations described by a fixed $\\cC$ are in one-to-one correspondence with the quotient\n$\\mathcal{M}_{ext}(\\cC)\/\\mathrm{Aut}(\\cC)$ plus a central charge $c$. When $\\cC=\\cE$, the GQLs\nwith bulk excitations described by $\\cE$ and central charge $c=0$ are SPT phases. In this case, the group $\\mathrm{Aut}(\\cE)$, where $\\cE$ is viewed as the trivial $\\mce{\\cE}$, is trivial. Thus, SPT phases are classified by the modular extensions of\n$\\cE$ with $c=0$.\n\n\n\\section{Another description of 2+1D GQLs with symmetry}\n\\label{clGQL2}\n\n\nAlthough the above result has a nice mathematical structure, it is hard to\nimplement numerically to produce a table of GQLs. To fix this problem, we\npropose a different description of 2+1D GQLs. The second description is\nmotivated by a conjecture that the fusion and the spins of the particles,\n$(\\cN^{IJ}_K,\\cS_I)$, completely characterize a UMTC.
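For a UMTC, the data $(\\cN^{IJ}_K,\\cS_I)$ is indeed highly constrained: the fusion coefficients are determined by the $S$-matrix through the Verlinde formula, and the central charge $c$ mod 8 follows from the Gauss sum over topological spins. A minimal numerical check for the semion UMTC (standard data, used here only as our own illustration, not an entry from the tables of this paper):

```python
import numpy as np

# S-matrix and topological spins of the semion UMTC {1, s} -- standard data.
S = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)
theta = np.array([1.0, 1.0j])   # exp(2*pi*i*s_I) with spins (0, 1/4)

def verlinde(S):
    # N^{IJ}_K = sum_M S[I,M] S[J,M] conj(S[K,M]) / S[0,M]
    n = S.shape[0]
    N = np.zeros((n, n, n), dtype=complex)
    for I in range(n):
        for J in range(n):
            for K in range(n):
                N[I, J, K] = np.sum(S[I] * S[J] * np.conj(S[K]) / S[0])
    return np.rint(N.real).astype(int)

N = verlinde(S)
print(N[1, 1])   # fusion of s with itself: only the trivial channel appears

# Chiral central charge mod 8 from the Gauss sum:
# exp(2*pi*i*c/8) = (1/D) * sum_I d_I^2 theta_I, with d_I = S[0,I]/S[0,0].
d = S[0] / S[0, 0]
gauss = np.sum(d**2 * theta) / np.sqrt(np.sum(d**2))
c = (np.angle(gauss) / (2 * np.pi)) * 8
print(round(c) % 8)  # 1: the semion state has c = 1 mod 8
```

Identities of this kind (non-negative integer fusion coefficients, a consistent Gauss sum) are exactly the sort of necessary conditions one can impose on candidate data $(\\cN^{IJ}_K,\\cS_I;c)$.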
We conjecture that\n\\begin{conj}\n\\label{NsNsNs}\nThe data $( \\tilde N^{ab}_c,\\tilde s_a; N^{ij}_k,s_i; \\cN^{IJ}_K,\\cS_I;c)$, up\nto some equivalence relations, gives a one-to-one classification of\n2+1D GQLs with symmetry $G$ (for bosons) or $G^f$ (for fermions), with the\nrestriction that the symmetry group can be fully characterized by the fusion\nring of its irreducible representations. The data $( \\tilde N^{ab}_c,\\tilde\ns_a; N^{ij}_k,s_i; \\cN^{IJ}_K,\\cS_I;c)$ satisfies the conditions described in\nAppendix \\ref{cnds} (see \\Ref{W150605768} for UMTCs). \n\\end{conj}\n\nHere $( \\tilde N^{ab}_c,\\tilde s_a; N^{ij}_k,s_i; \\cN^{IJ}_K,\\cS_I;c)$\nis closely related to $(\\cE;\\cC;\\cM;c)$ discussed above. The data $(\\tilde\nN^{ab}_c,\\tilde s_a)$ describes the symmetry (\\ie the SFC $\\cE$):\n$a=1,\\cdots,\\tilde N$ label the irreducible representations and $\\tilde\nN^{ab}_c$ are the fusion coefficients of the irreducible representations. $\\tilde\ns_a =0$ or $1\/2$, depending on whether the fermion-number-parity transformation $f$\nis represented trivially or non-trivially in the representation $a$. The data\n$(N^{ij}_k,s_i)$ describes the fusion and the spins of the bulk particles\n$i=1,\\cdots,N$ in the GQL. The data $(N^{ij}_k,s_i)$ contains $(\\tilde\nN^{ab}_c,\\tilde s_a)$ as a subset, where $a$ is identified with the first\n$\\tilde N$ particles of the GQL. The data $(\\cN^{IJ}_K,\\cS_I)$ describes the\nfusion and the spins of a UMTC, and it includes $(N^{ij}_k,s_i)$ as a subset,\nwhere $i$ is identified with the first $N$ particles of the UMTC.
Also, among\nall the particles in the UMTC, only the first $N$ (\\ie $I=1,\\cdots,N$) have trivial\nmutual statistics with the first $\\tilde N$ particles (\\ie $I=1,\\cdots,\\tilde N$).\nLast, $c$ is the chiral central charge of the edge state.\n\nIf the data $( \\tilde N^{ab}_c,\\tilde s_a; N^{ij}_k,s_i)$ fully characterized\nthe $\\mce{\\cE}$, then Conjecture \\ref{NsNsNs} would be equivalent to\nConjecture \\ref{classSET}.\nHowever, for non-modular tensor categories, $( \\tilde\nN^{ab}_c,\\tilde s_a; N^{ij}_k,s_i)$ fails to fully characterize a\n$\\mce{\\cE}$. In other words, there are different $\\mce{\\cE}$'s that have the\nsame data $( \\tilde N^{ab}_c,\\tilde s_a; N^{ij}_k,s_i)$. We need to include\nextra data, such as the $F$-tensor and the $R$-tensor, to fully\ncharacterize the $\\mce{\\cE}$.\n\nIn Appendix \\ref{SETtbl}, we list the data $( \\tilde N^{ab}_c,\\tilde s_a;\nN^{ij}_k,s_i)$ that satisfy the conditions in Appendix \\ref{cnds} (without the\nmodular extension condition) in many tables. Those tables include all the\n$\\mce{\\cE}$'s (up to certain total quantum dimensions), but the tables are not\nperfect: (1) some entries in the tables may be fake and do not correspond to\nany $\\mce{\\cE}$ (since the conditions are only necessary); (2) some entries in\nthe tables may correspond to more than one $\\mce{\\cE}$ (since $( \\tilde\nN^{ab}_c,\\tilde s_a; N^{ij}_k,s_i)$ does not fully characterize a $\\mce{\\cE}$).\n\nWe then continue to compute $(\\cN^{IJ}_K,\\cS_I;c)$, the modular extensions of\n$( \\tilde N^{ab}_c,\\tilde s_a; N^{ij}_k,s_i)$. We find that the modular\nextensions can fix the imperfections mentioned above. First, we find that the\nfake entries do not have modular extensions, and are ruled out. Second, as we\nwill show in Section \\ref{stack}, all $\\mce{\\cE}$'s have the same number of\nmodular extensions (if they exist); therefore, an entry that corresponds to\nmultiple $\\mce{\\cE}$'s has more modular extensions.
The modular extensions can tell\nus which entries correspond to multiple $\\mce{\\cE}$'s. This leads to the\nconjecture that the full data $( \\tilde N^{ab}_c,\\tilde s_a; N^{ij}_k,s_i;\n\\cN^{IJ}_K,\\cS_I;c)$ gives rise to a one-to-one classification of 2+1D GQLs, and allows us to calculate the\ntables of 2+1D GQLs, which include 2+1D SET states and 2+1D SPT states. Those\nare given in Section \\ref{examples}.\n\nAs for the equivalence relation, we only need to consider\n$(\\cN^{IJ}_K,\\cS_I;c)$, since the data $(\\tilde N^{ab}_c,\\tilde s_a;\nN^{ij}_k,s_i)$ is included in $(\\cN^{IJ}_K,\\cS_I;c)$. Two such data\n$(\\cN^{IJ}_K,\\cS_I;c)$ and $(\\bar \\cN^{IJ}_K,\\bar \\cS_I;\\bar c)$ are called\nequivalent if $c=\\bar c$, and $(\\cN^{IJ}_K,\\cS_I)$ and $(\\bar \\cN^{IJ}_K,\\bar\n\\cS_I)$ are related by two permutations of indices, one in the range $N_{\\cM} \\geq I\n> N$ and one in the range $N\\geq I > \\tilde N$, where $N_{\\cM}$ is the range of\n$I$. Such an equivalence relation corresponds to the one in eqn. (\\ref{TOeq})\nand will be called the TO-equivalence relation. We use the TO-equivalence\nrelation to count the number of GQL phases (\\ie the number of SET orders and\nSPT orders).\n\nWe can also define another equivalence relation, called the ME-equivalence\nrelation: we say $(\\cN^{IJ}_K,\\cS_I;c)$ and $(\\bar \\cN^{IJ}_K,\\bar \\cS_I;\\bar\nc)$ are ME-equivalent if $c=\\bar c$ and they only differ by a permutation\nof indices in the range $I > N$. The ME-equivalence relation is closely related to\nthe one defined in eqn. (\\ref{MEeq}). We use the ME-equivalence relation to\ncount the number of modular extensions of a \\emph{fixed} $\\cC$. \n\nLast, let us explain the restriction on the symmetry group. In Conjecture\n\\ref{NsNsNs}, we try to use the fusion $\\tilde N^{ab}_c$ of the irreducible\nrepresentations to characterize the symmetry group.
However, it is known that\ncertain different groups may have identical fusion rings for their irreducible\nrepresentations. So we need to restrict the symmetry group to be a group\nthat can be fully characterized by its fusion ring. Those groups include\nsimple groups and abelian groups\\cite{Yuan16}. If we do not impose such a\nrestriction, then the Conjecture \\ref{NsNsNs} gives rise to GQLs with a given\nsymmetry fusion ring, instead of a given symmetry group.\n\n\n\\section{The stacking operation of GQLs}\n\\label{stack}\n\n\\subsection{Stacking operation}\n\nConsider two GQLs $\\cC_1$ and $\\cC_2$. If we stack them together (without\nintroducing interactions between them), we obtain another GQL, which is denoted\nby $\\cC_1\\boxtimes \\cC_2$. The stacking operation $\\boxtimes$ makes the set of\nGQLs into a monoid. $ \\boxtimes $ does not make the set of GQLs into a group,\nbecause in general, a GQL $\\cC$ may not have an inverse under $\\boxtimes$, \\ie\nthere is no GQL $\\cD$ such that $\\cC\\boxtimes \\cD$ becomes a trivial product\nstate. This is because when a GQL has non-trivial topological excitations,\nstacking it with another GQL can never cancel out those topological\nexcitations.\n\nWhen we are considering GQLs with symmetry $\\cE$, the simple stacking\n$\\boxtimes$ will ``double'' the symmetry, leading to a GQL with symmetry\n$\\cE\\bt\\cE$ ($\\Rp(G\\times G)$ or $\\sRp(G^f\\times G^f)$). In general we allow\nlocal interactions between the two layers to break some symmetry such that the\nresulting system only has the original symmetry $\\cE$ (in terms of the symmetry group, we keep only the\nsubgroup $G\\hookrightarrow G\\times G$ with the diagonal embedding $g\\mapsto\n(g,g)$). This leads to the stacking between GQLs with symmetry $\\cE$,\ndenoted by $\\bt_\\cE$. 
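At the level of the symmetry groups alone, the reduction from $G\times G$ to the diagonal $G$ can be made concrete; the snippet below is a minimal sketch in our own notation, with $G=Z_2$ written additively.

```python
from itertools import product

def break_to_diagonal(G):
    """Return G x G and its diagonal subgroup {(g, g)}."""
    GxG = list(product(G, G))
    return GxG, [(g, h) for (g, h) in GxG if g == h]

Z2 = [0, 1]                       # Z_2, written additively
GxG, diag = break_to_diagonal(Z2)

assert len(GxG) == len(Z2) ** 2   # the doubled symmetry: |G x G| = |G|^2
assert len(diag) == len(Z2)       # the residual symmetry: |G|

# The diagonal is closed under componentwise addition mod 2, so it is
# indeed a subgroup isomorphic to G.
for (a, b), (c, d) in product(diag, repeat=2):
    assert ((a + c) % 2, (b + d) % 2) in diag
```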
Similarly, $\\bt_\\cE$ makes GQLs with symmetry $\\cE$ a\nmonoid, but in general not all GQLs are invertible.\n\nHowever, if the bulk excitations of $\\cC$ are all local (\\ie all described by\nthe SFC $\\cE$), then $\\cC$ will have an inverse under the stacking operation\n$\\boxtimes_\\cE$, and this is why we call such GQLs invertible. Those invertible\nGQLs include invertible topological orders and SPT states.\n \n\n\\subsection{The group structure of bosonic SPT states}\n\\label{grpSPT}\n\nWe have proposed that 2+1D SPT states are classified by $c=0$ modular\nextensions of the SFC $\\cE$ that describes the symmetry. Since SPT states are\ninvertible, they form a group under the stacking operation $ \\boxtimes_\\cE $.\nThis implies that the modular extensions of the SFC should also form a group\nunder the stacking operation. So checking if the modular extensions of the\nSFC have a group structure is a way to find support for our conjecture.\n\nHowever, in this section, we will first discuss such a stacking\noperation and group structure from a physical point of view.\nWe will only consider bosonic SPT states.\n\nIt has been proposed that the bosonic SPT states are described by group\ncohomology $\\cH^{d+1}[G,U(1)]$\\cite{CLW1141,CGL1314,CGL1204}. However, it has\nnot been shown that those bosonic SPT states form a group under the stacking\noperation. Here we will fill this gap. An ideal bosonic SPT state of symmetry\n$G$ in $d+1$D is described by the following path integral\n\\begin{align}\n Z =\\sum_{\\{g_i\\}} \\prod_{\\{i,j,\\cdots \\}} \\nu_{d+1}(g_i,g_j,\\cdots )\n\\end{align}\nwhere $\\nu_{d+1}(g_i,g_j,\\cdots )$ is a function $G^{d+2} \\to U(1)$, which is a\ncocycle $\\nu_{d+1}\\in \\cH^{d+1}[G,U(1)]$. 
Here the space-time is a complex whose\nvertices are labeled by $i,j,\\cdots $, and $\\prod_{\\{i,j,\\cdots \\}}$ is the\nproduct over all the simplices of the space-time complex.\nAlso $\\sum_{\\{g_i\\}}$ is a sum over all $g_i$ on each vertex.\n\nNow consider the stacking of two SPT states described by the cocycles\n$\\nu_{d+1}'$ and \n$\\nu_{d+1}''$: \n\\begin{align}\n Z =\\sum_{\\{g'_i,g''_i\\}} \\prod_{\\{i,j,\\cdots \\}} \n\\nu'_{d+1}(g'_i,g'_j,\\cdots )\n\\nu''_{d+1}(g''_i,g''_j,\\cdots ) .\n\\end{align}\nSuch a stacked state has a symmetry\n$G \\times G$ and is a $G \\times G$ SPT state.\n\nNow let us add a term to break the $G \\times G$-symmetry down to the $G$-symmetry\nand consider\n\\begin{align}\n\\label{ZU}\n Z =\\sum_{\\{g'_i,g''_i\\}} \\prod_{\\{i,j,\\cdots \\}} &\n\\nu'_{d+1}(g'_i,g'_j,\\cdots )\n\\nu''_{d+1}(g''_i,g''_j,\\cdots ) \\times \n\\nonumber\\\\\n \\prod_i & \\ee^{-U|g_i'-g_i''|^2}\n,\n\\end{align}\nwhere $|g'-g''|$ is an invariant distance between group elements. As we tune $U$ from\n$0$ to $+ \\infty $, the stacked system changes into the system for an ideal\nSPT state described by the cocycle\n$\\nu_{d+1}(g_i,g_j,\\cdots)=\\nu_{d+1}'(g_i,g_j,\\cdots )\n\\nu_{d+1}''(g_i,g_j,\\cdots )$. If such a deformation does not cause any phase\ntransition, then we can show that the stacking of a $\\nu_{d+1}'$-SPT state with\na $\\nu_{d+1}''$-SPT state gives rise to a $\\nu_{d+1}=\\nu_{d+1}'\\nu_{d+1}''$-SPT\nstate. Thus, the key to showing that the stacking operation gives rise to the group\nstructure for the SPT states is to show that the theory \\eqn{ZU} has no phase\ntransition as we tune $U$ from $0$ to $+\\infty $.\n\nTo show there is no phase transition, we put the system on a closed space-time\nwith no boundary, say $S^{d+1}$. 
In this case, $\\prod_{\\{i,j,\\cdots \\}} \n\\nu'_{d+1}(g'_i,g'_j,\\cdots ) \\nu''_{d+1}(g''_i,g''_j,\\cdots )=1$, since\n$\\nu_{d+1}'$ and $\\nu_{d+1}''$ are cocycles.\nThus the path integral \\eq{ZU} is reduced to\n\\begin{align}\n Z =\\sum_{\\{g'_i,g''_i\\}} \\prod_i \\ee^{-U|g_i'-g_i''|^2} \n= \\Big(|G| \\sum_g \\ee^{-U|1-g|^2}\\Big)^{N_v} ,\n\\end{align}\nwhere $N_v$ is the number of vertices and $|G|$ the order of the symmetry\ngroup. We see that the free energy density\n\\begin{align}\n f = -\\lim_{N_v\\to\\infty}\\ln Z\/N_v\n\\end{align}\nis a smooth function of $U$ for $U\\in [0, \\infty )$. There is indeed no phase\ntransition.\n\nThe above result is highly nontrivial from a categorical point of view.\nConsider two 2+1D bosonic SPT states described by two modular extensions $\\cM'$\nand $\\cM''$ of $\\Rp(G)$. The natural tensor product $\\cM' \\boxtimes \\cM''$ is\nnot a modular extension of $\\Rp(G)$, but a modular extension of $\\Rp(G)\n\\boxtimes \\Rp(G)=\\Rp(G \\times G)$. So, $\\cM' \\boxtimes \\cM''$ describes a $G\n\\times G$-SPT state. According to the above discussion, we need to break the\n$G \\times G$-symmetry down to the $G$-symmetry to obtain the $G$-SPT state.\nSuch a symmetry breaking process corresponds to the so-called ``anyon\ncondensation'' in category theory. We will discuss such anyon condensation\nlater. The stacking operation $\\bt_\\cE$, with such a symmetry breaking process\nincluded, is the correct stacking operation that maintains the symmetry $G$.\n\n\n\\subsection{Mathematical construction of the stacking operation}\n\n\nWe have conjectured that a 2+1D topological order with symmetry $\\cE$ is\nclassified by $(\\cC,\\cM_\\cC,c)$, where $\\cC$ is a $\\mce{\\cE}$, $\\cM_\\cC$ is a\nmodular extension of $\\cC$, and $c$ is the central charge. 
If we have another\ntopological order of the same symmetry $\\cE$ described by $(\\cC',\\cM_{\\cC'},c')$,\nstacking $(\\cC,\\cM_\\cC,c)$ and $(\\cC',\\cM_{\\cC'},c')$ should give a third\ntopological order described by similar data $(\\cC'',\\cM_{\\cC''},c'')$:\n\\begin{align}\n (\\cC,\\cM_\\cC,c) \\boxtimes_\\cE (\\cC',\\cM_{\\cC'},c') =\n(\\cC'',\\cM_{\\cC''},c'')\n\\end{align}\n\nIn this section, we will show that such a stacking operation can be defined\nmathematically. This is evidence supporting our Conjecture \\ref{classSET}.\nWe would like to point out that a special case of the above result for\n$\\cC=\\cC'=\\cC''=\\cE=\\Rp(G)$ was discussed in Section \\ref{grpSPT}.\n\nTo define $\\boxtimes_\\cE$ mathematically, we first introduce\n\\begin{dfn}\\label{alg}\n A \\emph{condensable algebra} in a UBFC $\\cC$ is a\n triple $(A,m,\\eta)$, $A\\in\\cC$,\n $m:A\\ot A\\to A$, $\\eta:\\one\\to A$ satisfying\n \\begin{itemize}\n \\item Associative: $m(\\id_A\\ot m)=m(m\\ot \\id_A)$\n \\item Unit: $m(\\eta\\ot\\id_A)=m(\\id_A\\ot\\eta)=\\id_A$\n \\item Isometric: $m m^\\dag=\\id_A$\n \\item Connected: $\\Hom(\\one,A)=\\C$\n \\item Commutative: $m c_{A,A}=m$\n \\end{itemize}\n\\end{dfn}\nPhysically, such a condensable algebra $A$ is a composite self-bosonic anyon\nthat satisfies additional conditions such that one can condense $A$ to obtain\nanother topological phase.\n\n\\begin{dfn}\n A (left) \\emph{module} over a condensable algebra $(A,m,\\eta)$ in $\\cC$ is a\n pair $(X,\\rho)$, $X\\in\\cC$, $\\rho:A\\ot X\\to X$ satisfying\n \\begin{gather}\n \\rho(\\id_A\\ot\\rho)=\\rho(m\\ot \\id_X),\\nonumber\\\\\n \\rho(\\eta\\ot\\id_X)=\\id_X.\n \\end{gather}\n It is further a \\emph{local} module if\n \\begin{align*}\n \\rho c_{X,A} c_{A,X}=\\rho.\n \\end{align*}\n\\end{dfn}\n We denote the category of left $A$ modules by $\\cC_A$.\n A left module $(X,\\rho)$ is turned into a right module via the braiding,\n $(X,\\rho c_{X,A})$ or $(X,\\rho c_{A,X}^{-1})$, and thus an 
$A$-$A$ bimodule.\n The relative tensor functor $\\ot_A$ of bimodules then turns $\\cC_A$ into a fusion category.\n (This is known as $\\alpha$-induction in the subfactor context.)\n In general there can be two monoidal structures on $\\cC_A$, since there are\n two ways to turn a left module into a bimodule (usually we pick one for\n definiteness\n when considering $\\cC_A$ as a fusion category).\n The two monoidal structures coincide for the fusion subcategory $\\cC_A^0$ of\n local $A$ modules. Moreover, $\\cC_A^0$ inherits the braiding from $\\cC$ and\n is also a UBFC. The local modules are nothing but the anyons in the\n topological phase obtained after condensing $A$.\n\\begin{lem}[DMNO\\cite{dmno}]\n \\[\\dim(\\cC_A)=\\frac{\\dim(\\cC)}{\\dim(A)}.\\]\n If $\\cC$ is a UMTC, then so is $\\cC_A^0$, and\n \\[\\dim(\\cC_A^0)=\\frac{\\dim(\\cC)}{\\dim(A)^2}.\\]\n\\end{lem}\n A non-commutative algebra $A$ is also of interest. We have the left center\n $A_l$ of $A$, the maximal subalgebra such that $m c_{A_l,A}=m$, and the right\n center $A_r$, the maximal subalgebra such that $m c_{A,A_r}=m$. $A_l$ and\n $A_r$ are commutative subalgebras, thus condensable.\n\\begin{thm}[FFRS\\cite{FFRS03}]\n There is a canonical equivalence between the categories of local modules\n over the left and right centers, $\\cC_{A_l}^0=\\cC_{A_r}^0$.\n\\end{thm}\n\\begin{dfn}\n The Drinfeld center $Z(\\cA)$ of a monoidal category $\\cA$ is a monoidal category with\n objects as pairs $(X\\in\\cA,b_{X,-})$, where $b_{X,-}: X\\ot -\\to -\\ot X$ are\n half-braidings that satisfy conditions similar to those for braidings. Morphisms and\n the tensor product are naturally defined.\n\\end{dfn}\n $Z(\\cA)$ is a braided monoidal category. 
There is a forgetful tensor functor\n $for_\\cA:Z(\\cA)\\to \\cA$, $(X,b_{X,-})\\mapsto X$ that forgets the half-braidings.\n\\begin{thm}[M{\\\"u}ger\\cite{Mue01}]\n $Z(\\cA)$ is a UMTC if $\\cA$ is a fusion category, and\n $\\dim(Z(\\cA))=\\dim(\\cA)^2$.\n\\end{thm}\n\\begin{dfn}\n Let $\\cC$ be a braided fusion category and $\\cA$ a fusion category. A tensor\n functor $F:\\cC\\to \\cA$ is called a central functor if it factorizes through\n $Z(\\cA)$, i.e., there exists a braided tensor functor $F':\\cC\\to Z(\\cA)$ such\n that $F=F'for_\\cA$.\n\\end{dfn}\n\n\\begin{lem}\n [DMNO\\cite{dmno}]\n Let $F:\\cC\\to\\cA$ be a central functor, and $R:\\cA\\to\\cC$ the right adjoint\nfunctor of $F$.\nThen the object $A=R(\\one) \\in\\cC$ has a canonical structure of a condensable algebra.\n$\\cC_A$ is monoidally equivalent to the image of\n$F$, i.e. the smallest fusion subcategory of $\\cA$ containing $F(\\cC)$.\n\\end{lem}\n\n\\begin{exa}\n If $\\cC$ is a UBFC, it is naturally embedded into\n $Z(\\cC)$, as is $\\overline\\cC$. 
Therefore, $\\cC\\bt\\overline\\cC\\hookrightarrow Z(\\cC)$.\n Composing this embedding with the forgetful functor $for_\\cC:Z(\\cC)\\to\\cC$, we\n get a central functor\n\\begin{align*}\n \\cC\\bt\\overline\\cC &\\to \\cC\\\\\n X\\bt Y&\\mapsto X\\ot Y.\n\\end{align*}\nLetting $R$ be its right adjoint functor, we obtain a condensable algebra\n$L_\\cC:=R(\\one)\\cong \\oplus_i ( i\\bt \\bar i) \\in \\cC\\bt\\overline\\cC$ ($\\bar i$\ndenotes the dual object, or anti-particle, of $i$) and $\\cC=(\n\\cC\\bt\\overline\\cC)_{L_\\cC}$, $\\dim(L_\\cC)=\\dim(\\cC)$.\nIn particular, for a symmetric category $\\cE$, $L_\\cE$ is a condensable algebra\nin $\\cE\\bt\\cE$, and $\\cE=(\\cE\\bt\\cE)_{L_\\cE}=(\\cE\\bt\\cE)_{L_\\cE}^0$; since $\\cE$\nis symmetric, all $L_\\cE$-modules are local.\nCondensing $L_\\cE$ is nothing but breaking the symmetry from $\\cE\\bt\\cE$ to\n$\\cE$.\n\\end{exa}\n\n\nNow, we are ready to define the stacking operation for $\\mce{\\cE}$'s as well\nas their modular extensions.\n\\begin{dfn}\\label{stacking}\n Let $\\cC,\\cD$ be $\\mce{\\cE}$'s, and $\\cM_\\cC,\\cM_\\cD$ their \n modular extensions. The stacking is defined by:\n \\begin{align*}\n \\cC\\bt_\\cE\\cD:=(\\cC\\bt\\cD)^0_{L_\\cE},\\quad \\cM_\\cC\\bt_\\cE \\cM_\\cD:=(\\cM_\\cC\\bt\n \\cM_\\cD)_{L_\\cE}^0\n \\end{align*}\n\\end{dfn}\nNote that in Ref.~\\onlinecite{DNO11}, the tensor product $\\bt_\\cE$ for\n$\\mce{\\cE}$'s is defined as $(\\cC\\bt\\cD)_{L_\\cE}$. For $\\mce{\\cE}$'s the two\ndefinitions coincide, $(\\cC\\bt\\cD)^0_{L_\\cE}=(\\cC\\bt\\cD)_{L_\\cE}$, since $L_\\cE$ lies in\nthe centralizer of $\\cC\\bt\\cD$, which is $\\cE\\bt\\cE$. 
But for the modular extensions we have to take\nthe unusual definition above.\n\n\\begin{thm}\n $\\cC\\bt_\\cE\\cD$ is a $\\mce{\\cE}$, and\n $\\cM_\\cC\\bt_\\cE\\cM_\\cD$ is a modular extension of $\\cC\\bt_\\cE\\cD$.\n\\end{thm}\n\\begin{proof}\n The embeddings\n$\\cE=(\\cE\\bt\\cE)_{L_\\cE}^0\\hookrightarrow (\\cC\\bt\\cD)^0_{L_\\cE}=\\cC\\bt_\\cE\\cD\n\\hookrightarrow (\\cM_\\cC\\bt\\cM_\\cD)^0_{L_\\cE}=\\cM_\\cC\\bt_\\cE\\cM_\\cD$\nare obvious.\nSo $\\cC\\bt_\\cE\\cD$ is a UBFC over $\\cE$. Also\n\\begin{align*}\n \\dim(\\cC\\bt_\\cE\\cD)=\\frac{\\dim(\\cC\\bt\\cD)}{\\dim(L_\\cE)}\n =\\frac{\\dim(\\cC)\\dim(\\cD)}{\\dim(\\cE)},\n\\end{align*}\nand $\\cM_\\cC\\bt_\\cE\\cM_\\cD$ is a UMTC,\n\\begin{align*}\n \\dim(\\cM_\\cC\\bt_\\cE\\cM_\\cD)\n =\\frac{\\dim(\\cM_\\cC\\bt\\cM_\\cD)}{\\dim(L_\\cE)^2}=\\dim(\\cC)\\dim(\\cD).\n\\end{align*}\n Thus, $\\cM_\\cC\\bt_\\cE\\cM_\\cD$ is a\n modular extension of $\\cC\\bt_\\cE\\cD$.\n\\end{proof}\n\n Take $\\cD=\\cE$. Note that $\\cC\\bt_\\cE\\cE=\\cC$. Therefore, for any\n modular extension $\\cM_\\cE$ of $\\cE$, $\\cM_\\cC\\bt_\\cE\\cM_\\cE$ is still a\n modular extension of $\\cC$. In the following we want to show the converse,\n that one can extract the ``difference'', a modular extension of $\\cE$,\n between two modular extensions of $\\cC$.\n\n\\begin{lem}\\label{Lag}\n We have $(\\cC\\bt\\overline\\cC)_{L_\\cC}^0=\\cen{\\cC}{\\cC}$.\n\\end{lem}\n\\begin{proof}\n $(\\cC\\bt\\overline\\cC)_{L_\\cC}$ is equivalent to $\\cC$ (as a fusion\ncategory). Moreover, for $X\\in\\cC$ the equivalence gives the free module $\nL_\\cC\\ot(X\\bt\n\\one )\\cong L_\\cC\\ot(\\one\\bt X)$. $L_\\cC\\ot(X\\bt\\one )$ is a local $L_\\cC$ module if and only\nif $X\\bt \\one$ centralizes $L_\\cC$. This is the same as $X\\in\n\\cen{\\cC}{\\cC}$. Therefore we have $(\\cC\\bt \\overline\\cC)_{L_\\cC}^0=\\cen{\\cC}{\\cC}$.\n\\end{proof}\n\n\n\\begin{thm}\\label{main}\n Let $\\cM$ and $\\cM'$ be two modular extensions of the $\\mce{\\cE}$ $\\cC$. 
There\n exists a unique $\\cK\\in\\mathcal{M}_{ext}(\\cE)$ such that $\\cK\\bt_\\cE\\cM=\\cM'$. Such $\\cK$\n is given by\n \\begin{align*}\n \\cK=(\\cM'\\bt \\overline\\cM)_{L_\\cC}^0.\n \\end{align*}\n\\end{thm}\n\\begin{proof}\n $\\cK$ is a modular extension of $\\cE$. This follows from\n Lemma \\ref{Lag}: $\\cE=\\cen{\\cC}{\\cC}=(\\cC\\bt\\overline \\cC)^0_{L_\\cC}$ is a full\n subcategory of $\\cK$. $\\cK$ is a UMTC by construction, and\n $\\dim(\\cK)=\\frac{\\dim(\\cM)\\dim(\\cM')}{\\dim(L_\\cC)^2}=\\dim(\\cE)^2$.\n\n To show that $\\cK=(\\cM'\\bt\\overline\\cM)^0_{L_\\cC}$ satisfies\n $\\cM'=\\cK\\bt_\\cE\\cM$, note that\n $\\cM'=\\cM'\\bt\\mathrm{Vec}=\\cM'\\bt(\\overline\\cM\\bt\\cM)_{L_{\\overline\\cM}}^0$. It suffices to show that\n \\begin{align*}\n (\\cM'\\bt\\overline\\cM\\bt\\cM)_{\\one\\bt\n L_{\\overline\\cM}}^0=[(\\cM'\\bt\\overline\\cM)_{L_\\cC}^0\\bt\n \\cM]_{L_\\cE}^0\\\\\n =(\\cM'\\bt\\overline\\cM\\bt\\cM)_{(L_\\cC\\bt\\one)\\ot(\\one\\bt\n L_\\cE)}^0.\n \\end{align*}\n This follows from the fact that $\\one\\bt L_{\\overline\\cM} $ and $(L_\\cC\\bt\\one)\\ot(\\one\\bt\n L_\\cE)$ are the left and right centers of the algebra $(L_\\cC\\bt\\one)\\ot(\\one\\bt\n L_{\\overline\\cM})$.\n\n If $\\cM'=\\cK\\bt_\\cE\\cM=(\\cK\\bt\\cM)_{L_\\cE}^0$,\n then\n \\begin{align*}\n \\cK= (\\cK\\bt\\cM\\bt\\overline\\cM)_{\\one\\bt\n L_\\cM}^0=\n (\\cK\\bt\\cM\\bt\\overline\\cM)_{(L_\\cE\\bt\\one)\\ot(\\one\\bt \n L_\\cC)}^0\\\\\n =[(\\cK\\bt_\\cE\\cM)\\bt\\overline\\cM]_{L_\\cC}^0\n =(\\cM'\\bt\\overline\\cM)_{L_\\cC}^0.\n \\end{align*}\n Similarly, here $\\one\\bt L_{\\cM} $ and\n $(L_\\cE\\bt\\one)\\ot(\\one\\bt\n L_\\cC)$ are the left and right centers of the algebra\n $(L_\\cE\\bt\\one)\\ot(\\one\\bt\n L_{\\cM})$. 
This proves the uniqueness of $\\cK$.\n\n\\end{proof}\n\n\nLet us list several consequences of Theorem \\ref{main}.\n\\begin{thm}\\label{hegroup}\n $\\mathcal{M}_{ext}(\\cE)$ forms a finite abelian group.\n \\end{thm}\n \\begin{proof}\n Firstly, there exists at least one modular extension of a symmetric fusion\n category $\\cE$,\n the Drinfeld center $Z(\\cE)$. So the set $\\mathcal{M}_{ext}(\\cE)$ is not empty.\n The multiplication is given by the stacking $\\bt_\\cE$.\n It is easy to verify that the stacking $\\bt_\\cE$ for modular extensions\n is associative and commutative. To show that they form a group we only need\n to find the identity and the inverse.\n In this case $\\cK=(\\cM'\\bt \\overline \\cM)^0_{L_\\cE}=\\cM'\\bt_\\cE\\overline \\cM$,\n and Theorem \\ref{main} becomes $\\cM'\\bt_\\cE\\overline\\cM\\bt_\\cE\\cM=\\cM'$, for any\n modular extensions $\\cM,\\cM'$ of $\\cE$.\n Thus, $\\overline{\\cM'}\\bt_\\cE\n \\cM'=\\overline{\\cM'}\\bt_\\cE\n \\cM'\\bt_\\cE\\overline\\cM\\bt_\\cE\\cM\n =\\overline\\cM\\bt_\\cE\\cM$, i.e., $\\overline\\cM\\bt_\\cE\\cM$ is the same category\n for any extension $\\cM$; it turns out to be $Z(\\cE)$, which is exactly the identity element. It is then\n obvious that the inverse of $\\cM$ is $\\overline\\cM$.\n The finiteness follows from \\Ref{BNRW13}.\n \\end{proof}\n\n \\begin{exa}\n For the bosonic case we find that $\\mathcal{M}_{ext}(\\Rp(G))=H^3(G,U(1))$, which is\n discussed in more detail in the next subsection. For the fermionic case a\n general group cohomological classification is still lacking. We know some\n simple ones such as $\\mathcal{M}_{ext}(\\sRp(\\Z_2^f))=\\Z_{16}$, which agrees with\n Kitaev's\n 16-fold way\\cite{K062}.\n \\end{exa}\n\n \\begin{thm}\\label{hetorsor}\n For a $\\mce{\\cE}$ $\\cC$, if the modular extensions exist, $\\mathcal{M}_{ext}(\\cC)$ forms\n an $\\mathcal{M}_{ext}(\\cE)$-torsor. 
In particular, $|\\mathcal{M}_{ext}(\\cC)|=|\\mathcal{M}_{ext}(\\cE)|$.\n \\end{thm}\n \\begin{proof}\n The action is given by the stacking $\\bt_\\cE$.\n For any two extensions $\\cM,\\cM'$, there is a unique extension $\\cK$ of\n $\\cE$, such that $\\cM\\bt_\\cE\\cK=\\cM'$. To see that $Z(\\cE)$ acts trivially, note\n that $\\cM'\\bt_\\cE Z(\\cE)=\\cM\\bt_\\cE \\cK\\bt_\\cE Z(\\cE)=\\cM\\bt_\\cE\\cK=\\cM'$\n holds for any $\\cM'$. Due to uniqueness we also know that only $Z(\\cE)$\n acts trivially. Thus, the action is free and transitive.\n \\end{proof}\n This means that for any modular extension of $\\cC$,\n stacking with a nontrivial modular extension of $\\cE$, one always obtains\n a different modular extension of $\\cC$; on the other hand, starting with a\n particular modular extension of $\\cC$, all the other modular extensions can\n be generated by stacking modular extensions of $\\cE$ (in other words, there\n is only one orbit). However, in general, there is no preferred choice of the\n starting modular extension, unless $\\cC$ is of the form $\\cC_0\\bt \\cE$ where\n $\\cC_0$ is a UMTC. \n\n \n\\subsection{Modular extensions of $\\Rp(G)$} \\label{sec:mext-repG}\n\n\\begin{figure}[tb]\n$$\n\\setlength{\\unitlength}{.5pt}\n\\begin{picture}(100, 185)\n \\put(-80,){\\scalebox{1}{\\includegraphics{pic-half-braiding}}}\n \\put(0,0){\n \\put(-18,-40){\n \n \\put(20, 265) { $e\\in \\Rp(G) \\subset \\cM$}\n \\put(-30, 170) { $-\\otimes A$}\n \\put(75, 81) { $x$}\n \\put(75,30) { $\\gamma_2$}\n \\put(75,132) { $\\gamma_1$}\n \\put(100, 210) {$\\cM$}\n \\put(230, 81) {$\\cM_A$}\n \\put(160,45) {$\\mathrm{Vec}$}\n \\put(0, 60) {$F(e)$}\n }\\setlength{\\unitlength}{1pt}}\n \\end{picture}\n \n$$\n\\caption{Consider a physical situation in which the excitations in the $2+1$D\n bulk are given by a modular extension $\\cM$ of $\\Rp(G)$, and those on the\n gapped boundary by the UFC $\\cM_A$. Consider a simple particle $e\\in \\Rp(G)$\n in the bulk moving toward the boundary. 
The bulk-to-boundary map is given by\n the central functor $-\\otimes A: \\cM \\to \\cM_A$, which restricted to $\\Rp(G)$\n is nothing but the forgetful functor $F:\\Rp(G) \\to \\mathrm{Vec}$. Let $x$ be a\n simple excitation in $\\cM_A$ sitting next to $F(e)$. We move $F(e)$ along the\n semicircle $\\gamma_1$ (defined by the half-braiding), then move along the\n semicircle $\\gamma_2$ (defined by the symmetric braiding in the trivial phase\n $\\mathrm{Vec}$).}\n\\label{fig:G-grading}\n\\end{figure}\n\nWe set $\\cE=\\Rp(G)$ throughout this subsection. Let $(\\cM,\\iota_\\cM)$ be a modular\nextension of $\\Rp(G)$. $\\iota_\\cM$ is the embedding\n$\\iota_\\cM:\\cE\\hookrightarrow\\cM$ that we need to consider explicitly in this\nsubsection. The algebra $A=\\mathrm{Fun}(G)$ is a condensable algebra in $\\Rp(G)$ and\nalso a condensable algebra in $\\cM$. Moreover, $A$ is a Lagrangian algebra in\n$\\cM$ because $(\\dim A)^2 = |G|^2=(\\dim \\Rp(G))^2 = \\dim \\cM$. Therefore, $\\cM\\simeq Z(\\cM_A)$, where $\\cM_A$ is the category of right $A$-modules in $\\cM$. In other words, $\\cM$ describes the bulk excitations in a 2+1D topological phase with a gapped boundary (see Fig.\\,\\ref{fig:G-grading}). Moreover, the fusion category $\\cM_A$ is pointed and equipped with a canonical fully faithful $G$-grading\\cite{dgno2007}, which means that \n$$\n\\cM_A=\\oplus_{g\\in G} (\\cM_A)_g, \\quad (\\cM_A)_g\\simeq \\mathrm{Vec}, \\,\\, \\forall g\\in G, \n$$ \n$$\n\\quad \\mbox{and} \\quad \\otimes: (\\cM_A)_g \\boxtimes (\\cM_A)_h \\xrightarrow{\\simeq} (\\cM_A)_{gh}.\n$$\nLet us recall the construction of this $G$-grading. The physical meaning of\nacquiring a $G$-grading on $\\cM_A$ after condensing the algebra $A=\\mathrm{Fun}(G)$ in\n$\\cM$ is depicted in Figure\\,\\ref{fig:G-grading}. 
The process in\nFigure\\,\\ref{fig:G-grading} defines the isomorphism $F(e) \\otimes_A x\n \\xrightarrow{z_{e,x}} x \\otimes_A F(e) = F(e) \\otimes_A x$,\n which further gives a monoidal automorphism $\\phi(x)\\in \\mathrm{Aut}(F)=G$ of\nthe fiber functor $F: \\Rp(G) \\to \\mathrm{Vec}$. \n\nSince $\\phi$ is an isomorphism, the associator of the monoidal category $\\cM_A$ determines a unique\n$\\omega_{(\\cM,\\iota_M)}\\in H^3(G, U(1))$ such that $\\cM_A \\simeq \\mathrm{Vec}_G^\\omega$ as $G$-graded fusion categories. \n\n\n\\begin{thm} \\label{thm:spt}\nThe map $(\\cM, \\iota_\\cM) \\mapsto \\omega_{(\\cM, \\iota_\\cM)}$ defines a group isomorphism $\\cM_{ext}(\\Rp(G)) \\simeq H^3(G, U(1))$. In particular, we have \n$$ \n(Z(\\mathrm{Vec}_G^{\\omega_1}),\\iota_{\\omega_1}) \\boxtimes_\\cE\n(Z(\\mathrm{Vec}_G^{\\omega_2}),\\iota_{\\omega_2}) \\simeq (Z(\\mathrm{Vec}_G^{\\omega_1+\\omega_2}), \\iota_{\\omega_1+\\omega_2}).\n$$\n\\end{thm}\n\nFor the proof and more related details, see also \\Ref{LW160205936}.\n\n\n\\subsection{Relation to numerical calculations}\nIn Section \\ref{clGQL2} we proposed another way to characterize GQLs, using\nthe data $( \\tilde N^{ab}_c,\\tilde s_a; N^{ij}_k,s_i; \\cN^{IJ}_K,\\cS_I;c)$,\nwhich is more amenable to numerical calculations. We would like to investigate how\nto calculate the stacking operation in terms of these data.\n\nAssume that $\\cC$ and $\\cC'$ can be characterized by the data\n$(N^{ij}_k,s_i)$ and $( N^{\\prime ij}_k,s^\\prime_i)$. Let $(\nN^{\\cD, ij}_k, s^\\cD_i)$ be the data that characterizes the stacked $\\mce{\\cE}$ $\\cD\n=\\cC\\bt_\\cE \\cC'$.\n\nTo calculate $(N^{\\cD, ij}_k, s^\\cD_i)$, let us first construct\n\\begin{align}\n N^{ii',jj'}_{kk'} = N^{ij}_{k} N^{\\prime i'j'}_{k'} \n,\\ \\ \\ \\ \\\ns_{ii'}=s_i+s'_{i'}.\n\\end{align}\nNote that the above data describes a $\\mce{\\cE\\boxtimes \\cE}$ $\\cD' =\\cC\\bt\n\\cC'$ (\\ie with centralizer $\\cE\\bt \\cE$), which is not what we want. 
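To make the doubling step concrete, here is a minimal sketch (our own toy encoding: fusion coefficients stored as a dictionary over label triples). As input we use the fusion ring and spins of $\sRp(\Z_2^f)$, whose two particles $1,f$ (labels $0,1$ below) have spins $0,\frac12$.

```python
from fractions import Fraction

def stack_data(N1, s1, N2, s2):
    """Double the data: N^{ii',jj'}_{kk'} = N^{ij}_k N'^{i'j'}_{k'},
    and s_{ii'} = s_i + s'_{i'} (mod 1)."""
    N = {((i, ip), (j, jp), (k, kp)): v1 * v2
         for (i, j, k), v1 in N1.items()
         for (ip, jp, kp), v2 in N2.items()}
    s = {(i, ip): (si + sip) % 1
         for i, si in s1.items() for ip, sip in s2.items()}
    return N, s

# Data of sRep(Z_2^f): particles {1, f} with f x f = 1 and spins 0, 1/2.
N_f = {(0, 0, 0): 1, (0, 1, 1): 1, (1, 0, 1): 1, (1, 1, 0): 1}
s_f = {0: Fraction(0), 1: Fraction(1, 2)}

N, s = stack_data(N_f, s_f, N_f, s_f)
assert N[((1, 1), (1, 1), (0, 0))] == 1  # (f,f) fused with (f,f) gives (1,1)
assert s[(1, 1)] == 0                    # spins add: 1/2 + 1/2 = 0 mod 1
```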
We need\nto reduce the centralizer from $\\cE\\bt \\cE$ to $\\cE$. This is the $G\\times G$ to $G$\nreduction with $\\cC$-$\\cC'$ coupling, \\ie condensing the $L_\\cE$\nalgebra, as discussed above.\n\nTo do the $\\cE\\bt \\cE$ to $\\cE$ reduction (\\ie to obtain the real stacking\noperation $\\bt_\\cE$), we can introduce an equivalence relation. Noting that the\nexcitations in $\\cD'=\\cC\\bt \\cC'$ are labeled by $ii'=i\\bt i'$, the equivalence\nrelation is \n\\begin{align}\n ii'\\sim jj', \\quad{\\text{if }} ii'\\ot L_\\cE=jj'\\ot L_\\cE,\n\\end{align}\nwhere $L_\\cE=\\oplus_a a\\bar a, a\\in\\cE$. In the simple case of abelian groups,\nwhere all the $a$'s are abelian particles, the equivalence relation reduces to\n\\begin{align}\n (a\\ot i)i'\\sim i(a\\ot i'),\\ \\ \\ \\\n\\forall \\ \\ i\\in \\cC,\\ i'\\in \\cC',\\ a\\in \\cE.\n\\end{align}\nMathematically, this amounts to considering only the free local $L_\\cE$ modules.\nThe equivalence classes $[ii']$ are then some composite anyons in\n$\\cD=\\cC\\bt_\\cE\\cC'$\n\\begin{align}\n [ii']=k\\oplus l \\oplus \\cdots , \\quad\\text{ for some }k,l,\\dots\\in\\cD.\n\\end{align}\nIn other words, they form a fusion subring of $\\cD$.\nMoreover, the spin of $ii'$ is the same as that of its direct summands\n\\begin{align}\n s_{ii'}=s_k^\\cD=s_l^\\cD=\\cdots\n\\end{align}\nSince this approach uses\nonly a subset of the data of $\\mce{\\cE}$'s, we can only give these necessary\nconditions. However, as we have already given a large list of GQLs in terms of these data,\nthey are usually enough to pick the resulting $\\cC\\bt_\\cE\\cC'$ from the list.\n\n\\section{How to calculate the modular extension of a $\\mce{\\cE}$}\n\\label{howto}\n\n\n\\subsection{A naive calculation}\n\nHow do we calculate the modular extension $\\cM$ of the $\\mce{\\cE}$ $\\cC$ from\nthe data of $\\cC$? Actually, we do not know how to do that. 
So here, we will\nfollow the closely related Conjecture \\ref{NsNsNs}, and calculate instead\n$(\\cN^{IJ}_K,\\cS_I;c)$ (which fully characterizes $\\cM$) from the data $(\n\\tilde N^{ab}_c,\\tilde s_a; N^{ij}_k,s_i)$ (which partially characterizes $\\cC$).\nIn this section, we will describe such a calculation.\n\nWe note that all the simple objects (particles) in $\\cC$ are contained in $\\cM$\nas simple objects, and $\\cM$ may contain some extra simple objects. Assume\nthat the particle labels of $\\cM$ are $\\{I,J,\\cdots \\}=\\{i,j,\\dots, x,\ny,\\dots\\}$, where we use $i,j,\\cdots $ to label the particles in $\\cC$ and\n$x,y,\\cdots $ to label the additional particles (not in $\\cC$). Also let us\nuse $a,b,\\cdots $ to label the simple objects in the centralizer of $\\cC$:\n$\\cE=\\cC_\\cC^\\text{cen}$. Let $\\cN^{IJ}_K$, $\\cS_{I}$ be the fusion\ncoefficients and the spins for $\\cM$, and $N^{ij}_k,\\ s_i$ be the fusion\ncoefficients and the spins for $\\cC$. The idea is to find as many conditions on\n$(\\cN^{IJ}_K, \\cS_{I})$ as possible, and use those conditions to solve for\n$(\\cN^{IJ}_K, \\cS_{I})$. Since the data $(\\cN^{IJ}_K, \\cS_{I})$ describe the\nUMTC $\\cM$, they should satisfy all the conditions discussed in\n\\Ref{W150605768}. On the other hand, as a modular extension of $\\cC$,\n$(\\cN^{IJ}_K, \\cS_{I})$ also satisfies some additional conditions. Here, we will\ndiscuss those additional conditions.\n\nFirst, the modular extension $\\cM$ has a fixed total quantum dimension:\n\\begin{align}\n\\label{dimMCE}\n \\dim(\\cM)=\\dim(\\cE)\\dim(\\cC).\n\\end{align}\nIn other words,\n\\begin{align}\n \\sum_{I\\in \\cM} d_I^2 = \\sum_{a\\in \\cE} d_a^2 \\sum_{i\\in \\cC} d_i^2.\n\\end{align}\n\nPhysically, the modular extension $\\cM$ is obtained by ``gauging'' the\nsymmetry $\\cE$ in $\\cC$ (\\ie adding the symmetry twists of $\\cE$). So the\nadditional particles $x,y,\\cdots$ correspond to the symmetry twists. 
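As a concrete check of the dimension condition \eqref{dimMCE}, take $\cC=\cE=\Rp(Z_2)$, whose modular extension is the $Z_2$ gauge theory (toric code) listed in Table \ref{mextZ2}. The sketch below (our own toy encoding) verifies the dimension count and the fact that fusing an original particle with a twist stays in the twist sector.

```python
# C = E = Rep(Z_2) has particles {1, e}; its modular extension M is the
# Z_2 gauge theory (toric code) {1, e, m, f}, where m and f are the
# symmetry twists added by gauging.  All quantum dimensions equal 1.
d_C = {'1': 1, 'e': 1}
d_M = {'1': 1, 'e': 1, 'm': 1, 'f': 1}

dim = lambda d: sum(x ** 2 for x in d.values())
assert dim(d_M) == dim(d_C) * dim(d_C)   # dim(M) = dim(E) dim(C): 4 = 2*2

# Fusion with e in the toric code; fusing e with a twist gives a twist,
# so the matrix N_e is block diagonal as claimed below.
fuse_e = {'1': 'e', 'e': '1', 'm': 'f', 'f': 'm'}
twists = {'m', 'f'}
assert all(fuse_e[x] in twists for x in twists)
assert all(fuse_e[x] not in twists for x in d_C)
```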
Fusing an\noriginal particle $i\\in \\cC$ to a symmetry twist $x\\notin \\cC$ still gives us a\nsymmetry twist. Thus\n\\begin{align}\n \\cN^{ix}_j = \\cN^{xi}_j = \\cN^{ij}_x =0.\n\\end{align}\n\nTherefore, $\\cN_i$ for $i\\in \\cC$ is block diagonal: \n\\begin{align}\n\\cN_i= N_i \\oplus \\hat N_i, \n\\end{align}\nwhere $( N_i)_{jk}=\\cN^{ij}_{k}=N^{ij}_{k}$ and $(\\hat N_i)_{x \ny}=\\cN^{iy}_{x}$. \n\nIf we pick a charge conjugation for the additional particles $x\\mapsto\n\\bar x$, the conditions on the fusion rules reduce to\n\\begin{align}\n \\cN^{i x}_{y}=\\cN^{x i}_{y}=\\cN^{\\bar{x} y}_{i}=\\cN^{i \\bar{y}}_{\\bar{x}},\n\\nonumber \\\\\n \\sum_{k\\in\\cC} N^{ij}_k \\cN^{kx}_{y}= \\sum_{z \\notin\\cC} \\cN^{i z}_{ x} \\cN^{j y}_{z}.\n \\label{extN}\n\\end{align}\nWith a choice of charge conjugation, it is enough to construct (or search for)\nthe matrices $\\hat N_i$ and $\\cN^{xy}_z$ to determine all the extended fusion\nrules $\\cN^{IJ}_K$. \n\nBesides the general condition \\eqref{extN}, there are also some simple\nconstraints on $\\hat N_i$ that may speed up the numerical search.\nFirstly, observe that \\eqref{extN} is the same as\n\\begin{align}\n\\hat N_i \\hat N_j = \\sum_{k\\in \\cC} N^{ij}_k \\hat N_k,\n\\end{align}\nwhere $i,j,k \\in \\cC$. This means that the $\\hat N_i$ satisfy the same fusion\nalgebra as the $ N_i$, with $N^{ij}_k=\\cN^{ij}_k$ as the structure constants;\ntherefore, the eigenvalues of $\\hat N_i$ must be a subset of the eigenvalues\nof $ N_i$. \n\nSecondly, since $\\sum_{y\\notin\\cC} \\cN^{i x}_{y}d_{y}= d_i d_{x}$, by\nthe Perron-Frobenius theorem, we know that $d_i$ is the largest eigenvalue of\n$\\hat N_i$, with eigenvector $v$, $v_{x}=d_{x}$. ($d_i$ is also the largest\nabsolute value of the eigenvalues of $\\hat N_i$.) Note that ${\\hat N_{\\bar\ni} \\hat N_{i}= \\hat N_i \\hat N_{\\bar i},}$ ${ \\hat N_{\\bar i}=\\hat\nN_i^\\dag}$. 
Thus, $d_i^2$ is the largest eigenvalue of the positive\nsemi-definite Hermitian matrix $\\hat N_{i}^\\dag\\hat N_{i}$. For any unit\nvector $v$ we have $v^\\dag \\hat N_i^\\dag \\hat N_i v\\leq d_i^2$; in\nparticular,\n\\begin{align}\n (\\hat N_i^\\dag \\hat N_i)_{xx}=\\sum_{y} (\\cN^{ix}_{y})^2\\leq\n d_i^2.\n \\label{extentry}\n\\end{align}\nThe above result is very helpful for reducing the scope of the numerical search.\n\nOnce we find the fusion rules $\\cN^{IJ}_K$, we can then use the rational\nconditions and other conditions to determine the spins $\\cS_I$ (for details, see\n\\Ref{W150605768}). The set of data $(\\cN^{IJ}_K,\\cS_I)$ that satisfy all the\nconditions gives us the set of modular extensions.\n\n\nThe calculation for modular extensions proposed above is quite expensive. If\nthe quantum dimensions of the particles in $\\cC$ are all equal to 1, $d_i=1$,\nthen there is another much cheaper way to calculate the fusion coefficients\n$\\cN^{IJ}_K$ of the modular extension $\\cM$. Such an approach is explained in\nAppendix \\ref{FRgroup}. We will also use such an approach in our calculation.\n\nLast, we would like to mention that two sets of data $(\\cN^{IJ}_K,\\cS_I)$ and\n$(\\bar \\cN^{IJ}_K,\\bar \\cS_I)$ describe the same modular extension of\n$\\cC$ if they only differ by a permutation of the indices $x \\in \\cM$ with\n$x \\notin \\cC$. So sometimes two sets of data $(\\cN^{IJ}_K,\\cS_I)$ and\n$(\\bar \\cN^{IJ}_K,\\bar \\cS_I)$ can describe different modular extensions,\neven though they describe the same UMTC. (Two sets of data\n$(\\cN^{IJ}_K,\\cS_I)$ and $(\\bar \\cN^{IJ}_K,\\bar \\cS_I)$ describe the same\nUMTC if they differ only by a permutation of the indices $I \\in \\cM$.)\n\nWhy do we use such a permutation (which gives the ME-equivalence relation\ndiscussed before) in the calculation of modular extensions? This is because when we\nconsider modular extensions, the particles $x \\in \\cM$ with $x \\notin \\cC$\ncorrespond to symmetry twists. 
They are extrinsic excitations that do not\nappear in the finite energy spectrum of the Hamiltonian, while the particles\n$i\\in \\cC$ are intrinsic excitations that do appear in the finite energy\nspectrum of the Hamiltonian. So $x \\notin \\cC$ and $i\\in \\cC$ are physically\ndistinct, and we do not allow permutations that mix them. Also, we should not\npermute the particles $a\\in \\cE$, because they correspond to symmetries. We\nshould not mix, for example, the $Z_2$ symmetry of exchanging layers and the\n$Z_2$ symmetry of 180$^\\circ$ spin rotation.\n\n\n\\subsection{The limitations of the naive calculation}\n\nSince a $\\mce{\\cE}$ $\\cC$ is not modular, the data $(\\tilde N^{ab}_c,\\tilde\ns_a;N^{ij}_k,s_i)$ may not fully characterize $\\cC$. To fully characterize $\\cC$, we need to use additional data, such\nas the $F$-tensor and the $R$-tensor\\cite{K062,W150605768}. \n\nIn this paper, we will not use those additional data. As a result, the data\n$(\\tilde N^{ab}_c,\\tilde s_a;N^{ij}_k,s_i)$ may correspond to several different\n$\\mce{\\cE}$ $\\cC$'s. In other words, $(\\tilde N^{ab}_c,\\tilde\ns_a;N^{ij}_k,s_i)$ is a one-to-many labeling of $\\mce{\\cE}$'s.\n\nSo in our naive calculation, when we calculate the modular extensions of\n$(\\tilde N^{ab}_c,\\tilde s_a;N^{ij}_k,s_i)$, we may actually calculate the\nmodular extensions of several different $\\cC$'s that are described by the same\ndata $(\\tilde N^{ab}_c,\\tilde s_a;N^{ij}_k,s_i)$. But for $\\mce{\\cE}$'s that\ncan be fully characterized by the data $(\\tilde N^{ab}_c,\\tilde\ns_a;N^{ij}_k,s_i)$, our calculation produces the modular extensions of a single\n$\\cC$. 
For example, the naive calculation can obtain the correct modular\nextensions of $\\cC=\\Rp(G)$ and $\\cC=\\sRp(G^f)$, when $G$ and $G^f$ are abelian\ngroups, or simple finite groups\\cite{Yuan16}.\n\nIf the data $(\\tilde N^{ab}_c,\\tilde s_a;N^{ij}_k,s_i)$ happen to describe two\ndifferent $\\mce{\\cE}$'s, we find that our naive calculation will produce the\nmodular extensions for both $\\mce{\\cE}$'s (see Section \\ref{Z2N5}). So by\ncomputing the modular extensions of $(\\tilde N^{ab}_c,\\tilde s_a;N^{ij}_k,s_i)$,\nwe can tell if $(\\tilde N^{ab}_c,\\tilde s_a;N^{ij}_k,s_i)$ corresponds to none,\none, two, \\etc $\\mce{\\cE}$'s. This leads to Conjecture \\ref{NsNsNs} that\n$(\\tilde N^{ab}_c,\\tilde s_a;N^{ij}_k,s_i,\\cN^{IJ}_K,\\cS_I;c)$ can fully and\none-to-one classify GQLs in 2+1D.\n\n\\section{Examples of 2+1D SET orders and SPT orders}\n\\label{examples}\n\n\\def\\arraystretch{1.25} \\setlength\\tabcolsep{3pt}\n\\begin{table}[t] \n\\caption{ The bottom two rows correspond to the two modular extensions of\n$\\Rp(Z_2)$ (denoted by $N_c^{|\\Th|}=2^{\\zeta^1_2}_0$). Thus we have two\ndifferent trivial topological orders with $Z_2$ symmetry in 2+1D (\\ie two $Z_2$\nSPT states). \n} \n\\label{mextZ2} \n\\centering\n\\begin{tabular}{ |c|c|l|l|l| } \n\\hline \n$N^{|\\Th|}_{c}$ & $D^2$ & $d_1,d_2,\\cdots$ & $s_1,s_2,\\cdots$ & comment \\\\\n \\hline \n$2^{\\zeta_{2}^{1}}_{ 0}$ & $2$ & $1, 1$ & $0, 0$ & $\\Rp(Z_2)$ \\\\\n\\hline\n$4^{ B}_{ 0}$ & $4$ & $1, 1, 1, 1$ & $0, 0, 0, \\frac{1}{2}$ & $Z_2$ gauge\\\\\n$4^{ B}_{ 0}$ & $4$ & $1, 1, 1, 1$ & $0, 0, \\frac{1}{4}, \\frac{3}{4}$ & double semion\\\\\n \\hline \n\\end{tabular} \n\\end{table}\n\n\\def\\arraystretch{1.25} \\setlength\\tabcolsep{3pt}\n\\begin{table}[t] \n\\caption{The two modular extensions of $N^{|\\Th|}_{c}=3^{\\zeta_{2}^{1}}_{ 2}$.\n$3^{\\zeta_{2}^{1}}_{ 2}$ has a centralizer $\\Rp(Z_2)$. 
Thus we have two\ntopological orders with $Z_2$ symmetry in 2+1D which have only one type of\nspin-$1\/3$ topological excitation. We use $N^{|\\Th|}_{c}$ to label\n$\\mce{\\cE}$'s, where $\\Theta ={D}^{-1}\\sum_{i}\\ee^{2\\pi\\ii s_i} d_i^2=\n|\\Th|\\ee^{2\\pi \\ii c\/8}$ and $D^2=\\sum_id_i^2$.\n} \n\\label{mextZ2a} \n\\centering\n\\begin{tabular}{ |c|c|l|l|l| } \n\\hline \n$N^{|\\Th|}_{c}$ & $D^2$ & $d_1,d_2,\\cdots$ & $s_1,s_2,\\cdots$ & comment \\\\\n\\hline \n$3^{\\zeta_{2}^{1}}_{ 2}$ & $6$ & $1, 1, 2$ & $0, 0, \\frac{1}{3}$ & \n\\tiny $K=\\begin{pmatrix}\n 2 & -1 \\\\\n -1 & 2 \\\\\n\\end{pmatrix}\n$\n\\\\\n\\hline\n$5^{ B}_{ 2}$ & $12$ & $1, 1, 2,\\zeta_{4}^{1},\\zeta_{4}^{1}$ ($\\zeta_{4}^{1}=\\sqrt{3}$) & $0, 0, \\frac{1}{3}, \\frac{1}{8}, \\frac{5}{8}$ & $(A_1,4)$ \\\\\n$5^{ B}_{ 2}$ & $12$ & $1, 1, 2,\\zeta_{4}^{1},\\zeta_{4}^{1}$ & $0, 0, \\frac{1}{3}, \\frac{3}{8}, \\frac{7}{8}$ & \\\\\n \\hline \n\\end{tabular} \n\\end{table}\n\nIn this section, we will discuss simple examples of $\\mce{\\cE}$ $\\cC$'s, and\ntheir modular extensions $\\cM$. The triple $(\\cC, \\cM,c)$ describes a\ntopologically ordered or SPT phase. A single $\\mce{\\cE}$ $\\cC$ only describes\nthe set of bulk topological excitations, which correspond to topologically\nordered states up to invertible ones.\n\nHowever, in this section we will not discuss examples of $\\mce{\\cE}$ $\\cC$.\nWhat we actually do is discuss examples of the solutions $(\\tilde\nN^{ab}_c,\\tilde s_a;N^{ij}_k,s_i)$ (which are not really $\\mce{\\cE}$'s, but\nclosely related). We will also discuss the modular extensions\n$(\\cN^{IJ}_K,\\cS_I;c)$ of $(\\tilde N^{ab}_c,\\tilde s_a;N^{ij}_k,s_i)$.\n$(\\tilde N^{ab}_c,\\tilde s_a;N^{ij}_k,s_i)$ will correspond to a $\\mce{\\cE}$\n$\\cC$ if it has modular extensions $(\\cN^{IJ}_K,\\cS_I;c)$. 
This allows us to\nclassify GQLs in terms of the data $(\\tilde N^{ab}_c,\\tilde\ns_a;N^{ij}_k,s_i,\\cN^{IJ}_K,\\cS_I;c)$.\n\n\\subsection{$Z_2$ bosonic SPT states}\n\nTables \\ref{SETZ2-34}, \\ref{SETZ2-5}, and \\ref{SETZ2-6} list the solutions\n$(\\tilde N^{ab}_c,\\tilde s_a;N^{ij}_k,s_i)$ when $(\\tilde N^{ab}_c,\\tilde\ns_a)$ describes the SFC $\\Rp(Z_2)$. The tables contain all $\\mce{\\Rp(Z_2)}$'s\nbut may contain extra fake entries. Physically, they describe possible\nsets of bulk excitations for $Z_2$-SET orders of bosonic systems. The sets of\nbulk excitations are listed by their quantum dimensions $d_i$ and spins $s_i$.\n\nFor example, let us consider the entry $N_c^{|\\Th|}=2_0^{\\zeta_2^1}$ in Table\n\\ref{SETZ2-34}. Such an entry has a central charge $c=0$. Also $N=2$, hence\nthe $Z_2$-SET state has two types of bulk excitations, both with $d_i=1$ and\n$s_i=0$. Both types of excitations are local excitations; one is the trivial type\nand the other carries a $Z_2$ charge.\n\nThe first question we would like to ask is: ``Is such an entry a fake entry,\nor does it correspond to some $Z_2$-symmetric GQL's?'' If it corresponds to\nsome $Z_2$-symmetric GQL's, how many distinct $Z_2$-symmetric GQL\nphases does it correspond to? In other words, how many distinct\n$Z_2$-symmetric GQL phases are there that share the same set of bulk\ntopological excitations described by the entry $2_0^{\\zeta_2^1}$?\n\nBoth questions can be answered by computing the modular extensions of\n$2_0^{\\zeta_2^1}$ (which is also denoted as $\\Rp(Z_2)$). We find that the\nmodular extensions exist, and thus $\\Rp(Z_2)$ does correspond to some \n$Z_2$-symmetric GQL's. In fact, one of the $Z_2$-symmetric GQL's is the\ntrivial product state with $Z_2$ symmetry. The other $Z_2$-symmetric GQL's are\n$Z_2$ SPT states.\n\nAfter a numerical calculation, we find that there are only two different\nmodular extensions of $\\Rp(Z_2)$ (see Table \\ref{mextZ2}). 
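As a quick consistency check on Table \ref{mextZ2}, one can verify numerically that both candidate modular extensions satisfy $\Theta = D^{-1}\sum_i \ee^{2\pi\ii s_i} d_i^2 = |\Th|\ee^{2\pi\ii c/8}$ with $|\Th|=1$ and $c=0$ mod 8. The following is a minimal illustrative sketch (the particle data is read off the table; the code itself is not part of the original calculation):

```python
import cmath

def theta(spins, dims):
    """Theta = D^{-1} * sum_i exp(2*pi*i*s_i) * d_i^2, with D = sqrt(sum_i d_i^2)."""
    D = sum(d * d for d in dims) ** 0.5
    return sum(cmath.exp(2j * cmath.pi * s) * d * d
               for s, d in zip(spins, dims)) / D

# Z2 gauge theory (second row of Table mextZ2): spins 0, 0, 0, 1/2, all d_i = 1
t_gauge = theta([0, 0, 0, 0.5], [1, 1, 1, 1])
# double semion (third row of Table mextZ2): spins 0, 0, 1/4, 3/4, all d_i = 1
t_dsemion = theta([0, 0, 0.25, 0.75], [1, 1, 1, 1])

# both must give Theta = 1, i.e. |Theta| = 1 and phase exp(2*pi*i*c/8) = 1 (c = 0 mod 8)
for t in (t_gauge, t_dsemion):
    assert abs(t - 1) < 1e-12
```

Both entries indeed give $\Theta=1$, consistent with two bosonic SPT candidates at central charge $c=0$.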
Thus there are two\ndistinct $Z_2$-symmetric GQL phases whose bulk excitations are described by\n$\\Rp(Z_2)$. The first one corresponds to the trivial product state, whose\nmodular extension is the $Z_2$ gauge theory, which has four types of particles\nwith $(d_i,s_i)=(1,0), (1,0),(1,0),(1,\\frac12)$. (Gauging the $Z_2$ symmetry\nof the trivial product state gives rise to a $Z_2$ gauge theory.) The second\none corresponds to the only non-trivial $Z_2$ bosonic SPT state in 2+1D, whose\nmodular extension is the double-semion theory, which has four types of particles\nwith $(d_i,s_i)=(1,0), (1,0),(1,\\frac14),(1,-\\frac14)$. (Gauging the $Z_2$\nsymmetry of the $Z_2$-SPT state gives rise to a double-semion theory\n\\cite{LG1209}.) So the $Z_2$-SPT phases are classified by $\\Z_2$, reproducing\nthe group cohomology result\\cite{CLW1141,CGL1314,CGL1204}. In general, the\nmodular extensions of $\\Rp(G)$ correspond to the bosonic SPT states in 2+1D\nwith symmetry $G$.\n\n\\subsection{$Z_2$-SET orders for bosonic systems}\n\n\\begin{table}[t] \n\\caption{\nThe fusion rule of the $N_c^{|\\Th|}=3_2^{\\zeta_2^1}$ $Z_2$-SET order.\nThe particle $\\textbf{1}$\ncarries the $Z_2$-charge $0$, and the particle $s$ carries the $Z_2$-charge\n$1$. From the table, we see that $\\sigma\\otimes\\sigma=\\textbf{1} \\oplus s\n\\oplus \\sigma$.\n} \n\\label{SET32} \n\\centering\n\\begin{tabular}{ |c|ccc|}\n \\hline \n $s_i$ & $0$ & $ 0$ & $ \\frac{1}{3}$\\\\\n $d_i$ & $1$ & $ 1$ & $ 2$\\\\\n\\hline\n $3^{\\zeta_{2}^{1}}_{ 2}$ & $\\textbf{1}$ & $s$ & $\\sigma$ \\\\\n\\hline\n$\\textbf{1}$ & $ \\textbf{1}$ & $ s$ & $ \\sigma$ \\\\\n$s$ & $ s$ & $ \\textbf{1}$ & $ \\sigma$ \\\\\n$\\sigma$ & $ \\sigma$ & $ \\sigma$ & $ \\textbf{1} \\oplus s \\oplus \\sigma$ \\\\\n\\hline\n\\end{tabular}\n\\end{table} \n\n\\begin{table}[t] \n\\caption{\nThe fusion rules of the two $N_c^{|\\Th|}=4_1^{\\zeta_2^1}$ $Z_2$ symmetry-enriched topological orders with identical $d_i$ and $s_i$. 
\nWe see that one has a $Z_2\\times Z_2$ fusion rule and\nthe other has a $Z_4$ fusion rule.\n} \n\\label{SETZ2-45} \n\\centering\n\\begin{tabular}{ |c|cccc|}\n \\hline \n $s_i$ & $0$ & $ 0$ & $ \\frac{1}{4}$ & $ \\frac{1}{4}$\\\\\n $d_i$ & $1$ & $ 1$ & $ 1$ & $ 1$\\\\\n\\hline\n $4^{\\zeta_{2}^{1}}_{ 1}$ & $\\textbf{00}$ & $\\textbf{01}$ & $\\textbf{10}$ & $\\textbf{11}$ \\\\\n\\hline\n$\\textbf{00}$ & $ \\textbf{00}$ & $ \\textbf{01}$ & $ \\textbf{10}$ & $ \\textbf{11}$ \\\\\n$\\textbf{01}$ & $ \\textbf{01}$ & $ \\textbf{00}$ & $ \\textbf{11}$ & $ \\textbf{10}$ \\\\\n$\\textbf{10}$ & $ \\textbf{10}$ & $ \\textbf{11}$ & $ \\textbf{00}$ & $ \\textbf{01}$ \\\\\n$\\textbf{11}$ & $ \\textbf{11}$ & $ \\textbf{10}$ & $ \\textbf{01}$ & $ \\textbf{00}$ \\\\\n\\hline\n\\end{tabular}\n~~~~~~\n\\begin{tabular}{ |c|cccc|}\n \\hline \n $s_i$ & $0$ & $ 0$ & $ \\frac{1}{4}$ & $ \\frac{1}{4}$\\\\\n $d_i$ & $1$ & $ 1$ & $ 1$ & $ 1$\\\\\n\\hline\n $4^{\\zeta_{2}^{1}}_{ 1}$ & $\\textbf{0}$ & $\\textbf{2}$ & $\\textbf{1}$ & $\\textbf{3}$ \\\\\n\\hline\n$\\textbf{0}$ & $ \\textbf{0}$ & $ \\textbf{2}$ & $ \\textbf{1}$ & $ \\textbf{3}$ \\\\\n$\\textbf{2}$ & $ \\textbf{2}$ & $ \\textbf{0}$ & $ \\textbf{3}$ & $ \\textbf{1}$ \\\\\n$\\textbf{1}$ & $ \\textbf{1}$ & $ \\textbf{3}$ & $ \\textbf{2}$ & $ \\textbf{0}$ \\\\\n$\\textbf{3}$ & $ \\textbf{3}$ & $ \\textbf{1}$ & $ \\textbf{0}$ & $ \\textbf{2}$ \\\\\n\\hline\n\\end{tabular}\n\\end{table} \n\nThe entry $N_c^{|\\Th|}=3_2^{\\zeta_2^1}$ in Table \\ref{SETZ2-34} corresponds to\na more non-trivial $\\mce{\\Rp(Z_2)}$. It describes the bulk excitations of\n$Z_2$-SET orders which have only one type of non-trivial topological\nexcitation (with quantum dimension $d=2$ and spin $s=1\/3$; see Table\n\\ref{SET32}). The other two types of excitations are local excitations with\n$Z_2$-charge $0$ and $1$. 
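The quantum dimension $d=2$ quoted for $\sigma$ can be recovered directly from the fusion rule in Table \ref{SET32}: by the Perron-Frobenius theorem, $d_\sigma$ is the largest eigenvalue of the fusion matrix $(N_\sigma)_{ij}=N^{\sigma i}_j$. A small illustrative sketch (fusion data taken from the table; the code is not part of the original calculation):

```python
import numpy as np

# Basis order (1, s, sigma); entries (N_sigma)_{ij} = N^{sigma i}_j from Table SET32:
#   sigma * 1 = sigma,  sigma * s = sigma,  sigma * sigma = 1 + s + sigma
N_sigma = np.array([[0, 0, 1],
                    [0, 0, 1],
                    [1, 1, 1]])

# Perron-Frobenius: the largest eigenvalue is the quantum dimension d_sigma
d_sigma = np.linalg.eigvals(N_sigma).real.max()
assert abs(d_sigma - 2) < 1e-9
```

Equivalently, $\sigma\otimes\sigma=\textbf{1}\oplus s\oplus\sigma$ gives $d_\sigma^2 = 2 + d_\sigma$, whose positive solution is $d_\sigma=2$.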
We find that $3_2^{\\zeta_2^1}$ has modular\nextensions and hence is not a fake entry.\n\nTo see how many SET orders have such a set of bulk excitations, we need to\ncompute how many modular extensions there are for $3_2^{\\zeta_2^1}$. We find\nthat $3_2^{\\zeta_2^1}$ has two modular extensions (see Table \\ref{mextZ2a}).\nThus there are two $Z_2$-SET orders with the above-mentioned bulk excitations.\nIt is not an accident that the number of $Z_2$-SET orders with the same set of\nbulk excitations is the same as the number of $Z_2$ SPT states. This is\nbecause the different $Z_2$-SET orders with a fixed set of bulk excitations are\ngenerated by stacking with $Z_2$ SPT states.\n\nWe would like to point out that for any $G$-SET state, if we break the\nsymmetry, the $G$-SET state will reduce to a topologically ordered state\ndescribed by a UMTC. In fact, the different $G$-SET states described by the\nsame $\\mce{\\cE}$ (\\ie with the same set of bulk excitations) will reduce to the\nsame topologically ordered state (\\ie the same UMTC). In Appendix \\ref{SB}, we\ndiscuss such a symmetry-breaking process and how to compute the UMTC from the\n$\\mce{\\cE}$. We find that the two $Z_2$-SET orders from $3_2^{\\zeta_2^1}$\nreduce to an abelian topological order described by the $K$-matrix $\\bpm 2& -1\\\\\n-1& 2 \\epm$. This is indicated by SB:$K=\\bpm 2& -1\\\\ -1& 2 \\epm$ in the\ncomment column of Table \\ref{SETZ2-34}. Elsewhere, we use SB:$N^B_c$ or\nSB:$N^F_c({a \\atop b})$ to indicate the reduced topological order after the\nsymmetry breaking (for bosonic or fermionic cases). (The topological orders\ndescribed by $N^B_c$ or $N^F_c({a \\atop b})$ are given by the tables in\n\\Ref{W150605768} or \\Ref{LW150704673}.)\n\nAs we have mentioned, there are two $Z_2$-SET orders with the same bulk\nexcitations. But how can those $Z_2$-SET orders be realized? 
We find that one of\nthe $Z_2$-SET orders is the double-layer FQH state with $K$-matrix $\\bpm 2 &\n-1\\\\ -1 & 2\\\\ \\epm$ (the same as the reduced topological order after symmetry\nbreaking), where the $Z_2$ symmetry is the layer-exchange symmetry. The\nquasiparticles are labeled by the $l$-vectors $l=\\bpm l_1\\\\l_2\\epm$. The two\nnon-trivial quasiparticles are given by \n\\begin{align} \nl\n&= \\bpm 1 \\\\ 0\\epm, \\ \\ \\bpm 0 \\\\ 1\\epm, \n\\end{align} \nwhose spins are both equal to $\\frac13$.\n\nSince the layer-exchange $Z_2$ symmetry exchanges $l_1$ and $l_2$, we see that\nthe two excitations $ \\bpm 1 \\\\ 0\\epm, \\ \\ \\bpm 0 \\\\ 1\\epm$ always have the\nsame energy. Although the $Z_2$ symmetry has no 2-dimensional irreducible\nrepresentations, the above spin-$1\/3$ topological excitation has an exact\ntwo-fold degeneracy due to the $Z_2$ layer-exchange symmetry. This effect is\nan interplay between the long-range entanglement and the symmetry:\n\\emph{degeneracy in excitations may not always arise from high-dimensional\nirreducible representations of the symmetry.}\n\nThe two degenerate excitations are viewed as one type of topological\nexcitation with quantum dimension $d=2$ (for the two-fold degeneracy) and spin\n$s=\\frac13$ (see Table \\ref{SETZ2-34}). The $Z_2$ symmetry twist in such a\ndouble-layer state carries non-abelian statistics with quantum dimension\n$d=\\sqrt{3}$. In fact, there are two such $Z_2$ symmetry twists, whose spins\ndiffer by $1\/2$.\n\nThe other $Z_2$-SET order can be viewed as the above\ndouble-layer FQH state $K=\\bpm 2 & -1\\\\ -1 & 2\\\\ \\epm$ stacked with a $Z_2$ SPT\nstate.\n\n\\def\\arraystretch{1.25} \\setlength\\tabcolsep{3pt}\n\\begin{table}[t] \n\\caption{ The four modular extensions of $N^{|\\Th|}_{c}=5^{\\zeta_{2}^{1}}_{ 0}$ with $Z_2\\times Z_2$ fusion.\n$5^{\\zeta_{2}^{1}}_{ 0}$ has a centralizer $\\Rp(Z_2)$. 
The first pair and the\nsecond pair turn out to be equivalent.\n} \n\\label{mextZ2b} \n\\centering\n\\begin{tabular}{ |c|c|l|l|l| } \n\\hline \n$N^{|\\Th|}_{c}$ & $D^2$ & $d_1,d_2,\\cdots$ & $s_1,s_2,\\cdots$ & comment \\\\\n\\hline \n$5^{\\zeta_{2}^{1}}_{ 0}$ & $8$ & $1\\times 4, 2$ & $0, 0, \\frac{1}{2}, \\frac{1}{2}, 0$ & \\\\\n\\hline\n$9^{ B}_{ 0}$ & $16$ &\\tiny $1\\times 4, 2,\\zeta_{2}^{1}\\times 4$ &\\tiny $0, 0,\n\\frac{1}{2}, \\frac{1}{2}, 0, \\frac{15}{16}, \\frac{1}{16}, \\frac{7}{16},\n\\frac{9}{16}$ & $3^{ B}_{-1\/2}\\boxtimes 3^{ B}_{ 1\/2}$\\\\\n$9^{ B}_{ 0}$ & $16$ &\\tiny $1\\times 4, 2,\\zeta_{2}^{1}\\times 4$ &\\tiny $0, 0,\n\\frac{1}{2}, \\frac{1}{2}, 0, \\frac{3}{16}, \\frac{13}{16}, \\frac{11}{16},\n\\frac{5}{16}$ & $3^{ B}_{ 3\/2}\\boxtimes 3^{ B}_{-3\/2}$\\\\\n\\hline\n$9^{ B}_{ 0}$ & $16$ &\\tiny $1\\times 4, 2,\\zeta_{2}^{1}\\times 4$ &\\tiny $0, 0,\n\\frac{1}{2}, \\frac{1}{2}, 0, \\frac{1}{16}, \\frac{15}{16}, \\frac{9}{16},\n\\frac{7}{16}$ & $3^{ B}_{ 1\/2}\\boxtimes 3^{ B}_{-1\/2}$\\\\\n$9^{ B}_{ 0}$ & $16$ &\\tiny $1\\times 4, 2,\\zeta_{2}^{1}\\times 4$ &\\tiny $0, 0,\n\\frac{1}{2}, \\frac{1}{2}, 0, \\frac{13}{16}, \\frac{3}{16}, \\frac{5}{16},\n\\frac{11}{16}$ & $3^{ B}_{-3\/2}\\boxtimes 3^{ B}_{ 3\/2}$\\\\\n\\hline \n\\end{tabular} \n\\end{table}\n\n\\def\\arraystretch{1.25} \\setlength\\tabcolsep{3pt}\n\\begin{table}[t] \n\\caption{ The four modular extensions of $N^{|\\Th|}_{c}=5^{\\zeta_{2}^{1}}_{ 1}$ with $Z_2\\times Z_2$ fusion.\n$5^{\\zeta_{2}^{1}}_{ 1}$ has a centralizer $\\Rp(Z_2)$.\n} \n\\label{mextZ2c} \n\\centering\n\\begin{tabular}{ |c|c|l|l|l| } \n\\hline \n$N^{|\\Th|}_{c}$ & $D^2$ & $d_1,d_2,\\cdots$ & $s_1,s_2,\\cdots$ & comment \\\\\n\\hline \n$5^{\\zeta_{2}^{1}}_{ 1}$ & $8$ & $1\\times 4, 2$ & $0, 0, \\frac{1}{2}, \\frac{1}{2}, \\frac{1}{8}$ & \\\\\n\\hline\n$9^{ B}_{ 1}$ & $16$ &\\tiny $1\\times 4, 2,\\zeta_{2}^{1}\\times 4$ &\\tiny $0, 0, \\frac{1}{2}, \\frac{1}{2}, \\frac{1}{8}, \\frac{1}{16}, 
\\frac{1}{16}, \\frac{9}{16}, \\frac{9}{16}$ & $3^{ B}_{ 1\/2}\\boxtimes 3^{ B}_{ 1\/2}$\\\\\n$9^{ B}_{ 1}$ & $16$ &\\tiny $1\\times 4, 2,\\zeta_{2}^{1}\\times 4$ &\\tiny $0, 0, \\frac{1}{2}, \\frac{1}{2}, \\frac{1}{8}, \\frac{13}{16}, \\frac{13}{16}, \\frac{5}{16}, \\frac{5}{16}$ & $3^{ B}_{-3\/2}\\boxtimes 3^{ B}_{ 5\/2}$\\\\\n\\hline\n$9^{ B}_{ 1}$ & $16$ &\\tiny $1\\times 4, 2,\\zeta_{2}^{1}\\times 4$ &\\tiny $0, 0, \\frac{1}{2}, \\frac{1}{2}, \\frac{1}{8}, \\frac{15}{16}, \\frac{3}{16}, \\frac{7}{16}, \\frac{11}{16}$ & $3^{ B}_{-1\/2}\\boxtimes 3^{ B}_{ 3\/2}$\\\\\n$9^{ B}_{ 1}$ & $16$ &\\tiny $1\\times 4, 2,\\zeta_{2}^{1}\\times 4$ &\\tiny $0, 0,\n\\frac{1}{2}, \\frac{1}{2}, \\frac{1}{8}, \\frac{3}{16}, \\frac{15}{16},\n\\frac{11}{16}, \\frac{7}{16}$ & $3^{ B}_{ 3\/2}\\boxtimes 3^{ B}_{-1\/2}$\\\\\n\\hline \n\\end{tabular} \n\\end{table}\n\n\n\\subsection{Two other $Z_2$-SET orders for bosonic\nsystems}\n\nThe fourth and fifth entries in Table \\ref{SETZ2-34} describe the bulk\nexcitations of two other $Z_2$-SET orders. Those bulk excitations have\nidentical $s_i$ and $d_i$, but they have different fusion rules $N^{ij}_k$ (see\nTable \\ref{SETZ2-45}). \n\nBoth entries have two modular extensions, and each corresponds to two SET orders.\nAmong the two SET orders for the $Z_2\\times Z_2$ fusion rule, one of them is\nobtained by stacking a $Z_2$ \\emph{neutral} $\\nu=1\/2$ Laughlin state with a\ntrivial $Z_2$ product state. The other is obtained by stacking a $Z_2$ neutral\n$\\nu=1\/2$ Laughlin state with a non-trivial $Z_2$ SPT state. \n\nThe entry with the $Z_4$ fusion rule also corresponds to two SET orders. They are\nobtained by stacking a $Z_2$ \\emph{charged} $\\nu=1\/2$ Laughlin state with a\ntrivial or a non-trivial $Z_2$ SPT state. Here, \\emph{charged} means that the\nparticles forming the $\\nu=1\/2$ Laughlin state carry $Z_2$-charge 1. In this\ncase, the anyon in the $\\nu=1\/2$ Laughlin state carries a fractional\n$Z_2$-charge $1\/2$. 
So the fusion of two such anyons gives us a $Z_2$-charge 1\nexcitation instead of a trivial neutral excitation. This leads to the $Z_4$\nfusion rule.\n\n\n\\subsection{The rank $N=5$ $Z_2$-SET orders for bosonic systems}\n\\label{Z2N5}\n\n\\def\\arraystretch{1.25} \\setlength\\tabcolsep{3pt}\n\\begin{table}[t] \n\\caption{\nThe first and the third entries in Table \\ref{mextZ2b} have different\nfusion rules, despite having the same $(d_i,s_i)$.\n} \n\\label{frZ2_5_1} \n\\centering\n\\begin{tabular}{ |c|ccccccccc|}\n \\hline \n $s_i$ & $0$ & $ 0$ & $ \\frac{1}{2}$ & $ \\frac{1}{2}$ & $ 0$ & $ \\frac{1}{16}$ & $ \\frac{7}{16}$ & $ \\frac{9}{16}$ & $ \\frac{15}{16}$\\\\\n $d_i$ & $1$ & $ 1$ & $ 1$ & $ 1$ & $ 2$ & $\\zeta_{2}^{1}$ & $\\zeta_{2}^{1}$ & $\\zeta_{2}^{1}$ & $\\zeta_{2}^{1}$\\\\\n\\hline\n $9^{ 1}_{ 0}$ & $\\textbf{1}$ & $\\textbf{2}$ & $\\textbf{3}$ & $\\textbf{4}$ & $\\textbf{5}$ & $\\textbf{6}$ & $\\textbf{7}$ & $\\textbf{8}$ & $\\textbf{9}$ \\\\\n\\hline\n$\\textbf{1}$ & $ \\textbf{1}$ & $ \\textbf{2}$ & $ \\textbf{3}$ & $ \\textbf{4}$ & $ \\textbf{5}$ & $ \\textbf{6}$ & $ \\textbf{7}$ & $ \\textbf{8}$ & $ \\textbf{9}$ \\\\\n$\\textbf{2}$ & $ \\textbf{2}$ & $ \\textbf{1}$ & $ \\textbf{4}$ & $ \\textbf{3}$ & $ \\textbf{5}$ & $ \\textbf{8}$ & $ \\textbf{9}$ & $ \\textbf{6}$ & $ \\textbf{7}$ \\\\\n$\\textbf{3}$ & $ \\textbf{3}$ & $ \\textbf{4}$ & $ \\textbf{1}$ & $ \\textbf{2}$ & $ \\textbf{5}$ & $ \\textbf{8}$ & $ \\textbf{7}$ & $ \\textbf{6}$ & $ \\textbf{9}$ \\\\\n$\\textbf{4}$ & $ \\textbf{4}$ & $ \\textbf{3}$ & $ \\textbf{2}$ & $ \\textbf{1}$ & $ \\textbf{5}$ & $ \\textbf{6}$ & $ \\textbf{9}$ & $ \\textbf{8}$ & $ \\textbf{7}$ \\\\\n$\\textbf{5}$ & $ \\textbf{5}$ & $ \\textbf{5}$ & $ \\textbf{5}$ & $ \\textbf{5}$ & $ \\textbf{1} \\oplus \\textbf{2} \\oplus \\textbf{3} \\oplus \\textbf{4}$ & $ \\textbf{7} \\oplus \\textbf{9}$ & $ \\textbf{6} \\oplus \\textbf{8}$ & $ \\textbf{7} \\oplus \\textbf{9}$ & $ \\textbf{6} \\oplus 
\\textbf{8}$ \\\\\n$\\textbf{6}$ & $ \\textbf{6}$ & $ \\textbf{8}$ & $ \\textbf{8}$ & $ \\textbf{6}$ & $ \\textbf{7} \\oplus \\textbf{9}$ & $ \\textbf{1} \\oplus \\textbf{4}$ & $ \\textbf{5}$ & $ \\textbf{2} \\oplus \\textbf{3}$ & $ \\textbf{5}$ \\\\\n$\\textbf{7}$ & $ \\textbf{7}$ & $ \\textbf{9}$ & $ \\textbf{7}$ & $ \\textbf{9}$ & $ \\textbf{6} \\oplus \\textbf{8}$ & $ \\textbf{5}$ & $ \\textbf{1} \\oplus \\textbf{3}$ & $ \\textbf{5}$ & $ \\textbf{2} \\oplus \\textbf{4}$ \\\\\n$\\textbf{8}$ & $ \\textbf{8}$ & $ \\textbf{6}$ & $ \\textbf{6}$ & $ \\textbf{8}$ & $ \\textbf{7} \\oplus \\textbf{9}$ & $ \\textbf{2} \\oplus \\textbf{3}$ & $ \\textbf{5}$ & $ \\textbf{1} \\oplus \\textbf{4}$ & $ \\textbf{5}$ \\\\\n$\\textbf{9}$ & $ \\textbf{9}$ & $ \\textbf{7}$ & $ \\textbf{9}$ & $ \\textbf{7}$ & $ \\textbf{6} \\oplus \\textbf{8}$ & $ \\textbf{5}$ & $ \\textbf{2} \\oplus \\textbf{4}$ & $ \\textbf{5}$ & $ \\textbf{1} \\oplus \\textbf{3}$ \\\\\n\\hline\n\\end{tabular}\n\\\\[3mm]\n\\begin{tabular}{ |c|ccccccccc|}\n \\hline \n $s_i$ & $0$ & $ 0$ & $ \\frac{1}{2}$ & $ \\frac{1}{2}$ & $ 0$ & $ \\frac{1}{16}$ & $ \\frac{7}{16}$ & $ \\frac{9}{16}$ & $ \\frac{15}{16}$\\\\\n $d_i$ & $1$ & $ 1$ & $ 1$ & $ 1$ & $ 2$ & $\\zeta_{2}^{1}$ & $\\zeta_{2}^{1}$ & $\\zeta_{2}^{1}$ & $\\zeta_{2}^{1}$\\\\\n\\hline\n $9^{ 1}_{ 0}$ & $\\textbf{1}$ & $\\textbf{2}$ & $\\textbf{3}$ & $\\textbf{4}$ & $\\textbf{5}$ & $\\textbf{6}$ & $\\textbf{7}$ & $\\textbf{8}$ & $\\textbf{9}$ \\\\\n\\hline\n$\\textbf{1}$ & $ \\textbf{1}$ & $ \\textbf{2}$ & $ \\textbf{3}$ & $ \\textbf{4}$ & $ \\textbf{5}$ & $ \\textbf{6}$ & $ \\textbf{7}$ & $ \\textbf{8}$ & $ \\textbf{9}$ \\\\\n$\\textbf{2}$ & $ \\textbf{2}$ & $ \\textbf{1}$ & $ \\textbf{4}$ & $ \\textbf{3}$ & $ \\textbf{5}$ & $ \\textbf{8}$ & $ \\textbf{9}$ & $ \\textbf{6}$ & $ \\textbf{7}$ \\\\\n$\\textbf{3}$ & $ \\textbf{3}$ & $ \\textbf{4}$ & $ \\textbf{1}$ & $ \\textbf{2}$ & $ \\textbf{5}$ & $ \\textbf{6}$ & $ \\textbf{9}$ & $ \\textbf{8}$ & $ \\textbf{7}$ 
\\\\\n$\\textbf{4}$ & $ \\textbf{4}$ & $ \\textbf{3}$ & $ \\textbf{2}$ & $ \\textbf{1}$ & $ \\textbf{5}$ & $ \\textbf{8}$ & $ \\textbf{7}$ & $ \\textbf{6}$ & $ \\textbf{9}$ \\\\\n$\\textbf{5}$ & $ \\textbf{5}$ & $ \\textbf{5}$ & $ \\textbf{5}$ & $ \\textbf{5}$ & $ \\textbf{1} \\oplus \\textbf{2} \\oplus \\textbf{3} \\oplus \\textbf{4}$ & $ \\textbf{7} \\oplus \\textbf{9}$ & $ \\textbf{6} \\oplus \\textbf{8}$ & $ \\textbf{7} \\oplus \\textbf{9}$ & $ \\textbf{6} \\oplus \\textbf{8}$ \\\\\n$\\textbf{6}$ & $ \\textbf{6}$ & $ \\textbf{8}$ & $ \\textbf{6}$ & $ \\textbf{8}$ & $ \\textbf{7} \\oplus \\textbf{9}$ & $ \\textbf{1} \\oplus \\textbf{3}$ & $ \\textbf{5}$ & $ \\textbf{2} \\oplus \\textbf{4}$ & $ \\textbf{5}$ \\\\\n$\\textbf{7}$ & $ \\textbf{7}$ & $ \\textbf{9}$ & $ \\textbf{9}$ & $ \\textbf{7}$ & $ \\textbf{6} \\oplus \\textbf{8}$ & $ \\textbf{5}$ & $ \\textbf{1} \\oplus \\textbf{4}$ & $ \\textbf{5}$ & $ \\textbf{2} \\oplus \\textbf{3}$ \\\\\n$\\textbf{8}$ & $ \\textbf{8}$ & $ \\textbf{6}$ & $ \\textbf{8}$ & $ \\textbf{6}$ & $ \\textbf{7} \\oplus \\textbf{9}$ & $ \\textbf{2} \\oplus \\textbf{4}$ & $ \\textbf{5}$ & $ \\textbf{1} \\oplus \\textbf{3}$ & $ \\textbf{5}$ \\\\\n$\\textbf{9}$ & $ \\textbf{9}$ & $ \\textbf{7}$ & $ \\textbf{7}$ & $ \\textbf{9}$ & $ \\textbf{6} \\oplus \\textbf{8}$ & $ \\textbf{5}$ & $ \\textbf{2} \\oplus \\textbf{3}$ & $ \\textbf{5}$ & $ \\textbf{1} \\oplus \\textbf{4}$ \\\\\n\\hline\n\\end{tabular} \n\\end{table}\n\n\\def\\arraystretch{1.25} \\setlength\\tabcolsep{3pt}\n\\begin{table}[t] \n\\caption{\nThe third and the fourth entries in Table \\ref{mextZ2c} have different\nfusion rules, despite having the same $(d_i,s_i)$.\n} \n\\label{frZ2_5_3} \n\\centering\n\\begin{tabular}{ |c|ccccccccc|}\n \\hline \n $s_i$ & $0$ & $ 0$ & $ \\frac{1}{2}$ & $ \\frac{1}{2}$ & $ \\frac{1}{8}$ & $ \\frac{3}{16}$ & $ \\frac{7}{16}$ & $ \\frac{11}{16}$ & $ \\frac{15}{16}$\\\\\n $d_i$ & $1$ & $ 1$ & $ 1$ & $ 1$ & $ 2$ & $\\zeta_{2}^{1}$ & 
$\\zeta_{2}^{1}$ & $\\zeta_{2}^{1}$ & $\\zeta_{2}^{1}$\\\\\n\\hline\n $9^{ 1}_{ 1}$ & $\\textbf{1}$ & $\\textbf{2}$ & $\\textbf{3}$ & $\\textbf{4}$ & $\\textbf{5}$ & $\\textbf{6}$ & $\\textbf{7}$ & $\\textbf{8}$ & $\\textbf{9}$ \\\\\n\\hline\n$\\textbf{1}$ & $ \\textbf{1}$ & $ \\textbf{2}$ & $ \\textbf{3}$ & $ \\textbf{4}$ & $ \\textbf{5}$ & $ \\textbf{6}$ & $ \\textbf{7}$ & $ \\textbf{8}$ & $ \\textbf{9}$ \\\\\n$\\textbf{2}$ & $ \\textbf{2}$ & $ \\textbf{1}$ & $ \\textbf{4}$ & $ \\textbf{3}$ & $ \\textbf{5}$ & $ \\textbf{8}$ & $ \\textbf{9}$ & $ \\textbf{6}$ & $ \\textbf{7}$ \\\\\n$\\textbf{3}$ & $ \\textbf{3}$ & $ \\textbf{4}$ & $ \\textbf{1}$ & $ \\textbf{2}$ & $ \\textbf{5}$ & $ \\textbf{8}$ & $ \\textbf{7}$ & $ \\textbf{6}$ & $ \\textbf{9}$ \\\\\n$\\textbf{4}$ & $ \\textbf{4}$ & $ \\textbf{3}$ & $ \\textbf{2}$ & $ \\textbf{1}$ & $ \\textbf{5}$ & $ \\textbf{6}$ & $ \\textbf{9}$ & $ \\textbf{8}$ & $ \\textbf{7}$ \\\\\n$\\textbf{5}$ & $ \\textbf{5}$ & $ \\textbf{5}$ & $ \\textbf{5}$ & $ \\textbf{5}$ & $ \\textbf{1} \\oplus \\textbf{2} \\oplus \\textbf{3} \\oplus \\textbf{4}$ & $ \\textbf{7} \\oplus \\textbf{9}$ & $ \\textbf{6} \\oplus \\textbf{8}$ & $ \\textbf{7} \\oplus \\textbf{9}$ & $ \\textbf{6} \\oplus \\textbf{8}$ \\\\\n$\\textbf{6}$ & $ \\textbf{6}$ & $ \\textbf{8}$ & $ \\textbf{8}$ & $ \\textbf{6}$ & $ \\textbf{7} \\oplus \\textbf{9}$ & $ \\textbf{1} \\oplus \\textbf{4}$ & $ \\textbf{5}$ & $ \\textbf{2} \\oplus \\textbf{3}$ & $ \\textbf{5}$ \\\\\n$\\textbf{7}$ & $ \\textbf{7}$ & $ \\textbf{9}$ & $ \\textbf{7}$ & $ \\textbf{9}$ & $ \\textbf{6} \\oplus \\textbf{8}$ & $ \\textbf{5}$ & $ \\textbf{1} \\oplus \\textbf{3}$ & $ \\textbf{5}$ & $ \\textbf{2} \\oplus \\textbf{4}$ \\\\\n$\\textbf{8}$ & $ \\textbf{8}$ & $ \\textbf{6}$ & $ \\textbf{6}$ & $ \\textbf{8}$ & $ \\textbf{7} \\oplus \\textbf{9}$ & $ \\textbf{2} \\oplus \\textbf{3}$ & $ \\textbf{5}$ & $ \\textbf{1} \\oplus \\textbf{4}$ & $ \\textbf{5}$ \\\\\n$\\textbf{9}$ & $ \\textbf{9}$ & $ \\textbf{7}$ & $ 
\\textbf{9}$ & $ \\textbf{7}$ & $ \\textbf{6} \\oplus \\textbf{8}$ & $ \\textbf{5}$ & $ \\textbf{2} \\oplus \\textbf{4}$ & $ \\textbf{5}$ & $ \\textbf{1} \\oplus \\textbf{3}$ \\\\\n\\hline\n\\end{tabular}\n\\\\[3mm]\n\\begin{tabular}{ |c|ccccccccc|}\n \\hline \n $s_i$ & $0$ & $ 0$ & $ \\frac{1}{2}$ & $ \\frac{1}{2}$ & $ \\frac{1}{8}$ & $ \\frac{3}{16}$ & $ \\frac{7}{16}$ & $ \\frac{11}{16}$ & $ \\frac{15}{16}$\\\\\n $d_i$ & $1$ & $ 1$ & $ 1$ & $ 1$ & $ 2$ & $\\zeta_{2}^{1}$ & $\\zeta_{2}^{1}$ & $\\zeta_{2}^{1}$ & $\\zeta_{2}^{1}$\\\\\n\\hline\n $9^{ 1}_{ 1}$ & $\\textbf{1}$ & $\\textbf{2}$ & $\\textbf{3}$ & $\\textbf{4}$ & $\\textbf{5}$ & $\\textbf{6}$ & $\\textbf{7}$ & $\\textbf{8}$ & $\\textbf{9}$ \\\\\n\\hline\n$\\textbf{1}$ & $ \\textbf{1}$ & $ \\textbf{2}$ & $ \\textbf{3}$ & $ \\textbf{4}$ & $ \\textbf{5}$ & $ \\textbf{6}$ & $ \\textbf{7}$ & $ \\textbf{8}$ & $ \\textbf{9}$ \\\\\n$\\textbf{2}$ & $ \\textbf{2}$ & $ \\textbf{1}$ & $ \\textbf{4}$ & $ \\textbf{3}$ & $ \\textbf{5}$ & $ \\textbf{8}$ & $ \\textbf{9}$ & $ \\textbf{6}$ & $ \\textbf{7}$ \\\\\n$\\textbf{3}$ & $ \\textbf{3}$ & $ \\textbf{4}$ & $ \\textbf{1}$ & $ \\textbf{2}$ & $ \\textbf{5}$ & $ \\textbf{6}$ & $ \\textbf{9}$ & $ \\textbf{8}$ & $ \\textbf{7}$ \\\\\n$\\textbf{4}$ & $ \\textbf{4}$ & $ \\textbf{3}$ & $ \\textbf{2}$ & $ \\textbf{1}$ & $ \\textbf{5}$ & $ \\textbf{8}$ & $ \\textbf{7}$ & $ \\textbf{6}$ & $ \\textbf{9}$ \\\\\n$\\textbf{5}$ & $ \\textbf{5}$ & $ \\textbf{5}$ & $ \\textbf{5}$ & $ \\textbf{5}$ & $ \\textbf{1} \\oplus \\textbf{2} \\oplus \\textbf{3} \\oplus \\textbf{4}$ & $ \\textbf{7} \\oplus \\textbf{9}$ & $ \\textbf{6} \\oplus \\textbf{8}$ & $ \\textbf{7} \\oplus \\textbf{9}$ & $ \\textbf{6} \\oplus \\textbf{8}$ \\\\\n$\\textbf{6}$ & $ \\textbf{6}$ & $ \\textbf{8}$ & $ \\textbf{6}$ & $ \\textbf{8}$ & $ \\textbf{7} \\oplus \\textbf{9}$ & $ \\textbf{1} \\oplus \\textbf{3}$ & $ \\textbf{5}$ & $ \\textbf{2} \\oplus \\textbf{4}$ & $ \\textbf{5}$ \\\\\n$\\textbf{7}$ & $ \\textbf{7}$ & $ 
\\textbf{9}$ & $ \\textbf{9}$ & $ \\textbf{7}$ & $ \\textbf{6} \\oplus \\textbf{8}$ & $ \\textbf{5}$ & $ \\textbf{1} \\oplus \\textbf{4}$ & $ \\textbf{5}$ & $ \\textbf{2} \\oplus \\textbf{3}$ \\\\\n$\\textbf{8}$ & $ \\textbf{8}$ & $ \\textbf{6}$ & $ \\textbf{8}$ & $ \\textbf{6}$ & $ \\textbf{7} \\oplus \\textbf{9}$ & $ \\textbf{2} \\oplus \\textbf{4}$ & $ \\textbf{5}$ & $ \\textbf{1} \\oplus \\textbf{3}$ & $ \\textbf{5}$ \\\\\n$\\textbf{9}$ & $ \\textbf{9}$ & $ \\textbf{7}$ & $ \\textbf{7}$ & $ \\textbf{9}$ & $ \\textbf{6} \\oplus \\textbf{8}$ & $ \\textbf{5}$ & $ \\textbf{2} \\oplus \\textbf{3}$ & $ \\textbf{5}$ & $ \\textbf{1} \\oplus \\textbf{4}$ \\\\\n\\hline\n\\end{tabular}\n\\end{table}\n\n\\def1.25} \\setlength\\tabcolsep{3pt{1.25} \\setlength\\tabcolsep{3pt}\n\\begin{table}[t] \n\\caption{\nThe fusion rules of the first and the second entries in Table \\ref{mextZ2c}.\n} \n\\label{frZ2_5_3a} \n\\centering\n\\begin{tabular}{ |c|ccccccccc|}\n \\hline \n $s_i$ & $0$ & $ 0$ & $ \\frac{1}{2}$ & $ \\frac{1}{2}$ & $ \\frac{1}{8}$ & $ \\frac{1}{16}$ & $ \\frac{1}{16}$ & $ \\frac{9}{16}$ & $ \\frac{9}{16}$\\\\\n $d_i$ & $1$ & $ 1$ & $ 1$ & $ 1$ & $ 2$ & $\\zeta_{2}^{1}$ & $\\zeta_{2}^{1}$ & $\\zeta_{2}^{1}$ & $\\zeta_{2}^{1}$\\\\\n\\hline\n $9^{ 1}_{ 1}$ & $\\textbf{1}$ & $\\textbf{2}$ & $\\textbf{3}$ & $\\textbf{4}$ & $\\textbf{5}$ & $\\textbf{6}$ & $\\textbf{7}$ & $\\textbf{8}$ & $\\textbf{9}$ \\\\\n\\hline\n$\\textbf{1}$ & $ \\textbf{1}$ & $ \\textbf{2}$ & $ \\textbf{3}$ & $ \\textbf{4}$ & $ \\textbf{5}$ & $ \\textbf{6}$ & $ \\textbf{7}$ & $ \\textbf{8}$ & $ \\textbf{9}$ \\\\\n$\\textbf{2}$ & $ \\textbf{2}$ & $ \\textbf{1}$ & $ \\textbf{4}$ & $ \\textbf{3}$ & $ \\textbf{5}$ & $ \\textbf{8}$ & $ \\textbf{9}$ & $ \\textbf{6}$ & $ \\textbf{7}$ \\\\\n$\\textbf{3}$ & $ \\textbf{3}$ & $ \\textbf{4}$ & $ \\textbf{1}$ & $ \\textbf{2}$ & $ \\textbf{5}$ & $ \\textbf{8}$ & $ \\textbf{7}$ & $ \\textbf{6}$ & $ \\textbf{9}$ \\\\\n$\\textbf{4}$ & $ \\textbf{4}$ & $ \\textbf{3}$ 
& $ \\textbf{2}$ & $ \\textbf{1}$ & $ \\textbf{5}$ & $ \\textbf{6}$ & $ \\textbf{9}$ & $ \\textbf{8}$ & $ \\textbf{7}$ \\\\\n$\\textbf{5}$ & $ \\textbf{5}$ & $ \\textbf{5}$ & $ \\textbf{5}$ & $ \\textbf{5}$ & $ \\textbf{1} \\oplus \\textbf{2} \\oplus \\textbf{3} \\oplus \\textbf{4}$ & $ \\textbf{7} \\oplus \\textbf{9}$ & $ \\textbf{6} \\oplus \\textbf{8}$ & $ \\textbf{7} \\oplus \\textbf{9}$ & $ \\textbf{6} \\oplus \\textbf{8}$ \\\\\n$\\textbf{6}$ & $ \\textbf{6}$ & $ \\textbf{8}$ & $ \\textbf{8}$ & $ \\textbf{6}$ & $ \\textbf{7} \\oplus \\textbf{9}$ & $ \\textbf{1} \\oplus \\textbf{4}$ & $ \\textbf{5}$ & $ \\textbf{2} \\oplus \\textbf{3}$ & $ \\textbf{5}$ \\\\\n$\\textbf{7}$ & $ \\textbf{7}$ & $ \\textbf{9}$ & $ \\textbf{7}$ & $ \\textbf{9}$ & $ \\textbf{6} \\oplus \\textbf{8}$ & $ \\textbf{5}$ & $ \\textbf{1} \\oplus \\textbf{3}$ & $ \\textbf{5}$ & $ \\textbf{2} \\oplus \\textbf{4}$ \\\\\n$\\textbf{8}$ & $ \\textbf{8}$ & $ \\textbf{6}$ & $ \\textbf{6}$ & $ \\textbf{8}$ & $ \\textbf{7} \\oplus \\textbf{9}$ & $ \\textbf{2} \\oplus \\textbf{3}$ & $ \\textbf{5}$ & $ \\textbf{1} \\oplus \\textbf{4}$ & $ \\textbf{5}$ \\\\\n$\\textbf{9}$ & $ \\textbf{9}$ & $ \\textbf{7}$ & $ \\textbf{9}$ & $ \\textbf{7}$ & $ \\textbf{6} \\oplus \\textbf{8}$ & $ \\textbf{5}$ & $ \\textbf{2} \\oplus \\textbf{4}$ & $ \\textbf{5}$ & $ \\textbf{1} \\oplus \\textbf{3}$ \\\\\n\\hline\n\\end{tabular}\n\\\\[3mm]\n\\begin{tabular}{ |c|ccccccccc|}\n \\hline \n $s_i$ & $0$ & $ 0$ & $ \\frac{1}{2}$ & $ \\frac{1}{2}$ & $ \\frac{1}{8}$ & $ \\frac{5}{16}$ & $ \\frac{5}{16}$ & $ \\frac{13}{16}$ & $ \\frac{13}{16}$\\\\\n $d_i$ & $1$ & $ 1$ & $ 1$ & $ 1$ & $ 2$ & $\\zeta_{2}^{1}$ & $\\zeta_{2}^{1}$ & $\\zeta_{2}^{1}$ & $\\zeta_{2}^{1}$\\\\\n\\hline\n $9^{ 1}_{ 1}$ & $\\textbf{1}$ & $\\textbf{2}$ & $\\textbf{3}$ & $\\textbf{4}$ & $\\textbf{5}$ & $\\textbf{6}$ & $\\textbf{7}$ & $\\textbf{8}$ & $\\textbf{9}$ \\\\\n\\hline\n$\\textbf{1}$ & $ \\textbf{1}$ & $ \\textbf{2}$ & $ \\textbf{3}$ & $ \\textbf{4}$ & $ 
\\textbf{5}$ & $ \\textbf{6}$ & $ \\textbf{7}$ & $ \\textbf{8}$ & $ \\textbf{9}$ \\\\\n$\\textbf{2}$ & $ \\textbf{2}$ & $ \\textbf{1}$ & $ \\textbf{4}$ & $ \\textbf{3}$ & $ \\textbf{5}$ & $ \\textbf{8}$ & $ \\textbf{9}$ & $ \\textbf{6}$ & $ \\textbf{7}$ \\\\\n$\\textbf{3}$ & $ \\textbf{3}$ & $ \\textbf{4}$ & $ \\textbf{1}$ & $ \\textbf{2}$ & $ \\textbf{5}$ & $ \\textbf{8}$ & $ \\textbf{7}$ & $ \\textbf{6}$ & $ \\textbf{9}$ \\\\\n$\\textbf{4}$ & $ \\textbf{4}$ & $ \\textbf{3}$ & $ \\textbf{2}$ & $ \\textbf{1}$ & $ \\textbf{5}$ & $ \\textbf{6}$ & $ \\textbf{9}$ & $ \\textbf{8}$ & $ \\textbf{7}$ \\\\\n$\\textbf{5}$ & $ \\textbf{5}$ & $ \\textbf{5}$ & $ \\textbf{5}$ & $ \\textbf{5}$ & $ \\textbf{1} \\oplus \\textbf{2} \\oplus \\textbf{3} \\oplus \\textbf{4}$ & $ \\textbf{7} \\oplus \\textbf{9}$ & $ \\textbf{6} \\oplus \\textbf{8}$ & $ \\textbf{7} \\oplus \\textbf{9}$ & $ \\textbf{6} \\oplus \\textbf{8}$ \\\\\n$\\textbf{6}$ & $ \\textbf{6}$ & $ \\textbf{8}$ & $ \\textbf{8}$ & $ \\textbf{6}$ & $ \\textbf{7} \\oplus \\textbf{9}$ & $ \\textbf{1} \\oplus \\textbf{4}$ & $ \\textbf{5}$ & $ \\textbf{2} \\oplus \\textbf{3}$ & $ \\textbf{5}$ \\\\\n$\\textbf{7}$ & $ \\textbf{7}$ & $ \\textbf{9}$ & $ \\textbf{7}$ & $ \\textbf{9}$ & $ \\textbf{6} \\oplus \\textbf{8}$ & $ \\textbf{5}$ & $ \\textbf{1} \\oplus \\textbf{3}$ & $ \\textbf{5}$ & $ \\textbf{2} \\oplus \\textbf{4}$ \\\\\n$\\textbf{8}$ & $ \\textbf{8}$ & $ \\textbf{6}$ & $ \\textbf{6}$ & $ \\textbf{8}$ & $ \\textbf{7} \\oplus \\textbf{9}$ & $ \\textbf{2} \\oplus \\textbf{3}$ & $ \\textbf{5}$ & $ \\textbf{1} \\oplus \\textbf{4}$ & $ \\textbf{5}$ \\\\\n$\\textbf{9}$ & $ \\textbf{9}$ & $ \\textbf{7}$ & $ \\textbf{9}$ & $ \\textbf{7}$ & $ \\textbf{6} \\oplus \\textbf{8}$ & $ \\textbf{5}$ & $ \\textbf{2} \\oplus \\textbf{4}$ & $ \\textbf{5}$ & $ \\textbf{1} \\oplus \\textbf{3}$ \\\\\n\\hline\n\\end{tabular}\n\\end{table}\n\n\n\nThe first and the second entries in Table \\ref{SETZ2-5} describe two $N=5$\n$\\mce{\\Rp(Z_2)}$'s. 
They describe two different sets of bulk excitations for\n$Z_2$-SET orders. Those bulk excitations have identical $s_i$ and $d_i$,\nbut they have different fusion rules $N^{ij}_k$: the 4 $d=1$ particles have a\n$Z_2\\times Z_2$ fusion rule for the first entry, and they have a $Z_4$ fusion\nrule for the second entry (as indicated by F:$Z_2\\times Z_2$ or F:$Z_4$ in the\ncomment column of Table \\ref{SETZ2-5}).\n\n\\subsubsection{The first entry in Table \\ref{SETZ2-5}}\n\nLet us compute the modular extensions of the first entry (\\ie\n$5^{\\zeta_2^1}_{0}$ with $Z_2\\times Z_2$ fusion). Since the total quantum\ndimension of the modular extensions is $D^2=16$, the modular extensions must\nhave rank $N=13$ or less (since quantum dimension $d \\geq 1$).\n\n\nNow we would like to show $N=13$ is not possible. If a modular extension has\n$N=13$, then it must have 12 particles (labeled by $a=1,\\cdots,12$) with\nquantum dimension $d_a=1$, and one particle (labeled by $x$) with quantum\ndimension $d_x=2$, so that $12\\times 1^2+2^2=D^2=16$. In this case,\nwe must have the fusion rule\n\\begin{align}\n a\\otimes x=x,\\ \\ \\ \\ x\\otimes x= 1\\oplus 2\\oplus 3\\oplus 4.\n\\end{align}\nwhere $x\\otimes x$ is determined by the fusion rule of the $\\mce{\\Rp(Z_2)}$.\nThe above determines the fusion matrix $N_x$ defined as $(N_x)_{ij} \\equiv\nN^{xi}_{j}$. The largest eigenvalue of $N_x$ should be $2$, the quantum\ndimension of $x$. Indeed, we find that the largest eigenvalue of $N_x$ is $2$.\nBut we also require that $N_x$ can be diagonalized by a unitary matrix (which\nhappens to be the $S$-matrix). $N_x$ fails such a test. So $N$ cannot be 13.\n\n$N$ also cannot be 12. If $N=12$, then the modular extension will have 10\nparticles (labeled by $a=1,\\cdots,10$) with quantum dimension $d_a=1$, one\nparticle (labeled by $x$) with quantum dimension $d_x=2$, and one particle\n(labeled by $y$) with quantum dimension $d_y=\\sqrt 2$. 
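(As a quick aside, the rank-13 exclusion above is easy to verify numerically. The following NumPy sketch, which is ours and not part of the original derivation, builds the fusion matrix $N_x$ from the forced fusion rules and confirms both claims: the largest eigenvalue of $N_x$ is indeed $2=d_x$, yet $N_x$ is not a normal matrix, so it cannot be diagonalized by any unitary matrix, let alone the $S$-matrix.)

```python
import numpy as np

# Hedged numerical check of the rank-13 exclusion (our labeling):
# particles 0..11 have d = 1; particle 12 is x with d_x = 2.
# Forced fusion rules:  a (x) x = x  for a = 0..11,  and
# x (x) x = 0 + 1 + 2 + 3  (the four particles with Z2 x Z2 fusion).
Nx = np.zeros((13, 13), dtype=int)   # (Nx)_{ij} = N^{xi}_j
Nx[:12, 12] = 1                      # a (x) x = x
Nx[12, :4] = 1                       # x (x) x = 0 + 1 + 2 + 3

# The largest eigenvalue (in magnitude) equals d_x = 2, as required ...
print(round(max(abs(np.linalg.eigvals(Nx))), 6))

# ... but Nx must be unitarily diagonalizable (by the S-matrix),
# i.e. Nx must be normal.  It is not:
print(np.allclose(Nx @ Nx.T, Nx.T @ Nx))   # False -> rank 13 is excluded
```

The same bookkeeping (quantum dimensions squared summing to $D^2=16$) underlies the rank-12 exclusion that follows.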
The fusion of 10\n$d_a=1$ particles is described by an abelian group $Z_{10}$ or $Z_2\\times Z_5$.\nNeither of them contains $Z_2\\times Z_2$ as a subgroup. Thus $N=12$ is incompatible\nwith the $Z_2\\times Z_2$ fusion of the first four $d_a=1$ particles.\n\nWe searched for modular extensions with $N$ up to 11 and find four $N=9$\nmodular extensions (see Table \\ref{mextZ2b}); thus the first entry\ncorresponds to valid $Z_2$-SET states. \n\n\nIn fact, one of the $Z_2$-SET states is the $Z_2$ gauge theory with a $Z_2$\nglobal symmetry, where the $Z_2$ symmetry action exchanges the $Z_2$-charge $e$\nand the $Z_2$-vortex $m$. The degenerate $e$ and $m$ give rise to the\n$(d,s)=(2,0)$ particle (the fifth particle in the table). The bound state of\n$e$ and $m$ is a fermion $f$. It may carry the $Z_2$-charge 0 or 1, which\ncorrespond to the third and the fourth particles with $(d,s)=(1,1\\/2)$ in the\ntable. \n\nHowever, from the discussion in the last few sections, we know that a\n$\\mce{\\Rp(Z_2)}$ always has 2 modular extensions, corresponding to the 2\nbosonic $Z_2$-SPT states in 2+1D. This seems to contradict the above result\nthat the $Z_2$-SET state, $5^{\\zeta_2^1}_{0}$ with $Z_2\\times Z_2$ fusion, has\nfour different modular extensions. \n\nIn fact, there is no contradiction. Here, we only use $(N^{ij}_k,s_i)$ to\nlabel different entries. However, a $\\mce{\\cE}$ is fully characterized by\n$(N^{ij}_k,s_i)$ plus the $F$-tensors and the $R$-tensors. \nTo see this point, we note that the Ising-like UMTC $N^B_c=3^B_{m\\/2}$,\n$m=1,3,\\cdots,15$ (with central charge $c=m\\/2$) has three particles: $1$, $f$\nwith $(d_f,s_f)=(1,1\\/2)$, and $ \\sigma $ with $(d_\\sigma ,s_\\sigma\n)=(\\sqrt{2},m\\/16)$. 
Its $R$-tensor is given by\cite{K062}\n\\begin{align}\n R^{ff}_1 &=-1, &\n R^{ \\sigma f }_\\sigma &= \n R^{ f \\sigma }_\\sigma = -\\ii^m, \n\\\\\n R^{ \\sigma \\sigma }_1 &= (-1)^{\\frac{m^2-1}{8}} \\ee^{-\\ii \\frac{ \\pi }{8} m}, &\n R^{ \\sigma \\sigma }_f &= (-1)^{\\frac{m^2-1}{8}} \\ee^{\\ii \\frac{3 \\pi }{8} m},\n\\nonumber \n\\end{align}\nand some components of the $F$-tensor are given by\n\\begin{align}\n F^{f \\sigma \\sigma; \\sigma }_{ f;1 }=\n F^{ \\sigma \\sigma f; \\sigma }_{ f;1 }=1.\n\\end{align}\nThe values of $R^{ \\sigma f }_\\sigma$ and $R^{ f \\sigma }_\\sigma$ are not gauge\ninvariant. But if we fix the values of the $F$-tensor to be the ones given\nabove, this will fix the gauge, and we can treat $R^{ \\sigma f }_\\sigma$ and\n$R^{ f \\sigma }_\\sigma$ as if they were gauge-invariant quantities.\n\nIf we stack $N^B_c=3^B_{m\\/2}$ and $N^B_c=3^B_{m'\\/2}$ together, the induced\nUMTC $ 3^B_{m\\/2}\\boxtimes 3^B_{m'\\/2}$ contains particles\n$\\textbf{1}=(1,1)$, $\\textbf{2}=(f,f')$, $\\textbf{3}=(f,1)$,\n$\\textbf{4}=(1,f')$, $\\textbf{5}=( \\sigma , \\sigma' )$. Those 5 particles are\nclosed under fusion and correspond to the 5 particles in $\\mce{\\Rp(Z_2)}$\n$5^{\\zeta_2^1}_{m+m'}$. We note that some components of the $R$-tensor of $\n3^B_{m\\/2}\\boxtimes 3^B_{m'\\/2}$ are given by\n\\begin{align}\n R^{(f,1),( \\sigma , \\sigma')}_{( \\sigma , \\sigma')}\n&= R^{( \\sigma , \\sigma'), (f,1)}_{( \\sigma , \\sigma')} =-\\ii^m,\n\\nonumber\\\\\n R^{(1,f'),( \\sigma , \\sigma')}_{( \\sigma , \\sigma')}\n&= R^{( \\sigma , \\sigma'), (1,f')}_{( \\sigma , \\sigma')} =-\\ii^{m'}.\n\\end{align}\n\nTaking $(m,m')=(-1,1)$ and $(1,-1)$, it is clear that the $ 3^B_{-1\\/2}\\boxtimes\n3^B_{ 1\\/2}$ and $ 3^B_{ 1\\/2}\\boxtimes 3^B_{-1\\/2}$ give rise to two different\n$R$-tensors that have identical $(N^{ij}_k,s_i)$. 
So the first entry in\nTable \\ref{SETZ2-5} (\\ie $5^{\\zeta_2^1}_{0}$ with $Z_2\\times Z_2$ fusion) splits\ninto two different entries if we include the $R$-tensors. Each gives rise to two\nmodular extensions, which is why we obtain four modular extensions. In\nTable \\ref{mextZ2b}, the first two modular extensions have the same\n$(N^{ij}_k,s_i)$, $F$-tensor and $R$-tensor when restricted to the first 5\nparticles. \nThe second pair of modular extensions also has the same $(N^{ij}_k,s_i)$,\n$F$-tensor and $R$-tensor when restricted to the first 5 particles, but their\n$R$-tensor is different from that of the first pair.\nHowever, note that under the exchange of the two fermions, the $R$-tensor of\nthe first pair becomes that of the second pair.\n\n\nWe would like to stress that Table \\ref{mextZ2b} is obtained using the\nME-equivalence relation, \\ie the different entries are different under the\nME-equivalence relation (see Section \\ref{clGQL2}). We see that for each fixed\n$\\mce{\\Rp(Z_2)}$ (\\ie for each fixed set of $(N^{ij}_k,s_i)$, $F$-tensor and\n$R$-tensor), there are two modular extensions, which agrees with our general result\nfor modular extensions. However, if we ignore the $F$-tensor and $R$-tensor, then\nfor each fixed set of $(N^{ij}_k,s_i)$, we get four modular extensions. This\nis because $(N^{ij}_k,s_i)$ is only a partial description of a\n$\\mce{\\Rp(Z_2)}$, and,\nas discussed above, in this case there are two ways to assign the $F$-tensor and $R$-tensor to\nit.\nThis is why each fixed $(N^{ij}_k,s_i)$ has four modular extensions, while\neach fixed $(N^{ij}_k,s_i,F,R)$ has only two modular extensions.\n\nOn the other hand, under the TO-equivalence relation (see\nSection \\ref{clGQL2}),\nthe two ways to assign the $F$-tensor and $R$-tensor are actually equivalent\n(related by exchanging the two fermions), and the first entry in\nTable \\ref{SETZ2-5} corresponds to only one $\\mce{\\Rp(Z_2)}$. 
Thus,\nthe first entry is equivalent to the third entry, and\nthe second entry is equivalent to the fourth entry in Table \\ref{mextZ2b}.\nSo the four entries of Table \\ref{mextZ2b} in fact represent only two distinct\n$Z_2$-SET orders.\n\nOne of the two $Z_2$-SET orders has been studied extensively. It corresponds\nto $Z_2$ gauge theory with a $Z_2$ global symmetry that exchanges the\n$Z_2$-gauge-charge $e$ and the $Z_2$-gauge-vortex $m$\\cite{W0303,KLW0834}. \n\n\\subsubsection{The second entry in Table \\ref{SETZ2-5}}\n\nNext, we compute the modular extensions of the second entry in Table\n\\ref{SETZ2-5} (\\ie $5^{\\zeta_2^1}_{0}$ with\n$Z_4$ fusion). Again, we can use the same argument to show that modular\nextensions of rank 12 and above do not exist. We searched for modular\nextensions with $N$ up to 11, and find that there are no modular extensions.\nSo the second entry is not realizable and does not correspond to any valid\nbosonic $Z_2$-SET in 2+1D. This is indicated by NR in the comment column of\nTable \\ref{SETZ2-5}.\n\nNaively, the (non-existent) state from the second entry is very similar to\nthat from the first entry. It is also a $Z_2$ gauge theory with a $Z_2$ global\nsymmetry that exchanges $e$ and $m$. However, for the second entry, the $f$\nparticles (the third and the fourth particles) are assigned fractional\n$Z_2$-charges of $\\pm 1\\/2$. This leads to the $Z_4$ fusion rule. Our result\nimplies that such an assignment is not realizable (or is illegal).\nIt turns out that none of the $5^{\\zeta_2^1}_{c}$'s with $Z_4$ fusion have\nmodular extensions. They are not realizable, and do not correspond to any 2+1D\nbosonic $Z_2$-SET orders.\n\n\\subsubsection{The third entry in Table \\ref{SETZ2-5}}\n\nThird, let us compute the modular extensions of the third entry in Table\n\\ref{SETZ2-5} (\\ie $5^{\\zeta_2^1}_{1}$ with $Z_2\\times Z_2$ fusion). We find\nthat the entry has four modular extensions. 
In fact, the entry corresponds to\ntwo different $\\mce{\\Rp(Z_2)}$'s, each with two modular extensions, as implied\nby the two $Z_2$-SPT states. The two $\\mce{\\Rp(Z_2)}$'s have identical\n$(N^{ij}_k,s_i,c)$, but different $F$-tensors and $R$-tensors. Sometimes two\ndifferent $\\mce{\\cE}$'s (with different $F$-tensors and $R$-tensors) can\nhave the same $(N^{ij}_k,s_i)$'s. The third, seventh,\\dots, entries of Table\n\\ref{SETZ2-5} provide such examples. We would like to stress that this is different\nfrom the first entry in Table \\ref{SETZ2-5}, which corresponds to one\n$\\mce{\\Rp(Z_2)}$.\n\nTo see these different $F$-tensors and $R$-tensors, we note that one of the two\n$5^{\\zeta_2^1}_{1}$'s with $Z_2\\times Z_2$ fusion has modular extensions given by\n$3^B_{1\\/2}\\boxtimes 3^B_{1\\/2}$ and $3^B_{-3\\/2}\\boxtimes 3^B_{5\\/2}$. We find that the\n$R$-tensor for this first $5^{\\zeta_2^1}_{1}$ with $Z_2\\times Z_2$ fusion is\ngiven by\n\\begin{align}\n R^{(f,1),( \\sigma , \\sigma')}_{( \\sigma , \\sigma')}\n&= R^{( \\sigma , \\sigma'), (f,1)}_{( \\sigma , \\sigma')} =-\\ii,\n\\nonumber\\\\\n R^{(1,f'),( \\sigma , \\sigma')}_{( \\sigma , \\sigma')}\n&= R^{( \\sigma , \\sigma'), (1,f')}_{( \\sigma , \\sigma')} =-\\ii.\n\\end{align}\nThe second $5^{\\zeta_2^1}_{1}$ with $Z_2\\times Z_2$ fusion has modular\nextensions given by $3^B_{-1\\/2}\\boxtimes 3^B_{3\\/2}$ and $3^B_{3\\/2}\\boxtimes\n3^B_{-1\\/2}$. We find that the $R$-tensor for the second $5^{\\zeta_2^1}_{1}$ with\n$Z_2\\times Z_2$ fusion is given by\n\\begin{align}\n R^{(f,1),( \\sigma , \\sigma')}_{( \\sigma , \\sigma')}\n&= R^{( \\sigma , \\sigma'), (f,1)}_{( \\sigma , \\sigma')} =\\ii,\n\\nonumber\\\\\n R^{(1,f'),( \\sigma , \\sigma')}_{( \\sigma , \\sigma')}\n&= R^{( \\sigma , \\sigma'), (1,f')}_{( \\sigma , \\sigma')} =\\ii.\n\\end{align}\nWe see that the two $5^{\\zeta_2^1}_{1}$'s with $Z_2\\times Z_2$ fusion are\nreally different $\\mce{\\Rp(Z_2)}$'s. 
Each $5^{\\zeta_2^1}_{1}$ has two modular\nextensions, and that is why we have four entries in Table \\ref{mextZ2c}.\n\nAgain, Table \\ref{mextZ2c} is obtained using the ME-equivalence relation,\nand is not a table of GQLs. Under the TO-equivalence relation, the third\nentry is equivalent to the fourth entry of Table \\ref{mextZ2c}. So the four\nentries in Table \\ref{mextZ2c} actually describe \\emph{three} different\n$Z_2$-SET orders. This has a very interesting consequence: \\emph{The $Z_2$-SET\nstate described by the third (or fourth) entry in Table \\ref{mextZ2c}, after being stacked\nwith a $Z_2$-SPT state, remains in the same phase.} This is an example\nof the following general statement made previously: \\emph{The GQLs with bulk\nexcitations described by $\\cC$ are in one-to-one correspondence with the\nquotient $\\mathcal{M}_{ext}(\\cC)\\/\\mathrm{Aut}(\\cC)$ plus a central charge $c$.} \nIn this example, $\\mathrm{Aut}(\\cC)$ is non-trivial.\n\nIt is worth noting here that for the second $5^{\\zeta_2^1}_{1}$, the two modular\nextensions $3^B_{-1\\/2}\\boxtimes 3^B_{3\\/2}$ and $3^B_{3\\/2}\\boxtimes 3^B_{-1\\/2}$\nare actually equivalent UMTCs. This is an example of different embeddings\nleading to different modular extensions. For $3^B_{-1\\/2}\\boxtimes 3^B_{3\\/2}$ the\nfirst fermion in $5^{\\zeta_2^1}_{1}$ is embedded into $3^B_{-1\\/2}$ and the\nsecond fermion is embedded into $3^B_{3\\/2}$, while for $3^B_{3\\/2}\\boxtimes\n3^B_{-1\\/2}$ the first fermion is embedded into $3^B_{3\\/2}$ and the second\nfermion is embedded into $3^B_{-1\\/2}$. The equivalence between\n$3^B_{-1\\/2}\\boxtimes 3^B_{3\\/2}$ and $3^B_{3\\/2}\\boxtimes 3^B_{-1\\/2}$ that\nexchanges both fermions and symmetry twists fails to relate the two\nembeddings, as they differ by a non-trivial automorphism of $5^{\\zeta_2^1}_{1}$ that exchanges only the two fermions. 
This is an\nexample of the $\\mathrm{Aut}(\\cC)$ action permuting the modular extensions, as discussed in Section \\ref{clGQL}.\n\n\n\\def\\arraystretch{1.25} \\setlength\\tabcolsep{3pt}\n\\begin{table}[t] \n\\caption{\nThe three modular extensions of $\\Rp(Z_3)$.\n} \n\\label{mextZ3} \n\\centering\n\\begin{tabular}{ |c|c|l|l|l| } \n\\hline \n$N^{|\\Th|}_{c}$ & $D^2$ & $d_1,d_2,\\cdots$ & $s_1,s_2,\\cdots$ & comment \\\\\n \\hline \n$3^{\\zeta_{4}^{1}}_{ 0}$ & $3$ & $1, 1, 1$ & $0, 0, 0$ & $\\Rp(Z_3)$ \\\\\n\\hline\n$9^{ B}_{ 0}$ & $9$ & $1\\times 9$ & $0, 0, 0, 0, 0, \\frac{1}{3}, \\frac{1}{3}, \\frac{2}{3}, \\frac{2}{3}$ & $Z_3$ gauge\\\\\n$9^{ B}_{ 0}$ & $9$ & $1\\times 9$ & $0, 0, 0, \\frac{1}{9}, \\frac{1}{9}, \\frac{4}{9}, \\frac{4}{9}, \\frac{7}{9}, \\frac{7}{9}$ & \\\\\n$9^{ B}_{ 0}$ & $9$ & $1\\times 9$ & $0, 0, 0, \\frac{2}{9}, \\frac{2}{9}, \\frac{5}{9}, \\frac{5}{9}, \\frac{8}{9}, \\frac{8}{9}$ & \\\\\n \\hline \n\\end{tabular} \n\\end{table}\n\n\\def\\arraystretch{1.25} \\setlength\\tabcolsep{3pt}\n\\begin{table}[t] \n\\caption{\nThe six modular extensions of $\\Rp(S_3)$.\n} \n\\label{mextS3} \n\\centering\n\\begin{tabular}{ |c|c|l|l|l| } \n\\hline \n$N^{|\\Th|}_{c}$ & $D^2$ & $d_1,d_2,\\cdots$ & $s_1,s_2,\\cdots$ & comment \\\\\n\\hline \n$3^{\\sqrt{6}}_{ 0}$ & $6$ & $1, 1, 2$ & $0, 0, 0$ & $\\Rp(S_3)$ \\\\\n\\hline\n$8^{ B}_{ 0}$ & $36$ & $1, 1, 2, 2, 2, 2, 3, 3$ & $0, 0, 0, 0, \\frac{1}{3}, \\frac{2}{3}, 0, \\frac{1}{2}$ & $S_3$ gauge\\\\\n$8^{ B}_{ 0}$ & $36$ & $1, 1, 2, 2, 2, 2, 3, 3$ & $0, 0, 0, 0, \\frac{1}{3}, \\frac{2}{3}, \\frac{1}{4}, \\frac{3}{4}$ & \\\\\n$8^{ B}_{ 0}$ & $36$ & $1, 1, 2, 2, 2, 2, 3, 3$ & $0, 0, 0, \\frac{1}{9}, \\frac{4}{9}, \\frac{7}{9}, 0, \\frac{1}{2}$ & $(B_4,2)$ \\\\\n$8^{ B}_{ 0}$ & $36$ & $1, 1, 2, 2, 2, 2, 3, 3$ & $0, 0, 0, \\frac{1}{9}, \\frac{4}{9}, \\frac{7}{9}, \\frac{1}{4}, \\frac{3}{4}$ & \\\\\n$8^{ B}_{ 0}$ & $36$ & $1, 1, 2, 2, 2, 2, 3, 3$ & $0, 0, 0, \\frac{2}{9}, 
\\frac{5}{9}, \\frac{8}{9}, 0, \\frac{1}{2}$ & $(B_4,-2)$ \\\\\n$8^{ B}_{ 0}$ & $36$ & $1, 1, 2, 2, 2, 2, 3, 3$ & $0, 0, 0, \\frac{2}{9}, \\frac{5}{9}, \\frac{8}{9}, \\frac{1}{4}, \\frac{3}{4}$ & \\\\\n \\hline \n\\end{tabular} \n\\end{table}\n\n\\subsection{$Z_3$, $Z_5$, and $S_3$ SPT orders for bosonic systems}\n\nWe also find that $\\Rp(Z_3)$ has 3 modular extensions (see Table \\ref{mextZ3}),\n$\\Rp(Z_5)$ has 5 modular extensions (see Table \\ref{mextZ5}), and $\\Rp(S_3)$\nhas 6 modular extensions (see Table \\ref{mextS3}). They correspond to the 3\n$Z_3$-SPT states, the 5 $Z_5$-SPT states, and the 6 $S_3$-SPT states,\nrespectively. These results agree with those from group cohomology\ntheory\\cite{CGL1314}.\n\nWe note that for $\\Rp(Z_2)$, $\\Rp(Z_3)$, and $\\Rp(S_3)$, their modular\nextensions all correspond to distinct UMTCs. However, for $\\Rp(Z_5)$, its 5\nmodular extensions only correspond to 3 distinct UMTCs. $\\Rp(Z_5)$ still has 5\nmodular extensions because it can be embedded into the same UMTC in\ndifferent ways. The different embeddings correspond to different modular\nextensions.\n\n\n\\def\\arraystretch{1.25} \\setlength\\tabcolsep{3pt}\n\\begin{table}[t] \n\\caption{\nThe 16 modular extensions of $\\sRp(Z_2^f)$. 
\n} \n\\label{mextZ2f} \n\\centering\n\\begin{tabular}{ |c|c|l|l|l| } \n\\hline \n$N^{|\\Th|}_{c}$ & $D^2$ & $d_1,d_2,\\cdots$ & $s_1,s_2,\\cdots$ & comment \\\\\n \\hline \n$2^{ 0}_{0}$ & $2$ & $1, 1$ & $0, \\frac{1}{2}$ & $\\sRp(Z_2^f)$ \\\\\n\\hline\n$4^{ B}_{ 0}$ & $4$ & $1, 1, 1, 1$ & $0, \\frac{1}{2}, 0, 0$ & $Z_2$ gauge\\\\\n\\hline\n$4^{ B}_{ 1}$ & $4$ & $1, 1, 1, 1$ & $0, \\frac{1}{2}, \\frac{1}{8}, \\frac{1}{8}$ & F:$Z_4$ \\\\\n$4^{ B}_{ 2}$ & $4$ & $1, 1, 1, 1$ & $0, \\frac{1}{2}, \\frac{1}{4}, \\frac{1}{4}$ & F:$Z_2\\times Z_2$ \\\\\n$4^{ B}_{ 3}$ & $4$ & $1, 1, 1, 1$ & $0, \\frac{1}{2}, \\frac{3}{8}, \\frac{3}{8}$ & F:$Z_4$ \\\\\n$4^{ B}_{ 4}$ & $4$ & $1, 1, 1, 1$ & $0, \\frac{1}{2}, \\frac{1}{2}, \\frac{1}{2}$ & F:$Z_2\\times Z_2$ \\\\\n$4^{ B}_{-3}$ & $4$ & $1, 1, 1, 1$ & $0, \\frac{1}{2}, \\frac{5}{8}, \\frac{5}{8}$ & F:$Z_4$ \\\\\n$4^{ B}_{-2}$ & $4$ & $1, 1, 1, 1$ & $0, \\frac{1}{2}, \\frac{3}{4}, \\frac{3}{4}$ & F:$Z_2\\times Z_2$ \\\\\n$4^{ B}_{-1}$ & $4$ & $1, 1, 1, 1$ & $0, \\frac{1}{2}, \\frac{7}{8}, \\frac{7}{8}$ & F:$Z_4$ \\\\\n$3^{ B}_{ 1\/2}$ & $4$ & $1, 1,\\zeta_{2}^{1}$ & $0, \\frac{1}{2}, \\frac{1}{16}$ & $p+\\ii p$ SC\\\\\n$3^{ B}_{ 3\/2}$ & $4$ & $1, 1,\\zeta_{2}^{1}$ & $0, \\frac{1}{2}, \\frac{3}{16}$ & \\\\\n$3^{ B}_{ 5\/2}$ & $4$ & $1, 1,\\zeta_{2}^{1}$ & $0, \\frac{1}{2}, \\frac{5}{16}$ & \\\\\n$3^{ B}_{ 7\/2}$ & $4$ & $1, 1,\\zeta_{2}^{1}$ & $0, \\frac{1}{2}, \\frac{7}{16}$ & \\\\\n$3^{ B}_{-7\/2}$ & $4$ & $1, 1,\\zeta_{2}^{1}$ & $0, \\frac{1}{2}, \\frac{9}{16}$ & \\\\\n$3^{ B}_{-5\/2}$ & $4$ & $1, 1,\\zeta_{2}^{1}$ & $0, \\frac{1}{2}, \\frac{11}{16}$ & \\\\\n$3^{ B}_{-3\/2}$ & $4$ & $1, 1,\\zeta_{2}^{1}$ & $0, \\frac{1}{2}, \\frac{13}{16}$ & \\\\\n$3^{ B}_{-1\/2}$ & $4$ & $1, 1,\\zeta_{2}^{1}$ & $0, \\frac{1}{2}, \\frac{15}{16}$ & \\\\\n \\hline \n\\end{tabular} \n\\end{table}\n\n\n\\def1.25} \\setlength\\tabcolsep{3pt{1.25} \\setlength\\tabcolsep{3pt}\n\\begin{table*}[t] \n\\caption{\nThe five modular extensions of 
$\\Rp(Z_5)$.\n} \n\\label{mextZ5} \n\\centering\n\\begin{tabular}{ |c|c|l|l|l| } \n\\hline \n$N^{|\\Th|}_{c}$ & $D^2$ & $d_1,d_2,\\cdots$ & $s_1,s_2,\\cdots$ & comment \\\\\n \\hline \n$5^{\\sqrt{5}}_{ 0}$ & $5$ & $1\\times 5$ & $0, 0, 0, 0, 0$ & \\\\\n\\hline\n$25^{ B}_{ 0}$ & $25$ & $1\\times 25$ & $0, 0, 0, 0, 0, 0, 0, 0, 0, \\frac{1}{5}, \\frac{1}{5}, \\frac{1}{5}, \\frac{1}{5}, \\frac{2}{5}, \\frac{2}{5}, \\frac{2}{5}, \\frac{2}{5}, \\frac{3}{5}, \\frac{3}{5}, \\frac{3}{5}, \\frac{3}{5}, \\frac{4}{5}, \\frac{4}{5}, \\frac{4}{5}, \\frac{4}{5}$ & $5^{ B}_{ 0}\\boxtimes 5^{ B}_{ 0}$\\\\\n$25^{ B}_{ 0}$ & $25$ & $1\\times 25$ & $0, 0, 0, 0, 0, \\frac{1}{25}, \\frac{1}{25}, \\frac{4}{25}, \\frac{4}{25}, \\frac{6}{25}, \\frac{6}{25}, \\frac{9}{25}, \\frac{9}{25}, \\frac{11}{25}, \\frac{11}{25}, \\frac{14}{25}, \\frac{14}{25}, \\frac{16}{25}, \\frac{16}{25}, \\frac{19}{25}, \\frac{19}{25}, \\frac{21}{25}, \\frac{21}{25}, \\frac{24}{25}, \\frac{24}{25}$ & \\\\\n$25^{ B}_{ 0}$ & $25$ & $1\\times 25$ & $0, 0, 0, 0, 0, \\frac{1}{25}, \\frac{1}{25}, \\frac{4}{25}, \\frac{4}{25}, \\frac{6}{25}, \\frac{6}{25}, \\frac{9}{25}, \\frac{9}{25}, \\frac{11}{25}, \\frac{11}{25}, \\frac{14}{25}, \\frac{14}{25}, \\frac{16}{25}, \\frac{16}{25}, \\frac{19}{25}, \\frac{19}{25}, \\frac{21}{25}, \\frac{21}{25}, \\frac{24}{25}, \\frac{24}{25}$ & \\\\\n$25^{ B}_{ 0}$ & $25$ & $1\\times 25$ & $0, 0, 0, 0, 0, \\frac{2}{25}, \\frac{2}{25}, \\frac{3}{25}, \\frac{3}{25}, \\frac{7}{25}, \\frac{7}{25}, \\frac{8}{25}, \\frac{8}{25}, \\frac{12}{25}, \\frac{12}{25}, \\frac{13}{25}, \\frac{13}{25}, \\frac{17}{25}, \\frac{17}{25}, \\frac{18}{25}, \\frac{18}{25}, \\frac{22}{25}, \\frac{22}{25}, \\frac{23}{25}, \\frac{23}{25}$ & \\\\\n$25^{ B}_{ 0}$ & $25$ & $1\\times 25$ & $0, 0, 0, 0, 0, \\frac{2}{25}, \\frac{2}{25}, \\frac{3}{25}, \\frac{3}{25}, \\frac{7}{25}, \\frac{7}{25}, \\frac{8}{25}, \\frac{8}{25}, \\frac{12}{25}, \\frac{12}{25}, \\frac{13}{25}, \\frac{13}{25}, \\frac{17}{25}, \\frac{17}{25}, 
\\frac{18}{25}, \\frac{18}{25}, \\frac{22}{25}, \\frac{22}{25}, \\frac{23}{25}, \\frac{23}{25}$ & \\\\\n\\hline\n\\end{tabular} \n\\end{table*}\n\n\\def1.25} \\setlength\\tabcolsep{3pt{1.25} \\setlength\\tabcolsep{3pt}\n\\begin{table*}[t] \n\\caption{\nAll the 8 modular extensions of $\\sRp(Z_4^f)$.\n} \n\\label{mextZ4f} \n\\centering\n\\begin{tabular}{ |c|c|l|l|l| } \n\\hline \n$N^{|\\Th|}_{c}$ & $D^2$ & $d_1,d_2,\\cdots$ & $s_1,s_2,\\cdots$ & comment \\\\\n\\hline \n$4^{ 0}_{0}$ & $4$ & $1, 1, 1, 1$ & $0, 0, \\frac{1}{2}, \\frac{1}{2}$ & \n$\\sRp(Z_4^f)$ \\\\\n\\hline\n$16^{ B}_{ 0}$ & $16$ & $1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1$ & $0, 0, \\frac{1}{2}, \\frac{1}{2}, 0, 0, 0, 0, 0, 0, \\frac{1}{4}, \\frac{1}{4}, \\frac{1}{2}, \\frac{1}{2}, \\frac{3}{4}, \\frac{3}{4}$ & \\\\\n\\hline \n$16^{ B}_{ 1}$ & $16$ & $1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1$ & $0, 0, \\frac{1}{2}, \\frac{1}{2}, \\frac{1}{32}, \\frac{1}{32}, \\frac{1}{8}, \\frac{1}{8}, \\frac{1}{8}, \\frac{1}{8}, \\frac{9}{32}, \\frac{9}{32}, \\frac{17}{32}, \\frac{17}{32}, \\frac{25}{32}, \\frac{25}{32}$ & \\\\\n$16^{ B}_{ 2}$ & $16$ & $1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1$ & $0, 0, \\frac{1}{2}, \\frac{1}{2}, \\frac{1}{16}, \\frac{1}{16}, \\frac{1}{4}, \\frac{1}{4}, \\frac{1}{4}, \\frac{1}{4}, \\frac{5}{16}, \\frac{5}{16}, \\frac{9}{16}, \\frac{9}{16}, \\frac{13}{16}, \\frac{13}{16}$ & $8^{ B}_{ 1}\\boxtimes 2^{ B}_{ 1}$\\\\\n$16^{ B}_{ 3}$ & $16$ & $1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1$ & $0, 0, \\frac{1}{2}, \\frac{1}{2}, \\frac{3}{32}, \\frac{3}{32}, \\frac{11}{32}, \\frac{11}{32}, \\frac{3}{8}, \\frac{3}{8}, \\frac{3}{8}, \\frac{3}{8}, \\frac{19}{32}, \\frac{19}{32}, \\frac{27}{32}, \\frac{27}{32}$ & \\\\\n$16^{ B}_{ 4}$ & $16$ & $1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1$ & $0, 0, \\frac{1}{2}, \\frac{1}{2}, \\frac{1}{8}, \\frac{1}{8}, \\frac{3}{8}, \\frac{3}{8}, \\frac{1}{2}, \\frac{1}{2}, \\frac{1}{2}, \\frac{1}{2}, \\frac{5}{8}, \\frac{5}{8}, 
\\frac{7}{8}, \\frac{7}{8}$ & $4^{ B}_{ 3}\\boxtimes 4^{ B}_{ 1}$\\\\\n$16^{ B}_{-3}$ & $16$ & $1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1$ & $0, 0, \\frac{1}{2}, \\frac{1}{2}, \\frac{5}{32}, \\frac{5}{32}, \\frac{13}{32}, \\frac{13}{32}, \\frac{5}{8}, \\frac{5}{8}, \\frac{5}{8}, \\frac{5}{8}, \\frac{21}{32}, \\frac{21}{32}, \\frac{29}{32}, \\frac{29}{32}$ & \\\\\n$16^{ B}_{-2}$ & $16$ & $1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1$ & $0, 0, \\frac{1}{2}, \\frac{1}{2}, \\frac{3}{16}, \\frac{3}{16}, \\frac{7}{16}, \\frac{7}{16}, \\frac{11}{16}, \\frac{11}{16}, \\frac{3}{4}, \\frac{3}{4}, \\frac{3}{4}, \\frac{3}{4}, \\frac{15}{16}, \\frac{15}{16}$ & $8^{ B}_{-1}\\boxtimes 2^{ B}_{-1}$\\\\\n$16^{ B}_{-1}$ & $16$ & $1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1$ & $0, 0, \\frac{1}{2}, \\frac{1}{2}, \\frac{7}{32}, \\frac{7}{32}, \\frac{15}{32}, \\frac{15}{32}, \\frac{23}{32}, \\frac{23}{32}, \\frac{7}{8}, \\frac{7}{8}, \\frac{7}{8}, \\frac{7}{8}, \\frac{31}{32}, \\frac{31}{32}$ & \\\\\n\\hline\n\\end{tabular}\n\\end{table*} \n\n\n\\def1.25} \\setlength\\tabcolsep{3pt{1.25} \\setlength\\tabcolsep{3pt}\n\\begin{table*}[t] \n\\caption{\nThe two $c=0$ modular extensions of $\\sRp(Z_8^f)$ imply that\nthe $Z_8^f$ fermionic SPT phases are described by $\\Z_2$.\nAll other modular extensions only appear for integer $c$ and are all abelian (two modular extensions\nfor each integer $c$).\n} \n\\label{mextZ8f} \n\\centering\n\\begin{tabular}{ |c|c|l|p{5.3in}|l| } \n\\hline \n$N^{|\\Th|}_{c}$ & $D^2$ & $d_1,d_2,\\cdots$ & $s_1,s_2,\\cdots$ & comment \\\\\n\\hline \n$8^{ 0}_{0}$ & $8$ & $1\\times 8$ & $0, \\frac{1}{2}, 0, \\frac{1}{2}, 0, \\frac{1}{2}, 0, \\frac{1}{2}$ & \\\\\n\\hline\n$64^{ B}_{ 0}$ & $64$ & $1\\times 64$ & $0, \\frac{1}{2}, 0, \\frac{1}{2}, 0, \\frac{1}{2}, 0, \\frac{1}{2}, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, \\frac{1}{8}, \\frac{1}{8}, \\frac{1}{8}, \\frac{1}{8}, \\frac{1}{4}, \\frac{1}{4}, \\frac{1}{4}, \\frac{1}{4}, \\frac{1}{4}, 
\\frac{1}{4}, \\frac{1}{4}, \\frac{1}{4}, \\frac{1}{4}, \\frac{3}{8}, \\frac{3}{8}, \\frac{3}{8}, \\frac{3}{8}$, $\\frac{1}{2}, \\frac{1}{2}, \\frac{1}{2}, \\frac{1}{2}, \\frac{1}{2}, \\frac{1}{2}, \\frac{1}{2}, \\frac{1}{2}, \\frac{5}{8}, \\frac{5}{8}, \\frac{5}{8}, \\frac{5}{8}, \\frac{3}{4}, \\frac{3}{4}, \\frac{3}{4}, \\frac{3}{4}, \\frac{3}{4}, \\frac{3}{4}, \\frac{3}{4}, \\frac{3}{4}, \\frac{7}{8}, \\frac{7}{8}, \\frac{7}{8}, \\frac{7}{8}$& \\\\\n$64^{ B}_{ 0}$ & $64$ & $1\\times 64$ & $0, \\frac{1}{2}, 0, \\frac{1}{2}, 0, \\frac{1}{2}, 0, \\frac{1}{2}, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, \\frac{1}{16}, \\frac{1}{16}, \\frac{1}{16}, \\frac{1}{16}, \\frac{3}{16}, \\frac{3}{16}, \\frac{3}{16}, \\frac{3}{16}, \\frac{1}{4}, \\frac{1}{4}, \\frac{1}{4}, \\frac{1}{4}, \\frac{5}{16}, \\frac{5}{16}, \\frac{5}{16}, \\frac{5}{16}$, $\\frac{7}{16}, \\frac{7}{16}, \\frac{7}{16}, \\frac{7}{16}, \\frac{1}{2}, \\frac{1}{2}, \\frac{1}{2}, \\frac{1}{2}, \\frac{9}{16}, \\frac{9}{16}, \\frac{9}{16}, \\frac{9}{16}, \\frac{11}{16}, \\frac{11}{16}, \\frac{11}{16}, \\frac{11}{16}, \\frac{3}{4}, \\frac{3}{4}, \\frac{3}{4}, \\frac{3}{4}, \\frac{13}{16}, \\frac{13}{16}, \\frac{13}{16}, \\frac{13}{16}, \\frac{15}{16}, \\frac{15}{16}, \\frac{15}{16}, \\frac{15}{16}$ & \\\\\n\\hline\n\\end{tabular}\n\\end{table*} \n\n\n\n\n\\subsection{Invertible fermionic topological orders}\n\nWe find that $\\sRp(Z_2^f)$ has 16 modular extensions (see Table \\ref{mextZ2f}),\nwhich correspond to invertible fermionic topological orders in 2+1D. One might\nhave thought that the invertible fermionic topological orders are classified by\n$\\Z_{16}$. But in fact, the invertible fermionic topological orders are\nclassified by $\\Z$, obtained by stacking the $c=1\\/2$ $p+\\ii p$ states. The\ndiscrepancy is due to the fact that the modular extensions cannot see the $c=8$\n$E_8$ states. The 16 modular extensions exactly correspond to the invertible\nfermionic topological orders modulo the $E_8$ states. 
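This sixteenfold pattern is Kitaev's sixteenfold way. As a compact summary (our own encoding of the data in Table \\ref{mextZ2f}; the function name is ours), the rank, central charge, vortex spin, and vortex fusion rule of the modular extension labeled by $\\nu=0,1,\\cdots,15$ (with $c=\\nu\/2$) can be generated as follows:

```python
from fractions import Fraction

def z2f_modular_extension(nu):
    """Summary data of the nu-th modular extension of sRep(Z_2^f),
    nu = 0..15, read off from the table of 16 modular extensions:
    returns (rank, central charge c, vortex topological spin, vortex fusion)."""
    nu %= 16
    c = Fraction(nu, 2)            # chiral central charge, defined mod 8
    s = Fraction(nu, 16) % 1       # topological spin of the vortex
    if nu % 2:                     # odd nu: one non-abelian (Ising-like) vortex
        return 3, c, s, "Ising"
    if nu % 4 == 2:                # even nu = 2 mod 4: vortices fuse as Z4
        return 4, c, s, "Z4"
    return 4, c, s, "Z2 x Z2"      # nu = 0 mod 4: vortices fuse as Z2 x Z2

# e.g. nu = 1 is the p + i p superconductor: rank 3, c = 1/2, vortex spin 1/16
print(z2f_modular_extension(1))
```

Stacking a $p+\\ii p$ state increments $\\nu$ by one, so the data above only repeats after sixteen stackings, consistent with the discussion of the $E_8$ states.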
\n\nWe also find that the modular extensions with even $c$ have a\n$Z_2\\times Z_2$ fusion rule, while the modular extensions with odd $c$ have a\n$Z_4$ fusion rule (indicated by F:$Z_2\\times Z_2$ or F:$Z_4$ in the comment\ncolumn of Table \\ref{mextZ2f}).\n\nThe $Z_2^f$-SPT states for fermions are given by the modular extensions with\nzero central charge. We see that there is only one modular extension with\ncentral charge $c=0$. Thus there are no non-trivial 2+1D fermionic SPT states\nwith $Z_2^f$ symmetry. In general, the modular extensions of $\\sRp(G^f)$ with\nzero central charge correspond to the fermionic SPT states in 2+1D with\nsymmetry $G^f$.\n\n\n\nTo calculate the $Z_2\\times Z_2^f$ SPT orders for fermionic systems, we first\ncompute the modular extensions for $\\sRp(Z_2\\times Z_2^f)$. We note that\n$\\sRp(Z_2\\times Z_2^f)=\\sRp(Z_2^f\\times \\tilde Z_2^f)$. Thus, the modular\nextensions for $\\sRp(Z_2\\times Z_2^f)$ are the modular extensions of\n$\\sRp(Z_2^f\\times \\tilde Z_2^f)$. Some of the modular extensions\nof $\\sRp(Z_2^f\\times \\tilde Z_2^f)$ are given by the modular extensions of\n$\\sRp(Z_2^f)$ stacked (under $\\boxtimes$) with the modular extensions of\n$\\sRp(\\tilde Z_2^f)$. Some of the modular extensions of $\\sRp(Z_2\\times\nZ_2^f)$ are given by the modular extensions for $\\Rp(Z_2)$ stacked (under\n$\\boxtimes$) with the modular extensions of $\\sRp(Z_2^f)$.\n\nThe above mathematical statements correspond to the following physical picture:\nsome fermionic GQLs with $Z_2\\times Z_2^f$ symmetry can be viewed as bosonic\nGQLs with $Z_2$ symmetry stacked with fermionic GQLs with $Z_2^f$ symmetry.\nAlso, some fermionic GQLs with $Z_2^f\\times \\tilde Z_2^f$ symmetry can be viewed\nas fermionic GQLs with $Z_2^f$ symmetry stacked with fermionic GQLs with\n$\\tilde Z_2^f$ symmetry.\n\nUsing \\eqn{Hgcnd}, we find that the modular extensions for $Z_2\\times Z_2^f$\nsymmetry must have ranks $7, 9, 10, 12, 16$. 
By direct search for those ranks,\nwe find that the modular extensions of $\\sRp(Z_2\\times Z_2^f)$ are given by\nTables \\ref{mextZ2Z2f}, \\ref{mextZ2Z2f12}, \\ref{mextZ2Z2f12b} and\n\\ref{mextZ2Z2f16}. The $N=9$ modular extensions of $\\sRp(Z_2\\times Z_2^f)$ in\nTable \\ref{mextZ2Z2f} are given by the stacking of the $N=3$ modular extensions\nof $\\sRp(Z_2^f)$ and the $N=3$ modular extensions of $\\sRp(\\tilde Z_2^f)$. The\n$N=16$ modular extensions of $\\sRp(Z_2\\times Z_2^f)$ in Table \\ref{mextZ2Z2f16}\nare given by the stacking of the $N=4$ modular extensions of $\\sRp(Z_2^f)$ and\nthe $N=4$ modular extensions of $\\sRp(\\tilde Z_2^f)$. There are also 64 $N=12$\nmodular extensions of $\\sRp(Z_2\\times Z_2^f)$ given by the stacking of the\n$N=4$ ($N=3$) modular extensions of $\\sRp(Z_2^f)$ and the $N=3$ ($N=4$) modular\nextensions of $\\sRp(\\tilde Z_2^f)$.\n\n\nMany of the modular extensions have non-trivial topological orders since the\ncentral charge $c$ is non-zero. There are eight modular extensions for each central\ncharge $c=0,1\/2,1,3\/2,\\dots,15\/2$, and in total $8\\times 16=128$ modular\nextensions. Those eight with $c=0$ correspond\nto the $Z_2\\times Z_2^f$ fermionic SPT states. 
Those are all the $Z_2\\times\nZ_2^f$ fermionic SPT states\\cite{GL1369}.\n\n\n\n\n\\def1.25} \\setlength\\tabcolsep{3pt{1.25} \\setlength\\tabcolsep{3pt}\n\\begin{table*}[t] \n\\caption{\nAll the 32 modular extensions of $\\sRp(Z_2\\times Z_2^f)$ with $N = 9$.\n} \n\\label{mextZ2Z2f} \n\\centering\n\\begin{tabular}{ |c|c|l|l|l| } \n\\hline \n$N^{|\\Th|}_{c}$ & $D^2$ & $d_1,d_2,\\cdots$ & $s_1,s_2,\\cdots$ & comment \\\\\n\\hline \n$4^{ 0}_{0}$ & $4$ & $1, 1, 1, 1$ & $0, 0, \\frac{1}{2}, \\frac{1}{2}$ & $\\sRp(Z_2\\times Z_2^f)$ \\\\\n\\hline\n$9^{ B}_{ 0}$ & $16$ & $1, 1, 1, 1,\\zeta_{2}^{1},\\zeta_{2}^{1},\\zeta_{2}^{1},\\zeta_{2}^{1}, 2$ & $0, 0, \\frac{1}{2}, \\frac{1}{2}, \\frac{1}{16}, \\frac{7}{16}, \\frac{9}{16}, \\frac{15}{16}, 0$ & $3^{ B}_{-1\/2}\\boxtimes 3^{ B}_{ 1\/2}$\\\\\n$9^{ B}_{ 0}$ & $16$ & $1, 1, 1, 1,\\zeta_{2}^{1},\\zeta_{2}^{1},\\zeta_{2}^{1},\\zeta_{2}^{1}, 2$ & $0, 0, \\frac{1}{2}, \\frac{1}{2}, \\frac{1}{16}, \\frac{7}{16}, \\frac{9}{16}, \\frac{15}{16}, 0$ & $3^{ B}_{-1\/2}\\boxtimes 3^{ B}_{ 1\/2}$\\\\\n$9^{ B}_{ 0}$ & $16$ & $1, 1, 1, 1,\\zeta_{2}^{1},\\zeta_{2}^{1},\\zeta_{2}^{1},\\zeta_{2}^{1}, 2$ & $0, 0, \\frac{1}{2}, \\frac{1}{2}, \\frac{3}{16}, \\frac{5}{16}, \\frac{11}{16}, \\frac{13}{16}, 0$ & $3^{ B}_{-3\/2}\\boxtimes 3^{ B}_{ 3\/2}$\\\\\n$9^{ B}_{ 0}$ & $16$ & $1, 1, 1, 1,\\zeta_{2}^{1},\\zeta_{2}^{1},\\zeta_{2}^{1},\\zeta_{2}^{1}, 2$ & $0, 0, \\frac{1}{2}, \\frac{1}{2}, \\frac{3}{16}, \\frac{5}{16}, \\frac{11}{16}, \\frac{13}{16}, 0$ & $3^{ B}_{-3\/2}\\boxtimes 3^{ B}_{ 3\/2}$\\\\\n\\hline \n$9^{ B}_{ 1}$ & $16$ & $1, 1, 1, 1,\\zeta_{2}^{1},\\zeta_{2}^{1},\\zeta_{2}^{1},\\zeta_{2}^{1}, 2$ & $0, 0, \\frac{1}{2}, \\frac{1}{2}, \\frac{1}{16}, \\frac{1}{16}, \\frac{9}{16}, \\frac{9}{16}, \\frac{1}{8}$ & $3^{ B}_{ 1\/2}\\boxtimes 3^{ B}_{ 1\/2}$\\\\\n$9^{ B}_{ 1}$ & $16$ & $1, 1, 1, 1,\\zeta_{2}^{1},\\zeta_{2}^{1},\\zeta_{2}^{1},\\zeta_{2}^{1}, 2$ & $0, 0, \\frac{1}{2}, \\frac{1}{2}, \\frac{3}{16}, \\frac{7}{16}, \\frac{11}{16}, 
\\frac{15}{16}, \\frac{1}{8}$ & $3^{ B}_{-1\/2}\\boxtimes 3^{ B}_{ 3\/2}$\\\\\n$9^{ B}_{ 1}$ & $16$ & $1, 1, 1, 1,\\zeta_{2}^{1},\\zeta_{2}^{1},\\zeta_{2}^{1},\\zeta_{2}^{1}, 2$ & $0, 0, \\frac{1}{2}, \\frac{1}{2}, \\frac{3}{16}, \\frac{7}{16}, \\frac{11}{16}, \\frac{15}{16}, \\frac{1}{8}$ & $3^{ B}_{-1\/2}\\boxtimes 3^{ B}_{ 3\/2}$\\\\\n$9^{ B}_{ 1}$ & $16$ & $1, 1, 1, 1,\\zeta_{2}^{1},\\zeta_{2}^{1},\\zeta_{2}^{1},\\zeta_{2}^{1}, 2$ & $0, 0, \\frac{1}{2}, \\frac{1}{2}, \\frac{5}{16}, \\frac{5}{16}, \\frac{13}{16}, \\frac{13}{16}, \\frac{1}{8}$ & $3^{ B}_{-3\/2}\\boxtimes 3^{ B}_{ 5\/2}$\\\\\n$9^{ B}_{ 2}$ & $16$ & $1, 1, 1, 1,\\zeta_{2}^{1},\\zeta_{2}^{1},\\zeta_{2}^{1},\\zeta_{2}^{1}, 2$ & $0, 0, \\frac{1}{2}, \\frac{1}{2}, \\frac{1}{16}, \\frac{3}{16}, \\frac{9}{16}, \\frac{11}{16}, \\frac{1}{4}$ & $3^{ B}_{ 3\/2}\\boxtimes 3^{ B}_{ 1\/2}$\\\\\n$9^{ B}_{ 2}$ & $16$ & $1, 1, 1, 1,\\zeta_{2}^{1},\\zeta_{2}^{1},\\zeta_{2}^{1},\\zeta_{2}^{1}, 2$ & $0, 0, \\frac{1}{2}, \\frac{1}{2}, \\frac{1}{16}, \\frac{3}{16}, \\frac{9}{16}, \\frac{11}{16}, \\frac{1}{4}$ & $3^{ B}_{ 3\/2}\\boxtimes 3^{ B}_{ 1\/2}$\\\\\n$9^{ B}_{ 2}$ & $16$ & $1, 1, 1, 1,\\zeta_{2}^{1},\\zeta_{2}^{1},\\zeta_{2}^{1},\\zeta_{2}^{1}, 2$ & $0, 0, \\frac{1}{2}, \\frac{1}{2}, \\frac{5}{16}, \\frac{7}{16}, \\frac{13}{16}, \\frac{15}{16}, \\frac{1}{4}$ & $3^{ B}_{-1\/2}\\boxtimes 3^{ B}_{ 5\/2}$\\\\\n$9^{ B}_{ 2}$ & $16$ & $1, 1, 1, 1,\\zeta_{2}^{1},\\zeta_{2}^{1},\\zeta_{2}^{1},\\zeta_{2}^{1}, 2$ & $0, 0, \\frac{1}{2}, \\frac{1}{2}, \\frac{5}{16}, \\frac{7}{16}, \\frac{13}{16}, \\frac{15}{16}, \\frac{1}{4}$ & $3^{ B}_{-1\/2}\\boxtimes 3^{ B}_{ 5\/2}$\\\\\n$9^{ B}_{ 3}$ & $16$ & $1, 1, 1, 1,\\zeta_{2}^{1},\\zeta_{2}^{1},\\zeta_{2}^{1},\\zeta_{2}^{1}, 2$ & $0, 0, \\frac{1}{2}, \\frac{1}{2}, \\frac{1}{16}, \\frac{5}{16}, \\frac{9}{16}, \\frac{13}{16}, \\frac{3}{8}$ & $3^{ B}_{ 5\/2}\\boxtimes 3^{ B}_{ 1\/2}$\\\\\n$9^{ B}_{ 3}$ & $16$ & $1, 1, 1, 1,\\zeta_{2}^{1},\\zeta_{2}^{1},\\zeta_{2}^{1},\\zeta_{2}^{1}, 
2$ & $0, 0, \\frac{1}{2}, \\frac{1}{2}, \\frac{1}{16}, \\frac{5}{16}, \\frac{9}{16}, \\frac{13}{16}, \\frac{3}{8}$ & $3^{ B}_{ 5\/2}\\boxtimes 3^{ B}_{ 1\/2}$\\\\\n$9^{ B}_{ 3}$ & $16$ & $1, 1, 1, 1,\\zeta_{2}^{1},\\zeta_{2}^{1},\\zeta_{2}^{1},\\zeta_{2}^{1}, 2$ & $0, 0, \\frac{1}{2}, \\frac{1}{2}, \\frac{3}{16}, \\frac{3}{16}, \\frac{11}{16}, \\frac{11}{16}, \\frac{3}{8}$ & $3^{ B}_{ 3\/2}\\boxtimes 3^{ B}_{ 3\/2}$\\\\\n$9^{ B}_{ 3}$ & $16$ & $1, 1, 1, 1,\\zeta_{2}^{1},\\zeta_{2}^{1},\\zeta_{2}^{1},\\zeta_{2}^{1}, 2$ & $0, 0, \\frac{1}{2}, \\frac{1}{2}, \\frac{7}{16}, \\frac{7}{16}, \\frac{15}{16}, \\frac{15}{16}, \\frac{3}{8}$ & $3^{ B}_{-1\/2}\\boxtimes 3^{ B}_{ 7\/2}$\\\\\n$9^{ B}_{ 4}$ & $16$ & $1, 1, 1, 1,\\zeta_{2}^{1},\\zeta_{2}^{1},\\zeta_{2}^{1},\\zeta_{2}^{1}, 2$ & $0, 0, \\frac{1}{2}, \\frac{1}{2}, \\frac{1}{16}, \\frac{7}{16}, \\frac{9}{16}, \\frac{15}{16}, \\frac{1}{2}$ & $3^{ B}_{ 7\/2}\\boxtimes 3^{ B}_{ 1\/2}$\\\\\n$9^{ B}_{ 4}$ & $16$ & $1, 1, 1, 1,\\zeta_{2}^{1},\\zeta_{2}^{1},\\zeta_{2}^{1},\\zeta_{2}^{1}, 2$ & $0, 0, \\frac{1}{2}, \\frac{1}{2}, \\frac{1}{16}, \\frac{7}{16}, \\frac{9}{16}, \\frac{15}{16}, \\frac{1}{2}$ & $3^{ B}_{ 7\/2}\\boxtimes 3^{ B}_{ 1\/2}$\\\\\n$9^{ B}_{ 4}$ & $16$ & $1, 1, 1, 1,\\zeta_{2}^{1},\\zeta_{2}^{1},\\zeta_{2}^{1},\\zeta_{2}^{1}, 2$ & $0, 0, \\frac{1}{2}, \\frac{1}{2}, \\frac{3}{16}, \\frac{5}{16}, \\frac{11}{16}, \\frac{13}{16}, \\frac{1}{2}$ & $3^{ B}_{ 5\/2}\\boxtimes 3^{ B}_{ 3\/2}$\\\\\n$9^{ B}_{ 4}$ & $16$ & $1, 1, 1, 1,\\zeta_{2}^{1},\\zeta_{2}^{1},\\zeta_{2}^{1},\\zeta_{2}^{1}, 2$ & $0, 0, \\frac{1}{2}, \\frac{1}{2}, \\frac{3}{16}, \\frac{5}{16}, \\frac{11}{16}, \\frac{13}{16}, \\frac{1}{2}$ & $3^{ B}_{ 5\/2}\\boxtimes 3^{ B}_{ 3\/2}$\\\\\n$9^{ B}_{-3}$ & $16$ & $1, 1, 1, 1,\\zeta_{2}^{1},\\zeta_{2}^{1},\\zeta_{2}^{1},\\zeta_{2}^{1}, 2$ & $0, 0, \\frac{1}{2}, \\frac{1}{2}, \\frac{1}{16}, \\frac{1}{16}, \\frac{9}{16}, \\frac{9}{16}, \\frac{5}{8}$ & $3^{ B}_{-7\/2}\\boxtimes 3^{ B}_{ 1\/2}$\\\\\n$9^{ 
B}_{-3}$ & $16$ & $1, 1, 1, 1,\\zeta_{2}^{1},\\zeta_{2}^{1},\\zeta_{2}^{1},\\zeta_{2}^{1}, 2$ & $0, 0, \\frac{1}{2}, \\frac{1}{2}, \\frac{3}{16}, \\frac{7}{16}, \\frac{11}{16}, \\frac{15}{16}, \\frac{5}{8}$ & $3^{ B}_{ 7\/2}\\boxtimes 3^{ B}_{ 3\/2}$\\\\\n$9^{ B}_{-3}$ & $16$ & $1, 1, 1, 1,\\zeta_{2}^{1},\\zeta_{2}^{1},\\zeta_{2}^{1},\\zeta_{2}^{1}, 2$ & $0, 0, \\frac{1}{2}, \\frac{1}{2}, \\frac{3}{16}, \\frac{7}{16}, \\frac{11}{16}, \\frac{15}{16}, \\frac{5}{8}$ & $3^{ B}_{ 7\/2}\\boxtimes 3^{ B}_{ 3\/2}$\\\\\n$9^{ B}_{-3}$ & $16$ & $1, 1, 1, 1,\\zeta_{2}^{1},\\zeta_{2}^{1},\\zeta_{2}^{1},\\zeta_{2}^{1}, 2$ & $0, 0, \\frac{1}{2}, \\frac{1}{2}, \\frac{5}{16}, \\frac{5}{16}, \\frac{13}{16}, \\frac{13}{16}, \\frac{5}{8}$ & $3^{ B}_{ 5\/2}\\boxtimes 3^{ B}_{ 5\/2}$\\\\\n$9^{ B}_{-2}$ & $16$ & $1, 1, 1, 1,\\zeta_{2}^{1},\\zeta_{2}^{1},\\zeta_{2}^{1},\\zeta_{2}^{1}, 2$ & $0, 0, \\frac{1}{2}, \\frac{1}{2}, \\frac{1}{16}, \\frac{3}{16}, \\frac{9}{16}, \\frac{11}{16}, \\frac{3}{4}$ & $3^{ B}_{-5\/2}\\boxtimes 3^{ B}_{ 1\/2}$\\\\\n$9^{ B}_{-2}$ & $16$ & $1, 1, 1, 1,\\zeta_{2}^{1},\\zeta_{2}^{1},\\zeta_{2}^{1},\\zeta_{2}^{1}, 2$ & $0, 0, \\frac{1}{2}, \\frac{1}{2}, \\frac{1}{16}, \\frac{3}{16}, \\frac{9}{16}, \\frac{11}{16}, \\frac{3}{4}$ & $3^{ B}_{-5\/2}\\boxtimes 3^{ B}_{ 1\/2}$\\\\\n$9^{ B}_{-2}$ & $16$ & $1, 1, 1, 1,\\zeta_{2}^{1},\\zeta_{2}^{1},\\zeta_{2}^{1},\\zeta_{2}^{1}, 2$ & $0, 0, \\frac{1}{2}, \\frac{1}{2}, \\frac{5}{16}, \\frac{7}{16}, \\frac{13}{16}, \\frac{15}{16}, \\frac{3}{4}$ & $3^{ B}_{ 7\/2}\\boxtimes 3^{ B}_{ 5\/2}$\\\\\n$9^{ B}_{-2}$ & $16$ & $1, 1, 1, 1,\\zeta_{2}^{1},\\zeta_{2}^{1},\\zeta_{2}^{1},\\zeta_{2}^{1}, 2$ & $0, 0, \\frac{1}{2}, \\frac{1}{2}, \\frac{5}{16}, \\frac{7}{16}, \\frac{13}{16}, \\frac{15}{16}, \\frac{3}{4}$ & $3^{ B}_{ 7\/2}\\boxtimes 3^{ B}_{ 5\/2}$\\\\\n$9^{ B}_{-1}$ & $16$ & $1, 1, 1, 1,\\zeta_{2}^{1},\\zeta_{2}^{1},\\zeta_{2}^{1},\\zeta_{2}^{1}, 2$ & $0, 0, \\frac{1}{2}, \\frac{1}{2}, \\frac{1}{16}, \\frac{5}{16}, 
\\frac{9}{16}, \\frac{13}{16}, \\frac{7}{8}$ & $3^{ B}_{-3\/2}\\boxtimes 3^{ B}_{ 1\/2}$\\\\\n$9^{ B}_{-1}$ & $16$ & $1, 1, 1, 1,\\zeta_{2}^{1},\\zeta_{2}^{1},\\zeta_{2}^{1},\\zeta_{2}^{1}, 2$ & $0, 0, \\frac{1}{2}, \\frac{1}{2}, \\frac{1}{16}, \\frac{5}{16}, \\frac{9}{16}, \\frac{13}{16}, \\frac{7}{8}$ & $3^{ B}_{-3\/2}\\boxtimes 3^{ B}_{ 1\/2}$\\\\\n$9^{ B}_{-1}$ & $16$ & $1, 1, 1, 1,\\zeta_{2}^{1},\\zeta_{2}^{1},\\zeta_{2}^{1},\\zeta_{2}^{1}, 2$ & $0, 0, \\frac{1}{2}, \\frac{1}{2}, \\frac{3}{16}, \\frac{3}{16}, \\frac{11}{16}, \\frac{11}{16}, \\frac{7}{8}$ & $3^{ B}_{-5\/2}\\boxtimes 3^{ B}_{ 3\/2}$\\\\\n$9^{ B}_{-1}$ & $16$ & $1, 1, 1, 1,\\zeta_{2}^{1},\\zeta_{2}^{1},\\zeta_{2}^{1},\\zeta_{2}^{1}, 2$ & $0, 0, \\frac{1}{2}, \\frac{1}{2}, \\frac{7}{16}, \\frac{7}{16}, \\frac{15}{16}, \\frac{15}{16}, \\frac{7}{8}$ & $3^{ B}_{ 7\/2}\\boxtimes 3^{ B}_{ 7\/2}$\\\\\n\\hline\n\\end{tabular}\n\\end{table*} \n\n\n\n\n\\subsection{$Z_{2n}^f$ SPT orders for fermionic systems}\n\nWe also find the modular extensions for $\\sRp(Z_4^f)$, $\\sRp(Z_6^f)$, and\n$\\sRp(Z_8^f)$ (see Tables \\ref{mextZ4f}, \\ref{mextZ6f}, and \\ref{mextZ8f}).\nAgain, many of them have non-trivial topological orders since the central charge\n$c$ is non-zero. \n\nFor the $Z_4^f$ group, only one of them has $c=0$. So there are no non-trivial\n$Z_4^f$ fermionic SPT states. For the $Z_6^f$ group, only three of them have\n$c=0$. So, the $Z_6^f$ fermionic SPT states are described by $\\Z_3$. For the\n$Z_8^f$ group, only two of them have $c=0$. So, the $Z_8^f$ fermionic SPT\nstates are described by $\\Z_2$. These results are consistent with the results\nin \\Ref{KTT1429,CJWang}. 
However, the calculation presented here is more\ncomplete.\n\n\\def\\arraystretch{1.25} \\setlength\\tabcolsep{3pt}\n\\begin{table*}[t] \n\\caption{\nThe first 32 modular extensions of $\\sRp(Z_2\\times Z_2^f)$ with $N =12$.\n} \n\\label{mextZ2Z2f12} \n\\centering\n\\begin{tabular}{ |c|c|l|l|l| } \n\\hline \n$N^{|\\Th|}_{c}$ & $D^2$ & $d_1,d_2,\\cdots$ & $s_1,s_2,\\cdots$ & comment \\\\\n\\hline \n$4^{ 0}_{0}$ & $4$ & $1, 1, 1, 1$ & $0, 0, \\frac{1}{2}, \\frac{1}{2}$ & $\\sRp(Z_2\\times Z_2^f)$ \\\\\n\\hline\n$12^{ B}_{ 1\/2}$ & $16$ & $1, 1, 1, 1, 1, 1, 1, 1,\\zeta_{2}^{1},\\zeta_{2}^{1},\\zeta_{2}^{1},\\zeta_{2}^{1}$ & $0, 0, \\frac{1}{2}, \\frac{1}{2}, 0, 0, \\frac{1}{2}, \\frac{1}{2}, \\frac{1}{16}, \\frac{1}{16}, \\frac{1}{16}, \\frac{9}{16}$ & $4^{ B}_{ 0}\\boxtimes 3^{ B}_{ 1\/2}$\\\\\n$12^{ B}_{ 1\/2}$ & $16$ & $1, 1, 1, 1, 1, 1, 1, 1,\\zeta_{2}^{1},\\zeta_{2}^{1},\\zeta_{2}^{1},\\zeta_{2}^{1}$ & $0, 0, \\frac{1}{2}, \\frac{1}{2}, 0, 0, \\frac{1}{2}, \\frac{1}{2}, \\frac{1}{16}, \\frac{1}{16}, \\frac{1}{16}, \\frac{9}{16}$ & $4^{ B}_{ 0}\\boxtimes 3^{ B}_{ 1\/2}$\\\\\n$12^{ B}_{ 1\/2}$ & $16$ & $1, 1, 1, 1, 1, 1, 1, 1,\\zeta_{2}^{1},\\zeta_{2}^{1},\\zeta_{2}^{1},\\zeta_{2}^{1}$ & $0, 0, \\frac{1}{2}, \\frac{1}{2}, \\frac{1}{8}, \\frac{1}{8}, \\frac{5}{8}, \\frac{5}{8}, \\frac{1}{16}, \\frac{1}{16}, \\frac{7}{16}, \\frac{15}{16}$ & $4^{ B}_{-3}\\boxtimes 3^{ B}_{ 7\/2}$\\\\\n$12^{ B}_{ 1\/2}$ & $16$ & $1, 1, 1, 1, 1, 1, 1, 1,\\zeta_{2}^{1},\\zeta_{2}^{1},\\zeta_{2}^{1},\\zeta_{2}^{1}$ & $0, 0, \\frac{1}{2}, \\frac{1}{2}, \\frac{1}{8}, \\frac{1}{8}, \\frac{5}{8}, \\frac{5}{8}, \\frac{1}{16}, \\frac{1}{16}, \\frac{7}{16}, \\frac{15}{16}$ & $4^{ B}_{-3}\\boxtimes 3^{ B}_{ 7\/2}$\\\\\n$12^{ B}_{ 1\/2}$ & $16$ & $1, 1, 1, 1, 1, 1, 1, 1,\\zeta_{2}^{1},\\zeta_{2}^{1},\\zeta_{2}^{1},\\zeta_{2}^{1}$ & $0, 0, \\frac{1}{2}, \\frac{1}{2}, \\frac{1}{4}, \\frac{1}{4}, \\frac{3}{4}, \\frac{3}{4}, \\frac{1}{16}, \\frac{1}{16}, \\frac{5}{16}, \\frac{13}{16}$ 
& $6^{ B}_{-1\/2}\\boxtimes 2^{ B}_{ 1}$\\\\\n$12^{ B}_{ 1\/2}$ & $16$ & $1, 1, 1, 1, 1, 1, 1, 1,\\zeta_{2}^{1},\\zeta_{2}^{1},\\zeta_{2}^{1},\\zeta_{2}^{1}$ & $0, 0, \\frac{1}{2}, \\frac{1}{2}, \\frac{1}{4}, \\frac{1}{4}, \\frac{3}{4}, \\frac{3}{4}, \\frac{1}{16}, \\frac{1}{16}, \\frac{5}{16}, \\frac{13}{16}$ & $6^{ B}_{-1\/2}\\boxtimes 2^{ B}_{ 1}$\\\\\n$12^{ B}_{ 1\/2}$ & $16$ & $1, 1, 1, 1, 1, 1, 1, 1,\\zeta_{2}^{1},\\zeta_{2}^{1},\\zeta_{2}^{1},\\zeta_{2}^{1}$ & $0, 0, \\frac{1}{2}, \\frac{1}{2}, \\frac{3}{8}, \\frac{3}{8}, \\frac{7}{8}, \\frac{7}{8}, \\frac{1}{16}, \\frac{1}{16}, \\frac{3}{16}, \\frac{11}{16}$ & $4^{ B}_{-1}\\boxtimes 3^{ B}_{ 3\/2}$\\\\\n$12^{ B}_{ 1\/2}$ & $16$ & $1, 1, 1, 1, 1, 1, 1, 1,\\zeta_{2}^{1},\\zeta_{2}^{1},\\zeta_{2}^{1},\\zeta_{2}^{1}$ & $0, 0, \\frac{1}{2}, \\frac{1}{2}, \\frac{3}{8}, \\frac{3}{8}, \\frac{7}{8}, \\frac{7}{8}, \\frac{1}{16}, \\frac{1}{16}, \\frac{3}{16}, \\frac{11}{16}$ & $4^{ B}_{-1}\\boxtimes 3^{ B}_{ 3\/2}$\\\\\n$12^{ B}_{ 3\/2}$ & $16$ & $1, 1, 1, 1, 1, 1, 1, 1,\\zeta_{2}^{1},\\zeta_{2}^{1},\\zeta_{2}^{1},\\zeta_{2}^{1}$ & $0, 0, \\frac{1}{2}, \\frac{1}{2}, 0, 0, \\frac{1}{2}, \\frac{1}{2}, \\frac{3}{16}, \\frac{3}{16}, \\frac{3}{16}, \\frac{11}{16}$ & $4^{ B}_{ 0}\\boxtimes 3^{ B}_{ 3\/2}$\\\\\n$12^{ B}_{ 3\/2}$ & $16$ & $1, 1, 1, 1, 1, 1, 1, 1,\\zeta_{2}^{1},\\zeta_{2}^{1},\\zeta_{2}^{1},\\zeta_{2}^{1}$ & $0, 0, \\frac{1}{2}, \\frac{1}{2}, 0, 0, \\frac{1}{2}, \\frac{1}{2}, \\frac{3}{16}, \\frac{3}{16}, \\frac{3}{16}, \\frac{11}{16}$ & $4^{ B}_{ 0}\\boxtimes 3^{ B}_{ 3\/2}$\\\\\n$12^{ B}_{ 3\/2}$ & $16$ & $1, 1, 1, 1, 1, 1, 1, 1,\\zeta_{2}^{1},\\zeta_{2}^{1},\\zeta_{2}^{1},\\zeta_{2}^{1}$ & $0, 0, \\frac{1}{2}, \\frac{1}{2}, \\frac{1}{8}, \\frac{1}{8}, \\frac{5}{8}, \\frac{5}{8}, \\frac{1}{16}, \\frac{3}{16}, \\frac{3}{16}, \\frac{9}{16}$ & $4^{ B}_{ 1}\\boxtimes 3^{ B}_{ 1\/2}$\\\\\n$12^{ B}_{ 3\/2}$ & $16$ & $1, 1, 1, 1, 1, 1, 1, 1,\\zeta_{2}^{1},\\zeta_{2}^{1},\\zeta_{2}^{1},\\zeta_{2}^{1}$ & $0, 0, 
\\frac{1}{2}, \\frac{1}{2}, \\frac{1}{8}, \\frac{1}{8}, \\frac{5}{8}, \\frac{5}{8}, \\frac{1}{16}, \\frac{3}{16}, \\frac{3}{16}, \\frac{9}{16}$ & $4^{ B}_{ 1}\\boxtimes 3^{ B}_{ 1\/2}$\\\\\n$12^{ B}_{ 3\/2}$ & $16$ & $1, 1, 1, 1, 1, 1, 1, 1,\\zeta_{2}^{1},\\zeta_{2}^{1},\\zeta_{2}^{1},\\zeta_{2}^{1}$ & $0, 0, \\frac{1}{2}, \\frac{1}{2}, \\frac{1}{4}, \\frac{1}{4}, \\frac{3}{4}, \\frac{3}{4}, \\frac{3}{16}, \\frac{3}{16}, \\frac{7}{16}, \\frac{15}{16}$ & $6^{ B}_{ 1\/2}\\boxtimes 2^{ B}_{ 1}$\\\\\n$12^{ B}_{ 3\/2}$ & $16$ & $1, 1, 1, 1, 1, 1, 1, 1,\\zeta_{2}^{1},\\zeta_{2}^{1},\\zeta_{2}^{1},\\zeta_{2}^{1}$ & $0, 0, \\frac{1}{2}, \\frac{1}{2}, \\frac{1}{4}, \\frac{1}{4}, \\frac{3}{4}, \\frac{3}{4}, \\frac{3}{16}, \\frac{3}{16}, \\frac{7}{16}, \\frac{15}{16}$ & $6^{ B}_{ 1\/2}\\boxtimes 2^{ B}_{ 1}$\\\\\n$12^{ B}_{ 3\/2}$ & $16$ & $1, 1, 1, 1, 1, 1, 1, 1,\\zeta_{2}^{1},\\zeta_{2}^{1},\\zeta_{2}^{1},\\zeta_{2}^{1}$ & $0, 0, \\frac{1}{2}, \\frac{1}{2}, \\frac{3}{8}, \\frac{3}{8}, \\frac{7}{8}, \\frac{7}{8}, \\frac{3}{16}, \\frac{3}{16}, \\frac{5}{16}, \\frac{13}{16}$ & $4^{ B}_{-1}\\boxtimes 3^{ B}_{ 5\/2}$\\\\\n$12^{ B}_{ 3\/2}$ & $16$ & $1, 1, 1, 1, 1, 1, 1, 1,\\zeta_{2}^{1},\\zeta_{2}^{1},\\zeta_{2}^{1},\\zeta_{2}^{1}$ & $0, 0, \\frac{1}{2}, \\frac{1}{2}, \\frac{3}{8}, \\frac{3}{8}, \\frac{7}{8}, \\frac{7}{8}, \\frac{3}{16}, \\frac{3}{16}, \\frac{5}{16}, \\frac{13}{16}$ & $4^{ B}_{-1}\\boxtimes 3^{ B}_{ 5\/2}$\\\\\n$12^{ B}_{ 5\/2}$ & $16$ & $1, 1, 1, 1, 1, 1, 1, 1,\\zeta_{2}^{1},\\zeta_{2}^{1},\\zeta_{2}^{1},\\zeta_{2}^{1}$ & $0, 0, \\frac{1}{2}, \\frac{1}{2}, 0, 0, \\frac{1}{2}, \\frac{1}{2}, \\frac{5}{16}, \\frac{5}{16}, \\frac{5}{16}, \\frac{13}{16}$ & $4^{ B}_{ 0}\\boxtimes 3^{ B}_{ 5\/2}$\\\\\n$12^{ B}_{ 5\/2}$ & $16$ & $1, 1, 1, 1, 1, 1, 1, 1,\\zeta_{2}^{1},\\zeta_{2}^{1},\\zeta_{2}^{1},\\zeta_{2}^{1}$ & $0, 0, \\frac{1}{2}, \\frac{1}{2}, 0, 0, \\frac{1}{2}, \\frac{1}{2}, \\frac{5}{16}, \\frac{5}{16}, \\frac{5}{16}, \\frac{13}{16}$ & $4^{ B}_{ 0}\\boxtimes 
3^{ B}_{ 5\/2}$\\\\\n$12^{ B}_{ 5\/2}$ & $16$ & $1, 1, 1, 1, 1, 1, 1, 1,\\zeta_{2}^{1},\\zeta_{2}^{1},\\zeta_{2}^{1},\\zeta_{2}^{1}$ & $0, 0, \\frac{1}{2}, \\frac{1}{2}, \\frac{1}{8}, \\frac{1}{8}, \\frac{5}{8}, \\frac{5}{8}, \\frac{3}{16}, \\frac{5}{16}, \\frac{5}{16}, \\frac{11}{16}$ & $4^{ B}_{ 1}\\boxtimes 3^{ B}_{ 3\/2}$\\\\\n$12^{ B}_{ 5\/2}$ & $16$ & $1, 1, 1, 1, 1, 1, 1, 1,\\zeta_{2}^{1},\\zeta_{2}^{1},\\zeta_{2}^{1},\\zeta_{2}^{1}$ & $0, 0, \\frac{1}{2}, \\frac{1}{2}, \\frac{1}{8}, \\frac{1}{8}, \\frac{5}{8}, \\frac{5}{8}, \\frac{3}{16}, \\frac{5}{16}, \\frac{5}{16}, \\frac{11}{16}$ & $4^{ B}_{ 1}\\boxtimes 3^{ B}_{ 3\/2}$\\\\\n$12^{ B}_{ 5\/2}$ & $16$ & $1, 1, 1, 1, 1, 1, 1, 1,\\zeta_{2}^{1},\\zeta_{2}^{1},\\zeta_{2}^{1},\\zeta_{2}^{1}$ & $0, 0, \\frac{1}{2}, \\frac{1}{2}, \\frac{1}{4}, \\frac{1}{4}, \\frac{3}{4}, \\frac{3}{4}, \\frac{1}{16}, \\frac{5}{16}, \\frac{5}{16}, \\frac{9}{16}$ & $6^{ B}_{ 3\/2}\\boxtimes 2^{ B}_{ 1}$\\\\\n$12^{ B}_{ 5\/2}$ & $16$ & $1, 1, 1, 1, 1, 1, 1, 1,\\zeta_{2}^{1},\\zeta_{2}^{1},\\zeta_{2}^{1},\\zeta_{2}^{1}$ & $0, 0, \\frac{1}{2}, \\frac{1}{2}, \\frac{1}{4}, \\frac{1}{4}, \\frac{3}{4}, \\frac{3}{4}, \\frac{1}{16}, \\frac{5}{16}, \\frac{5}{16}, \\frac{9}{16}$ & $6^{ B}_{ 3\/2}\\boxtimes 2^{ B}_{ 1}$\\\\\n$12^{ B}_{ 5\/2}$ & $16$ & $1, 1, 1, 1, 1, 1, 1, 1,\\zeta_{2}^{1},\\zeta_{2}^{1},\\zeta_{2}^{1},\\zeta_{2}^{1}$ & $0, 0, \\frac{1}{2}, \\frac{1}{2}, \\frac{3}{8}, \\frac{3}{8}, \\frac{7}{8}, \\frac{7}{8}, \\frac{5}{16}, \\frac{5}{16}, \\frac{7}{16}, \\frac{15}{16}$ & $4^{ B}_{-1}\\boxtimes 3^{ B}_{ 7\/2}$\\\\\n$12^{ B}_{ 5\/2}$ & $16$ & $1, 1, 1, 1, 1, 1, 1, 1,\\zeta_{2}^{1},\\zeta_{2}^{1},\\zeta_{2}^{1},\\zeta_{2}^{1}$ & $0, 0, \\frac{1}{2}, \\frac{1}{2}, \\frac{3}{8}, \\frac{3}{8}, \\frac{7}{8}, \\frac{7}{8}, \\frac{5}{16}, \\frac{5}{16}, \\frac{7}{16}, \\frac{15}{16}$ & $4^{ B}_{-1}\\boxtimes 3^{ B}_{ 7\/2}$\\\\\n$12^{ B}_{ 7\/2}$ & $16$ & $1, 1, 1, 1, 1, 1, 1, 
1,\\zeta_{2}^{1},\\zeta_{2}^{1},\\zeta_{2}^{1},\\zeta_{2}^{1}$ & $0, 0, \\frac{1}{2}, \\frac{1}{2}, 0, 0, \\frac{1}{2}, \\frac{1}{2}, \\frac{7}{16}, \\frac{7}{16}, \\frac{7}{16}, \\frac{15}{16}$ & $4^{ B}_{ 0}\\boxtimes 3^{ B}_{ 7\/2}$\\\\\n$12^{ B}_{ 7\/2}$ & $16$ & $1, 1, 1, 1, 1, 1, 1, 1,\\zeta_{2}^{1},\\zeta_{2}^{1},\\zeta_{2}^{1},\\zeta_{2}^{1}$ & $0, 0, \\frac{1}{2}, \\frac{1}{2}, 0, 0, \\frac{1}{2}, \\frac{1}{2}, \\frac{7}{16}, \\frac{7}{16}, \\frac{7}{16}, \\frac{15}{16}$ & $4^{ B}_{ 0}\\boxtimes 3^{ B}_{ 7\/2}$\\\\\n$12^{ B}_{ 7\/2}$ & $16$ & $1, 1, 1, 1, 1, 1, 1, 1,\\zeta_{2}^{1},\\zeta_{2}^{1},\\zeta_{2}^{1},\\zeta_{2}^{1}$ & $0, 0, \\frac{1}{2}, \\frac{1}{2}, \\frac{1}{8}, \\frac{1}{8}, \\frac{5}{8}, \\frac{5}{8}, \\frac{5}{16}, \\frac{7}{16}, \\frac{7}{16}, \\frac{13}{16}$ & $4^{ B}_{ 1}\\boxtimes 3^{ B}_{ 5\/2}$\\\\\n$12^{ B}_{ 7\/2}$ & $16$ & $1, 1, 1, 1, 1, 1, 1, 1,\\zeta_{2}^{1},\\zeta_{2}^{1},\\zeta_{2}^{1},\\zeta_{2}^{1}$ & $0, 0, \\frac{1}{2}, \\frac{1}{2}, \\frac{1}{8}, \\frac{1}{8}, \\frac{5}{8}, \\frac{5}{8}, \\frac{5}{16}, \\frac{7}{16}, \\frac{7}{16}, \\frac{13}{16}$ & $4^{ B}_{ 1}\\boxtimes 3^{ B}_{ 5\/2}$\\\\\n$12^{ B}_{ 7\/2}$ & $16$ & $1, 1, 1, 1, 1, 1, 1, 1,\\zeta_{2}^{1},\\zeta_{2}^{1},\\zeta_{2}^{1},\\zeta_{2}^{1}$ & $0, 0, \\frac{1}{2}, \\frac{1}{2}, \\frac{1}{4}, \\frac{1}{4}, \\frac{3}{4}, \\frac{3}{4}, \\frac{3}{16}, \\frac{7}{16}, \\frac{7}{16}, \\frac{11}{16}$ & $6^{ B}_{ 5\/2}\\boxtimes 2^{ B}_{ 1}$\\\\\n$12^{ B}_{ 7\/2}$ & $16$ & $1, 1, 1, 1, 1, 1, 1, 1,\\zeta_{2}^{1},\\zeta_{2}^{1},\\zeta_{2}^{1},\\zeta_{2}^{1}$ & $0, 0, \\frac{1}{2}, \\frac{1}{2}, \\frac{1}{4}, \\frac{1}{4}, \\frac{3}{4}, \\frac{3}{4}, \\frac{3}{16}, \\frac{7}{16}, \\frac{7}{16}, \\frac{11}{16}$ & $6^{ B}_{ 5\/2}\\boxtimes 2^{ B}_{ 1}$\\\\\n$12^{ B}_{ 7\/2}$ & $16$ & $1, 1, 1, 1, 1, 1, 1, 1,\\zeta_{2}^{1},\\zeta_{2}^{1},\\zeta_{2}^{1},\\zeta_{2}^{1}$ & $0, 0, \\frac{1}{2}, \\frac{1}{2}, \\frac{3}{8}, \\frac{3}{8}, \\frac{7}{8}, \\frac{7}{8}, \\frac{1}{16}, 
\\frac{7}{16}, \\frac{7}{16}, \\frac{9}{16}$ & $4^{ B}_{ 3}\\boxtimes 3^{ B}_{ 1\/2}$\\\\\n$12^{ B}_{ 7\/2}$ & $16$ & $1, 1, 1, 1, 1, 1, 1, 1,\\zeta_{2}^{1},\\zeta_{2}^{1},\\zeta_{2}^{1},\\zeta_{2}^{1}$ & $0, 0, \\frac{1}{2}, \\frac{1}{2}, \\frac{3}{8}, \\frac{3}{8}, \\frac{7}{8}, \\frac{7}{8}, \\frac{1}{16}, \\frac{7}{16}, \\frac{7}{16}, \\frac{9}{16}$ & $4^{ B}_{ 3}\\boxtimes 3^{ B}_{ 1\/2}$\\\\\n\\hline\n\\end{tabular}\n\\end{table*}\n\n\n\\section{Summary}\n\\label{sum}\n\nGQLs contain both topologically ordered states and SPT states. In this paper,\nwe present a theory that classifies GQLs in 2+1D for bosonic\/fermionic systems\nwith symmetry.\n\nWe propose that the possible non-abelian statistics (or sets of bulk\nquasiparticle excitations) in 2+1D GQLs are classified by \n$\\mce{\\cE}$, where $\\cE=\\Rp(G)$ or $\\sRp(G^f)$ describes the symmetry in\nbosonic or fermionic systems. However, $\\mce{\\cE}$'s fail to\nclassify GQLs, since different GQL phases can have identical non-abelian\nstatistics, which correspond to the same $\\mce{\\cE}$. \n\nTo fix this problem, we introduce the notion of modular extensions for a\n$\\mce{\\cE}$. We propose to use the triple $(\\cC,\\cM,c)$ to\nclassify 2+1D GQLs with symmetry $G$ (for bosons) or $G^f$ (for fermions). Here\n$\\cC$ is a $\\mce{\\cE}$ with $\\cE=\\Rp(G)$ or $\\sRp(G^f)$, $\\cM$ is a modular\nextension of $\\cC$ and $c$ is the chiral central charge of the edge state. We\nshow that the modular extensions of a $\\mce{\\cE}$ have a one-to-one\ncorrespondence with the modular extensions of $\\cE$. So the number of\nmodular extensions is solely determined by the symmetry $\\cE$. Also, the $c=0$\nmodular extensions of an $\\cE$ ($\\cE=\\Rp(G)$ or $\\sRp(G^f)$) classify the 2+1D\nSPT states for bosons or fermions with symmetry $G$ or $G^f$.\n\nAlthough the above result has a nice mathematical structure, it is hard to\nimplement numerically to produce a table of GQLs. 
To fix this problem, we\npropose a different description of 2+1D GQLs. We propose to use the data $(\n\\tilde N^{ab}_c,\\tilde s_a; N^{ij}_k,s_i; \\cN^{IJ}_K,\\cS_I;c)$, up to some\npermutations of the indices, to describe 2+1D GQLs with symmetry $G$\n(for bosons) or $G^f$ (for fermions), with a restriction that the symmetry group\n$G$ can be fully characterized by the fusion ring of its irreducible\nrepresentations (for example, for simple groups or abelian groups). Here the\ndata $(\\tilde N^{ab}_c,\\tilde s_a)$ describe the symmetry and the data\n$(N^{ij}_k,s_i)$ describe the fusion and the spins of the bulk particles in the\nGQL. The modular extensions are obtained by ``gauging'' the symmetry $G$ or\n$G^f$. The data $(\\cN^{IJ}_K,\\cS_I)$ describe the fusion and the spins of the\nbulk particles in the ``gauged'' theory. Lastly, $c$ is the chiral central\ncharge of the edge state.\n\nIn this paper (see Appendix \\ref{cnds}) and in \\Ref{W150605768}, we list the\nnecessary and sufficient conditions on the data $(\\tilde N^{ab}_c,\\tilde\ns_a; N^{ij}_k,s_i; \\cN^{IJ}_K,\\cS_I;c)$, which allow us to obtain a\nlist of GQLs. However, in this paper, we do not give the list\nof GQLs directly. We first give a list of $(\\tilde N^{ab}_c,\\tilde s_a;\nN^{ij}_k,s_i)$, which is an imperfect list of $\\mce{\\cE}$'s. We then compute\nthe modular extensions $(\\cN^{IJ}_K,\\cS_I;c)$ for each entry $(\\tilde\nN^{ab}_c,\\tilde s_a; N^{ij}_k,s_i)$, which allows us to obtain a perfect list\nof GQLs (for certain symmetry groups). As a special case, we calculate the\nbosonic\/fermionic SPT states for some groups in 2+1D.\n\nIn \\Ref{LW160205936}, we will give a more mathematical description of our\ntheory. We certainly hope to generalize the above framework to higher\ndimensions. 
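As a concrete illustration of the kind of numerical check involved (this sketch and its function names are ours, not the code used in this paper), the following Python snippet verifies that a candidate set of fusion coefficients $N^{ij}_k$ forms an associative fusion ring and computes the total quantum dimension $D^2$ tabulated in the second column of the tables, for the simplest example of the fusion ring of $\Rp(Z_2)$:

```python
# Toy illustration (not the paper's actual code; all names here are ours):
# check that candidate fusion coefficients N^{ij}_k form an associative
# fusion ring, and compute the total quantum dimension D^2 listed in the
# second column of the tables.
import itertools
import numpy as np

def is_associative(N):
    """N[i][j][k] = N^{ij}_k.  Associativity of the fusion ring requires
    sum_m N^{ij}_m N^{mk}_l == sum_m N^{jk}_m N^{im}_l for all i,j,k,l."""
    n = len(N)
    for i, j, k, l in itertools.product(range(n), repeat=4):
        lhs = sum(N[i][j][m] * N[m][k][l] for m in range(n))
        rhs = sum(N[j][k][m] * N[i][m][l] for m in range(n))
        if lhs != rhs:
            return False
    return True

def total_quantum_dimension_sq(N):
    """The quantum dimension d_i is the largest-magnitude eigenvalue
    (Perron-Frobenius) of the fusion matrix (N_i)_{jk} = N^{ij}_k;
    D^2 = sum_i d_i^2."""
    dims = [max(abs(np.linalg.eigvals(np.array(Ni, dtype=float)))) for Ni in N]
    return float(sum(d * d for d in dims))

# Smallest non-trivial example: the fusion ring of Rep(Z_2),
# with particles {1, e} and e x e = 1.
N_Z2 = [
    [[1, 0], [0, 1]],  # fusion with the trivial particle 1
    [[0, 1], [1, 0]],  # e x 1 = e,  e x e = 1
]

assert is_associative(N_Z2)
print(round(total_quantum_dimension_sq(N_Z2), 6))  # 2.0, i.e. D^2 = 2
```

The same two checks extend directly to the larger fusion rings tabulated in this appendix; they are only a small part of the full set of necessary and sufficient conditions of Appendix \ref{cnds} and \Ref{W150605768}.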
We also hope to develop more efficient numerical codes to obtain\nbigger tables of GQLs.\n\n\\bigskip\n\n\\noindent {\\bf Acknowledgement}: \nWe would like to thank Pavel Etingof, Dmitri Nikshych, Chenjie Wang, and Zhenghan\nWang for many helpful discussions. This research is supported by NSF Grant No.\nDMR-1506475, and NSFC 11274192. It is also supported by the John Templeton\nFoundation No. 39901. Research at Perimeter Institute is supported by the\nGovernment of Canada through Industry Canada and by the Province of Ontario\nthrough the Ministry of Research. LK is supported by the Center of\nMathematical Sciences and Applications at Harvard University.\n\n\n\n\\def\\arraystretch{1.25} \\setlength\\tabcolsep{3pt}\n\\begin{table*}[t] \n\\caption{\nThe second 32 modular extensions of $\\sRp(Z_2\\times Z_2^f)$ with $N =12$.\n} \n\\label{mextZ2Z2f12b} \n\\centering\n\\begin{tabular}{ |c|c|l|l|l| } \n\\hline \n$N^{|\\Th|}_{c}$ & $D^2$ & $d_1,d_2,\\cdots$ & $s_1,s_2,\\cdots$ & comment \\\\\n\\hline \n$4^{ 0}_{0}$ & $4$ & $1, 1, 1, 1$ & $0, 0, \\frac{1}{2}, \\frac{1}{2}$ & $\\sRp(Z_2\\times Z_2^f)$ \\\\\n\\hline\n$12^{ B}_{-7\/2}$ & $16$ & $1, 1, 1, 1, 1, 1, 1, 1,\\zeta_{2}^{1},\\zeta_{2}^{1},\\zeta_{2}^{1},\\zeta_{2}^{1}$ & $0, 0, \\frac{1}{2}, \\frac{1}{2}, 0, 0, \\frac{1}{2}, \\frac{1}{2}, \\frac{1}{16}, \\frac{9}{16}, \\frac{9}{16}, \\frac{9}{16}$ & $4^{ B}_{ 4}\\boxtimes 3^{ B}_{ 1\/2}$\\\\\n$12^{ B}_{-7\/2}$ & $16$ & $1, 1, 1, 1, 1, 1, 1, 1,\\zeta_{2}^{1},\\zeta_{2}^{1},\\zeta_{2}^{1},\\zeta_{2}^{1}$ & $0, 0, \\frac{1}{2}, \\frac{1}{2}, 0, 0, \\frac{1}{2}, \\frac{1}{2}, \\frac{1}{16}, \\frac{9}{16}, \\frac{9}{16}, \\frac{9}{16}$ & $4^{ B}_{ 4}\\boxtimes 3^{ B}_{ 1\/2}$\\\\\n$12^{ B}_{-7\/2}$ & $16$ & $1, 1, 1, 1, 1, 1, 1, 1,\\zeta_{2}^{1},\\zeta_{2}^{1},\\zeta_{2}^{1},\\zeta_{2}^{1}$ & $0, 0, \\frac{1}{2}, \\frac{1}{2}, \\frac{1}{8}, \\frac{1}{8}, \\frac{5}{8}, \\frac{5}{8}, \\frac{7}{16}, \\frac{9}{16}, \\frac{9}{16}, \\frac{15}{16}$ & $4^{ B}_{ 
1}\\boxtimes 3^{ B}_{ 7\/2}$\\\\\n$12^{ B}_{-7\/2}$ & $16$ & $1, 1, 1, 1, 1, 1, 1, 1,\\zeta_{2}^{1},\\zeta_{2}^{1},\\zeta_{2}^{1},\\zeta_{2}^{1}$ & $0, 0, \\frac{1}{2}, \\frac{1}{2}, \\frac{1}{8}, \\frac{1}{8}, \\frac{5}{8}, \\frac{5}{8}, \\frac{7}{16}, \\frac{9}{16}, \\frac{9}{16}, \\frac{15}{16}$ & $4^{ B}_{ 1}\\boxtimes 3^{ B}_{ 7\/2}$\\\\\n$12^{ B}_{-7\/2}$ & $16$ & $1, 1, 1, 1, 1, 1, 1, 1,\\zeta_{2}^{1},\\zeta_{2}^{1},\\zeta_{2}^{1},\\zeta_{2}^{1}$ & $0, 0, \\frac{1}{2}, \\frac{1}{2}, \\frac{1}{4}, \\frac{1}{4}, \\frac{3}{4}, \\frac{3}{4}, \\frac{5}{16}, \\frac{9}{16}, \\frac{9}{16}, \\frac{13}{16}$ & $6^{ B}_{ 7\/2}\\boxtimes 2^{ B}_{ 1}$\\\\\n$12^{ B}_{-7\/2}$ & $16$ & $1, 1, 1, 1, 1, 1, 1, 1,\\zeta_{2}^{1},\\zeta_{2}^{1},\\zeta_{2}^{1},\\zeta_{2}^{1}$ & $0, 0, \\frac{1}{2}, \\frac{1}{2}, \\frac{1}{4}, \\frac{1}{4}, \\frac{3}{4}, \\frac{3}{4}, \\frac{5}{16}, \\frac{9}{16}, \\frac{9}{16}, \\frac{13}{16}$ & $6^{ B}_{ 7\/2}\\boxtimes 2^{ B}_{ 1}$\\\\\n$12^{ B}_{-7\/2}$ & $16$ & $1, 1, 1, 1, 1, 1, 1, 1,\\zeta_{2}^{1},\\zeta_{2}^{1},\\zeta_{2}^{1},\\zeta_{2}^{1}$ & $0, 0, \\frac{1}{2}, \\frac{1}{2}, \\frac{3}{8}, \\frac{3}{8}, \\frac{7}{8}, \\frac{7}{8}, \\frac{3}{16}, \\frac{9}{16}, \\frac{9}{16}, \\frac{11}{16}$ & $4^{ B}_{ 3}\\boxtimes 3^{ B}_{ 3\/2}$\\\\\n$12^{ B}_{-7\/2}$ & $16$ & $1, 1, 1, 1, 1, 1, 1, 1,\\zeta_{2}^{1},\\zeta_{2}^{1},\\zeta_{2}^{1},\\zeta_{2}^{1}$ & $0, 0, \\frac{1}{2}, \\frac{1}{2}, \\frac{3}{8}, \\frac{3}{8}, \\frac{7}{8}, \\frac{7}{8}, \\frac{3}{16}, \\frac{9}{16}, \\frac{9}{16}, \\frac{11}{16}$ & $4^{ B}_{ 3}\\boxtimes 3^{ B}_{ 3\/2}$\\\\\n$12^{ B}_{-5\/2}$ & $16$ & $1, 1, 1, 1, 1, 1, 1, 1,\\zeta_{2}^{1},\\zeta_{2}^{1},\\zeta_{2}^{1},\\zeta_{2}^{1}$ & $0, 0, \\frac{1}{2}, \\frac{1}{2}, 0, 0, \\frac{1}{2}, \\frac{1}{2}, \\frac{3}{16}, \\frac{11}{16}, \\frac{11}{16}, \\frac{11}{16}$ & $4^{ B}_{ 4}\\boxtimes 3^{ B}_{ 3\/2}$\\\\\n$12^{ B}_{-5\/2}$ & $16$ & $1, 1, 1, 1, 1, 1, 1, 1,\\zeta_{2}^{1},\\zeta_{2}^{1},\\zeta_{2}^{1},\\zeta_{2}^{1}$ & 
$0, 0, \\frac{1}{2}, \\frac{1}{2}, 0, 0, \\frac{1}{2}, \\frac{1}{2}, \\frac{3}{16}, \\frac{11}{16}, \\frac{11}{16}, \\frac{11}{16}$ & $4^{ B}_{ 4}\\boxtimes 3^{ B}_{ 3\/2}$\\\\\n$12^{ B}_{-5\/2}$ & $16$ & $1, 1, 1, 1, 1, 1, 1, 1,\\zeta_{2}^{1},\\zeta_{2}^{1},\\zeta_{2}^{1},\\zeta_{2}^{1}$ & $0, 0, \\frac{1}{2}, \\frac{1}{2}, \\frac{1}{8}, \\frac{1}{8}, \\frac{5}{8}, \\frac{5}{8}, \\frac{1}{16}, \\frac{9}{16}, \\frac{11}{16}, \\frac{11}{16}$ & $4^{ B}_{-3}\\boxtimes 3^{ B}_{ 1\/2}$\\\\\n$12^{ B}_{-5\/2}$ & $16$ & $1, 1, 1, 1, 1, 1, 1, 1,\\zeta_{2}^{1},\\zeta_{2}^{1},\\zeta_{2}^{1},\\zeta_{2}^{1}$ & $0, 0, \\frac{1}{2}, \\frac{1}{2}, \\frac{1}{8}, \\frac{1}{8}, \\frac{5}{8}, \\frac{5}{8}, \\frac{1}{16}, \\frac{9}{16}, \\frac{11}{16}, \\frac{11}{16}$ & $4^{ B}_{-3}\\boxtimes 3^{ B}_{ 1\/2}$\\\\\n$12^{ B}_{-5\/2}$ & $16$ & $1, 1, 1, 1, 1, 1, 1, 1,\\zeta_{2}^{1},\\zeta_{2}^{1},\\zeta_{2}^{1},\\zeta_{2}^{1}$ & $0, 0, \\frac{1}{2}, \\frac{1}{2}, \\frac{1}{4}, \\frac{1}{4}, \\frac{3}{4}, \\frac{3}{4}, \\frac{7}{16}, \\frac{11}{16}, \\frac{11}{16}, \\frac{15}{16}$ & $6^{ B}_{-7\/2}\\boxtimes 2^{ B}_{ 1}$\\\\\n$12^{ B}_{-5\/2}$ & $16$ & $1, 1, 1, 1, 1, 1, 1, 1,\\zeta_{2}^{1},\\zeta_{2}^{1},\\zeta_{2}^{1},\\zeta_{2}^{1}$ & $0, 0, \\frac{1}{2}, \\frac{1}{2}, \\frac{1}{4}, \\frac{1}{4}, \\frac{3}{4}, \\frac{3}{4}, \\frac{7}{16}, \\frac{11}{16}, \\frac{11}{16}, \\frac{15}{16}$ & $6^{ B}_{-7\/2}\\boxtimes 2^{ B}_{ 1}$\\\\\n$12^{ B}_{-5\/2}$ & $16$ & $1, 1, 1, 1, 1, 1, 1, 1,\\zeta_{2}^{1},\\zeta_{2}^{1},\\zeta_{2}^{1},\\zeta_{2}^{1}$ & $0, 0, \\frac{1}{2}, \\frac{1}{2}, \\frac{3}{8}, \\frac{3}{8}, \\frac{7}{8}, \\frac{7}{8}, \\frac{5}{16}, \\frac{11}{16}, \\frac{11}{16}, \\frac{13}{16}$ & $4^{ B}_{ 3}\\boxtimes 3^{ B}_{ 5\/2}$\\\\\n$12^{ B}_{-5\/2}$ & $16$ & $1, 1, 1, 1, 1, 1, 1, 1,\\zeta_{2}^{1},\\zeta_{2}^{1},\\zeta_{2}^{1},\\zeta_{2}^{1}$ & $0, 0, \\frac{1}{2}, \\frac{1}{2}, \\frac{3}{8}, \\frac{3}{8}, \\frac{7}{8}, \\frac{7}{8}, \\frac{5}{16}, \\frac{11}{16}, \\frac{11}{16}, 
\\frac{13}{16}$ & $4^{ B}_{ 3}\\boxtimes 3^{ B}_{ 5\/2}$\\\\\n$12^{ B}_{-3\/2}$ & $16$ & $1, 1, 1, 1, 1, 1, 1, 1,\\zeta_{2}^{1},\\zeta_{2}^{1},\\zeta_{2}^{1},\\zeta_{2}^{1}$ & $0, 0, \\frac{1}{2}, \\frac{1}{2}, 0, 0, \\frac{1}{2}, \\frac{1}{2}, \\frac{5}{16}, \\frac{13}{16}, \\frac{13}{16}, \\frac{13}{16}$ & $4^{ B}_{ 4}\\boxtimes 3^{ B}_{ 5\/2}$\\\\\n$12^{ B}_{-3\/2}$ & $16$ & $1, 1, 1, 1, 1, 1, 1, 1,\\zeta_{2}^{1},\\zeta_{2}^{1},\\zeta_{2}^{1},\\zeta_{2}^{1}$ & $0, 0, \\frac{1}{2}, \\frac{1}{2}, 0, 0, \\frac{1}{2}, \\frac{1}{2}, \\frac{5}{16}, \\frac{13}{16}, \\frac{13}{16}, \\frac{13}{16}$ & $4^{ B}_{ 4}\\boxtimes 3^{ B}_{ 5\/2}$\\\\\n$12^{ B}_{-3\/2}$ & $16$ & $1, 1, 1, 1, 1, 1, 1, 1,\\zeta_{2}^{1},\\zeta_{2}^{1},\\zeta_{2}^{1},\\zeta_{2}^{1}$ & $0, 0, \\frac{1}{2}, \\frac{1}{2}, \\frac{1}{8}, \\frac{1}{8}, \\frac{5}{8}, \\frac{5}{8}, \\frac{3}{16}, \\frac{11}{16}, \\frac{13}{16}, \\frac{13}{16}$ & $4^{ B}_{-3}\\boxtimes 3^{ B}_{ 3\/2}$\\\\\n$12^{ B}_{-3\/2}$ & $16$ & $1, 1, 1, 1, 1, 1, 1, 1,\\zeta_{2}^{1},\\zeta_{2}^{1},\\zeta_{2}^{1},\\zeta_{2}^{1}$ & $0, 0, \\frac{1}{2}, \\frac{1}{2}, \\frac{1}{8}, \\frac{1}{8}, \\frac{5}{8}, \\frac{5}{8}, \\frac{3}{16}, \\frac{11}{16}, \\frac{13}{16}, \\frac{13}{16}$ & $4^{ B}_{-3}\\boxtimes 3^{ B}_{ 3\/2}$\\\\\n$12^{ B}_{-3\/2}$ & $16$ & $1, 1, 1, 1, 1, 1, 1, 1,\\zeta_{2}^{1},\\zeta_{2}^{1},\\zeta_{2}^{1},\\zeta_{2}^{1}$ & $0, 0, \\frac{1}{2}, \\frac{1}{2}, \\frac{1}{4}, \\frac{1}{4}, \\frac{3}{4}, \\frac{3}{4}, \\frac{1}{16}, \\frac{9}{16}, \\frac{13}{16}, \\frac{13}{16}$ & $6^{ B}_{-5\/2}\\boxtimes 2^{ B}_{ 1}$\\\\\n$12^{ B}_{-3\/2}$ & $16$ & $1, 1, 1, 1, 1, 1, 1, 1,\\zeta_{2}^{1},\\zeta_{2}^{1},\\zeta_{2}^{1},\\zeta_{2}^{1}$ & $0, 0, \\frac{1}{2}, \\frac{1}{2}, \\frac{1}{4}, \\frac{1}{4}, \\frac{3}{4}, \\frac{3}{4}, \\frac{1}{16}, \\frac{9}{16}, \\frac{13}{16}, \\frac{13}{16}$ & $6^{ B}_{-5\/2}\\boxtimes 2^{ B}_{ 1}$\\\\\n$12^{ B}_{-3\/2}$ & $16$ & $1, 1, 1, 1, 1, 1, 1, 
1,\\zeta_{2}^{1},\\zeta_{2}^{1},\\zeta_{2}^{1},\\zeta_{2}^{1}$ & $0, 0, \\frac{1}{2}, \\frac{1}{2}, \\frac{3}{8}, \\frac{3}{8}, \\frac{7}{8}, \\frac{7}{8}, \\frac{7}{16}, \\frac{13}{16}, \\frac{13}{16}, \\frac{15}{16}$ & $4^{ B}_{ 3}\\boxtimes 3^{ B}_{ 7\/2}$\\\\\n$12^{ B}_{-3\/2}$ & $16$ & $1, 1, 1, 1, 1, 1, 1, 1,\\zeta_{2}^{1},\\zeta_{2}^{1},\\zeta_{2}^{1},\\zeta_{2}^{1}$ & $0, 0, \\frac{1}{2}, \\frac{1}{2}, \\frac{3}{8}, \\frac{3}{8}, \\frac{7}{8}, \\frac{7}{8}, \\frac{7}{16}, \\frac{13}{16}, \\frac{13}{16}, \\frac{15}{16}$ & $4^{ B}_{ 3}\\boxtimes 3^{ B}_{ 7\/2}$\\\\\n$12^{ B}_{-1\/2}$ & $16$ & $1, 1, 1, 1, 1, 1, 1, 1,\\zeta_{2}^{1},\\zeta_{2}^{1},\\zeta_{2}^{1},\\zeta_{2}^{1}$ & $0, 0, \\frac{1}{2}, \\frac{1}{2}, 0, 0, \\frac{1}{2}, \\frac{1}{2}, \\frac{7}{16}, \\frac{15}{16}, \\frac{15}{16}, \\frac{15}{16}$ & $4^{ B}_{ 4}\\boxtimes 3^{ B}_{ 7\/2}$\\\\\n$12^{ B}_{-1\/2}$ & $16$ & $1, 1, 1, 1, 1, 1, 1, 1,\\zeta_{2}^{1},\\zeta_{2}^{1},\\zeta_{2}^{1},\\zeta_{2}^{1}$ & $0, 0, \\frac{1}{2}, \\frac{1}{2}, 0, 0, \\frac{1}{2}, \\frac{1}{2}, \\frac{7}{16}, \\frac{15}{16}, \\frac{15}{16}, \\frac{15}{16}$ & $4^{ B}_{ 4}\\boxtimes 3^{ B}_{ 7\/2}$\\\\\n$12^{ B}_{-1\/2}$ & $16$ & $1, 1, 1, 1, 1, 1, 1, 1,\\zeta_{2}^{1},\\zeta_{2}^{1},\\zeta_{2}^{1},\\zeta_{2}^{1}$ & $0, 0, \\frac{1}{2}, \\frac{1}{2}, \\frac{1}{8}, \\frac{1}{8}, \\frac{5}{8}, \\frac{5}{8}, \\frac{5}{16}, \\frac{13}{16}, \\frac{15}{16}, \\frac{15}{16}$ & $4^{ B}_{-3}\\boxtimes 3^{ B}_{ 5\/2}$\\\\\n$12^{ B}_{-1\/2}$ & $16$ & $1, 1, 1, 1, 1, 1, 1, 1,\\zeta_{2}^{1},\\zeta_{2}^{1},\\zeta_{2}^{1},\\zeta_{2}^{1}$ & $0, 0, \\frac{1}{2}, \\frac{1}{2}, \\frac{1}{8}, \\frac{1}{8}, \\frac{5}{8}, \\frac{5}{8}, \\frac{5}{16}, \\frac{13}{16}, \\frac{15}{16}, \\frac{15}{16}$ & $4^{ B}_{-3}\\boxtimes 3^{ B}_{ 5\/2}$\\\\\n$12^{ B}_{-1\/2}$ & $16$ & $1, 1, 1, 1, 1, 1, 1, 1,\\zeta_{2}^{1},\\zeta_{2}^{1},\\zeta_{2}^{1},\\zeta_{2}^{1}$ & $0, 0, \\frac{1}{2}, \\frac{1}{2}, \\frac{1}{4}, \\frac{1}{4}, \\frac{3}{4}, \\frac{3}{4}, 
\\frac{3}{16}, \\frac{11}{16}, \\frac{15}{16}, \\frac{15}{16}$ & $6^{ B}_{-3\/2}\\boxtimes 2^{ B}_{ 1}$\\\\\n$12^{ B}_{-1\/2}$ & $16$ & $1, 1, 1, 1, 1, 1, 1, 1,\\zeta_{2}^{1},\\zeta_{2}^{1},\\zeta_{2}^{1},\\zeta_{2}^{1}$ & $0, 0, \\frac{1}{2}, \\frac{1}{2}, \\frac{1}{4}, \\frac{1}{4}, \\frac{3}{4}, \\frac{3}{4}, \\frac{3}{16}, \\frac{11}{16}, \\frac{15}{16}, \\frac{15}{16}$ & $6^{ B}_{-3\/2}\\boxtimes 2^{ B}_{ 1}$\\\\\n$12^{ B}_{-1\/2}$ & $16$ & $1, 1, 1, 1, 1, 1, 1, 1,\\zeta_{2}^{1},\\zeta_{2}^{1},\\zeta_{2}^{1},\\zeta_{2}^{1}$ & $0, 0, \\frac{1}{2}, \\frac{1}{2}, \\frac{3}{8}, \\frac{3}{8}, \\frac{7}{8}, \\frac{7}{8}, \\frac{1}{16}, \\frac{9}{16}, \\frac{15}{16}, \\frac{15}{16}$ & $4^{ B}_{-1}\\boxtimes 3^{ B}_{ 1\/2}$\\\\\n$12^{ B}_{-1\/2}$ & $16$ & $1, 1, 1, 1, 1, 1, 1, 1,\\zeta_{2}^{1},\\zeta_{2}^{1},\\zeta_{2}^{1},\\zeta_{2}^{1}$ & $0, 0, \\frac{1}{2}, \\frac{1}{2}, \\frac{3}{8}, \\frac{3}{8}, \\frac{7}{8}, \\frac{7}{8}, \\frac{1}{16}, \\frac{9}{16}, \\frac{15}{16}, \\frac{15}{16}$ & $4^{ B}_{-1}\\boxtimes 3^{ B}_{ 1\/2}$\\\\\n\\hline\n\\end{tabular}\n\\end{table*} \n\n\\def\\arraystretch{1.25} \\setlength\\tabcolsep{3pt}\n\\begin{table*}[t] \n\\caption{\nAll the 32 modular extensions of $\\sRp(Z_2\\times Z_2^f)$ with $N =16$.\n} \n\\label{mextZ2Z2f16} \n\\centering\n\\begin{tabular}{ |c|c|l|l|l| } \n\\hline \n$N^{|\\Th|}_{c}$ & $D^2$ & $d_1,d_2,\\cdots$ & $s_1,s_2,\\cdots$ & comment \\\\\n\\hline \n$4^{ 0}_{0}$ & $4$ & $1, 1, 1, 1$ & $0, 0, \\frac{1}{2}, \\frac{1}{2}$ & $\\sRp(Z_2\\times Z_2^f)$ \\\\\n\\hline\n$16^{ B}_{ 0}$ & $16$ & $1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1$ & $0, 0, \\frac{1}{2}, \\frac{1}{2}, 0, 0, 0, 0, 0, 0, 0, 0, \\frac{1}{2}, \\frac{1}{2}, \\frac{1}{2}, \\frac{1}{2}$ & $4^{ B}_{ 0}\\boxtimes 4^{ B}_{ 0}$\\\\\n$16^{ B}_{ 0}$ & $16$ & $1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1$ & $0, 0, \\frac{1}{2}, \\frac{1}{2}, 0, 0, 0, 0, \\frac{1}{8}, \\frac{1}{8}, \\frac{3}{8}, \\frac{3}{8}, \\frac{5}{8}, \\frac{5}{8}, 
\\frac{7}{8}, \\frac{7}{8}$ & $4^{ B}_{-1}\\boxtimes 4^{ B}_{ 1}$\\\\\n$16^{ B}_{ 0}$ & $16$ & $1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1$ & $0, 0, \\frac{1}{2}, \\frac{1}{2}, 0, 0, 0, 0, \\frac{1}{8}, \\frac{1}{8}, \\frac{3}{8}, \\frac{3}{8}, \\frac{5}{8}, \\frac{5}{8}, \\frac{7}{8}, \\frac{7}{8}$ & $4^{ B}_{-1}\\boxtimes 4^{ B}_{ 1}$\\\\\n$16^{ B}_{ 0}$ & $16$ & $1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1$ & $0, 0, \\frac{1}{2}, \\frac{1}{2}, 0, 0, 0, 0, \\frac{1}{4}, \\frac{1}{4}, \\frac{1}{4}, \\frac{1}{4}, \\frac{3}{4}, \\frac{3}{4}, \\frac{3}{4}, \\frac{3}{4}$ & $8^{ B}_{-1}\\boxtimes 2^{ B}_{ 1}$\\\\\n\\hline\n$16^{ B}_{ 1}$ & $16$ & $1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1$ & $0, 0, \\frac{1}{2}, \\frac{1}{2}, 0, 0, \\frac{1}{8}, \\frac{1}{8}, \\frac{1}{8}, \\frac{1}{8}, \\frac{1}{8}, \\frac{1}{8}, \\frac{1}{2}, \\frac{1}{2}, \\frac{5}{8}, \\frac{5}{8}$ & $4^{ B}_{ 1}\\boxtimes 4^{ B}_{ 0}$\\\\\n$16^{ B}_{ 1}$ & $16$ & $1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1$ & $0, 0, \\frac{1}{2}, \\frac{1}{2}, 0, 0, \\frac{1}{8}, \\frac{1}{8}, \\frac{1}{8}, \\frac{1}{8}, \\frac{1}{8}, \\frac{1}{8}, \\frac{1}{2}, \\frac{1}{2}, \\frac{5}{8}, \\frac{5}{8}$ & $4^{ B}_{ 1}\\boxtimes 4^{ B}_{ 0}$\\\\\n$16^{ B}_{ 1}$ & $16$ & $1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1$ & $0, 0, \\frac{1}{2}, \\frac{1}{2}, \\frac{1}{8}, \\frac{1}{8}, \\frac{1}{8}, \\frac{1}{8}, \\frac{1}{4}, \\frac{1}{4}, \\frac{3}{8}, \\frac{3}{8}, \\frac{3}{4}, \\frac{3}{4}, \\frac{7}{8}, \\frac{7}{8}$ & $8^{ B}_{ 0}\\boxtimes 2^{ B}_{ 1}$\\\\\n$16^{ B}_{ 1}$ & $16$ & $1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1$ & $0, 0, \\frac{1}{2}, \\frac{1}{2}, \\frac{1}{8}, \\frac{1}{8}, \\frac{1}{8}, \\frac{1}{8}, \\frac{1}{4}, \\frac{1}{4}, \\frac{3}{8}, \\frac{3}{8}, \\frac{3}{4}, \\frac{3}{4}, \\frac{7}{8}, \\frac{7}{8}$ & $8^{ B}_{ 0}\\boxtimes 2^{ B}_{ 1}$\\\\\n$16^{ B}_{ 2}$ & $16$ & $1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1$ & $0, 0, \\frac{1}{2}, \\frac{1}{2}, 0, 0, 
\\frac{1}{4}, \\frac{1}{4}, \\frac{1}{4}, \\frac{1}{4}, \\frac{1}{4}, \\frac{1}{4}, \\frac{1}{2}, \\frac{1}{2}, \\frac{3}{4}, \\frac{3}{4}$ & $8^{ B}_{ 1}\\boxtimes 2^{ B}_{ 1}$\\\\\n$16^{ B}_{ 2}$ & $16$ & $1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1$ & $0, 0, \\frac{1}{2}, \\frac{1}{2}, 0, 0, \\frac{1}{4}, \\frac{1}{4}, \\frac{1}{4}, \\frac{1}{4}, \\frac{1}{4}, \\frac{1}{4}, \\frac{1}{2}, \\frac{1}{2}, \\frac{3}{4}, \\frac{3}{4}$ & $8^{ B}_{ 1}\\boxtimes 2^{ B}_{ 1}$\\\\\n$16^{ B}_{ 2}$ & $16$ & $1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1$ & $0, 0, \\frac{1}{2}, \\frac{1}{2}, \\frac{1}{8}, \\frac{1}{8}, \\frac{1}{8}, \\frac{1}{8}, \\frac{1}{4}, \\frac{1}{4}, \\frac{1}{4}, \\frac{1}{4}, \\frac{5}{8}, \\frac{5}{8}, \\frac{5}{8}, \\frac{5}{8}$ & $4^{ B}_{ 1}\\boxtimes 4^{ B}_{ 1}$\\\\\n$16^{ B}_{ 2}$ & $16$ & $1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1$ & $0, 0, \\frac{1}{2}, \\frac{1}{2}, \\frac{1}{4}, \\frac{1}{4}, \\frac{1}{4}, \\frac{1}{4}, \\frac{3}{8}, \\frac{3}{8}, \\frac{3}{8}, \\frac{3}{8}, \\frac{7}{8}, \\frac{7}{8}, \\frac{7}{8}, \\frac{7}{8}$ & $4^{ B}_{-1}\\boxtimes 4^{ B}_{ 3}$\\\\\n$16^{ B}_{ 3}$ & $16$ & $1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1$ & $0, 0, \\frac{1}{2}, \\frac{1}{2}, 0, 0, \\frac{3}{8}, \\frac{3}{8}, \\frac{3}{8}, \\frac{3}{8}, \\frac{3}{8}, \\frac{3}{8}, \\frac{1}{2}, \\frac{1}{2}, \\frac{7}{8}, \\frac{7}{8}$ & $4^{ B}_{ 3}\\boxtimes 4^{ B}_{ 0}$\\\\\n$16^{ B}_{ 3}$ & $16$ & $1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1$ & $0, 0, \\frac{1}{2}, \\frac{1}{2}, 0, 0, \\frac{3}{8}, \\frac{3}{8}, \\frac{3}{8}, \\frac{3}{8}, \\frac{3}{8}, \\frac{3}{8}, \\frac{1}{2}, \\frac{1}{2}, \\frac{7}{8}, \\frac{7}{8}$ & $4^{ B}_{ 3}\\boxtimes 4^{ B}_{ 0}$\\\\\n$16^{ B}_{ 3}$ & $16$ & $1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1$ & $0, 0, \\frac{1}{2}, \\frac{1}{2}, \\frac{1}{8}, \\frac{1}{8}, \\frac{1}{4}, \\frac{1}{4}, \\frac{3}{8}, \\frac{3}{8}, \\frac{3}{8}, \\frac{3}{8}, \\frac{5}{8}, \\frac{5}{8}, \\frac{3}{4}, \\frac{3}{4}$ 
& $8^{ B}_{ 2}\\boxtimes 2^{ B}_{ 1}$\\\\\n$16^{ B}_{ 3}$ & $16$ & $1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1$ & $0, 0, \\frac{1}{2}, \\frac{1}{2}, \\frac{1}{8}, \\frac{1}{8}, \\frac{1}{4}, \\frac{1}{4}, \\frac{3}{8}, \\frac{3}{8}, \\frac{3}{8}, \\frac{3}{8}, \\frac{5}{8}, \\frac{5}{8}, \\frac{3}{4}, \\frac{3}{4}$ & $8^{ B}_{ 2}\\boxtimes 2^{ B}_{ 1}$\\\\\n$16^{ B}_{ 4}$ & $16$ & $1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1$ & $0, 0, \\frac{1}{2}, \\frac{1}{2}, 0, 0, 0, 0, \\frac{1}{2}, \\frac{1}{2}, \\frac{1}{2}, \\frac{1}{2}, \\frac{1}{2}, \\frac{1}{2}, \\frac{1}{2}, \\frac{1}{2}$ & $4^{ B}_{ 4}\\boxtimes 4^{ B}_{ 0}$\\\\\n$16^{ B}_{ 4}$ & $16$ & $1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1$ & $0, 0, \\frac{1}{2}, \\frac{1}{2}, \\frac{1}{8}, \\frac{1}{8}, \\frac{3}{8}, \\frac{3}{8}, \\frac{1}{2}, \\frac{1}{2}, \\frac{1}{2}, \\frac{1}{2}, \\frac{5}{8}, \\frac{5}{8}, \\frac{7}{8}, \\frac{7}{8}$ & $4^{ B}_{ 3}\\boxtimes 4^{ B}_{ 1}$\\\\\n$16^{ B}_{ 4}$ & $16$ & $1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1$ & $0, 0, \\frac{1}{2}, \\frac{1}{2}, \\frac{1}{8}, \\frac{1}{8}, \\frac{3}{8}, \\frac{3}{8}, \\frac{1}{2}, \\frac{1}{2}, \\frac{1}{2}, \\frac{1}{2}, \\frac{5}{8}, \\frac{5}{8}, \\frac{7}{8}, \\frac{7}{8}$ & $4^{ B}_{ 3}\\boxtimes 4^{ B}_{ 1}$\\\\\n$16^{ B}_{ 4}$ & $16$ & $1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1$ & $0, 0, \\frac{1}{2}, \\frac{1}{2}, \\frac{1}{4}, \\frac{1}{4}, \\frac{1}{4}, \\frac{1}{4}, \\frac{1}{2}, \\frac{1}{2}, \\frac{1}{2}, \\frac{1}{2}, \\frac{3}{4}, \\frac{3}{4}, \\frac{3}{4}, \\frac{3}{4}$ & $8^{ B}_{ 3}\\boxtimes 2^{ B}_{ 1}$\\\\\n$16^{ B}_{-3}$ & $16$ & $1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1$ & $0, 0, \\frac{1}{2}, \\frac{1}{2}, 0, 0, \\frac{1}{8}, \\frac{1}{8}, \\frac{1}{2}, \\frac{1}{2}, \\frac{5}{8}, \\frac{5}{8}, \\frac{5}{8}, \\frac{5}{8}, \\frac{5}{8}, \\frac{5}{8}$ & $4^{ B}_{-3}\\boxtimes 4^{ B}_{ 0}$\\\\\n$16^{ B}_{-3}$ & $16$ & $1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1$ & $0, 0, \\frac{1}{2}, 
\\frac{1}{2}, 0, 0, \\frac{1}{8}, \\frac{1}{8}, \\frac{1}{2}, \\frac{1}{2}, \\frac{5}{8}, \\frac{5}{8}, \\frac{5}{8}, \\frac{5}{8}, \\frac{5}{8}, \\frac{5}{8}$ & $4^{ B}_{-3}\\boxtimes 4^{ B}_{ 0}$\\\\\n$16^{ B}_{-3}$ & $16$ & $1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1$ & $0, 0, \\frac{1}{2}, \\frac{1}{2}, \\frac{1}{4}, \\frac{1}{4}, \\frac{3}{8}, \\frac{3}{8}, \\frac{5}{8}, \\frac{5}{8}, \\frac{5}{8}, \\frac{5}{8}, \\frac{3}{4}, \\frac{3}{4}, \\frac{7}{8}, \\frac{7}{8}$ & $8^{ B}_{ 4}\\boxtimes 2^{ B}_{ 1}$\\\\\n$16^{ B}_{-3}$ & $16$ & $1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1$ & $0, 0, \\frac{1}{2}, \\frac{1}{2}, \\frac{1}{4}, \\frac{1}{4}, \\frac{3}{8}, \\frac{3}{8}, \\frac{5}{8}, \\frac{5}{8}, \\frac{5}{8}, \\frac{5}{8}, \\frac{3}{4}, \\frac{3}{4}, \\frac{7}{8}, \\frac{7}{8}$ & $8^{ B}_{ 4}\\boxtimes 2^{ B}_{ 1}$\\\\\n$16^{ B}_{-2}$ & $16$ & $1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1$ & $0, 0, \\frac{1}{2}, \\frac{1}{2}, 0, 0, \\frac{1}{4}, \\frac{1}{4}, \\frac{1}{2}, \\frac{1}{2}, \\frac{3}{4}, \\frac{3}{4}, \\frac{3}{4}, \\frac{3}{4}, \\frac{3}{4}, \\frac{3}{4}$ & $8^{ B}_{-3}\\boxtimes 2^{ B}_{ 1}$\\\\\n$16^{ B}_{-2}$ & $16$ & $1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1$ & $0, 0, \\frac{1}{2}, \\frac{1}{2}, 0, 0, \\frac{1}{4}, \\frac{1}{4}, \\frac{1}{2}, \\frac{1}{2}, \\frac{3}{4}, \\frac{3}{4}, \\frac{3}{4}, \\frac{3}{4}, \\frac{3}{4}, \\frac{3}{4}$ & $8^{ B}_{-3}\\boxtimes 2^{ B}_{ 1}$\\\\\n$16^{ B}_{-2}$ & $16$ & $1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1$ & $0, 0, \\frac{1}{2}, \\frac{1}{2}, \\frac{1}{8}, \\frac{1}{8}, \\frac{1}{8}, \\frac{1}{8}, \\frac{5}{8}, \\frac{5}{8}, \\frac{5}{8}, \\frac{5}{8}, \\frac{3}{4}, \\frac{3}{4}, \\frac{3}{4}, \\frac{3}{4}$ & $4^{ B}_{-3}\\boxtimes 4^{ B}_{ 1}$\\\\\n$16^{ B}_{-2}$ & $16$ & $1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1$ & $0, 0, \\frac{1}{2}, \\frac{1}{2}, \\frac{3}{8}, \\frac{3}{8}, \\frac{3}{8}, \\frac{3}{8}, \\frac{3}{4}, \\frac{3}{4}, \\frac{3}{4}, \\frac{3}{4}, \\frac{7}{8}, 
\\frac{7}{8}, \\frac{7}{8}, \\frac{7}{8}$ & $4^{ B}_{ 3}\\boxtimes 4^{ B}_{ 3}$\\\\\n$16^{ B}_{-1}$ & $16$ & $1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1$ & $0, 0, \\frac{1}{2}, \\frac{1}{2}, 0, 0, \\frac{3}{8}, \\frac{3}{8}, \\frac{1}{2}, \\frac{1}{2}, \\frac{7}{8}, \\frac{7}{8}, \\frac{7}{8}, \\frac{7}{8}, \\frac{7}{8}, \\frac{7}{8}$ & $4^{ B}_{-1}\\boxtimes 4^{ B}_{ 0}$\\\\\n$16^{ B}_{-1}$ & $16$ & $1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1$ & $0, 0, \\frac{1}{2}, \\frac{1}{2}, 0, 0, \\frac{3}{8}, \\frac{3}{8}, \\frac{1}{2}, \\frac{1}{2}, \\frac{7}{8}, \\frac{7}{8}, \\frac{7}{8}, \\frac{7}{8}, \\frac{7}{8}, \\frac{7}{8}$ & $4^{ B}_{-1}\\boxtimes 4^{ B}_{ 0}$\\\\\n$16^{ B}_{-1}$ & $16$ & $1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1$ & $0, 0, \\frac{1}{2}, \\frac{1}{2}, \\frac{1}{8}, \\frac{1}{8}, \\frac{1}{4}, \\frac{1}{4}, \\frac{5}{8}, \\frac{5}{8}, \\frac{3}{4}, \\frac{3}{4}, \\frac{7}{8}, \\frac{7}{8}, \\frac{7}{8}, \\frac{7}{8}$ & $8^{ B}_{-2}\\boxtimes 2^{ B}_{ 1}$\\\\\n$16^{ B}_{-1}$ & $16$ & $1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1$ & $0, 0, \\frac{1}{2}, \\frac{1}{2}, \\frac{1}{8}, \\frac{1}{8}, \\frac{1}{4}, \\frac{1}{4}, \\frac{5}{8}, \\frac{5}{8}, \\frac{3}{4}, \\frac{3}{4}, \\frac{7}{8}, \\frac{7}{8}, \\frac{7}{8}, \\frac{7}{8}$ & $8^{ B}_{-2}\\boxtimes 2^{ B}_{ 1}$\\\\\n\\hline\n\\end{tabular}\n\\end{table*} \n\n\n\\def1.25} \\setlength\\tabcolsep{3pt{1.25} \\setlength\\tabcolsep{3pt}\n\\begin{table*}[t] \n\\caption{\nAll the modular extensions of $\\sRp(Z_6^f)=\\sRp(Z_3\\times Z_2^f)$.\n} \n\\label{mextZ6f} \n\\centering\n\\begin{tabular}{ |c|c|l|l| } \n\\hline \n$N^{|\\Th|}_{c}$ & $D^2$ & $d_1,d_2,\\cdots$ & $s_1,s_2,\\cdots$ \\\\\n\\hline \n$6^{ 0}_{0}$ & $6$ & $1, 1, 1, 1, 1, 1$ & $0, \\frac{1}{2}, 0, \\frac{1}{2}, 0, \\frac{1}{2}$ \\hfill $\\sRp(Z_6^f)$\\\\\n\\hline\n$36^{ B}_{ 0}$ & $36$ & $1\\times 36$ & $0, \\frac{1}{2}, 0, \\frac{1}{2}, 0, \\frac{1}{2}, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, \\frac{1}{6}, \\frac{1}{6}, 
\\frac{1}{3}, \\frac{1}{3}, \\frac{1}{3}, \\frac{1}{3}, \\frac{1}{3}, \\frac{1}{3}, \\frac{1}{2}, \\frac{1}{2}, \\frac{2}{3}, \\frac{2}{3}, \\frac{2}{3}, \\frac{2}{3}, \\frac{2}{3}, \\frac{2}{3}, \\frac{5}{6}, \\frac{5}{6}$ \\\\\n$36^{ B}_{ 0}$ & $36$ & $1\\times 36$ & $0, \\frac{1}{2}, 0, \\frac{1}{2}, 0, \\frac{1}{2}, 0, 0, 0, 0, 0, 0, \\frac{1}{18}, \\frac{1}{18}, \\frac{2}{9}, \\frac{2}{9}, \\frac{2}{9}, \\frac{2}{9}, \\frac{2}{9}, \\frac{2}{9}, \\frac{7}{18}, \\frac{7}{18}, \\frac{5}{9}, \\frac{5}{9}, \\frac{5}{9}, \\frac{5}{9}, \\frac{5}{9}, \\frac{5}{9}, \\frac{13}{18}, \\frac{13}{18}, \\frac{8}{9}, \\frac{8}{9}, \\frac{8}{9}, \\frac{8}{9}, \\frac{8}{9}, \\frac{8}{9}$ \\\\\n$36^{ B}_{ 0}$ & $36$ & $1\\times 36$ & $0, \\frac{1}{2}, 0, \\frac{1}{2}, 0, \\frac{1}{2}, 0, 0, 0, 0, 0, 0, \\frac{1}{9}, \\frac{1}{9}, \\frac{1}{9}, \\frac{1}{9}, \\frac{1}{9}, \\frac{1}{9}, \\frac{5}{18}, \\frac{5}{18}, \\frac{4}{9}, \\frac{4}{9}, \\frac{4}{9}, \\frac{4}{9}, \\frac{4}{9}, \\frac{4}{9}, \\frac{11}{18}, \\frac{11}{18}, \\frac{7}{9}, \\frac{7}{9}, \\frac{7}{9}, \\frac{7}{9}, \\frac{7}{9}, \\frac{7}{9}, \\frac{17}{18}, \\frac{17}{18}$ \\\\\n\\hline\n$36^{ B}_{ 1}$ & $36$ & $1\\times 36$ &\\tiny $0, \\frac{1}{2}, 0, \\frac{1}{2}, 0, \\frac{1}{2}, 0, 0, \\frac{1}{8}, \\frac{1}{8}, \\frac{1}{8}, \\frac{1}{8}, \\frac{1}{8}, \\frac{1}{8}, \\frac{1}{8}, \\frac{1}{8}, \\frac{1}{8}, \\frac{1}{8}, \\frac{1}{6}, \\frac{1}{6}, \\frac{1}{3}, \\frac{1}{3}, \\frac{11}{24}, \\frac{11}{24}, \\frac{11}{24}, \\frac{11}{24}, \\frac{1}{2}, \\frac{1}{2}, \\frac{2}{3}, \\frac{2}{3}, \\frac{19}{24}, \\frac{19}{24}, \\frac{19}{24}, \\frac{19}{24}, \\frac{5}{6}, \\frac{5}{6}$ \\\\\n$36^{ B}_{ 1}$ & $36$ & $1\\times 36$ &\\tiny $0, \\frac{1}{2}, 0, \\frac{1}{2}, 0, \\frac{1}{2}, \\frac{1}{72}, \\frac{1}{72}, \\frac{1}{72}, \\frac{1}{72}, \\frac{1}{18}, \\frac{1}{18}, \\frac{1}{8}, \\frac{1}{8}, \\frac{1}{8}, \\frac{1}{8}, \\frac{1}{8}, \\frac{1}{8}, \\frac{2}{9}, \\frac{2}{9}, \\frac{25}{72}, 
\\frac{25}{72}, \\frac{25}{72}, \\frac{25}{72}, \\frac{7}{18}, \\frac{7}{18}, \\frac{5}{9}, \\frac{5}{9}, \\frac{49}{72}, \\frac{49}{72}, \\frac{49}{72}, \\frac{49}{72}, \\frac{13}{18}, \\frac{13}{18}, \\frac{8}{9}, \\frac{8}{9}$ \\\\\n$36^{ B}_{ 1}$ & $36$ & $1\\times 36$ &\\tiny $0, \\frac{1}{2}, 0, \\frac{1}{2}, 0, \\frac{1}{2}, \\frac{1}{9}, \\frac{1}{9}, \\frac{1}{8}, \\frac{1}{8}, \\frac{1}{8}, \\frac{1}{8}, \\frac{1}{8}, \\frac{1}{8}, \\frac{17}{72}, \\frac{17}{72}, \\frac{17}{72}, \\frac{17}{72}, \\frac{5}{18}, \\frac{5}{18}, \\frac{4}{9}, \\frac{4}{9}, \\frac{41}{72}, \\frac{41}{72}, \\frac{41}{72}, \\frac{41}{72}, \\frac{11}{18}, \\frac{11}{18}, \\frac{7}{9}, \\frac{7}{9}, \\frac{65}{72}, \\frac{65}{72}, \\frac{65}{72}, \\frac{65}{72}, \\frac{17}{18}, \\frac{17}{18}$ \\\\\n$36^{ B}_{ 2}$ & $36$ & $1\\times 36$ &\\tiny $0, \\frac{1}{2}, 0, \\frac{1}{2}, 0, \\frac{1}{2}, 0, 0, \\frac{1}{6}, \\frac{1}{6}, \\frac{1}{4}, \\frac{1}{4}, \\frac{1}{4}, \\frac{1}{4}, \\frac{1}{4}, \\frac{1}{4}, \\frac{1}{4}, \\frac{1}{4}, \\frac{1}{4}, \\frac{1}{4}, \\frac{1}{3}, \\frac{1}{3}, \\frac{1}{2}, \\frac{1}{2}, \\frac{7}{12}, \\frac{7}{12}, \\frac{7}{12}, \\frac{7}{12}, \\frac{2}{3}, \\frac{2}{3}, \\frac{5}{6}, \\frac{5}{6}, \\frac{11}{12}, \\frac{11}{12}, \\frac{11}{12}, \\frac{11}{12}$ \\\\\n$36^{ B}_{ 2}$ & $36$ & $1\\times 36$ &\\tiny $0, \\frac{1}{2}, 0, \\frac{1}{2}, 0, \\frac{1}{2}, \\frac{1}{36}, \\frac{1}{36}, \\frac{1}{36}, \\frac{1}{36}, \\frac{1}{9}, \\frac{1}{9}, \\frac{1}{4}, \\frac{1}{4}, \\frac{1}{4}, \\frac{1}{4}, \\frac{1}{4}, \\frac{1}{4}, \\frac{5}{18}, \\frac{5}{18}, \\frac{13}{36}, \\frac{13}{36}, \\frac{13}{36}, \\frac{13}{36}, \\frac{4}{9}, \\frac{4}{9}, \\frac{11}{18}, \\frac{11}{18}, \\frac{25}{36}, \\frac{25}{36}, \\frac{25}{36}, \\frac{25}{36}, \\frac{7}{9}, \\frac{7}{9}, \\frac{17}{18}, \\frac{17}{18}$ \\\\\n$36^{ B}_{ 2}$ & $36$ & $1\\times 36$ &\\tiny $0, \\frac{1}{2}, 0, \\frac{1}{2}, 0, \\frac{1}{2}, \\frac{1}{18}, \\frac{1}{18}, 
\\frac{5}{36}, \\frac{5}{36}, \\frac{5}{36}, \\frac{5}{36}, \\frac{2}{9}, \\frac{2}{9}, \\frac{1}{4}, \\frac{1}{4}, \\frac{1}{4}, \\frac{1}{4}, \\frac{1}{4}, \\frac{1}{4}, \\frac{7}{18}, \\frac{7}{18}, \\frac{17}{36}, \\frac{17}{36}, \\frac{17}{36}, \\frac{17}{36}, \\frac{5}{9}, \\frac{5}{9}, \\frac{13}{18}, \\frac{13}{18}, \\frac{29}{36}, \\frac{29}{36}, \\frac{29}{36}, \\frac{29}{36}, \\frac{8}{9}, \\frac{8}{9}$ \\\\\n$36^{ B}_{ 3}$ & $36$ & $1\\times 36$ &\\tiny $0, \\frac{1}{2}, 0, \\frac{1}{2}, 0, \\frac{1}{2}, 0, 0, \\frac{1}{24}, \\frac{1}{24}, \\frac{1}{24}, \\frac{1}{24}, \\frac{1}{6}, \\frac{1}{6}, \\frac{1}{3}, \\frac{1}{3}, \\frac{3}{8}, \\frac{3}{8}, \\frac{3}{8}, \\frac{3}{8}, \\frac{3}{8}, \\frac{3}{8}, \\frac{3}{8}, \\frac{3}{8}, \\frac{3}{8}, \\frac{3}{8}, \\frac{1}{2}, \\frac{1}{2}, \\frac{2}{3}, \\frac{2}{3}, \\frac{17}{24}, \\frac{17}{24}, \\frac{17}{24}, \\frac{17}{24}, \\frac{5}{6}, \\frac{5}{6}$ \\\\\n$36^{ B}_{ 3}$ & $36$ & $1\\times 36$ &\\tiny $0, \\frac{1}{2}, 0, \\frac{1}{2}, 0, \\frac{1}{2}, \\frac{1}{18}, \\frac{1}{18}, \\frac{2}{9}, \\frac{2}{9}, \\frac{19}{72}, \\frac{19}{72}, \\frac{19}{72}, \\frac{19}{72}, \\frac{3}{8}, \\frac{3}{8}, \\frac{3}{8}, \\frac{3}{8}, \\frac{3}{8}, \\frac{3}{8}, \\frac{7}{18}, \\frac{7}{18}, \\frac{5}{9}, \\frac{5}{9}, \\frac{43}{72}, \\frac{43}{72}, \\frac{43}{72}, \\frac{43}{72}, \\frac{13}{18}, \\frac{13}{18}, \\frac{8}{9}, \\frac{8}{9}, \\frac{67}{72}, \\frac{67}{72}, \\frac{67}{72}, \\frac{67}{72}$ \\\\\n$36^{ B}_{ 3}$ & $36$ & $1\\times 36$ &\\tiny $0, \\frac{1}{2}, 0, \\frac{1}{2}, 0, \\frac{1}{2}, \\frac{1}{9}, \\frac{1}{9}, \\frac{11}{72}, \\frac{11}{72}, \\frac{11}{72}, \\frac{11}{72}, \\frac{5}{18}, \\frac{5}{18}, \\frac{3}{8}, \\frac{3}{8}, \\frac{3}{8}, \\frac{3}{8}, \\frac{3}{8}, \\frac{3}{8}, \\frac{4}{9}, \\frac{4}{9}, \\frac{35}{72}, \\frac{35}{72}, \\frac{35}{72}, \\frac{35}{72}, \\frac{11}{18}, \\frac{11}{18}, \\frac{7}{9}, \\frac{7}{9}, \\frac{59}{72}, \\frac{59}{72}, \\frac{59}{72}, 
\\frac{59}{72}, \\frac{17}{18}, \\frac{17}{18}$ \\\\\n$36^{ B}_{ 4}$ & $36$ & $1\\times 36$ &\\tiny $0, \\frac{1}{2}, 0, \\frac{1}{2}, 0, \\frac{1}{2}, 0, 0, \\frac{1}{6}, \\frac{1}{6}, \\frac{1}{6}, \\frac{1}{6}, \\frac{1}{6}, \\frac{1}{6}, \\frac{1}{3}, \\frac{1}{3}, \\frac{1}{2}, \\frac{1}{2}, \\frac{1}{2}, \\frac{1}{2}, \\frac{1}{2}, \\frac{1}{2}, \\frac{1}{2}, \\frac{1}{2}, \\frac{1}{2}, \\frac{1}{2}, \\frac{1}{2}, \\frac{1}{2}, \\frac{2}{3}, \\frac{2}{3}, \\frac{5}{6}, \\frac{5}{6}, \\frac{5}{6}, \\frac{5}{6}, \\frac{5}{6}, \\frac{5}{6}$ \\\\\n$36^{ B}_{ 4}$ & $36$ & $1\\times 36$ &\\tiny $0, \\frac{1}{2}, 0, \\frac{1}{2}, 0, \\frac{1}{2}, \\frac{1}{18}, \\frac{1}{18}, \\frac{1}{18}, \\frac{1}{18}, \\frac{1}{18}, \\frac{1}{18}, \\frac{2}{9}, \\frac{2}{9}, \\frac{7}{18}, \\frac{7}{18}, \\frac{7}{18}, \\frac{7}{18}, \\frac{7}{18}, \\frac{7}{18}, \\frac{1}{2}, \\frac{1}{2}, \\frac{1}{2}, \\frac{1}{2}, \\frac{1}{2}, \\frac{1}{2}, \\frac{5}{9}, \\frac{5}{9}, \\frac{13}{18}, \\frac{13}{18}, \\frac{13}{18}, \\frac{13}{18}, \\frac{13}{18}, \\frac{13}{18}, \\frac{8}{9}, \\frac{8}{9}$ \\\\\n$36^{ B}_{ 4}$ & $36$ & $1\\times 36$ &\\tiny $0, \\frac{1}{2}, 0, \\frac{1}{2}, 0, \\frac{1}{2}, \\frac{1}{9}, \\frac{1}{9}, \\frac{5}{18}, \\frac{5}{18}, \\frac{5}{18}, \\frac{5}{18}, \\frac{5}{18}, \\frac{5}{18}, \\frac{4}{9}, \\frac{4}{9}, \\frac{1}{2}, \\frac{1}{2}, \\frac{1}{2}, \\frac{1}{2}, \\frac{1}{2}, \\frac{1}{2}, \\frac{11}{18}, \\frac{11}{18}, \\frac{11}{18}, \\frac{11}{18}, \\frac{11}{18}, \\frac{11}{18}, \\frac{7}{9}, \\frac{7}{9}, \\frac{17}{18}, \\frac{17}{18}, \\frac{17}{18}, \\frac{17}{18}, \\frac{17}{18}, \\frac{17}{18}$ \\\\\n$36^{ B}_{-3}$ & $36$ & $1\\times 36$ &\\tiny $0, \\frac{1}{2}, 0, \\frac{1}{2}, 0, \\frac{1}{2}, 0, 0, \\frac{1}{6}, \\frac{1}{6}, \\frac{7}{24}, \\frac{7}{24}, \\frac{7}{24}, \\frac{7}{24}, \\frac{1}{3}, \\frac{1}{3}, \\frac{1}{2}, \\frac{1}{2}, \\frac{5}{8}, \\frac{5}{8}, \\frac{5}{8}, \\frac{5}{8}, \\frac{5}{8}, \\frac{5}{8}, 
\\frac{5}{8}, \\frac{5}{8}, \\frac{5}{8}, \\frac{5}{8}, \\frac{2}{3}, \\frac{2}{3}, \\frac{5}{6}, \\frac{5}{6}, \\frac{23}{24}, \\frac{23}{24}, \\frac{23}{24}, \\frac{23}{24}$ \\\\\n$36^{ B}_{-3}$ & $36$ & $1\\times 36$ &\\tiny $0, \\frac{1}{2}, 0, \\frac{1}{2}, 0, \\frac{1}{2}, \\frac{1}{18}, \\frac{1}{18}, \\frac{13}{72}, \\frac{13}{72}, \\frac{13}{72}, \\frac{13}{72}, \\frac{2}{9}, \\frac{2}{9}, \\frac{7}{18}, \\frac{7}{18}, \\frac{37}{72}, \\frac{37}{72}, \\frac{37}{72}, \\frac{37}{72}, \\frac{5}{9}, \\frac{5}{9}, \\frac{5}{8}, \\frac{5}{8}, \\frac{5}{8}, \\frac{5}{8}, \\frac{5}{8}, \\frac{5}{8}, \\frac{13}{18}, \\frac{13}{18}, \\frac{61}{72}, \\frac{61}{72}, \\frac{61}{72}, \\frac{61}{72}, \\frac{8}{9}, \\frac{8}{9}$ \\\\\n$36^{ B}_{-3}$ & $36$ & $1\\times 36$ &\\tiny $0, \\frac{1}{2}, 0, \\frac{1}{2}, 0, \\frac{1}{2}, \\frac{5}{72}, \\frac{5}{72}, \\frac{5}{72}, \\frac{5}{72}, \\frac{1}{9}, \\frac{1}{9}, \\frac{5}{18}, \\frac{5}{18}, \\frac{29}{72}, \\frac{29}{72}, \\frac{29}{72}, \\frac{29}{72}, \\frac{4}{9}, \\frac{4}{9}, \\frac{11}{18}, \\frac{11}{18}, \\frac{5}{8}, \\frac{5}{8}, \\frac{5}{8}, \\frac{5}{8}, \\frac{5}{8}, \\frac{5}{8}, \\frac{53}{72}, \\frac{53}{72}, \\frac{53}{72}, \\frac{53}{72}, \\frac{7}{9}, \\frac{7}{9}, \\frac{17}{18}, \\frac{17}{18}$ \\\\\n$36^{ B}_{-2}$ & $36$ & $1\\times 36$ &\\tiny $0, \\frac{1}{2}, 0, \\frac{1}{2}, 0, \\frac{1}{2}, 0, 0, \\frac{1}{12}, \\frac{1}{12}, \\frac{1}{12}, \\frac{1}{12}, \\frac{1}{6}, \\frac{1}{6}, \\frac{1}{3}, \\frac{1}{3}, \\frac{5}{12}, \\frac{5}{12}, \\frac{5}{12}, \\frac{5}{12}, \\frac{1}{2}, \\frac{1}{2}, \\frac{2}{3}, \\frac{2}{3}, \\frac{3}{4}, \\frac{3}{4}, \\frac{3}{4}, \\frac{3}{4}, \\frac{3}{4}, \\frac{3}{4}, \\frac{3}{4}, \\frac{3}{4}, \\frac{3}{4}, \\frac{3}{4}, \\frac{5}{6}, \\frac{5}{6}$ \\\\\n$36^{ B}_{-2}$ & $36$ & $1\\times 36$ &\\tiny $0, \\frac{1}{2}, 0, \\frac{1}{2}, 0, \\frac{1}{2}, \\frac{1}{18}, \\frac{1}{18}, \\frac{2}{9}, \\frac{2}{9}, \\frac{11}{36}, \\frac{11}{36}, 
\\frac{11}{36}, \\frac{11}{36}, \\frac{7}{18}, \\frac{7}{18}, \\frac{5}{9}, \\frac{5}{9}, \\frac{23}{36}, \\frac{23}{36}, \\frac{23}{36}, \\frac{23}{36}, \\frac{13}{18}, \\frac{13}{18}, \\frac{3}{4}, \\frac{3}{4}, \\frac{3}{4}, \\frac{3}{4}, \\frac{3}{4}, \\frac{3}{4}, \\frac{8}{9}, \\frac{8}{9}, \\frac{35}{36}, \\frac{35}{36}, \\frac{35}{36}, \\frac{35}{36}$ \\\\\n$36^{ B}_{-2}$ & $36$ & $1\\times 36$ &\\tiny $0, \\frac{1}{2}, 0, \\frac{1}{2}, 0, \\frac{1}{2}, \\frac{1}{9}, \\frac{1}{9}, \\frac{7}{36}, \\frac{7}{36}, \\frac{7}{36}, \\frac{7}{36}, \\frac{5}{18}, \\frac{5}{18}, \\frac{4}{9}, \\frac{4}{9}, \\frac{19}{36}, \\frac{19}{36}, \\frac{19}{36}, \\frac{19}{36}, \\frac{11}{18}, \\frac{11}{18}, \\frac{3}{4}, \\frac{3}{4}, \\frac{3}{4}, \\frac{3}{4}, \\frac{3}{4}, \\frac{3}{4}, \\frac{7}{9}, \\frac{7}{9}, \\frac{31}{36}, \\frac{31}{36}, \\frac{31}{36}, \\frac{31}{36}, \\frac{17}{18}, \\frac{17}{18}$ \\\\\n$36^{ B}_{-1}$ & $36$ & $1\\times 36$ &\\tiny $0, \\frac{1}{2}, 0, \\frac{1}{2}, 0, \\frac{1}{2}, 0, 0, \\frac{1}{6}, \\frac{1}{6}, \\frac{5}{24}, \\frac{5}{24}, \\frac{5}{24}, \\frac{5}{24}, \\frac{1}{3}, \\frac{1}{3}, \\frac{1}{2}, \\frac{1}{2}, \\frac{13}{24}, \\frac{13}{24}, \\frac{13}{24}, \\frac{13}{24}, \\frac{2}{3}, \\frac{2}{3}, \\frac{5}{6}, \\frac{5}{6}, \\frac{7}{8}, \\frac{7}{8}, \\frac{7}{8}, \\frac{7}{8}, \\frac{7}{8}, \\frac{7}{8}, \\frac{7}{8}, \\frac{7}{8}, \\frac{7}{8}, \\frac{7}{8}$ \\\\\n$36^{ B}_{-1}$ & $36$ & $1\\times 36$ &\\tiny $0, \\frac{1}{2}, 0, \\frac{1}{2}, 0, \\frac{1}{2}, \\frac{1}{18}, \\frac{1}{18}, \\frac{7}{72}, \\frac{7}{72}, \\frac{7}{72}, \\frac{7}{72}, \\frac{2}{9}, \\frac{2}{9}, \\frac{7}{18}, \\frac{7}{18}, \\frac{31}{72}, \\frac{31}{72}, \\frac{31}{72}, \\frac{31}{72}, \\frac{5}{9}, \\frac{5}{9}, \\frac{13}{18}, \\frac{13}{18}, \\frac{55}{72}, \\frac{55}{72}, \\frac{55}{72}, \\frac{55}{72}, \\frac{7}{8}, \\frac{7}{8}, \\frac{7}{8}, \\frac{7}{8}, \\frac{7}{8}, \\frac{7}{8}, \\frac{8}{9}, \\frac{8}{9}$ \\\\\n$36^{ 
B}_{-1}$ & $36$ & $1\\times 36$ &\\tiny $0, \\frac{1}{2}, 0, \\frac{1}{2}, 0, \\frac{1}{2}, \\frac{1}{9}, \\frac{1}{9}, \\frac{5}{18}, \\frac{5}{18}, \\frac{23}{72}, \\frac{23}{72}, \\frac{23}{72}, \\frac{23}{72}, \\frac{4}{9}, \\frac{4}{9}, \\frac{11}{18}, \\frac{11}{18}, \\frac{47}{72}, \\frac{47}{72}, \\frac{47}{72}, \\frac{47}{72}, \\frac{7}{9}, \\frac{7}{9}, \\frac{7}{8}, \\frac{7}{8}, \\frac{7}{8}, \\frac{7}{8}, \\frac{7}{8}, \\frac{7}{8}, \\frac{17}{18}, \\frac{17}{18}, \\frac{71}{72}, \\frac{71}{72}, \\frac{71}{72}, \\frac{71}{72}$ \\\\\n\\hline\n$27^{ B}_{ 1\/2}$ & $36$ &\\tiny $1\\times 18 , \\zeta_2^1\\times 9$ & $0, \\frac{1}{2}, 0, \\frac{1}{2}, 0, \\frac{1}{2}, 0, 0, \\frac{1}{6}, \\frac{1}{6}, \\frac{1}{3}, \\frac{1}{3}, \\frac{1}{2}, \\frac{1}{2}, \\frac{2}{3}, \\frac{2}{3}, \\frac{5}{6}, \\frac{5}{6}, \\frac{1}{16}, \\frac{1}{16}, \\frac{1}{16}, \\frac{1}{16}, \\frac{1}{16}, \\frac{19}{48}, \\frac{19}{48}, \\frac{35}{48}, \\frac{35}{48}$ \\\\\n$27^{ B}_{ 1\/2}$ & $36$ &\\tiny $1\\times 18 , \\zeta_2^1\\times 9$ & $0, \\frac{1}{2}, 0, \\frac{1}{2}, 0, \\frac{1}{2}, \\frac{1}{18}, \\frac{1}{18}, \\frac{2}{9}, \\frac{2}{9}, \\frac{7}{18}, \\frac{7}{18}, \\frac{5}{9}, \\frac{5}{9}, \\frac{13}{18}, \\frac{13}{18}, \\frac{8}{9}, \\frac{8}{9}, \\frac{1}{16}, \\frac{1}{16}, \\frac{1}{16}, \\frac{41}{144}, \\frac{41}{144}, \\frac{89}{144}, \\frac{89}{144}, \\frac{137}{144}, \\frac{137}{144}$ \\\\\n$27^{ B}_{ 1\/2}$ & $36$ &\\tiny $1\\times 18 , \\zeta_2^1\\times 9$ & $0, \\frac{1}{2}, 0, \\frac{1}{2}, 0, \\frac{1}{2}, \\frac{1}{9}, \\frac{1}{9}, \\frac{5}{18}, \\frac{5}{18}, \\frac{4}{9}, \\frac{4}{9}, \\frac{11}{18}, \\frac{11}{18}, \\frac{7}{9}, \\frac{7}{9}, \\frac{17}{18}, \\frac{17}{18}, \\frac{1}{16}, \\frac{1}{16}, \\frac{1}{16}, \\frac{25}{144}, \\frac{25}{144}, \\frac{73}{144}, \\frac{73}{144}, \\frac{121}{144}, \\frac{121}{144}$ \\\\\n$27^{ B}_{ 3\/2}$ & $36$ &\\tiny $1\\times 18 , \\zeta_2^1\\times 9$ & $0, \\frac{1}{2}, 0, \\frac{1}{2}, 0, 
\\frac{1}{2}, 0, 0, \\frac{1}{6}, \\frac{1}{6}, \\frac{1}{3}, \\frac{1}{3}, \\frac{1}{2}, \\frac{1}{2}, \\frac{2}{3}, \\frac{2}{3}, \\frac{5}{6}, \\frac{5}{6}, \\frac{3}{16}, \\frac{3}{16}, \\frac{3}{16}, \\frac{3}{16}, \\frac{3}{16}, \\frac{25}{48}, \\frac{25}{48}, \\frac{41}{48}, \\frac{41}{48}$ \\\\\n$27^{ B}_{ 3\/2}$ & $36$ &\\tiny $1\\times 18 , \\zeta_2^1\\times 9$ & $0, \\frac{1}{2}, 0, \\frac{1}{2}, 0, \\frac{1}{2}, \\frac{1}{18}, \\frac{1}{18}, \\frac{2}{9}, \\frac{2}{9}, \\frac{7}{18}, \\frac{7}{18}, \\frac{5}{9}, \\frac{5}{9}, \\frac{13}{18}, \\frac{13}{18}, \\frac{8}{9}, \\frac{8}{9}, \\frac{11}{144}, \\frac{11}{144}, \\frac{3}{16}, \\frac{3}{16}, \\frac{3}{16}, \\frac{59}{144}, \\frac{59}{144}, \\frac{107}{144}, \\frac{107}{144}$ \\\\\n$27^{ B}_{ 3\/2}$ & $36$ &\\tiny $1\\times 18 , \\zeta_2^1\\times 9$ & $0, \\frac{1}{2}, 0, \\frac{1}{2}, 0, \\frac{1}{2}, \\frac{1}{9}, \\frac{1}{9}, \\frac{5}{18}, \\frac{5}{18}, \\frac{4}{9}, \\frac{4}{9}, \\frac{11}{18}, \\frac{11}{18}, \\frac{7}{9}, \\frac{7}{9}, \\frac{17}{18}, \\frac{17}{18}, \\frac{3}{16}, \\frac{3}{16}, \\frac{3}{16}, \\frac{43}{144}, \\frac{43}{144}, \\frac{91}{144}, \\frac{91}{144}, \\frac{139}{144}, \\frac{139}{144}$ \\\\\n$27^{ B}_{ 5\/2}$ & $36$ &\\tiny $1\\times 18 , \\zeta_2^1\\times 9$ & $0, \\frac{1}{2}, 0, \\frac{1}{2}, 0, \\frac{1}{2}, 0, 0, \\frac{1}{6}, \\frac{1}{6}, \\frac{1}{3}, \\frac{1}{3}, \\frac{1}{2}, \\frac{1}{2}, \\frac{2}{3}, \\frac{2}{3}, \\frac{5}{6}, \\frac{5}{6}, \\frac{5}{16}, \\frac{5}{16}, \\frac{5}{16}, \\frac{5}{16}, \\frac{5}{16}, \\frac{31}{48}, \\frac{31}{48}, \\frac{47}{48}, \\frac{47}{48}$ \\\\\n$27^{ B}_{ 5\/2}$ & $36$ &\\tiny $1\\times 18 , \\zeta_2^1\\times 9$ & $0, \\frac{1}{2}, 0, \\frac{1}{2}, 0, \\frac{1}{2}, \\frac{1}{18}, \\frac{1}{18}, \\frac{2}{9}, \\frac{2}{9}, \\frac{7}{18}, \\frac{7}{18}, \\frac{5}{9}, \\frac{5}{9}, \\frac{13}{18}, \\frac{13}{18}, \\frac{8}{9}, \\frac{8}{9}, \\frac{29}{144}, \\frac{29}{144}, \\frac{5}{16}, \\frac{5}{16}, 
\\frac{5}{16}, \\frac{77}{144}, \\frac{77}{144}, \\frac{125}{144}, \\frac{125}{144}$ \\\\\n$27^{ B}_{ 5\/2}$ & $36$ &\\tiny $1\\times 18 , \\zeta_2^1\\times 9$ & $0, \\frac{1}{2}, 0, \\frac{1}{2}, 0, \\frac{1}{2}, \\frac{1}{9}, \\frac{1}{9}, \\frac{5}{18}, \\frac{5}{18}, \\frac{4}{9}, \\frac{4}{9}, \\frac{11}{18}, \\frac{11}{18}, \\frac{7}{9}, \\frac{7}{9}, \\frac{17}{18}, \\frac{17}{18}, \\frac{13}{144}, \\frac{13}{144}, \\frac{5}{16}, \\frac{5}{16}, \\frac{5}{16}, \\frac{61}{144}, \\frac{61}{144}, \\frac{109}{144}, \\frac{109}{144}$ \\\\\n$27^{ B}_{ 7\/2}$ & $36$ &\\tiny $1\\times 18 , \\zeta_2^1\\times 9$ & $0, \\frac{1}{2}, 0, \\frac{1}{2}, 0, \\frac{1}{2}, 0, 0, \\frac{1}{6}, \\frac{1}{6}, \\frac{1}{3}, \\frac{1}{3}, \\frac{1}{2}, \\frac{1}{2}, \\frac{2}{3}, \\frac{2}{3}, \\frac{5}{6}, \\frac{5}{6}, \\frac{5}{48}, \\frac{5}{48}, \\frac{7}{16}, \\frac{7}{16}, \\frac{7}{16}, \\frac{7}{16}, \\frac{7}{16}, \\frac{37}{48}, \\frac{37}{48}$ \\\\\n$27^{ B}_{ 7\/2}$ & $36$ &\\tiny $1\\times 18 , \\zeta_2^1\\times 9$ & $0, \\frac{1}{2}, 0, \\frac{1}{2}, 0, \\frac{1}{2}, \\frac{1}{18}, \\frac{1}{18}, \\frac{2}{9}, \\frac{2}{9}, \\frac{7}{18}, \\frac{7}{18}, \\frac{5}{9}, \\frac{5}{9}, \\frac{13}{18}, \\frac{13}{18}, \\frac{8}{9}, \\frac{8}{9}, \\frac{47}{144}, \\frac{47}{144}, \\frac{7}{16}, \\frac{7}{16}, \\frac{7}{16}, \\frac{95}{144}, \\frac{95}{144}, \\frac{143}{144}, \\frac{143}{144}$ \\\\\n$27^{ B}_{ 7\/2}$ & $36$ &\\tiny $1\\times 18 , \\zeta_2^1\\times 9$ & $0, \\frac{1}{2}, 0, \\frac{1}{2}, 0, \\frac{1}{2}, \\frac{1}{9}, \\frac{1}{9}, \\frac{5}{18}, \\frac{5}{18}, \\frac{4}{9}, \\frac{4}{9}, \\frac{11}{18}, \\frac{11}{18}, \\frac{7}{9}, \\frac{7}{9}, \\frac{17}{18}, \\frac{17}{18}, \\frac{31}{144}, \\frac{31}{144}, \\frac{7}{16}, \\frac{7}{16}, \\frac{7}{16}, \\frac{79}{144}, \\frac{79}{144}, \\frac{127}{144}, \\frac{127}{144}$ \\\\\n$27^{ B}_{-7\/2}$ & $36$ &\\tiny $1\\times 18 , \\zeta_2^1\\times 9$ & $0, \\frac{1}{2}, 0, \\frac{1}{2}, 0, \\frac{1}{2}, 0, 0, 
\\frac{1}{6}, \\frac{1}{6}, \\frac{1}{3}, \\frac{1}{3}, \\frac{1}{2}, \\frac{1}{2}, \\frac{2}{3}, \\frac{2}{3}, \\frac{5}{6}, \\frac{5}{6}, \\frac{11}{48}, \\frac{11}{48}, \\frac{9}{16}, \\frac{9}{16}, \\frac{9}{16}, \\frac{9}{16}, \\frac{9}{16}, \\frac{43}{48}, \\frac{43}{48}$ \\\\\n$27^{ B}_{-7\/2}$ & $36$ &\\tiny $1\\times 18 , \\zeta_2^1\\times 9$ & $0, \\frac{1}{2}, 0, \\frac{1}{2}, 0, \\frac{1}{2}, \\frac{1}{18}, \\frac{1}{18}, \\frac{2}{9}, \\frac{2}{9}, \\frac{7}{18}, \\frac{7}{18}, \\frac{5}{9}, \\frac{5}{9}, \\frac{13}{18}, \\frac{13}{18}, \\frac{8}{9}, \\frac{8}{9}, \\frac{17}{144}, \\frac{17}{144}, \\frac{65}{144}, \\frac{65}{144}, \\frac{9}{16}, \\frac{9}{16}, \\frac{9}{16}, \\frac{113}{144}, \\frac{113}{144}$ \\\\\n$27^{ B}_{-7\/2}$ & $36$ &\\tiny $1\\times 18 , \\zeta_2^1\\times 9$ & $0, \\frac{1}{2}, 0, \\frac{1}{2}, 0, \\frac{1}{2}, \\frac{1}{9}, \\frac{1}{9}, \\frac{5}{18}, \\frac{5}{18}, \\frac{4}{9}, \\frac{4}{9}, \\frac{11}{18}, \\frac{11}{18}, \\frac{7}{9}, \\frac{7}{9}, \\frac{17}{18}, \\frac{17}{18}, \\frac{1}{144}, \\frac{1}{144}, \\frac{49}{144}, \\frac{49}{144}, \\frac{9}{16}, \\frac{9}{16}, \\frac{9}{16}, \\frac{97}{144}, \\frac{97}{144}$ \\\\\n$27^{ B}_{-5\/2}$ & $36$ &\\tiny $1\\times 18 , \\zeta_2^1\\times 9$ & $0, \\frac{1}{2}, 0, \\frac{1}{2}, 0, \\frac{1}{2}, 0, 0, \\frac{1}{6}, \\frac{1}{6}, \\frac{1}{3}, \\frac{1}{3}, \\frac{1}{2}, \\frac{1}{2}, \\frac{2}{3}, \\frac{2}{3}, \\frac{5}{6}, \\frac{5}{6}, \\frac{1}{48}, \\frac{1}{48}, \\frac{17}{48}, \\frac{17}{48}, \\frac{11}{16}, \\frac{11}{16}, \\frac{11}{16}, \\frac{11}{16}, \\frac{11}{16}$ \\\\\n$27^{ B}_{-5\/2}$ & $36$ &\\tiny $1\\times 18 , \\zeta_2^1\\times 9$ & $0, \\frac{1}{2}, 0, \\frac{1}{2}, 0, \\frac{1}{2}, \\frac{1}{18}, \\frac{1}{18}, \\frac{2}{9}, \\frac{2}{9}, \\frac{7}{18}, \\frac{7}{18}, \\frac{5}{9}, \\frac{5}{9}, \\frac{13}{18}, \\frac{13}{18}, \\frac{8}{9}, \\frac{8}{9}, \\frac{35}{144}, \\frac{35}{144}, \\frac{83}{144}, \\frac{83}{144}, \\frac{11}{16}, 
\\frac{11}{16}, \\frac{11}{16}, \\frac{131}{144}, \\frac{131}{144}$ \\\\\n$27^{ B}_{-5\/2}$ & $36$ &\\tiny $1\\times 18 , \\zeta_2^1\\times 9$ & $0, \\frac{1}{2}, 0, \\frac{1}{2}, 0, \\frac{1}{2}, \\frac{1}{9}, \\frac{1}{9}, \\frac{5}{18}, \\frac{5}{18}, \\frac{4}{9}, \\frac{4}{9}, \\frac{11}{18}, \\frac{11}{18}, \\frac{7}{9}, \\frac{7}{9}, \\frac{17}{18}, \\frac{17}{18}, \\frac{19}{144}, \\frac{19}{144}, \\frac{67}{144}, \\frac{67}{144}, \\frac{11}{16}, \\frac{11}{16}, \\frac{11}{16}, \\frac{115}{144}, \\frac{115}{144}$ \\\\\n$27^{ B}_{-3\/2}$ & $36$ &\\tiny $1\\times 18 , \\zeta_2^1\\times 9$ & $0, \\frac{1}{2}, 0, \\frac{1}{2}, 0, \\frac{1}{2}, 0, 0, \\frac{1}{6}, \\frac{1}{6}, \\frac{1}{3}, \\frac{1}{3}, \\frac{1}{2}, \\frac{1}{2}, \\frac{2}{3}, \\frac{2}{3}, \\frac{5}{6}, \\frac{5}{6}, \\frac{7}{48}, \\frac{7}{48}, \\frac{23}{48}, \\frac{23}{48}, \\frac{13}{16}, \\frac{13}{16}, \\frac{13}{16}, \\frac{13}{16}, \\frac{13}{16}$ \\\\\n$27^{ B}_{-3\/2}$ & $36$ &\\tiny $1\\times 18 , \\zeta_2^1\\times 9$ & $0, \\frac{1}{2}, 0, \\frac{1}{2}, 0, \\frac{1}{2}, \\frac{1}{18}, \\frac{1}{18}, \\frac{2}{9}, \\frac{2}{9}, \\frac{7}{18}, \\frac{7}{18}, \\frac{5}{9}, \\frac{5}{9}, \\frac{13}{18}, \\frac{13}{18}, \\frac{8}{9}, \\frac{8}{9}, \\frac{5}{144}, \\frac{5}{144}, \\frac{53}{144}, \\frac{53}{144}, \\frac{101}{144}, \\frac{101}{144}, \\frac{13}{16}, \\frac{13}{16}, \\frac{13}{16}$ \\\\\n$27^{ B}_{-3\/2}$ & $36$ &\\tiny $1\\times 18 , \\zeta_2^1\\times 9$ & $0, \\frac{1}{2}, 0, \\frac{1}{2}, 0, \\frac{1}{2}, \\frac{1}{9}, \\frac{1}{9}, \\frac{5}{18}, \\frac{5}{18}, \\frac{4}{9}, \\frac{4}{9}, \\frac{11}{18}, \\frac{11}{18}, \\frac{7}{9}, \\frac{7}{9}, \\frac{17}{18}, \\frac{17}{18}, \\frac{37}{144}, \\frac{37}{144}, \\frac{85}{144}, \\frac{85}{144}, \\frac{13}{16}, \\frac{13}{16}, \\frac{13}{16}, \\frac{133}{144}, \\frac{133}{144}$ \\\\\n$27^{ B}_{-1\/2}$ & $36$ &\\tiny $1\\times 18 , \\zeta_2^1\\times 9$ & $0, \\frac{1}{2}, 0, \\frac{1}{2}, 0, \\frac{1}{2}, 0, 0, 
\\frac{1}{6}, \\frac{1}{6}, \\frac{1}{3}, \\frac{1}{3}, \\frac{1}{2}, \\frac{1}{2}, \\frac{2}{3}, \\frac{2}{3}, \\frac{5}{6}, \\frac{5}{6}, \\frac{13}{48}, \\frac{13}{48}, \\frac{29}{48}, \\frac{29}{48}, \\frac{15}{16}, \\frac{15}{16}, \\frac{15}{16}, \\frac{15}{16}, \\frac{15}{16}$ \\\\\n$27^{ B}_{-1\/2}$ & $36$ &\\tiny $1\\times 18 , \\zeta_2^1\\times 9$ & $0, \\frac{1}{2}, 0, \\frac{1}{2}, 0, \\frac{1}{2}, \\frac{1}{18}, \\frac{1}{18}, \\frac{2}{9}, \\frac{2}{9}, \\frac{7}{18}, \\frac{7}{18}, \\frac{5}{9}, \\frac{5}{9}, \\frac{13}{18}, \\frac{13}{18}, \\frac{8}{9}, \\frac{8}{9}, \\frac{23}{144}, \\frac{23}{144}, \\frac{71}{144}, \\frac{71}{144}, \\frac{119}{144}, \\frac{119}{144}, \\frac{15}{16}, \\frac{15}{16}, \\frac{15}{16}$ \\\\\n$27^{ B}_{-1\/2}$ & $36$ &\\tiny $1\\times 18 , \\zeta_2^1\\times 9$ & $0, \\frac{1}{2}, 0, \\frac{1}{2}, 0, \\frac{1}{2}, \\frac{1}{9}, \\frac{1}{9}, \\frac{5}{18}, \\frac{5}{18}, \\frac{4}{9}, \\frac{4}{9}, \\frac{11}{18}, \\frac{11}{18}, \\frac{7}{9}, \\frac{7}{9}, \\frac{17}{18}, \\frac{17}{18}, \\frac{7}{144}, \\frac{7}{144}, \\frac{55}{144}, \\frac{55}{144}, \\frac{103}{144}, \\frac{103}{144}, \\frac{15}{16}, \\frac{15}{16}, \\frac{15}{16}$ \\\\\n\\hline\n\\end{tabular}\n\\end{table*} \n\n\\vfill\n\\clearpage\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\n\n\n\\section{Introduction}\n\\input{introduction.tex}\n\n\\section{Preliminaries} \\label{sec:preliminary}\n\\input{preliminary.tex}\n\n\n\n\\section{Problem Formulation} \\label{sec:problem}\n\\input{problem.tex}\n\n\n\n\n\\section{Formula Synthesis Method}\\label{sec:formula_synthesis}\n\\input{synthesis.tex}\n\n\n\n\\section{Case Study}\\label{sec:case_studies}\n\\input{case.tex}\n\n\n\\section{Conclusion}\\label{sec:conclusion}\n\\input{conclusion.tex}\n\n\n\n\n\\subsection{Signals}\nWe define an n-dimensional discrete signal $\\mathbf{x}$ as a mapping from time domain $\\mathbb{N}^+$ to the real numbers $\\mathbb{R}^n$. 
A finite signal of length $K+1$ is written as a sequence $\\mathbf{x} = x_0, x_1, \\ldots, x_K$. We use $x_t^i$ to denote the projection of the state on the $i$th dimension at time $t$.\n\n\nA dataset of labeled signals is defined as \n\n\\begin{align}\\label{eq:dataset}\n \\mathcal{D} = \\{ (\\mathbf{x},\\mathbf{l}) \\mid \\mathbf{x} = & x_0, x_1, \\ldots,x_K, \\\\\n \\mathbf{l} = & l_0, l_1, \\ldots, l_K, \\text{ and } \\nonumber \\\\% K \\in \\mathbb{N} \\nonumber \\\\ \n x_t \\in &\\mathbb{R}^n, l_t \\in \\{0,1\\}, t=0,\\ldots,K\\}, \\nonumber\n\\end{align}\nwhere $l_t = 1$ means that at time $t$ an event of interest occurs on signal $\\mathbf{x}$.\n\n\n\\subsection{Past Time Signal Temporal Logic}\nA Past Time Signal Temporal Logic (ptSTL) formula is defined with the grammar:\n\\begin{align}\n\\phi = \\mathbf{T} | x^i\\sim c | \\neg\\phi | \\phi_1 \\wedge \\phi_2 | \\phi_1 \\vee \\phi_2 | \\phi_1 \\mathbf{S}_{[a,b]} \\phi_2 | \\mathbf{P}_{[a,b]} \\phi | \\mathbf{A}_{[a,b]} \\phi \\label{eq:ptstl} \n\\end{align}\nwhere $x^i$ is a signal variable, $\\sim \\in \\{>,<\\}$, $c$ is a constant, $\\mathbf{T}$ is the Boolean constant $true$, $\\neg, \\wedge$ and $\\vee$ are the standard Boolean operators, and $\\mathbf{S}_{[a,b]}$ $(since)$, $\\mathbf{P}_{[a,b]}$ $(previously)$, and $\\mathbf{A}_{[a,b]}$ $(always)$ are the temporal operators with time interval $[a,b]$. The semantics of a ptSTL formula are defined over a signal at a given time point. \n\nInformally, for signal $\\mathbf{x}$, at time $t$, formula $\\mathbf{P}_{[a,b]} \\phi $ is satisfied if $\\phi$ holds at some time in $[t-b,t-a]$, formula $\\mathbf{A}_{[a,b]} \\phi$ is satisfied if $\\phi$ holds everywhere in $[t-b,t-a]$, and $\\phi_1 \\mathbf{S}_{[a,b]} \\phi_2$ is satisfied if $\\phi_2$ holds at some time $t' \\in [t-b,t-a]$ and $\\phi_1$ holds since then. $(\\mathbf{x},t) \\models \\phi $ denotes that signal $\\mathbf{x}$ satisfies formula $\\phi$ at time $t$. 
Formally, the semantics are given as follows:\n\\begin{align}\n& (\\mathbf{x}, t) \\models \\mathbf{T} & & \\nonumber \\\\\n& (\\mathbf{x}, t) \\models x^i \\sim c &\\text{ iff } & x^i_t \\sim c , \\sim \\in \\{>,<\\}\\nonumber \\\\\n& (\\mathbf{x}, t) \\models \\phi_1 \\wedge \\phi_2 & \\text{ iff } & (\\mathbf{x}, t) \\models \\phi_1 \\text{ and } (\\mathbf{x}, t) \\models \\phi_2 \\nonumber \\\\\n& (\\mathbf{x}, t) \\models \\phi_1 \\vee \\phi_2 & \\text{ iff } & (\\mathbf{x}, t) \\models \\phi_1 \\text{ or } (\\mathbf{x}, t) \\models \\phi_2 \\nonumber \\\\\n& (\\mathbf{x}, t) \\models \\mathbf{P}_{[a,b]} \\phi & \\text{ iff } & \\exists t' \\in I(t,[a,b]), (\\mathbf{x}, t') \\models \\phi \\label{eq:semantics} \\\\\n& (\\mathbf{x}, t) \\models \\mathbf{A}_{[a,b]} \\phi & \\text{ iff } & \\forall t' \\in I(t,[a,b]), (\\mathbf{x}, t') \\models \\phi \\nonumber \\\\\n& (\\mathbf{x}, t) \\models \\phi_1 \\mathbf{S}_{[a,b]} \\phi_2 & \\text{ iff } & \\exists t' \\in I(t,[a,b]), (\\mathbf{x}, t') \\models \\phi_2, \\nonumber \\\\\n& & & \\forall t'' \\in [t', t] (\\mathbf{x}, t'') \\models \\phi_1, \\nonumber \n\\end{align}\n\\[ \\text{ where } I(t,[a,b]) = [t-b, t-a] \\cap [0,t] \\]\n\n\n\nNote that the previously ($\\mathbf{P}$) and always ($\\mathbf{A}$) operators are special cases of the since operator: $\\mathbf{P}_{[a,b]} \\phi := \\mathbf{T}\\ \\mathbf{S}_{[a,b]} \\phi$ and $\\mathbf{A}_{[a,b]} \\phi := \\neg \\mathbf{P}_{[a,b]} \\neg \\phi$. We include them as they are used in the proposed methods.\n\n\n\\textit{Parametric Past Time Signal Temporal Logic} is an extension of ptSTL~\\citep{Asarin:2011}. In a parametric ptSTL formula, parameters can be used in place of the numerical values in time interval bounds and predicates. A parametric formula can be converted to a ptSTL formula by assigning a value to each parameter.\nAs an example, consider the parametric formula $\\phi = \\mathbf{P}_{[p_1,p_2]} x < p_3$ with parameters $p_1, p_2$ and $p_3$. 
ptSTL formula $\\phi (v) = \\mathbf{P}_{[3,5]} x < 10.2$ is obtained with the valuation $v = [3,5,10.2]$.\n \n\\subsection{Monotonicity of Parametric Signal Temporal Logic}\n\n\n\nMonotonicity properties for parametric STL were introduced by~\\cite{miningjournal}. A parametric STL formula $\\phi$ with parameters $[p_1, \\ldots, p_m]$ is \\textit{monotonically increasing} with parameter $p_i$ if (\\ref{eq:mon_inc}) holds along any signal $\\mathbf{x}$. Similarly, it is \\textit{monotonically decreasing} with parameter $p_i$ if (\\ref{eq:mon_dec}) holds.\n\\begin{align}\n&\\text{for all } v,v' \\text{ with } v(p_i) < v'(p_i), v(p_j) = v'(p_j) \\text{ for each } j\\neq i, \\nonumber \\\\\n& \\quad \\quad \\quad (\\mathbf{x},t) \\models \\phi(v) \\implies (\\mathbf{x},t) \\models \\phi(v') \\label{eq:mon_inc} \\\\\n&\\text{for all } v,v' \\text{ with } v(p_i) > v'(p_i), v(p_j) = v'(p_j) \\text{ for each } j\\neq i, \\nonumber \\\\\n& \\quad \\quad \\quad (\\mathbf{x},t) \\models \\phi(v) \\implies (\\mathbf{x},t) \\models \\phi(v') \\label{eq:mon_dec}\n\\end{align}\n\nEssentially, $\\phi$ is monotonically increasing with $p_i$ if its satisfaction cannot change from satisfying to violating when only the value of the parameter $p_i$ is increased. \n\nOur aim in this work is to generate a ptSTL formula that represents the labels in a dataset~\\eqref{eq:dataset}. 
For this purpose, we generate the label sequence $\\mathbf{l}^\\phi= l^\\phi_0, l^\\phi_1, \\ldots, l^\\phi_K$ from a given signal $\\mathbf{x} = x_0, \\ldots, x_K$ using a given ptSTL formula $\\phi$ as follows:\n\\begin{align}\\label{eq:label_set}\n\tl^\\phi_t = &\\begin{cases} 1 \\text{ if } (\\mathbf{x}, t) \\models \\phi \\\\\n\t 0 \\text{ otherwise}\n\t\\end{cases}\n\\end{align}\n\nWe define the number of positive labels $P^{\\#}(\\phi, \\mathbf{x} )$ (\\ref{eq:positive}) and the number of negative labels $N^{\\#}(\\phi, \\mathbf{x} )$ (\\ref{eq:negative}), where $\\mathbf{l}^\\phi$ is generated by evaluating formula $\\phi$ along signal $\\mathbf{x}$ as defined in~\\eqref{eq:label_set}:\n\\begin{align}\\label{eq:positive}\n P^{\\#}(\\phi, \\mathbf{x}) = \\sum_{i=0}^{K} l_i^\\phi\n\n \\end{align}\n\\begin{align}\\label{eq:negative}\n N^{\\#}(\\phi, \\mathbf{x}) = \\sum_{i=0}^{K} \\neg l_i^\\phi\n \\end{align}\n\nAlso note that\n\\begin{equation}\\label{eq:totalformula}\n P^{\\#}(\\phi, \\mathbf{x}) + N^{\\#}(\\phi, \\mathbf{x}) = K + 1\n\\end{equation}\n\n\n\nWe derive monotonicity properties of $P^{\\#}(\\phi, \\cdot)$ for a parametric ptSTL formula $\\phi$ with respect to the monotonicity of $\\phi$. $P^{\\#}(\\phi,\\cdot)$ is monotonically increasing with $p_i$ if and only if the satisfaction value of $\\phi$ is monotonically increasing with $p_i$, i.e., if~\\eqref{eq:mon_inc} holds along any signal $\\mathbf{x}$, then~\\eqref{eq:monoton_inc} holds:\n\\begin{align}\n&\\text{for all } v,v' \\text{ with } v(p_i) < v'(p_i), v(p_j) = v'(p_j) \\text{ for each } j\\neq i, \\nonumber \\\\\n& \\quad \\quad \\quad P^{\\#}(\\phi(v),\\mathbf{x}) \\leq P^{\\#}(\\phi(v'),\\mathbf{x}) \\label{eq:monoton_inc}\n\\end{align}\nSimilarly, if $\\phi$ is monotonically decreasing with $p_i$, then $P^{\\#}(\\phi, \\cdot)$ is also monotonically decreasing with $p_i$. 
Specifically, for any signal $\\mathbf{x}$,~\\eqref{eq:monoton_dec} holds when \\eqref{eq:mon_dec} holds:\n\\begin{align}\n&\\text{for all } v,v' \\text{ with } v(p_i) > v'(p_i), v(p_j) = v'(p_j) \\text{ for each } j\\neq i, \\nonumber \\\\\n& \\quad \\quad \\quad P^{\\#}(\\phi(v),\\mathbf{x}) \\leq P^{\\#}(\\phi(v'),\\mathbf{x}) \\label{eq:monoton_dec}\n\\end{align}\n\n\nNote that, by~\\eqref{eq:totalformula}, $P^{\\#}(\\phi, \\cdot)$ and $N^{\\#}(\\phi, \\cdot)$ have the opposite monotonicity property, e.g., if $P^{\\#}(\\phi, \\cdot)$ is monotonically increasing with $p_i$ then $N^{\\#}(\\phi, \\cdot)$ is monotonically decreasing with $p_i$.\n\nIn our work, a parameter appears only once in a parametric ptSTL formula. Therefore, the considered formulas are monotonic in each parameter, i.e., either monotonically increasing or monotonically decreasing.\n\n\n\\subsection{Monotonicity for Temporal Parameters}\nThe number of positive labels $P^{\\#}(\\phi, \\mathcal{D})$ of a ptSTL formula $\\phi$ over a dataset $\\mathcal{D}$ is simply defined as the total number of positive labels over all signals, derived from~\\eqref{eq:positive}. $N^{\\#}(\\phi, \\mathcal{D})$ is defined similarly.\n\\[ P^{\\#}(\\phi, \\mathcal{D}) = \\sum_{(\\mathbf{x}, \\mathbf{l}) \\in \\mathcal{D}} P^{\\#}(\\phi, \\mathbf{x}) \\quad N^{\\#}(\\phi, \\mathcal{D}) = \\sum_{(\\mathbf{x}, \\mathbf{l}) \\in \\mathcal{D}} N^{\\#}(\\phi, \\mathbf{x})\\]\n\nAs either a positive ($1$) or a negative ($0$) label is assigned to each data point, the equality $|\\mathcal{D}| \\times (K+1) = P^{\\#}(\\phi, \\mathcal{D}) + N^{\\#}(\\phi, \\mathcal{D})$ trivially holds. 
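As a hedged illustration (the paper gives no code, and the function and variable names here are ours), the label generation in~\eqref{eq:label_set} and the counts $P^{\#}$ and $N^{\#}$ can be sketched for the formula $\mathbf{P}_{[a,b]}\, (x < c)$ on a one-dimensional signal:

```python
# Sketch: labels l^phi for the ptSTL formula P_[a,b](x < c) on a 1-D signal,
# following the semantics I(t,[a,b]) = [t-b, t-a] ∩ [0, t], plus the counts
# P#(phi, x) and N#(phi, x).
def labels_previously_lt(x, a, b, c):
    """l_t = 1 iff x[t'] < c for some t' in [t-b, t-a] ∩ [0, t]."""
    return [
        1 if any(x[tp] < c for tp in range(max(0, t - b), t - a + 1)) else 0
        for t in range(len(x))
    ]

x = [5.0, 1.0, 7.0, 8.0, 9.0, 2.0]            # K + 1 = 6 samples
l = labels_previously_lt(x, a=1, b=2, c=3.0)  # look back 1..2 time steps
p_count = sum(l)                              # P#(phi, x)
n_count = len(l) - sum(l)                     # N#(phi, x)
assert p_count + n_count == len(x)            # P# + N# = K + 1
```

The final assertion checks~\eqref{eq:totalformula}: every time point receives exactly one of the two labels.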
We define the number of correctly identified positive instances (\\textit{true positives}) with respect to the labels generated by the formula $\\phi$ using~\\eqref{eq:label_set} and the dataset labels as:\n\\begin{align}\\label{eq:tp}\n\n TP^{\\#}(\\phi, \\mathcal{D}) = \\sum_{(\\mathbf{x},\\mathbf{l}) \\in \\mathcal{D}} \\sum_{i=0}^{K} l_i \\wedge l^\\phi _i\n\n \\end{align}\n \nSimilarly, the total number of incorrect positive results, i.e., the data points that have label $0$ in the given dataset and label $1$ according to the ptSTL formula $\\phi$ ($l_i^\\phi=1$), is defined as:\n\\begin{align}\\label{eq:fp}\n\n FP^{\\#}(\\phi, \\mathcal{D}) = \\sum_{(\\mathbf{x},\\mathbf{l})\\in \\mathcal{D}} \\sum_{i=0}^{K} \\neg l_i \\wedge l^\\phi_i \n \\end{align}\n \nThe derivations of $TP^{\\#}(\\cdot, \\cdot)$ and $FP^{\\#}(\\cdot, \\cdot)$ preserve the monotonicity properties~\\eqref{eq:monoton_inc} and~\\eqref{eq:monoton_dec}. Therefore, if a parametric ptSTL formula $\\phi$ is increasing (or decreasing) with a parameter $p$, then both $TP^{\\#}(\\cdot, \\cdot)$ and $FP^{\\#}(\\cdot, \\cdot)$ are increasing (or decreasing) with $p$. \n\nWe use $\\mathcal{M}(p,\\phi)$ to denote the monotonicity property of parameter $p$ in $\\phi$ for the number of positives ($TP^{\\#}(\\cdot, \\cdot)$, $FP^{\\#}(\\cdot, \\cdot)$ or $P^{\\#}(\\cdot,\\cdot)$):\n\\begin{align}\\label{eq:monotonicity}\n\t\\mathcal{M}(p,\\phi) = &\\begin{cases}\n\t \\mathbf{I} \\text{ if } p \\text{ is monotonically increasing in } \\phi \\\\\n\t \\mathbf{D} \\text{ if } p \\text{ is monotonically decreasing in } \\phi\n\t\\end{cases}\n\\end{align}\n\nThe monotonicity property $\\mathcal{M}(\\cdot,\\cdot)$ for each parameter in a basic formula is given in Table~\\ref{tb:monotonicity_table}. 
\n \n\\begin{table}[h]\n\\begin{center}\n\\caption{\\textsc{Monotonicity Table}}\\label{tb:monotonicity_table}\n\\begin{tabular}{cc|cc}\n$\\phi$ & $\\mathcal{M}(p,\\phi)$ & $\\phi$ & $\\mathcal{M}(p,\\phi)$ \\\\ \\hline\\hline\n$x > p$ & $\\mathbf{D}$ & $x < p$ & $\\mathbf{I}$\\\\ \\hline\n$\\mathbf{A}_{[c, p]} \\varphi$ & $\\mathbf{D}$ & $\\mathbf{A}_{[p, c]} \\varphi$ & $\\mathbf{I}$\\\\ \\hline\n$\\mathbf{P}_{[c, p]}\\varphi$ & $\\mathbf{I}$ & $\\mathbf{P}_{[ p, c]}\\varphi$ & $\\mathbf{D}$ \\\\ \\hline\n$\\varphi_1 \\mathbf{S}_{[c, p]}\\varphi_2$ & $\\mathbf{I}$ & $\\varphi_1 \\mathbf{S}_{[p,c]}\\varphi_2$ & $\\mathbf{D}$ \\\\ \\hline\n\\end{tabular}\n\\end {center}\n\\end{table}\n\nNote that the preceding derivations are based on the number of positive labels. The number of correctly identified negative labels, $TN^{\\#}(\\phi, \\mathcal{D})$, and the number of incorrectly identified negative labels $FN^{\\#}(\\phi, \\mathcal{D})$ are defined similarly. These show the opposite monotonicity property, i.e., if $TP^{\\#}(\\phi, \\mathcal{D})$ is monotonically increasing in parameter $p$, then $TN^{\\#}(\\phi, \\mathcal{D})$ is monotonically decreasing in $p$. \nFurthermore, the negation operator ($\\neg$) inverts the monotonicity property. For example, while $\\mathcal{M}(p,\\mathbf{P}_{[a,b]} x < p)$ is $\\mathbf{I}$, $\\mathcal{M}(p,\\neg \\mathbf{P}_{[a,b]} x < p)$ is $\\mathbf{D}$. 
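This inversion rule can be sketched as follows (a hypothetical tuple encoding of formulas, ours rather than the paper's; it handles only predicate threshold parameters, while interval-bound parameters follow Table~\ref{tb:monotonicity_table} directly):

```python
# Sketch: monotonicity M(p, phi) for a predicate threshold parameter p,
# starting from the base direction in Table 1 and flipping it once per
# negation on the path from the root of the syntax tree to the parameter.
def monotonicity(phi, negations=0):
    op = phi[0]
    if op == 'lt':                      # x < p: increasing (Table 1)
        return 'D' if negations % 2 else 'I'
    if op == 'gt':                      # x > p: decreasing (Table 1)
        return 'I' if negations % 2 else 'D'
    if op == 'not':                     # negation inverts the direction
        return monotonicity(phi[1], negations + 1)
    if op in ('P', 'A'):                # temporal wrappers preserve the
        return monotonicity(phi[2], negations)  # threshold's direction
    raise ValueError(f'unknown operator {op}')

# M(p, P_[a,b] x < p) = I, but M(p, not P_[a,b] x < p) = D:
inner = ('P', (0, 5), ('lt', 'x', 'p'))
assert monotonicity(inner) == 'I'
assert monotonicity(('not', inner)) == 'D'
```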
A parameter's monotonicity is determined by checking the syntax tree of the formula: each negation that appears on the path from the root node to the parameter inverts the monotonicity of the parameter shown in Table~\\ref{tb:monotonicity_table}.\n\n\n\\begin{example}\n\\label{ex:pptstl_formula}\n\nConsider the parametric ptSTL formula\n\\begin{equation}\\label{eq:exformula}\n\\phi = ( \\mathbf{P}_{[p_1, p_2]} ( qGust < p_3 ) ) \\wedge ( wGust < p_4 )\n \\end{equation} \nThe monotonicity properties of $p_1, p_2, p_3$ and $p_4$ are $\n\\mathcal{M}(p_1,\\phi) = \\mathbf{D}$, $\n\\mathcal{M}(p_2,\\phi) = \\mathbf{I}$, $\n\\mathcal{M}(p_3,\\phi) = \\mathbf{I}$, $\n\\mathcal{M}(p_4,\\phi) = \\mathbf{I}.$\n\\end{example}\n\n\n\n\n\\subsection{Parameter Optimization Using Monotonicity}\\label{sec:parametersynthesis}\n\nWe now present an efficient method based on monotonicity to find the parameters of a parametric ptSTL formula $\\phi$ from a given dataset $\\mathcal{D}$~\\eqref{eq:dataset} such that the number of correctly identified positives of the resulting formula is maximized while the number of false positives is below a given threshold: \n\n\\begin{problem}\\label{prob:optimization} \nGiven a labeled dataset $\\mathcal{D}$~\\eqref{eq:dataset}, a parametric ptSTL formula $\\phi$ with $n$ parameters $p_1, p_2, \\ldots, p_n$, lower and upper bounds $l_i, u_i$ for each parameter $p_i$, and an error bound $B \\in \\mathbb{N}$, find the valuation $v$ within the given limits that maximizes $TP^{\\#}(\\phi(v), \\mathcal{D})$ while guaranteeing that $ FP^{\\#}(\\phi(v), \\mathcal{D}) \\leq B$.\n\n\\end{problem}\n\nTo solve this problem, we first present an algorithm for parametric ptSTL formulas with two parameters, and then discuss how this approach is adapted for parametric ptSTL formulas with more than two parameters.\n\nWe present a diagonal search method to solve Prob.~\\ref{prob:optimization} in an efficient way when $n$ is 2, which adapts the search algorithm for the product of an $m$-element chain and an 
$n$-element chain~\\citep{linial1985searching} to ptSTL parameter optimization. The diagonal search algorithm starts with a valuation $v$ where $v(p_1)$ is the bound on $p_1$ that maximizes $TP^{\\#}(\\phi, \\mathcal{D})$ (i.e., either $l_1$ or $u_1$) and $v(p_2)$ is the bound on $p_2$ that minimizes $TP^{\\#}(\\phi, \\mathcal{D})$. Given step sizes $\\delta_1$ and $\\delta_2$ for the two parameters, the algorithm iteratively changes the value of a parameter according to the following rule: change $v(p_1)$ by $\\delta_1$ in the direction decreasing $P^{\\#}(\\phi, \\mathcal{D})$ if the error constraint does not hold at $v$, otherwise change $v(p_2)$ by $\\delta_2$ in the direction increasing $P^{\\#}(\\phi, \\mathcal{D})$. Thus, the algorithm moves along a diagonal of the product of the discretized parameter domains with the objective of satisfying the error bound or improving the optimization criterion. This diagonal method is summarized in Alg.~\\ref{alg:diagonal}.\n\n\n\n\\begin{algorithm}\n\\caption{$DiagonalSearch(\\phi,B,\\mathcal{D}, l_1, u_1, \\delta_1, l_2, u_2, \\delta_2)$}\\label{alg:diagonal}\n\\begin{flushleft}\n\\begin{algorithmic}[1]\n\\Require{$\\phi$: A parametric ptSTL formula with parameters $p_1$ and $p_2$, $B$ : bound on $FP^{\\#}(\\phi(v), \\mathcal{D})$, $\\mathcal{D}$: dataset as in~\\eqref{eq:dataset}, $l_i, u_i, \\delta_i$: lower bound, upper bound and step size for parameter $p_i$, $i \\in \\{1,2\\}$}\n\\Ensure{$v_{best} = \\arg\\max_{ v } \\{TP^{\\#}(\\phi(v),\\mathcal{D}) \\mid FP^{\\#}(\\phi(v),\\mathcal{D}) \\leq B\\}$}\n\\If {$\\mathcal{M}(p_1,\\phi) == \\mathbf{I}$} \\label{line:startinitialize}\n\t\\State $v(p_1) = u_1, \\bar \\delta_1 = -\\delta_1$\n\\Else\n\\State $v(p_1) = l_1, \\bar \\delta_1 = \\delta_1$\n\\EndIf\n\\If {$\\mathcal{M}(p_2,\\phi) == \\mathbf{I}$}\n\t\\State $v(p_2) = l_2, \\bar \\delta_2 = \\delta_2$\n\\Else\n\\State $v(p_2) = u_2, \\bar \\delta_2 = -\\delta_2$\n\\EndIf \\label{line:endinitialize}\n\\State $v_{best} = []$, 
$TP_{best} = 0$\n\\While{$l_1 \\leq v(p_1) \\leq u_1 \\wedge l_2 \\leq v(p_2) \\leq u_2$}\\label{line:loopstart}\n \\If{$B < FP^{\\#}(\\phi(v),\\mathcal{D})$} \\label{line:errorconstraint}\n \\State $v(p_1) = v(p_1) + \\bar \\delta_1$\n \\Else\n \n \\If{$TP^{\\#}(\\phi(v),\\mathcal{D}) \\geq TP_{best}$}\\label{line:bestknown}\n \\State $TP_{best} = TP^{\\#}(\\phi(v),\\mathcal{D})$, $v_{best} = v$\n \\EndIf\n \\State $v(p_2) = v(p_2) + \\bar \\delta_2$\n \\EndIf\n\\EndWhile\\label{line:loopend}\n\\State \\Return $v_{best}$\n\\end{algorithmic}\n\\end{flushleft}\n\\end{algorithm}\n\nIn lines~\\ref{line:startinitialize}-\\ref{line:endinitialize} of Alg.~\\ref{alg:diagonal}, the initial value and the update direction are defined for each parameter with respect to its monotonicity property. At each iteration of the main loop (lines~\\ref{line:loopstart}-\\ref{line:loopend}), exactly one parameter value is updated. If the error constraint (line~\\ref{line:errorconstraint}) is violated, the parameter that was initialized to maximize $TP^{\\#}(\\phi,\\mathcal{D})$, $p_1$, is changed by $\\bar \\delta_1$ to reduce $FP^{\\#}(\\phi(v),\\mathcal{D})$. Otherwise, the current parameter assignment is a candidate solution, and it is checked against the best known solution (line~\\ref{line:bestknown}). Then, the parameter that was initialized to minimize $FP^{\\#}(\\phi,\\mathcal{D})$, $p_2$, is changed by $\\bar \\delta_2$ to increase $TP^{\\#}(\\phi(v),\\mathcal{D})$. The iterations end when a parameter moves out of the given bounds. 
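The traversal described above can be sketched as follows (a hedged Python rendering of the diagonal search, written here for the case $\mathcal{M}(p_1,\phi) = \mathcal{M}(p_2,\phi) = \mathbf{I}$; the callables `tp` and `fp` stand in for evaluating $TP^{\#}$ and $FP^{\#}$ on the dataset, and the toy counts below are ours, not the paper's):

```python
# Sketch of Alg. 1 when both parameters are monotonically increasing:
# p1 starts at its TP-maximizing bound u1, p2 at its TP-minimizing bound l2.
def diagonal_search(tp, fp, B, l1, u1, d1, l2, u2, d2):
    v1, v2 = u1, l2
    v_best, tp_best = None, -1
    while l1 <= v1 <= u1 and l2 <= v2 <= u2:
        if fp((v1, v2)) > B:
            v1 -= d1          # infeasible cell: step p1 to reduce FP#
        else:
            if tp((v1, v2)) >= tp_best:
                tp_best, v_best = tp((v1, v2)), (v1, v2)
            v2 += d2          # feasible candidate recorded: step p2 to raise TP#
    return v_best, tp_best

# Toy monotone counts on a 4x4 grid; the feasible optimum sits at (3, 0).
v_best, tp_best = diagonal_search(
    tp=lambda v: 2 * v[0] + v[1], fp=lambda v: v[0] + v[1],
    B=3, l1=0, u1=3, d1=1, l2=0, u2=3, d2=1)
assert (v_best, tp_best) == ((3, 0), 6)
```

Because the two counts are monotone, each step either restores feasibility or improves the objective, so the walk visits each row and column at most once.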
Consequently, $O(m_1 + m_2)$ formula evaluations are performed over the given dataset, where $m_1 = \\frac{u_1 - l_1}{\\delta_1}, m_2 = \\frac{u_2 - l_2}{\\delta_2}$.\n\n\n\n\\begin{figure}\n\\begin{center}\n\\resizebox{8.4 cm}{4.2 cm}{%\n\\begin{tikzpicture}[scale=.6]\n\\begin{scope}\n\n\\fill[black!30!red,opacity=0.6] (3,0) rectangle (5,1);\n\\fill[black!30!red,opacity=0.6] (5,0) rectangle (7,5);\n\\fill[black!30!red,opacity=0.6] (6,5) rectangle (7,6);\n\n\\fill[black!30!green,opacity=0.6] (1,0) rectangle (3,6);\n\\fill[black!30!green,opacity=0.6] (3,1) rectangle (5,6);\n\\fill[black!30!green,opacity=0.6] (5,5) rectangle (6,6);\n\\draw (1, 0) grid (7, 6);\n\n\\draw[very thick, scale=1] (1, 0) grid (3, 0);\n\\draw[very thick, scale=1] (3, 0) grid (3, 1);\n\\draw[very thick, scale=1] (3, 1) grid (5, 1);\n\\draw[very thick, scale=1] (5, 1) grid (5, 5);\n\\draw[very thick, scale=1] (5, 5) grid (6, 5);\n\\draw[very thick, scale=1] (6, 5) grid (6, 6);\n\\draw[very thick, scale=1] (6, 6) grid (7, 6);\n\\tikzset{anchor=west}\n\n\\setcounter{row7}{1}\n\\setrowseven {}{\\begin{turn}{90}\\tiny $\\: \\: \\:\\: \\:l_1$\\end{turn}}{\\begin{turn}{90} \\tiny $\\: \\: \\:\\: \\: l_1 + \\delta_1$\\end{turn}}{\\tiny ..}{\\tiny .}{\\begin{turn}{90}\\tiny $ \\: \\: \\:\\: \\:u_1 - \\delta_1$\\end{turn}}{\\begin{turn}{90}\\tiny $\\: \\: \\:\\: \\:u_1$\\end{turn}}\n\\setrowseven {\\begin{turn}{-45}\\tiny $l_2 $\\end{turn}}{1}{2}{2}{2}{3}{5}\n\\setrowseven {\\begin{turn}{-45}\\tiny $l_2 + 1\\delta_2$\\end{turn}}{1}{2}{2}{3}{4}{5}\n\\setrowseven {\\begin{turn}{-45}\\tiny $l_2 + 2\\delta_2$\\end{turn}}{1}{2}{3}{3}{4}{5}\n\\setrowseven {\\begin{turn}{-45}\\tiny $l_2 + 3\\delta_2$\\end{turn}}{2}{2}{3}{3}{4}{5}\n\\setrowseven {\\begin{turn}{-45}\\tiny $l_2 + 4\\delta_2$\\end{turn}}{2}{2}{3}{3}{4}{5}\n\\setrowseven {\\begin{turn}{-45}\\tiny $l_2 + 5\\delta_2$\\end{turn}}{3}{3}{4}{4}{4}{5}\n\n\\node[anchor=center] at (4.0, -0.5) {$FP^{\\#}(\\phi(v), 
\\mathcal{D})$};\n\\end{scope}\n\n\\begin{scope}[xshift=7.5cm]\n\n\\fill[black!30!red,opacity=0.6] (3,0) rectangle (5,1);\n\\fill[black!30!red,opacity=0.6] (5,0) rectangle (7,5);\n\\fill[black!30!red,opacity=0.6] (6,5) rectangle (7,6);\n\n\\fill[black!30!green,opacity=0.6] (1,0) rectangle (3,6);\n\\fill[black!30!green,opacity=0.6] (3,1) rectangle (5,6);\n\\fill[black!30!green,opacity=0.6] (5,5) rectangle (6,6);\n\\draw (1, 0) grid (7, 6);\n\n\\draw[very thick, scale=1] (1, 0) grid (3, 0);\n\\draw[very thick, scale=1] (3, 0) grid (3, 1);\n\\draw[very thick, scale=1] (3, 1) grid (5, 1);\n\\draw[very thick, scale=1] (5, 1) grid (5, 5);\n\\draw[very thick, scale=1] (5, 5) grid (6, 5);\n\\draw[very thick, scale=1] (6, 5) grid (6, 6);\n\\draw[very thick, scale=1] (6, 6) grid (7, 6);\n\n\\draw[line width=2pt,<-] (5.5,5.5)--(6.5,5.5) node[right]{};\n\\draw[line width=2pt,<-] (5.5,4.5)--(5.5,5.5) node[right]{};\n\\draw[line width=2pt,<-] (4.5,4.5)--(5.5,4.5) node[right]{};\n\\draw[line width=2pt,<-] (4.5,0.5)--(4.5,4.5) node[right]{};\n\\draw[line width=2pt,<-] (2.5,0.5)--(4.5,0.5) node[right]{};\n\\draw[line width=2pt,<-] (2.5,0)--(2.5,0.5) node[right]{};\n\\tikzset{anchor=west}\n\n\\setcounter{row7}{1}\n\n\\setrowseven {}{\\begin{turn}{90}\\tiny $\\: \\: \\:\\: \\:l_1$\\end{turn}}{\\begin{turn}{90} \\tiny $\\: \\: \\:\\: \\: l_1 + \\delta_1$\\end{turn}}{\\tiny ..}{\\tiny .}{\\begin{turn}{90}\\tiny $ \\: \\: \\:\\: \\:u_1 - \\delta_1$\\end{turn}}{\\begin{turn}{90}\\tiny $\\: \\: \\:\\: \\:u_1$\\end{turn}}\n\\setrowseven {\\begin{turn}{-45}\\tiny $l_2$\\end{turn}}{10}{11}{14} {15}{17} {18}\n\\setrowseven {\\begin{turn}{-45}\\tiny $l_2 + \\delta_2$\\end{turn}}{10}{12}{15} {16}{17} {19}\n\\setrowseven {\\begin{turn}{-45}\\tiny $l_2 + 2\\delta_2$\\end{turn}}{11}{12}{15} {17}{18} {21}\n\\setrowseven {\\begin{turn}{-45}\\tiny $l_2 + 3\\delta_2$\\end{turn}}{13}{13}{17} {19}{21} {22}\n\\setrowseven {\\begin{turn}{-45}\\tiny $l_2 + 4\\delta_2$\\end{turn}}{14}{14}{18} {20}{22} 
{24}\n\\setrowseven {\\begin{turn}{-45}\\tiny $l_2 + 5\\delta_2$\\end{turn}}{15}{17}{22} {24}{28} {30}\n\n\n\\node[anchor=center] at (4.0, -0.5) {$TP^{\\#}(\\phi(v), \\mathcal{D})$};\n\\end{scope}\n\n\n\\end{tikzpicture}\n}\n\\end{center}\n\\caption{An example run of Alg.~\\ref{alg:diagonal}.} \\label{fig:grid}\n\\end{figure}\n\nAn example run of Alg.~\\ref{alg:diagonal} is shown in Fig.~\\ref{fig:grid} for illustration, where $B$ is $3$, both $\\mathcal{M}(p_1,\\phi)$ and $\\mathcal{M}(p_2,\\phi)$ are $\\mathbf{I}$. In Fig. \\ref{fig:grid}, each cell contains $FP^{\\#}(\\phi(v), \\mathcal{D})$ and $TP^{\\#}(\\phi(v), \\mathcal{D})$ on the left and right arrays, respectively. Cells with $ FP^{\\#}(\\phi(v), {\\mathcal{D}}) > B$ are marked with red (infeasible parameters) and the rest of the cells are marked with green. The algorithm takes a step to the left when it encounters a red cell and takes a step to the down when it encounters a green cell. The path the algorithm follows is shown on the grid. The algorithm returns $[u_1 - 2\\delta_1, l_2 + 4\\delta_2]$. Note that the algorithm evaluates the optimal in each row it finds the overall optimal for the given step sizes.\n\n\nWe now describe the proposed method to solve Prob.~\\ref{prob:optimization}. Let $\\phi$ be the given parametric ptSTL formula with n parameters $p_1, \\ldots, p_n$. If $n=1$, the optimal value is found with a binary search. If $n=2$, the optimal value is found with $DiagonalSearch$ method described in Alg.~\\ref{alg:diagonal}. Finally, if $n > 2$, $DiagonalSearch$ is run for $p_1$ and $p_2$ for all possible combinations of the last $n-2$ parameters, and the optimal parameters are returned. The whole process is referred as $ParameterSynthesis(\\phi, B, \\mathcal{D})$.\n\n\n\\begin{example}\n\\label{ex:optimization}\nConsider the parametric ptSTL formula~\\eqref{eq:exformula} from Ex. 
\\ref{ex:pptstl_formula} with parameter ranges: $l_1, l_2 = 0$, $u_1,u_2 = 30$, $l_3=-0.4$, $u_3= 0.3$, $l_4=-240$, $u_4=210$. \nThe step sizes $\\delta_1, \\delta_2, \\delta_3$ and $\\delta_4$ are set to $2, 0.05, 0.05$ and $30$, respectively. Since $n>2$, two of the parameters, namely $p_2, p_3$, are selected for $DiagonalSearch$ based on the size of the parameter domains. \nFor the remaining parameters $p_1$ and $p_4$, $\\phi^{i,j}$ is created as follows:\n\\begin{align}\n\\phi^{i,j} =& \\phi(v) \\text{ where } v = [i, p_2, p_3 , j ], \\\\ \\nonumber\n & i \\in \\{0,2, \\ldots, 30\\}, j \\in \\{-240, -210, \\ldots, 210\\}\n\\end{align}\nAlg. \\ref{alg:diagonal} is run for each $\\phi^{i,j}$ with $B=5$ over the dataset $\\mathcal{D}$ defined in Ex.~\\ref{ex:formulation}. The maximum $TP^{\\#}(\\phi^{i,j}(v),\\mathcal{D})$ is attained when $i=4$ and $j=-120$ with $v_{best} = [10, 0]$, which corresponds to the ptSTL formula \n$\\phi(v) = ( \\mathbf{P}_{[4, 10]} qGust < 0 ) \\wedge ( wGust < -120 )$ with $TP^{\\#}(\\phi(v),\\mathcal{D}) = 235$ and $FP^{\\#}(\\phi(v),\\mathcal{D}) = 4$. This formula explains approximately $50\\%$ of the large deviations from the normal behavior (label 1). Thus, a possible approach would be adjusting the internal commands generated from $pilot$ with respect to the wind behavior characterized by $\\phi(v)$.\nNote that while grid search requires 61440 valuations~\\citep{codit2018}, $ParameterSynthesis(\\phi, B, \\mathcal{D})$ can find the optimal solution with $4721$ valuations.\n \n\n\\end{example}\n\n\\subsection{Formula Synthesis}\nIn this section, we present the solution to the main problem (Prob.~\\ref{prob:main}) considered in this paper: find a ptSTL formula that represents labeled events in a dataset.\nIn general, an unexpected behavior\/fault can occur due to a number of different reasons. 
To exploit this property, we iteratively construct a formula for the given dataset $\\mathcal{D}$ as a disjunction of ptSTL formulas, each representing a different reason. The goal in each iteration is to find a formula for a subset of the labeled instances, while limiting incorrectly labeled instances (FP), as this type of error propagates through the disjunction operator. To find such a formula, we define the set of all parametric formulas as in~\\citep{codit2018}, and perform parameter optimization on each of them using the $ParameterSynthesis$ method described in Sec.~\\ref{sec:parametersynthesis}. \n\nGiven the set of system variables, $\\{x^1, \\ldots,x^n\\}$, and a bound on the number of operators $N$, the set of all parametric ptSTL formulas with up to $N$ operators $\\mathcal{F}^{\\leq N}$ is recursively defined as:\n\\begin{align}\\label{eq:formula_space}\n\\mathcal{F}^{0} &= \\{ x^i \\sim p_i \\mid \\sim \\in \\{<,>\\}, i = 1,\\ldots,n\\} \\cup \\{\\mathbf{T}\\} \\\\\n\\mathcal{F}^{N} &= \\{\\neg \\phi , \\mathbf{P}_{[a,b]} \\phi, \\mathbf{A}_{[a,b]} \\phi \\mid \\phi \\in \\mathcal{F}^{N-1} \\} \\cup \\nonumber \\\\ \n&\\bigcup\\limits_{i=0}^{N-1} \\{ \\phi_{1} \\wedge \\phi_{2} , \\phi_{1} \\vee \\phi_{2} , \\phi_{1}\\mathbf{S}_{[a,b]}\\phi_{2} \\mid \\nonumber \\\\\n& \\quad\\quad\\quad\\quad \\phi_{1} \\in \\mathcal{F}^{i}, \\phi_{2} \\in \\mathcal{F}^{N-i-1}\\} \\nonumber \\\\\n\\mathcal{F}^{\\leq N} &= \\cup_{i=0}^{N} \\mathcal{F}^{i}\\nonumber\n\\end{align}\n\n\n\n\\begin{algorithm}\n\\caption{$FormulaSynthesis(\\mathcal{F}, B, \\mathcal{D}, p)$}\\label{alg:synthesize} \n\n\\begin{algorithmic}[1]\n\\Require{$\\mathcal{F}$: a set of parametric ptSTL formulas, $B$: bound on the number of false positives, $\\mathcal{D}$: a dataset as in~\\eqref{eq:dataset}, $p$: upper bound on the number of formulas concatenated with disjunction.} \n\n \\State $\\mathcal{F}^v= \\{ \\phi(v) = ParameterSynthesis( \\phi, B, \\mathcal{D}) \\mid \\phi \\in \\mathcal{F}\\}$ 
\\label{line:parametersyn}\n \\State $i=0, TP_{prev} = 0, TP=1$, $\\Phi = false$\n \\While{$TP > TP_{prev}$\\text{ and } $i < p$}\\label{line:loopstart2}\n \\State $\\phi(v)^* = \\arg \\max_{\\phi(v) \\in \\mathcal{F}^v} TP^{\\#}(\\Phi \\vee \\phi(v), \\mathcal{D})$\n \\State $\\Phi = \\Phi \\vee \\phi(v)^*$\n \\State $i=i+1$\n\n \\State $TP_{prev} = TP$, $TP = TP^{\\#}(\\Phi, \\mathcal{D})$\n \\EndWhile\\label{line:loopend2}\n\n \\State \\Return $\\Phi$\n\\end{algorithmic}\n\\end{algorithm}\n\nThe proposed formula synthesis approach is summarized in Alg.~\\ref{alg:synthesize}. The method takes a set of parametric formulas $\\mathcal{F}$, a bound on the number of false positives $B$, a labeled dataset $\\mathcal{D}$ and a bound $p$ on the number of ptSTL formulas, and generates a ptSTL formula $\\phi^\\star$ in the form of~\\eqref{eq:endformula} with at most $p$ sub-formulas, such that $FP^{\\#}(\\phi^\\star, \\mathcal{D}) \\leq B p$ and $TP^{\\#}(\\phi^\\star, \\mathcal{D})$ is optimized. The set of parametric formulas can be defined as in~\\eqref{eq:formula_space}; alternatively, an expert on the considered system can provide a set of parametric formulas. In the algorithm, first, parameters are optimized for each parametric ptSTL formula $\\phi \\in \\mathcal{F}$ (line~\\ref{line:parametersyn}). Then, starting from $\\Phi = false$, the formula $\\phi(v)^\\star$ maximizing $TP^{\\#}$ of the combined formula $\\Phi \\vee \\phi(v)^\\star$ is iteratively selected from the set of ptSTL formulas $\\mathcal{F}^v$ until the sub-formula limit $p$ is reached, or concatenating new formulas does not improve the result (lines~\\ref{line:loopstart2}-\\ref{line:loopend2}). Note that at each iteration a formula $\\phi(v)^\\star$ is added to $\\Phi$ with disjunction.\n\nIn Alg.~\\ref{alg:synthesize}, $ParameterSynthesis( \\phi, B, \\mathcal{D}) $ is run only once for each parametric ptSTL formula. 
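The greedy loop of the synthesis procedure can be sketched as follows (our abstraction, not the authors' implementation: each optimized candidate $\phi(v) \in \mathcal{F}^v$ is represented by its label vector $\mathbf{l}^\phi$ over the dataset, so disjunction of formulas becomes an element-wise OR of label vectors):

```python
# Sketch of the greedy disjunction loop in Alg. 2: repeatedly OR in the
# candidate whose disjunction with the current formula Phi yields the
# highest TP#, up to p sub-formulas, stopping early when TP# stops improving.
def formula_synthesis(candidates, labels, p):
    def tp(lab):  # TP#: dataset label 1 and formula label 1
        return sum(1 for l, lf in zip(labels, lab) if l and lf)
    current, chosen, tp_prev = [0] * len(labels), [], 0
    while len(chosen) < p:
        best = max(range(len(candidates)),
                   key=lambda j: tp([a | b for a, b in zip(current, candidates[j])]))
        merged = [a | b for a, b in zip(current, candidates[best])]
        if tp(merged) <= tp_prev:   # no TP improvement: stop, as in Alg. 2
            break
        current, tp_prev = merged, tp(merged)
        chosen.append(best)
    return chosen, tp_prev

labels = [1, 1, 0, 1, 0]                                   # dataset labels
cands = [[1, 0, 0, 0, 0], [0, 1, 0, 1, 0], [1, 1, 0, 0, 1]]  # l^phi vectors
assert formula_synthesis(cands, labels, p=2) == ([1, 0], 3)
```

The sketch makes the greedy nature explicit: the second candidate is chosen for its marginal gain given the first, not for its standalone score.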
At every iteration of the algorithm, $TP^{\\#}(\\Phi \\vee \\phi(v), \\mathcal{D})$ is computed for each $\\phi(v) \\in \\mathcal{F}^v$ to select the formula $\\phi(v)^\\star$ that generates the highest increment in $TP$. Note that the resulting formula $\\phi^\\star$ might not be the optimal formula due to the iterative synthesis approach. Essentially, the fitness of the formula is upper bounded by that of the formula that would be obtained by performing parameter optimization on parametric formulas in the form of~\\eqref{eq:endformula} with $N \\times p$ parameters (as in~\\citep{codit2018}). \nHowever, due to the complexity of the parameter synthesis algorithm, this computation is not feasible for large formulas.\n\n\n\\begin{example}\n\\label{ex:synthesis}\nThe set of all parametric ptSTL formulas $\\mathcal{F}^{\\leq 2}$ with at most $2$ operators over the system variables $\\{alpha, pilot, wGust, qGust\\}$ is generated according to~\\eqref{eq:formula_space}. The parameter domains are defined as:\n$p_a, p_b \\in \\{2i \\mid i=0, \\ldots,15 \\}$ for $\\mathbf{A}_{[p_a, p_b]}$, $\\mathbf{P}_{[p_a, p_b]}$, \\\\\n$p_{alpha} \\in \\{-0.5 + 0.05i \\mid i=0, \\ldots,20 \\}$, \\\\%$\\mathcal{V} = \\{x_0, x_1, x_2\\}$ \n$p_{pilot} \\in \\{-0.5 +05i \\mid i=0, \\ldots,20 \\}$, \\\\%$\\mathcal{V} = \\{x_0, x_1, x_2\\}$ \n$p_{wGust} \\in \\{-240 +30i \\mid i=0, \\ldots,15 \\}$, \\\\\n$p_{qGust} \\in \\{-0.4 + 0.05i \\mid i=0, \\ldots,14 \\}$. \n \nWe run Alg.~\\ref{alg:synthesize} with the parametric formula set $\\mathcal{F}^{\\leq 2}$, the dataset from Ex.~\\ref{ex:formulation}, bound $B=5$ and subformula limit $p=4$. 
The resulting formula is:\n\n\\begin{align}\n& \\phi = \\phi_1 \\vee \\phi_2 \\vee \\phi_3 \\vee \\phi_4 \\\\ \\nonumber\n& \\phi_1 = ( \\mathbf{P} _{[4, 10]} ( qGust < 0 ) ) \\wedge ( wGust < -120 ) \\\\ \\nonumber \n& \\phi_2 = ( wGust > 120 ) \\wedge ( \\mathbf{A} _{[14, 14]} ( pilot > -40 ) ) \\\\ \\nonumber \n& \\phi_3 =\\mathbf{P} _{[2, 2]} ( ( alpha < 30 ) \\wedge ( wGust < -120 ) )\\\\ \\nonumber\n& \\phi_4 =( \\mathbf{A} _{[4, 16]} ( qGust > 10 ) ) \\wedge ( pilot < -40 ) \\nonumber \n \\end{align}\n Each sub-formula $\\phi_1, \\phi_2 $, $\\phi_3$ and $ \\phi_4$ explains a condition that led to a disturbance in the pitch angle of the aircraft. \n The first formula $\\phi_1$ shows that a disturbance occurs when $wGust$ is less than $-120$ and $qGust$ was lower than $0$ at some time between $10$ and $4$ time steps earlier. Formulas $\\phi_2 $, $\\phi_3$ and $ \\phi_4$ show that a disturbance occurs 2) when $wGust$ is greater than $120$ and $pilot$ was greater than $-40$ $14$ time steps ago, or 3) if $alpha$ was less than $30$ and $wGust$ was less than $-120$ two steps ago, or 4) if $qGust$ was higher than $10$ at every step between $16$ and $4$ steps earlier and $pilot$ is less than $-40$ at the current step.\n\n $477$ out of $3000$ data points in $\\mathcal{D}$ are labeled with $1$.\n The $TP^{\\#}(\\phi,\\mathcal{D})$ and $FP^{\\#}(\\phi,\\mathcal{D})$ values are $419$ and $18$, respectively. The total mismatch count over the $3000$ data points is $76$, which corresponds to an accuracy of $97.46\\%$.\n\n\nThis result is found in 3350 seconds on a PowerEdge T430 machine with an Intel Xeon E5-2650 12C\/24T processor. It is important to note that $\\phi$ includes $11$ operators and $15$ parameters and is defined over $4$ system variables. 
This example shows that the proposed method can generate complex formulas from labeled datasets in an efficient way; existing formula synthesis algorithms, owing to their computational complexity, have only been validated on simpler formulas.\n\n\\end{example}\n\n\\section{Introduction}\n\\label{sect_introduction}\n\nIsotope abundance ratios provide a powerful tool for tracing stellar nucleosynthesis, evaluating the composition of stellar ejecta, and constraining the chemical evolution of the Milky Way \\citep{1994ARA&A..32..191W}. In particular, the $^{12}$C\/$^{13}$C ratio is one of the most useful tracers of the relative degree of primary to secondary processing. $^{12}$C is a predominantly primary nucleus formed by He burning in massive stars on short timescales \\citep[e.g.,][]{1995ApJS...98..617T}. $^{13}$C is produced on a longer timescale via CNO processing of $^{12}$C seeds from earlier stellar generations during the red giant phase in low- and intermediate-mass stars or novae \\citep[e.g.,][]{1994LNP...439...72H,1994ARA&A..32..153M,1994ARA&A..32..191W}. $^{12}$C\/$^{13}$C ratios are expected to be low in the central molecular zone (CMZ) and high in the Galactic outskirts, because the Galaxy formed from the inside out (e.g., \\citealt{2001ApJ...554.1044C}; \\citealt{2012A&A...540A..56P}).\n\nObservations indeed indicate a gradient of $^{12}$C\/$^{13}$C ratios across the Galaxy. \\citet{1976A&A....51..303W} and \\citet{1979MNRAS.188..445W} measured the $J_{Ka,Kc}=1_{1,0}-1_{1,1}$ lines of H$_2^{12}$CO and H$_2^{13}$CO near 5 GHz toward 11 and 24 Galactic continuum sources, respectively. Although effects of photon trapping were ignored, the results suggested that the $^{12}$C\/$^{13}$C ratios may vary with galactocentric distance ($R_{\\rm GC}$).
With the additional measurement of the $J_{Ka,Kc}=2_{1,1}-2_{1,2}$ line of H$_2$CO at 14.5 GHz, \\citet{1980A&A....82...41H, 1982A&A...109..344H, 1983A&A...127..388H, 1985A&A...143..148H} also reported a gradient after correcting for effects of optical depth and photon trapping. \\citet{1990ApJ...357..477L} used the optically thin lines of C$^{18}$O and $^{13}$C$^{18}$O to trace the carbon isotope ratios. They also found a systematic $^{12}$C\/$^{13}$C gradient across the Galaxy, ranging from about 20--25 near the Galactic center, to 30--50 in the inner Galactic disk, to $\\sim$70 in the local interstellar medium (ISM). \\citet{1996A&AS..119..439W}, complementing these investigations by also including the far outer Galaxy, encountered ratios in excess of 100 and demonstrated that the gradient found in the inner Galaxy continues farther out. \\citet{2005ApJ...634.1126M} obtained $^{12}$C\/$^{13}$C = 6.01$R_{\\rm GC}$ + 12.28 based on the CN measurements of \\citet{2002ApJ...578..211S}. Here and elsewhere, $R_{\\rm GC}$ denotes the galactocentric distance in units of kiloparsecs (kpc). By combining previously obtained H$_2$CO and C$^{18}$O results with these CN data, \\citet{2005ApJ...634.1126M} obtained $^{12}$C\/$^{13}$C = 6.21$R_{\\rm GC}$ + 18.71.\n\nMore recently, \\citet{2017ApJ...845..158H} reported observations of a variety of molecules (e.g., H$_2$CS, CH$_3$CCH, NH$_2$CHO, CH$_2$CHCN, and CH$_3$CH$_2$CN) and their $^{13}$C-substituted species toward Sgr B2(N). These authors obtained an average $^{12}$C\/$^{13}$C value of 24 $\\pm$ 7 in the Galactic center region, which is close to results using $^{12}$CH\/$^{13}$CH (15.8 $\\pm$ 2.4, Sgr B2(M)) by \\citet{2020A&A...640A.125J} and the particularly solid $^{12}$C$^{34}$S\/$^{13}$C$^{34}$S ratio (22.1$^{+3.3}_{-2.4}$, $+$50 km s$^{-1}$ Cloud) from \\citet{2020A&A...642A.222H} who use a variety of CS isotopologs and rotational transitions. 
\\citet{2019ApJ...877..154Y} proposed a linear fit of $^{12}$C\/$^{13}$C = (5.08 $\\pm$ 1.10)$R_{\\rm GC}$ + (11.86 $\\pm$ 6.60) based on a large survey of H$_2$CO. The latter includes data from the center to the outskirts of the Milky Way well beyond the Perseus arm. However, the CMZ was not traced as thoroughly as by \\citet{2020A&A...642A.222H}, not many sources from the innermost Galactic disk could be included in this survey, and the number of sources beyond the Perseus arm was also small, meaning that there is still room for improvement.\n\nWhile the carbon isotope ratio has drawn much attention in the past, it is not the only isotope ratio that can be studied at radio wavelengths and that has a significant impact on our understanding of the chemical evolution of the Galaxy. The isotope ratios of sulfur provide complementary information on stellar nucleosynthesis that is not traced by the carbon isotope ratio. Sulfur is special in that it provides a total of four stable isotopes, $^{32}$S, $^{34}$S, $^{33}$S, and $^{36}$S. In the Solar System, the abundance ratios are 95.02 : 4.21 : 0.75 : 0.021, respectively \\citep{1989GeCoA..53..197A}. $^{32}$S and $^{34}$S are synthesized during stages of hydrostatic oxygen-burning preceding a type II supernova event or during stages of explosive oxygen-burning in a supernova of type Ia; $^{33}$S is synthesized in explosive oxygen- and neon-burning, which is also related to massive stars; and $^{36}$S may be an s-process nucleus. The comprehensive calculations of \\citet{1995ApJS..101..181W} indicate that $^{32}$S and $^{33}$S are primary (in the sense that the stellar yields do not strongly depend on the initial metallicity of the stellar model), while $^{34}$S is not a clean primary isotope; its yield decreases with decreasing metallicity. 
According to \\citet{1985ApJ...295..604T} and \\citet{1989A&A...210...93L}, $^{36}$S is produced as a purely secondary isotope in massive stars, with a possible (also secondary) contribution from asymptotic giant branch (AGB) stars. Only a small fraction of $^{36}$S is destroyed during supernova explosions (Woosley, priv. comm.). Comparing ``primary'' and ``secondary'' nuclei, we might therefore expect the presence of weak $^{32}$S\/$^{34}$S and $^{34}$S\/$^{33}$S gradients and a stronger $^{32}$S\/$^{36}$S gradient as a function of galactocentric radius.\n\nThere is a strong and widespread molecular species that allows us to measure carbon and sulfur isotope ratios simultaneously, namely carbon monosulfide (CS). CS is unique in that it is a simple diatomic molecule exhibiting strong line emission and possessing eight stable isotopologs, which allows us to determine the above-mentioned carbon and sulfur isotope ratios. Six isotopologs have been detected so far in the ISM (e.g., \\citealt{1996A&A...305..960C}; \\citealt{1996A&A...313L...1M}; \\citealt{2020A&A...642A.222H}; \\citealt{2020ApJ...899..145Y}). \n\nMaking use of the CS species, \\citet{1996A&A...305..960C} and \\citet{1996A&A...313L...1M} obtained average abundance ratios of 24.4 $\\pm$ 5.0, 6.3 $\\pm$ 1.0, and 115 $\\pm$ 17 for $^{32}$S\/$^{34}$S, $^{34}$S\/$^{33}$S, and $^{34}$S\/$^{36}$S for the ISM, respectively. The latter is approximately half the solar value, but similar to the value found in IRC+10216 \\citep{2004A&A...426..219M}. Recently, \\citet{2020A&A...642A.222H} published $^{32}$S\/$^{34}$S ratios of 16.3$^{+2.1}_{-1.7}$ and 17.9 $\\pm$ 5.0 for the $+$50 km s$^{-1}$ cloud and Sgr B2(N) near the Galactic center, respectively. These are only slightly lower than the value of 22 in the Solar System. There is an obvious and confirmed $^{32}$S\/$^{34}$S gradient \\citep{1996A&A...305..960C, 2020ApJ...899..145Y} from the inner Galaxy out to a galactocentric distance of 12.0 kpc. 
Nevertheless, there is a lack of data at small and large galactocentric distances.\n\nWe are performing systematic observational studies on isotope ratios in the Milky Way, including $^{12}$C\/$^{13}$C \\citep{2019ApJ...877..154Y}, $^{14}$N\/$^{15}$N \\citep{2021ApJS..257...39C}, $^{18}$O\/$^{17}$O (\\citealt{2015ApJS..219...28Z}, \\citeyear{2020ApJS..249....6Z}, \\citeyear{2020IAUGA..30..278Z}; \\citealt{2016RAA....16...47L}), and $^{32}$S\/$^{34}$S \\citep{2020ApJ...899..145Y}. We have thus performed a more systematic study on CS and its isotopologs toward 110 high-mass star-forming regions (HMSFRs). $^{12}$C\/$^{13}$C and $^{32}$S\/$^{34}$S ratios can be directly derived from integrated $^{12}$C$^{34}$S\/$^{13}$C$^{34}$S (hereafter C$^{34}$S\/$^{13}$C$^{34}$S) and $^{13}$C$^{32}$S\/$^{13}$C$^{34}$S (hereafter $^{13}$CS\/$^{13}$C$^{34}$S, see Section \\ref{ratios_13c34s}) intensities, respectively. Also, $^{34}$S\/$^{33}$S and $^{34}$S\/$^{36}$S values can be obtained with measurements of C$^{34}$S, $^{12}$C$^{33}$S (hereafter C$^{33}$S), and $^{12}$C$^{36}$S (hereafter C$^{36}$S). Furthermore, $^{32}$S\/$^{33}$S and $^{32}$S\/$^{36}$S ratios can then be derived with the resulting $^{34}$S\/$^{33}$S and $^{34}$S\/$^{36}$S values combined with the $^{32}$S\/$^{34}$S ratios (see Sections \\ref{section_34s33s} to \\ref{section_32s36s}). In Section \\ref{sou_selection}, we describe the source selection and observations for our large sample. Section \\ref{results} presents our results on $^{12}$C\/$^{13}$C, $^{32}$S\/$^{34}$S, $^{34}$S\/$^{33}$S, $^{32}$S\/$^{33}$S, $^{34}$S\/$^{36}$S, and $^{32}$S\/$^{36}$S ratios. Section \\ref{discussion} discusses potential processes that could contaminate and affect the isotope ratios derived in the previous section and provides a detailed comparison with results from earlier studies. Our main results are summarized in Section \\ref{summary}. 
\n\n\n\n\\section{Source selection and observations}\n\\label{sou_selection}\n\n\\subsection{Sample selection and distance}\n\\label{section_distance}\n\nIn 2019, we selected 18 HMSFRs from the Galactic center region to the outer Galaxy beyond the Perseus arm. To enlarge this sample, we chose 92 sources from the Bar and Spiral Structure Legacy (BeSSeL) Survey\\footnote{http:\/\/bessel.vlbi-astrometry.org} in 2020. These 92 targets were recently released by the BeSSeL project \\citep{2019ApJ...885..131R} and not observed by \\citet{2020ApJ...899..145Y}. In total, 110 objects in the Galaxy are part of our survey. The coordinates of our sample sources are listed in Table \\ref{table_sources}. Determining trigonometric parallaxes is a very direct and accurate method to measure the distance of sources from the Sun \\citep{2009ApJ...700..137R, 2014ApJ...783..130R, 2019ApJ...885..131R}. Over the past decade, mainly thanks to the BeSSeL project, the trigonometric parallaxes of approximately 200 HMSFRs have been determined across the Milky Way through dedicated high-resolution observations of molecular maser lines. Therefore, this is a good opportunity to investigate carbon and sulfur isotope ratios with well-determined distances across the Galaxy. The galactocentric distance ($R_{\\rm GC}$) can be obtained with the heliocentric distance $d$ from the trigonometric parallax data base of the BeSSeL project using \\begin{equation}\nR_{\\rm GC} = \\sqrt{( R_0 \\cos(l) - d )^2 + R_0^2 \\sin^2(l)}\n,\\end{equation}\n\\citep{2009ApJ...699.1153R}. $R_0$ = 8.178 $\\pm$ 0.013$\\rm _{stat.}$ $\\pm$ 0.022$\\rm _{sys.}$ kpc \\citep{2019A&A...625L..10G} describes the distance from the Sun to the Galactic center, $l$ is the Galactic longitude of the source, and $d$ is the distance either directly derived from the trigonometric parallax data base of the BeSSeL project or a kinematic distance in cases where no such distance is yet available. 
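The equation above is straightforward to evaluate; the following minimal sketch (function name ours) assumes, as the equation does, that the source lies close to the Galactic plane:

```python
import math


def galactocentric_distance(d, l_deg, r0=8.178):
    """R_GC (kpc) from heliocentric distance d (kpc) and Galactic
    longitude l (degrees); R0 = 8.178 kpc from Gravity Collaboration
    et al. (2019). Assumes the source lies in the Galactic plane."""
    l = math.radians(l_deg)
    return math.sqrt((r0 * math.cos(l) - d) ** 2 + (r0 * math.sin(l)) ** 2)


# Toward l = 0 the relation reduces to |R0 - d|:
print(galactocentric_distance(3.0, 0.0))  # ~5.178 kpc
```

For a source at $l = 90\degr$ the expression reduces to $\sqrt{R_0^2 + d^2}$, a quick consistency check on the geometry.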
Because the uncertainty in $R_0$ is very small, it will be neglected in the following analysis. For 12 of our targets without trigonometric parallax data, we estimated their kinematic distances from the Revised Kinematic Distance calculator\\footnote{http:\/\/bessel.vlbi-astrometry.org\/revised\\_kd\\_2014} \\citep{2014ApJ...783..130R}. The resulting distances indicate that 6 of these 12 sources are located in the CMZ, namely SgrC, the $+$20 km~s$^{-1}$ cloud, the $+$50 km~s$^{-1}$ cloud, G0.25, G1.28$+$0.07, and SgrD. Four targets belong to the inner Galactic disk, namely PointC1, CloudD, Clump2, and PointD1. Two sources, WB89-380 and WB89-391, are in the outer regions beyond the Perseus arm. The heliocentric distances ($d$) and the galactocentric distances ($R_{\\rm GC}$) for our sample are listed in Columns 6 and 7 of Table~\\ref{table_sources}.\n\n\n\n\\subsection{Observations}\nWe observed the $J$ = 2-1 transitions of CS, C$^{33}$S, C$^{34}$S, C$^{36}$S, $^{13}$CS, $^{13}$C$^{33}$S, and $^{13}$C$^{34}$S as well as the $J$ = 3-2 transitions of C$^{33}$S, C$^{34}$S, C$^{36}$S, and $^{13}$CS toward 110 HMSFRs with the IRAM 30 meter telescope\\footnote{IRAM is supported by INSU\/CNRS (France), MPG (Germany) and IGN (Spain).} in 2019 June, July, and October under project 045-19 (PI, Christian Henkel) as well as in 2020 August within project 022-20 (PI, Hongzhi Yu). The on$+$off source integration times for our sources range from 4.0 minutes to 10.8 hours. These values are given in Appendix~\\ref{appendix_table} (Table \\ref{fitting_all}). The EMIR receiver with two bands, E090 and E150, was used to cover a bandwidth of $\\sim$16 GHz (from 90.8 to 98.2 GHz and 138.4 to 146.0 GHz) simultaneously in dual polarisation. We used the wide-mode FTS backend with a resolution of 195 kHz, corresponding to $\\sim$0.6 km s$^{-1}$ and $\\sim$0.4 km s$^{-1}$ at 96 GHz and 145 GHz, respectively. 
The observations were performed in total-power position-switching mode and the off position was set at 30$\\arcmin$ in azimuth. Pointing was checked every 2 hours using nearby quasars. Focus calibrations were done at the beginning of the observations and during sunset and sunrise toward strong quasars. The main beam brightness temperature, $T_{\\rm MB}$, was obtained from the antenna temperature $T_{\\rm A}^*$ via the relation $T_{\\rm MB}$ = $T_{\\rm A}^*\\times F_{\\rm eff}$\/$B_{\\rm eff}$ ($F_{\\rm eff}$: forward hemisphere efficiency; $B_{\\rm eff}$: main beam efficiency) with corresponding telescope efficiencies\\footnote{https:\/\/publicwiki.iram.es\/Iram30mEfficiencies}: $F_{\\rm eff}$\/$B_{\\rm eff}$ are 0.95\/0.81 and 0.93\/0.73 in the frequency ranges of 90.8-98.2 GHz and 138.4-146.0 GHz, respectively. The system temperatures were 100-160 K and 170-300 K on a $T_{\\rm A}^*$ scale for the E090 and E150 band observations. The half power beam width (HPBW) for each transition was calculated as HPBW($\\arcsec$)=2460\/$\\nu$(GHz). 
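The intensity-scale and beam-size conversions just described can be summarized in a small sketch (function names are ours; the efficiency values apply only to the IRAM 30 m bands quoted above):

```python
def t_mb(t_a_star, f_eff, b_eff):
    """Main beam brightness temperature from the antenna temperature:
    T_MB = T_A* x F_eff / B_eff."""
    return t_a_star * f_eff / b_eff


def hpbw_arcsec(freq_ghz):
    """IRAM 30 m half power beam width: HPBW(arcsec) = 2460 / nu(GHz)."""
    return 2460.0 / freq_ghz


# E090 band (Feff/Beff = 0.95/0.81): a 1.0 K T_A* peak corresponds to
# ~1.17 K on the T_MB scale; the CS J = 2-1 line at ~97.98 GHz is
# observed with a ~25.1 arcsec beam.
```

The 2-1 and 3-2 lines thus sample beams of roughly 25-27 arcsec and 17-18 arcsec, respectively, which is worth keeping in mind when comparing intensities between the two bands.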
Rest frequencies, excitation energies of the upper levels above the ground state, Einstein coefficients for spontaneous emission, and respective beam sizes are listed in Table~\\ref{table_linelist}.\n\n\n\n\n\\begin{table*}[h]\n\\caption{Observed spectral line parameters\\tablefootmark{a}.}\n\\centering\n\\begin{tabular}{cccccc}\n\\hline\\hline\nIsotopolog & Transition & $\\nu_0$\\tablefootmark{b} & $E_{\\rm up}$\\tablefootmark{c} & $A_{\\rm u,l}$\\tablefootmark{d} & HPBW\\tablefootmark{e} \\\\\n & & (MHz) & (K) & (s$^{-1}$) & ($\\arcsec$) \\\\\n\\hline\n\\label{table_linelist}\nCS & 2-1 & 97980.953 & 7.1 & 1.68 $\\times$ 10$^{-5}$ & 25.1 \\\\\nC$^{33}$S & 2-1 & 97172.064 & 7.0 & 1.64 $\\times$ 10$^{-5}$ & 25.3 \\\\\nC$^{34}$S & 2-1 & 96412.950 & 6.9 & 1.60 $\\times$ 10$^{-5}$ & 25.5 \\\\\nC$^{36}$S & 2-1 & 95016.722 & 6.8 & 1.53 $\\times$ 10$^{-5}$ & 25.9 \\\\\n$^{13}$CS & 2-1 & 92494.308 & 6.7 & 1.41 $\\times$ 10$^{-5}$ & 26.6 \\\\\n$^{13}$C$^{33}$S & 2-1 & 91685.241 & 6.6 & 1.38 $\\times$ 10$^{-5}$ & 26.8 \\\\\n$^{13}$C$^{34}$S & 2-1 & 90926.026 & 6.5 & 1.34 $\\times$ 10$^{-5}$ & 27.1 \\\\\nC$^{33}$S & 3-2 & 145755.732 & 14.0 & 5.92 $\\times$ 10$^{-5}$ & 16.9 \\\\\nC$^{34}$S & 3-2 & 144617.101 & 13.9 & 5.78 $\\times$ 10$^{-5}$ & 17.0 \\\\\nC$^{36}$S & 3-2 & 142522.785 & 13.7 & 5.54 $\\times$ 10$^{-5}$ & 17.3 \\\\\n$^{13}$CS & 3-2 & 138739.335 & 13.3 & 5.11 $\\times$ 10$^{-5}$ & 17.7 \\\\\n\\hline\n\\end{tabular}\n\\tablefoot{\n\\tablefoottext{a}{From the Cologne Database for Molecular Spectroscopy \\citep[CDMS,][]{2005JMoSt.742..215M,2016JMoSp.327...95E}.}\n\\tablefoottext{b}{Rest frequency.}\n\\tablefoottext{c}{Upper energy level.}\n\\tablefoottext{d}{Einstein coefficient for spontaneous emission from upper $u$ to lower $l$ level.}\n\\tablefoottext{e}{Half power beam width.}}\n\\end{table*}\n\n\\subsection{Data reduction}\n\\label{datareduction}\n\nWe used the GILDAS\/CLASS\\footnote{https:\/\/www.iram.fr\/IRAMFR\/GILDAS\/} package to analyze the spectral line data. 
The spectra of the $J$ = 2-1 transitions of CS, C$^{33}$S, C$^{34}$S, C$^{36}$S, $^{13}$CS, $^{13}$C$^{33}$S, and $^{13}$C$^{34}$S as well as the $J$ = 3-2 transitions of C$^{33}$S, C$^{34}$S, C$^{36}$S, and $^{13}$CS toward one of our targets, DR21, are shown in Fig.~\\ref{fig_dr21}, after subtracting first-order polynomial baselines and applying Hanning smoothing. The spectra of all 110 targets, also after first-order polynomial-baseline removal and Hanning smoothing, are presented in Appendix~\\ref{appendix_spectra} (Fig.~\\ref{spectra_all}). \n\n\n\n\nAmong our sample of 110 targets, we detected the $J$ = 2-1 line of CS toward 106 sources, which yields a detection rate of 96\\%. The $J$ = 2-1 transitions of C$^{34}$S, $^{13}$CS, C$^{33}$S, and $^{13}$C$^{34}$S were successfully detected in 90, 82, 46, and 17 of our sources with signal-to-noise (S\/N) ratios of greater than 3, respectively. The $J$ = 3-2 lines of C$^{34}$S, $^{13}$CS, and C$^{33}$S were detected in 87, 71, and 42 objects with S\/Ns of $\\ge$3.0. Line parameters from Gaussian fitting are listed in Appendix~\\ref{appendix_table} (Table \\ref{fitting_all}). Relevant for the evaluation of isotope ratios is the fact that for 17 sources with 19 velocity components, the S\/Ns of the $J$ = 2-1 transition of $^{13}$C$^{34}$S are greater than 3, which allows us to determine the $^{12}$C\/$^{13}$C and $^{32}$S\/$^{34}$S ratios directly with the $J$ = 2-1 lines of C$^{34}$S, $^{13}$CS, and $^{13}$C$^{34}$S. Toward 82 targets with 90 radial velocity components, the $J$ = 2-1 transitions of C$^{34}$S and $^{13}$CS were both detected with S\/Ns of $\\ge$3.0. The $J$ = 3-2 lines of C$^{34}$S and $^{13}$CS were both found in 71 objects with 73 radial velocity components and S\/Ns of $\\ge$3.0. 
Furthermore, the $J$ = 2-1 and $J$ = 3-2 transitions of C$^{34}$S and C$^{33}$S were detected with S\/Ns of $\\ge$3.0 toward 46 and 42 sources, respectively.\n\n\\begin{figure}[h]\n\\centering\n \\includegraphics[width=0.3\\textwidth]{spectra\/DR21.eps}\n\\caption{Line profiles of the $J$ = 2-1 transitions of CS, C$^{33}$S, C$^{34}$S, C$^{36}$S, $^{13}$CS, $^{13}$C$^{33}$S, and $^{13}$C$^{34}$S as well as the $J$ = 3-2 transitions of C$^{33}$S, C$^{34}$S, C$^{36}$S, and $^{13}$CS toward one typical target (DR21) of our large sample of 110 sources, after subtracting first-order polynomial baselines. The main beam brightness temperature scales are presented on the left-hand side of the profiles. The spectra of all 110 objects in our sample are shown in Appendix~\\ref{appendix_spectra} (Fig.~\\ref{spectra_all}).}\n \\label{fig_dr21}\n\\end{figure}\n\nThe C$^{36}$S $J$ = 2-1 line was successfully detected with S\/Ns of $\\ge$3.0 toward three targets, namely W3OH, the $+$50 km~s$^{-1}$ cloud near the Galactic center, and DR21. As C$^{36}$S and $^{13}$C$^{33}$S are the least abundant among the CS isotopologs, tentative detections with S\/Ns of $\\sim$2.0 are also presented here but not included in further analyses. In another five objects, the C$^{36}$S $J$ = 2-1 line was tentatively detected. For the C$^{36}$S $J$ = 3-2 transition, we report one detection with an S\/N larger than 3.0 toward Orion-KL and five tentative detections. The $J$ = 2-1 lines of $^{13}$C$^{33}$S were tentatively detected toward three sources, namely Orion-KL, W51-IRS2, and DR21.\nIntegration times and 1$\\sigma$ noise levels of the observed transitions are listed in Columns 3 and 4 of Table~\\ref{fitting_all} for each target.\n\n\n\n\n\\section{Results}\n\\label{results}\nIn the following, we first estimate the optical depths of the various lines to avoid problems with line saturation that might affect our results. 
We then present the carbon and sulfur isotope ratios derived from different detected CS isotopologs.\n\n\\subsection{Optical depth}\n\\label{section_opacities}\n\nThe main isotopolog, CS, is usually optically thick in massive star-forming regions (e.g. \\citealt{1980ApJ...235..437L}; \\citealt{2020ApJ...899..145Y}). Therefore, the $^{12}$C\/$^{13}$C and $^{32}$S\/$^{34}$S ratios cannot be determined from the line intensity ratios of $I$(CS)\/$I$($^{13}$CS) and $I$(CS)\/$I$(C$^{34}$S). However, assuming that the $J$ = 2-1 transitions of CS, C$^{34}$S, and $^{13}$CS share the same beam filling factor and excitation temperature, we can estimate the maximum optical depth of the $^{13}$CS $J$ = 2-1 line from:\n\\begin{equation}\n\\frac{T_{\\rm {mb}}(^{12}\\rm C\\rm S)}{T_{\\rm mb}(^{13}\\rm C\\rm S)} \\sim \\frac{1 - e^{-\\tau(^{13}\\rm C\\rm S)R_C}}{1 - e^{-\\tau(^{13}\\rm C\\rm S)}},\\,R_C = \\frac{^{12}\\rm C}{^{13}\\rm C},\n\\end{equation}\nwhere $T_{\\rm {mb}}$ is the peak main beam brightness temperature derived from the best Gaussian-fitting result and listed in Column 8 of Table~\\ref{fitting_all}. In this case, the $^{12}$C\/$^{13}$C ratios can be derived from the integrated line intensities of C$^{34}$S and $^{13}$C$^{34}$S with the assumption of $\\tau(\\rm C^{34}\\rm S) \\textless 1.0$, which then also implies $\\tau(^{13}\\rm C^{34}\\rm S) \\textless 1.0$ (see details in Section \\ref{ratios_13c34s}). Multiplying $\\tau(^{13}\\rm CS)$ by $R_C$ = $^{12}$C\/$^{13}$C, we can get the peak opacity $\\tau(\\rm CS)$ = $\\tau(^{13}{\\rm CS})R_C$. The maximum optical depth of C$^{34}$S can be obtained from:\n\\begin{equation}\n\\frac{T_{\\rm mb}(\\rm C\\rm ^{34}S)}{T_{\\rm mb}(^{13}\\rm C\\rm S)} = \\frac{1 - e^{-\\tau(\\rm C\\rm ^{34}S)}}{1 - e^{-\\tau(^{13}\\rm C\\rm S)}},\n\\end{equation}\nwhere $T_{\\rm {mb}}$ is the peak main beam brightness temperature derived from the best Gaussian-fitting result and listed in Column 8 of Table~\\ref{fitting_all}. 
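Because the right-hand side of the first ratio relation decreases monotonically from $R_C$ (optically thin limit) to 1 (optically thick limit), it can be inverted numerically for $\tau(^{13}\rm CS)$. A minimal sketch under the stated assumptions (shared beam filling factor and excitation temperature; function names are ours):

```python
import math


def solve_tau_13cs(t_ratio, r_c, hi=10.0, tol=1e-10):
    """Invert T_mb(CS)/T_mb(13CS) = (1 - exp(-tau*R_C)) / (1 - exp(-tau))
    for tau(13CS) by bisection; valid for 1 < t_ratio < R_C."""
    lo = 1e-12
    f = lambda tau: (1.0 - math.exp(-tau * r_c)) / (1.0 - math.exp(-tau))
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(mid) > t_ratio:   # ratio still too high -> tau must be larger
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)


def opacity_correction(tau):
    """Correction factor f = tau / (1 - exp(-tau)) applied to the
    velocity-integrated intensities of the optically thin lines."""
    return tau / (1.0 - math.exp(-tau))
```

The peak opacity of CS then follows as $\tau({\rm CS}) = \tau(^{13}{\rm CS})\,R_C$, and the same correction factor form yields $f_1$ and $f_2$ below.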
As shown in Table \\ref{table_13c34sresults}, the peak optical depths of the $J$ = 2-1 lines of CS, C$^{34}$S, and $^{13}$CS for our 17 targets with detections of $^{13}$C$^{34}$S range from 1.29 to 8.79, 0.12 to 0.55, and 0.05 to 0.34, respectively. Therefore, C$^{34}$S and $^{13}$CS in these 17 objects are optically thin, even though they belong, on average, to the more opaque ones, being successfully detected\nin $^{13}$C$^{34}$S (see below). Nevertheless, the corrections for optical depth are applied to C$^{34}$S and $^{13}$CS with factors of $f_1$ and $f_2$, respectively.\n\\begin{equation}\nf_1=\\frac{\\tau(\\rm C\\rm ^{34}S)}{1-e^{-\\tau(\\rm C\\rm ^{34}S)}} {\\quad\\rm and}\n\\end{equation}\n\\begin{equation}\nf_2=\\frac{\\tau(^{13}\\rm C\\rm S)}{1-e^{-\\tau(^{13}\\rm C\\rm S)}}\n\\end{equation}\nare listed in Columns 7 and 8 of Table~\\ref{table_13c34sresults}, respectively.\n\nFor those 82 sources with detections of $J$ = 2-1 CS, C$^{34}$S, and $^{13}$CS, the optical depths were calculated based on the $^{12}$C\/$^{13}$C gradient that we derived from our C$^{34}$S and $^{13}$C$^{34}$S measurements (for details, see Section \\ref{ratios_13c34s}). In Table \\ref{table_doubleisotope}, the peak opacities of the $J$ = 2-1 lines of CS, C$^{34}$S, and $^{13}$CS for these 82 targets range from 0.34 to 14.48, 0.02 to 0.74, and 0.01 to 0.39, respectively. The CS $J$ = 2-1 lines are optically thick with $\\tau(\\rm CS) \\textgreater 1.0$ in most sources (89\\%) of our sample, while they tend to be optically thin in seven objects, namely Point C1 ($\\tau(\\rm CS) \\leq 0.59$), Sgr C ($\\tau(\\rm CS) \\leq 0.54$), Cloud D ($\\tau(\\rm CS) \\leq 0.64$), G1.28$+$0.07 ($\\tau(\\rm CS) \\leq 0.82$), Sgr D ($\\tau(\\rm CS) \\leq 0.45$), and Point D1 ($\\tau(\\rm CS) \\leq 0.34$). 
In contrast, the transitions from rare isotopologs, the C$^{34}$S and $^{13}$CS $J$ = 2-1 lines in our sample, are all optically thin, as their maximum optical depths are less than 0.8 and 0.4, respectively. In the following, we are therefore motivated to consider all CS isotopologs as optically thin, except CS itself. This allows us to use ratios of integrated intensity of all the rare CS isotopologs ---but not CS itself--- to derive the carbon and sulfur isotope ratios we intend to study. Small corrections accounting for the optical depths are applied to the C$^{34}$S and $^{13}$CS $J$ = 2-1 lines with factors of $f_1$ and $f_2$, respectively, and are listed in Columns 7 and 8 of Table~\\ref{table_doubleisotope}. The optical depths of the $J$ = 3-2 transitions cannot be estimated, as the CS $J$ = 3-2 line was not covered by our observations because of bandwidth limitations. However, the $^{32}$S\/$^{34}$S ratios for a given source obtained through the double isotope method from the $J$ = 2-1 and $J$ = 3-2 transitions are in good agreement, indicating that the C$^{34}$S and $^{13}$CS $J$ = 3-2 lines in our sample are also optically thin (see details in Section~\\ref{section_double_32s34s}). \n\nThe RADEX non-local thermodynamic equilibrium (non-LTE) model \\citep{2007A&A...468..627V} was used to calculate the variation of excitation temperature, $T_{ex}$, with optical depth. Frequencies, energy levels, and Einstein A coefficients for spontaneous emission were taken from the Cologne Database for Molecular Spectroscopy \\citep[CDMS;][]{2005JMoSt.742..215M,2016JMoSp.327...95E}. Recent collision rates for CS with para- and ortho-H$_2$ \\citep{2018MNRAS.478.1811D} were used. Figure~\\ref{fig_c34stex} shows the excitation temperatures and opacities of C$^{34}$S $J$ = 2-1 for a kinetic temperature of 30 K and a molecular hydrogen density of 10$^5$ cm$^{-3}$. 
Variations of $T_{ex}$ within about 2 K for our sample targets with optical depths of 0.02 $\\leq \\tau(\\rm C^{34}S) \\leq 0.74$ can barely affect our results.\n\n\\begin{figure}[h]\n\\centering\n \\includegraphics[width=0.5\\textwidth]{newFigures\/c34s21-tau-tex.png}\n\\caption{Excitation temperature, $T_{ex}$, as a function of optical depth for the $J$ = 2-1 transition of C$^{34}$S. The gray dashed lines indicate the range of opacities for our sample sources.}\n \\label{fig_c34stex}\n\\end{figure}\n\n\n\n\n\\begin{figure*}[h]\n\\centering\n \\includegraphics[width=460pt]{newFigures\/1allratio12c_13c-lowerlimits.pdf}\n \\caption{$^{12}$C\/$^{13}$C isotope ratios from C$^{34}$S\/$^{13}$C$^{34}$S, CN\/$^{13}$CN, C$^{18}$O\/$^{13}$C$^{18}$O, H$_2$CO\/H$_2^{13}$CO, CH$^+$\/$^{13}$CH$^+$, and CH\/$^{13}$CH are plotted as functions of the distance from the Galactic center. The red symbol $\\odot$ indicates the $^{12}$C\/$^{13}$C isotope ratio of the Sun. The filled black circles are the results obtained from C$^{34}$S with corrections of opacity in the current work, and the resulting first-order polynomial fit is plotted as a solid line, with the gray-shaded area showing the 1$\\sigma$ interval of the fit. The open black circles are the 3$\\sigma$ lower limits obtained from nondetections of $^{13}$C$^{34}$S in the current work. The blue triangles, orange pentagons, yellow stars, green squares, and green diamonds are values determined from CN \\citep{2002ApJ...578..211S,2005ApJ...634.1126M}, C$^{18}$O \\citep{1990ApJ...357..477L,1996A&AS..119..439W,1998ApJ...494L.107K}, H$_2$CO \\citep{1980A&A....82...41H,1982A&A...109..344H,1983A&A...127..388H,1985A&A...143..148H,2019ApJ...877..154Y}, CH$^+$ \\citep{2011ApJ...728...36R}, and CH \\citep{2020A&A...640A.125J}, respectively, using the most up-to-date distances. The red crosses visualize the results from the GCE model of \\citet[][see also Section~\\ref{section_discussion_model}]{2011MNRAS.414.3231K,2020ApJ...900..179K}. 
}\n \\label{fig_gradient_12C13C}\n\\end{figure*}\n\n\n\\begin{table*}[h]\n\\caption{Isotope ratios derived with the $J$ = 2-1 transitions of C$^{34}$S, $^{13}$CS, and $^{13}$C$^{34}$S.}\n\\centering\n\\small\n\\begin{tabular}{lcc|ccc|cc|cc}\n\\hline\\hline\nSource & $V_{\\rm LSR}$ & $R_{GC}$ & \\multicolumn{3}{c}{Optical depth} & \\multicolumn{2}{c}{Corrections for} & $^{12}$C\/$^{13}$C & $^{32}$S\/$^{34}$S \\\\\n & & & & & & \\multicolumn{2}{c}{optical depth} & & \\\\\n & (km s$^{-1}$) & (kpc) & CS & C$^{34}$S & $^{13}$CS & $f_1$ & $f_2$ & & \\\\\n\\hline\n\\label{table_13c34sresults}\nW3OH & -46.96 & 9.64 $\\pm$ 0.03 & 8.289 $\\pm$ 0.083 & 0.292 $\\pm$ 0.003 & 0.127 $\\pm$ 0.001 & 1.2 & 1.1 & 75.30 $\\pm$ 4.27 & 29.80 $\\pm$ 1.68 \\\\\nOrion-KL & 8.17 & 8.54 $\\pm$ 0.00 & 3.722 $\\pm$ 0.037 & 0.180 $\\pm$ 0.002 & 0.074 $\\pm$ 0.001 & 1.1 & 1.0 & 54.89 $\\pm$ 12.90 & 21.24 $\\pm$ 4.67 \\\\\nG359.61$-$00.24 & 19.33 & 5.51 $\\pm$ 0.15 & 3.531 $\\pm$ 0.035 & 0.204 $\\pm$ 0.002 & 0.089 $\\pm$ 0.001 & 1.1 & 1.0 & 43.85 $\\pm$ 11.95 & 17.31 $\\pm$ 4.79 \\\\\n$+$50~km~s$^{-1}$~cloud & 46.79 & 0.02 $\\pm$ 0.04 & 4.278 $\\pm$ 0.043 & 0.192 $\\pm$ 0.002 & 0.151 $\\pm$ 0.002 & 1.1 & 1.1 & 31.22 $\\pm$ 2.06 & 21.71 $\\pm$ 1.40 \\\\\nSgrB2 & 53.18 & 0.55 $\\pm$ 0.05 & 1.290 $\\pm$ 0.013 & 0.140 $\\pm$ 0.001 & 0.113 $\\pm$ 0.001 & 1.1 & 1.1 & 12.18 $\\pm$ 0.73 & 9.77 $\\pm$ 0.58 \\\\\nSgrB2 & 66.54 & 0.44 $\\pm$ 0.70 & 8.119 $\\pm$ 0.081 & 0.523 $\\pm$ 0.005 & 0.341 $\\pm$ 0.003 & 1.3 & 1.2 & 30.62 $\\pm$ 2.04 & 18.99 $\\pm$ 1.31 \\\\\nSgrB2 & 83.38 & 0.41 $\\pm$ 0.02 & 2.417 $\\pm$ 0.024 & 0.097 $\\pm$ 0.001 & 0.077 $\\pm$ 0.001 & 1.0 & 1.0 & 32.90 $\\pm$ 5.65 & 26.24 $\\pm$ 4.49 \\\\\nG006.79$-$00.25 & 20.87 & 4.75 $\\pm$ 0.25 & 8.789 $\\pm$ 0.088 & 0.484 $\\pm$ 0.005 & 0.242 $\\pm$ 0.002 & 1.3 & 1.1 & 45.82 $\\pm$ 8.39 & 19.46 $\\pm$ 3.63 \\\\\nG010.32$-$00.15 & 11.99 & 5.34 $\\pm$ 0.29 & 5.039 $\\pm$ 0.050 & 0.237 $\\pm$ 0.002 & 0.111 $\\pm$ 0.001 & 1.1 & 1.1 & 50.95 $\\pm$ 
11.58 & 21.55 $\\pm$ 5.05 \\\\\nG019.36$-$00.03 & 26.40 & 5.58 $\\pm$ 0.49 & 6.789 $\\pm$ 0.068 & 0.332 $\\pm$ 0.003 & 0.141 $\\pm$ 0.001 & 1.2 & 1.1 & 56.41 $\\pm$ 14.37 & 21.19 $\\pm$ 5.53 \\\\\nG024.78$+$00.08 & 110.72 & 3.51 $\\pm$ 0.15 & 8.038 $\\pm$ 0.080 & 0.554 $\\pm$ 0.006 & 0.322 $\\pm$ 0.003 & 1.3 & 1.2 & 32.56 $\\pm$ 4.25 & 16.08 $\\pm$ 2.12 \\\\\nG028.39$+$00.08 & 78.01 & 4.83 $\\pm$ 0.17 & 8.705 $\\pm$ 0.087 & 0.443 $\\pm$ 0.004 & 0.291 $\\pm$ 0.003 & 1.2 & 1.2 & 37.04 $\\pm$ 10.05 & 18.74 $\\pm$ 5.12 \\\\\nG028.83$-$00.25 & 87.19 & 4.50 $\\pm$ 0.48 & 7.283 $\\pm$ 0.073 & 0.388 $\\pm$ 0.004 & 0.183 $\\pm$ 0.002 & 1.2 & 1.1 & 47.93 $\\pm$ 10.00 & 19.73 $\\pm$ 4.16 \\\\\nG030.70$-$00.06 & 89.94 & 4.20 $\\pm$ 0.10 & 7.088 $\\pm$ 0.071 & 0.448 $\\pm$ 0.004 & 0.263 $\\pm$ 0.003 & 1.2 & 1.1 & 33.39 $\\pm$ 2.77 & 18.07 $\\pm$ 1.52 \\\\\nG030.74$-$00.04 & 91.82 & 5.76 $\\pm$ 0.36 & 5.924 $\\pm$ 0.059 & 0.404 $\\pm$ 0.004 & 0.191 $\\pm$ 0.002 & 1.2 & 1.1 & 37.63 $\\pm$ 10.87 & 15.61 $\\pm$ 4.59 \\\\\nG030.81$-$00.05 & 98.89 & 5.73 $\\pm$ 0.24 & 4.691 $\\pm$ 0.047 & 0.306 $\\pm$ 0.003 & 0.187 $\\pm$ 0.002 & 1.2 & 1.1 & 29.07 $\\pm$ 2.99 & 14.97 $\\pm$ 1.44 \\\\\nW51-IRS2 & 61.07 & 6.22 $\\pm$ 0.06 & 1.883 $\\pm$ 0.019 & 0.119 $\\pm$ 0.001 & 0.052 $\\pm$ 0.001 & 1.1 & 1.0 & 38.26 $\\pm$ 4.35 & 16.70 $\\pm$ 1.92 \\\\\nDR21 & -2.49 & 8.10 $\\pm$ 0.00 & 7.079 $\\pm$ 0.071 & 0.270 $\\pm$ 0.003 & 0.106 $\\pm$ 0.001 & 1.1 & 1.1 & 76.22 $\\pm$ 7.57 & 27.48 $\\pm$ 2.84 \\\\\nNGC7538 & -57.12 & 9.47 $\\pm$ 0.07 & 3.433 $\\pm$ 0.034 & 0.131 $\\pm$ 0.001 & 0.050 $\\pm$ 0.001 & 1.1 & 1.0 & 73.19 $\\pm$ 12.54 & 26.72 $\\pm$ 4.57 \\\\\n\\hline\n\\end{tabular}\n\\tablefoot{ Velocities were obtained from measurements of C$^{34}$S, see Table \\ref{fitting_all} in Appendix \\ref{appendix_table}.}\n\\end{table*}\n\n\\subsection{$^{12}$C\/$^{13}$C and $^{32}$S\/$^{34}$S ratios derived directly from $^{13}$C$^{34}$S}\n\\label{ratios_13c34s}\n\n\\subsubsection{$^{12}$C\/$^{13}$C 
ratios}\n\\label{section_results_12c13c}\n\nThe $^{12}$C\/$^{13}$C ratios derived from the integrated intensity ratios of C$^{34}$S and $^{13}$C$^{34}$S with corrections of optical depth are listed in Table~\\ref{table_13c34sresults}. Figure~\\ref{fig_gradient_12C13C} shows our results as filled black circles. A gradient of $^{12}$C\/$^{13}$C is obtained with an unweighted least-squares fit:\n\\begin{equation}\n^{12}{\\rm C}\/^{13}{\\rm C} = (4.77 \\pm 0.81)R_{\\rm GC}+(20.76 \\pm 4.61).\n\\end{equation}\nThe correlation coefficient is 0.82. Around the CMZ toward the $+$50 km~s$^{-1}$ cloud and SgrB2, four velocity components of C$^{34}$S and $^{13}$C$^{34}$S were detected and then an average $^{12}$C\/$^{13}$C value of 27~$\\pm$~3 is derived. The uncertainties given here and below are standard deviations of the mean. Eleven objects within a range of 3.50 kpc < $R_{\\rm GC}$ < 6.50 kpc in the inner Galactic disk lead to an average $^{12}$C\/$^{13}$C value of 41~$\\pm$~9. In the Local arm near the Sun, the $^{13}$C$^{34}$S lines were detected toward two sources, Orion-KL and DR21. These provide an average $^{12}$C\/$^{13}$C value of 66~$\\pm$~10, which is lower than the Solar System ratio. The other two targets beyond the solar neighborhood belong to the Perseus arm and show a slightly higher value of 74~$\\pm$~8.\n\nFor sources with detections of C$^{34}$S and nondetections of $^{13}$C$^{34}$S, 3$\\sigma$ lower limits of the $^{12}$C\/$^{13}$C ratio have been derived and are shown as open black circles in Fig.~\\ref{fig_gradient_12C13C}. All these lower limits are below the $^{12}$C\/$^{13}$C gradient we describe above. 
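The unweighted fit and correlation coefficient quoted above follow from standard least squares. A compact sketch (not the authors' code) applicable to any set of $(R_{\rm GC}, {\rm ratio})$ pairs:

```python
def fit_gradient(x, y):
    """Unweighted least-squares line y = a*x + b and the Pearson
    correlation coefficient r."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    syy = sum((yi - my) ** 2 for yi in y)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    a = sxy / sxx          # slope
    b = my - a * mx        # intercept
    r = sxy / (sxx * syy) ** 0.5
    return a, b, r
```

Applied to the C$^{34}$S\/$^{13}$C$^{34}$S data points of Table~\ref{table_13c34sresults}, this reproduces the slope and intercept above; the quoted uncertainties additionally require the standard errors of the fitted parameters.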
\n\n\n\n\n\n\\subsubsection{$^{32}$S\/$^{34}$S ratios}\n\\label{section_ratios3234_13c34s}\n\nThe $^{32}$S\/$^{34}$S ratios directly derived from the integrated intensity ratios of $^{13}$CS\/$^{13}$C$^{34}$S from the $J$ = 2-1 lines with corrections of optical depth are listed in Table~\\ref{table_13c34sresults} and are plotted as a function of galactocentric distance in Fig.~\\ref{fig_gradient_32S34S}. With an unweighted least-squares fit, a gradient with a correlation coefficient of 0.47 can be obtained:\n\\begin{equation}\n^{32}{\\rm S}\/^{34}{\\rm S} =(0.73 \\pm 0.36)R_{\\rm GC}+(16.50 \\pm 2.07).\n\\end{equation}\nAn average $^{32}$S\/$^{34}$S ratio of 19~$\\pm$~2 is obtained in the CMZ, which is based on the measurements from two sources, namely the $+$50 km~s$^{-1}$ cloud next to the Galactic center and Sgr B2. In the inner Galactic disk at a range of 3.50 kpc < $R_{\\rm GC}$ < 6.50 kpc, $^{13}$C$^{34}$S was detected toward 11 objects, leading to an average $^{32}$S\/$^{34}$S value of 18~$\\pm$~4. For sources in the Local and the Perseus arm beyond the Sun, the $^{32}$S\/$^{34}$S ratios are 24~$\\pm$~4 and 28~$\\pm$~3, respectively. This reveals a gradient from the inner Galactic disk to the outer Galaxy, but none from the CMZ to the inner disk.\n\nFor sources with detections of $^{13}$CS and nondetections of $^{13}$C$^{34}$S, we determined 3$\\sigma$ lower limits to the $^{32}$S\/$^{34}$S ratio, which are shown as open black circles in Fig.~\\ref{fig_gradient_32S34S}. All these lower limits are below the $^{32}$S\/$^{34}$S gradient we describe above. \n\n\n\n\\begin{figure*}[h]\n\\centering\n \\includegraphics[height=600pt]{newFigures\/1Allratio32s_34s_new-lower.pdf}\n \\caption{$^{32}$S\/$^{34}$S isotope ratios as functions of the distance from the Galactic center. The symbol $\\odot$ indicates the $^{32}$S\/$^{34}$S isotope ratio in the Solar System. 
In the upper panel, the $^{32}$S\/$^{34}$S ratios directly derived from $^{13}$CS\/$^{13}$C$^{34}$S in the $J$ = 2-1 transition and obtained from the double isotope method in the $J$ = 2-1 transition with corrections for optical depth are plotted as black and green dots, respectively. The $^{32}$S\/$^{34}$S ratios without corrections of opacity in the $J$ = 3-2 transition are plotted as light blue dots. The 3$\\sigma$ lower limits of $^{32}$S\/$^{34}$S ratios obtained from nondetections of $^{13}$C$^{34}$S in the current work are shown as open black circles. The $^{32}$S\/$^{34}$S ratios in \\citet{2020ApJ...899..145Y} derived from the double isotope method in the $J$ = 2-1 transitions are shown as blue dots in the lower panel. The $^{32}$S\/$^{34}$S values in the CMZ obtained from $^{13}$CS\/$^{13}$C$^{34}$S in \\citet{2020A&A...642A.222H} are plotted as red stars in both panels. The resulting first-order polynomial fits to $^{32}$S\/$^{34}$S ratios with the direct method and the double isotope method from the $J$ = 2-1 transition in this work are plotted as black and red solid lines in the two panels, respectively, with the gray and yellow shaded areas showing the 1~$\\sigma$ intervals of the fits. The magenta and cyan dashed-dotted lines show the $^{32}$S\/$^{34}$S gradients from \\citet{1996A&A...305..960C} and \\citet{2020ApJ...899..145Y}. The red crosses visualize the results from the GCE model of \\citet[][see also Section~\\ref{section_discussion_model}]{2011MNRAS.414.3231K,2020ApJ...900..179K}. 
}\n \\label{fig_gradient_32S34S}\n\\end{figure*}\n\n\n\n\\subsection{$^{32}$S\/$^{34}$S ratios obtained through the double isotope method}\n\\label{section_double_32s34s}\n\nThe $^{32}$S\/$^{34}$S values can also be derived from measurements of C$^{34}$S and $^{13}$CS using the carbon gradient obtained from our $^{13}$C$^{34}$S measurements above by applying the following equation:\n\\begin{equation}\n\\frac{^{32}{\\rm S}}{^{34}{\\rm S}} = R_{\\rm C} \\frac{I(^{13}{\\rm CS})}{I(\\rm C^{34}{\\rm S})},\n\\end{equation}\nwhere $R_{\\rm C}$ is the $^{12}$C\/$^{13}$C ratio derived from equation (6). The uncertainty of the latter is also included in our error budget. The $^{32}$S\/$^{34}$S ratios in the $J$ = 2-1 transitions were calculated with corrections of optical depth for 83 targets with 90 radial velocity components, in which the C$^{34}$S and $^{13}$CS $J$ = 2-1 lines were both detected, and are listed in Column 7 of Table \\ref{table_doubleisotope}. An unweighted least-squares fit to these values yields \n\\begin{equation}\n^{32}{\\rm S}\/^{34}{\\rm S} (2-1) = (0.75 \\pm 0.13)R_{\\rm GC}+(15.52 \\pm 0.78), \n\\end{equation}\nwith a correlation coefficient of 0.54. The C$^{34}$S and $^{13}$CS $J$ = 3-2 lines were both detected in 71 objects with 73 radial velocity components. The $^{32}$S\/$^{34}$S ratios derived with equation (8) from the $J$ = 3-2 transition are shown in Column 8 of Table \\ref{table_doubleisotope}. An unweighted least-squares fit to the $J$ = 3-2 transition yields \n\\begin{equation}\n^{32}{\\rm S}\/^{34}{\\rm S} (3-2) = (0.99 \\pm 0.14)R_{\\rm GC}+(16.05 \\pm 0.95), \n\\end{equation}\nwhich is consistent, within the errors, with the trend obtained from the $J$ = 2-1 transition (equation (9)). 
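The double isotope method of equation (8) is straightforward to script. The Python sketch below propagates the uncertainty of $R_{\rm C}$ from equation (6) together with an assumed 10\% fractional error on each integrated intensity; the input intensities and the neglect of the slope--intercept covariance are illustrative assumptions on our part, not values from this work.

```python
import math

def carbon_ratio(r_gc):
    """12C/13C from the C34S/13C34S gradient of equation (6):
    (4.77 +/- 0.81) R_GC + (20.76 +/- 4.61).  The covariance between
    slope and intercept is neglected, so the error is only indicative."""
    value = 4.77 * r_gc + 20.76
    err = math.hypot(0.81 * r_gc, 4.61)
    return value, err

def s32_s34_double_isotope(i_13cs, i_c34s, r_gc, rel_int_err=0.1):
    """32S/34S = R_C * I(13CS) / I(C34S), propagating the uncertainty of R_C
    and an assumed fractional error on each integrated intensity in quadrature."""
    r_c, r_c_err = carbon_ratio(r_gc)
    ratio = r_c * i_13cs / i_c34s
    rel = math.sqrt((r_c_err / r_c) ** 2 + 2 * rel_int_err ** 2)
    return ratio, ratio * rel

# Hypothetical integrated intensities for a source at R_GC = 5 kpc:
ratio, err = s32_s34_double_isotope(i_13cs=1.0, i_c34s=2.0, r_gc=5.0)
```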
However, we note that we do not have CS $J$ = 3-2 data, and therefore no opacity corrections could be applied to our C$^{34}$S $J$ = 3-2 spectra (see also Sect.~\\ref{section_34s33s}).\n\n\n\n\n\n\n\\subsection{$^{34}$S\/$^{33}$S ratios}\n\\label{section_34s33s}\n\nThe $^{34}$S\/$^{33}$S ratios can be determined directly from the intensity ratios of C$^{34}$S\/C$^{33}$S. The $^{34}$S\/$^{33}$S ratios from the $J$ = 2-1 lines were then corrected for optical depths derived in Section~\\ref{section_opacities}. However, both the $J$ = 2-1 and $J$ = 3-2 transitions of C$^{33}$S are split by hyperfine structure (HFS) interactions \\citep{1981CPL....81..256B}, which may affect the deduced values of $^{34}$S\/$^{33}$S. \n\nThe C$^{33}$S $J$ = 2-1 line consists of eight hyperfine components distributed over about 9.0 MHz \\citep{2005JMoSt.742..215M,2016JMoSp.327...95E}, which corresponds to a velocity range of about 28~km~s$^{-1}$. Following the method introduced in Appendix D in \\citet{2021A&A...646A.170G}, and assuming that the intrinsic width of each HFS line is 1~km~s$^{-1}$, the $J$ = 2-1 line profile can be reproduced by four components (see Fig.~\\ref{fig_c33s_hfs}, upper left panel). In this case, the main component ($I_{main}$) consists of four HFS lines ($F$=7\/2-5\/2, $F$=5\/2-3\/2, $F$=1\/2-1\/2, $F$=3\/2-5\/2), which account for 70\\% of the total intensity. Among the 46 sources with detections of the C$^{33}$S $J$ = 2-1 line, all of the four components were detected in 10 targets. Toward 16 objects, only the three components with the lowest velocities were detected, accounting for 98\\% of the total intensity. For the remaining 20 sources, only the main component was detected. Based on the above assumptions, 30\\% of the total intensity would be missed. The situation is different when the main component becomes broader. If the line width of the main component is larger than 10~km~s$^{-1}$, 87\\% of the total intensity is covered by the main spectral feature. 
When the line width of the main component is larger than 19.0~km~s$^{-1}$, then almost all HFS lines are included. In Fig.~\\ref{fig_model_hfs}, we show the dependence of the HFS factor for the $J$ = 2-1 line, $f_{\\rm 21HFS}$, on the line width of the main component. Depending on the specific condition of each target, we derived $f_{\\rm 21HFS}$ for each source. The values are listed in Table~\\ref{table_3433}. \n\nThe C$^{33}$S $J$ = 3-2 line consists of nine hyperfine components covering about 8.0 MHz \\citep{2005JMoSt.742..215M,2016JMoSp.327...95E}, corresponding to a velocity range of about 16~km~s$^{-1}$. Assuming that the intrinsic width of each HFS line is 1~km~s$^{-1}$, the $J$ = 3-2 line profile can be characterized by three components (see also Fig.~\\ref{fig_c33s_hfs}). All these three components are detected in only two sources, Orion-KL and DR21. The main component consists of four HFS lines ($F$=5\/2-3\/2, $F$=3\/2-1\/2, $F$=7\/2-5\/2, $F$=9\/2-7\/2), which account for 86\\% of the total intensity. When the line width becomes larger than 9.2~km~s$^{-1}$, almost all of the HFS lines overlap. The HFS factors ($f_{\\rm 32HFS}$) of the $J$ = 3-2 transition obtained individually for each source are listed in Table~\\ref{table_3433}.\n\nWe calculated the $^{34}$S\/$^{33}$S intensity ratios and present them in Table~\\ref{table_3433}. Applying corrections accounting for the effect of hyperfine splitting, the $^{34}$S\/$^{33}$S ratios are derived and are \nalso listed in Table~\\ref{table_3433}. The $^{34}$S\/$^{33}$S values obtained from the $J$ = 2-1 transitions are always higher than the ones derived from the $J$ = 3-2 lines toward the same source, with the exception of G097.53$+$03.18. This difference could be caused by the lack of corrections of optical depth in the $J$ = 3-2 transition. The average values of $^{34}$S\/$^{33}$S toward our sample are 4.35~$\\pm$~0.44 and 3.49~$\\pm$~0.26 in the $J$ = 2-1 and $J$ = 3-2 transitions, respectively. 
The $^{34}$S\/$^{33}$S ratios were found to be independent of galactocentric distance (Fig.~\\ref{fig_all33S}). \n\nAfter applying the opacity correction of the $J$ = 2-1 transition to the $J$ = 3-2 line in the same source, $^{34}$S\/$^{33}$S ratios in the $J$ = 3-2 transition are higher than the $^{34}$S\/$^{33}$S values in the $J$ = 2-1 transition in seven targets, suggesting that the C$^{34}$S $J$ = 3-2 lines in these seven sources are less opaque than the C$^{34}$S $J$ = 2-1 lines. The seven targets are the $+$20~km~s$^{-1}$~cloud, G023.43$-$00.18, G028.39$+$00.08, G028.83$-$00.25, G073.65$+$00.19, G097.53$+$03.18, and G109.87$+$02.11. A comparison of the corrected $^{34}$S\/$^{33}$S ratios in these two transitions is shown in Fig.~\\ref{fig_33S_compared}. On the other hand, toward the other 32 objects of the whole sample of 39 sources with detections in these two transitions, the $^{34}$S\/$^{33}$S ratios in the $J$ = 3-2 transition are still lower than the $^{34}$S\/$^{33}$S values in the $J$ = 2-1 transitions. This suggests that the C$^{34}$S $J$ = 3-2 lines may be more opaque than the C$^{34}$S $J$ = 2-1 lines. The ratios between the uncorrected $^{34}$S\/$^{33}$S values in the $J$ = 3-2 transition and the corrected $^{34}$S\/$^{33}$S values from the $J$ = 2-1 lines in 31 of these 32 sources lie within the range of 1.02 to 1.71, which suggests that the optical depths of C$^{34}$S in the $J$ = 3-2 transition in these objects range from 0.05 to 1.20 based on equation (4). 
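The mapping between such a ratio of ratios and the $J$ = 3-2 opacity can be sketched numerically. Assuming that the saturation correction of equation (4) has the standard radiative-transfer form $\tau/(1-{\rm e}^{-\tau})$, which is an assumption on our part (the exact form is defined there), the observed factor can be inverted by bisection:

```python
import math

def opacity_from_ratio(r, tol=1e-10):
    """Invert f(tau) = tau / (1 - exp(-tau)) = r for tau by bisection.
    f increases monotonically from f(0+) = 1, so every observed
    ratio-of-ratios r > 1 corresponds to a unique optical depth tau.
    The assumed form of f is the standard line-saturation factor."""
    if r <= 1.0:
        return 0.0
    lo, hi = 1e-12, 50.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if mid / (1.0 - math.exp(-mid)) < r:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# A ratio-of-ratios of ~1.72 corresponds to tau ~ 1.2, and ~1.03 to tau ~ 0.05,
# consistent with the range quoted in the text.
tau_high = opacity_from_ratio(1.717)
tau_low = opacity_from_ratio(1.025)
```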
The maximum optical depth of the C$^{34}$S $J$ = 3-2 line toward the additional source, G024.85$+$00.08, which has not been considered until now, is estimated to be 2.6.\n\n\n\n\\begin{figure*}[h]\n\\centering\n \\includegraphics[width=500pt]{figure\/hfs_c33s.pdf}\n \\caption{Synthetic C$^{33}$S (2-1) and C$^{33}$S (3-2) spectra for two intrinsic line widths, 1.0~km~s$^{-1}$ (left panels) and 4.0~km~s$^{-1}$ (right panels).}\n \\label{fig_c33s_hfs}\n\\end{figure*}\n\n\\begin{figure*}[h]\n\\centering\n \\includegraphics[width=230pt]{figure\/plot_fHFS21.pdf}\n \\includegraphics[width=230pt]{figure\/plot_fHFS32.pdf}\n \\caption{Blue lines are curves showing the theoretical dependencies of the HFS factors, $f_{\\rm 21HFS}$ (left) and $f_{\\rm 32HFS}$ (right), on the line width of the C$^{33}$S main component for sources in which only the main component was detected. The red dotted vertical and horizontal lines indicate the values of the minimal line widths where the factors reach almost 1.0, 19.0~km~s$^{-1}$ for the $J$ = 2-1 transition ($f_{\\rm 21HFS} \\sim$ 0.99) and 9.2~km~s$^{-1}$ for the $J$ = 3-2 transition ($f_{\\rm 32HFS} \\sim$ 0.99). }\n \\label{fig_model_hfs}\n\\end{figure*}\n\n\n\n\n\n\n\n\\subsection{$^{32}$S\/$^{33}$S ratios}\n\\label{section_32s33s}\n\nThe $^{32}$S\/$^{33}$S values can also be derived from the $^{34}$S\/$^{33}$S ratios in Section~\\ref{section_34s33s} using the $^{32}$S\/$^{34}$S ratios ---which we directly obtained from $^{13}$CS\/$^{13}$C$^{34}$S and present in Section~\\ref{section_ratios3234_13c34s}--- by applying the following equation:\n\\begin{equation}\n\\frac{^{32}{\\rm S}}{^{33}{\\rm S}} = \\frac{^{34}\\rm S}{^{33}\\rm S} \\times \\frac{^{32}\\rm S}{^{34}\\rm S} .\n\\end{equation}\nFor sources where we did not detect $^{13}$C$^{34}$S, the $^{32}$S\/$^{34}$S ratios derived from the double isotope method in Section~\\ref{section_double_32s34s} are used. Their uncertainty is also included in our error budget. 
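Equation (11) and its error budget amount to simple quadrature propagation; a minimal Python sketch follows, where the input values are illustrative numbers echoing the typical magnitudes of this section rather than results for any specific source.

```python
import math

def ratio_product(r1, e1, r2, e2):
    """Product of two independent isotope ratios, e.g.
    32S/33S = (34S/33S) * (32S/34S) as in equation (11),
    with the fractional uncertainties added in quadrature."""
    value = r1 * r2
    err = value * math.sqrt((e1 / r1) ** 2 + (e2 / r2) ** 2)
    return value, err

# Illustrative inputs: 34S/33S = 4.35 +/- 0.44 and 32S/34S = 18 +/- 4
value, err = ratio_product(4.35, 0.44, 18.0, 4.0)
```

The same one-liner, with different inputs, yields the $^{32}$S\/$^{36}$S values of equation (14).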
The resulting $^{32}$S\/$^{33}$S ratios are listed in Table~\\ref{table_3233}. As in the case of the $^{34}$S\/$^{33}$S ratios, the $^{32}$S\/$^{33}$S values obtained from the $J$ = 2-1 transitions with corrections for optical depth are slightly larger than the ones from the $J$ = 3-2 transitions without opacity corrections toward the same source. \n\nIn the CMZ, $^{32}$S\/$^{33}$S $J$ = 2-1 ratios toward four targets, namely the $+$20 km~s$^{-1}$ cloud, the $+$50 km~s$^{-1}$ cloud, Sgr B2, and Sgr D, lead to an average value of 70 $\\pm$ 16. In the inner Galaxy, in a galactocentric distance range of 2.0~kpc~$\\le R_{\\rm GC} \\le$~6.0~kpc, an average $^{32}$S\/$^{33}$S ratio of 82 $\\pm$ 19 was derived from values in 20 sources. Near the solar neighborhood, in a galactocentric distance range of 7.5~kpc~$\\le R_{\\rm GC} \\le$~8.5~kpc, $^{32}$S\/$^{33}$S ratios were obtained toward four objects, Orion-KL, G071.31$+$00.82, DR21, and G109.87$+$02.11, resulting in an average value of 88 $\\pm$ 21. For the outer Galaxy, beyond the local arm, we were able to deduce an average $^{32}$S\/$^{33}$S ratio of 105 $\\pm$ 19 from $^{32}$S\/$^{33}$S values in four sources. All these average values provide us with an indication of the existence of a $^{32}$S\/$^{33}$S gradient in the Galaxy. An unweighted least-squares fit to the $J$ = 2-1 transition data from 46 sources yields\n\\begin{equation}\n^{32}{\\rm S}\/^{33}{\\rm S} (2-1) = (2.64 \\pm 0.77)R_{\\rm GC}+(70.80 \\pm 5.57). \n\\end{equation}\nIn the $J$ = 3-2 transition with data from 42 targets, an unweighted least-squares fit can be obtained with\n\\begin{equation}\n^{32}{\\rm S}\/^{33}{\\rm S} (3-2) = (2.80 \\pm 0.59)R_{\\rm GC}+(59.30 \\pm 4.22). \n\\end{equation}\nThe $^{32}$S\/$^{33}$S gradients derived from these two transitions have similar slopes but obviously different intercepts. 
The lower intercept in the $J$ = 3-2 transition could be due to the fact that we could not correct the values for optical-depth effects. If this difference is caused only by the opacity, then an average optical depth in the C$^{34}$S $J$ = 3-2 transition of about 0.4 can be derived with equation (4). \n\n\\setcounter{table}{3}\n\n\\begin{table*}[h]\n\\centering\n\\caption{\\label{table_3233} $^{32}$S\/$^{33}$S isotope ratios}\n\\begin{tabular}{lccc}\n\\hline\\hline\nSource & $R_{GC}$ & \\multicolumn{2}{c}{$^{32}$S\/$^{33}$S} \\\\\n & (kpc) & $J$ = 2-1 & $J$ = 3-2 \\\\\n\\hline\nWB89-380 & 14.19 $\\pm$ 0.92 & 90.74 $\\pm$ 19.53 & 83.93 $\\pm$ 17.90 \\\\\nWB89-391 & 14.28 $\\pm$ 0.94 & 127.07 $\\pm$ 40.26 & 65.46 $\\pm$ 14.69 \\\\\nW3OH & 9.64 $\\pm$ 0.03 & 105.49 $\\pm$ 7.26 & 74.20 $\\pm$ 4.35 \\\\\nOrion-KL & 8.54 $\\pm$ 0.00 & 74.50 $\\pm$ 16.85 & 69.12 $\\pm$ 13.51 \\\\\n$+$20 km~s$^{-1}$ cloud & 0.03 $\\pm$ 0.03 & 82.63 $\\pm$ 18.20 & 78.98 $\\pm$ 17.86 \\\\\nG359.61$-$00.24 & 5.51 $\\pm$ 0.15 & 75.75 $\\pm$ 14.99 & 71.69 $\\pm$ 16.86 \\\\\n$+$50 km~s$^{-1}$ cloud & 0.02 $\\pm$ 0.04 & 76.49 $\\pm$ 16.96 & 66.68 $\\pm$ 14.92 \\\\\nG000.31$-$00.20 & 5.25 $\\pm$ 0.36 & 55.61 $\\pm$ 14.84 & $\\cdots$ \\\\\nSgrB2 & 0.44 $\\pm$ 0.70 & 42.98 $\\pm$ 9.62 & 35.66 $\\pm$ 7.84 \\\\\nSgrD & 0.45 $\\pm$ 0.07 & 77.48 $\\pm$ 20.88 & 50.61 $\\pm$ 13.66 \\\\\nG006.79$-$00.25 & 4.75 $\\pm$ 0.25 & 86.65 $\\pm$ 17.01 & 70.08 $\\pm$ 13.79 \\\\\nG007.47$+$00.05 & 12.35 $\\pm$ 2.49 & 107.98 $\\pm$ 30.91 & $\\cdots$ \\\\\nG010.32$-$00.15 & 5.34 $\\pm$ 0.29 & 88.27 $\\pm$ 17.82 & 79.56 $\\pm$ 15.99 \\\\\nG010.62$-$00.33 & 4.26 $\\pm$ 0.21 & 106.56 $\\pm$ 33.99 & $\\cdots$ \\\\\nG011.10$-$00.11 & 5.49 $\\pm$ 0.56 & 72.13 $\\pm$ 29.12 & $\\cdots$ \\\\\nG016.86$-$02.15 & 5.97 $\\pm$ 0.47 & 86.63 $\\pm$ 17.00 & 76.23 $\\pm$ 15.01 \\\\\nG017.02$-$02.40 & 6.40 $\\pm$ 0.36 & 100.03 $\\pm$ 20.54 & 86.36 $\\pm$ 17.47 \\\\\nG017.63$+$00.15 & 6.77 $\\pm$ 0.04 & 73.12 $\\pm$ 23.04 & $\\cdots$ 
\\\\\nG018.34$+$01.76 & 6.31 $\\pm$ 0.07 & 92.66 $\\pm$ 18.57 & 86.44 $\\pm$ 16.87 \\\\\nG019.36$-$00.03 & 5.58 $\\pm$ 0.49 & 94.55 $\\pm$ 18.78 & 80.16 $\\pm$ 15.89 \\\\\nG023.43$-$00.18 & 3.63 $\\pm$ 0.49 & 72.63 $\\pm$ 16.26 & 80.58 $\\pm$ 16.63 \\\\\nG024.78$+$00.08 & 3.51 $\\pm$ 0.15 & 70.88 $\\pm$ 14.22 & 55.64 $\\pm$ 11.12 \\\\\nG024.85$+$00.08 & 3.85 $\\pm$ 0.23 & 134.87 $\\pm$ 43.15 & 48.25 $\\pm$ 11.88 \\\\\nG028.30$-$00.38 & 4.71 $\\pm$ 0.26 & $\\cdots$ & 76.82 $\\pm$ 20.06 \\\\\nG028.39$+$00.08 & 4.83 $\\pm$ 0.17 & 85.30 $\\pm$ 17.17 & 76.11 $\\pm$ 15.02 \\\\\nG028.83$-$00.25 & 4.50 $\\pm$ 0.48 & 78.57 $\\pm$ 15.87 & 80.37 $\\pm$ 15.83 \\\\\nG030.70$-$00.06 & 4.20 $\\pm$ 0.10 & 76.54 $\\pm$ 15.18 & 62.68 $\\pm$ 12.57 \\\\\nG030.74$-$00.04 & 5.76 $\\pm$ 0.36 & 86.28 $\\pm$ 17.07 & 72.73 $\\pm$ 14.12 \\\\\nG030.78$+$00.20 & 4.19 $\\pm$ 0.05 & $\\cdots$ & 72.50 $\\pm$ 17.46 \\\\\nG030.81$-$00.05 & 5.73 $\\pm$ 0.24 & 88.21 $\\pm$ 17.23 & 72.72 $\\pm$ 14.52 \\\\\nG031.24$-$00.11 & 7.48 $\\pm$ 2.00 & $\\cdots$ & 90.57 $\\pm$ 21.01 \\\\\nG032.74$-$00.07 & 4.55 $\\pm$ 0.23 & 75.67 $\\pm$ 14.99 & 68.63 $\\pm$ 13.88 \\\\\nG032.79$+$00.19 & 5.26 $\\pm$ 1.57 & 79.87 $\\pm$ 16.07 & 76.72 $\\pm$ 14.93 \\\\\nG034.41$+$00.23 & 5.99 $\\pm$ 0.06 & 80.54 $\\pm$ 15.82 & 72.63 $\\pm$ 14.63 \\\\\nG034.79$-$01.38 & 6.21 $\\pm$ 0.09 & 103.60 $\\pm$ 22.03 & 82.58 $\\pm$ 16.21 \\\\\nG036.11$+$00.55 & 5.45 $\\pm$ 0.43 & 39.43 $\\pm$ 13.64 & $\\cdots$ \\\\\nG037.42$+$01.51 & 6.78 $\\pm$ 0.05 & 110.92 $\\pm$ 21.76 & 76.06 $\\pm$ 17.77 \\\\\nG040.28$-$00.21 & 6.02 $\\pm$ 0.10 & 110.40 $\\pm$ 31.44 & 71.86 $\\pm$ 17.32 \\\\\nG045.45$+$00.06 & 6.41 $\\pm$ 0.50 & 83.04 $\\pm$ 21.44 & 76.44 $\\pm$ 16.33 \\\\\nG048.99$-$00.29 & 6.18 $\\pm$ 0.02 & 107.90 $\\pm$ 27.60 & 87.21 $\\pm$ 20.50 \\\\\nW51-IRS2 & 6.22 $\\pm$ 0.06 & 74.69 $\\pm$ 14.46 & 66.43 $\\pm$ 12.72 \\\\\nG071.31$+$00.82 & 8.02 $\\pm$ 0.16 & 95.37 $\\pm$ 31.90 & 106.87 $\\pm$ 30.60 \\\\\nG073.65$+$00.19 & 13.54 $\\pm$ 2.90 & 
102.16 $\\pm$ 32.61 & 128.66 $\\pm$ 31.17 \\\\\nG075.29$+$01.32 & 10.69 $\\pm$ 0.58 & 122.06 $\\pm$ 29.02 & 92.08 $\\pm$ 19.53 \\\\\nDR21 & 8.10 $\\pm$ 0.00 & 86.80 $\\pm$ 16.37 & 80.51 $\\pm$ 15.38 \\\\\nG090.92$+$01.48 & 10.13 $\\pm$ 0.63 & 87.88 $\\pm$ 19.68 & 95.73 $\\pm$ 20.14 \\\\\nG097.53$+$03.18 & 11.81 $\\pm$ 0.70 & 74.99 $\\pm$ 14.50 & 87.92 $\\pm$ 16.65 \\\\\nG109.87$+$02.11 & 8.49 $\\pm$ 0.01 & 94.41 $\\pm$ 18.40 & 98.52 $\\pm$ 23.17 \\\\\nNGC7538 & 9.47 $\\pm$ 0.07 & 104.53 $\\pm$ 19.54 & $\\cdots$ \\\\\n\\hline\n\\end{tabular}\n\\tablefoot{The $^{32}$S\/$^{33}$S isotope ratios from the $J$ = 2-1 transition are corrected for the line saturation effects, while the ones in the 3-2 line are not corrected because of the missing main isotopic species.}\n\\end{table*}\n\n\n\n\\subsection{$^{34}$S\/$^{36}$S ratios}\n\\label{section_34s36s}\n\nThe detection of C$^{36}$S in the $J$ = 2-1 and $J$ = 3-2 transitions allows us to also calculate $^{34}$S\/$^{36}$S ratios. Around the CMZ, the C$^{36}$S $J$ = 2-1 line in the $+$50 km~s$^{-1}$ cloud was detected, resulting in a $^{34}$S\/$^{36}$S ratio of 41~$\\pm$~4. In the local arm toward DR21 and Orion-KL \\citep{2007A&A...474..515M,2013ApJ...769...15X}, the C$^{36}$S $J$ = 2-1 and $J$ = 3-2 transitions were detected, respectively, leading to $^{34}$S\/$^{36}$S values of 117~$\\pm$~24 and 83~$\\pm$~7. This yields an average $^{34}$S\/$^{36}$S ratio of 100~$\\pm$~16 in the ISM near the Sun. In the Perseus arm beyond the Solar System \\citep{2006Sci...311...54X}, we detected the C$^{36}$S $J$ = 2-1 line toward W3OH and obtained a $^{34}$S\/$^{36}$S value of 140~$\\pm$~16. These results reveal the possibility of the existence of a $^{34}$S\/$^{36}$S gradient from the Galactic center region to the outer Galaxy. Five tentative detections in the C$^{36}$S $J$ = 2-1 line and also five tentative detections in the $J$ = 3-2 line provide additional $^{34}$S\/$^{36}$S ratios but with large uncertainties. 
All of the $^{34}$S\/$^{36}$S values are listed in Table~\\ref{table_all36s} and are plotted as a function of the distance to the Galactic center in Fig.~\\ref{fig_all36S}. \n\n\n\\subsection{$^{32}$S\/$^{36}$S ratios}\n\\label{section_32s36s}\n\nAs in the case of the $^{32}$S\/$^{33}$S ratios, the $^{32}$S\/$^{36}$S values could also be obtained from the resulting $^{34}$S\/$^{36}$S ratios in Section~\\ref{section_34s36s} and the $^{32}$S\/$^{34}$S ratios in Section~\\ref{section_ratios3234_13c34s} using the following equation:\n\\begin{equation}\n\\frac{^{32}{\\rm S}}{^{36}{\\rm S}} = \\frac{^{34}\\rm S}{^{36}\\rm S} \\times \\frac{^{32}\\rm S}{^{34}\\rm S}.\n\\end{equation}\nThe uncertainties of both isotope ratios in the product on the right-hand side of the equation are included in our error budget. The resulting $^{32}$S\/$^{36}$S ratios are listed in Table~\\ref{table_all36s}. In the CMZ, a $^{32}$S\/$^{36}$S ratio of 884~$\\pm$~104 is obtained toward the $+$50 km~s$^{-1}$ cloud. $^{32}$S\/$^{36}$S values of 1765~$\\pm$~414 and 3223~$\\pm$~742 are derived toward Orion-KL and DR21, leading to an average $^{32}$S\/$^{36}$S value of 2494 $\\pm$ 578 in the local regions near the Solar System. In the Perseus arm beyond the solar neighborhood toward W3OH, we obtain the highest $^{32}$S\/$^{36}$S value, 4181~$\\pm$~531. All these results indicate that there could be a positive $^{32}$S\/$^{36}$S gradient from the Galactic center region to the outer Galaxy. Figure~\\ref{fig_all36S} shows the $^{32}$S\/$^{36}$S ratios plotted as a function of the distance to the Galactic center.\n\n\n\\begin{figure*}[h]\n\\centering\n \\includegraphics[height=620pt]{newFigures\/0Allratio33s-corrected.pdf}\n \\caption{ $^{34}$S\/$^{33}$S and $^{32}$S\/$^{33}$S isotope ratios (for the latter, see equations (12) and (13)) plotted as functions of the distance from the Galactic center. 
\\textbf{Top:} $^{34}$S\/$^{33}$S ratios derived from C$^{34}$S\/C$^{33}$S in the $J$ = 2-1 and $J$ = 3-2 transitions plotted as black and green dots, respectively. The red solid and the two dashed lines show the average value and its standard deviation, 4.35~$\\pm$~0.44, of $^{34}$S\/$^{33}$S with corrections of optical depth toward our sample in the $J$ = 2-1 transition. The yellow solid and the two dashed lines show the average value and its standard deviation, 3.49~$\\pm$~0.26, of $^{34}$S\/$^{33}$S toward our sample in the $J$ = 3-2 transition. The red symbol $\\odot$ indicates the $^{34}$S\/$^{33}$S isotope ratio in the Solar System. \\textbf{Bottom:} Black and green dots show the $^{32}$S\/$^{33}$S ratios in the $J$ = 2-1 and $J$ = 3-2 transitions, respectively. The red symbol $\\odot$ indicates the $^{32}$S\/$^{33}$S value in the Solar System. The resulting first-order polynomial fits to the $^{32}$S\/$^{33}$S ratios in the $J$ = 2-1 and $J$ = 3-2 transitions in this work are plotted as red and green solid lines, respectively, with the pink and yellow shaded area showing the 1~$\\sigma$ standard deviations. The red crosses are the results from the GCE model of \\citet[][see also Section~\\ref{section_discussion_model}]{2011MNRAS.414.3231K,2020ApJ...900..179K}. }\n \\label{fig_all33S}\n\\end{figure*}\n\n\\begin{figure}[h]\n\\centering\n \\includegraphics[height=270pt]{newFigures\/0Allratio33sCompared2.pdf}\n \\caption{Comparison of $^{34}$S\/$^{33}$S ratios between the $J$ = 2-1 and 3-2 data. The $J$ = 2-1 ratios are opacity corrected, while the same correction factors were also applied to the $J$ = 3-2 data. 
The red solid line indicates that the $^{34}$S\/$^{33}$S ratios are equal in these two transitions.}\n \\label{fig_33S_compared}\n\\end{figure}\n\n\\begin{figure*}[h]\n\\centering\n \\includegraphics[height=550pt]{newFigures\/0Allratio36s-corrected.pdf}\n \\caption{ $J$ = 2-1 (opacity corrected) and $J$ = 3-2 (no opacity corrections) $^{34}$S\/$^{36}$S and $^{32}$S\/$^{36}$S isotope ratios (see equations (18) and (19)) plotted as functions of the distance from the Galactic center. \\textbf{Top:} Filled black circles and filled black triangle present the $^{34}$S\/$^{36}$S ratios in the $J$ = 2-1 and $J$ = 3-2 transitions derived from C$^{34}$S\/C$^{36}$S in this work with detections of C$^{36}$S, respectively. The open gray circles and open gray triangles present the $^{34}$S\/$^{36}$S ratios in the $J$ = 2-1 and $J$ = 3-2 transitions derived from C$^{34}$S\/C$^{36}$S in this work with tentative detections of C$^{36}$S, respectively. The blue diamonds show the $^{34}$S\/$^{36}$S ratios in \\citet{1996A&A...313L...1M}. The red symbol $\\odot$ indicates the $^{34}$S\/$^{36}$S isotope ratio in the Solar System. The resulting first-order polynomial fit to the $^{34}$S\/$^{36}$S ratios in \\citet{1996A&A...313L...1M} and this work, excluding the values from tentative detections, is plotted as a black solid line, with the green shaded area showing the 1~$\\sigma$ interval of the fit. \\textbf{Bottom:} $^{32}$S\/$^{36}$S ratios obtained from $^{34}$S\/$^{36}$S ratios combined with the $^{32}$S\/$^{34}$S ratios derived in this work. The filled black circles and filled black triangle present the values in the $J$ = 2-1 and $J$ = 3-2 transitions from this work with detections of C$^{36}$S, respectively. The open gray circles and open gray triangles present ratios in the $J$ = 2-1 and $J$ = 3-2 transitions from this work with tentative detections of C$^{36}$S, respectively. 
The $^{32}$S\/$^{36}$S ratios derived with $^{34}$S\/$^{36}$S values from \\citet{1996A&A...313L...1M} are plotted as blue diamonds. The red symbol $\\odot$ indicates the $^{32}$S\/$^{36}$S isotope ratio in the Solar System. The $^{32}$S\/$^{36}$S gradient, excluding tentative detections, is plotted as a black solid line, with the pink shaded area showing the 1~$\\sigma$ interval of the fit. The red crosses visualize results from the GCE model of \\citet[][see Section~\\ref{section_discussion_model}]{2011MNRAS.414.3231K,2020ApJ...900..179K}. }\n \\label{fig_all36S}\n\\end{figure*}\n\n\\begin{table*}[h]\n\\centering\n\\caption{\\label{table_all36s}Isotope ratios of $^{34}$S\/$^{36}$S and $^{32}$S\/$^{36}$S}\n\\begin{tabular}{lc|cc|cc}\n\\hline\\hline\nSource & $R_{\\rm GC}$ & \\multicolumn{2}{c}{$^{34}$S\/$^{36}$S} & \\multicolumn{2}{c}{$^{32}$S\/$^{36}$S} \\\\\n & (kpc) & $J$ = 2-1 & $J$ = 3-2 & $J$ = 2-1 & $J$ = 3-2 \\\\\n\\hline\nW3OH & 9.64 $\\pm$ 0.03 & 140 $\\pm$ 16 & 174 $\\pm$ 29\\tablefootmark{*} & 4181 $\\pm$ 531 & 5195 $\\pm$ 919\\tablefootmark{*} \\\\\nOrion-KL & 8.54 $\\pm$ 0.00 & 92 $\\pm$ 53\\tablefootmark{*} & 83 $\\pm$ 7 & 1954 $\\pm$ 1207\\tablefootmark{*} & 1765 $\\pm$ 414 \\\\\n$+$20 km~s$^{-1}$ cloud & 0.03 $\\pm$ 0.03 & 69 $\\pm$ 11\\tablefootmark{*} & $\\cdots$ & 1015 $\\pm$ 278\\tablefootmark{*} & $\\cdots$ \\\\\n$+$50 km~s$^{-1}$ cloud & 0.02 $\\pm$ 0.04 & 41 $\\pm$ 4 & $\\cdots$ & 884 $\\pm$ 104 & $\\cdots$ \\\\\nG024.78$+$00.08 & 3.51 $\\pm$ 0.15 & 151 $\\pm$ 39\\tablefootmark{*} & 226 $\\pm$ 66\\tablefootmark{*} & 2424 $\\pm$ 699\\tablefootmark{*} & 3640 $\\pm$ 1164\\tablefootmark{*} \\\\\nG030.81$-$00.05 & 5.73 $\\pm$ 0.24 & 55 $\\pm$ 15\\tablefootmark{*} & $\\cdots$ & 829 $\\pm$ 233\\tablefootmark{*} & $\\cdots$ \\\\\nW51-IRS2 & 6.22 $\\pm$ 0.06 & 109 $\\pm$ 27\\tablefootmark{*} & 203 $\\pm$ 45\\tablefootmark{*} & 1814 $\\pm$ 506\\tablefootmark{*} & 3397 $\\pm$ 855\\tablefootmark{*} \\\\\nDR21 & 8.10 $\\pm$ 0.00 & 117 $\\pm$ 24 & 159 
$\\pm$ 64\\tablefootmark{*} & 3223 $\\pm$ 742 & 4384 $\\pm$ 1829\\tablefootmark{*} \\\\\nNGC7538 & 9.47 $\\pm$ 0.07 & $\\cdots$ & 109 $\\pm$ 28\\tablefootmark{*} & $\\cdots$ & 2912 $\\pm$ 897\\tablefootmark{*} \\\\\n\\hline\n\\multicolumn{6}{c}{Below are the isotope ratios of $^{34}$S\/$^{36}$S and $^{32}$S\/$^{36}$S from \\citet{1996A&A...313L...1M}} \\\\\n\\hline\nW3OH & 9.64 $\\pm$ 0.03 & $\\cdots$ & 181 $\\pm$ 49 & $\\cdots$ & 4118 $\\pm$ 1124 \\\\\nOrion-KL & 8.54 $\\pm$ 0.00 & $\\cdots$ & 104 $\\pm$ 7 & $\\cdots$ & 2281 $\\pm$ 174 \\\\\nIRAS15491\\tablefootmark{**} & 5.48 $\\pm$ 0.31 & $\\cdots$ & 108 $\\pm$ 15 & $\\cdots$ & 2120 $\\pm$ 307 \\\\\nIRAS15520\\tablefootmark{**} & 5.67 $\\pm$ 0.31 & 128 $\\pm$ 32 & 133 $\\pm$ 11 & 2531 $\\pm$ 641 & 2629 $\\pm$ 243 \\\\\nIRAS16172\\tablefootmark{**} & 5.03 $\\pm$ 0.29 & 121 $\\pm$ 18 & 119 $\\pm$ 14 & 2334 $\\pm$ 361 & 2296 $\\pm$ 287 \\\\\nNGC6334A & 6.87 $\\pm$ 0.12 & 108 $\\pm$ 15 & 154 $\\pm$ 20 & 2232 $\\pm$ 322 & 3183 $\\pm$ 431 \\\\\nNGC6334B & 6.87 $\\pm$ 0.12 & 163 $\\pm$ 46 & 156 $\\pm$ 26 & 3369 $\\pm$ 960 & 3225 $\\pm$ 552 \\\\\nW51(M) & 6.22 $\\pm$ 0.004 & 105 $\\pm$ 11 & 104 $\\pm$ 15 & 2119 $\\pm$ 237 & 2099 $\\pm$ 313 \\\\\n\\hline\n\\end{tabular}\n\\tablefoot{\\tablefoottext{*}{Values with large uncertainties are derived from the tentative detection of C$^{36}$S lines.} \\tablefoottext{**}{For these three sources without parallax data, their kinematic distances were estimated (for details, see Section~\\ref{section_distance}).} The $^{34}$S\/$^{36}$S and $^{32}$S\/$^{36}$S isotope ratios in this work from the $J$ = 2-1 transition are corrected for the optical depth effects, while the ones for the 3-2 line could not be corrected.}\n\\end{table*}\n\n\\section{Discussion}\n\\label{discussion}\n\nWith the measurements of optically thin lines of the rare CS isotopologs, C$^{34}$S, $^{13}$CS, C$^{33}$S, $^{13}$C$^{34}$S, and C$^{36}$S, we derived $^{12}$C\/$^{13}$C, $^{32}$S\/$^{34}$S, 
$^{34}$S\/$^{33}$S, $^{32}$S\/$^{33}$S, $^{34}$S\/$^{36}$S, and $^{32}$S\/$^{36}$S isotope ratios. Combined with accurate galactocentric distances, we established a $^{32}$S\/$^{33}$S gradient for the first time and confirmed the existing gradients of $^{12}$C\/$^{13}$C and $^{32}$S\/$^{34}$S, as well as uniform $^{34}$S\/$^{33}$S ratios across the Milky Way, which are lower than previously reported. Furthermore, we may have detected $^{34}$S\/$^{36}$S and $^{32}$S\/$^{36}$S gradients for the first time. In Section~\\ref{section_comparisons_12c13c}, we compare the $^{12}$C\/$^{13}$C gradient obtained in this work with previous published ones derived from a variety of molecular species. A comparison between the $^{32}$S\/$^{34}$S gradients we obtained and previously published ones is presented in Section~\\ref{section_comparisons_32s34s}. The condition of LTE for C$^{33}$S with its HFS line ratios is discussed in Section~\\ref{section_hfs_c33s}. We then also compare our results on $^{34}$S\/$^{33}$S ratios with previously published values and discuss the $^{32}$S\/$^{33}$S gradient. In Section~\\ref{section_discussion_all36} we evaluate whether or not $^{34}$S\/$^{36}$S, $^{33}$S\/$^{36}$S, and $^{32}$S\/$^{36}$S ratios may show gradients with galactocentric distance. Observational bias due to distance effects, beam size effects, and chemical fractionation are discussed. A comparison of several isotopes with respect to primary or secondary synthesis is provided in Section~\\ref{section_discussion_all}. 
Results from a Galactic chemical evolution (GCE) model that attempts to reproduce the observational data are presented in Section~\\ref{section_discussion_model}.\n\n\n\n\n\n\\subsection{Comparisons of $^{12}$C\/$^{13}$C ratios determined with different species}\n\\label{section_comparisons_12c13c}\n\n$^{12}$C\/$^{13}$C ratios have been well studied in the CMZ, where the value is about 20--25 \\citep[e.g.,][]{1983A&A...127..388H,1985A&A...149..195G,1990ApJ...357..477L,2005ApJ...634.1126M,2010A&A...523A..51R,2013A&A...559A..47B,2017ApJ...845..158H,2020A&A...642A.222H}, similar to the results that we obtained from C$^{34}$S in the current work. In the inner Galaxy, the $^{12}$C\/$^{13}$C ratios are higher than the values in the CMZ: $\\sim$50 as derived from H$_2^{12}$CO\/H$_2^{13}$CO in \\citet{1985A&A...143..148H} and 41~$\\pm$~9 from C$^{34}$S\/$^{13}$C$^{34}$S in this work. As can be inferred from Fig.~\\ref{fig_gradient_12C13C}, $^{12}$C\/$^{13}$C ratios at 3.0 kpc $\\le R_{\\rm GC} \\le$ 4.0 kpc might be as low as in the CMZ, but this is so far tentative and uncertain, as we only have one detection (G024.78$+$00.08) in this region and data from other groups referring to this small galactocentric interval are also relatively few. In the local regions near the Solar System, an average $^{12}$C\/$^{13}$C ratio of 66~$\\pm$~10 was obtained from C$^{34}$S\/$^{13}$C$^{34}$S in this work, which is consistent with $^{12}$C\/$^{13}$C values derived from other molecular species and their $^{13}$C-bearing isotopologs, namely 75~$\\pm$~9 from C$^{18}$O \\citep{1998ApJ...494L.107K}, 60~$\\pm$~19 from CN \\citep{2005ApJ...634.1126M}, 74~$\\pm$~8 from CH$^+$ \\citep{2011ApJ...728...36R}, and 53~$\\pm$~16 from H$_2$CO \\citep{2019ApJ...877..154Y}. All these $^{12}$C\/$^{13}$C values for the solar neighborhood are well below the value for the Sun \\citep[89,][]{1989GeCoA..53..197A,2007ApJ...656L..33M}. 
This indicates that $^{13}$C has been enriched in the local ISM during the last 4.6 billion years following the formation of the Solar System. Beyond the Sun, at a galactocentric distance of about 10~kpc, our results from C$^{34}$S\/$^{13}$C$^{34}$S show a slightly higher value of 67~$\\pm$~8, which is similar to $^{12}$C\/$^{13}$C ratios from CN (66~$\\pm$~20), C$^{18}$O (69~$\\pm$~10), and H$_2$CO (64~$\\pm$~10). These values are still below the value for the Sun. In the far outer Galaxy at 13.8~kpc toward WB89~437, \\citet{1996A&AS..119..439W} found a 3$\\sigma$ lower limit of 201~$\\pm$~15 from C$^{18}$O\/$^{13}$C$^{18}$O. This suggests that the $^{12}$C\/$^{13}$C gradient extends well beyond the solar neighborhood to the outer Galaxy. Additional sources with large galactocentric distances have to be measured to further improve the statistical significance of this result.\n\nPreviously published $^{12}$C\/$^{13}$C ratios derived from CN \\citep{2002ApJ...578..211S,2005ApJ...634.1126M}, C$^{18}$O \\citep{1990ApJ...357..477L,1996A&AS..119..439W,1998ApJ...494L.107K}, H$_2$CO \\citep{1980A&A....82...41H,1982A&A...109..344H,1983A&A...127..388H,1985A&A...143..148H,2019ApJ...877..154Y}, CH$^+$ \\citep{2011ApJ...728...36R}, and CH \\citep{2020A&A...640A.125J} are shown in Fig.~\\ref{fig_gradient_12C13C}, but with respect to the new distance values (see details in Section~\\ref{section_distance}). In Fig.~\\ref{distribution_12C13C}, the $^{12}$C\/$^{13}$C values from different molecular species are projected onto the Galactic plane. This also visualizes the gradient from the Galactic center to the Galactic outer regions beyond the Solar System. In Table~\\ref{table_all12C13Cgradients}, the fitting results for the old and new distances are presented. The comparison shows that the adoption of the new distances has indeed an effect on the fitting results, such as for the $^{12}$CN\/$^{13}$CN gradient. 
In \\citet{2002ApJ...578..211S} and \\citet{2005ApJ...634.1126M}, the slope and intercept become (6.75~$\\pm$~1.44) and (5.77~$\\pm$~11.29) from (6.01~$\\pm$~1.19) and (12.28~$\\pm$~9.33), respectively. The fitting for H$_2^{12}$CO\/H$_2^{13}$CO from \\citet{1980A&A....82...41H,1982A&A...109..344H,1983A&A...127..388H,1985A&A...143..148H} and \\citet{2019ApJ...877..154Y} becomes (5.43~$\\pm$~1.04) and (13.87~$\\pm$~6.38) from (5.08~$\\pm$~1.10) and (11.86~$\\pm$~6.60), respectively. The Galactic $^{12}$C\/$^{13}$C gradients derived based on measurements of CN, C$^{18}$O, and H$_2$CO are in agreement with our results from C$^{34}$S and therefore indicate that chemical fractionation has little effect on the $^{12}$C\/$^{13}$C ratios. It is noteworthy that all these fits show a significant discrepancy with observations from the Galactic center. Indeed, they suggest values of 5--17 at $R_{\\rm GC}$=0, substantially below observed values of 20--25 (see also Tables~\\ref{table_all12C13Cgradients}, \\ref{table_allratios}). While the values in the CMZ are clearly lower than in the inner Galactic disk (with the potential exception at galactocentric distances of 2.0--4.0 kpc), they are larger than suggested by a linear fit encompassing the entire inner 12.0 kpc of the Galaxy.\n\n\n\n\\subsection{The $^{32}$S\/$^{34}$S gradient across the Milky Way}\n\\label{section_comparisons_32s34s}\n\nThe existence of a $^{32}$S\/$^{34}$S gradient was first proposed by \\citet{1996A&A...305..960C} based on observations of $^{13}$CS and C$^{34}$S $J$ = 2-1 lines toward 20 mostly southern HMSFRs with galactocentric distances of between 3.0 and 9.0 kpc. Very recently, \\citet{2020ApJ...899..145Y} confirmed the existence of this $^{32}$S\/$^{34}$S gradient and enlarged the sample of measurements of $^{13}$CS and C$^{34}$S $J$ = 2-1 lines to a total of 61 HMSFRs from the inner Galaxy out to a galactocentric distance of 12.0 kpc. 
In the CMZ, \\citet{2020A&A...642A.222H} found $^{32}$S\/$^{34}$S ratios of 16.3$^{+2.1}_{-1.7}$ and 17.9~$\\pm$~5.0 for the $+$50 km s$^{-1}$ cloud and Sgr B2(N), respectively, which is consistent with our values derived from $^{13}$C$^{34}$S and also with our results using the double isotope method. In the inner disk at 2.0~kpc $\\le R_{\\rm GC} \\le$ 6.0~kpc, a similar $^{32}$S\/$^{34}$S value of 18~$\\pm$~4 was derived based on our results in Sections~\\ref{section_ratios3234_13c34s} and \\ref{section_double_32s34s}. While $^{12}$C\/$^{13}$C ratios in the inner disk at $R_{\\rm GC} \\ge$ 4.0~kpc are clearly higher than in the CMZ (see details in Section~\\ref{section_comparisons_12c13c}), this is not the case for $^{32}$S\/$^{34}$S. On the contrary, $^{32}$S\/$^{34}$S ratios in the CMZ and inner disk are similar, as first suggested by \\citet{2020A&A...642A.222H}. In the local ISM, our results lead to an average $^{32}$S\/$^{34}$S ratio of 24~$\\pm$~4, which is close to the value in the Solar System \\citep[22.57,][]{1989GeCoA..53..197A}. This behavior differs from that of the $^{12}$C\/$^{13}$C ratio, which is clearly subsolar in the local ISM. A more detailed discussion is given in Section~\\ref{section_discussion_all}. \n\n\nFor the first time, we established a $^{32}$S\/$^{34}$S gradient directly from measurements of $^{13}$CS and $^{13}$C$^{34}$S (see Section~\\ref{section_ratios3234_13c34s} for details). Similar gradients were also found in the $^{32}$S\/$^{34}$S values derived by the double isotope method with the $J$ = 2-1 and $J$ = 3-2 transitions (for details, see Section~\\ref{section_double_32s34s}). 
A gradient of $^{32}$S\/$^{34}$S = (0.75 $\\pm$ 0.13)$R_{\\rm GC}$+(15.52 $\\pm$ 0.78) was obtained based on a large dataset of 90 values from our detections of $^{13}$CS and C$^{34}$S $J$ = 2-1 lines with corrections of opacity, which is flatter than previous ones presented by \\citet{1996A&A...305..960C} and \\citet{2020ApJ...899..145Y}. Following \\citet{2020ApJ...899..145Y}, the gradient does not significantly change when the ratios in the CMZ or in the outer regions are excluded, indicating that the $^{32}$S\/$^{34}$S gradient is robust. \n\n\n\n\n\n\n\n\\begin{table*}[h]\n\\caption{Measurements of the $^{12}$C\/$^{13}$C gradient.}\n\\centering\n\\begin{tabular}{c|cc|cc}\n\\hline\\hline\n &\\multicolumn{2}{c}{Previous fitting results} & \\multicolumn{2}{c}{This work} \\\\\n & slope & intercept & slope & intercept \\\\\n\\hline\n\\label{table_all12C13Cgradients}\nCN\\tablefootmark{a} & 6.01 $\\pm$ 1.19 & 12.28 $\\pm$ 9.33 & 6.75 $\\pm$ 1.44 & 5.57 $\\pm$ 11.29 \\\\ \nC$^{18}$O\\tablefootmark{b} & 5.41 $\\pm$ 1.07 & 19.03 $\\pm$ 7.90 & 5.72 $\\pm$ 1.20 & 14.56 $\\pm$ 9.25 \\\\ \nH$_2$CO\\tablefootmark{c} & 5.08 $\\pm$ 1.10 & 11.86 $\\pm$ 6.60 & 5.43 $\\pm$ 1.04 & 13.87 $\\pm$ 6.38 \\\\ \nC$^{34}$S (this work) & $\\cdots$ & $\\cdots$ & 4.77 $\\pm$ 0.81 & 20.76 $\\pm$ 4.61 \\\\ \n\\hline\n\\end{tabular}\n\\tablefoot{Fitting results for the old (left) and new (right) distances, respectively. \n\\tablefoottext{a}{From \\citet{2002ApJ...578..211S} and \\citet{2005ApJ...634.1126M}.}\n\\tablefoottext{b}{From \\citet{1990ApJ...357..477L}, \\citet{1996A&AS..119..439W}, and \\citet{1998ApJ...494L.107K}.}\n\\tablefoottext{c}{From \\citet{1980A&A....82...41H,1982A&A...109..344H,1983A&A...127..388H,1985A&A...143..148H} and \\citet{2019ApJ...877..154Y}.}}\n\\end{table*}\n\n\n\\begin{figure*}[h]\n\\centering\n \\includegraphics[width=520pt]{newFigures\/MW_12C13C_corrected.png}\n \\caption{Distribution of $^{12}$C\/$^{13}$C ratios from 93 sources in the Milky Way. 
The background image is the structure of the Milky Way from the artist's impression [Credit:\nNASA\/JPL-Caltech\/ESO\/R. Hurt]. The $^{12}$C\/$^{13}$C isotope ratios with corrections for optical depth from C$^{34}$S\/$^{13}$C$^{34}$S in this work are plotted as circles. The triangles, pentagons, stars, squares, and diamonds indicate the $^{12}$C\/$^{13}$C ratios derived from CN\/$^{13}$CN \\citep{2002ApJ...578..211S,2005ApJ...634.1126M}, C$^{18}$O\/$^{13}$C$^{18}$O \\citep{1990ApJ...357..477L,1996A&AS..119..439W,1998ApJ...494L.107K}, H$_2$CO\/H$_2^{13}$CO \\citep{1980A&A....82...41H,1982A&A...109..344H,1983A&A...127..388H,1985A&A...143..148H,2019ApJ...877..154Y}, CH$^+$\/$^{13}$CH$^+$ \\citep{2011ApJ...728...36R}, and CH\/$^{13}$CH \\citep{2020A&A...640A.125J}, respectively. The red symbol $\\odot$ indicates the position of the Sun. The color bar on the right-hand side indicates the range of the $^{12}$C\/$^{13}$C ratios. }\n \\label{distribution_12C13C}\n\\end{figure*}\n\n\\begin{figure*}[h]\n\\centering\n \\includegraphics[width=520pt]{newFigures\/MW_32S34S_corrected.png}\n \\caption{Distribution of $^{32}$S\/$^{34}$S ratios from 112 sources in the Milky Way. The background image is the structure of the Milky Way from the artist's impression [Credit:\nNASA\/JPL-Caltech\/ESO\/R. Hurt]. The $^{32}$S\/$^{34}$S isotope ratios with corrections of opacity derived in this work from $^{13}$CS\/$^{13}$C$^{34}$S and the double isotope method in the $J$ = 2-1 transition are plotted as circles and stars, respectively. The triangles indicate the results from the CMZ obtained by \\citet{2020A&A...642A.222H}. The red symbol $\\odot$ indicates the position of the Sun. The color bar on the right-hand side indicates the range of the $^{32}$S\/$^{34}$S ratios.}\n \\label{distribution_32S34S}\n\\end{figure*}\n\n\n\n\n\n\\subsection{C$^{33}$S}\n\\label{section_hfs_c33s}\n\n We detected at least three components of the C$^{33}$S $J$ = 2-1 line toward 26 sources. 
These detections allow us to assess whether the emission is consistent with LTE or non-LTE conditions. As mentioned in Section~\\ref{section_34s33s}, the main component ($I_{main}$) consists of four HFS lines ($F$=7\/2-5\/2, $F$=5\/2-3\/2, $F$=1\/2-1\/2, $F$=3\/2-5\/2). Under LTE conditions in the optically thin case, the expected ratios between the other identified components (i.e., those not belonging to the main one) and the main component are:\n\\begin{equation}\n\\begin{aligned}\nR_{211} &= \\frac{I(F=3\/2-1\/2 + F=5\/2-5\/2)}{I_{main}}\\\\ &= 0.25,\n\\end{aligned}\n\\end{equation}\n\\begin{equation}\n\\begin{aligned}\nR_{212} &= \\frac{I(F=3\/2-3\/2)}{I_{main}}\\\\ &= 0.15,\n\\end{aligned}\n\\end{equation}\n\\begin{equation}\n\\begin{aligned}\nR_{213} &= \\frac{I(F=1\/2-3\/2)}{I_{main}}\\\\ &= 0.02.\n\\end{aligned}\n\\end{equation}\nA detailed examination of our 26 objects is presented in Table~\\ref{table_c33s_hfs}. Except for Orion-KL there is no evidence for non-LTE effects. All the remaining sources are found to be compatible with LTE conditions. The spectra of CS, C$^{34}$S, and $^{13}$CS toward Orion-KL show broad wings (see Fig.~\\ref{fig_orion}), which might lead to a highly complex C$^{33}$S $J$ = 2-1 line shape and may thus cause the apparent deviations from LTE in this source. \n\n\n\n\n\\begin{figure}[h]\n\\centering\n \\includegraphics[width=220pt]{figure\/Orion-KL-c33s.eps}\n \\caption{Line profiles of the $J$ = 2-1 transitions of CS, C$^{33}$S, C$^{34}$S, and $^{13}$CS toward Orion-KL.}\n \\label{fig_orion}\n\\end{figure}\n\n\nNo systematic dependence of the $^{34}$S\/$^{33}$S ratios on galactocentric distance was found in previous studies. The average $^{34}$S\/$^{33}$S values were 6.3~$\\pm$~1.0 and 5.9~$\\pm$~1.5 in \\citet{1996A&A...305..960C} and \\citet{2020ApJ...899..145Y}, respectively. A particularly low $^{34}$S\/$^{33}$S ratio of 4.3~$\\pm$~0.2 was found in the Galactic center region by \\citet{2020A&A...642A.222H}. 
These authors speculated about the possible presence of a gradient with low values at the center. However, in view of the correction for HFS in Section~\\ref{section_34s33s}, it appears that this is simply an effect of different HFS correction factors, the factor at the Galactic center, with its wide spectral lines, being 1.0, so that (exceptionally) no downward correction is required. \\citet{2020ApJ...899..145Y} considered the effect of HFS but overestimated the ratio of the main component to the total intensity of the C$^{33}$S $J$ = 2-1 line (for details, see their Section~4.2). This resulted in higher $^{34}$S\/$^{33}$S values. Nevertheless, our results also indicate that the $^{34}$S\/$^{33}$S ratios are independent of galactocentric distance (see details in Section~\\ref{section_34s33s}). The average $^{34}$S\/$^{33}$S value with corrections of optical depth toward our sample in the $J$ = 2-1 transition is 4.35~$\\pm$~0.44, which is similar to the value in the Galactic center region derived by \\citet{2020A&A...642A.222H} and lower than the value of 5.61 in the Solar System \\citep{1989GeCoA..53..197A}. This indicates that no systematic variation exists in the $^{34}$S\/$^{33}$S ratios in our Galaxy, that $^{33}$S is (with respect to stellar nucleosynthesis) similar to $^{34}$S, and that the Solar System (see Fig.~\\ref{fig_all33S}) must be peculiar. 
An approximately constant $^{34}$S\/$^{33}$S ratio with opacity correction from the $J$ = 2-1 transition across the Galactic plane leads to a $^{32}$S\/$^{33}$S gradient in our Galaxy as we already mentioned in Section~\\ref{section_32s33s}: $^{32}{\\rm S}\/^{33}{\\rm S}$ = $(2.64 \\pm 0.77)R_{\\rm GC}+(70.80 \\pm 5.57)$, with a correlation coefficient of 0.46.\n\n\n\n\n\\begin{table*}[h]\n\\caption{Line intensity ratios of C$^{33}$S $J$ = 2-1 hyperfine components.}\n\\centering\n\\begin{tabular}{lcccc}\n\\hline\\hline\nSource & FWHM & $R_{211}$ & $R_{212}$ & $R_{213}$ \\\\\n & (km s$^{-1}$) & & & \\\\\n\\hline\n\\label{table_c33s_hfs}\nW3OH & 4.41 & 0.29 $\\pm$ 0.04 & 0.15 $\\pm$ 0.03 & 0.05 $\\pm$ 0.03 \\\\ \nOrion-KL & 4.25 & 0.60 $\\pm$ 0.02 & 0.30 $\\pm$ 0.01 & 0.44 $\\pm$ 0.02 \\\\ \nG359.61$-$00.24 & 3.17 & 0.40 $\\pm$ 0.05 & 0.62 $\\pm$ 0.05 & 0.14 $\\pm$ 0.04 \\\\ \nG006.79$-$00.25 & 2.73 & 0.28 $\\pm$ 0.02 & 0.17 $\\pm$ 0.02 & $\\cdots$ \\\\ \nG010.32$-$00.15 & 2.66 & 0.27 $\\pm$ 0.04 & 0.23 $\\pm$ 0.06 & $\\cdots$ \\\\ \nG016.86$-$02.15 & 3.32 & 0.30 $\\pm$ 0.05 & 0.18 $\\pm$ 0.03 & $\\cdots$ \\\\ \nG017.02$-$02.40 & 3.74 & 0.17 $\\pm$ 0.03 & 0.17 $\\pm$ 0.08 & $\\cdots$ \\\\ \nG018.34$+$01.76 & 2.30 & 0.24 $\\pm$ 0.07 & 0.13 $\\pm$ 0.06 & $\\cdots$ \\\\ \nG019.36$-$00.03 & 3.02 & 0.29 $\\pm$ 0.06 & 0.14 $\\pm$ 0.04 & $\\cdots$ \\\\ \nG023.43$-$00.18 & 3.81 & 0.40 $\\pm$ 0.14 & 1.15 $\\pm$ 0.34 & $\\cdots$ \\\\ \nG024.78$+$00.08 & 4.29 & 0.34 $\\pm$ 0.03 & 0.21 $\\pm$ 0.03 & 0.08 $\\pm$ 0.03 \\\\ \nG028.39$+$00.08 & 3.04 & 0.26 $\\pm$ 0.04 & 0.24 $\\pm$ 0.06 & $\\cdots$ \\\\ \nG028.83$-$00.25 & 2.27 & 0.25 $\\pm$ 0.04 & 0.18 $\\pm$ 0.03 & $\\cdots$ \\\\ \nG030.70$-$00.06 & 4.85 & 0.33 $\\pm$ 0.02 & 0.23 $\\pm$ 0.03 & 0.11 $\\pm$ 0.02 \\\\ \nG030.74$-$00.04 & 3.55 & 0.19 $\\pm$ 0.03 & 0.25 $\\pm$ 0.06 & $\\cdots$ \\\\ \nG030.81$-$00.05 & 6.15 & 0.27 $\\pm$ 0.03 & 0.16 $\\pm$ 0.02 & 0.14 $\\pm$ 0.02 \\\\ \nG032.74$-$00.07 & 4.54 & 0.36 $\\pm$ 0.04 & 0.16 
$\\pm$ 0.03 & $\\cdots$ \\\\ \nG032.79$+$00.19 & 4.78 & 0.09 $\\pm$ 0.05 & 1.72 $\\pm$ 0.28 & $\\cdots$ \\\\ \nG034.41$+$00.23 & 4.56 & 0.32 $\\pm$ 0.04 & 0.28 $\\pm$ 0.04 & 0.12 $\\pm$ 0.03 \\\\ \nG034.79$-$01.38 & 2.40 & 0.44 $\\pm$ 0.15 & 0.14 $\\pm$ 0.06 & $\\cdots$ \\\\ \nG037.42$+$01.51 & 3.18 & 0.20 $\\pm$ 0.05 & 0.10 $\\pm$ 0.02 & $\\cdots$ \\\\ \nW51-IRS2 & 8.55 & 0.13 $\\pm$ 0.09 & 3.63 $\\pm$ 0.36 & 0.71 $\\pm$ 0.13 \\\\ \nDR21 & 2.71 & 0.28 $\\pm$ 0.01 & 0.20 $\\pm$ 0.01 & 0.14 $\\pm$ 0.01 \\\\ \nG097.53$+$03.18 & 5.22 & 0.30 $\\pm$ 0.05 & 0.13 $\\pm$ 0.04 & 0.14 $\\pm$ 0.04 \\\\ \nG109.87$+$02.11 & 3.19 & 0.23 $\\pm$ 0.06 & 0.38 $\\pm$ 0.07 & $\\cdots$ \\\\ \nNGC7538 & 3.71 & 0.22 $\\pm$ 0.01 & 0.21 $\\pm$ 0.01 & $\\cdots$ \\\\ \n\\hline\n\\end{tabular}\n\\tablefoot{ Full width at half maximum values of the main component were obtained from measurements of C$^{33}$S; see Table \\ref{fitting_all}. The errors provided here are 1$\\sigma$.\n}\n\\end{table*}\n\n\\subsection{C$^{36}$S}\n\\label{section_discussion_all36}\n\nAs mentioned in Section~\\ref{section_34s36s}, we find novel potential indications for a positive $^{34}$S\/$^{36}$S gradient with galactocentric radius. The $^{36}$S-bearing molecule C$^{36}$S was first detected by \\citet{1996A&A...313L...1M}. These authors observed the $J$ = 2-1 and 3-2 transitions toward eight Galactic molecular hot cores at galactocentric distances of between 5.0 kpc and 10.0 kpc. \\citet{1996A&A...313L...1M} reported an average $^{34}$S\/$^{36}$S ratio of 115~$\\pm$~17, which is smaller than the value in the Solar System \\citep[200.5,][]{1989GeCoA..53..197A}. This is consistent with this nucleus being of a purely secondary nature. 
Combining the ratios of \\citet{1996A&A...313L...1M} --- after applying new distances (see details in Table~\\ref{table_all36s})--- with our results in the $J$ = 2-1 transition, the following fit could be achieved:\n\\begin{equation}\n^{34}{\\rm S}\/^{36}{\\rm S} = (10.34 \\pm 2.74)R_{\\rm GC}+(57.45 \\pm 18.59), \n\\end{equation}\nwith a correlation coefficient of 0.71. As the $^{34}$S\/$^{33}$S ratios show a uniform distribution across our Galaxy (see details in Section~\\ref{section_hfs_c33s}), a $^{33}$S\/$^{36}$S gradient is also expected. We obtain (2.38~$\\pm$~0.67)$R_{\\rm GC}$+(13.21~$\\pm$~4.48). After applying our $^{32}$S\/$^{34}$S gradient to the $^{34}$S\/$^{36}$S ratios in \\citet{1996A&A...313L...1M} with equation (9), $^{32}$S\/$^{36}$S ratios were then derived and listed in Table~\\ref{table_all36s}. Combined with our results in the $J$ = 2-1 transition, a linear fit to the $^{32}$S\/$^{36}$S ratios is obtained:\n\\begin{equation}\n ^{32}{\\rm S}\/^{36}{\\rm S} = (314 \\pm 55)R_{\\rm GC}+(659 \\pm 374), \n \\end{equation}\nwith a correlation coefficient of 0.84. The $^{34}$S\/$^{36}$S and $^{32}$S\/$^{36}$S ratios are plotted as functions of galactocentric distances in Fig.~\\ref{fig_all36S}. Measurements of $^{34}$S\/$^{36}$S, $^{33}$S\/$^{36}$S, and $^{32}$S\/$^{36}$S are still not numerous. More sources with detected C$^{36}$S lines would be highly\ndesirable, especially in the CMZ and the inner disk within $R_{\\rm GC}$~=~5.0~kpc.\n\n\n\\subsection{Observational bias due to distance effects}\n\\label{section_discussion_distance}\n\nWhile we have so far analyzed isotope ratios as a function of galactocentric distances, there might be a bias in the sense that the ratios could at least in part also depend on the distance from Earth, a bias caused by different linear resolutions. 
In Appendix~\\ref{appendix_spectra}, the $^{12}$C\/$^{13}$C, $^{32}$S\/$^{34}$S, $^{34}$S\/$^{33}$S, and $^{32}$S\/$^{33}$S, as well as the $^{34}$S\/$^{36}$S and $^{32}$S\/$^{36}$S isotope ratios are plotted as functions of the distance from the Sun and shown in Figs.~\\ref{fig_12C13C_2Sun} to \\ref{fig_all36S_2Sun}, respectively. No apparent gradients can be found, which indicates that any observational bias on account of distance-dependent effects is not significant for $^{12}$C\/$^{13}$C, $^{32}$S\/$^{34}$S, $^{32}$S\/$^{33}$S, $^{34}$S\/$^{36}$S, and $^{32}$S\/$^{36}$S. This agrees with the findings of \\citet[][see their Section~4.5]{2020ApJ...899..145Y}.\n\n\n\\subsection{Beam size effects}\n\\label{section_beam}\n\nA good way to check whether the different beam sizes for different lines could affect our results is to compare the isotope ratios derived from different transitions at different frequencies observed with different beam sizes. As shown in Section~\\ref{section_double_32s34s}, $^{32}$S\/$^{34}$S ratios obtained from the double isotope method in the $J$ = 2-1 and 3-2 transitions are in good agreement, suggesting that the effect of beam size is negligible. Furthermore, \\citet{2020A&A...642A.222H} found an average $^{32}$S\/$^{34}$S ratio of 17.9~$\\pm$~5.0 in the envelope of Sgr B2(N) with the Atacama Large Millimetre\/submillimetre Array (ALMA) at a beam size of 1$\\farcs$6, which is consistent with our results in the CMZ from the IRAM 30 meter telescope with beam sizes of about 27$\\arcsec$. \\citet{2020ApJ...899..145Y} derived similar $^{32}$S\/$^{34}$S and $^{34}$S\/$^{33}$S ratios from different telescopes ---that is, the IRAM 30 meter and the ARO 12 meter--- toward six HMSFRs and concluded that beam-size effects are insignificant (see details in their Section~4.1). All this suggests that beam-size effects do not significantly affect our results. 
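The gradient fits quoted throughout this section report a slope, an intercept, and a correlation coefficient for each ratio-versus-$R_{\rm GC}$ relation. As a minimal numerical sketch of such a fit (unweighted, with purely hypothetical measurements; the actual fits in this work are based on the observed sample and may weight by measurement uncertainty), the procedure can be written as:

```python
import numpy as np

def fit_gradient(r_gc, ratio):
    """Unweighted least-squares fit of ratio = slope * R_GC + intercept,
    plus the Pearson correlation coefficient quoted alongside each fit.
    (Illustrative helper, not the code used in this work.)"""
    slope, intercept = np.polyfit(r_gc, ratio, 1)
    corr = np.corrcoef(r_gc, ratio)[0, 1]
    return slope, intercept, corr

# Hypothetical 34S/36S-style measurements, for illustration only
r_gc = np.array([5.0, 6.5, 8.0, 9.5, 11.0])            # galactocentric distance [kpc]
ratio = np.array([108.0, 125.0, 141.0, 155.0, 171.0])  # isotope ratio

slope, intercept, corr = fit_gradient(r_gc, ratio)
```

A fit of this form applied to exactly linear input recovers the slope and intercept of the underlying relation, with a correlation coefficient of unity; scatter in real data lowers the coefficient, as in the values of 0.46--0.84 reported above.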
\n\n\n\n\n\\subsection{Chemical fractionation}\n\\label{section_fractionation}\n\nIsotopic fractionation could in principle affect the isotope ratios derived from the molecules in the interstellar medium. \\citet{1976ApJ...205L.165W} first proposed that gas-phase CO tends to be enriched in $^{13}$CO because of the charge-exchange reaction of CO with $^{13}$C$^+$. Several theoretical studies also support this mechanism \\citep[e.g.,][]{1984ApJ...277..581L,2020MNRAS.497.4333V}, which was then extended to other carbon-bearing species \\citep{2020MNRAS.498.4663L}. Formaldehyde forming in the gas phase is suggested to be depleted in the $^{13}$C-bearing isotopolog \\citep[e.g.,][]{1984ApJ...277..581L}. However, if H$_2$CO originates from dust grain mantles, then the $^{13}$C-bearing isotopolog might be enhanced relative to species like methanol and CO \\citep{2012LPI....43.1611W,2019ApJ...877..154Y}. Recently, \\citet{2020A&A...640A..51C} presented a new gas-grain chemical model and proposed that molecules formed starting from atomic carbon could also show $^{13}$C enhancements through the reaction $^{13}$C + C$_{3}$ $\\rightarrow$ $^{12}$C + $^{13}$CC$_{2}$. As already mentioned in Section~\\ref{section_comparisons_12c13c}, the Galactic $^{12}$C\/$^{13}$C gradient derived from C$^{34}$S in this work is in good agreement with previous results based on measurements of CN, C$^{18}$O, and H$_2$CO. Therefore, chemical fractionation cannot greatly affect the carbon isotope ratios. \n\nTo date, little is known about sulfur fractionation. \\citet{2019MNRAS.485.5777L} proposed a low $^{34}$S enrichment through the reaction of $^{34}$S$^+$ + CS $\\rightarrow$ S$^+$ + C$^{34}$S in dense clouds. A slight enrichment in $^{13}$C was predicted for CS with the $^{13}$C$^{+}$ + CS $\\rightarrow$ C$^+$ + $^{13}$CS reaction \\citep{2020MNRAS.498.4663L}. 
$^{32}$S\/$^{34}$S ratios derived directly from $^{13}$CS and $^{13}$C$^{34}$S and the double isotope method involving $^{12}$C\/$^{13}$C ratios (equation 8) turn out to agree very well, indicating that sulfur fractionation is negligible, as previously suggested by \\citet{2020A&A...642A.222H} and now confirmed by this work. \n\n\\subsection{Interstellar C, N, O, and S isotope ratios}\n\\label{section_discussion_all}\n\nThe data collected so far allow us to evaluate the status of several isotopes with respect to primary or secondary synthesis in stellar objects. From the data presented here in Table~\\ref{table_allratios}, we choose the $^{12}$C\/$^{13}$C, $^{32}$S\/$^{34}$S, $^{32}$S\/$^{33}$S, and $^{32}$S\/$^{36}$S ratios because $^{12}$C is mostly primary \\citep{1995ApJS...98..617T} while $^{32}$S is definitely a primary nucleus \\citep{1995ApJS..101..181W}, against which the other isotopes can be evaluated. A question arises as to whether all these ratios, as well as those from nitrogen and oxygen, can be part of the same scheme.\n\n Comparing the CMZ with the inner disk, the CMZ with the outer Galaxy, the inner disk with the outer Galaxy, and the local ISM with the Solar System, we find increasing values of the $^{12}$C\/$^{13}$C, $^{32}$S\/$^{34}$S, $^{32}$S\/$^{33}$S, and $^{32}$S\/$^{36}$S ratios. All of these values are listed in Table~\\ref{table_allratios_com}. Percentages are clearly highest between $^{32}$S and $^{36}$S. This indicates that $^{36}$S, as opposed to $^{32}$S, is definitely secondary. Percentages between $^{12}$C and $^{13}$C are also high but not as extreme, presumably because $^{12}$C is also synthesized in longer-lived stars of intermediate mass (e.g., \\citealt{2020ApJ...900..179K}). 
More difficult to interpret are the $^{32}$S\/$^{33}$S and $^{32}$S\/$^{34}$S ratios, where percentages are smaller, indicating that $^{33}$S and $^{34}$S are, as already mentioned in Section~\\ref{section_hfs_c33s}, neither fully primary nor secondary. However, percentages in the case of the $^{32}$S\/$^{33}$S ratio systematically surpass those of the $^{32}$S\/$^{34}$S ratio, suggesting a more secondary origin of $^{33}$S with respect to $^{34}$S, even though $^{34}$S\/$^{33}$S appears to be constant across the Galaxy. Finally, local interstellar $^{32}$S\/$^{34}$S and $^{32}$S\/$^{33}$S ratios behave strikingly differently with respect to solar values. While $^{32}$S\/$^{34}$S is (almost) solar, $^{32}$S\/$^{33}$S is far below the solar value. Peculiar Solar System abundance ratios may be the easiest way to explain this puzzling situation. Most likely there is an overabundance of $^{34}$S in the gas and dust that formed the Solar System.\n\n\nAnother clearly primary isotope is $^{16}$O, which allows us to look for $^{16}$O\/$^{18}$O and $^{16}$O\/$^{17}$O ratios \\citep{1993A&A...274..730H,1994LNP...439...72H,1999RPPh...62..143W,2008A&A...487..237W,2020ApJS..249....6Z}. The high percentages in the $^{16}$O\/$^{17}$O ratios show that $^{17}$O is more secondary than $^{18}$O, which is consistent with models of stellar nucleosynthesis, because $^{17}$O is a main product of the CNO cycle while $^{18}$O can also be synthesized by helium burning in massive stars.\n\n$^{14}$N\/$^{15}$N can also be measured; both nuclei can be synthesized in rotating massive stars and AGB stars as primary products \\citep[e.g.,][]{2002A&A...390..561M,2011MNRAS.414.3231K,2020ApJ...900..179K,2018ApJS..237...13L}. However, most of the $^{14}$N is produced through CNO cycling, and is therefore secondary \\citep[e.g.,][]{2011MNRAS.414.3231K,2014PASA...31...30K}. 
The production of $^{15}$N remains to be understood and may be related to novae \\citep[e.g.,][]{2020ApJ...900..179K,2022arXiv221004350R}. None of the stable nitrogen isotopes are purely primary. While $^{14}$N appears to be less secondary than $^{15}$N \\citep{1975A&A....43...71A,1994LNP...439...72H,1999RPPh...62..143W,2012ApJ...744..194A,2015ApJ...804L...3R,2018A&A...609A.129C,2021ApJS..257...39C}, in this case we do not have a clear calibration against an isotope that can be considered to be mainly primary. Remarkably, \\citet{2022arXiv220910620C} reported a rising $^{14}$N\/$^{15}$N gradient that peaks at $R_{\\rm GC}$ = 11 kpc and then decreases, and suggested that $^{15}$N could be mainly produced by novae on long timescales.\n\n\n\\begin{table*}[h]\n\\caption{Interstellar C, N, O, and S isotope ratios.}\n\\centering\n\\begin{tabular}{ccccccc}\n\\hline\\hline\n & Molecule & CMZ & Inner disk &Local ISM & Outer Galaxy & Solar System\\tablefootmark{*} \\\\\n\\hline\n\\label{table_allratios}\n$^{12}$C\/$^{13}$C & C$^{34}$S\\tablefootmark{a}& 27 $\\pm$ 3 & 41 $\\pm$ 9 & 66 $\\pm$ 10 & 74 $\\pm$ 8 & 89 \\\\ \n & CN\\tablefootmark{b} & $\\cdots$ & 44 $\\pm$ 12 & 41 $\\pm$ 11 & 66 $\\pm$ 19 & \\\\ \n & C$^{18}$O\\tablefootmark{c} & 24 $\\pm$ 1 & 41 $\\pm$ 2 & 60 $\\pm$ 5 & 70 $\\pm$ 10 & \\\\ \n & H$_2$CO\\tablefootmark{d} & $\\cdots$ & 40 $\\pm$ 7 & 50 $\\pm$ 13 & 64 $\\pm$ 10 & \\\\ \n & average & 25 $\\pm$ 2 & 42 $\\pm$ 9 & 54 $\\pm$ 10 & 69 $\\pm$ 12 & \\\\\n\\hline\n$^{14}$N\/$^{15}$N & CN\\tablefootmark{e} & $\\cdots$ & 269 $\\pm$ 59 & 314 $\\pm$ 104 & 289 $\\pm$ 85 & 270 \\\\ \n & HCN\\tablefootmark{f} & $\\cdots$ & 284 $\\pm$ 63 & 398 $\\pm$ 48 & 388 $\\pm$ 32 & \\\\ \n & HNC\\tablefootmark{f} & $\\cdots$ & 363 $\\pm$ 100 & 378 $\\pm$ 79 & 395 $\\pm$ 74 & \\\\ \n & N$_2$H$^{+}$\\tablefootmark{g} & $\\cdots$ & 900 $\\pm$ 250 & 496 $\\pm$ 65 & 581 $\\pm$ 140 & \\\\ \n & NH$_3$\\tablefootmark{h} & 40 $\\pm$ 13 & 175 $\\pm$ 46 & 297 $\\pm$ 99 & 96 $\\pm$ 44 & 
\\\\\n\\hline\n$^{16}$O\/$^{18}$O & H$_2$CO\\tablefootmark{i} & 263 $\\pm$ 45 & 327 $\\pm$ 32 & 560 $\\pm$ 25 & 625 $\\pm$ 144 & 490 \\\\\n$^{18}$O\/$^{17}$O & CO\\tablefootmark{j} & 3.4 $\\pm$ 0.1 & 3.6 $\\pm$ 0.2 & 3.9 $\\pm$ 0.4 & 4.8 $\\pm$ 0.6 & 5.5 \\\\\n$^{16}$O\/$^{17}$O\\tablefootmark{**} & & 894 $\\pm$ 155 & 1177 $\\pm$ 132 & 2184 $\\pm$ 244 & 3000 $\\pm$ 786 & 2625 \\\\\n\\hline\n$^{32}$S\/$^{34}$S\\tablefootmark{a} & & 19 $\\pm$ 2 & 18 $\\pm$ 4 & 24 $\\pm$ 4 & 28 $\\pm$ 3 & 23 \\\\\n$^{34}$S\/$^{33}$S\\tablefootmark{a} & & 4.2 $\\pm$ 0.2 & 4.3 $\\pm$ 0.4 & 4.2 $\\pm$ 0.5 & 4.1 $\\pm$ 0.3 & 5.6 \\\\\n$^{32}$S\/$^{33}$S\\tablefootmark{a} & & 70 $\\pm$ 16 & 82 $\\pm$ 19 & 88 $\\pm$ 21 & 105 $\\pm$ 19 & 127 \\\\\n$^{34}$S\/$^{36}$S\\tablefootmark{a} & & 41 $\\pm$ 4 & 122 $\\pm$ 18 & 111 $\\pm$ 16 & 161 $\\pm$ 32 & 200 \\\\\n$^{32}$S\/$^{36}$S\\tablefootmark{a} & & 884 $\\pm$ 104 & 2382 $\\pm$ 368 & 2752 $\\pm$ 458 & 4150 $\\pm$ 828 & 4525 \\\\\n\\hline\n\\end{tabular}\n\\tablefoot{The inner disk values refer to the mean values at galactocentric distances of 2.0 kpc~$\\le R_{\\rm GC} \\le$~6.0 kpc. The local ISM values refer to 7.5 kpc~$\\le R_{\\rm GC} \\le$~8.5 kpc. The outer Galaxy values point to 9.0 kpc~$\\le R_{\\rm GC} \\le$~11.0 kpc. \n\\tablefoottext{*}{From \\citet{1989GeCoA..53..197A}.} \n\\tablefoottext{a}{This work.}\n\\tablefoottext{b}{From \\citet{2002ApJ...578..211S} and \\citet{2005ApJ...634.1126M}.}\n\\tablefoottext{c}{From \\citet{1990ApJ...357..477L,1993ApJ...408..539L}, \\citet{1996A&AS..119..439W}, and \\citet{1998ApJ...494L.107K}.}\n\\tablefoottext{d}{From \\citet{1980A&A....82...41H,1982A&A...109..344H,1983A&A...127..388H,1985A&A...143..148H} and \\citet{2019ApJ...877..154Y}.} \n\\tablefoottext{e}{From \\citet{2012ApJ...744..194A}, \\citet{2015ApJ...804L...3R}, and \\citet{2015ApJ...808L..46F}. 
} \n\\tablefoottext{f}{From \\citet{2012ApJ...744..194A} and \\citet{2018A&A...609A.129C}} \n\\tablefoottext{g}{From \\citet{2015ApJ...804L...3R}. } \n\\tablefoottext{h}{From \\citet{2021ApJS..257...39C}. } \n\\tablefoottext{i}{From \\citet{1981MNRAS.194P..37G} and \\citet{1994ARA&A..32..191W}. } \n\\tablefoottext{j}{From \\citet{2020ApJS..249....6Z}. } \n\\tablefoottext{**}{The $^{16}$O\/$^{17}$O ratios are derived with values of $^{16}$O\/$^{18}$O and $^{18}$O\/$^{17}$O.} }\n\\end{table*}\n\n\\begin{table*}[h]\n\\caption{Comparison of isotope ratios at different galactocentric distances. Given are percentage enhancements.}\n\\centering\n\\begin{tabular}{ccccc}\n\\hline\\hline\n & CMZ & CMZ & Inner disk & Local ISM \\\\\n & $\\downarrow$ & $\\downarrow$ & $\\downarrow$ & $\\downarrow$ \\\\\n & Inner disk & Outer Galaxy & Outer Galaxy & Solar System \\\\\n\\hline\n\\label{table_allratios_com}\n$^{12}$C\/$^{13}$C & 68 $\\pm$ 33 & 176 $\\pm$ 54 & 64 $\\pm$ 21 & 65 $\\pm$ 31 \\\\\n\\hline\n$^{16}$O\/$^{18}$O & 24 $\\pm$ 9 & 138 $\\pm$ 61 & 91 $\\pm$ 43 & -12 $\\pm$ 5 \\\\\n$^{16}$O\/$^{17}$O & 32 $\\pm$ 8 & 236 $\\pm$ 111 & 155 $\\pm$ 73 & 20 $\\pm$ 13 \\\\\n\\hline\n$^{32}$S\/$^{34}$S & -5 $\\pm$ 11 & 47 $\\pm$ 10 & 56 $\\pm$ 18 & -4 $\\pm$ 17 \\\\\n$^{32}$S\/$^{33}$S & 17 $\\pm$ 8 & 50 $\\pm$ 16 & 28 $\\pm$ 6 & 44 $\\pm$ 34 \\\\\n$^{32}$S\/$^{36}$S & 169 $\\pm$ 50 & 369 $\\pm$ 125 & 74 $\\pm$ 31 & 64 $\\pm$ 27 \\\\\n\\hline\n\\end{tabular}\n\\end{table*}\n\n\\subsection{Galactic chemical environment}\n\\label{section_discussion_model}\n\n\n\\citet{2020ApJ...900..179K} established a Galactic chemical evolution (GCE) model based on the GCE model in \\citet{2011MNRAS.414.3231K} with updates with respect to new solar abundances and also accounting for failed supernovae, super-AGB stars, the s-process from AGB stars, and various r-process sites. 
Based on this GCE model, the predicted $^{12}$C\/$^{13}$C, $^{32}$S\/$^{34}$S, $^{32}$S\/$^{33}$S, and $^{32}$S\/$^{36}$S ratios at $R_{\\rm GC}$~=~2.0, 4.0, 6.0, 8.5, 12.0, and 17.0~kpc are obtained and plotted in Figs.~\\ref{fig_gradient_12C13C}, \\ref{fig_gradient_32S34S}, \\ref{fig_all33S}, and \\ref{fig_all36S}. The initial mass function and nucleosynthesis yields are the same for different galactic radii but star formation and inflow timescales ($\\tau_{\\rm s}$ and $\\tau_{\\rm i}$) depend on the Galactic radius (see \\citealt{2000ApJ...539...26K} for the definition of the timescales). Adopted values are $\\tau_{\\rm s}$~=~1.0, 2.0, 3.0, 4.6, 6.5, and 8.8 Gyr as well as $\\tau_{\\rm i}$~=~4.0, 5.0, 5.0, 5.0, 7.0, and 50.0 Gyr for $R_{\\rm GC}$~=~2.0, 4.0, 6.0, 8.5, 12.0, and 17.0~kpc, respectively. The predicted $^{12}$C\/$^{13}$C ratios are in good agreement with our results, while $^{32}$S\/$^{34}$S and $^{32}$S\/$^{36}$S ratios show significant deviations at larger galactocentric distances. $^{32}$S\/$^{33}$S ratios show an offset along the entire inner 12 kpc of the Milky Way. This indicates that current models of Galactic chemical evolution are still far from perfect. In this context, our data will serve as a useful guideline for further even more refined GCE models.\n\nVery recently, \\citet{2022arXiv220910620C} predicted $^{12}$C\/$^{13}$C gradients with four different models addressing nova systems (see details in their Table~2 and Sect.~4), following \\citet{2017MNRAS.470..401R,2019MNRAS.490.2838R,2021A&A...653A..72R}. The gradients from these four models are shown in Fig.~\\ref{fig_12C13C_GCEmodels}. The results from model 1 show a large deviation with respect to the observed values. 
The other three models could reproduce the ratios within the dispersion at galactocentric radii beyond the solar neighborhood, while the inner Galaxy is not as well reproduced.\n\n\\begin{figure}[h]\n\\centering\n \\includegraphics[width=280pt]{newFigures\/1allratio12c_13c-mosels.pdf}\n \\caption{$^{12}$C\/$^{13}$C isotope ratios from observations in this work and GCE models. The red symbol $\\odot$ indicates the $^{12}$C\/$^{13}$C isotope ratio of the Sun. The $^{12}$C\/$^{13}$C gradient obtained from C$^{34}$S in the current work is plotted as a black solid line, with the gray shaded area showing the 1$\\sigma$ interval of the fit. The red crosses visualize the results from the GCE model of \\citet[][see also Section~\\ref{section_discussion_model}]{2011MNRAS.414.3231K,2020ApJ...900..179K}. The dark green, light green, magenta, and pink lines refer to the predicted gradients from models in Table 2 from \\citet{2022arXiv220910620C}. }\n \\label{fig_12C13C_GCEmodels}\n\\end{figure}\n\n\\section{Summary}\n\\label{summary}\n\nWe used the IRAM 30 meter telescope to perform observations of the $J$ = 2-1 transitions of CS, C$^{33}$S, C$^{34}$S, C$^{36}$S, $^{13}$CS, $^{13}$C$^{33}$S, and $^{13}$C$^{34}$S as well as the $J$ = 3-2 transitions of C$^{33}$S, C$^{34}$S, C$^{36}$S, and $^{13}$CS toward a large sample of 110 HMSFRs. The CS $J$ = 2-1 line was detected toward 106 sources, with a detection rate of 96\\%. The $J$ = 2-1 transitions of C$^{34}$S, $^{13}$CS, C$^{33}$S, $^{13}$C$^{34}$S, and C$^{36}$S were successfully detected in 90, 82, 46, 17, and 3 of our sources, respectively. The $J$ = 3-2 lines of C$^{34}$S, $^{13}$CS, C$^{33}$S, and C$^{36}$S were detected in 87, 71, 42, and 1 object(s). 
All the detected rare CS isotopologs exhibit optically thin lines and allow us to measure the isotope ratios of $^{12}$C\/$^{13}$C, $^{32}$S\/$^{34}$S, $^{32}$S\/$^{33}$S, $^{32}$S\/$^{36}$S, $^{34}$S\/$^{33}$S, $^{34}$S\/$^{36}$S, and $^{33}$S\/$^{36}$S with only minor saturation corrections. Our main results are as follows:\n\\begin{itemize}\n \\item Based on the measurements of C$^{34}$S and $^{13}$C$^{34}$S $J$ = 2-1 transitions, we directly measured the $^{12}$C\/$^{13}$C ratios with corrections of opacity. With accurate distances obtained from parallax data \\citep{2009ApJ...700..137R, 2014ApJ...783..130R, 2019ApJ...885..131R}, we confirm the previously determined $^{12}$C\/$^{13}$C gradient. A least-squares fit to our data results in $^{12}\\rm C$\/$^{13}\\rm C$ = (4.77~$\\pm$~0.81)$R_{\\rm GC}$+(20.76~$\\pm$~4.61), with a correlation coefficient of 0.82. \n \\item The Galactic $^{12}$C\/$^{13}$C gradients derived based on measurements of CN \\citep{2002ApJ...578..211S,2005ApJ...634.1126M}, C$^{18}$O \\citep{1990ApJ...357..477L,1996A&AS..119..439W,1998ApJ...494L.107K}, and H$_2$CO \\citep{1980A&A....82...41H,1982A&A...109..344H,1983A&A...127..388H,1985A&A...143..148H,2019ApJ...877..154Y} are in agreement with our results from C$^{34}$S and emphasize that chemical fractionation has little effect on $^{12}$C\/$^{13}$C ratios.\n \\item While previously it had been assumed that a linear fit would provide a good simulation of carbon isotope ratios as a function of galactocentric distance, our analysis reveals that this does not hold for the Galactic center region. While $^{12}$C\/$^{13}$C ratios are lowest in this part of the Milky Way, they clearly surpass values expected from a linear fit to the Galactic disk sources. 
This indicates that there is no strict linear correlation of carbon isotope ratios across the Galaxy.\n \\item We confirm the previously determined $^{32}$S\/$^{34}$S gradients \\citep{1996A&A...305..960C,2020ApJ...899..145Y,2020A&A...642A.222H} with the direct method from $^{13}$CS and $^{13}$C$^{34}$S, as well as with the double isotope method, which also uses $^{12}$C\/$^{13}$C ratios, in the $J$ = 2-1 and $J$ = 3-2 transitions. Opacity corrections could be applied to the $J$ = 2-1 transitions, but not to the $J$ = 3-2 lines, which may show, on average, slightly higher opacities. A $^{32}$S\/$^{34}$S gradient of (0.75 $\\pm$ 0.13)$R_{\\rm GC}$+(15.52 $\\pm$ 0.78) was obtained based on a large dataset of 90 values from our double isotope method in the $J$ = 2-1 transition. The 19 sources permitting the direct determination of this ratio with $^{13}$CS\/$^{13}$C$^{34}$S yield $^{32}$S\/$^{34}$S=(0.73 $\\pm$ 0.36)$R_{\\rm GC}$+(16.50 $\\pm$ 2.07). \n \\item Differences between the behavior of the $^{12}$C\/$^{13}$C and $^{32}$S\/$^{34}$S ratios as a function of galactocentric distance are reported and should be used as input for further chemical models: (a) In the inner disk the $^{12}$C\/$^{13}$C ratios at $R_{\\rm GC} \\ge$ 4.0 kpc are clearly higher than the value in the CMZ, while the $^{32}$S\/$^{34}$S ratios in the CMZ and inner disk are similar, as first suggested by \\citet{2020A&A...642A.222H}. (b) In the local ISM, the $^{12}$C\/$^{13}$C ratio is well below the Solar System value but $^{32}$S\/$^{34}$S is still quite close to it. All of this indicates that, unlike $^{13}$C, $^{34}$S is not a clean secondary isotope.\n \\item There is no notable $^{34}$S\/$^{33}$S gradient across the Galaxy. Ratios are well below the values commonly reported in earlier publications. This is a consequence of accounting for the full hyperfine structure splitting of the C$^{33}$S lines. 
The average value of $^{34}$S\/$^{33}$S derived from the $J$ = 2-1 transition lines after corrections for opacity toward our sample is 4.35~$\\pm$~0.44.\n \\item While there is no $^{34}$S\/$^{33}$S gradient with galactocentric radius, interstellar $^{34}$S\/$^{33}$S values near the solar neighborhood are well below the Solar System ratio, most likely suggesting that the Solar System $^{34}$S\/$^{33}$S ratio, and perhaps also its $^{18}$O\/$^{17}$O ratio, is peculiar. A comparison of local interstellar and Solar System $^{32}$S\/$^{34}$S and $^{34}$S\/$^{33}$S ratios suggests that the Solar System may have formed from gas and dust with a peculiarly high $^{34}$S abundance. The data also indicate that $^{33}$S, like $^{34}$S, is not a clean primary or secondary product of nucleosynthesis.\n \\item For the first time, we report a $^{32}$S\/$^{33}$S gradient in our Galaxy: $^{32}{\\rm S}\/^{33}{\\rm S}$ = $(2.64 \\pm 0.77)R_{\\rm GC}+(70.80 \\pm 5.57)$, with a correlation coefficient of 0.46. \n \\item We find the first tentative indications of a positive $^{34}$S\/$^{36}$S gradient with galactocentric radius. Combining the $^{34}$S\/$^{36}$S ratios from \\citet{1996A&A...313L...1M} with our new opacity-corrected data in the $J$ = 2-1 transition, and adopting up-to-date distances, yields a linear fit of $^{34}{\\rm S}\/^{36}{\\rm S}$ = $(10.34 \\pm 2.74)R_{\\rm GC}+(57.45 \\pm 18.59)$, with a correlation coefficient of 0.71. Considering the uniform $^{34}$S\/$^{33}$S ratios in our Galaxy, a $^{33}$S\/$^{36}$S gradient of (2.38~$\\pm$~0.67)$R_{\\rm GC}$+(13.21~$\\pm$~4.48) is also obtained. \n \\item For the first time, we report a tentative $^{32}$S\/$^{36}$S gradient with galactocentric radius: $^{32}{\\rm S}\/^{36}{\\rm S}$ = $(314 \\pm 55)R_{\\rm GC}+(659 \\pm 374)$, with a correlation coefficient of 0.84. Our measurements are consistent with $^{36}$S being a purely secondary nucleus. 
However, observations of $^{34}$S\/$^{36}$S and $^{32}$S\/$^{36}$S isotope ratios are still relatively few, especially in the CMZ and the inner disk within $R_{\\rm GC}$~=~5.0~kpc.\n \\item The predicted $^{12}$C\/$^{13}$C ratios from the latest Galactic chemical evolution models \\citep[e.g.,][]{2020ApJ...900..179K,2021A&A...653A..72R,2022arXiv220910620C} are in good agreement with our results, while $^{32}$S\/$^{34}$S and $^{32}$S\/$^{36}$S ratios show significant differences at larger galactocentric distances. $^{32}$S\/$^{33}$S ratios even show clear offsets along the entire inner 12 kpc of the Milky Way. Taken together, these findings provide useful guidelines for further refinements of models of the chemical evolution of the Galaxy.\n\\end{itemize}\n\n \n\\begin{acknowledgements}\nWe wish to thank the referee for useful comments. Y.T.Y. is a member of the International Max Planck Research School (IMPRS) for Astronomy and Astrophysics at the Universities of Bonn and Cologne. Y.T.Y. would like to thank the China Scholarship Council (CSC) and the Max-Planck-Institut f\\\"{u}r Radioastronomie (MPIfR) for the financial support. Y.T.Y. also thanks his fiancee, Siqi Guo, for her support during this pandemic period. C.K. acknowledges funding from the UK Science and Technology Facility Council through grant ST\/R000905\/1 and ST\/V000632\/1. We thank the IRAM staff for help provided during the observations.\n\\end{acknowledgements}\n\n\n\n\\bibliographystyle{aa}\n\n\n\n\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nSimple organisms like fungi and slime moulds are able to display complex behaviours. This is surprising given that their network-like body plan lacks any central organizing centre. The slime mould \\emph{Physarum polycephalum}\\ has emerged as a model system to study the complex dynamics these organisms use to adapt to their environment. 
The organism has been shown to find the shortest path through a maze \\cite{Nakagaki:2000} and to connect food sources in a network that is efficient and at the same time robust, comparable to man-made transport networks \\cite{Tero:2010}. Furthermore, the slime mould distributes its body mass among several resources to obtain an optimal diet \\cite{Dussutour:2010} and is able to anticipate recurring stimuli \\cite{Saigusa:2008}.\n\n\\begin{figure*}[t]\n\\centering\n\\includegraphics[width=\\textwidth]{cutsite2.pdf}\n\\caption{Wound healing process in \\emph{P. polycephalum}\\ illustrated at four time points using bright field images. The cut occurred at \\SI{18}{\\minute} and the fan grown at the cut site reached its maximal size at \\SI{60}{\\minute}. The network morphology was restored after \\SI{85}{\\minute}.}\n\\label{img:cutsite}\n\\end{figure*}\n\n\\emph{P. polycephalum}\\ is a true slime mould that forms a plasmodial network. Nuclei keep on dividing without forming cell walls, which results in a syncytial web-like network. The cytoplasm within this tubular network flows back and forth in a shuttle flow \\cite{Kamiya:1981}. These cytoplasmic flows are driven by cross-sectional contractions of the actin-myosin meshwork lining the gel-like tube walls \\cite{WohlfarthBottermann:1979}. Flows are organized across the entire network in a peristaltic wave of contractions that matches organism size \\cite{Alim:2013}. Flows generated in the organism are optimized for transport, as contractions increase the effective dispersion of particles far beyond molecular diffusivity by a mechanism called Taylor dispersion \\cite{Marbach:2016}.\n\n\\emph{P. polycephalum}\\ adapts its network-like morphology to its environment by chemotaxis \\cite{Ueda:1976,DURHAM:1976,Chet:1977}. Here, stimulants are classified as attractants or repellants depending on whether the organism migrates toward or away from them. 
Stimulants have also been shown to affect cross-sectional contractions organism-wide, increasing their frequency and amplitude for an attractant and decreasing them for a repellant \\cite{Miyake:1994,Hejnowicz:1980}. A variety of chemical stimuli have been discussed for \\emph{P. polycephalum}, with glucose being a prominent attractant and salts like NaCl being effective repellants \\cite{Kincaid:1978,HIROSE:1982,MCCLORY:1985}. Temperature \\cite{Matsumoto:1988,Takamatsu:2004} and light \\cite{WohlfarthBottermann:1982,Nakagaki:1999} have also been found to act as stimulants that trigger organism-wide restructuring of the transport networks' morphology. In fact, the cytoplasmic flows themselves serve as the medium by which stimuli pervade the organism \\cite{Alim:2017}. \n\nMuch less is known about the impact of mechanical perturbations on the organism. In its natural habitat the slime mould suffers predation from grazing invertebrates, causing severing that disrupts the transport network and its cytoplasmic flows. In experiments it has been found that quickly stretching a strand by 10-20\\% of its length while keeping it intact increases the amplitude of oscillations \\cite{Kamiya:1972}. Excising a single strand from a plasmodial network has been observed to lead to a roughly 20 minute cessation of contractions in the strand until recovery \\cite{Yoshimoto:1978}. This phenomenon was not observed for strands excised from the growing fan region of the slime mould, leading to speculation that the motive force is limited to the fan. Yet, the cessation of contractions turned out to be hard to reproduce, see \\cite{Cieslawska:1984} and references therein. Among these discordant observations, what remains established is local gelation of cytoplasmic flows upon touch without severing the organism \\cite{Achenbach:1981}. 
Despite the limited knowledge, wounding the organism by severing the network is part of daily laboratory routines and a common perturbation in its natural habitat. \n\nHere we investigate \\emph{P. polycephalum}'s dynamics during wound healing following the quick and complete severing of a tube within the organism's network. We follow the process of wound healing across the individual's entire body, over the course of one hour after severing. The exemplary quantitative analysis of organism-wide contractions reveals a stepwise response spanning four different states. Shortly after severing, the contractions are often marked by an increase in amplitude and frequency, followed by a several-minute-long cessation of contractions and stalling of cytoplasmic flows. This resting state is terminated by a sudden restart of vigorous contractions as the severed tube re-fuses. The vigorous state then transitions into a state of network-spanning contractions and continuous fan growth at the wounding site until the organism reverts to pre-stimulus dynamics. Timing and significance of the individual steps vary with the severity of cutting and the cut site location within the network. For example, stalling is found to be less pronounced when the network is cut in a fan-like region. Overall, quick and complete severing triggers a response pattern with characteristics of the response to an attractive stimulus, including an increase in amplitude and frequency and net movement toward the stimulus site, see Fig.~\\ref{img:cutsite}. The reproducibility of stalling clarifies earlier contradictions and at the same time opens new avenues to investigate the biochemical dynamics behind the highly coordinated acto-myosin contractions underlying \\emph{P. polycephalum}'s fascinating dynamics. \n\\section{Methods}\n\\subsection{Culturing and data acquisition}\nThe plasmodium is prepared from microplasmodia grown in liquid medium. 
The recipe for the medium is inspired by \\cite{Fessel:2012}, see Sec. S1. The advantage of this method over growing the plasmodium on oat flakes or bacteria is the ability to precisely control the nutritional state and amount of the organism. Also, plasmodia grown this way are free from oat flake residues or vacuoles containing food, which provides a cleaner sample for imaging. To prepare the plate for imaging, 0.2-0.5 mL of the microplasmodia grown in a shaking culture at $30^{\\circ}$C are transferred to a 1.5\\% agar plate and stored in a closed, but not sealed, dish in the dark. After 12-24 hours, the microplasmodia fuse into a single plasmodium. The plasmodium is ready for imaging when there are no visible traces of liquid medium and the organism has assumed its characteristic network shape, which usually occurs up to 36 hours after plating.\n\nThe imaging is performed with a Zeiss Axio Zoom V.16 microscope, equipped with a Zeiss PlanNeoFluar 1x\/0.25 objective and a Hamamatsu ORCA-Flash 4.0 digital camera. A green filter (550\/50~nm) is placed over the transmission light source of the microscope to diminish \\emph{P. polycephalum}'s response to the light, and a humidity chamber prevents the sample from drying out. Image acquisition is performed in the Zeiss ZEN 2 (Blue Edition) software with the bright-field setting. During the acquisition, the illumination of the sample is kept constant, and an image is taken every 3 seconds. The plasmodium is imaged for $\\sim$1 hour before the application of the mechanical stimulus to allow for accommodation to the light \\cite{DURHAM:1976}. The stimulus is applied manually, using a microinjection needle with a blunt tip. The needle tip is held above the surface of the agar at a small angle and quickly dragged across the chosen plasmodial tube. The cut is severe and complete if the two parts of the tube separate completely. 
The plasmodium is then further imaged for more than 1 hour.\n\nUsing microplasmodia is so far the optimal way of obtaining non-severed networks, where the size and nutritional state are reproducible. However, there are challenges during the imaging that decrease the reproducibility of the experiment. In particular, plasmodia are highly motile and change their morphology accordingly. Furthermore, the organism tends to develop very large foraging fronts, which are not a suitable input for the presented comprehensive data analysis as they lack network characteristics. Lastly, the microscope light can act as a stimulus \\cite{WohlfarthBottermann:1982,Nakagaki:1999,Tero:2010}, and even the green-filtered low-intensity illumination may cause the network to respond and change its behaviour to escape the imaging region. Combined, these challenges make it difficult to achieve reproducible experiments and the required stability of the network morphology over time.\n\n\\subsection{Comprehensive network-based contraction analysis}\nTo quantify contraction dynamics we analyse bright field recordings in two different ways: for two morphologically static networks (see E2 and E3 in the experiment list) we perform an exhaustive network-based analysis as outlined in the following (see Fig.~\\ref{img:05_lineplots} and Fig. S4). For the additional 19 specimens, which alter their network morphology dramatically over the course of the experiment, we analyse kymographs along static parts of the network as described in detail in Sec. S3 (see E1 and Mov. S5 for an example).\n\nImages recorded as a time series are processed as 8-bit uncompressed TIFs. At first every image is processed separately, then the results are stitched together, largely following Ref.~\\cite{Alim:2013}, and lastly the combined time series is analysed. On every image, the background is removed with the rolling-ball method. Then the image is used to create a mask, a binary image, with an intensity threshold that separates the network from the background. 
The mask is enhanced further, i.e.~only the biggest structure is considered, small holes are filled and single-pixel edges are smoothed. Subsequently, the resulting mask is used as a template for extracting the network's skeleton with a thinning method. In the skeletonized mask each pixel can be understood as a data point representing local intensity and diameter (see Fig.~\\ref{img:02_skel}). Local diameter is calculated as the largest fitting disk radius around the point within the mask. Within this disk the average intensity is computed and saved as intensity at the considered data point. Intensity and diameter anti-correlate due to the optical density of the slime mould and can therefore be used interchangeably considering Beer-Lambert law. Individual data points are attributed to a specific network branch of the network skeleton. To represent network topology, the network is broken down into vertices and edges where vertices describe pixel positions of branching points and edges represent two connected vertices. Each edge then acts as a parent for one specific branch. In this sense edges are abstracted simple connections and branches represent pixel-based resolution of a tube.\n\n\\begin{figure}[t]\n\\centering\n\\includegraphics[width=0.5\\textwidth]{02_skeleton_170220.pdf}\n\\caption{Scheme of intensity and diameter data extraction based on \\emph{P. polycephalum}\\ bright field images. The light grey area depicts the network mask based on the bright field images. Dark grey lines represent the network skeleton and the corresponding topology is shown in blue. Each pixel of the skeleton acts as a reference point for data derived during the analysis. The diameter is set as the distance from the reference point to the next non-mask pixel. 
The intensity is calculated by averaging individual pixel intensities over a corresponding disk (red).} \\label{img:02_skel}\n\\end{figure}\n\nAfter the network is extracted in space, the edges, vertices, diameters, and intensities are concatenated in time. To map intensity and diameter over time, a reference image is used, usually from an early time point. For every data point the shortest distance to any pixel in the reference image is calculated. This gives a quasi-static (x, y, t) $\\rightarrow$ (intensity, diameter) dataset, i.e.~the topology and vertex positions stay the same, but intensity and diameter can vary. This is justified as long as growth of the organism and vertex movement are minimal. The oscillatory behaviour of tubes in a certain time window can be described by four time-dependent variables, namely amplitude $A$, frequency $f$ (or period $P$), phase $\\varphi$ and trend (base diameter) $d$. Each can be calculated from the time-evolution of the diameter or the intensity data, but unless stated otherwise the following results are derived from the intensity analysis only. \n\nThe trend $d(t)$ is obtained with a moving-average filter with a kernel width of \\SI{200}{s} on each time trace (see Fig.~\\ref{img:03_linecomp}). The dataset is detrended with the calculated trend and smoothed with a Gaussian using a kernel width of \\SI{39}{s}. The kernel widths were chosen to extract the characteristic contraction pattern, which usually has a period of \\SI{\\sim90}{\\second}. The values at every data point are stored as a complex-valued time array, with the detrended and smoothed intensity representing the real part and the corresponding Hilbert transform representing the imaginary part, see S2 for more details. This time array, denoted the analytic signal, serves as the basis for obtaining the instantaneous phase, frequency and amplitude by computing the angle or the absolute value of the complex time series. 
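The detrending, smoothing, and Hilbert-transform steps just described can be sketched in Python. This is a minimal illustration, not the authors' code; in particular, interpreting the 39 s "kernel width" as the full (roughly six-sigma) width of the Gaussian is an assumption:

```python
import numpy as np
from scipy.ndimage import uniform_filter1d, gaussian_filter1d
from scipy.signal import hilbert

def oscillation_params(trace, dt=3.0, trend_win=200.0, smooth_win=39.0):
    """Extract trend, amplitude, phase, and instantaneous frequency
    from a single-pixel intensity trace sampled every dt seconds."""
    # trend: moving average over a 200 s window
    trend = uniform_filter1d(trace, size=max(1, int(trend_win / dt)))
    # detrend, then smooth with a Gaussian; the 39 s kernel width is
    # taken here as the full (~6 sigma) window, which is an assumption
    detrended = gaussian_filter1d(trace - trend, sigma=(smooth_win / 6.0) / dt)
    # analytic signal: real part = signal, imaginary part = Hilbert transform
    z = hilbert(detrended)
    amplitude = np.abs(z)
    phase = np.unwrap(np.angle(z))
    # instantaneous frequency (Hz) from the phase derivative
    frequency = np.gradient(phase, dt) / (2.0 * np.pi)
    return trend, amplitude, phase, frequency

# synthetic trace: a ~90 s period oscillation on top of a slow drift
t = np.arange(0.0, 1800.0, 3.0)
trace = 1.0 + 0.05 * t / 1800.0 + 0.2 * np.sin(2.0 * np.pi * t / 90.0)
trend, amp, phase, freq = oscillation_params(trace)
```

Averaging such per-pixel amplitude and frequency values over a region of interest then yields time traces of the kind discussed in the Results section.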
Finally, the results are mapped back onto the network structure for each time point. In this fashion one can follow oscillatory behaviour resolved in time and space. Furthermore, the maps can be clustered in sub-networks and averaged separately to pinpoint local events in time. It should be mentioned that averaging of results for line plots, i.e.~Fig.~\\ref{img:05_lineplots}, is always done after the data-point based analysis took place. In this way for example, the apparent amplitude of the averaged intensity (Fig.~\\ref{img:05_lineplots}D) can be lower than the amplitude of each data point averaged (Fig.~\\ref{img:05_lineplots}B).\n\\begin{figure}[h]\n\\centering\n\\includegraphics[width=0.47\\textwidth]{03_linecomp_170306.pdf}\n\\caption{Derivation of oscillation specific parameters, i.e.~amplitude $A$(t), frequency $f$(t) and trend $d$(t), from single pixel time series. The trend is calculated using a moving average with a kernel width of \\SI{200}{s}. Intensity is filtered with a Gaussian of width \\SI{39}{s}. Amplitude and frequency are calculated from the absolute value and angle of the complex-valued analytic signal, respectively.} \\label{img:03_linecomp}\n\\end{figure}\n\\section{Results}\n\\begin{figure*}[h]\n\\centering\n\\includegraphics[width=\\textwidth]{04_maps_170303.pdf}\n\\caption{Time evolution of an exemplary network and its spatially mapped oscillation parameters at \\SI{13}{\\minute}, \\SI{27}{\\minute}, \\SI{33}{\\minute} and \\SI{55}{\\minute}. The network was cut in the centre at \\SI{17.3}{\\minute} (\\emph{scissor icon}). Top row depicts the raw bright field data, middle row the local amplitude, and bottom row the local frequency. Amplitude and frequency decrease locally, first at the lower sub-network (\\emph{small dotted arc}) at \\SI{27}{\\minute}, subsequently at upper sub-network (\\emph{large dotted arc}) at \\SI{33}{\\minute}. At \\SI{38}{\\minute} cytoplasmic flows are re-established at the wounding site. 
Finally, amplitude and frequency values recover.} \\label{img:04_maps}\n\\end{figure*}\n\\subsection{Wounding induces fan growth at cut site}\nWe observe specimens before and after a quick and complete severing of a tube to follow the response of \\emph{P. polycephalum}\\ to wounding (see Fig.~\\ref{img:04_maps}A, Mov. S1 and Mov. S5). Bright field movies reveal that cutting of main tubes distal to fans triggers cessation of contractions followed by stalling of cytoplasmic flow (n=15 out of 21). After contractions resume, the severed tube fuses back together (n=21 out of 21), i.e.~flow is re-established, and a fan starts to grow at the cut site. Furthermore, we observe accumulation of body mass close to the cut site, which is most prominent in peripheral cuts (Fig. S2). However, the growth is transient, and after some time the initial morphology is restored and the organism returns to typical behaviour comparable to before wounding.\\\\\n\nIn view of the previously mentioned technical limitations, we selected one representative dataset with prominent discernible features for network-based analysis. The following findings are derived from this dataset and later compared with other experiments. The specific timing of events in the representative data set is as follows (see Fig.~\\ref{img:04_maps}). Two tubes are severed at \\SI{17.3}{\\minute}, effectively dividing the network into two parts. In both sub-networks, the bigger and the smaller part, flows stall transiently around \\SI{30}{\\minute}. At \\SI{38}{\\minute} a connecting tube is reinstated and starts to re-establish cytoplasmic flows across the cut site. Until about \\SI{63}{\\minute} a transient fan is created at the cut site. At \\SI{90}{\\minute} the initial morphology is restored and fans grow elsewhere.\n\n\\subsection{Spatial mapping reveals localized stalling}\nWe perform network-based analysis on the wounded specimen to extract the interplay of contractions during the healing response. 
In particular, we map out the amplitude and frequency of contractions spatially (see Fig.~\\ref{img:04_maps}, Mov. S2 and Mov. S3). This allows us to exactly localize the onset of stalling as it goes hand in hand with low values of amplitude and frequency. Likewise, patterns in contraction dynamics in a region of interest are identified by spatially averaging amplitude and frequency in this region (see Fig.~\\ref{img:05_lineplots}).\n\nIn the representative dataset, wounding separates the network into two sub-networks. Spatial mapping reveals that oscillations cease on different time-scales in the two sub-networks. By identifying the two sub-networks as separate regions of interest, we quantify the patterns in contraction taking the spatial average of the respective contraction variables in each region. The small sub-network shows a drop in amplitude at \\SI{21.5}{\\minute} by \\SI{63}{\\percent} and only recovers eight and a half minutes later to comparable values. Here, the percentage is given as ratio of time averages before, during and after stalling. In detail, the averages of the first \\si{21.5} minutes, the \\si{9.5} minutes during stalling and \\si{15} minutes after stalling were considered. The bigger sub-network drops significantly later at \\SI{28}{\\minute} by \\SI{51}{\\percent} and recovers to \\SI{29}{\\percent} below the initial value nine minutes later. In the same time frames the frequency drops by \\SI{32}{\\percent} and \\SI{45}{\\percent} for the small and big sub-network, respectively. Yet, neither sub-network recovers its frequency fully right after the stalling phase. Only the small sub-network recovers 35 minutes later to initial frequencies whereas the bigger region levels off \\SI{35}{\\percent} below the initial value.\\\\\nFurthermore, the phase patterns over time (see Mov. S4) reveal changes in the travelling waves upon cutting. 
Initially (0 to \\SI{17.3}{\\minute}) one can observe peristaltic waves from the tail (right-hand side) to the front (left-hand side), which finally merge into concentric patterns in the fan regions. Then, at 18 to \\SI{30}{\\minute}, the small sub-network slows down noticeably (see the change in frequency) and the big sub-network contracts with less apparent spatial correlation, i.e.~the peristaltic wave pattern is temporarily lost.\n\n\\subsection{Fan growth phase coincides with stable network-spanning contractions}\nAfter the re-fusing of the two sub-networks, another distinct phase characterized by stable network-spanning contraction dynamics can be observed. In Fig.~\\ref{img:05_lineplots}D contractions appear uniform from \\SI{44}{\\minute} until \\SI{63}{\\minute}. During this phase, amplitude and frequency level off to a stable value with little fluctuation. The small sub-network shows a slight increase in frequency over this period and has more fluctuations in the average intensity data than the big sub-network. Note that the time frame of these contractions coincides with fan formation at the cut site. Furthermore, the end of this phase also coincides with the largest fan with respect to area.\\\\\nNetwork-spanning contractions are further supported by the phase time series. When considering the phase development, one can already observe a peristaltic wave travelling towards the cut site in the small sub-network as early as \\SI{30}{\\minute}. A spanning pattern in the large sub-network is reinstated around the \\SI{35}{\\minute} mark, and a global pattern (small and large sub-network) appears roughly three minutes after re-fusing (\\SI{40}{\\minute}). Then a standing wave pattern appears between the central region, including the cut site, and the periphery. It is stable and network-spanning until \\SI{63}{\\minute}. 
Subsequently the phase pattern breaks into a peristaltic wave similar to the pre-cut pattern and propagates from the tail and the small sub-network into fan regions in the large sub-network.\n\n\\subsection{Stalling and fan growth periods are bridged by distinct transition periods}\nCloser analysis of contraction dynamics over time reveals that the time point of the cut, the stalling phase and the fan growth phase are bridged by transition phases of high fluctuation. Particularly in the presented case, before stalling occurs, amplitude and frequency peak briefly in both sub-networks (see arrows in Fig.~\\ref{img:05_lineplots}). In the small sub-network this peak coincides with the cut, whereas another ten minutes pass for the big sub-network before the amplitude reaches its maximum. Surprisingly, here the frequency decline occurs three minutes before the amplitude drops. After stalling the amplitude increases sharply in both sub-networks, yet stays below previous values in the big sub-network. The small network undergoes a phase of roughly \\SI{13}{\\minute} where the amplitude oscillates vigorously. This also coincides with a second frequency drop even though there is no apparent drop in amplitude at this time point. After the fan growth phase, amplitude and frequency show slight gradients once more. Here behaviour becomes comparable to the pre-cut state as the slime mould develops a preferred growth direction in the periphery and continues foraging.\n\n\\begin{figure}[hpt]\n\\centering\n\\includegraphics[width=0.47\\textwidth]{05_lineplots_170303.pdf}\n\\caption{Comparison of oscillation parameters in the big and small sub-network depicted in (A), which result from the cut. The grey area (cut site) is not considered in the analysis. Time series of amplitude (B), frequency (C) and intensity (D) are averaged in the respective domains and compared; top: big sub-network, bottom: small sub-network. In each of these plots the black dashed line indicates the moment of the cut. 
The first grey dashed line marks the time point of fusion and the second the moment of maximal fan size. Black bars underline periods of stalling in (B) and fan growth in (D). In (D) the solid line represents the Gaussian filtered intensity (kernel width = 39~s) and markers show raw averaged data. Black arrows indicate respective extremal peaks in the transition periods. Four black dots on the time line correspond to the four time points chosen in Fig.~\\ref{img:04_maps}.\n}\\label{img:05_lineplots}\n\\end{figure}\n\n\\subsection{Fan creation and stalling are reproducible for complete severing}\nFor comparison, we analysed a second dataset with the same network-based method (see Fig. S4). The key features, i.e.~cut repair, stalling, a transition phase, stable network-spanning contractions and return to pre-cut behaviour, are found likewise, but the succession and timing of the specific events vary. This dataset has a weaker fan growth at the cut site and the time point of maximal fan size follows immediately after fusion. Given the short period of fan growth, global network-spanning contractions are not observed. However, standing phase wave patterns are visible in the larger sub-network before fusion. Lastly, the transition phase shows peaking amplitude and fluctuating frequencies and reverberates for more than 30 minutes. At \\SI{70}{\\minute} the network reinstates a peristaltic wave toward peripheral fan regions, resuming pre-cut dynamics.\\\\\n\nIn further experiments analysed with a kymograph-based approach, we confirmed stalling to be a common response after a cut (see Fig. E1, n=15 out of 21). However, the degree and duration of stalling vary between experiments and are most reproducible for a severe cut close to the centre of the network.\\\\\nIn detail, we observe that both the degree and duration of stalling depend on the network size and morphology, cut location, possibilities of re-routing the flow through neighbouring tubes and presence of large fans.
Also, a network undergoing quick changes in morphology due to a presumed light shock is less likely to show stalling. Varying the cut location shows that complete severing of a tube with a comparably large diameter and few neighbouring tubes results in strong stalling (see experiments E[2, 3, 5, 6, 8, 9, 12, 13, and 18]). The effect is even more pronounced in smaller networks and on tubes close to the centre of the network (E[2, 3, 5, 8, and 18]). Stalling is less pronounced, as measured by the relative change in amplitude and frequency as well as visual inspection of bright field data, if severing was applied to fan-like regions or peripheral tubes (E[10, 11, 14, 15, 16, 17, 19, 20, and 21]). If a severed tube had alternative routes with a comparable flow direction, neighbouring tubes inflated shortly after the cut, indicating a re-routing of flow. Yet, in this case stalling severity ranged from non-existent (E19) to a full stop (E1). In all data sets fan growth is observed around the cut site, yet duration and fan size varied greatly (see E2 and E9 as maximal and minimal examples). \\\\\nIn all 15 experiments that show stalling, the period lasted for a minimum of three minutes. The exact time point of stalling onset and its duration varied. The duration of the transition period between cut and stalling also varied, from complete omission up to \\num{22} minutes. In 7 out of 15 experiments, a vigorous phase of increased frequency or amplitude fluctuations could be observed in the transition phases.\n\n\\section{Discussion}\nWe investigate \\emph{P. polycephalum}'s response to wounding in the form of a quick and complete severing of tubes using bright field microscopy and quantitative analysis of contraction patterns. Mapping out the contraction amplitude and frequency in space and time allows us to uncover a multi-step pattern of wound healing in \\emph{P. polycephalum}.
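The Gaussian temporal smoothing applied to the averaged intensity traces (kernel width 39 s, cf. the caption of Fig.~5D) can be sketched in a few lines. The sampling interval, function name, and boundary handling below are illustrative assumptions, not an excerpt of the actual analysis pipeline.

```python
import math

def gaussian_smooth(values, dt, sigma):
    """Smooth a regularly sampled time series with a Gaussian kernel.

    values: raw averaged intensity samples
    dt:     sampling interval in seconds (assumed value for illustration)
    sigma:  kernel width in seconds (39 s in Fig. 5D)
    """
    half = int(3 * sigma / dt)  # truncate the kernel at 3 sigma
    kernel = [math.exp(-0.5 * (i * dt / sigma) ** 2)
              for i in range(-half, half + 1)]
    norm = sum(kernel)
    kernel = [k / norm for k in kernel]

    smoothed = []
    for i in range(len(values)):
        acc, wsum = 0.0, 0.0
        for j, w in enumerate(kernel):
            idx = i + j - half
            if 0 <= idx < len(values):
                acc += w * values[idx]
                wsum += w
        smoothed.append(acc / wsum)  # renormalize at the boundaries
    return smoothed
```

Renormalizing the clipped kernel at the trace boundaries keeps a constant signal exactly constant, which is a convenient sanity check for any such filter.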
\n\nThe key to our network-based analysis is mapping contraction variables onto a few pixels serving as the skeletonized backbone of the complete network. This representation allows us to capture contraction dynamics across the entire network over the course of several hours with a manageable amount of data. Furthermore, spatial mapping visualizes abstract variables in an accessible way that outlines regions of interest or patterns in space. For example, in the representative data set the time-shift in the response pattern between the two sub-domains of the network would have been lost when averaging contraction dynamics across the entire network (see Fig. S1).\n\nAmong the multiple steps in the response to wounding, the cessation of oscillations and stalling of the cytoplasmic streaming is most striking. The phenomenon of stalling of cytoplasmic flows has been observed previously~\\cite{Kamiya:1972,Yoshimoto:1978}, but its reproducibility was deemed questionable \\cite{Cieslawska:1984}. Our work shows that cut location and severity are crucial parameters for inducing reproducible stalling. The stalling period is omitted when a tube is not completely severed, or cut in a way that allows the cut ends to rejoin quickly. In addition, the specific body plan affects the impact of a cutting stimulus. For example, severed fan-like regions show less pronounced stalling. However, we find reproducible strong stalling in networks where the affected tubes are crucial connections that cannot be re-routed easily -- thereby clarifying previously discordant observations.\\\\\n\nStimuli are commonly classified as attractants or repellents. The response of \\emph{P. polycephalum}\\ to an attractive stimulus includes fan growth and mass transport towards the stimulus site, often accompanied by an increase in oscillation frequency and amplitude.
When we apply a wounding stimulus resulting in complete cutting of a tube, we observe a multi-step response pattern where only two out of four steps show a noticeable increase in amplitude and frequency. Yet, wounding implies that the network architecture is perturbed. Taking into account that contraction frequency decreases as organism size decreases \\cite{Kuroda:2015}, the impairment of network architecture itself might counteract any increase in frequency. Despite the weak indication from contraction frequency and amplitude, we always observe fan growth and movement of mass toward the cut site regardless of the tube hierarchy, plasmodium size or the severity of the cut. Fan growth far exceeds the initial spillage of cytoplasm due to cutting. Furthermore, we often identify a specific fan growth phase of network-spanning contractions well separated in time from the cutting event by the stalling phase. We therefore identify wounding as an attractive stimulus. The observation of network-spanning oscillations during fan outgrowth strengthens our conclusion that cutting is an attractive stimulus, since the observed phase patterns resemble contraction patterns found in earlier work with attractive stimuli using glucose as a stimulant \\cite{Alim:2017}.\\\\\n\nEmploying spatial data analysis, we uncovered that wounding triggers a choreography of multiple successive steps to heal the severed tube. The mere duration of the healing response now defines a suggested minimal wait time after trimming for \\emph{P. polycephalum}\\ experiments. The complexity of the response hints at an intricate signalling pattern underlying the coordination of contractions. It is likely that the response to classical attractants and repellents, when scrutinized, also reveals multiple steps. Unravelling the workings behind \\emph{P. polycephalum}'s ability to adapt is arguably a fascinating albeit challenging question.
Here, the reproducible cessation of contractions arising during this wound-healing response may open up new avenues to investigate the biochemical wiring underlying \\emph{P. polycephalum}'s complex behaviours. Furthermore, it is fascinating that the impact of wounding can be weakened by network architecture. This suggests that \\emph{P. polycephalum}'s body plan itself could be part of the organism's strategy to not only adapt to its environment, but also specifically prevent severe consequences of wounding. \n\n\\section*{Acknowledgements}\nWe thank Christian Westendorf for instructions on growing microplasmodia, as well as for invaluable discussions and advice. M.K. and F.B. acknowledge support by the IMPRS for Physics of Biological and Complex Systems.\n\n\\section*{Bibliography}\n\\bibliographystyle{iopart-num}\n\n\\section{Introduction}\n\nAmong AGNs, blazars are the most luminous and violent objects in the\nuniverse and emit $\\gamma$-rays at energies higher than 100~MeV. They are\ndivided into two main subgroups: highly\nvariable quasars, sometimes called optically violent variable (OVV) quasars,\nand BL Lacertae objects (BL Lac). Since one of the jets of a blazar is\npointed toward the Earth, we see the jet emission strongly Doppler enhanced and\nhighly variable. According to the unification scheme \\citep{up95}, radio\ngalaxies are the mis-aligned parent population of blazars.
Synchrotron peak\nfrequencies of BL Lac objects cover a large range from the IR to X-rays, and based\non its location they are classified as low-frequency peaked BL Lac objects (LBL;\nsynchrotron peak in the IR), intermediate BL Lac objects (IBL; synchrotron\npeak in the optical\/UV), or high-frequency peaked BL Lac objects (HBL;\nsynchrotron peak in the X-rays).\n\nWhile there is little evidence for dense radiation environments\nin the nuclear regions of BL~Lac objects --- in particular, HBLs\n---, strong line emission in Flat Spectrum Radio Quasars (FSRQs)\nas well as the occasional detection of emission lines in the spectra of some\nBL~Lac objects \\citep[e.g.,][]{vermeulen95} indicates dense nuclear\nradiation fields in those objects. This is supported by spectral modeling\nof the SEDs of blazars using leptonic models, which prefer scenarios\nbased on external radiation fields as sources for Compton scattering\nto produce the high-energy radiation in FSRQs, LBLs and also some\nIBLs \\citep[e.g.,][]{ghisellini98,madejski99,bb00,acciari08,abdo11}.
If the\nVHE $\\gamma$-ray emission is indeed produced in the high-radiation-density\nenvironment of the broad line region (BLR) and\/or the dust torus of an\nAGN, it is expected to be strongly attenuated by $\\gamma\\gamma$ pair\nproduction \\citep[e.g.][]{pb97,donea03,reimer07,liu08,sb08}.\n\\cite{akc08} have suggested that such intrinsic $\\gamma\\gamma$ absorption may be\nresponsible for producing the unexpectedly hard intrinsic (i.e., after\ncorrection for $\\gamma\\gamma$ absorption by the extragalactic background\nlight) VHE $\\gamma$-ray spectra of some blazars at relatively high redshift.\nA similar effect has been invoked by \\cite{ps10} to explain the spectral\nbreaks in the {\\it Fermi} spectra of $\\gamma$-ray blazars.\nThis absorption process will lead to the development of Compton-supported\npair cascades in the circumnuclear environment \\citep[e.g.,][]{bk95,sb10,rb10,rb11}.\n\nIn \\cite{rb10,rb11}, we considered the full 3-dimensional development of\nCompton-supported VHE $\\gamma$-ray induced cascades in the external radiation\nfields in AGN environments.\nIn those works, we have left the origin (leptonic SSC or IC, or hadronic)\nof the primary VHE $\\gamma$-ray emission deliberately unspecified in order to\ninvestigate the cascade development in as model-independent a way as possible.\nWe have shown that even very weak magnetic fields ($B \\lesssim 1~\\mu$G)\nmay be sufficient for efficient quasi-isotropization of the cascade emission.\nWe applied this idea to fit the {\\it Fermi}\n$\\gamma$-ray emission of the radio galaxies NGC~1275 and Cen~A. In\n\\cite{rb10,rb11}, parameters were chosen such that the synchrotron emission\nfrom the cascades was negligible.\n\nIn this paper, we present a generalization of the Monte-Carlo cascade\ncode developed in \\cite{rb11} to non-negligible magnetic fields and consider\nthe angle-dependent synchrotron emission from the cascades.
In section\n\\ref{setup}, we will outline the general model setup and assumptions\nand describe the modified Monte-Carlo code. Numerical results for generic\nparameters will be presented in section \\ref{parameterstudy}. We confirm\nthat for the objects Cen~A and NGC~1275 the synchrotron radiation from the\ncascades is negligible for the parameters used in \\cite{rb10,rb11}. In section\n\\ref{degeneracy}, we investigate the effect of the magnetic field and its\ndegeneracy. We show that by studying only the high-energy emission from\nthe cascades, the magnetic field cannot be determined, and additional\nconstraints are needed from the synchrotron emission component. This\nis illustrated for the case of NGC~1275. In section \\ref{3C279}, we will\ndemonstrate that for moderately strong magnetic fields the synchrotron\nemission from the cascades can produce a signature resembling the big blue\nbump (BBB) observed in several blazars, and show\nthat this may make a non-negligible contribution to the UV -- soft X-ray\nSED. We illustrate this for the case of 3C~279. We summarize in Section\n\\ref{summary}.\n\n\n\\section{\\label{setup}Model Setup and Code Description}\n\nThe general model setup used for this work is described in \\cite{rb10,rb11}.\nThe primary VHE $\\gamma$-ray emission is represented as a mono-directional\nbeam of $\\gamma$-rays propagating along the X axis, described by a power-law\nwith photon spectral index $\\alpha$ and a high-energy cut-off at $E_{\\gamma, max}$.
\nWe assume that the primary $\\gamma$-rays interact via $\\gamma\\gamma$\nabsorption and pair production with an isotropic radiation field with\narbitrary spectrum within a fixed boundary, given by a radius $R_{\\rm ext}$.\n\n\\begin{figure}[ht]\n\\vskip 1.5cm\n\\centering\n\\includegraphics[width=12cm]{f1.eps}\n\\caption{\\label{diskabs}$\\gamma\\gamma$ opacity due to accretion\ndisk photons as a function of height $z$ (in units of gravitational\nradii $r_g = GM\/c^2$) of the emission region above the black hole,\nfor three different VHE $\\gamma$-ray photon energies, and different\nblack-hole masses and disk luminosities. }\n\\end{figure}\n\nThe assumption of an isotropic external radiation field is appropriate\nfor line emission from the BLR, for distances from the central engine comparable\nto the size of the BLR ($\\sim 10^{17}$ -- $10^{18}$~cm), and for infrared emission\nfrom cold dust in the nuclear environment, on typical scales of $\\sim$~parsec.\nClose to the central black hole and accretion disk, direct emission from the\naccretion disk may dominate the radiation energy density. However, for moderate\ndistances from the disk, the primary $\\gamma$-ray beam will interact with the\naccretion disk emission at an unfavorable angle for $\\gamma\\gamma$ pair\nproduction. To illustrate this point, we plot in Figure \\ref{diskabs} the opacity\nto $\\gamma\\gamma$ absorption in the radiation field of an optically thick,\ngeometrically thin \\citep{ss73} accretion disk as a function of height\n$z$ of the emission region from the black hole. The figure shows that,\ntypically, the accretion-disk $\\gamma\\gamma$ opacity drops below one\nat distances of a few 100 -- $10^3 \\, r_g$ from the black hole. This\nis of the order of the characteristic height of the emission region in\nleptonic models of blazar emission, and much smaller than the size of\nthe BLR or the dust torus.
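As a quick consistency check on these distance scales, the gravitational radius $r_g = GM/c^2$ entering Figure 1 can be evaluated directly. The sketch below (CGS units, rounded constants; illustrative, not part of the simulation code) reproduces the quoted orders of magnitude.

```python
# Gravitational radius r_g = G M / c^2 in CGS units (rounded constants).
G = 6.674e-8        # cm^3 g^-1 s^-2
C = 2.998e10        # cm s^-1
M_SUN = 1.989e33    # g

def r_g_cm(m_bh_solar):
    """Gravitational radius in cm for a black-hole mass in solar masses."""
    return G * m_bh_solar * M_SUN / C**2

# For M_BH = 1e8 M_sun, a few 100 r_g is ~1e15 cm, which is well inside
# R_ext ~ 1e17 - 1e18 cm quoted for the BLR above.
```

For $M_{\rm BH} = 10^8 \, M_\odot$ this gives $r_g \approx 1.5\times10^{13}$ cm, so $10^2\,r_g \approx 10^{15}$ cm, consistent with the scales discussed in the text.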
We therefore conclude that the neglect of\nthe accretion-disk emission in our simulations is a good approximation\nthroughout almost all of our simulation volume.\nFor the case of thermal blackbody radiation fields considered below, we\nchoose energy densities and blackbody temperatures characteristic of the\nobserved properties (temperatures and total luminosities) of thermal\ninfrared emission seen in AGN.\nIn order to keep our treatment as model-independent as possible, we do not\nspecify the physical origin of the primary VHE $\\gamma$-ray spectrum. The\nsimplest assumption consistent with most models of $\\gamma$-ray emission\nin blazars is a simple power-law, which we use as the input spectrum in our\nsimulations. The shape of the resulting pair cascade emission is only\nvery weakly dependent on the exact shape of the incident VHE $\\gamma$-ray\nspectrum. This is illustrated in Figure \\ref{PLExpcomparison}, in which\nwe run a cascade simulation, once with a straight power-law, once with\na power-law + exponential cut-off (as a more realistic representation of\na physical blazar high-energy spectrum), with identical environmental\nparameters, as listed in the figure caption. While the overall normalization\nof the cascade spectrum, obviously, depends on the flux of absorbed\nVHE $\\gamma$-rays (which is higher in the pure power-law case), the\ncascade spectra at energies below the $\\gamma\\gamma$ absorption\ntrough are virtually identical. In the cases relevant for this study,\nonly a small fraction of the $\\gamma$-ray power is absorbed and re-processed\ninto cascades.
Therefore, feedback between the primary $\\gamma$-ray production\nregion and the cascade emission may be neglected, and the cascade development\ncan be treated as a process separate from the (unspecified) VHE $\\gamma$-ray\nproduction mechanism.\n\n\n\\begin{figure}[ht]\n\\vskip 1.5cm\n\\centering\n\\includegraphics[width=12cm]{f2.eps}\n\\caption{\\label{PLExpcomparison}Comparison of the Compton emission\nfrom cascades for two different input VHE $\\gamma$-ray spectral\nshapes: Blue (thin) lines indicate the emission for a pure power-law\ninput spectrum; red (thick) lines for a power-law + exponential cut-off.\nEnvironmental parameters are the same as in Figure \\ref{standardfig}\n(see below). Different line styles correspond to different viewing\nangles, $\\mu = \\cos\\theta_{\\rm obs}$, with respect to the jet axis,\nas indicated in the legend. Parameters: $B_x = B_y = 1$~$\\mu$G, \n$\\theta_B = 45^o$; $u_{\\rm ext} = 10^{-6}$~erg~cm$^{-3}$, \n$R_{\\rm ext} = 10^{18}$~cm, $T = 1000$~K. \nThe angular bin $0.8 \\le \\mu \\le 1$ contains the forward direction. \nThe (unabsorbed) primary $\\gamma$-ray input spectra are shown by the \ndot-dot-dashed lines.\n}\n\\end{figure}\n\nOur code evaluates $\\gamma\\gamma$ absorption and pair production using the\nfull analytical solution to the pair production spectrum of \\cite{bs97} under\nthe assumption that the produced electron and positron travel initially along\nthe direction\nof propagation of the incoming $\\gamma$-ray. The trajectories of the particles are\nfollowed in full 3-D geometry. Compton scattering is evaluated using the head-on\napproximation, assuming that the scattered photon travels along the direction of\nmotion of the electron\/positron at the time of scattering. 
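Between interactions, a Monte-Carlo transport code of this kind typically draws the distance to the next scattering from an exponential free-path distribution by inverse-CDF sampling. The sketch below is a generic illustration of that standard technique, under stated assumptions; it is not an excerpt of the actual cascade code.

```python
import math
import random

def draw_path_length(mean_free_path, rng=random):
    """Sample the distance to the next scattering event.

    The free path l follows P(l) dl = exp(-l / mfp) / mfp dl, so a
    uniform deviate u in (0, 1] maps to l = -mfp * ln(u)
    (inverse-CDF sampling).
    """
    u = 1.0 - rng.random()  # shift to (0, 1] to avoid log(0)
    return -mean_free_path * math.log(u)
```

Averaged over many draws, the sampled paths recover the input mean free path, which is the basic correctness check for this sampler.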
The Compton\nenergy loss to the electron is properly accounted for at the time of each scattering.\n\nFor simplicity, the magnetic field in our simulations is treated as \nhomogeneous, oriented at an angle $\\theta_B$ with respect to the jet axis.\nThis may be considered an appropriate proxy for a helical magnetic field\nwith the ratio of toroidal ($B_{\\rm tor}$) and poloidal ($B_x$) magnetic\nfields given by $\\tan\\theta_B = B_{\\rm tor}\/B_x$.\nThe code calculates the synchrotron energy loss of cascade particles in the\nfollowing way: The energy of electrons\/positrons is decreased by $\\Delta E_{\\rm sy}\n= \\dot E_{\\rm sy} \\frac{l_c}{c}$ between successive Compton scatterings, where\n$l_c$ is the Monte Carlo generated distance traveled to the next scattering\nand $ \\dot E_{\\rm sy} = - 2 c \\sigma_{T} u_B \\gamma^2 \\sin^2 \\psi$ with\n$\\psi$ being a pitch angle between the particle momentum and the magnetic field.\nWe assume that the trajectory of the particles between two Compton scatterings\nis not affected by synchrotron radiation which is valid for $ u_B \\lesssim u_{\\rm ext}$.\nThen for $10$ random points between two successive Compton scatterings, we determine\nthe position and direction of motion of the particles at these points and write\nthe spectral power in synchrotron radiation $P_{\\nu}$ into a synchrotron output\nfile for the angular bin corresponding to the electron's\/positron's direction of\nmotion. The synchrotron power $P_{\\nu}$ of a single $e^{\\pm}$ is approximated as:\n\n\\begin{equation}\nP_{\\nu} = 2 \\frac{c \\sigma_T}{\\Gamma(\\frac{4}{3})} u_B \\beta^2\\gamma^2\n\\frac{\\nu^{1\/3}}{\\nu_c^{4\/3}} e^{-\\nu\/\\nu_c}\n\\label{sy_asymptotic}\n\\end{equation}\n\n\\citep{boettcher12} where the critical frequency \n$\\nu_c = \\frac{3 q B}{4\\pi m_e c}\\gamma^2 \\sin\\psi = \n4.2\\times10^6 \\sin\\psi B_G \\gamma^2$~Hz with $B_G$ being the magnetic \nfield in units of Gauss. 
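The single-particle spectrum of Eq. (1) can be checked numerically: integrating $P_\nu$ over frequency must recover the total synchrotron power $2 c \sigma_{\rm T} u_B \beta^2 \gamma^2$, since $\int_0^\infty x^{1/3} e^{-x}\,dx = \Gamma(4/3)$ and the factors of $\nu_c$ cancel. A sketch in CGS units (rounded constants; illustrative, not the simulation code):

```python
import math

SIGMA_T = 6.6524e-25   # Thomson cross section, cm^2
C = 2.998e10           # speed of light, cm/s

def nu_c(b_gauss, gamma, sin_psi):
    """Critical synchrotron frequency ~ 4.2e6 sin(psi) B gamma^2 Hz."""
    return 4.2e6 * sin_psi * b_gauss * gamma**2

def p_nu(nu, b_gauss, gamma, sin_psi, beta=1.0):
    """Single-particle synchrotron spectrum of Eq. (1), erg/s/Hz."""
    u_b = b_gauss**2 / (8.0 * math.pi)
    nc = nu_c(b_gauss, gamma, sin_psi)
    pref = 2.0 * C * SIGMA_T / math.gamma(4.0 / 3.0) * u_b * beta**2 * gamma**2
    x = nu / nc
    return pref * x**(1.0 / 3.0) * math.exp(-x) / nc
```

Summing `p_nu` over a fine frequency grid out to many times $\nu_c$ reproduces $2 c \sigma_{\rm T} u_B \gamma^2$ to better than a percent.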
\nThe approximation (\\ref{sy_asymptotic}) represents the synchrotron\nspectrum to within a few \\% at all frequencies.\n\n\n\\section{\\label{parameterstudy}Numerical Results}\n\n\\begin{figure}[ht]\n\\vskip 1.5cm\n\\centering\n\\includegraphics[width=12cm]{f3.eps}\n\\caption{\\label{standardfig}\nCascade emission at different viewing angles ($\\mu = \\cos\\theta_{\\rm obs}$). \nParameters of the target photon field are the same as for Figure 1. The \ninput photon spectrum is a pure power-law with $\\alpha = 2.5$, \n{$E_{\\gamma, {\\rm max}} = 5$~TeV}. The green solid line represents \nthe target photon field.\n}\n\\end{figure}\n\nWe have used the cascade Monte-Carlo code described in the previous section\nto evaluate the angle-dependent Compton and synchrotron spectra from VHE\n$\\gamma$-ray induced pair cascades for a variety of generic\nparameter choices. Figure \\ref{standardfig} illustrates the viewing angle\ndependence of the cascade emission. For this simulation, we assumed a\nmagnetic field of $B = \\sqrt{2} \\, \\mu$G, oriented at an angle $\\theta_B = 45^o$\nwith respect to the X axis ($B_x = 1 \\, \\mu$G, $B_y = 1 \\, \\mu$G). The\nexternal radiation field is a thermal blackbody with $u_{\\rm ext} =\n10^{-6}$~erg~cm$^{-3}$, extended over a region of radius $R_{\\rm ext} = \n10^{18}$~cm, with a blackbody temperature of $T = 10^3$~K (corresponding \nto a peak of the blackbody spectrum at a photon energy of $E_s^{\\rm pk}= \n0.25$~eV). This leads to a $\\gamma\\gamma$ absorption cut-off at an energy\n$E_c = (m_e c^2)^2 \/ E_s \\sim 2$~TeV. The incident $\\gamma$-ray spectrum \nhas a photon index of $\\alpha = 2.5$ and extends out to $E_{\\gamma, {\\rm max}} \n= 5$~TeV.\n\nFor any given viewing angle $\\theta$ with respect to the direction of \npropagation of the primary $\\gamma$-rays, a critical electron energy can be \ndefined for which the deflection angle over a Compton cooling length equals \nthe observing angle, i.e., $\\theta \\sim \\lambda_{\\rm IC} \/ r_g$.
This \nyields the characteristic electron energy $E_{\\rm e, br} = \\gamma_c \nm_e c^2$ corresponding to a given observing angle $\\theta$:\n\n\\begin{equation}\n\\gamma_c =\\sqrt{\\frac {3 e B}{4 \\sigma_T u_{\\rm ext}\\theta}}\\sim 7.2\\times10^5\nB_{-6}^{1\/2} u_{-3}^{-1\/2} \\theta^{-1\/2}\n\\end{equation}\nwhere $B_{-6} = B\/\\mu$G and $u_{-3} = u_{\\rm ext}\/(10^{-3}$~erg~cm$^{-3}$).\n\nThis expression has been derived assuming that the Compton cooling\nlength can be calculated in the Thomson regime, which is valid for\n$\\gamma \\lesssim 2 \\times 10^6 T_3^{-1}$ for a thermal target photon\nfield with temperature $T = 10^3 \\, T_3$~K, or $\\gamma \\lesssim 5 \\times \n10^4$ for a Ly$\\alpha$-dominated target photon field.\nIf these electrons radiate their energy by synchrotron radiation and Compton\nupscattering with the soft photon field in the Thomson regime, we can find the\ncorresponding spectral breaks for synchrotron radiation and Compton scattering\nas a function of viewing angle:\n\n\\begin{equation}\nE_{\\rm sy,br}\\cong \\gamma_c^2 B m_e c^2\/B_{\\rm cr}= \\frac{3 m_e c^2 e B^2}{4\n\\sigma_T u_{\\rm ext}\n\\, \\theta \\, B_{\\rm cr}}\\sim 6.15 B_{-6}^2 u_{-3}^{-1}\n\\theta^{-1}{\\rm ~meV}\n\\label{ViewingangleSy}\n\\end{equation}\n\n\\begin{equation}\nE_{\\rm IC, br}\\cong\\gamma_c^2 E_s = {3 \\, e \\, B \\over 4 \\, \\sigma_T \\, u_{ext}\n\\, \\theta}\n\\, E_s \\sim 5.4\\times 10^2 \\, E_{s,1} B_{-6} u_{-3}^{-1} \\theta^{-1}\\; {\\rm GeV}.\n\\label{ICbreak}\n\\end{equation}\nwhere $B_{\\rm cr}= \\frac{m_e^2 c^3}{e \\hbar} = 4.4 \\times 10^{13}$~G and\n$E_{s,1} = E_s\/(1$~eV).\n\nTherefore, the ratio of the Compton to the synchrotron peak frequency is given by:\n\\begin{equation}\n\\frac{E_{\\rm IC, br}}{ E_{\\rm sy,br}} = \\epsilon_s \\frac{B_{\\rm cr}}{B}\n\\label{RatioComToSyn}\n\\end{equation}\nwhere $\\epsilon_s = \\frac{E_s}{m_e c^2}$.\n\nFigure \\ref{standardfig} shows that with increasing viewing angle, the\nspectral peaks of both the 
synchrotron and Compton emission shift to\nlower energy. This is because the Compton cooling length of the high-energy\nparticles is much smaller than their Larmor radius ($\\lambda_{IC}\\ll\nr_g$), so they emit while still traveling in the forward direction. In contrast,\nfor low-energy particles, $\\lambda_{IC}\\geq r_g$, so that they are deflected\nbefore they emit. For the Compton emission this effect was already\ndiscussed in \\cite{rb10,rb11}.\n\n\n\\begin{figure}[ht]\n\\vskip 1.5cm\n\\centering\n\\includegraphics[width=12cm]{f4.eps}\n\\caption{\\label{ufig}The effect of a varying external radiation energy density.\nParameters: $B_x = B_y = 10^{-6}$~G; $R_{\\rm ext} = 10^{16}$~cm, $T = 10^3$~K;\n$\\alpha = 2.5$, $E_{\\gamma, {\\rm max}} = 5$~TeV. The cascade emission in the \nangular bin $0.2 \\leq \\mu \\leq 0.4$ is shown.}\n\\end{figure}\n\nFigure \\ref{ufig} shows the cascade spectra for different values\nof the external radiation field energy density $u_{\\rm ext}$. In accordance\nwith equations \\ref{ViewingangleSy} and \\ref{ICbreak}, for higher values of\nthe external radiation field, the spectral breaks of both radiation components\nshift to lower energies. Figure \\ref{ufig} also shows that the synchrotron\nluminosities of the cascades decrease with increasing $u_{\\rm ext}$ while\nthe Compton luminosities of the cascades increase. For a larger value of\n$u_{\\rm ext}$ at fixed blackbody temperature, the soft target photon number\ndensity increases and $\\tau_{\\gamma\\gamma}$ becomes larger, so that more\nVHE photons are absorbed and the Compton photon flux from the cascades\nincreases. For very large values of $u_{\\rm ext}$,\n$\\tau_{\\gamma\\gamma}\\gg 1$ for photons above the pair production threshold, so\nthat essentially all VHE photons are absorbed and the Compton flux from\nthe cascade becomes independent of $u_{\\rm ext}$ \\cite[]{rb10,rb11}.
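The numerical prefactors in the estimates for $\gamma_c$, $E_{\rm sy,br}$ and $E_{\rm IC,br}$ above can be reproduced directly from CGS constants. The short check below (an illustrative sketch, not part of the simulation code) recovers $\gamma_c \approx 7\times10^5$, $E_{\rm sy,br} \approx 6$ meV and $E_{\rm IC,br} \approx 5.4\times10^2$ GeV for $B = 1\,\mu$G, $u_{\rm ext} = 10^{-3}$ erg cm$^{-3}$, $\theta = 1$ and $E_s = 1$ eV.

```python
import math

E_ESU = 4.8032e-10      # electron charge, esu
SIGMA_T = 6.6524e-25    # Thomson cross section, cm^2
MEC2_EV = 0.511e6       # electron rest energy, eV
B_CR = 4.414e13         # critical magnetic field B_cr = m_e^2 c^3 / (e hbar), G

def gamma_c(b_gauss, u_ext, theta):
    """Critical Lorentz factor: deflection over a Compton length ~ theta."""
    return math.sqrt(3.0 * E_ESU * b_gauss / (4.0 * SIGMA_T * u_ext * theta))

def e_sy_br_ev(b_gauss, u_ext, theta):
    """Synchrotron break energy gamma_c^2 (B/B_cr) m_e c^2, in eV."""
    return gamma_c(b_gauss, u_ext, theta)**2 * (b_gauss / B_CR) * MEC2_EV

def e_ic_br_ev(b_gauss, u_ext, theta, e_s_ev):
    """Compton break energy gamma_c^2 E_s (Thomson regime), in eV."""
    return gamma_c(b_gauss, u_ext, theta)**2 * e_s_ev
```

The small residual differences from the quoted prefactors (a few percent) simply reflect the rounding of the physical constants.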
The ratio\nof emitted power in synchrotron to Compton radiation in the linear regime\n$(\\tau_{\\gamma\\gamma} \\lesssim 1)$ is given by:\n\\begin{equation}\n\\frac{P_{sy}}{P_{IC}}=\\frac{B^2 \/ 8 \\pi}{u_{ext}}\n\\label{RatioCom}\n\\end{equation}\nif Compton scattering occurs in the Thomson regime.\nThe flux ratio $\\frac{F_{sy}}{F_{IC}} \\varpropto\nu_{\\rm ext}^{-1}$, so that by increasing $u_{\\rm ext}$ the synchrotron\nflux decreases.\n\n\\begin{figure}[ht]\n\\vskip 1.5cm\n\\centering\n\\includegraphics[width=12cm]{f5.eps}\n\\caption{\\label{Bfig}The effect of a varying magnetic field strength for a\nfixed angle of $\\theta_B = 45^o$ between jet axis and magnetic field. Parameters:\n$u_{\\rm ext} = 10^{-6}$~erg~cm$^{-3}$, $R_{\\rm ext} = 10^{18}$~cm, $T = 1000$~K,\n$\\alpha = 2.5$, {$E_{\\gamma, {\\rm max}} = 5$~TeV}. The cascade emission in the\nangular bin $0.2\\leq\\mu\\leq0.4$ is shown. The solid green line represents \nthe target photon field.\n}\n\\end{figure}\n\nFigure \\ref{Bfig} illustrates the effect of a varying magnetic field strength\nfor fixed magnetic field\norientation ($\\theta_B = 45^o$). We see that the synchrotron peak energy\nincreases proportionally to the square of the magnetic field strength, as expected\nfrom Eq. \\ref{ViewingangleSy}. As already discussed in \\cite{rb10,rb11}, the\nCompton energy break is proportional to the magnetic field strength, as long\nas it occurs below the $\\gamma\\gamma$ absorption cut-off energy.
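The power ratio of Eq. (\ref{RatioCom}) immediately quantifies why the cascade synchrotron component is so faint for the generic parameters of this section ($B = 1\,\mu$G, $u_{\rm ext} = 10^{-6}$ erg cm$^{-3}$); a one-line check:

```python
import math

def sy_to_ic_power_ratio(b_gauss, u_ext):
    """P_sy / P_IC = (B^2 / 8 pi) / u_ext in the linear (Thomson) regime."""
    return (b_gauss**2 / (8.0 * math.pi)) / u_ext

# For B = 1 uG and u_ext = 1e-6 erg cm^-3 the ratio is ~4e-8, i.e. the
# cascade synchrotron component is negligible for these parameters.
```

The same one-liner also shows how quickly the ratio grows with $B$: at $B \sim 0.1$ G and the same $u_{\rm ext}$, synchrotron and Compton losses would become comparable and the treatment described above would break down.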
The synchrotron\nflux is proportional to the square of the magnetic field strength.\nThe flux ratio is $\\frac{F_{sy}}{F_{IC}} \\varpropto B^2$ until the fluxes become\ncomparable, at which point our treatment of synchrotron losses breaks down.\n\n\\begin{figure}[ht]\n\\vskip 1cm\n \\centerline{\n \\includegraphics[width=8cm,height=8cm]{f6a.eps}\n \\includegraphics[width=8cm,height=8cm]{f6b.eps}}\n \\caption{\\label{BfigF}The effect of a varying magnetic field orientation\nfor a fixed magnetic field strength of $B = 1 \\, \\mu$G, $u_{\\rm ext} =\n10^{-6}$~erg~cm$^{-3}$, $R_{\\rm ext} = 10^{18}$~cm, $T = 1000$~K, $\\alpha = 2.5$,\n{$E_{\\gamma, {\\rm max}} = 5$~TeV}. Left figure: angular bin $0.8 \\leq\\mu\\leq 1.0$\n(dominated by the forward direction, i.e., the blazar case). Right figure: angular \nbin $0.2 \\le \\mu \\le 0.4$, representative of radio galaxies.\nThe green solid lines represent the target photon fields.\n}\n \\end{figure}\n\nFigure \\ref{BfigF} illustrates the effects of a varying magnetic-field orientation\nwith respect to the jet axis, for fixed magnetic-field strength $B = 1 \\, \\mu$G\nfor different angular bins. The results for the Compton component have been\ndiscussed in \\cite{rb11}. The figure illustrates that primarily the perpendicular\n($B_y$) component of the magnetic field is responsible for synchrotron radiation.\n\n\n\\begin{figure}[ht]\n\\vskip 1cm\n \\centerline{\n \\includegraphics[width=8cm,height=8cm]{f7a.eps}\n \\hskip 1cm\n \\includegraphics[width=8cm,height=8cm]{f7b.eps}}\n \\caption{\\label{fitCenAandNGC1275}Compton and synchrotron radiation from\n the cascades.
Left: fit to the SED of Cen~A. Right: spectrum of NGC~1275 with a simulated\n cascade spectrum from a mis-aligned blazar, along with the cascade spectra\n at larger viewing angles.}\n \\end{figure}\n\nFigure \\ref{fitCenAandNGC1275} illustrates that the synchrotron emission from\nthe cascades, for the parameters used in \\cite[]{rb10,rb11}, is\nnegligible compared to the cascade Compton emission and much smaller than\nthe synchrotron radiation from the jet itself. This confirms that neglecting\nsynchrotron radiation in our previous works was justified.\n\n\\section{\\label{degeneracy}Magnetic Field Degeneracy}\n\n\\begin{figure}[ht]\n\\vskip 1.5cm\n\\centering\n\\includegraphics[width=12cm]{f8.eps}\n\\caption{\\label{Degeneracy1}\nSynchrotron and Compton emission from the cascades for NGC~1275\n($0.6 \\leq\\mu\\leq 0.8 $). Parameters: $\\theta_B = 11^o$;\n$u_{\\rm ext} = 5\\times10^{-2}$~erg~cm$^{-3}$, $R_{\\rm ext} = 10^{16}$~cm,\n$E_s=E_{L\\alpha}$,\n$\\alpha = 2.5$, {$E_{\\gamma, {\\rm max}} = 5$~TeV}.\n}\n\\end{figure}\n\nIn \\cite{rb10}, we presented a fit to the \\emph{Fermi} spectrum of the radio\ngalaxy NGC~1275. We now show that there is a degeneracy of the magnetic field,\nin both orientation and strength, if only the high-energy output from the cascades\nis considered. Figure \\ref{Degeneracy1} shows this effect for NGC~1275. In this\nplot, the external radiation field is\nparameterized through $u_{\\rm ext} = 5 \\times 10^{-2}$~erg~cm$^{-3}$ with photon\nenergy $E_s = E_{Ly\\alpha}$ and $R_{\\rm ext} = 10^{16}$~cm. This size scale is\nappropriate for low-luminosity AGN as observed in NGC~1275 \\citep[e.g.][]{kaspi07},\nand the parameters combine\nto give a BLR luminosity of $L_{\\rm BLR} = 4 \\pi R_{\\rm ext}^2 \\, c \\, u_{\\rm ext} =\n1.9\\times 10^{42}$~erg~s$^{-1}$, in agreement with the observed value for NGC~1275.\nThe magnetic field orientation is at an angle of $\\theta_B = 11^o$.
\nThe mass of the black hole in NGC~1275 is uncertain, and estimates \nrange from a few times $10^6 \\, M_{\\odot}$ \\citep{levinson95} to \n$\\sim 10^8 \\, M_{\\odot}$ \\citep{wilman05}. Assuming a characteristic\nfraction of $0.1$ of the accretion-disk luminosity to be re-processed\nin the BLR, the accretion-disk luminosity may be estimated to be $L_D\n\\sim 10^{43}$~erg~s$^{-1}$. Figure \\ref{diskabs} shows the $\\gamma\\gamma$\nabsorption depth due to the disk radiation field for $L_D = 10^{43}$~erg~s$^{-1}$\nfor the two possible extreme values of the black-hole mass, as a function\nof height $z$ of the emission region above the accretion disk. It \nshows that for $M_{\\rm BH} = 10^6 \\, M_{\\odot}$, the $\\gamma\\gamma$\nopacity drops below one at $\\sim 10^3 \\, r_g \\sim 10^{14}$~cm from \nthe black hole, while for $M_{\\rm BH} = 10^8 \\, M_{\\odot}$ $\\gamma\\gamma$\nabsorption becomes negligible at $\\sim 10^2 \\, r_g \\sim 10^{15}$~cm \nfor primary $\\gamma$-rays of $E_{\\gamma} = 1$~TeV, and even closer to\nthe black hole for lower-energy photons. Therefore, throughout most of our simulation \nvolume ($R_{\\rm ext} = 10^{16}$~cm), $\\gamma\\gamma$ absorption in\nthe disk radiation field can be safely neglected.\n\nThe cascade spectrum shown in Figure \\ref{Degeneracy1} pertains\nto the angular bin $0.6 < \\mu < 0.8$ (corresponding to $37^o \\lesssim\n\\theta \\lesssim 53^o$), appropriate for the known orientation of NGC~1275.
In\n\\cite[]{rb10,rb11}, we have shown that for magnetic field values of $B\\geq 1$~nG\nand for energy density $u_{\\rm ext} \\geq 10^{-3}$~erg~cm$^{-3}$, there is no pronounced\nbreak in the cascade spectrum and the cascade is independent of the magnetic field.\nIn general, we expect no break in the cascade Compton emission if $E_{\\rm IC,br}\n\\gtrsim \\frac{(mc^2)^2}{E_s}$, which leads to the condition:\n\n\\begin{equation}\nB \\gtrsim \\frac{{(m_e c^2)}^2 4\\sigma_T u_{\\rm ext} \\theta}{3 e (E_s)^2} \\sim 5 \\,\nu_{ext,-3} E_{s,1}^{-2} \\theta \\; {\\rm nG}\n\\label{relation}\n\\end{equation}\n\nFigure \\ref{Degeneracy1} shows that while the high-energy emission due to\ndeflection of the cascade up to the $\\gamma\\gamma$ absorption trough remains\nthe same for the different magnetic fields, the synchrotron emission from the\ncascade changes. Therefore, determining the magnetic field requires knowledge of the\nsynchrotron emission.\n\nIn the regime where $E_{\\rm IC, br}$ is independent of the magnetic field,\n$\\nu_{\\rm sy}\\varpropto B$ according to Eq. \\ref{RatioComToSyn}, and the synchrotron\npower is proportional to the square of the magnetic field, in agreement with Figure\n\\ref{Degeneracy1}.\n\nSince the synchrotron\/Compton flux ratio $\\frac{F_{\\rm sy}}{F_{\\rm IC}}\n\\varpropto B^2$, we expect that for sufficiently high magnetic fields,\nwe will reach the regime where the Compton flux from the cascades is\nequal to or smaller than the synchrotron flux, in which case our numerical\nscheme is no longer applicable.\n\n\n\\section{\\label{3C279}The Big Blue Bump}\n\nThe spectral energy distribution of AGN in the ultraviolet (UV) to soft X-ray\nband ($ \\sim 10$~eV-$1$~keV) is notoriously difficult to observe because of\ndust and gas in our galaxy and the AGN environment. The SEDs of many blazars\nexhibit a UV\/soft X-ray excess, called the big blue bump (BBB)\n\\citep[]{pian99,palma11,raiteri05,raiteri06,raiteri07}. 
It is often\nattributed to thermal emission from the accretion disk. In blazars,\nits signature is often particularly hard to detect because of dominant\nnon-thermal emission from the jet. Understanding the origin of the BBB\nis important since this provides information on the central engine of\nthe AGN.\n\n3C~279 was among the first blazars discovered as a $\\gamma$-ray source with\nthe Compton Gamma-Ray Observatory \\citep[]{hartman92}. In 2007 it was detected\nas a VHE $\\gamma$-ray source with the MAGIC I telescope, making it the most\ndistant known VHE $\\gamma$-ray source at a redshift of $0.536$ \\citep[]{HB93}.\nIts relativistic jet is oriented at a small angle of $< 0.5^o$ to the line of\nsight \\citep[]{J04}. It was also detected by \\emph{Fermi} \\citep[]{abdo09c}\nwith a photon spectral index of $2.23$. There is evidence of a spectral break\nat around a few GeV to a photon spectral index of $2.50$. It is strongly believed\nthat the radio to optical emission is due to synchrotron radiation by relativistic\nparticles in the jet. However, the origin of the high energy emission is still not\nwell understood \\citep[see, e.g.,][]{br09}.\n\n\\cite{pian99} monitored 3C~279 in the ultraviolet, using IUE, and combined\ntheir data with higher-energy observations from ROSAT and EGRET from 1992 December\nto 1993 January. During this period, the source was in a very low state, allowing\nfor the detection of a UV excess (the BBB), which is typically hidden below a\ndominant power-law continuum attributed to non-thermal emission from the jet.\n\\cite{pian99} proposed that the $\\gamma$-ray emission in the SED of 3C~279\nis produced by the external Compton mechanism, and suggested that the observed\nUV excess might be due to thermal emission from an accretion disk.\n\nAs an alternative to thermal emission from the accretion disk, \\cite{S97} proposed\nthe bulk Compton mechanism as a possible explanation of a UV\/X-ray excess in quasar\nSEDs. 
If the jet contains a substantial population of cold (i.e., thermal,\nnon-relativistic or mildly relativistic) electrons, they could scatter\nexternal optical\/UV photons with a bulk Lorentz factor of $\\Gamma \\thicksim 10$,\nresulting in bulk Compton radiation in the far UV or soft X-ray range.\n\nHere we suggest an alternative contribution to the BBB feature from\ncascade synchrotron emission. Figures 2 -- 4\nillustrate that the synchrotron emission from cascades may peak in the UV\/X-ray\nrange, thus mimicking a BBB for sufficiently strong magnetic fields\n($B \\gtrsim 1$~mG). Figure \\ref{fit3C279} illustrates \nthe contribution that synchrotron emission from VHE $\\gamma$-ray\ninduced pair cascades can make to the BBB in 3C~279.\nThe primary HE $ \\gamma$-ray spectrum with a photon spectral index of\n$\\alpha = 2.5$ matches the Fermi spectrum of 3C~279.\nThe external radiation field is parameterized through $u_{\\rm ext} =\n10^{-4}$~erg~cm$^{-3}$ and $R_{\\rm ext} = 5\\times10^{17}$~cm,\nand the parameters combine to the luminosity of $L = 4 \\pi R_{\\rm ext}^2 \\, c\n\\, u_{\\rm ext}\n\\sim 10^{43}$~erg~s$^{-1}$ corresponding to a $\\nu F_{\\nu}$ peak\nflux of $\\thicksim 10^9$~Jy~Hz, about $2$ orders of magnitude below the observed\nIR\/optical -- UV flux level. The magnetic field is $B = 10^{-2}$~G, oriented at\nan angle of $\\theta_B = 85^o$. The incident $\\gamma$-ray spectrum extends out\nto $E_{\\gamma, {\\rm max}} = 5$~TeV, and the external radiation field is modeled\nas a blackbody with a temperature of $ T= 2000$~K (corresponding to a peak of the\nblackbody spectrum at a photon energy of $E_s^{\\rm pk}= 0.5$~eV). 
This leads\nto a $\\gamma\\gamma$ absorption cut-off at an energy $E_c = (m_e c^2)^2 \/ E_s\n\\sim 1$~TeV.\n\n\\begin{figure}[ht]\n\\vskip 1.5cm\n\\centering\n\\includegraphics[width=10cm]{f9.eps}\n\\caption{\\label{fit3C279}Illustration of a possible BBB in 3C~279 from\ncascade synchrotron emission.\nParameters: $B= 10^{-2}$~G, $\\theta_B = 85^o$;\n $R_{\\rm ext} = 5\\times10^{17}$~cm, $T = 2000$~K,\n$\\alpha = 2.37$, $u_{\\rm ext} = 10^{-4}$~erg~cm$^{-3}$, {$E_{\\gamma, {\\rm max}}\n= 5$~TeV}. Data from \\cite{abdo10}.\n}\n\\end{figure}\n\nWe suggest that synchrotron emission from VHE $\\gamma$-ray induced pair\ncascades can enhance the BBB feature in the SEDs of several\nblazars such as 3C~279. An observational test of this hypothesis may be\nprovided through spectropolarimetry. A BBB due to (unpolarized) thermal\nemission from an accretion disk will produce a decreasing percentage of\npolarization with increasing frequency throughout the optical\/UV range.\nIn contrast, if the BBB is produced as synchrotron emission from cascade\npairs in globally ordered magnetic fields, it is also expected to be polarized.\nTherefore, we predict that a BBB due to cascade synchrotron emission would result\nin a degree of polarization showing only a weak dependence on frequency over the\noptical\/UV range. 
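As a numerical aside, the blackbody peak energy and the $\gamma\gamma$ cut-off quoted at the beginning of this section can be cross-checked (our sketch; the constants and the Wien-peak factor $2.821\,k_B T$ for the Planck spectrum in frequency are standard values, not from the paper):

```python
# Cross-check of E_s^pk for a T = 2000 K blackbody and of the gamma-gamma
# cut-off E_c = (m_e c^2)^2 / E_s (standard constants; our insertion).
k_B = 8.617e-5        # Boltzmann constant [eV/K]
m_e_c2 = 0.511e6      # electron rest energy [eV]

T = 2000.0
E_s_pk = 2.821 * k_B * T        # Wien peak of the Planck spectrum in frequency [eV]
E_c = m_e_c2**2 / E_s_pk        # gamma-gamma absorption cut-off [eV]

print(f"E_s_pk = {E_s_pk:.2f} eV")      # ~0.5 eV, as quoted in the text
print(f"E_c = {E_c / 1e12:.2f} TeV")    # ~0.5 TeV, i.e. the ~1 TeV scale quoted
```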
As an example, in recent observations of the high-redshift\n$\\gamma$-ray loud quasar PKS~0528+134, \\cite{palma11} found a decreasing\ndegree of polarization with increasing frequency throughout the optical range,\narguing for an increasing contribution from thermal emission towards the blue\nend of the optical spectrum.\n\n\n\\section{\\label{summary}Summary}\n\nWe investigated the magnetic-field dependence and synchrotron emission\nsignatures of Compton-supported pair cascades initiated by the interaction\nof nuclear VHE $\\gamma$-rays with arbitrary external radiation fields, \nfor a model-independent, generic power-law shape of the primary\nVHE $\\gamma$-ray emission.\nWe\nfollowed the spatial development of the cascade in full 3-dimensional geometry\nand studied the dependence of the radiative\noutput on various parameters pertaining to the external radiation field and\nthe magnetic field in the cascade region.\nWe confirm that synchrotron radiation from the cascades is negligible in\nNGC~1275 and Cen~A for the parameters we used in our previous works. We\ndemonstrated that the magnetic field cannot be well constrained by considering\nthe high-energy (Compton) output from the cascade emission alone, without\nobservational signatures from its synchrotron emission. This was illustrated\nfor the case of NGC~1275, for which we could produce equally acceptable fits\nto the Fermi spectrum for a variety of magnetic-field values, which resulted\nin substantially different synchrotron signatures.\n\nWe have shown that synchrotron emission from VHE $\\gamma$-ray induced pair\ncascades may produce UV\/X-ray signatures resembling the BBB observed in the\nSEDs of several blazars, in particular in their low states. 
\nWe used the example of 3C~279 to illustrate that cascade synchrotron\nemission may make a substantial contribution to the BBB feature.\nWe point out that spectropolarimetry may serve\nas a possible observational test to distinguish a thermal from a non-thermal\n(cascade) origin of the BBB.\n\n\\acknowledgements{ This work was supported by NASA through Fermi Guest\nInvestigator Grants NNX09AT81G and NNX10AO49G. We thank the anonymous \nreferee for valuable suggestions. }\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} diff --git a/data_all_eng_slimpj/shuffled/split2/finalzzcduh b/data_all_eng_slimpj/shuffled/split2/finalzzcduh new file mode 100644 index 0000000000000000000000000000000000000000..729e1e13a819fbd3a466cdcf15c534755cc26ae6 --- /dev/null +++ b/data_all_eng_slimpj/shuffled/split2/finalzzcduh @@ -0,0 +1,5 @@ +{"text":"\\section{The new cosmological context}\n\n\\subsection{Model expansion versus universe expansion}\n\nIt is currently assumed that matter does not expand itself during an\neventual universe expansion. {\\it If this were true} some standard\nmodel could be used as the base for a non expanding theoretical\nreference framework. In it, ${dr(ik)}\/{r(ik)}=Hdt $. Then, from (1.1),\n(1.2), and NL mass-energy conservation,\n\\begin{equation}\n\\label{2.1}\nd\\phi (i)=-dw(i)=-\\,d\\sum_{k=1}^\\infty \\frac{Gm(k)f(ik)}{r(ik)}%\n=\\left[ \\sum_{k=1}^\\infty \\frac{Gm(k)f(ik)}{r(ik)}\\right] Hdt,\n\\end{equation}\n\\begin{equation}\n\\label{2.2}\nd\\phi \\left( i\\right) =w(U)Hdt=Hdt=dr(ik)\/r(ik)=d\\lambda\n(i)\/\\lambda (i).\n\\end{equation}\n\nThis means that {\\it every wave and particle would expand itself in\nthe same proportion as the intergalactic distances}. An eventual\nuniverse expansion would not change the relative values of all of\nthem: the distances, the velocities, the temperatures, the WRS, the\nHRS, and therefore, the local physical laws\\footnote {This\ngeneralizes the relativity postulates for eventual universe\nexpansions.}. 
{\\it Then the universe would not have a well-defined\nage and may last indefinitely}. Of course, this would invalidate the\ncurrent deductions normally made for the universe age, which anyway\ndo not seem to be consistent with the latest measurements made with the\nHubble telescope.\n\n\\subsection{The new kind of black hole (BH)}\n\nThe new exponential G relations do not have a singularity at $r=2GM$.\nThen, the new kind of BH is different from that of GR \\cite{V81b}. Its\nnucleus would be just a neutron star (NS) with a strong external {\\it\ngradient of the NL refraction index } that would act as a mirror for\nmost of the internal radiation. Its outcoming {\\it critical\nreflection angle}, given\nby $\\sin^{-1}[\\left( 2eGM\/r\\right) e^{-2GM\/r}]$, would be rather\nnegligible. Thus the escape probabilities would not be strictly null.\nThen the BH would absorb and store for a long time most of the\nradiation traveling within the impact parameter $2eGM$.\n\n\\subsection{Relativistic particle generation}\n\nIn a way similar to the Earth's {\\it auroras}, most of the positively\ncharged nuclei would be driven by the magnetic fields towards the\nBH polar regions. Since the neutron binding energy in a BH is of a\nhigher order of magnitude compared with that in atoms, one of the\nmost probable reactions between them is {\\it nuclear stripping} \\cite\n{V77}, \\cite{V81b}. In it, some of the atomic neutrons\nwould be captured by the BH while the remaining nucleus (proton or\nproton-rich nucleus) would be rejected by the NS. The latter\nwould carry away the NL mass-energy difference between the\noriginal and final states of the captured neutrons. They could only\nescape from the magnetic fields, in axial directions, within the small\nescape angle given above. They would form {\\it narrow jets of\nrelativistic particles} richer in protons and with higher energies for\nhigher $p\/n$ ratios. 
They are consistent with the composition and\nenergies of {\\it cosmic ray particles} \\cite{V81b}, \\cite\n{R81}. They are also consistent with the {\\it radio sources\nand jets} going away from central regions of galaxies, most of them\nin just the expected orientations.\n\nThis process would be most important because it would convert G\nwork into mechanical and nuclear latent energies. This would\nregenerate new gas of high nuclear and kinetic energies at the cost\nof rather burnt-out materials like He or heavier elements. Such\nlow-entropy materials can in principle extend the luminous lifetimes of\ngalaxies beyond the limits estimated from the current models.\n\n\\subsection{The entropy switch}\n\n{}From the BH surface, just to the contrary of the outside regions, the\nexternal universe would look like a source of {\\it blue-shifted\nradiation} that would increase both the local temperature and the\nprobabilities for filling up the local SW levels up to the highest NL\nfrequencies. This is equivalent to a decrease of the local entropy. In\nthis way the average NL mass and kinetic energy of the nucleons\nwould increase with time, with the radiation energy coming from\nthe rest of the universe, up to some unstable state in which any\ndecrease of the NL refraction index gradient generated by external\nbodies would produce {\\it frustrated reflections} that can trigger the\nmass outflow. Thus the BH can {\\it explode}, producing low-density\ngas flowing away through older stellar remnants orbiting around it.\nThis would transform a fraction of the kinetic energy into rotational\nenergy associated with randomly oriented angular momenta. This is\nalso consistent with the fronts of H-rich matter diverging from very\nsmall regions in the universe.\n\n\\section{The new astrophysical context}\n\n{}From the above, the universe would last indefinitely in a kind of\nconservative and {\\it isentropic steady state}. 
In it, {\\it matter and\nradiation would evolve, indefinitely, in rather closed cycles,\nbetween the states of gas and BHs, and vice versa}.\n\n\\subsection{Matter cycles}\n\nSingle BH explosions and chains of BH explosions would produce {\\it rather\nspherical stellar clusters and elliptical galaxies, rather free of\nmetals}. They would regenerate randomly oriented angular\nmomenta that, in the long run, would be canceled out at faster\nrates compared with those parallel to the galactic axis. Thus an {\\it\nelliptical galaxy} would progressively acquire {\\it disc and spiral shapes}\nof smaller volumes. Finally, it would be reduced to a small\ncentral luminous volume (AGN and {\\it quasar}) with massive stars\nand high-density black bodies (black holes, neutron stars)\nsurrounded by a halo of dead stars and planetesimals [{\\it black\ngalaxy}]. Explosive events such as supernovas would produce large\nchanges of luminosity, within relatively short periods, that are\nconsistent with those of quasars \\cite {N90}.\n\nDue to the low $\\phi (r)$ in the black galaxy center, its atoms\nwould emit strongly red-shifted light, rather scattered and reflected by\nthe external bodies. This accounts for the fact that quasar\ncorrelations improve under the assumption that most of the\nobserved red shift is intrinsic \\cite {B73}. The detection of\nmetal lines would also prove the existence of highly evolved (old)\nmatter.\n\nThe {\\it black galaxy} (BG) resulting from a luminous one would be\ncooled down by its BHs. It would also capture and store radiation\ncoming from the external universe, in a way similar to a huge BH.\nAfter a long period, the explosion of some central BH can trigger a\nchain of BH explosions that would regenerate a luminous galaxy.\n\nWithin a larger time scale, the galaxy regeneration would look like a\nBG explosion that can trigger the virtual explosions of the next BGs,\nand so on. They would produce {\\it clusters}. 
Superclusters would also\nbe due to similar mechanisms. Thus the {\\it fronts} of galaxies in\nluminous stages would also account for the large-scale structure of\nthe universe.\n\n\\subsection{High energy step down in stellar objects}\n\nMechanisms of nuclear stripping similar to those occurring in BHs\ncould also occur during the fall of matter onto neutron stars (NSs),\neither steadily or in pulsed ways. They may also occur rather hidden\ninside some stars or gas clouds. They would transform heavy (burnt-out)\nelements into protons of higher kinetic and nuclear latent\nenergies that would promote convection currents. This would\nprevent overheating and stellar collapse after neutrino cooling.\n\nThis kind of stellar model \\cite{V93} is consistent with all of them:\nthe low neutrino luminosities, the higher densities and\ntemperatures, the better-defined mass-luminosity relations and the\nmagnetic structures of {\\it main sequence stars}.\n\n\\subsection{Density and isotropy of the universe}\n\nDue to the higher rates of energy emitted by the luminous galaxies\ncompared with those absorbed by the BGs, it is inferred (after a\nmass-energy balance) that {\\it most of the universe should be in the\nstate of low-temperature BGs}, cooled down by their own BHs. This\nis consistent with the high average density of the universe derived\nfrom (1.1), and assuming $H=75$~km\/sec per Mpc\\footnote {When\nthe common mass and energy unit is the joule, $G = G_{newt} c^4$.}.\nThis density is $ \\simeq [4\\pi GR^2]^{-1}$, i.e., about $10^{-29}$\ng\/cm$^3$. This is of a higher order of magnitude than that of the\nluminous fractions of the universe. This is also consistent with the\ncurrent mass excesses detected from dynamic methods in galaxies\nand clusters.\n\nAfter integration of (2), the space properties are fixed, mostly, by\nmatter existing between $R$ and $3R$. 
The contribution of\nrelatively local matter is extremely small compared with that of the\nrather uniform universe. This is consistent with {\\it the weakness of\nordinary G interactions, and with the high isotropy of both the space\nproperties and of the cosmological radiation background}.\n\nThe low-temperature black-body radiation coming from BGs, red-shifted\nduring its long average trip up to the observer ($2R$),\nwould fix {\\it a rather uniform low-temperature cosmic radiation\nbackground}. Thus {\\it the universe would always look like a perfect\nradiation absorber}\\footnote {Only steady state cosmologies can\naccount for the {\\it arrows} in nature \\cite {N90}.}.\n\n\\section{Conclusions}\n\nThe theoretical properties of the SW particle model fix a new kind of\n{\\it conservative and isentropic steady state} in which matter and\nradiation evolve, indefinitely, in rather closed cycles. These cycles\nare fairly consistent with the luminous bodies ranging between\nelliptical galaxies and quasars, and also with larger-scale structures\nof the universe.\n\nThis theory opens the way for new stellar models and non-conventional\ninterpretations of many celestial phenomena. The new\nuniverse would not have the narrow limits of time fixed by the rather\nconventional theories. In this way, also, astrophysics could do\nwithout the relatively large number of non-testable hypotheses that\ncan be advanced on the universe origin.\n\nThere is simultaneous consistency of the theoretical properties of\nthe SW model with fundamental physics, and of the new\ncosmological context with a wide range of astronomical\nobservations. This seems to be a fair reliability test for all of them:\nthe SW particle model, the relationships derived from it, and the\nnew cosmological and astrophysical contexts. 
This unified way may\ncontribute to understanding nature in terms of the most elemental\nproperties of radiation (or vice versa), thus depending on the\nminimum number of parameters, postulates, and arbitrary\nassumptions normally made on relations between matter and its G field.\n\nDue to the large amount of subjects and materials accumulated\nfrom 1976 up to today, the author intends to compile all of this\nwork into a single book that may be useful to those who may\nlike to go this way towards understanding nature from a self-consistent\nand unified viewpoint \\cite{V95c}.\n\nAcknowledgement. I appreciate very much the help of TW Andrews, who\nsent me some helpful literature, including his own ideas. I would also\nappreciate some encouragement and collaboration in finding\nfurther astronomical tests for this theory.\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\tMany measurement operations in signal and image processing as well as in communication follow a bilinear model. Namely, in addition to the measurements depending linearly on the unknown signal, certain parameters of the measurement procedure also enter in a linear fashion. Hence one cannot employ a linear model (for example, in connection with compressed sensing techniques \\cite{candes2006robust}) unless one has an accurate estimate of these parameters.\n\n\tWhen such estimates are not available or too expensive to obtain, there are certain asymmetric scenarios in which one of the inputs can be recovered even though the other one is out of reach (e.g., \\cite{xu1995least,lee2017spectral}; this scenario is sometimes referred to as passive imaging). In most cases, however, the natural aim will be to recover both the signal and the parameters, that is, to solve the associated bilinear inverse problem. Even when some estimates of the parameters are available, such a unified approach will be preferred in many situations, especially when information is limited. 
Consequently, the study of bilinear inverse problems, including but not limited to the important problem of blind deconvolution, has been an active area of research for many years \\cite{Haykin1994}.\n\t\n\tObserving that bilinear maps admit a representation as a linear map in the rank one outer product of the unknown signal and the parameter vector, one can approach such problems using tools from the theory of low-rank recovery (see, e.g., \\cite{ahmedrechtromberg,lingstrohmer,jung2017blind}). Under sparsity assumptions, that is, when the signals and\/or parameter vectors admit an approximate representation using just a small (but unknown) subset of an appropriate basis (for more details regarding when such assumptions appear in bilinear inverse problems, see \\cite{strohmer}), however, the direct applicability of these approaches is limited, as two competing objectives arise: one aims to simultaneously minimize rank and sparsity. As a consequence, the problem becomes considerably more difficult; Oymak et al., for example, have demonstrated that minimizing linear combinations of the nuclear norm (a standard convex proxy for the rank) and the $\\ell_1$ norm (the corresponding quantity for sparsity) exhibits suboptimal scaling \\cite{oymak2015simultaneously}.\tIn fact, it is not even clear whether, without additional assumptions, efficient recovery is at all possible for a near-linear number of measurements (as would be predicted by identifiability considerations \\cite{doi:10.1137\/16M1067469}).\n\t\n\tRecently, a number of nonconvex algorithms for bilinear inverse problems have been proposed. For example, for such problems without sparsity constraints, several such algorithms have been analyzed for blind deconvolution and related problems \\cite{li2016rapid,ling2017regularized} with near-optimal recovery guarantees. In contrast, our understanding of bilinear inverse problems with sparsity constraints is still in its beginnings. 
Recently, several algorithms have been analyzed for sparse phase retrieval \\cite{bahmani2017anchored,soltanolkotabi2017structured} or blind deconvolution with sparsity constraints \\cite{qu2017convolutional}. The recovery guarantees for these algorithms, however, are either suboptimal in the number of necessary measurements or only local convergence guarantees are available, i.e., one relies on the existence of a good initialization. (A noteworthy exception are the two related papers \\cite{bahmani2016near,iwen2017robust}, where a two-stage approach for (sparsity) constrained bilinear inverse problems is proposed, which achieves recovery at near-optimal rate. However, the algorithm relies on a special nested structure of the measurements, which is not feasible for many practical applications.)\n\t\n\tIn \\cite{paper2} Lee, Wu, and Bresler introduced the {\\em sparse power factorization} (SPF) method together with a tractable initialization procedure based on alternating minimization. They also provide a first performance analysis of their method for random bilinear measurements in the sense that their lifted representation is a matrix with independent Gaussian entries.\n\tThat is, they work with linear operators $\\mathcal{A}\\colon \\mathbb{C}^{n_1 \\times n_2} \\longrightarrow \\mathbb{C}^{m} $ that admit a representation as\n\t\\begin{equation*}\n\t\\left( \\mathcal{A} \\left( X \\right) \\right) \\left( \\ell \\right)= \\text{trace} \\left( A_{\\ell}^\\ast X \\right)\n\t\\end{equation*}\n\tfor i.\\,i.\\,d.\\ Gaussian matrices $ A_{\\ell} \\in \\mathbb{C}^{n_1 \\times n_2} $.\n\t\n\t\n\tFor such measurements they show that with high probability, SPF converges locally to the right solution, i.e., one has convergence for initializations not too far from the signal to be recovered. 
\n\t\n\t\n\tFor signals that have a very large entry, they also devise a tractable initialization procedure -- they call it thresholding initialization -- such that one has global convergence to the right solution. Local convergence has also been shown for the multi-penalty approach {\\em A-T-LAS$_{1,2}$} \\cite{fornasier2018}, but to our knowledge, comparable global recovery guarantees are not available to date. This is why we focus on SPF in this paper, using the results of \\cite{paper2} as our starting point.\n\t\n\tThe precise condition for their guarantee to hold is that both (normalized) input signals need to be larger than some $c>\\tfrac{1}{2}$ in supremum norm -- more than one quarter of its mass needs to be located in just one entry, that is, the signals must have a very high peak to average power ratio.\n\t\n\t In this paper, we considerably weaken this rather strong restriction in two ways. Firstly, we show that similar results hold for smaller lower bounds $c$ at the expense of a moderately increased number of measurements. Secondly, we show that similar results can be obtained when the mass of one of the signals is concentrated in more than one, but still a small number of entries.\n\t\n\t\n\tThe SPF algorithm, the thresholding initialization, and the resulting recovery guarantees are reviewed in \\Cref{spfsection} before we discuss and prove our results in \\Cref{resultsection} and Section \\Cref{sectionproof}.\n\t\n\t\\subsection*{Notation}\n\tThroughout the paper we will use the following notation. By $ \\left[n\\right] $ we will denote the set $ \\left\\{1; \\ldots; n \\right\\} $. For any set $J$ we will denote its cardinality by $ \\vert J \\vert $. For a vector $v\\in \\mathbb{C}^m$ we will denote by $ \\Vert v \\Vert $ its $ \\ell_2$-norm and by $ \\Vert v \\Vert_{\\infty}$ the modulus of its largest entry. If $J \\subset \\left[n\\right] $ we will by $v_J$ denote the restriction of $v$ to elements indexed by $J$. 
For matrices $ A \\in \\mathbb{C}^{n_1 \\times n_2} $ we will denote by $ \\Vert A \\Vert_F$ its Frobenius norm and by $\\Vert A \\Vert $ its spectral norm, i.e., the largest singular value of $A$. \t\n\t\n\t\\section{Sparse Power Factorization: Algorithm and Initialization}\\label{spfsection}\n\t\n\t\\subsection{Problem formulation}\n\t\n\n\n\n\n\n\t\n\tLet $ b \\in \\mathbb{C}^m $ be given by\n\t\\begin{equation*}\n\tb:= B(u,v)+z,\n\t\\end{equation*}\n\twhere $ B \\colon \\mathbb{C}^{n_1} \\times \\mathbb{C}^{n_2} \\rightarrow \\mathbb{C}^m $ is a bilinear map and $z\\in \\mathbb{C}^m$ is noise. Recall that one can represent the bilinear map $B \\colon \\mathbb{C}^{n_1}\\times \\mathbb{C}^{n_2} \\rightarrow \\mathbb{C}^{m}$ by a linear map\n\t$\\mathcal{A}\\colon\\mathbb{C}^{n_1\\times n_2}\\longrightarrow\\mathbb{C}^m$, which satisfies\n\t\\[\n\tB(u,v)= \\mathcal{A}(uv^\\ast).\n\t\\]\n\tfor all vectors $u \\in \\mathbb{C}^{n_1}$ and all $ v \\in \\mathbb{C}^{n_2}$. Note that such a linear map $ \\mathcal{A}$ is characterized by a (unique) set of matrices $ \\left\\{ A_{\\ell} \\right\\}^m_{\\ell =1} \\subset \\mathbb{C}^{n_1 \\times n_2} $ such that the $\\ell$th entry of $ \\mathcal{A}\\left( X \\right)$ is given by\n\t\\begin{equation}\\label{operatorrepresentation}\n\t\\left( \\mathcal{A} \\left( X \\right) \\right) \\left( \\ell \\right)= \\text{trace} \\left( A_{\\ell}^\\ast X \\right).\n\t\\end{equation}\n\tIn this notation, our goal will be to reconstruct $u$ and $v$ from linear measurements given by\n\t\\begin{equation*}\n\tb_{\\ell} = \\text{trace} \\left( A^*_{\\ell} uv^* \\right)\n\t\\end{equation*}\n\tAt the core of the Sparse Power Factorization Algorithm, as introduced in \\cite{paper2}, are the linear operators $F \\colon \\mathbb{C}^{n_2} \\longrightarrow \\mathbb{C}^{m \\times n_1} $ and $ G \\colon \\mathbb{C}^{n_1} \\longrightarrow \\mathbb{C}^{m \\times n_2} $ defined by\n\t\n\t\t$$F(y) := \n\t\t\\begin{pmatrix}\n\t\ty^\\ast 
A_1^\\ast\\\\\n\t\t\\vdots\\\\\n\t\ty^\\ast A_m^\\ast\n\t\t\\end{pmatrix},\n\t\t\\quad\n\t\tG(x) := \n\t\t\\begin{pmatrix}\n\t\tx^\\ast A_1\\\\\n\t\t\\vdots\\\\\n\t\tx^\\ast A_m\n\t\t\\end{pmatrix}.$$\n\t\tA direct consequence of this definition is that $$\\mathcal{A}(xy^\\ast) = [F(y)]x = \\overline{[G(x)]y}$$ for all $ x \\in \\mathbb{C}^{n_1} $ and all $ y \\in \\mathbb{C}^{n_2} $.\n\n\n\t\\subsection{Sparse Power Factorization}\n\t\nThe idea of Sparse Power Factorization is to iteratively update estimates $u_t$ and $v_t$ for $u$ and $v$ in an alternating fashion.\nThat is, in each iteration one keeps one of $v_t$ and $u_t$ fixed and updates the respective other one by solving an (underdetermined) linear system. Solving each of these linear systems then amounts to solving a linear inverse problem with sparsity constraints. Hence, many pursuit algorithms proposed in the context of compressed sensing can be applied such as CoSaMP \\cite{needell2009cosamp}, Hard Thresholding Pursuit \\cite{foucart2011hard} or Basis Pursuit. In \\cite{paper2} the authors used Hard Thresholding Pursuit (HTP) for their analysis and in this paper, we will also restrict ourselves to HTP. 
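The identity $\mathcal{A}(xy^\ast) = [F(y)]x = \overline{[G(x)]y}$ can also be verified numerically; the following is a small sketch of ours (random complex Gaussian matrices chosen for illustration, not a construction from the paper):

```python
import numpy as np

# Numerical check (our sketch) of A(x y*) = [F(y)] x = conj([G(x)] y),
# where (A(X))(l) = trace(A_l^* X), F(y) has rows y^* A_l^*, G(x) has rows x^* A_l.
rng = np.random.default_rng(0)
m, n1, n2 = 5, 4, 3
A = rng.standard_normal((m, n1, n2)) + 1j * rng.standard_normal((m, n1, n2))
x = rng.standard_normal(n1) + 1j * rng.standard_normal(n1)
y = rng.standard_normal(n2) + 1j * rng.standard_normal(n2)

X = np.outer(x, y.conj())                               # the lifted rank-one matrix x y^*
AX = np.array([np.trace(Al.conj().T @ X) for Al in A])  # (A(X))(l) = trace(A_l^* X)

F = np.array([np.conj(Al @ y) for Al in A])    # l-th row equals y^* A_l^*
G = np.array([Al.T @ np.conj(x) for Al in A])  # l-th row equals x^* A_l

assert np.allclose(AX, F @ x)
assert np.allclose(AX, np.conj(G @ y))
```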
With this, the Sparse Power Factorization Algorithm reads as follows.\n\t\n\t\n\\begin{alg}[Algorithm 1 in \\cite{paper2}]\\label{SPF.alg}~\\\\\\vspace*{-4mm}\n\t\\begin{algorithmic}[1]\n\t\t\\sffamily\\small\n\t\t\\Require Operator $\\mathcal{A}$, Measurement $b$, Sparsity Constraints $s_1, s_2$, Initialisation $v_0$.\n\t\t\\Ensure Estimate $\\widehat{X}$.\n\t\t\\State $t \\gets 0$\n\t\t\\While{stop condition not satisfied}\n\t\t\\State $t \\gets t + 1$\n\t\t\\State $v_{t - 1} \\gets \\frac{v_{t - 1}}{\\big\\|v_{t - 1}\\big\\|}$\\label{vt1norm}\n\t\t\\If{$s_1 < n_1$}\n\t\t\\State $u_t \\gets \\mathrm{HTP}(\\mathrm{F}(v_{t - 1}), b, s_1)$\n\t\t\\Else\n\t\t\\State $u_t \\gets \\operatorname*{{arg\\,min}}\\limits_x \\big\\|b - [\\mathrm{F}(v_{t - 1})]x\\big\\|^2$\n\t\t\\EndIf\n\t\t\\State $u_t \\gets \\frac{u_{t}}{\\big\\|u_{t}\\big\\|}$\n\t\t\\If{$s_2 < n_2$}\n\t\t\\State $v_t \\gets \\mathrm{HTP}(\\mathrm{G}(u_{t}), \\bar{b}, s_2)$\n\t\t\\Else\n\t\t\\State $v_t \\gets \\operatorname*{{arg\\,min}}\\limits_b \\big\\|\\bar{b}- [\\mathrm{G}(u_{t})]b\\big\\|^2$\\label{sparse-else}\n\t\t\\EndIf\n\t\t\\EndWhile \\\\\n\t\t\\Return $\\widehat{X} \\gets u_tv_t^{\\ast}$\n\t\\end{algorithmic}\n\\end{alg}\t\n\\noindent The Hard Thresholding Pursuit Algorithm is defined as follows:\n\\begin{alg}HTP(A, b, s)\\label{HTP.alg}~\\\\\\vspace*{-4mm}\n\t\\begin{algorithmic}[1]\n\t\t\\sffamily\\small\n\t\t\\Require Measurement matrix $A \\in \\mathbb{C}^{m \\times n}$, measurement $b \\in \\mathbb{C}^m$, sparsity constraint $s \\in \\mathbb{N}$.\n\t\t\\Ensure $ \\hat{x} \\in \\mathbb{C}^n $.\n\t\t\\State $t \\gets 0$\n\t\t\\While{stop condition not satisfied}\n\t\t\\State $t \\gets t+1 $\n\t\t\\State $ w= x_{t-1} + A^* \\left( b-Ax_{t-1} \\right) $\n\t\t\\State $J \\gets \\underset{J \\subset \\left[n\\right], \\ \\vert J \\vert =s}{\\arg \\max} \\ \\Vert w_J \\Vert $\n\t\t\\State $x_t \\gets \\underset{x: \\text{supp} \\left(x\\right) \\subset J }{\\arg \\min} \\Vert Ax-b 
\\Vert$\n\t\t\\EndWhile \\\\\n\t\t\\Return $ \\hat{x} \\gets x_t $\n\t\\end{algorithmic}\n\\end{alg}\t\t\n\t\n\t\n\t\n\t\\subsection{Initialization}\\label{Initialization}\n\tAs for many other non-convex algorithms (e.g., \\cite{Jain,candes2015phase}), the convergence properties of Sparse Power Factorization depend crucially on the choice of the starting point. In \\cite{Jain,candes2015phase} the starting point is chosen via a spectral initialization. That is, one chooses the leading left- and right-singular vectors of $ \\mathcal{A}^* \\left(y\\right) $ as the starting point. However, in order to work, this approach requires that the number of measurements is of the order of $ \\max \\left\\{n_1, n_2\\right\\} $, which will in general not be optimal as it does not take into account the sparsity of the vectors $u$ and $v$. One way to incorporate the sparsity assumption would be to solve the Sparse Principal Component Analysis (SparsePCA) problem\n\t\\begin{equation}\\label{equ:sparsePCA}\n\t\\begin{split}\n\t\\max \\quad & \\text{Re} \\left( \\tilde{u}^* \\mathcal{A}^* \\left(y \\right) \\tilde{v} \\right) \\\\\n\t\\text{subject to} \\quad &\\Vert \\tilde{u} \\Vert_0 \\le s_1, \\Vert \\tilde{u} \\Vert =1\\\\\n\t&\\Vert \\tilde{v} \\Vert_0 \\le s_2, \\Vert \\tilde{v} \\Vert =1,\n\t\\end{split}\n\t\\end{equation}\n\twhere $\\Vert \\cdot \\Vert_0 $ denotes the number of non-zero entries. As was shown in \\cite[Proposition III.4]{paper2}, Algorithm \\ref{SPF.alg}, if initialized by a solution of \\eqref{equ:sparsePCA}, is able to recover the signals $u$ and $v$ from a number of measurements of the order of $ \\left( s_1 + s_2 \\right) \\max \\left\\{ \\frac{s_1}{n_1}, \\frac{s_2}{n_2} \\right\\} $. However, the SparsePCA problem has been shown to be NP-hard \\cite{tillmann2014computational}. 
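To make the combinatorial nature of the SparsePCA problem \eqref{equ:sparsePCA} concrete, the following Python sketch (our own illustration, not part of \cite{paper2}; the function name and the use of NumPy are our choices) solves tiny instances exactly by enumerating all candidate supports: for fixed supports the optimal value is the largest singular value of the corresponding submatrix, attained by its leading singular vectors. The exhaustive scan over $\binom{n_1}{s_1}\binom{n_2}{s_2}$ support pairs is precisely what tractable algorithms must avoid, in line with the NP-hardness result cited above.

```python
import itertools
import numpy as np

def sparse_pca_bruteforce(M, s1, s2):
    """Brute-force solver for max Re(u* M v) over unit vectors u, v with
    ||u||_0 <= s1 and ||v||_0 <= s2.  For fixed supports (I, J) the optimum
    equals the largest singular value of the submatrix M[I, J], attained by
    its leading singular vector pair."""
    n1, n2 = M.shape
    best_val, best_u, best_v = -np.inf, None, None
    for I in itertools.combinations(range(n1), s1):
        for J in itertools.combinations(range(n2), s2):
            sub = M[np.ix_(I, J)]
            U, S, Vh = np.linalg.svd(sub)
            if S[0] > best_val:
                u = np.zeros(n1, dtype=M.dtype)
                v = np.zeros(n2, dtype=M.dtype)
                u[list(I)] = U[:, 0]          # leading left singular vector
                v[list(J)] = Vh[0, :].conj()  # leading right singular vector
                best_val, best_u, best_v = S[0], u, v
    return best_val, best_u, best_v
```

For instance, with a diagonal matrix and $s_1 = s_2 = 1$ the method returns the largest diagonal entry, with $u$ and $v$ supported on the corresponding index.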
Nevertheless, in the last fifteen years there has been a lot of research on the SparsePCA problem and, in particular, on tractable (i.e., polynomial-time) algorithms which yield good approximations to the true solution. Several computationally tractable algorithms have been proposed for solving (\\ref{equ:sparsePCA}), e.g., thresholding algorithms \\cite{ma2013sparse}, a generalized version of the power method \\cite{journee2010generalized}, and semidefinite programs \\cite{d2008optimal}. From the statistical perspective, particular emphasis has been put on the analysis of computationally efficient, or at least tractable, algorithms for the single spike model \\cite{amini2009high,krauthgamer2015semidefinite,deshpande2014sparse}. These approaches, however, require that the number of samples scales with the square of the number of non-zero entries of the signal to estimate (up to $\\log$-factors). This raised the question of whether there are fundamental barriers preventing the SparsePCA problem from being solved in polynomial time at a sampling rate close to the information-theoretic limit. Indeed, it has been shown that an algorithm achieving this would also yield an algorithm which solves the $k$-clique problem in polynomial time \\cite{berthet2013optimal,wang2016statistical}. However, a widely believed conjecture in theoretical computer science states that the latter is impossible, which indicates that this approach will not be suited for initializing bilinear recovery problems either.\\\\\n\n\t\\noindent In this manuscript we will analyse the following initialization algorithm, which is the one proposed in \\cite{paper2}. 
For sets $ J_1 \\subset \\left[n_1\\right] $ and $ J_2 \\subset \\left[n_2\\right] $ we will in the following denote by $ \\Pi_{J_1} $, respectively $ \\Pi_{J_2} $, the matrix which projects a vector onto the components belonging to $ J_1$, respectively $J_2$.\n\t\n\\begin{alg}[Algorithm 3 in \\cite{paper2}]\\label{SPF_alginit}~\\\\\\vspace*{-4mm}\n\t\\begin{algorithmic}[1]\n\t\t\\sffamily\\small\n\t\t\\Require Operator $\\mathcal{A}$, Measurement $b$, Sparsity Constraints $s_1, s_2$.\n\t\t\\Ensure Initial guess $v_0$ for $ v \\in \\mathbb{C}^{n_2} $.\n\t\n\t\n\t\n\t\n\t\n\t\t\\State For all $ i \\in \\left[ n_1 \\right] $ let $ \\xi_i $ be the $ \\ell_2$-norm of the best $s_2$-sparse approximation of the $i$th row of the matrix $ \\mathcal{A}^* \\left(b\\right) \\in \\mathbb{C}^{n_1 \\times n_2} $.\n\t\t\\State Let $ \\widehat{J_1} \\subset \\left[n_1\\right] $ be the set of the $ s_1 $ largest elements in $ \\left\\{\\xi_1; \\xi_2; \\ldots ; \\xi_{n_1} \\right\\} $.\n\t\t\\State Choose $\\widehat J_2$ to contain the indices of the $s_2$ columns of $\\Pi_{\\widehat J_1} \\mathcal{A}^* \\left( b \\right)$ largest in $\\ell_2$ norm, i.e.,\n\t\t\\begin{equation}\\label{equ:defwidetilde}\n\t\t\\widehat J_2 := \\underset{ J \\subset \\left[ n_2 \\right], \\ \\vert J \\vert = s_2 }{\\operatorname*{arg\\,max}} \\big\\|\\Pi_{\\widehat J_1}[\\mathcal{A}^\\ast(b)]\\Pi_{ J }\\big\\|_\\mathrm{F}.\n\t\t\\end{equation}\\\\\n\t\t\\Return $v_0$, the leading right singular vector of $\\Pi_{\\widehat{J_1}}[\\mathcal{A}^{\\ast}(b)]\\Pi_{\\widehat{J_2}}$.\n\t\n\t\n\t\\end{algorithmic}\n\\end{alg}\t\n\t\n\n\n\t\n\t\n\t\\section{Previous results}\n\t\n\tIn the following we will work with the model (\\ref{operatorrepresentation}), i.e., we observe\n\t\\begin{equation*}\n\tb_{\\ell} = \\text{trace} \\left( A_{\\ell}^* uv^* \\right) + z_{\\ell},\n\t\\end{equation*}\n\twhere $u \\in \\mathbb{C}^{n_1}$ is $s_1$-sparse, $ v \\in \\mathbb{C}^{n_2}$ is $s_2$-sparse, and $z \\in
\\mathbb{C}^m $ is noise. As in \\cite{paper2}, $ \\nu \\left(z\\right) $ will quantify the noise-to-signal ratio via\n\t\\begin{equation}\\label{def:noiselevel}\n\t\\nu \\left(z\\right) := \\frac{\\Vert z \\Vert}{\\Vert \\mathcal{A} \\left(uv^*\\right) \\Vert }.\n\t\\end{equation}\n\t\\noindent For our analysis, $ \\mathcal{A} $ will be a Gaussian linear operator, that is, all the entries of the matrices $ A_1, \\ldots, A_{m}$ are independent with distribution $ \\mathcal{CN} \\left(0, \\frac{1}{m} \\right) $. (Here a complex-valued random variable $X$ has distribution $ \\mathcal{CN} \\left(0,\\frac{1}{m}\\right) $ if its real and imaginary parts are independent Gaussians with expectation $0$ and variance $\\frac{1}{2m} $.)\n\t\n\t\n\t\n\t\n\t\n\n\t\n\t\\noindent In \\cite{paper2}, the authors derived that Algorithm \\ref{SPF.alg}, if initialized by Algorithm \\ref{SPF_alginit}, is able to recover both $u$ and $v$ (up to scale ambiguity), if both $u$ and $v$ belong to a certain restricted class of signals. More precisely, they proved the following result.\n\n\t\\begin{thm}[{\\cite[see Theorems III.7 and III.10]{paper2}}]\\label{th1}\n\t\tAssume that $\\mathcal{A} \\colon \\mathbb{C}^{n_1 \\times n_2 } \\longrightarrow \\mathbb{C}^{m} $ is a Gaussian linear operator as described above. Let $ b= \\mathcal{A} \\left( uv^* \\right) + z$, where $u$ is $s_1$-sparse and $v$ is $s_2$-sparse. Suppose that $ \\Vert u \\Vert_{\\infty} \\ge 0.78 \\Vert u \\Vert $, $ \\Vert v \\Vert_{\\infty} \\ge 0.78 \\Vert v \\Vert $, and that the noise level satisfies $ \\nu \\left(z\\right) \\le 0.04 $. 
Then, with probability exceeding $ 1- \\exp \\left(-c_1 m\\right) $, the sequence of iterates generated by Algorithm \\ref{SPF.alg}, initialized by Algorithm \\ref{SPF_alginit}, converges linearly to $uv^*$ provided that\n\t\\begin{equation*}\n\t\tm \\ge c_2 \\left(s_1 + s_2 \\right) \\log \\left(\\max\\left\\{\\frac{n_1}{s_1},\\frac{n_2}{s_2}\\right\\}\\right),\n\t\t\\end{equation*}\t\n\twhere $c_1,c_2>0$ are absolute constants.\n\t\\end{thm}\t\n\t\\noindent Note that in order to apply Theorem \\ref{th1} to signals $u$ and $v$ one needs to require that more than half of the mass of each of $u$ and $v$ is located in a single entry, a severe restriction that can be prohibitive for many applications. Our goal in the following will be to considerably relax this assumption at the cost of a slightly increased number of measurements. We will relax it in two different ways: On the one hand, we will show that one can replace $ 0.78$ by an arbitrarily small constant, which will then show up in the number of measurements. On the other hand, we generalize the result to the case that a significant portion of the mass of $u$ is concentrated on a small number $k$ of entries, rather than in just one of them.\n\t\\section{Main Result}\\label{resultsection}\n\tIn this section we will state the main result of this article, Theorem \\ref{thm:mainresultreadable}.\n\tFor that, we need to define the norm\n\t\\begin{equation*}\n\t\\Vert x \\Vert_{\\left[k\\right]} := \\underset{I \\subset \\left[n_1\\right], \\ \\vert I \\vert = k}{\\max} \\left( \\sum_{i \\in I} \\vert x_i \\vert^2 \\right)^{1\/2} = \\left( \\sum_{i=1}^{k} \\left( x^*_i \\right)^2 \\right)^{1\/2},\n\t\\end{equation*}\n\tfor any $x\\in \\mathbb{C}^{n_1} $, where $ \\left( x^*_i \\right)^{n_1}_{i=1} $ denotes the non-increasing rearrangement of $ \\left( \\vert x_i \\vert \\right)^{n_1}_{i=1} $. 
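The quantity $\Vert x \Vert_{[k]}$ is easy to evaluate numerically: sort the moduli of the entries in non-increasing order and take the $\ell_2$-norm of the first $k$ of them. A minimal NumPy sketch (our own illustration; the function name is a hypothetical choice, not notation from the text):

```python
import numpy as np

def k_head_norm(x, k):
    """||x||_[k]: the l2 norm of the k largest-in-modulus entries of x,
    computed from the non-increasing rearrangement of (|x_i|)."""
    mags = np.sort(np.abs(x))[::-1]  # non-increasing rearrangement
    return float(np.sqrt(np.sum(mags[:k] ** 2)))
```

For $k = n_1$ this reduces to the Euclidean norm $\Vert x \Vert$, consistent with the definition above.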
Our main requirement on the vector $u$ will be that a significant amount of its mass is located in the largest $k$ entries, i.e., that $ \\frac{\\Vert u \\Vert_{\\left[k\\right]}}{\\Vert u \\Vert} $ is large enough.\n\t\\begin{thm}\\label{thm:mainresultreadable}\n\t\n\t\tLet $ k \\in \\left[ n_1 \\right] $ and $ 0<\\xi<1, 0<\\mu<1$. Then, there are absolute constants $C_1, C_2, C_3 >0$ such that if\n\t\t\\begin{equation}\n\t\tm \\ge C_1 \\max \\left\\{ \\frac{1}{\\xi^4 \\mu^4}, \\frac{k}{\\xi^2} \\right\\} \\left(s_1 + s_2 \\right) \\log \\left(\\max\\left\\{\\frac{n_1}{s_1},\\frac{n_2}{s_2}\\right\\}\\right),\n\t\t\\end{equation}\n\t\tthen with probability at least $ 1-\\exp \\left( - C_2 m \\right) $ the following holds.\\\\\n\t\n\t\n\t\n\t\n\t\n\t\n\t\t\n\t\tFor all $s_1$-sparse $u\\in \\mathbb{C}^{n_1}$ with $ \\Vert u \\Vert_{\\left[k\\right]} \\ge \\xi \\Vert u \\Vert $, all $s_2$-sparse $v\\in \\mathbb{C}^{n_2}$ with $ \\Vert v \\Vert_{\\infty} \\ge \\mu \\Vert v \\Vert $, and all $ z \\in \\mathbb{C}^m $ with $ \\nu\\left(z\\right) \\le C_3 \\min \\left\\{ \\xi^2 \\mu^2 ; \\frac{\\xi}{ \\sqrt{k}} \\right\\} $ the iterates $\\{X_t\\}_{t\\in\\mathbb{N}}$ generated by applying Algorithm \\ref{SPF.alg}, initialized by Algorithm \\ref{SPF_alginit}, satisfy\n\t\t$$\\limsup_{t\\to\\infty} \\frac{\\|X_t - uv^* \\|_\\mathrm{F}}{\\| uv^* \\|_\\mathrm{F}} \\leq 8.3 \\nu.$$\n\t\tFurthermore, the convergence is linear, i.e., for all $ t \\gtrsim \\log \\left( \\frac{1}{\\varepsilon}\\right) $ we have that\n\t\t\\begin{equation}\\label{equ:linearconvergence}\n\t\t\\frac{\\| X_{t} - uv^* \\|_\\mathrm{F}}{\\| uv^* \\|_\\mathrm{F}} \\leq 8.3 \\nu + \\varepsilon.\n\t\t\\end{equation}\n\t\\end{thm}\n\t\n\n\n\n\n\n\n\t\n\t\n\t\n\t\n\nIn the following we will discuss some important special cases of Theorem \\ref{thm:mainresultreadable}.\n\\begin{itemize}\n\\item \\textbf{Peaky signals: } In \\cite{paper2} the authors discuss recovery guarantees for signals $u$ and $v$ with 
$\\tfrac{\\Vert u \\Vert_{\\infty} }{\\Vert u \\Vert} $ and $\\tfrac{\\Vert v \\Vert_{\\infty} }{\\Vert v \\Vert} $, both bounded below by an absolute constant $\\mu \\approx 0.78$. The case $k=1$ of our theorem yields a direct improvement of this result in the sense that $\\mu$ can be chosen arbitrarily small with the number of required measurements only increasing by a factor of order $ \\mu^{-8} $. Hence, even when this constant decays logarithmically in the dimension, the required number of measurements will only increase by logarithmic factors.\n\\item \\textbf{Signals with multiple large entries: } When one of the input signals has multiple large entries, using the $\\Vert \\cdot \\Vert_{[k]} $ norm improves upon the resulting guarantee as compared to the scenario just discussed. As an example, assume that $s_1=s_2=s $, that $u$ and $v$ are normalized with $\\|v\\|_\\infty \\geq c_1 s^{-1\/8}$, and that $k=c_2 s^{1\/2}$ of the entries of $u$ are of absolute value at least $ c_3s ^{-1\/4}$. Then $ \\Vert u \\Vert_{\\left[k\\right]} \\ge \\sqrt{ c_2 } c_3 $. Using Theorem \\ref{thm:mainresultreadable} we obtain that the vectors $u$ and $v$ can be recovered if the number of measurements is on the order of $ s^{3\/2}$, thus below the order of $s^2$ that has been established for arbitrary sparse signals in \\cite{strohmer} (cf. next item). In contrast, applying Theorem \\ref{thm:mainresultreadable} with $k=1$ would yield that the number of measurements would have to be on the order of $ s^{5\/2} $, which is worse than the state-of-the-art.\n\\item \\textbf{Arbitrary sparse signals:}\tApplying Theorem \\ref{thm:mainresultreadable} to non-peaky signals yields suboptimal results. Indeed, let $u \\in \\mathbb{C}^{n_1}$ $s_1$-sparse and $v \\in \\mathbb{C}^{n_2} $ $s_2$-sparse be generic vectors. 
Observe that $ \\Vert v \\Vert_{\\infty} \\asymp \\frac{1}{\\sqrt{s_2}} \\Vert v \\Vert $.\nConsequently, Theorem \\ref{thm:mainresultreadable} applied with $ \\xi =1 $, $ k= s_1 $, and $ \\mu = \\frac{1}{\\sqrt{s_2}} $ yields that with high probability a generic $s_1$-sparse $u$ and a generic $s_2$-sparse $v$ can be recovered from $ y = \\mathcal{A} \\left( uv^* \\right) +z $, if the number of measurements satisfies\n\\begin{equation*}\nm \\ge C \\max \\left\\{ s_1; s_2^2 \\right\\} \\left(s_1 + s_2 \\right) \\log \\left(\\max\\left\\{\\frac{n_1}{s_1},\\frac{n_2}{s_2}\\right\\}\\right),\n\\end{equation*}\t\nand if the noise level $\\nu$ is on the order of $ \\mathcal{O} \\left( \\max \\left\\{ \\frac{1}{s_2}; \\frac{1}{\\sqrt{s_1}} \\right\\} \\right) $. Previous results (see, e.g., \\cite{strohmer}), in contrast, require $ m \\ge C \\max \\left\\{ s^2_1; s^2_2 \\right\\} \\log \\left(\\max\\left\\{\\frac{n_1}{s_1},\\frac{n_2}{s_2}\\right\\}\\right) $ samples.\n\\end{itemize}\t\n\n\n\\begin{remark}\nThe peakiness assumptions in Theorem \\ref{thm:mainresultreadable} may seem arbitrary at first sight but in certain applications they are reasonable. Namely, when $u$ is the signal transmitted via a wireless channel and $v$ is the unknown vector of channel parameters it is natural to assume that $v$ has a large entry, as the direct path will always carry most of the energy. The signal $u$ can be modified by the sender, so some large entries can be artificially introduced. 
In this regard, being able to consider multiple entries of comparable size is of advantage as adding a single very large entry will result in a dramatic increase of the peak-to-average power ratio.\n\\end{remark}\n\t\t\n\n\t\n\t\n\t\n\t\t\n\n\n\t\n\\section{Proofs}\\label{sectionproof}\n\t\\subsection{Technical tools}\n\tThe goal of this section is to prove Theorem \\ref{thm:mainresultreadable}.\n\tWe will start by recalling the following variant of the well-known restricted isometry property.\n\t\\begin{defi}[see \\cite{paper2}]\n\tA linear operator $ \\mathcal{A}$ has the $(s_1, s_2, r )$-restricted isometry property with constant $\\delta$ if \n\t\\begin{equation}\n\t\\left(1-\\delta\\right) \\Vert X \\Vert_F^2 \\le \\Vert \\mathcal{A} \\left( X \\right) \\Vert^2 \\le \\left( 1+\\delta \\right) \\Vert X \\Vert^2_F\n\t\\end{equation}\n\tfor all matrices $ X \\in \\mathbb{C}^{n_1 \\times n_2}$ of rank at most $r$ with at most $s_1$ non-zero rows and at most $s_2$ non-zero columns.\n\t\\end{defi}\n\t\\noindent The following lemma tells us that this property holds with high probability for a number of measurements close to the information-theoretic limit.\n\t\\begin{lemma}[See, e.g., Theorem III.7 in \\cite{paper2}]\\label{thm37}\n\tThere are absolute constants $c_1, c_2>0 $, such that if\n\t\\begin{equation}\\label{necessarymeasurementsrip}\n\tm \\ge \\frac{c_1}{\\delta^2} r \\left(s_1 + s_2 \\right) \\log \\left( \\max \\left\\{ \\frac{n_1}{s_1}, \\frac{n_2}{s_2} \\right\\} \\right),\n\t\\end{equation}\n\tfor some $\\delta >0$, then with probability at least $1-\\exp \\left( - c_2 m \\right) $ $ \\mathcal{A}$ has the $(s_1, s_2, r)$-restricted isometry property with restricted isometry constant $\\delta$.\n\\end{lemma}\n\\noindent As in \\cite[Lemma VIII.7]{paper2} we will need the following quantity, which depends on $ \\delta$ and $ \\nu $.\n\t\\begin{align}\\label{def:omegasup}\n\t\t\t&\\omega_\\mathrm{sup} 
:=\\sup\\left\\{\\omega\\in[0,\\tfrac{\\pi}{2}):\\omega\\geq\\arcsin\\left(C_\\delta[\\delta\\tan(\\omega)+(1+\\delta)\\nu\\sec(\\omega)]\\right)\\right\\}.\n\t\\end{align}\n\tHere, the constant $C_\\delta $ is given by the expression\n\t\\begin{equation*}\n\tC_{\\delta} = 1.1 \\frac{ \\sqrt{ \\frac{2}{1-\\delta^2} } + \\frac{1}{1-\\delta}}{1- \\sqrt{ \\frac{2}{1-\\delta^2} } \\delta },\n\t\\end{equation*}\n\tas can be seen by inspecting the proof of Lemma VIII.1 in \\cite{paper2}. The precise value of $ C_{\\delta}$ will not be important in the following; we will only use that $ 2 \\le C_{\\delta} \\le 5 $ for $\\delta \\le 0.04 $. \\\\\n\t\n\\noindent A simple estimate for $ \\omega_{\\sup}$ is given by the following lemma.\n\\begin{lemma}\\label{sin05}\n\tAssume that $ \\delta \\le 0.04 $ and $\\nu \\le 0.04 $. Then it holds that \n\t\\begin{equation*}\n\t\t\\tfrac{1}{2} \\leq \\sin(\\omega_\\mathrm{sup}) \\leq 1.\n\t\\end{equation*}\n\\end{lemma}\n\t\n\\begin{proof}\n\n\tWe observe that in order to show the claim it is enough to verify that $ \\omega= \\arcsin \\frac{1}{2} $ fulfills the inequality in (\\ref{def:omegasup}). 
Indeed, using $ \\cos \\omega = \\sqrt{\\frac{3}{4}} $, $ \\delta \\le 0.04 $, $ \\nu \\le 0.04 $, and $ C_{\\delta} \\le 5 $ we obtain that\n\t\\begin{align*}\n\t\tC_{\\delta} \\left[ \\delta \\tan \\left( \\arcsin \\frac{1}{2} \\right) + \\left(1+ \\delta\\right) \\nu \\sec \\left( \\arcsin \\frac{1}{2} \\right) \\right] &\\le C_{\\delta} \\left[ 0.04 \\frac{1\/2}{\\sqrt{3\/4}} + \\frac{ 1.04 \\cdot 0.04 }{\\sqrt{3\/4}} \\right] \\\\\n\t\t&\\le \\frac{1}{2}.\n\t\\end{align*}\n\n\\end{proof}\n\t\n\t\\noindent The quantity $ \\omega_{\\sup} $ controls the maximal angle between the initialization $ v_0$ and the ground truth $v$ such that the Sparse Power Factorization is guaranteed to converge, as captured by the following theorem.\n\t\t\\begin{thm}[Theorem III.9 in \\cite{paper2}]\\label{thm39}\n\t\tAssume that\n\t\t\\begin{enumerate}[1)]\n\t\t\n\t\t\t\\item $\\mathcal{A}$ has the $(3s_1,3s_2,2)$-RIP with isometry constant $\\delta\\leq0.08$,\n\t\t\t\\item $\\nu \\leq 0.08$,\n\t\t\t\\item the initialization $v_0$ satisfies $\\sin(\\angle(v_0,v)) < \\sin \\left( \\omega_{\\sup} \\right)$.\n\t\t\\end{enumerate}\n\t\tThen the iterates $\\{X_t\\}_{t\\in\\mathbb{N}} $ generated by Algorithm \\ref{SPF.alg} with initialization $v_0$ satisfy\n\t\t$$\\limsup_{t\\to\\infty} \\frac{\\|X_t - uv^*\\|_\\mathrm{F}}{\\|uv^*\\|_\\mathrm{F}} \\leq 8.3 \\nu.$$\n\t\tFurthermore, the convergence is linear in the sense of (\\ref{equ:linearconvergence}).\n\t\\end{thm}\n\t\\noindent Thus, it remains to verify that the initialization satisfies $\\sin(\\angle(v_0,v)) < \\sin \\left( \\omega_{\\sup} \\right)$. The following lemma gives an upper bound on $\\sin(\\angle(v_0,v))$.\t\n\t\t\\begin{lemma}[Lemma VIII.10 in \\cite{paper2}]\\label{lemma8.10}\n\t\tAssume that the $(3s_1, 3s_2, 2 )$-restricted isometry property holds for some constant $ \\delta >0 $. Furthermore, assume that $ \\Vert u \\Vert = \\Vert v \\Vert =1 $. 
Let $\\widehat{J_1} \\subseteq \\left[n_1\\right] $ and $\\widehat{J_2} \\subseteq \\left[n_2\\right]$ denote the output resulting from Algorithm \\ref{SPF_alginit}.\n\t\t\n\t\t Denote by $v_0$ the leading right singular vector of $\\Pi_{\\widehat{J_1}}[\\mathcal{A}^\\ast(b)]\\Pi_{\\widehat{J_2}}$. Then it holds that\n\t\t\\begin{equation}\\label{ineq:sufficientcondition2}\n\t\t\\sin(\\angle(v_0,v)) \\leq \\frac{\\big\\|\\Pi_{\\widehat J_1}u\\big\\|\\big\\|\\Pi_{\\widehat J_2}^\\perp v\\big\\| + (\\delta + \\nu+\\delta\\nu)}{\\big\\|\\Pi_{\\widehat J_1}u\\big\\|-(\\delta + \\nu+\\delta\\nu)}.\n\t\t\\end{equation}\n\t\\end{lemma}\n\n\t\\noindent Furthermore, we will need the following two lemmas for our proof.\n\\begin{lemma}[Lemma VIII.12 in \\cite{paper2}]\\label{lemma:lastlemma}\nLet $u$ and $v$ be as in Lemma \\ref{lemma:supportlowerbound} and assume that the measurement operator $ \\mathcal{A}$ satisfies the $ \\left( 3s_1, 3s_2, 2 \\right) $-restricted isometry property with constant $ \\delta $. Recall that $\\widehat J_1 \\subset \\left[n_1\\right] $ is the support estimate for $u$ given by the initialization Algorithm \\ref{SPF_alginit}. Define\n\\begin{equation}\\label{equ:definitionJ1}\n\t\\widetilde{J_1} := \\left\\{ j \\in \\left[ n_1 \\right]: \\ \\vert u_j \\vert \\ge 2 \\left( \\delta + \\nu + \\delta \\nu \\right) \\right\\}.\n\\end{equation}\t\nThen we have that $ \\widetilde{J_1} \\subset \\widehat J_1 $. \n\\end{lemma}\t\n\t\n\\begin{lemma}\\label{lemma:supportlowerbound}\n\tAssume that $ \\mathcal{A}$ has the $(3s_1, 3s_2, 2)$-restricted isometry property with isometry constant $ \\delta >0 $ and assume that $u$, respectively $v$, are $s_1$-sparse, respectively $s_2$-sparse, and satisfy $ \\Vert u \\Vert = \\Vert v \\Vert =1 $. 
Let $ \\widetilde{J_1} $ be defined as in (\\ref{equ:definitionJ1}).\n\tThen, it holds that\n\t\\begin{equation*}\n\t\\big\\|\\Pi_{ \\widehat J_1 } u \\big\\| \\big\\| \\Pi_{ \\widehat J_2 } v \\big\\| \\ge \\big\\|\\Pi_{ \\widetilde{J_1} } u \\big\\| \\Vert v \\Vert_{\\infty} - 2 \\left( \\delta + \\nu +\\delta \\nu \\right).\n\t\\end{equation*}\n\\end{lemma}\n\n\n\\noindent Lemma \\ref{lemma:supportlowerbound} is actually a slight generalization of what has been shown in \\cite[p. 1685]{paper2}. For completeness we have included a proof in Section \\ref{section:supportlowerbound}, which closely follows the proof in \\cite{paper2}.\\\\\n\n\n\t\\subsection{Proof of our main result}\n\t\tWe will now piece together these ingredients to obtain a sufficient condition; in the remainder of the section we will then show that the condition holds in our measurement setup. First note that in order to apply Theorem \\ref{thm39} we need to check that \t$\\sin(\\angle(v_0,v)) < \\sin \\left( \\omega_{\\sup} \\right)$ is satisfied. By Lemma \\ref{lemma8.10} it is sufficient to show that the right-hand side of inequality (\\ref{ineq:sufficientcondition2}) is strictly smaller than $ \\sin \\left( \\omega_{\\sup} \\right) $. 
Combining this with the equality $ \\big\\|\\Pi_{\\widehat J_2}^{\\perp}v\\big\\| = \\sqrt{ 1 - \\big\\|\\Pi_{\\widehat J_2}v\\big\\|^2 } $ we obtain the sufficient condition\n\t\t\\begin{equation*}\n\t\t\\big\\|\\Pi_{\\widehat J_1}u\\big\\| \\sqrt{ 1 - \\big\\|\\Pi_{\\widehat J_2}v\\big\\|^2 } < \\sin \\left( \\omega_{\\sup} \\right) \\left( \\big\\|\\Pi_{\\widehat J_1}u\\big\\| - \\left( \\delta + \\nu+\\delta\\nu\\right) \\right) - \\left( \\delta + \\nu+\\delta\\nu \\right).\n\t\t\\end{equation*}\n\t\tFurther manipulations yield that this is equivalent to\n\t\t\\begin{equation}\\label{ineq:sufficientcondition}\n\t\t\\begin{split}\n\t\t\\big\\|\\Pi_{\\widehat J_1}u\\big\\|^2 < &\\left( \\sin \\left( \\omega_{\\sup} \\right) \\big\\|\\Pi_{\\widehat J_1}u\\big\\| - \\left( 1+ \\sin \\left( \\omega_{\\sup} \\right) \\right) \\left( \\delta + \\nu+ \\delta \\nu \\right) \\right)^2\\\\\n\t\t+ & \\big\\|\\Pi_{\\widehat J_1}u\\big\\|^2 \\big\\|\\Pi_{\\widehat J_2} v\\big\\|^2.\n\t\t\\end{split} \n\t\t\\end{equation}\t\t\n \t\tHence, in the following our goal will be to verify (\\ref{ineq:sufficientcondition}).\n\t\t\\noindent We already noticed that the angle $ \\omega_{\\sup} $ measures how much the vector $v_0$ given by the initialization has to be aligned with the ground truth $v$ in order for the Sparse Power Factorization to converge. Consequently, it is natural to expect that the smaller the constant $ \\delta$ and the noise-to-signal ratio $\\nu$, the less the initialization vector has to be aligned with the ground truth, i.e., the larger $ \\omega_{\\sup} $ can be. This fact is captured by the following lemma.\n\t\\begin{lemma}\\label{sin2}\n\t\tLet $\\delta \\leq 0.04$ and $\\nu \\leq 0.04$. 
Then it holds that\n\t\t$$\\sin(\\omega_\\mathrm{sup}) \\geq 1 -C_{\\delta}^2\\left(\\delta + 2\\delta\\nu+2\\nu\\right)^2.$$\n\t\\end{lemma}\n\t\\begin{proof}\n\t\tIt follows directly from \\eqref{def:omegasup} that\n\t\t\\begin{align*}\n\t\t \\omega_{\\sup} &= \\arcsin \\left( C_{\\delta} \\left[ \\delta \\tan \\left( \\omega_{\\sup} \\right) + \\left( 1+ \\delta \\right) \\nu \\sec \\left( \\omega_{\\sup} \\right) \\right] \\right).\n\t\t\\end{align*}\n\t\tUsing trigonometric identities we obtain that\n\t\t\\begin{align*}\n\t\t \\sin \\left( \\omega_{\\sup} \\right) &= C_{\\delta} \\left[ \\delta \\frac{\\sin \\left( \\omega_{\\sup} \\right)}{\\sqrt{1-\\sin \\left( \\omega_{\\sup} \\right)^2 }} + \\left(1+\\delta \\right) \\nu \\frac{1}{\\sqrt{ 1- \\sin \\left( \\omega_{\\sup} \\right)^2 }} \\right].\n\t\t\\end{align*}\n\t\tLemma \\ref{sin05} implies that\n\t\t\\begin{equation*}\n\t\t\\sin \\left( \\omega_{\\sup} \\right) \\le \\frac{ \\sin \\left( \\omega_{\\sup} \\right) }{\\sqrt{1-\\sin \\left( \\omega_{\\sup} \\right)^2}} C_{\\delta} \\left( \\delta + 2 \\left( 1+ \\delta \\right) \\nu \\right).\n\t\t\\end{equation*}\n\tRearranging terms yields that\n\t\t\\begin{equation*}\n\t\t\\sin \\left( \\omega_{\\sup} \\right) \\ge \\sqrt{1 - C^2_{\\delta} \\left( \\delta +2\\delta \\nu +2\\nu \\right)^2 }.\n\t\t\\end{equation*}\n\t\tThe claim follows then using the fact that $ \\sqrt{x} \\ge x $ for all $ x \\in \\left[ 0,1\\right] $.\n\t\t\\end{proof}\t\n\t\\noindent With these preliminary lemmas, we can now prove the following proposition, which is a slightly more general form of Theorem \\ref{thm:mainresultreadable}.\n\t\t\\begin{prop}\\label{prop:mainproposition}\n\tThere are absolute constants $c_1, c_2, c_3 >0$ such that if\n\t\\begin{equation}\\label{equ:numbermeasurements}\n\tm \\ge c_1 \\delta^{-2} \\left(s_1 + s_2 \\right) \\log \\left(\\max\\left\\{\\frac{n_1}{s_1},\\frac{n_2}{s_2}\\right\\}\\right),\n\t\\end{equation}\t\t\n\tfor some $ 0 < \\delta <0.01$, 
then with probability at least $ 1-\\exp \\left( - c_2 m \\right) $ the following statement holds uniformly for all $s_1$-sparse $u \\in \\mathbb{C}^{n_1} $, $s_2$-sparse $v \\in \\mathbb{C}^{n_2} $ and $ z \\in \\mathbb{C}^m $ such that $\\Vert u \\Vert = \\Vert v \\Vert =1 $ and $ \\nu \\left(z\\right) \\le 0.01 $:\\\\\t\n\t\\noindent Let the measurements be given by $ b= \\mathcal{A} \\left( uv^* \\right) +z $ for $ \\mathcal{A} $ Gaussian as above and let $ \\widetilde J_1 $ be defined by\n\t\\begin{equation}\\label{equ:definitionJ2}\n\t\\widetilde J_1 := \\left\\{ j \\in \\left[ n_1 \\right]: \\ \\vert u_j \\vert \\ge M_{\\delta,\\nu} \\right\\},\n\t\\end{equation}\n\twhere\n\t\\begin{equation*}\n\tM_{\\delta, \\nu} := 2 \\left( \\delta + \\nu + \\delta \\nu \\right).\n\t\\end{equation*}\n\n\n\n\n\n\tThen, whenever\n\t\\begin{equation}\\label{ineq:peakinessassumption}\n\n\t\\big\\|\\Pi_{ \\widetilde J_1 } u \\big\\| \\Vert v \\Vert_{\\infty} > c_3 \\sqrt{ M_{\\delta,\\nu} } ,\n\t\\end{equation}\n\tthe iterates $\\{X_t\\}_{t\\in\\mathbb{N}}$ generated by Algorithm \\ref{SPF.alg} initialized via Algorithm \\ref{SPF_alginit}, satisfy\n\t$$\\limsup_{t\\to\\infty} \\|X_t - uv^*\\|_\\mathrm{F} \\leq 8.3 \\nu.$$ \n\tFurthermore, the convergence is linear in the sense of (\\ref{equ:linearconvergence}).\n\\end{prop}\n\n\n\\begin{proof}[Proof of Proposition \\ref{prop:mainproposition}]\t\n\n\n\tAssumption (\\ref{equ:numbermeasurements}) and \\Cref{thm37} yield that with probability at least $1 - \\exp \\left(-c m \\right) $ the ($3s_1$,$3s_2$,$2$)-restricted isometry property holds with constant $\\delta $.\n\tFor the remainder of the proof, we will consider the event that the restricted isometry property holds for such $\\delta$. 
\t\n\n\n\n\n\n\n\n\n\tWe obtain\n\t\\begin{equation*}\n\t\\big\\|\\Pi_{ \\widetilde J_1 } u \\big\\| \\Vert v \\Vert_{\\infty} \\ge \\left( \\sqrt{ C^2_{\\delta} +1 } + 1 \\right) \\sqrt{M_{\\delta,\\nu}}\n\t\\end{equation*}\n\tfrom $ 2\\le C_{\\delta} \\le 5 $ and by choosing the constant $c_3$ in assumption (\\ref{ineq:peakinessassumption}) large enough. Combining this with Lemma \\ref{lemma:supportlowerbound} we obtain that\n\t\\begin{equation}\\label{ineq:chain4}\n\t\\begin{split}\n\t\\big\\|\\Pi_{ \\widehat J_1 } u \\big\\| \\big\\| \\Pi_{ \\widehat J_2 } v \\big\\| &\\ge \\big\\|\\Pi_{ \\widetilde{ J_1} } u \\big\\| \\Vert v \\Vert_{\\infty} - M_{\\delta, \\nu}\\\\\n\t&> \\sqrt{ \\left( C^2_{\\delta} + 1 \\right) M_{\\delta, \\nu} },\n\t\\end{split}\n\t\\end{equation}\n\twhere we used that $\\sqrt{x} \\ge x $ for all $ x \\in \\left[0,1\\right] $. This yields a lower bound for the second summand of the right-hand side of (\\ref{ineq:sufficientcondition}). To bound the first summand we estimate\n\t\\begin{equation}\\label{ineq:chain6}\n\t\\begin{split}\n\t&\\sin(\\omega_\\mathrm{sup}) \\|\\Pi_{\\widehat J_1}u\\| - \\left( \\sin(\\omega_\\mathrm{sup}) +1 \\right) \\left( \\delta+\\nu + \\delta \\nu \\right)\\\\\n\t\\ge& \\left( 1- C^2_{\\delta} \\left( \\delta + 2\\nu + 2\\delta \\nu \\right)^2 \\right) \\|\\Pi_{\\widehat J_1}u\\| -2 \\left( \\delta + \\nu +\\delta \\nu \\right) \\\\\n\t\\ge& \\|\\Pi_{\\widehat J_1}u\\| - C^2_{\\delta} \\left( \\delta +2 \\nu + 2\\delta \\nu \\right)^2 -2 \\left( \\delta + \\nu + \\delta \\nu \\right) \\\\\n\t\\ge & \\|\\Pi_{\\widehat J_1}u\\| - \\left( C^2_{\\delta}+1 \\right) M_{\\delta,\\nu} \\\\\n\t\\ge & 0.\n\t\\end{split}\n\t\\end{equation}\n\tIn the first line we used \\Cref{sin2} and the fact that $ \\sin(\\omega_\\mathrm{sup}) \\le 1 $. The second line is due to $ \\|\\Pi_{\\widehat J_1}u\\| \\le 1 $ and the third inequality is due to $ \\delta \\ge 0 $, $ \\nu \\ge 0 $. 
In order to verify the last inequality it is enough to observe that due to Lemma \\ref{lemma:lastlemma} and due to assumption (\\ref{ineq:peakinessassumption}) with $c_3$ large enough\n\t\\begin{align*}\n\t\\|\\Pi_{\\widehat J_1}u\\| &\\ge \\Vert \\Pi_{\\widetilde{J_1}} u \\Vert \\ge \\Vert \\Pi_{\\widetilde{J_1}} u \\Vert \\Vert v \\Vert_{\\infty} \\ge \\left( C^2_{\\delta} + 1 \\right) M_{\\delta,\\nu},\n\t\\end{align*}\n\twhere the last inequality uses that $ C_{\\delta} \\le 5 $ and $ 0 \\le \\delta, \\nu \\le 0.01 $. Hence, by squaring (\\ref{ineq:chain6}) we obtain that\n\t\\begin{equation}\\label{ineq:chain5}\n\t\\begin{split}\n\t&\\left( \\sin(\\omega_\\mathrm{sup}) \\|\\Pi_{\\widehat J_1}u\\| - \\left( \\sin(\\omega_\\mathrm{sup}) +1 \\right) \\left( \\delta + \\nu + \\delta \\nu \\right) \\right)^2\\\\\n\t\\ge & \\left( \\|\\Pi_{\\widehat J_1}u\\| - \\frac{1}{2}\\left( C^2_{\\delta}+1 \\right) M_{\\delta, \\nu} \\right)^2\\\\\n\t\\ge& \\|\\Pi_{\\widehat J_1}u\\|^2 - \\left( C^2_{\\delta}+1 \\right) M_{\\delta, \\nu} \\|\\Pi_{\\widehat J_1}u\\| \\\\\n\t\\ge & \\|\\Pi_{\\widehat J_1}u\\|^2 - \\left( C^2_{\\delta}+1 \\right) M_{\\delta, \\nu},\n\t\\end{split}\n\t\\end{equation}\n\twhere in the last line we again used that $ \\|\\Pi_{\\widehat J_1}u\\| \\le 1 $. 
Together with (\\ref{ineq:chain4}) this yields (\\ref{ineq:sufficientcondition}), as desired.\n\t\n\\end{proof}\n\\noindent Finally, we will deduce Theorem \\ref{thm:mainresultreadable} from Proposition \\ref{prop:mainproposition}.\n\\begin{proof}[Proof of Theorem \\ref{thm:mainresultreadable}]\n\tWe will prove this result by applying Proposition \\ref{prop:mainproposition} with \n\t\\begin{equation}\n\t\\delta = \\min \\left\\{ \\frac{\\xi}{6 \\sqrt{2k}} ; \\frac{\\xi^2 \\mu^2}{8c_3^2} \\right\\}.\n\t\\end{equation}\n Let $ u \\in \\mathbb{C}^{n_1}$ $s_1$-sparse, $ v \\in \\mathbb{C}^{n_2} $ $s_2$-sparse and $z \\in \\mathbb{C}^m $ such that the assumptions of Theorem \\ref{thm:mainresultreadable} are satisfied. Without loss of generality we may assume in the following that $\\Vert u \\Vert = \\Vert v \\Vert =1 $.\n\tFirst, we note that invoking $ \\delta, \\nu < 0.01 $ and potentially decreasing the size of $C_3$ we have that\n\t\\begin{align*}\n\t2 \\left( \\delta + \\nu \\left(z\\right) + \\delta \\nu \\left(z\\right) \\right) < 2 \\left( \\delta + 2 \\nu \\left(z\\right) \\right) \\le \\frac{\\xi}{\\sqrt{2k}}.\n\t\\end{align*}\n\tHence, we obtain that\n\t\\begin{equation}\\label{equ:Jinclusion}\n\t\\breve{J}_1:= \\left\\{ j \\in \\left[ n_1 \\right]: \\ \\vert u_j \\vert \\ge \\frac{\\xi}{\\sqrt{2k}} \\right\\} \\subset \\widetilde{J}_1,\n\t\\end{equation}\n\twhere $ \\widetilde{J}_1 $ is the set defined in (\\ref{equ:definitionJ2}). \n\n\tNote that\n\t\\begin{equation*}\n\t\\sum_{i \\in \\left[k\\right] \\backslash \\breve{J}_1 } \\left( u^*_i \\right)^2 < \\sum_{i \\in \\left[k\\right] \\backslash \\breve{J}_1 } \\frac{\\xi^2}{2k} \\le \\frac{\\xi^2}{2},\n\t\\end{equation*}\n\twhere in the first inequality we have used that $ u^*_i < \\frac{\\xi}{\\sqrt{2k}} $ for all $ i \\in \\left[k\\right] \\backslash \\breve{J}_1 $. 
By the assumption $ \\Vert u \\Vert_{\\left[k\\right]} \\ge \\xi $ this yields that $ \\sum_{i \\in \\left[k\\right] \\cap \\breve{J}_1 } \\left( u^*_i \\right)^2 \\ge \\frac{\\xi^2}{2} $, which in turn implies that $ \\Vert \\Pi_{\\breve{J}_1} u \\Vert \\ge \\frac{\\xi}{\\sqrt{2}} $. By the inclusion (\\ref{equ:Jinclusion}) we obtain that $ \\Vert \\Pi_{ \\widetilde J_1 } u \\Vert \\ge \\frac{\\xi}{\\sqrt{2}} $. Hence, using the assumption $ \\Vert v \\Vert_{\\infty} \\ge \\mu $, our choice of $\\delta$, the assumption on the noise level $ \\nu \\left( z \\right) $, and potentially again decreasing the value of the constant $C_3$, we obtain that\n\t\\begin{equation*}\n\t\\Vert \\Pi_{\\widetilde{J}_1} u \\Vert \\Vert v \\Vert_{\\infty} \\ge \\frac{\\xi \\mu }{ \\sqrt{2} } \\ge c_3 \\sqrt{ M_{\\delta, \\nu} }.\n\t\\end{equation*}\n\tThis shows that (\\ref{ineq:peakinessassumption}) is satisfied. Hence, we can apply Proposition \\ref{prop:mainproposition}, and by inserting our choice of $\\delta$ into (\\ref{equ:numbermeasurements}) and choosing the constant $C_1$ large enough, we obtain the main result.\n\t\n\\end{proof}\n\n\n\n\t\\section{Outlook}\n\tWe see many interesting directions for follow-up work. Most importantly, it remains to explore whether additional constraints on the signals to be recovered are truly necessary (cf. our discussion of SparsePCA in Section \\ref{Initialization}). Even if this is the case, there is substantial room for improvement with respect to the noise-dependence of the recovery results. One direction could be to consider stochastic noise models instead of deterministic noise. Moreover, in this work we exclusively considered operators $\\mathcal{A}$ constructed using Gaussian matrices. However, in many applications of interest, the measurement matrices possess a significantly reduced amount of randomness. For example, in blind deconvolution one typically encounters rank-one measurements.
In particular, the restricted isometry property as used in this paper does not hold for such measurements. Thus, one needs additional insight to study whether there exists a computationally tractable initialization procedure at a near-optimal sampling rate. First steps in this direction were taken in \\cite{lee2015rip,lee2017blind}, but many questions remain open.\n\t\n\t\n\t\n\t\n\t\\section*{Acknowledgements}\n\tJakob Geppert is supported by the German Science Foundation (DFG) in the Collaborative Research Centre ``SFB 755: Nanoscale Photonic Imaging'' and partially in the framework of the Research Training Group ``GRK 2088: Discovering Structure in Complex Data: Statistics meets Optimization and Inverse Problems''. Felix Krahmer and Dominik St\\\"oger have been supported by the German Science Foundation (DFG) in the context of the joint project ``SPP 1798: Bilinear Compressed Sensing'' (KR 4512\/2-1). Furthermore, the authors want to thank Yoram Bresler and Kiryung Lee for helpful discussions.\n\n\n\\section{Introduction} \\label{sec:intro}\n\n\\begin{figure}[h!]\n\\centering\n\\includegraphics[scale=1]{f1.eps}\n\\caption{A cartoon picture of our MHD model of BH jets, \nwhere magnetic field lines (solid lines) cross the ergosphere (dashed line) and the event horizon. \nThere is an extended plasma loading zone (shaded region) near the central BH, where particles are injected. \nThe inflow and outflow pattern naturally forms under the mutual influence of the central BH and the EM fields. 
\\label{fig:cartoon}}\n\\end{figure}\n\n\nRelativistic jets launched by accreting black holes (BHs) play an essential role in several energetic astrophysical phenomena,\nincluding stellar-mass BH X-ray binaries, active galactic nuclei and possibly gamma-ray bursts.\nAfter decades of debate in the astrophysical community, many open questions concerning the nature of BH jets remain to be answered \\citep[see e.g.,][for reviews]{Meier01, Blandford18}. To name a few of the most fundamental ones: what are the central engines of the jets; how is the fluid within the jets accelerated to relativistic speeds; and how are the jets collimated?\n\nThe Blandford-Znajek (BZ) mechanism \\citep{BZ77,Znajek77}, which describes an electromagnetic (EM) process of extracting the rotation energy of the central BH in the form of Poynting flux, is believed to be the most promising candidate for the central engine\nof BH jets. For understanding the jets powered by the BZ mechanism, one needs to study magnetohydrodynamic (MHD) processes in the Kerr spacetime, where the EM fields and the fluid motion are coupled in a complicated way. Therefore, in many previous studies on this subject, some components of the MHD process are treated as dynamical variables while the other components are prescribed.\nFor studying the EM fields of the jet, force-free electrodynamics (FFE) is a convenient assumption, where\nthe fluid energy is ignored and the EM fields are self-contained \\citep[e.g.,][]{Tanabe08, Kom01, Kom02, Kom04a, \nKom04b, Kom05, Kom07,Beskin13, Contop13,Nathan14, Gralla14,Gralla15,Gralla16,Pan14,Pan15,Pan15b,Pan16,Pan17,Pan18, \nYang14, Yang15, East18, Mahlmann18}.\nFor studying the fluid motion within the jet, the fluid is usually treated as a test fluid in prescribed EM fields \\citep[e.g.,][]{Takahashi90, Beskin06, Globus13,Globus14,Pu12, Pu15}.
There have also been some full MHD attempts, which however address only the\noutflow pattern and the jet structure in the weak-gravity regime \\citep[see e.g.,][for self-similar outflow solutions in pseudo-potential]{Polko13, Polko14,Cecco18} \nand \\citep[see e.g.,][for outflow solutions in Minkowski spacetime]{Beskin98, Beskin00,Lyub09, Tchek09, Beskin10,Beskin17}.\nFor understanding the physics of BH accretion systems, general relativistic MHD (GRMHD) simulations have been another powerful tool \nover the past two decades, in which the full MHD equations in curved spacetime are solved. \nNevertheless, GRMHD codes tend to become unstable in near-vacuum regions, and therefore a matter density floor is usually introduced \n\\citep[e.g.,][]{Gammie03, Shibata05, Porth17}, which may obscure our understanding of plasma loading and of the flow within the jet. \n\nBesides all the theoretical explorations summarized above, substantial progress\nin spatial resolution has been made on the observational side. In particular, the Event Horizon Telescope (EHT) \n\\citep[e.g.,][]{Doel08,Doel12,Ricarte15,EHT19I,EHT19V} is expected \nto resolve the structure of nearby supermassive BHs (Sgr A$^*$ and M87) down to horizon scales.\nThe physical nature of the jets in these systems may well be unveiled if the coming EHT observations can \nbe correctly deciphered. This motivates us to construct a full GRMHD jet model, considering that \nall the previous studies are subject to different limitations.\n\nIn this paper, we aim to construct a GRMHD framework for investigating the structure of steady and axisymmetric jets \nof spinning BHs, in which the EM fields and the fluid motion are treated self-consistently. \nA cartoon picture in Fig.~\\ref{fig:cartoon} illustrates the major elements of our jet model:\na central BH, EM fields, a plasma loading zone, and the inflow and outflow.
\nThe magnetic field lines penetrate the event horizon of a spinning BH and \nextract the rotation energy from the BH in the form of Poynting flux.\nQuantifying the plasma loading within the BH jet is also a complicated problem, considering \nthe rich variety of plasma sources, including the accretion flow centered on the equatorial plane, \npair production inside the jet \\citep{Lev11, Brod2015, Hiro16, Chen18}\nand neutrino pair annihilation from an extremely hot accretion flow \\citep[see e.g.][]{Pop99, Narayan01}. \nIn our jet model, we do not deal with these detailed processes. For convenience, we \nintroduce a plasma loading zone where plasma is injected, and we prescribe the loading function, i.e., the particle number flux per magnetic flux $\\eta(r,\\theta)$. Under the mutual influence of the central BH and the EM fields, the inflow and outflow pattern naturally forms.\nIn summary, we aim to construct a framework for investigating the MHD jet structure of spinning BHs, in which the EM fields and the fluid motion are self-consistently obtained given proper boundary conditions and a proper plasma loading function $\\eta(r,\\theta)$.\n\n\nThis paper is organized as follows. In Section~\\ref{sec:setup}, we summarize some basic equations and assumptions to be used in this paper. We derive the two governing equations, the Bernoulli equation and the MHD Grad-Shafranov (GS) equation, in Section~\\ref{sec:Bern} and Section~\\ref{sec:GS}, respectively. We detail the numerical techniques\nfor solving the governing equations in Section~\\ref{sec:eg}. \nThe numerical solutions of the MHD jet structure with a split monopole magnetic field configuration are presented in Section~\\ref{sec:results}. \nSummary and discussion are given in Section~\\ref{sec:summary}. \nFor reference, we place some details of the derivation of the governing equations in Appendices \\ref{sec:D_der} and \\ref{sec:GS_der}.\nThroughout this paper, we use the geometrical units $c=G=M=1$, where $M$ is the mass of the central BH.
\n\n\n\n\n\\section{Basic Setting Up} \\label{sec:setup}\n\nThe background Kerr metric is written in the Boyer-Lindquist coordinates as follows,\n\\begin{eqnarray}\n\t{\\rm d}s^2&=& g_{tt} {\\rm d}t^2 + 2g_{t\\phi}{\\rm d}t {\\rm d}\\phi + g_{\\phi\\phi} {\\rm d}\\phi^2 + g_{rr} {\\rm d}r^2 + g_{\\theta\\theta} {\\rm d}\\theta^2 \\nonumber\\\\\n\t&\\ & \\nonumber\\\\\n\t&=& \\left( \\frac{2Mr}{\\Sigma}-1 \\right) {\\rm d}t^2 - 2\\ \\frac{2Mar\\sin^2\\theta}{\\Sigma} {\\rm d}t {\\rm d}\\phi \\nonumber\\\\\n\t&\\ & + \\frac{\\beta\\sin^2\\theta}{\\Sigma} {\\rm d}\\phi^2 + \\frac{\\Sigma}{\\Delta} {\\rm d}r^2 + \\Sigma {\\rm d}\\theta^2\\ ,\n\\end{eqnarray}\nwhere $a$ and $M$ are the BH spin and mass, respectively, $\\Sigma=r^2+a^2\\cos^2\\theta$, $\\Delta=r^2-2Mr+a^2$, $\\beta =(r^2+a^2)^2-a^2\\Delta\\sin^2\\theta$ and the square root of the determinant $\\sqrt{-g}=\\Sigma\\sin\\theta$.\n\n\nWe investigate the structure of a steady and axisymmetric BH jet and we assume the plasma within the jet is perfectly conducting, i.e., $\\partial_t = \\partial_\\phi = 0$ and $\\mathbf{E} \\cdot \\mathbf{B}=0$, where $\\mathbf{E}$ and \n$\\mathbf{B}$ are the electric and the magnetic fields, respectively. Then all the non-vanishing components of Maxwell tensor are expressed as follows \\citep[see e.g.,][]{Pan14}\n\\begin{equation}\n\\label{eq:Maxwell}\n\\begin{aligned}\n F_{r\\phi} &= -F_{\\phi r} =\\Psi_{,r} \\ , &F_{\\theta\\phi} &= -F_{\\phi\\theta} = \\Psi_{,\\theta} \\ , \\\\\n F_{tr} &= -F_{rt} =\\Omega\\Psi_{,r} \\ , &F_{t\\theta} &= -F_{\\theta t} = \\Omega \\Psi_{,\\theta} \\ , \\\\\n F_{r\\theta} &= -F_{\\theta r} = - \\frac{\\Sigma }{\\Delta\\sin\\theta} I \\ ,\n\\end{aligned}\n\\end{equation}\nwhere $\\Psi = \\Psi(r,\\theta)$ is the magnetic flux and $\\Omega = \\Omega(\\Psi)$ is the angular velocity of magnetic field lines. \nFor convenience, we have defined poloidal electric current $I(r,\\theta) \\equiv \\sqrt{-g} F^{\\theta r}$. 
Therefore,\nthe EM fields are completely determined by three quantities: $\\{\\Psi(r,\\theta), \\Omega(\\Psi), I(r,\\theta)\\}$.\n\nBefore proceeding, it is useful to define a few conserved quantities.\nFrom the perfectly conducting condition $F_{\\mu\\nu} u^\\nu = 0$, we find that the different components\nof the fluid velocity are related by\n\\begin{equation}\n \\frac{ u^r}{\\Psi_{,\\theta}} = - \\frac{u^\\theta}{\\Psi_{,r}}\n\t= \\frac{(u^\\phi-\\Omega u^t)}{F_{r\\theta}}\\ ,\n\\end{equation}\nfrom which we can define the particle number flux per magnetic flux\n\\begin{equation}\\label{eq:eta}\n\\begin{aligned}\n \\eta&\\equiv\\frac{\\sqrt{-g} n u^r}{\\Psi_{,\\theta}} = - \\frac{\\sqrt{-g} n u^\\theta}{\\Psi_{,r}} \\\\\n\t&= \\frac{\\sqrt{-g} n(u^\\phi-\\Omega u^t)}{F_{r\\theta}} \\ .\n\\end{aligned}\n\\end{equation}\nFrom the energy-momentum tensor $T^{\\mu\\nu} = T^{\\mu\\nu}_{\\rm EM} + T^{\\mu\\nu}_{\\rm MT}$, \nwhere the EM part and the matter (MT) part are \n\\begin{equation}\n\\label{eq:energy_tensor}\n\\begin{aligned}\n\tT^{\\mu\\nu}_{\\rm EM}&= \\frac{1}{4\\pi} \\left( F^{\\mu\\rho}F_{\\ \\ \\rho}^{\\nu} - \\frac{1}{4}g^{\\mu\\nu}F_{\\alpha\\beta}F^{\\alpha\\beta} \\right)\\ , \\\\\n\tT^{\\mu\\nu}_{\\rm MT}&= \\rho u^\\mu u^\\nu = nm u^\\mu u^\\nu \\ ,\n\\end{aligned}\n\\end{equation}\nwe define the total energy per particle $E$ and the total angular momentum per particle $L$ as follows,\n\\begin{equation}\\label{eq:EandL}\n\\begin{aligned}\n\tE&\\equiv E_{\\rm MT}+E_{\\rm EM}= -mu_t +\\frac{\\Omega I}{4\\pi\\eta}\\ , \\\\\n\tL&\\equiv L_{\\rm MT}+L_{\\rm EM} = mu_\\phi +\\frac{I}{4\\pi\\eta}\\ ,\n\\end{aligned}\n\\end{equation}\nwhere $\\rho$, $n$ and $m$ are the proper energy density, the proper number density and the particle rest mass, respectively;\nand we have assumed a cold plasma.\n\nNow let us examine the conservation property of these quantities along magnetic field lines.\nFor this purpose, we define the derivative along field lines\n\\begin{equation}\n 
D^\\parallel_\\Psi \\equiv \\frac{1}{\\sqrt{-g}}(\\Psi_{,\\theta}\\partial_r - \\Psi_{,r}\\partial_\\theta)\\ ,\n\\end{equation}\nand it is straightforward to obtain (see Appendix \\ref{sec:D_der})\n\\begin{equation} \\label{eq:D_eta}\n\tD^\\parallel_\\Psi\\eta = (nu^\\mu)_{;\\mu} \\ ,\n\\end{equation}\ni.e., $D^\\parallel_\\Psi\\eta$ quantifies the plasma loading rate. In general, we can write the\nenergy-momentum conservation as \n\\begin{equation}\\label{eq:smu}\n T^{\\mu\\nu}_{\\phantom{xy};\\nu} = S^\\mu,\n\\end{equation}\nwhere the source term $S^\\mu$ comes from plasma loading. As a simple example, we assume \n $S^\\mu = (D^\\parallel_\\Psi\\eta) mu^\\mu$ in this paper,\ni.e., the source term is contributed by the kinetic energy of the newly loaded plasma.\nWith a few steps of calculation as detailed in Appendix \\ref{sec:D_der}, we obtain\n\\begin{equation}\\label{eq:D_etaEL}\n\\begin{aligned}\n\tD^\\parallel_\\Psi(\\eta E)&= (D^\\parallel_\\Psi\\eta)(-mu_t)\\ ,\\\\\n\tD^\\parallel_\\Psi(\\eta L)&= (D^\\parallel_\\Psi\\eta)(mu_\\phi)\\ ,\n\\end{aligned}\n\\end{equation}\nwhere $\\eta E$ and $\\eta L$ are the energy flux per magnetic flux and the angular momentum flux per magnetic flux, respectively.\n\\emph{Outside} the plasma loading zone, where there is no particle injection,\nthe particle number conservation reads as\n\\begin{eqnarray}\n\t\\left(n u^\\mu\\right)_{;\\mu}&=&0\\ ,\n\\end{eqnarray}\nand therefore $\\eta, E, L$ are conserved along field lines,\ni.e., $\\eta=\\eta(\\Psi), E = E(\\Psi), L=L(\\Psi)$.\n\nIn summary, with the assumptions of a steady and axisymmetric jet structure and of perfectly conducting plasma within the jet, we have obtained one conserved quantity $\\Omega(\\Psi)$\nand three ``quasi-conserved'' quantities $\\{\\eta(\\Psi), E(\\Psi), L(\\Psi)\\}$\nwhich are only conserved \\emph{outside} the plasma loading zone.\n\n\\section{Bernoulli equation} \\label{sec:Bern}\n\nFrom the normalization condition $u^\\mu u_\\mu=-1$ and
Eqs.~(\\ref{eq:eta},\\ref{eq:EandL}), we obtain the relativistic Bernoulli equation\n\\begin{eqnarray}\\label{eq:Bern}\n\t\\mathcal{F}(u) = u_p^2+1-\\left(\\frac{E}{m}\\right)^2 U_g(r,\\theta) = 0\\ ,\\end{eqnarray}\nwhere $u_p^2 \\equiv u^Au_A$ with the dummy index $A=\\{r,\\theta\\}$.\nIn the Kerr spacetime, the characteristic function $U_g$ is written as \\citep{Camen86a,Camen86b,Camen87,Takahashi90,Fendt01,Fendt04,Levinson06,Pu15}\n\\begin{eqnarray}\n\\label{Ug}\n\tU_g(r,\\theta)&=& \\frac{k_0k_2-2k_2\\mathcal{M}^2-k_4\\mathcal{M}^4}{(\\mathcal{M}^2-k_0)^2}\\ ,\n\\end{eqnarray}\nwhere\n\\begin{eqnarray}\n\\label{Ugk}\n\tk_0&=& -[g_{tt}+2g_{t\\phi}\\Omega+g_{\\phi\\phi}\\Omega^2]\\ ,\\nonumber\\\\\n\tk_2&=& \\left[ 1-\\Omega(E\/L)^{-1} \\right]^2\\ ,\\nonumber\\\\\n\tk_4&=& \\frac{\\left[ g_{tt}(E\/L)^{-2} + 2g_{t\\phi}(E\/L)^{-1} +g_{\\phi\\phi} \\right]}{g_{tt}g_{\\phi\\phi}-g_{t\\phi}^2}\\ ,\n\\end{eqnarray}\nand the \\Alfven Mach number $\\mathcal{M}$ is given by\n\\begin{equation}\\label{eq:mach}\n \\mathcal{M}^2=\\frac{4\\pi m\\eta^2}{n} = 4\\pi mn\\frac{u_p^2}{B_p^2} = 4\\pi m\\eta\\frac{u_p}{B_p},\n\\end{equation}\nwith the poloidal magnetic field $B_p$ defined by\n\\begin{equation}\n(\\sqrt{-g} B_p)^2 = g_{rr} (\\Psi_{,\\theta})^2 + g_{\\theta\\theta} (\\Psi_{,r})^2 \\ .\n\\end{equation}\n\nSeveral characteristic surfaces can be defined according to the critical points of the flow velocity\n\\citep[see e.g., ][for details]{Michel69,Michel82,Camen86a,Camen86b, Takahashi90, Beskin09}.\nThe light surface (LS) is defined as the surface where the corotation velocity of the field lines reaches the speed of light and beyond which\nparticles are forbidden to corotate with the field lines,\n\\begin{equation}\nk_0 \\big|_{r=r_{\\rm LS}} = 0 \\ .\n\\end{equation}\nThe \\Alfven surface is defined as the surface where the denominator of the characteristic function $U_g(r,\\theta)$ vanishes, i.e.,\n\\begin{equation}\n - k_0 + \\mathcal{M}^2 \\big|_{r=r_A} = 0 \\ .\n\\end{equation}\nOn the \\Alfven surface, we 
find \n\\begin{equation} \n\\frac{E}{L}= - \\frac{g_{tt} + g_{t\\phi}\\Omega }{g_{t\\phi} + g_{\\phi\\phi}\\Omega}\\Bigg|_{r=r_A}\\ , \n\\end{equation} \nwhere we have used Eqs.(\\ref{eq:Bern}-\\ref{Ugk}).\nThe stagnation surface where $u_p = 0$ is determined by\n\\begin{equation}\\label{eq:stag}\nD^\\parallel_\\Psi k_0 \\big|_{r=r_*}=0 \\ .\n\\end{equation}\nThe fast magnetosonic (FM) surface and slow magnetosonic (SM) surface are defined by where the denominator of $D_\\Psi^\\parallel u_p$ vanishes. In the cold plasma limit, the SM surface coincides with the stagnation surface.\nOn the stagnation surface, where both $u_p$ and $\\mathcal{M}$ vanish, we find\n\\begin{equation} \\label{eq:Estag}\n\\left(\\frac{E}{m} \\right)^2 = \\frac{k_0}{k_2}\\Bigg|_{r=r_*}\\ ,\n\\end{equation}\nwhere we have used Eqs.(\\ref{eq:Bern}, \\ref{Ug}). \n\nPlugging Eq.(\\ref{eq:eta}) into Eq.(\\ref{eq:Bern}), we find that the Bernoulli equation is a polynomial equation of\nfourth order in $u_p$ with to-be-determined eigenvalue $E\/L$ (or equivalently the location of the \\Alfven surface $r_A$), given prescribed angular velocity $\\Omega$ and particle number flux per field line $\\eta(r,\\theta)$ \\citep[see e.g.,][]{Camen86a,Camen86b, Takahashi90, Fendt01, Pu12, Pu15}.\n\n\\subsection{Single Loading Surface}\n\nAs a first step towards a full MHD jet solution, we mathematically idealize the plasma loading zone as a single surface, and we choose the stagnation surface (Eq.~(\\ref{eq:stag})) as the plasma loading surface \\citep[see e.g.,][ for a detailed gap model]{Brod2015} in this paper. 
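The light-surface condition $k_0=0$ can also be located numerically. The following Python sketch (our own illustrative code, not the paper's solver) evaluates $k_0$ from the Kerr metric components and bisects for the outer light surface, using $a=0.95$ and $\Omega=0.5\,\Omega_{\rm H}$, the initial guess adopted later in the paper; the bracketing interval is an assumption chosen for this example:

```python
import math

def k0(r, theta, a, Omega, M=1.0):
    """k_0 = -(g_tt + 2 g_tphi Omega + g_phiphi Omega^2); k_0 = 0 defines
    the light surfaces, where corotation with the field lines would reach
    the speed of light (Boyer-Lindquist coordinates, c = G = 1)."""
    sigma = r**2 + a**2 * math.cos(theta)**2
    delta = r**2 - 2.0 * M * r + a**2
    s2 = math.sin(theta)**2
    g_tt = 2.0 * M * r / sigma - 1.0
    g_tphi = -2.0 * M * a * r * s2 / sigma
    g_phiphi = ((r**2 + a**2)**2 - a**2 * delta * s2) * s2 / sigma
    return -(g_tt + 2.0 * g_tphi * Omega + g_phiphi * Omega**2)

def outer_light_surface(theta, a, Omega, r_lo, r_hi, tol=1e-12):
    """Bisect for the outer root of k_0 = 0 at fixed theta; requires
    k_0(r_lo) > 0 and k_0(r_hi) < 0."""
    while r_hi - r_lo > tol:
        r_mid = 0.5 * (r_lo + r_hi)
        if k0(r_mid, theta, a, Omega) > 0.0:
            r_lo = r_mid
        else:
            r_hi = r_mid
    return 0.5 * (r_lo + r_hi)

# Example: a = 0.95 with Omega = 0.5 Omega_H, in the equatorial plane
a = 0.95
r_H = 1.0 + math.sqrt(1.0 - a**2)
Omega_H = a / (r_H**2 + a**2)
r_LS = outer_light_surface(math.pi / 2, a, 0.5 * Omega_H, r_lo=3.0, r_hi=50.0)
```

For these parameters the outer light surface in the equatorial plane lies somewhat inside the flat-space light cylinder $r=1/\Omega$, because of the gravitational redshift terms in $g_{tt}$ and $g_{t\phi}$.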
\nTo define the plasma loading for both inflow and outflow, \nwe introduce a pair of dimensionless magnetization parameters on the loading surface,\n\\begin{equation}\\label{sigM}\n\t\\sigma_*^{\\rm in;out} =\\frac{B_{p,*}}{4\\pi m |\\eta|_{\\rm in;out}}\\ ,\n\\end{equation}\nwhere $B_{p,*}$ is the poloidal field on the loading surface.\nIn this way, the particle number flux per magnetic flux $\\eta$ is completely determined by $\\sigma_*^{\\rm in;out} $, recalling that\n$\\eta$ is a conserved quantity along field lines outside the loading zone. Note that $\\eta_{\\rm in} < 0$ and\n$\\eta_{\\rm out} > 0$, therefore there is a jump in $\\eta$ at the loading surface, i.e., \n$D_\\Psi^\\parallel \\eta \\propto \\delta(r-r_*)$.\n\nUsing Eq.(\\ref{eq:Estag}), the Bernoulli equation (\\ref{eq:Bern}) can be rewritten as a fourth-order polynomial equation,\n\\begin{eqnarray} \\label{Bern2}\n\t\\sum_{i=0}^4A_i u_p^i&=&0\\ ,\n\\end{eqnarray}\nwhere the coefficients $A_i$ are given by\n\\begin{equation}\n\\begin{aligned}\n\tA_4&= \\frac{1}{\\sigma_*^2}\\frac{B_{p,*}^2}{B_p^2}\\ ,\\\\\n\tA_3&= -\\frac{2k_0}{\\sigma_*}\\frac{B_{p,*}}{B_p}\\ ,\\\\\n\tA_2&= k_0^2 + \\left(1 + \\frac{k_{0,*}}{k_{2,*}} k_4\\right) \\frac{1}{\\sigma_*^2}\\frac{B_{p,*}^2}{B_p^2}\\ ,\\\\\n\tA_1&= \\left(-k_0 + \\frac{k_{0,*}}{k_{2,*}} k_2\\right) \\frac{2}{\\sigma_*}\\frac{B_{p,*}}{B_p}\\ ,\\\\\n\tA_0&= k_0^2 - \\frac{k_{0,*}}{k_{2,*}} k_0k_2\\ .\\\\\n\\end{aligned}\n\\end{equation}\n\nAs explored in several previous studies \\citep[e.g.,][]{Takahashi90, Pu15},\nsolving the Bernoulli equation above is in fact an eigenvalue problem, where $(E\/L)_{\\rm in}$ is the to-be-determined eigenvalue ensuring that the inflow smoothly crosses the FM surface, while $(E\/L)_{\\rm out}$ is given by the match condition on the loading surface.
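Numerically, once the coefficients $A_i$ are evaluated at a grid point, the quartic can be solved by standard polynomial root finding and the physically relevant real, non-negative branches selected. The following Python sketch (ours, for illustration only; the coefficients below are a purely artificial factorable quartic, not values from an actual jet solution) shows the root-selection mechanics:

```python
import numpy as np

def real_nonnegative_roots(A, imag_tol=1e-10):
    """Real, non-negative roots of sum_{i=0}^{4} A_i u_p^i = 0, with the
    coefficients ordered A = [A_0, ..., A_4]. The sub- and super-sonic
    branches of the Bernoulli equation are selected among these roots;
    at the FM surface two of them merge into a multiple root."""
    roots = np.roots(A[::-1])           # np.roots wants highest order first
    real = roots[np.abs(roots.imag) < imag_tol].real
    return np.sort(real[real >= 0.0])

# Purely illustrative coefficients (NOT from a jet solution):
# (u_p - 1)(u_p - 2)(u_p^2 + 1) = u_p^4 - 3 u_p^3 + 3 u_p^2 - 3 u_p + 2
A = np.array([2.0, -3.0, 3.0, -3.0, 1.0])
u_branches = real_nonnegative_roots(A)  # two real branches, u_p = 1 and 2
```

In the actual eigenvalue problem, $(E/L)_{\rm in}$ is tuned until the two branches merge exactly at the FM surface rather than before or after it.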
Eq.(\\ref{eq:D_etaEL}) provides conditions connecting the inflow and the outflow for single-surface loading,\n\\begin{eqnarray}\\label{eq:deltaEL}\n\t\\delta(\\eta E)&=& m(\\delta\\eta)(-u_t)_*\\ ,\\nonumber\\\\\n\t\\delta(\\eta L)&=& m(\\delta\\eta)(u_\\phi)_*\\ ,\n\\end{eqnarray}\nwhich give the match condition\n\\begin{eqnarray}\\label{eq:EOLout}\n\t(E\/L)_{\\rm out}&=& \\frac{(\\eta E)_{\\rm out}}{(\\eta L)_{\\rm out}} =\\frac{(\\eta E)_{\\rm in} + m(\\delta\\eta)(-u_t)_*}{(\\eta L)_{\\rm in} + m(\\delta\\eta)(u_\\phi)_*}\\ ,\n\\end{eqnarray}\nwhere $\\delta\\eta\\equiv\\eta_{\\rm out}-\\eta_{\\rm in}$, and we have used the fact that $D_\\Psi^\\parallel\\eta$ is \na $\\delta$-function centered on the loading surface in deriving Eq.~(\\ref{eq:deltaEL}).\nIt is straightforward to see that Eq.~(\\ref{eq:deltaEL}) guarantees the same jump in the total energy\nflux as in its matter component; therefore the Poynting flux (and thus all the EM field components) is continuous across the loading surface. \n\nAs long as the Bernoulli equation is solved, i.e., both the eigenvalues $(E\/L)_{\\rm in, out}$ and the poloidal velocity field $u_p$ are obtained,\n$u^r$ and $u^\\theta$ are obtained via Eq.(\\ref{eq:eta}) and $u_p^2=u^Au_A$,\nwhile $u_t$ and $u_\\phi$ are obtained via the relation $m(u_t+\\Omega u_\\phi) = -(E-\\Omega L)$ and the normalization condition $u\\cdot u=-1$.\n\nBefore delving into the details of numerically solving the Bernoulli equation, we can now give an estimate of the eigenvalues.
Combining the definitions of $E$ and $L$ Eqs.(\\ref{eq:EandL}) with Eq.(\\ref{eq:Bern}), we find\n\\begin{equation}\n(u_t+\\Omega u_\\phi)_*=-\\sqrt{k_{0,*}},\n\\end{equation}\nplugging which back into Eqs.(\\ref{eq:EandL}), we obtain\n\\begin{eqnarray}\n\\label{EOL}\n\t(E\/L)_{\\rm in}&=&\\Omega + \\frac{m\\eta_{\\rm in}\\sqrt{k_{0,*}}}{(\\eta L)_{\\rm in}}\\ <\\Omega\\ ,\\nonumber\\\\\n\t(E\/L)_{\\rm out}&=&\\Omega + \\frac{m\\eta_{\\rm out}\\sqrt{k_{0,*}}}{(\\eta L)_{\\rm out}}\\ >\\Omega\\ ,\n\\end{eqnarray}\nwhich imply $E\/L = \\Omega [1 + O(\\sigma_*^{-1})]$ and we have used the fact $\\eta_{\\rm in}<0$ and \n$\\eta_{\\rm out}>0$. \n\n\\section{MHD Grad-Shafranov equation} \\label{sec:GS}\nWith the aid of the Maxwell's equation\n\\begin{equation}\n\t {F^{\\mu\\nu}}_{;\\nu} = 4\\pi j^\\mu \\ ,\n\\end{equation}\nthe trans-field component of the energy conservation equation (\\ref{eq:smu}) is written as \\footnote{Eq.~(\\ref{eqn_MHD}) \nonly holds for the specific choice of source function $S^\\mu = (nu^\\nu)_{;\\nu} mu^\\mu$.} \n\\begin{equation}\\label{eqn_MHD}\n\t\\frac{F^A_{\\ \\phi}}{F_{C\\phi}F^C_{\\ \\phi}} (mn u^\\nu u_{A;\\nu}-F_{A\\nu}j^\\nu)=0\\ ,\n\\end{equation}\nwhere we have used Eq.~(\\ref{eq:D_eta}) and the source function $S^\\mu$. 
\nThe repeated Latin letters $A$ and $C$ run over the poloidal coordinates $r$ and $\\theta$ only \n\\citep{Nitta91,Beskin93,Beskin97}.\nThis is known as the MHD GS equation, with the $1^{\\rm st}$ and $2^{\\rm nd}$ terms\nin the bracket being the fluid acceleration and the electromagnetic force, respectively.\n\nAfter some tedious derivation (see Appendix \\ref{sec:GS_der}), we write the full MHD GS equation in a compact form\n\\begin{eqnarray}\n\\label{GS_MHD}\n\t&\\ &\\mathcal{L}\\Psi=\\mathcal{S}_{\\rm EM}+\\mathcal{S}_{\\rm MT}\\ .\n\\end{eqnarray}\nHere $\\mathcal{L}$ is a differential operator defined by\n\\begin{eqnarray}\n\\label{GS_MHD_L}\n\t\\mathcal{L}\\Psi&& = \\left[\\Psi_{,rr} + \\frac{\\sin^2\\theta}{\\Delta} \\Psi_{,\\mu\\mu} \\right]\\ \\mathcal{A}(r,\\theta;\\Omega) \\nonumber\\\\\n\t&\\ & + \\left[ \\Psi_{,r} \\partial^\\Omega_r + \\frac{\\sin^2\\theta}{\\Delta} \\Psi_{,\\mu} \\partial^\\Omega_\\mu \\right] \\ \\mathcal{A}(r,\\theta;\\Omega) \\nonumber\\\\\n\t&\\ & + \\frac{1}{2} \\left[ (\\Psi_{,r})^2 + \\frac{\\sin^2\\theta}{\\Delta} (\\Psi_{,\\mu})^2 \\right] D_\\Psi^\\perp\\Omega\\ \\partial_\\Omega \\mathcal{A}(r,\\theta;\\Omega) \\nonumber\\\\\n\t&\\ & - \\left[ (\\Psi_{,r})^2 + \\frac{\\sin^2\\theta}{\\Delta} (\\Psi_{,\\mu})^2 \\right] \\frac{D^\\perp_\\Psi\\eta}{\\eta}\\ \\mathcal{M}^2(r,\\theta)\\ ,\\nonumber\\\\\n\\end{eqnarray}\nwhere $\\mu=\\cos\\theta$, $\\mathcal{A}(r,\\theta;\\Omega)=-k_0(r,\\theta;\\Omega)+\\mathcal{M}^2(r,\\theta)$,\nand we have defined $\\partial^\\Omega_A(A=r,\\mu)$ as the partial derivative with respect to coordinate $A$ with $\\Omega$ fixed, $\\partial_\\Omega$ as the derivative with respect to $\\Omega$, $D_\\Psi^\\perp$ as the derivative perpendicular to field lines\n\\begin{eqnarray}\n\tD_\\Psi^\\perp&\\equiv& \\frac{F^A_{\\ \\phi}\\partial_A}{F_{C\\phi}F^C_{\\ \\phi}}\\ ,\n\\end{eqnarray}\nwhich is equivalent to the ordinary derivative $d\/d\\Psi$ when acting on functions of $\\Psi$.\nThe two source 
terms are\n\\begin{equation}\n\\label{GS_MHD_S}\n\\begin{aligned}\n\t \\mathcal{S}_{\\rm EM} &=\\frac{\\Sigma}{\\Delta} I D^\\perp_\\Psi I\\ , \\\\\n \\mathcal{S}_{\\rm MT} &=-4\\pi\\Sigma\\sin^2\\theta mn(u^tD_\\Psi^\\perp u_t + u^\\phi D_\\Psi^\\perp u_\\phi)\\ ,\n\\end{aligned}\n\\end{equation}\nwhere $I=4\\pi(\\eta L-\\eta m u_\\phi)$ [see Eq.(\\ref{eq:EandL})]. \n\nIn the FFE limit, $\\mathcal{M}^2=0$, $\\mathcal{S}_{\\rm MT}=0$, and the GS equation reduces to \\citep{Pan17}\n\\begin{eqnarray}\n\\label{GS_FFE}\n\t&\\ &\\mathcal{L}\\Psi = \\mathcal{S}_{\\rm EM}\\ .\n\\end{eqnarray}\nThe FFE solutions $\\{\\Psi|_{\\rm FFE}, \\Omega|_{\\rm FFE}, (\\eta L)|_{\\rm FFE} \\}$ have been well explored\nboth analytically and numerically in many previous studies \\citep[see e.g.,][]{BZ77,Tanabe08, Contop13, Pan15, Pan15b}.\nSimilar to the FFE case, solving the MHD GS equation (\\ref{GS_MHD}) is also an eigenvalue problem, where $\\Omega$ and $4\\pi\\eta L$ are the to-be-determined eigenvalues ensuring that field lines smoothly cross the two \\Alfven surfaces \\citep{Contop13, Nathan14, Pan17,Mahlmann18}.\n\n\n\n\n\n\\section{A split monopole example} \\label{sec:eg}\n\nAs previewed in the Introduction, we aim to construct a framework for investigating the MHD jet structure of spinning BHs, \nin which the EM fields $(F_{\\mu\\nu})$ and the fluid motion $(n, u^\\mu)$ are self-consistently obtained \ngiven a proper plasma loading function $\\eta(r,\\theta)$ and proper boundary conditions.\n\nIn this section, we detail the procedure of consistently solving the two governing equations for an example of\nthe split monopole magnetic field configuration around a rapidly spinning central BH with a dimensionless spin $a=0.95$. For simplicity, we explore two different scenarios\nwith magnetization parameters $\\sigma_*^{\\rm out}=2\\sigma_*^{\\rm in}$ and $\\sigma_*^{\\rm out}=\\sigma_*^{\\rm in}$,\nrespectively.
Remember that the loading function $\\eta(r,\\theta)$ is completely determined by the \nmagnetization parameters via the definition (\\ref{sigM}).\n\nBoundary conditions used here are similar to those of force-free solutions. \nExplicitly, we choose $\\Psi|_{\\mu=0}=\\Psi_{\\rm max}$ on the equatorial plane, \n$\\Psi|_{\\mu=1}=0$ in the polar direction, $\\Psi_{,r}|_{r=r_{\\rm H}}=0$ and $\\Psi_{,r}|_{r=\\infty}=0$\nfor the inner and outer boundaries, respectively. Here $r_{\\rm H}$ is the radius of the event horizon.\n\n\\subsection{Numerical Techniques} \\label{sec:tech}\nWe define a new radial coordinate $R = r\/(r+1)$, confine our \ncomputation domain $R\\times\\mu$ in the region $[R(r_{\\rm H}), R_{\\rm max}]\\times[0,1]$,\nand implement a uniform $256\\times64$ grid. \nIn practice, we choose $R_{\\rm max}=0.995$, i.e., $r_{\\rm max}\\approx 200 M$.\n\nThe Bernoulli equation (\\ref{eq:Bern}) and the MHD GS equation (\\ref{GS_MHD}), governing the flow along the field lines\nand field line configuration, respectively, are coupled. So we solve them one by one in an iterative way:\n\\begin{eqnarray} \\label{Eq_set}\n\\left\\{\\begin{array}{c}\n \\mathcal{L}\\Psi^{(l)}=(\\mathcal{S}_{\\rm EM}+\\mathcal{S}_{\\rm MT})\\{(\\eta L)^{(l)}, n^{(l-1)}, u^{(l-1)}\\}\\ , \\\\\n \\mathcal{F}\\{u^{(l)}; (E\/L)^{(l)}, \\Omega^{(l)}, \\Psi^{(l)}\\} =0 \\ ,\n\\end{array}\n\\right.\n\\end{eqnarray}\nwith $l=1,2,3,\\cdots$ . In a given loop $l$, we solve the GS equation updating $\\Psi$ and $\\{\\Omega, (\\eta L)\\}$ (with $\\{n, u^\\mu\\}$ inherited from the previous loop $l-1$), ensuring field lines smoothly cross the two \\Alfven surfaces; \nin a similar way, we solve the Bernoulli equation updating $u^\\mu$ and $(E\/L)$ (with freshly updated $\\Omega$ and $\\Psi$ from solving the GS equation), ensuring a super-sonic inflow solution and an outflow solution satisfying the match condition \n(\\ref{eq:EOLout}). 
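The alternating structure of this iteration can be sketched in toy form. The following Python sketch is our own illustration, not the paper's solver: the GS and Bernoulli solves are replaced by two scalar fixed-point updates (a contraction chosen purely for demonstration), keeping only the Gauss-Seidel-like alternation between the field step and the flow step; the real solver instead updates $\Psi$ on a 2D grid together with the eigenvalues $\{\Omega, \eta L, E/L\}$:

```python
import math

def solve_coupled_by_iteration(update_field, update_flow, field0, flow0,
                               tol=1e-12, max_iter=10000):
    """Skeleton of the alternating iteration: in each loop l, a 'GS step'
    updates the field quantities with the flow frozen (from loop l-1),
    then a 'Bernoulli step' updates the flow with the fresh field.
    Here field and flow are single scalars standing in for
    {Psi, Omega, eta L} and {u^mu, E/L}."""
    field, flow = field0, flow0
    for _ in range(max_iter):
        new_field = update_field(flow)      # GS step, flow inherited
        new_flow = update_flow(new_field)   # Bernoulli step, fresh field
        if abs(new_field - field) < tol and abs(new_flow - flow) < tol:
            return new_field, new_flow
        field, flow = new_field, new_flow
    raise RuntimeError("iteration did not converge")

# Toy stand-in pair (a contraction, so the alternation converges):
field, flow = solve_coupled_by_iteration(
    update_field=lambda v: math.cos(v),   # "field = F(flow)"
    update_flow=lambda f: 0.5 * f,        # "flow = G(field)"
    field0=1.0, flow0=0.0)
```

The converged pair satisfies both toy relations simultaneously, mirroring how the converged $\{\Psi, u^\mu\}$ must satisfy the GS and Bernoulli equations at once.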
Combining the solutions to both equations with the definitions of $\\{\\eta, E, L\\}$, we finally obtain all the desired quantities $\\{F_{\\mu\\nu}, n, u^\\mu\\}$ as functions of the coordinates $r$ and $\\theta$. \n\nWe initialize the iteration with the initial guess\n\\begin{eqnarray} \\label{init}\n\\left\\{\\begin{array}{cccc}\n \\Psi^{(0)}(r,\\theta)&=&\\Psi_{\\rm max}(1-\\cos\\theta)\\ , \\\\\n \\Omega^{(0)}(\\Psi)&=&0.5\\Omega_{\\rm H}\\ , \\\\\n (\\eta L)^{(0)}(\\Psi)&=&\\Omega_{\\rm H} \\Psi[2-(\\Psi\/\\Psi_{\\rm max})]\/(8\\pi)\\ ,\\\\\n n^{(0)}(r,\\theta) &=& u^{(0)}(r,\\theta) = 0\\ ,\n\\end{array}\n\\right.\n\\end{eqnarray}\nwhere $\\Omega_{\\rm H}\\equiv a\/(r_{\\rm H}^2+a^2)$ is the BH angular velocity.\n\nThe numerical techniques for tackling the two eigenvalue problems are detailed as follows:\n\n\\begin{itemize}\n\\item[\\it Step 1] \nThe MHD GS equation is a second-order differential\nequation which degenerates to first order on the\n\\Alfven surfaces where $\\mathcal{A}(r,\\theta)=0$. \nNumerical techniques for dealing with this problem have been well developed\nin previous force-free studies \\citep{Contop13, Nathan14, Huang16,Huang18, Pan17,Mahlmann18}, and we briefly recap them here.\n\n\nIn each loop $l$, we solve the GS equation (\\ref{GS_MHD}) with\nthe approximate solution obtained from the previous loop $\\left\\{\\Omega^{(l-1)},(\\eta L)^{(l-1)},\\Psi^{(l-1)}\\right\\}$\nas the initial guess. \nWe evolve the flux function $\\Psi^{(l)}$ using the overrelaxation technique with Chebyshev acceleration \\citep{Press86}, and $\\Psi^{(l)}(r,\\theta)$ is updated on grid points except those in the vicinity of the two \\Alfven surfaces. The flux function $\\Psi^{(l)}(r,\\theta)$ on the \\Alfven surfaces is obtained via interpolation from neighboring grid points and the directional derivatives on the \\Alfven surfaces \\citep{Pan17}.
\nUsually we obtain two different values of the flux function, $\\Psi(r_{\\rm A}^-)$ and $\\Psi(r_{\\rm A}^+)$, \non each \\Alfven surface via interpolation from grid points inside and outside, respectively. \nTo decrease this discontinuity, we adjust $\\Omega^{(l)}(\\Psi)$ at the outer Alfv\\'en (OA) surface:\n\\begin{eqnarray}\n\t\\Omega^{(l)}_{\\rm new}(\\Psi_{\\rm new})&=& \\Omega^{(l)}_{\\rm old}(\\Psi_{\\rm old}) \\nonumber\\\\\n\t&\\ & + 0.05 [\\Psi(r_{\\rm OA}^+)-\\Psi(r_{\\rm OA}^-)],\n\\end{eqnarray}\nwith $\\Psi_{\\rm new}=0.5[\\Psi(r_{\\rm OA}^+)+\\Psi(r_{\\rm OA}^-)]$, \nwhere the subscript old\/new represents quantities before\/after the above adjustment;\nand adjust both $\\Omega^{(l)}(\\Psi)$ and $(\\eta L)^{(l)}(\\Psi)$ at the inner Alfv\\'en (IA) surface:\n\\begin{eqnarray}\n\t\\Omega^{(l)}_{\\rm new}(\\Psi_{\\rm new})&=& \\Omega^{(l)}_{\\rm old}(\\Psi_{\\rm old}) \\nonumber\\\\\n\t&\\ & + 0.05[\\Psi(r_{\\rm IA}^+)-\\Psi(r_{\\rm IA}^-)],\\nonumber\\\\\n\t(\\eta L)^{(l)}_{\\rm new}(\\Psi_{\\rm new})&=& (\\eta L)^{(l)}_{\\rm old}(\\Psi_{\\rm old}) \\nonumber\\\\\n\t&\\ & - 0.05[\\Psi(r_{\\rm IA}^+)-\\Psi(r_{\\rm IA}^-)],\n\\end{eqnarray}\nwith $\\Psi_{\\rm new}= 0.5[\\Psi(r_{\\rm IA}^+)+\\Psi(r_{\\rm IA}^-)]$.\n\nAfter sufficient evolution, we obtain a converged solution $\\{\\Omega^{(l)}, (\\eta L)^{(l)}, \\Psi^{(l)}\\}$ \n which ensures that field lines smoothly cross the two \\Alfven surfaces.\n\n\\item[\\it Step 2] The Bernoulli equation in the form of Eq.(\\ref{Bern2}) is a fourth-order polynomial equation in $u_p$\n\\citep{Camen86a,Camen86b,Camen87,Takahashi90,Fendt01,Fendt04,Levinson06,Pu15},\nwhere the FM point is a standard `X'-type singularity, while the Alfv\\'en point turns out to be a higher-order\nsingularity \\citep{Weber67}.\nMathematically, an FM point is the location of a multiple root of the Bernoulli equation. \nThe existence of the FM point is very sensitive to the value of $(E\/L)_{\\rm in}^{(l)}$.
\nFor a slightly small value, there exist only sub-sonic solutions in the region $r\\Omega$ for outflow. Specifically, the fluid angular velocity on the\nevent horizon $\\Omega_{\\rm MT}(r=r_{\\rm H})$ slightly exceeds the BH angular velocity $\\Omega_{\\rm H}$,\nwhich guarantees that the fluid energy is positive on the horizon (see Fig.~\\ref{fig:ut}). \n\nIn Fig.~\\ref{fig:ut}, we show the specific particle energy $-u_t$ for the two cases.\nBoth are positive everywhere, while the outflow of \\emph{Case 1} undergoes more efficient\nacceleration.\n\n\\subsection{Energy Extraction Rates}\n\n\\begin{figure}[h!]\n\\centering\n\\includegraphics[scale=1]{f6.eps}\n\\caption{Energy extraction rates in relation to $\\sigma_*^{\\rm in}$, with $\\sigma_*^{\\rm out}=2\\sigma_*^{\\rm in}$ assumed.\nThe energy rates measured at the event horizon,\n$\\{ \\dot{E}_{\\rm tot}^{\\rm H}, \\dot{E}_{\\rm Poynting}^{\\rm H}, \\dot{E}_{\\rm MT}^{\\rm H}\\}$, are presented as filled squares, small filled squares, and filled circles, respectively. The solid black line in the top panel, the dotted black line in the top panel, and the solid black line in the bottom panel are the corresponding fitting curves.\nSimilarly, the energy rates measured at infinity,\n$\\{ \\dot{E}_{\\rm tot}^{\\infty}, \\dot{E}_{\\rm Poynting}^{\\infty}, \\dot{E}_{\\rm MT}^{\\infty}\\}$, are presented as open symbols, and the solid grey lines are the corresponding fitting curves. 
\\label{fig:Edot}}\n\\end{figure}\n\nIn this subsection, we investigate the energy extraction rate from the central BH via the MHD jet,\nwhich is defined as \n\\begin{equation} \n\\begin{aligned}\n \\dot{E}_{\\rm tot}(r)\n &= -2\\pi \\int_0^{\\pi} \\sqrt{-g} T^r_{\\ t}(r) {\\rm d} \\theta\\ , \\\\ \n &= 4\\pi\\int_0^{\\Psi_{\\rm max}} (\\eta E)(r) {\\rm d}\\Psi \\ , \\\\ \n &= 4\\pi\\int_0^{\\Psi_{\\rm max}} [(E\/L)\\times(\\eta L)](r) {\\rm d}\\Psi \\ ,\n\\end{aligned}\n\\end{equation} \nwhere we have used Eqs.(\\ref{eq:eta}-\\ref{eq:EandL}) in the second line. In the third line, $E\/L$ and $\\eta L$ are the eigenvalues of the Bernoulli equation (\\ref{eq:Bern}) and of the GS equation (\\ref{GS_MHD}), respectively. In a similar way, we can define its matter\/electromagnetic components as\n\\begin{equation} \\label{eq:Edot}\n\\begin{aligned}\n\t\\dot{E}_{\\rm MT}(r)&= 4\\pi\\int_0^{\\Psi_{\\rm max}}(- \\eta m u_t)(r) {\\rm d}\\Psi\\ , \\\\\n\t\\dot{E}_{\\rm Poynting}(r)\n\t&= 4\\pi \\int_0^{\\Psi_{\\rm max}}(\\Omega I\/4\\pi)(r) {\\rm d}\\Psi \\ , \\\\\n\t&= \\dot{E}_{\\rm tot}(r) - \\dot{E}_{\\rm MT}(r)\\ . \n\\end{aligned}\n\\end{equation}\n\nWe measure these energy extraction rates at $r=r_{\\rm H}$ and at $r=\\infty$, and quantify their dependence\non the magnetization parameter $\\sigma_*$. In practice, we find that these energy extraction rates are not sensitive to the value of $\\sigma_*^{\\rm out}$,\nexcept for the matter component of the energy rate at infinity, $\\dot{E}_{\\rm MT}^\\infty$. 
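On a discrete flux grid, integrals of the common form $4\pi\int_0^{\Psi_{\rm max}} f(\Psi)\,{\rm d}\Psi$ above can be evaluated by simple quadrature. The sketch below uses the trapezoidal rule with a hypothetical, made-up integrand profile (not taken from the paper) purely to illustrate the evaluation.

```python
import math

def edot(integrand, psi_max, n=2048):
    """Trapezoidal estimate of 4*pi * \int_0^{Psi_max} f(Psi) dPsi,
    the form shared by all the energy-rate integrals above."""
    h = psi_max / n
    s = 0.5 * (integrand(0.0) + integrand(psi_max))
    s += sum(integrand(k * h) for k in range(1, n))
    return 4.0 * math.pi * h * s

# hypothetical (eta E)(Psi) profile, peaked at mid-flux; NOT from the paper
eta_E = lambda psi: psi * (1.0 - psi)
print(edot(eta_E, 1.0))   # 4*pi/6 ~ 2.0944 for this toy profile
```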
\nWithout loss of generality, we only show the rates in relation to $\\sigma_*^{\\rm in}$\nfor the $\\sigma_*^{\\rm out}=2\\sigma_*^{\\rm in}$ scenario in Fig.~\\ref{fig:Edot}, where\nall the rates are displayed in units of the energy extraction rate in the force-free limit, \n$\\dot E_{\\rm FFE}\\approx 0.4 (\\Psi_{\\rm max}^2\/4\\pi)$.\n\nAs we see in Fig.~\\ref{fig:Om}, the rotation of magnetic field lines is dragged down by\nthe loaded plasma, i.e., $\\Omega|_{\\rm MHD} < \\Omega|_{\\rm FFE}$, while the fluid, which does not corotate with\nthe field lines ($\\Omega_{\\rm MT}|_{\\rm inflow} > \\Omega > \\Omega_{\\rm MT}|_{\\rm outflow}$), tends to bend the field lines and induce a stronger $\\phi$-component of the magnetic field, i.e., $I|_{\\rm MHD} > I|_{\\rm FFE}$. The net result is that the Poynting energy extraction rate on the event horizon, $\\dot E_{\\rm Poynting}^{\\rm H}$, has little dependence on the magnetization. Going outward along the field lines, part of the Poynting flux is converted into fluid kinetic energy. For the case with magnetization parameter $\\sigma_*^{\\rm in} = 30$, the matter component makes up about $13\\%$ of the total energy flux at infinity.\n\n\\subsection{Penrose Process}\\label{subsec:Penrose}\n\n\\begin{figure}[h!]\n\\centering\n\\includegraphics[scale=0.68]{f7.eps}\n\\caption{The positron\/electron components of the energy extraction rates in relation to $\\sigma_*^{\\rm in}$. \nThe energy rates $\\{\\dot{E}^{\\rm H}_{e^+}, \\dot{E}^{\\rm H}_{e^-}\\}$ \nare presented as open and filled circles, respectively. \nThe solid and dashed lines are the corresponding fitting curves.\nThe shaded region denotes where the Penrose process is \noperating (for the electron component). 
\\label{fig:EdMT}}\n\\end{figure}\n\nAn implicit assumption in our MHD jet model is the two-fluid description, since \nthe electric current density $j^\\mu$ is \\emph{not} proportional to the fluid velocity $u^\\mu$.\nTherefore we can decompose the charged fluid into two oppositely charged components, \npositrons ($e^+$) and electrons ($e^-$). \\footnote{Though there is a degree of freedom in doing \nthis decomposition, e.g., we can also decompose the fluid into electrons and ions, it does not change\nour conclusion qualitatively.} We denote the number densities and the velocity fields as $n_\\pm$ \nand $u^\\mu_\\pm$, respectively, which are related to $j^\\mu$ and $nu^\\mu$ via the relations\n\\begin{eqnarray}\n j^\\mu&=& e(n_+u^\\mu_+ - n_-u^\\mu_-)\\ ,\\nonumber\\\\\n mnu^\\mu&=& m(n_+u^\\mu_+ + n_-u^\\mu_-)\\ .\n\\end{eqnarray}\nConsequently, we obtain \n\\begin{equation} \n m(n u^\\mu)_\\pm = \\frac{1}{2}[\\pm j^\\mu(m\/e) + nmu^\\mu] \\ ,\n\\end{equation} \nand we can decompose the matter energy flux into two components, $\\dot{E}_{e^\\pm}$.\nHere we are only interested in the energy extraction rates on the event horizon:\n\\begin{eqnarray}\n \\dot{E}^{\\rm H}_{e^+}&=& 4\\pi \\int_0^{\\Psi_{\\rm max}} \\frac{(-\\eta m n_+u_{t+})(r_{\\rm H})}{n(r_{\\rm H})} {\\rm d}\\Psi\\ ,\\nonumber\\\\\n \\dot{E}^{\\rm H}_{e^-}&=& 4\\pi \\int_0^{\\Psi_{\\rm max}} \\frac{(-\\eta m n_-u_{t-})(r_{\\rm H})}{n(r_{\\rm H})} {\\rm d}\\Psi \\ .\n\\end{eqnarray}\nAs an example, we choose the horizon-enclosed magnetic flux $\\Psi_{\\rm max}=1000(m\/e)$, and show $\\dot{E}^{\\rm H}_{e^+}\/\\dot{E}_{\\rm FFE}$ \nand $\\dot{E}^{\\rm H}_{e^-}\/\\dot{E}_{\\rm FFE}$ in relation to $\\sigma_*^{\\rm in}$ in Fig.~\\ref{fig:EdMT}. \nThe energy extraction rate from positrons is always negative, while \nthat from electrons becomes positive \nwhen the plasma loading is low enough. 
In this regime, shaded in Fig.~\\ref{fig:EdMT}, the magnetic Penrose process works, though only for one of the two charged components. \\footnote{The results of this subsection should not be interpreted quantitatively, because the two-fluid decomposition done here is not accurate, e.g., there is no guarantee that the velocity of each component $u^\\mu_\\pm$ is timelike and normalized.\nWe will leave a more accurate two-fluid description of the MHD jet structure \\citep{Koide09, Liu18} to future work.} This finding is in good agreement with recent particle-in-cell simulations \\citep{Parfrey18}.\n\n\n\\section{Summary and Discussion}\\label{sec:summary}\n\\subsection{Summary}\nTo describe the MHD structure of BH jets, we need a minimal set of quantities as functions of spacetime: \nthe Maxwell tensor $F_{\\mu\\nu}$, the fluid rest mass density $\\rho$ (or equivalently the particle number density $n$), and \nthe fluid four-velocity $u^\\mu$. To determine all these quantities self-consistently, we constructed a full MHD framework, in which the EM fields and the fluid motion are governed by the MHD GS equation (\\ref{GS_MHD}) and the Bernoulli equation (\\ref{eq:Bern}), respectively. From these two governing equations, we can completely determine $\\{F_{\\mu\\nu}, \\rho ,u^\\mu\\}$ given proper boundary conditions and a proper plasma loading function $\\eta(r,\\theta)$ (see Eq.(\\ref{eq:eta})). As an example, we consider a split monopole field configuration and idealized plasma loading on the stagnation surface.\n\nAssuming a steady and axisymmetric jet structure, and perfectly conductive plasma within the jet, the EM fields are \ncompletely determined by three functions: the magnetic flux $\\Psi(r, \\theta)$, the angular velocity of magnetic field \nlines $\\Omega(\\Psi)$, and the poloidal electric current $I(r,\\theta)$ (see Eq.(\\ref{eq:Maxwell})). 
\nGiven the fluid energy density $\\rho$ and velocity $u^\\mu$, the MHD GS equation (\\ref{GS_MHD}) turns out to be a second-order differential equation with respect to $\\Psi(r,\\theta)$, which degrades to first order on the two \\Alfven surfaces. Solving the GS equation is an eigenvalue problem, with the eigenvalues $\\Omega(\\Psi)$\nand $I(r,\\theta)$ (or more precisely, the conserved quantity $4\\pi\\eta L(\\Psi)$ defined in Eq.(\\ref{eq:EandL})) determined by requiring that field lines smoothly cross the \\Alfven surfaces.\n\nGiven the EM fields $F_{\\mu\\nu}$, the Bernoulli equation turns out to be a fourth-order polynomial equation in the poloidal fluid velocity $u_p$. Solving the Bernoulli equation is also an eigenvalue problem, with the eigenvalue $(E\/L)_{\\rm in}$ determined by requiring that the inflow smoothly crosses the FM surface, and $(E\/L)_{\\rm out}$ determined by the matching condition (\\ref{eq:EOLout}) on the loading surface. With both $E\/L$ and $u_p$ obtained, it is straightforward to obtain $n$ and $u^\\mu$ via Eqs.(\\ref{eq:eta},\\ref{eq:EandL}) and the normalization \ncondition $u\\cdot u=-1$. \n\nThe two governing equations are coupled; therefore we solved them numerically in an iterative way (see Sec.~\\ref{sec:tech}). \nAs a result, we find that the rotation of magnetic field lines is dragged down by the loaded plasma, i.e., $\\Omega|_{\\rm MHD} < \\Omega|_{\\rm FFE}$; for the fluid angular velocity, we find $\\Omega_{\\rm MT}|_{\\rm outflow}<\\Omega<\\Omega_{\\rm MT}|_{\\rm inflow}$; \nthe non-corotating fluid tends to bend the field lines and induce a stronger $\\phi$-component of the magnetic field, and therefore \na stronger poloidal electric current,\ni.e., $I|_{\\rm MHD} > I|_{\\rm FFE}$. 
The net result is that the Poynting energy extraction rate on the horizon is insensitive to the\nmagnetization, i.e., $\\dot E_{\\rm Poynting}^{\\rm H} |_{\\rm MHD} \\approx \\dot E_{\\rm Poynting}^{\\rm H}|_{\\rm FFE}$ (see Fig.~\\ref{fig:Edot}).\nGoing outward along the field lines, part of the Poynting flux is converted to fluid kinetic energy. For the case we explored\nwith $\\sigma_*^{\\rm in} = 30$, the matter component makes up $\\sim 13\\%$ of the total energy flux at infinity.\n\nFinally, we examined the MHD Penrose process for the cases we numerically solved. We found that the specific fluid energy $-m u_t$ is always positive on the event horizon, i.e., the MHD Penrose process does not operate, and therefore the BZ mechanism fully defines the jet energetics. However, if we decompose the charged fluid into two oppositely charged components ($e^\\pm$), we found that the magnetic Penrose process does work for one of the two components when the plasma loading is low enough (see Fig.~\\ref{fig:EdMT}).
Therefore, we lose the freedom to adjust $(E\/L)_{\\rm out}$ until a supersonic outflow solution is found,\nas we did for the inflow solution. Consequently, all the outflow solutions obtained in this paper are subsonic (see Fig.~\\ref{fig:up}). \n\nIn future work, we aim to investigate a full MHD jet model with a more realistic extended loading zone, \nwhere the plasma injection is described by a continuous function $\\eta(r,\\theta)$. Then all the unphysical\ndiscontinuities and divergences described above would be avoided. With extended plasma loading, the smoothness of the EM fields would be naturally preserved, and the continuity requirement would no longer be a constraint. As a result,\nwe could adjust $(E\/L)_{\\rm out}$ to find a supersonic outflow solution, which is more consistent with recent observations \\citep{Hada16,Mertens16}.\nIn addition to the plasma loading, the BH surroundings also play an important role in shaping the jet structure \\citep[e.g.][]{Tchek10,Beskin17}. The role of a more realistic BH environment, including accretion flows and hot plasma with non-zero pressure, will also be considered in future work. \n\n\n\n\n\\section*{Acknowledgements}\nWe thank the referee for his\/her careful reading of this manuscript and insightful suggestions.\nL.H. acknowledges support from the National\nNatural Science Foundation of China (grants 11590784 and 11773054),\nand the Key Research Program of Frontier Sciences, CAS (grant No. QYZDJ-SSW-SLH057).\nZ.P. thanks Hung-Yi Pu for his invaluable help throughout this research. \nZ.P. is supported by the Perimeter Institute for Theoretical Physics. Research at Perimeter Institute\nis supported by the Government of Canada through the Department of Innovation, \nScience and Economic Development Canada and by the Province of Ontario through the Ministry of Research, Innovation and Science.\nC.Y. 
has been supported by the National Natural Science Foundation of China (grants 11521303, 11733010 and 11873103).\nThis work made extensive use of the NASA Astrophysics Data System and\nof the {\\tt astro-ph} preprint archive at {\\tt arXiv.org}.\n\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nGravitational wave (GW) sources~\\cite{DI:2016,secondBBH:2016,thirddetection,fourth:2017,GW170608,o1o2catalog} are now routinely detected by the advanced LIGO~\\cite{DII:2016,LSC:2015} and Virgo~\\cite{Virgo:2015} detectors. The last two observing runs of these GW detectors indicate that, on average, one GW source has been detected for every fifteen days of analyzed data. It is expected that this number will be superseded in the upcoming third observing run, since the advanced LIGO and Virgo detectors have been undergoing commissioning since August 2017. \\new{In their enhanced sensitivity configuration, they will be able to probe a larger volume of space, thereby boosting the expected detection rate} for binary black hole (BBH) mergers and binary neutron star (BNS), and may yield the first observations of neutron star-black hole (NSBH) mergers~\\cite{o1o2catalog}.\n\nGiven the expected scale of GW discovery in upcoming observing runs, it is in order to explore the use of efficient signal-processing algorithms for low-latency GW detection and parameter estimation. This work is motivated by the need to probe a deeper parameter space that is available to GW detectors, in real-time, and using minimal computational resources to maximize the number of studies that can be conducted with GW data. This combination of constraints is a common theme for large-scale astronomical facilities, which will be producing large datasets in low-latency within the next decade, e.g., the Large Synoptic Survey Telescope~\\cite{lsstbook}. 
Scenarios in which both LSST, among other electromagnetic observatories, and advanced LIGO and Virgo work in unison, analyzing disparate datasets in real-time to realize the science goals of Multi-Messenger Astrophysics make this work timely and relevant~\\cite{whitepaper:SCIMMA,eliuMMA:2019}. \n\nAmong a number of recent developments in signal-processing, deep learning exhibits great promise to increase the speed and depth of real-time GW searches. The first deep learning algorithms to do classification and regression of GWs emitted by non-spinning BBHs on quasi-circular orbits were presented in~\\cite{geodf:2017a} in the context of simulated LIGO noise. The extension of that study to realistic detection scenarios using real advanced LIGO noise was introduced in~\\cite{geodf:2017b}. Even though these algorithms were trained to do real-time classification and regression of GWs in realistic detection scenarios for a 2-D signal manifold (non-spinning BBHs on quasi-circular orbits), the studies presented in~\\cite{geodf:2017a,geodf:2017b,geodf:2017c,Rebei:2018R} have demonstrated that deep learning algorithms generalize to new types of sources, enabling the identification of moderately eccentric BBH mergers, spin precessing BBH mergers, and moderately eccentric BBH signals that include higher-order modes, respectively. These studies also indicate that while the detection of these new types of GW sources is possible, it is necessary to use higher-dimensional signal manifolds to train these algorithms to improve parameter estimation results, and to go beyond point-parameter estimation analysis. 
This work has sparked the interest of the GW community, leading to a variety of studies including the classification of simulated BBH waveforms in Gaussian noise, GW source modeling and GW denoising of BBH mergers~\\cite{geodf:2017c,hshen:2017,positionML:2018,wei:2019W,Rebei:2018R,AlvinC:2018,2018GN,Fan:2018,Gonza:2018,2018GN,Fuji:2018,LiYu:2017,Nakano:2018}.\n\nWhile detection and parameter estimation are the key goals for the development of deep learning for GW astrophysics, in this article we focus on the application of deep learning for parameter estimation. At present, GW parameter estimation is done using Bayesian inference~\\cite{bambi:2012MNRAS,bambiann:2015PhRvD,Singer_Price_2016}, which is a well tested and extensively used method, though computationally-intensive. On the other hand, given the scalability of deep learning models in training mode (i.e., the ability to combine distributed training and large datasets to enhance the performance of deep learning algorithms in realistic data analysis scenarios), and their computational efficiency in inference mode, it is natural to explore their applicability for GW parameter estimation, the theme of this article. \n\n\\noindent \\textbf{Previous Work} The first exploration of deep learning for the detection and point-parameter estimation of a 2-D signal manifold was presented in~\\cite{geodf:2017a,geodf:2017b}. For waveform signals with matched-filtering signal-to-noise ratio (SNR) \\(\\textrm{SNR}\\mathrel{\\hbox{\\rlap{\\hbox{\\lower4pt\\hbox{$\\sim$}}}\\hbox{$>$}}} 10\\), these neural network models measure the masses of quasi-circular BBH mergers with a mean percentage absolute error \\(\\mathrel{\\hbox{\\rlap{\\hbox{\\lower4pt\\hbox{$\\sim$}}}\\hbox{$<$}}} 15\\%\\), and with errors \\(\\mathrel{\\hbox{\\rlap{\\hbox{\\lower4pt\\hbox{$\\sim$}}}\\hbox{$<$}}} 35\\%\\) for moderately eccentric BBH mergers. 
These results provided a glimpse of the robustness and scalability of deep neural network models, and the motivation to take these prototypical applications into a production run toolkit for GW parameter estimation. \n\n\\noindent \\textbf{Highlights of This Work} \n\n\\begin{cititemize2}\n\\item We have designed new architectures and training schemes to demonstrate that deep learning provides the means to reconstruct the parameters of BBH mergers in more realistic astrophysical settings, i.e., BHs whose spins are aligned or anti-aligned, and which evolve on quasi-circular orbits. \\new{This 4-D signal manifold marks the first time deep learning models \\textit{at scale} are used for GW data analysis, i.e., models trained using datasets with tens of millions of waveforms, and 1,024 nodes (64 processor per node) to significantly reduce the training stage.} Once fully trained, these deep learning models can reconstruct in real-time the parameters of the BBH catalog presented by the LIGO and Virgo Scientific Collaboration in~\\cite{o1o2catalog}. \n\\item The neural network models we introduce in this article have two different architectures. The first one is tailored for the measurement of the masses of the binary components, whereas the second is used to quantify the final spin and the quasi-normal modes (QNMs) of the BH remnant. Once both neural networks are fully trained, we use them in parallel for inferences studies, finding that we can reconstruct the parameters of BBH mergers within 2 milliseconds using a single Tesla V100 GPU. \n\\item \\new{We introduce a novel scheme to train Bayesian Neural Network (BNN) models at scale using 1,024 nodes on a High Performance Computing platform while keeping optimal performance for inference. We then adapted this framework to introduce for the first time the use of BNNs for GW parameter estimation. 
With this approach we can estimate the astrophysical parameters of the existing catalog of detected BBH mergers~\\cite{o1o2catalog}, and their posterior distributions, reporting inference times in the order of milliseconds.}\n\n\\item \\new{We use variational inference to approximate the posterior distribution of model parameters in the probabilistic layers of our neural networks. In the inference stage, we sample the network parameters to evaluate the posterior distribution of the physical parameters. Details of the model and training are in Sections~\\ref{sec:prob_model} and~\\ref{bnn_scale}.}\n\\end{cititemize2}\n\n\nThis article is structured as follows. Section~\\ref{method} introduces the model architectures used in these analyses, it describes the construction and curation of the datasets used to train, validate and test our neural network models. It also includes a revised curriculum learning for neural network training. We quantify the accuracy of these neural network models in realistic detection scenarios using real advanced LIGO noise in Section~\\ref{experiments}. We put at work our deep learning algorithms in Section~\\ref{discussion} to estimate the astrophysical parameters of the BBH mergers reported in~\\cite{o1o2catalog}. We summarize our findings and future directions of work in Section~\\ref{conclusion}.\n\n\\begin{figure*}[t!]\n\t\\centerline{\n\t\t\\raisebox{1cm}{\n\t\t\t\\includegraphics[width=90mm]{spin_omegas_diagram.png}}\n\t\t\\hspace{10mm}\n\t\t\\includegraphics[width=65mm]{masses_diagram.png}\n\t}\n\t\\caption{The \\new{left}\n\tarchitecture is used to estimate the final spin and quasi-normal modes of the black hole remnant. The \\new{right}\n\tarchitecture is used to estimate the masses of the binary black hole components. 
}\n\t\\label{model_diagram_spin_omegas}\n\\end{figure*}\n\n\\section{Methods}\n\\label{method}\n\nIn this section, we introduce the neural network models used for parameter estimation, and describe a novel curriculum learning scheme to accurately measure the masses of the binary components, and the final spin and QNMs of the BH remnant. We have used \\texttt{TensorFlow}~\\cite{abadi2016tensorflow,abadi2015tensorflow} to design, train, validate and test the neural network models presented in this section. \n\nThe rationale to use two neural network models stems from the fact that the masses, spins and QNMs span rather different scales. Therefore, to improve the accuracy with which deep learning can measure these parameters we have designed one neural network that is tailored to measure the masses of the binary components, and one to measure the final spin and QNMs of the remnant. The astute reader may have noticed that the final spin of the BH remnant and its QNMs have a similar range of values when the QNMs are cast in dimensionless units, and this is the approach we have followed. In practice, we train the second neural network model using the fact that the QNMs are determined by the final spin \\(a_f\\) using the relation~\\cite{Berti:2006b}\n\n\\begin{equation}\n\\omega_{220}\\left(a_f\\right)= \\omega_R + i\\, \\omega_{I}\\,,\n\\label{qnms}\n\\end{equation}\n\n\\noindent where \\((\\omega_R,\\,\\omega_{I})\\) correspond to the frequency and damping time of the ringdown oscillations for the fundamental \\(\\ell=m=2\\) bar mode, and the first overtone \\(n=0\\). We have computed the QNMs following~\\cite{Berti:2006b}. One can readily translate \\(\\omega_R\\) into the ringdown frequency (in units of Hertz) and \\(\\omega_I\\) into the corresponding (inverse) damping time (in units of seconds) by computing \\(M_f\\,\\omega_{220}\\). \\(M_f\\) represents the final mass of the remnant, and can be determined using Eq. 
(1) in~\\cite{HealyLous:2017PRDH}.\n\nAs we describe below, we have found that to accurately reconstruct the masses of the binary components, it is necessary to use a more complex and deeper neural network architecture. It is worth mentioning that once these models are fully trained, a single GPU is sufficient to perform regression analyses in milliseconds using both neural network models. \n\n\n\\subsection{Neural network model to measure the properties of the black hole remnant \\label{subsec:properties_model}}\n\nThe neural network model consists of two main parts: a shared root component for all physical parameters, and three leaf components for individual parameters ($a_f$, $\\omega_R$, and $\\omega_I$), as illustrated in the left panel of Figure~\\ref{model_diagram_spin_omegas}, and Table~\\ref{spin_omega_model_config}. The model architecture looks like a rooted tree. The root is composed of seven convolutional layers, and its output is shared by the leaves. Each leaf component has the same network architecture with three fully connected layers. This approach is inspired by the hierarchical self decomposing of convolutional neural networks described in~\\cite{DBLP:journals\/corr\/abs-1811-04406,hu2018squeeze}. The key idea behind this approach is that the neural network structures are composed of a general feature extractor for the first seven layers, which is then followed up by sub-networks that take values from the output of the general feature extractor. \n\nThe rationale to have splits after the universal structure is to use sub-structures that focus on different sub-groups of the data. As a simile: even though the human body has multiple limb locations (``leaves\"), human motion is controlled by the overall motion of the body (``the root\"). In practice this means that the tree structure of our models leverages the hierarchical structure of the data. 
It first extracts the universal features through the root, and then passes the information to the different sub-networks (``leaves\") to learn specialized features for different physical parameters. Notice that the root will also prevent overfitting in the ``leaves\", since each leaf is optimized through the root. \n\nAnother change to the conventional architecture is that we remove the nonlinear activation in the second to last layer in the leaf component, i.e., it is a linear layer with identity activation function (see Table~\\ref{spin_omega_model_config}). This allows more neurons to be activated and passed to the final layer. As discussed in~\\cite{DBLP:journals\/corr\/abs-1709-07634}, removing the nonlinear activation in some intermediate layers smooths the gradients and maintains the correlation of the gradients in the neural network weights, which, in turn, allows more information to be passed through the network as the depth increases.\n\n\\begin{table}[t]\n\t\\setlength{\\tabcolsep}{1.5pt}\n\t\\caption{Architecture of the neural network model used to measure the final spin and QNMs of the black hole remnant. For the root convolutional layers, the setup indicates: (kernel size, \\# of output channels, stride, dilation rate, max pooling kernel, max pooling stride). All convolutional layers have ReLU activation function and the padding is set to ``VALID'' mode. There is no max pooling layer if the last two entries in the configuration are 0's. The leaf fully connected layers setup: (\\# of output neurons, dropout rate). For the last layer, we use \\(\\tanh\\) activation function. 
However, the activation function in the second last layer is removed.}\n\t\\label{spin_omega_model_config}\n\t\\vskip -0.25in\n\t\\begin{center}\n\t\t\\begin{small}\n\t\t\t\\begin{tabular}{c|c|c}\n\t\t\t\t\\hline\n\t\t\t\t\\specialcell{Layer \\\\ Component} & \\specialcell{Layer \\\\ Configurations} & \\specialcell{Activation \\\\ Functions}\\\\\n\t\t\t\t\\hline\n\t\t\t\t\\specialcell{Root Layer: \\\\ Convolutional} & $\\begin{array}{c}\n\t\t\t\t(16, 64, 1, 1, 4, 4) \\\\\n\t\t\t\t(16, 128, 1, 2, 4, 4) \\\\\n\t\t\t\t(16, 256, 1, 2, 4, 4) \\\\\n\t\t\t\t(32, 256, 1, 2, 4, 4) \\\\\n\t\t\t\t(4, 128, 1, 2, 0, 0) \\\\\n\t\t\t\t(4, 128, 1, 2, 0, 0) \\\\\n\t\t\t\t(2, 64, 1, 1, 0, 0) \\\\\n\t\t\t\t\\end{array}$ & ReLU \\\\\n\t\t\t\t\\hline\n\t\t\t\t\\specialcell{Leaf Layer: \\\\ Fully Connected} & $\\begin{array}{c}\n\t\t\t\t(128, 0.0) \\\\\n\t\t\t\t(128, 0.0) \\\\\n\t\t\t\t(1, 0.0)\n\t\t\t\t\\end{array}$ & $\\begin{tabular}{c}\n\t\t\t\tReLU \\\\\n\t\t\t\tIdentity \\\\\n\t\t\t\tTanh \\\\\n\t\t\t\t\\end{tabular}$ \\\\\n\t\t\t\t\\hline\n\t\t\t\\end{tabular}\n\t\t\\end{small}\n\t\\end{center}\n\t\\vskip -0.1in\n\\end{table}\n\n\\subsection{Neural network model to measure the masses of the binary components} \n\\label{subsec:mass_model}\n\nThe tree-like network model used for this study is described in the right panel of Figure~\\ref{model_diagram_spin_omegas} and Table~\\ref{mass_model_config}. With respect to the architecture described in the previous section, we reduce the number of convolutional layers in the root from seven to three. We have done this because we are now using more layers in the leaves, which in turn makes the gradient back-propagation harder. Reducing the number of root layers improves gradient updates to the front layers.\n\nEach leaf component uses a squeeze-and-excitation (SE) structure~\\cite{hu2018squeeze}. The SE block is a sub-structure between two layers (squeeze step). 
It applies a global pooling, and assigns weights to each of the channels in the convolutional layers (excitation step). Compared to conventional convolutional structures with universal weights, the SE components adjust the importance of each channel with an adaptively learned weight, which, as described in~\\cite{hu2018squeeze}, effectively results in 25\\% improvement in image classification. For images, channels are usually represented in RGB. Since we are using 1-D time-series signals, we treat channels of the original input signals to be 1. The SE block adaptively recalibrates channel-wise feature responses. Furthermore, the weights are optimally learned through a constraint introduced by the global pooling. This ensures that the weights encode both spatial and channel-wise information. Furthermore, the weights help the channels represent group specific features at deeper layers, which is consistent with our objective of using ``leaves\" for different parameters. \n\nFollowing the SE components, the neural networks have two highway blocks~\\cite{srivas:2015S}. The structures are a variant of the residual structure, as proposed in~\\cite{he2016deep}. In the residual block, instead of directly learning the feature, it learns the residual components by an identity shortcut connection, which resolves the gradients vanishing when the model goes deeper. The highway block only introduces weights to the components in the residual block, which is similar to the application of importance weights on channels in SE components. Finally, we apply three fully connected layers with dropouts after the highway blocks to prevent overfitting~\\cite{JMLR:v15:srivastava14a}. The same nonlinearity reduction is also applied in the second last layer. \n\n\n\\begin{table}[t]\n\t\\setlength{\\tabcolsep}{1.5pt}\n\t\\caption{Architecture of the neural network model used to measure the masses of the binary components. 
For the root convolutional layers, the setup indicates: (kernel size, \\# of output channels, stride, dilation rate, max pooling kernel, max pooling stride). All convolutional layers have ReLU activation function and the padding is set to ``VALID'' mode. For the Leaf SE layer, the setup is: (\\# of output channels, \\# of residual blocks). The general structure for the SE layer follows the configuration described in~\\cite{hu2018squeeze}. Leaf highway layer setup: (kernel size, \\# of channels, stride, \\# of highway blocks). The configuration for the highway is described in~\\cite{srivas:2015S}. The leaf fully connected layers setup is: (\\# of output neurons, dropout rate). For the last layer we use ReLU activation. However, the activation function in the second last layer is removed.}\n\t\\label{mass_model_config}\n\t\\vskip -0.25in\n\t\\begin{center}\n\t\t\\begin{small}\n\t\t\t\\begin{tabular}{c|c|c}\n\t\t\t\t\\hline\n\t\t\t\t\\specialcell{Layer \\\\ Component} & \\specialcell{Layer \\\\ Configurations} & \\specialcell{Activation \\\\ Functions}\\\\\n\t\t\t\t\\hline\n\t\t\t\t\\specialcell{Root Layer: \\\\ Convolutional} & $\\begin{array}{c}\n\t\t\t\t(16, 64, 1, 2, 4, 4) \\\\\n\t\t\t\t(16, 128, 1, 2, 4, 4) \\\\\n\t\t\t\t(16, 128, 1, 2, 4, 4)\n\t\t\t\t\\end{array}$ & ReLU\\\\\n\t\t\t\t\\hline\n\t\t\t\tLeaf Layer: SE & $\\begin{array}{c}\n\t\t\t\t(128, 3) \\\\\n\t\t\t\t(128, 3) \n\t\t\t\t\\end{array}$& ReLU \\\\ \n\t\t\t\t\\hline\n\t\t\t\tLeaf Layer: Highway& $\\begin{array}{c}\n\t\t\t\t(4, 128, 2, 30)\n\t\t\t\t\\end{array}$ & ReLU\\\\\n\t\t\t\t\\hline\n\t\t\t\t\\specialcell{Leaf Layer: \\\\ Fully Connected} & $\\begin{array}{c}\n\t\t\t\t(512, 0.1) \\\\\n\t\t\t\t(256, 0.1) \\\\\n\t\t\t\t(1, 0.0)\n\t\t\t\t\\end{array}$ & $\\begin{tabular}{c}\n\t\t\t\tReLU \\\\\n\t\t\t\tIdentity \\\\\n\t\t\t\tReLU \\\\\n\t\t\t\t\\end{tabular}$\\\\\n\t\t\t\t\\hline\n\t\t\t\\end{tabular}\n\t\t\\end{small}\n\t\\end{center}\n\t\\vskip -0.1in\n\\end{table}\n\n\\subsection{Probabilistic 
Model}\n\\label{sec:prob_model}\n\\new{In this section we present the probabilistic framework based on Bayesian inference, which we have applied to the neural networks outlined in Sections \\ref{subsec:mass_model} and \\ref{subsec:properties_model}. We use Bayesian neural networks (BNNs)~\\cite{neal2012bayesian,mackay1992practical}, which are neural networks with uncertainty over their weights, to provide posterior distributions for the BBH masses and the properties of the BH remnant. This is in contrast to standard neural networks, which provide point estimates of the parameters. We place prior and posterior distribution functions on the last two layers of each leaf. With this approach, each of the leaves becomes an independent probabilistic model that regresses the physical parameters. The root layers, on the other hand, can be viewed as feature extractors for each probabilistic leaf.}\n\n\\new{A BNN can be viewed as a probabilistic model for the posterior distribution, $p(\\boldsymbol{w}|{\\mathcal{D}})$, where $\\boldsymbol{w}$ are the model weights and $\\mathcal{D} = \\{\\boldsymbol{x}_j,\\boldsymbol{y}_j\\}_{j=1}^n$ is the training dataset. Here, $\\boldsymbol{x}_j$ are the input noisy waveforms and $\\boldsymbol{y}_j$ are the continuous parameters of interest, i.e., the BBH masses and the properties of the BH remnant.} \n\n\\new{According to Bayes' theorem, $p({\\boldsymbol{w}} | \\mathcal{D}) \\propto p(\\mathcal{D}|{\\boldsymbol{w}})p({\\boldsymbol{w}}),$ where $p({\\boldsymbol{w}})$ is the prior distribution for the weights and \n$p(\\mathcal{D}|{\\boldsymbol{w}})$ is the likelihood. 
We assume that the likelihood function for each training data pair is}\n\n\\new{\n\\begin{equation}\n \\label{eq:likelihood}\n p\\left(\\boldsymbol{y} \\vert \\boldsymbol{x}, \\boldsymbol{w} \\right) = \\frac{1}{\\sqrt{2\\pi} \\epsilon }\\exp{ \\left(- \\frac{\\| \\boldsymbol{y} - f_{\\boldsymbol{w}}(\\boldsymbol{x}) \\|^2}{2 \\epsilon^2} \\right)},\n\\end{equation}\n}\n\n\\noindent \\new{where $f_{\\boldsymbol{w}}$ represents the neural network function with weights $\\boldsymbol{w}$ and $\\epsilon$ is the standard deviation. The aleatoric uncertainty is captured by the likelihood distribution. A BNN allows a stochastic sampling of the weight parameters during a forward pass through the network while also encoding prior knowledge through the use of prior distributions. \nWe use a variational inference (VI) algorithm to approximate the weight posterior distribution $p(\\boldsymbol{w} | \\mathcal{D})$ using a Gaussian distribution for the weights assuming a mean field approximation, denoted by $q_{\\boldsymbol{\\theta}}(\\boldsymbol{w})$. \nIt is parameterized by $\\boldsymbol{\\theta} = (\\boldsymbol{\\mu}, \\boldsymbol{\\sigma})$, representing the mean vector and the standard deviation vector of the distribution, respectively. }\n\n\n\\new{The corresponding cost function can be written as}\n\n\\new{\\begin{equation}\n \\label{eq:loss}\n \\mathcal{L} = \\mathrm{KL}\\left ( q_{\\boldsymbol{\\theta}}(\\boldsymbol{w} ) \\| p(\\boldsymbol{w}) \\right) - \\mathbb{E}_{q_{\\boldsymbol{\\theta}}(\\boldsymbol{w})} \\log p(\\mathcal{D} \\vert \\boldsymbol{w} ),\n\\end{equation}}\n\n\\noindent \\new{which is known as the variational free energy. The prior distribution is chosen to be a standard normal distribution. Since the probabilistic layers are parameterized by the mean and variance of the weight distributions, the number of parameters that need to be optimized is doubled compared to a standard neural network. 
The cost function can be approximated by drawing $N$ samples $\\boldsymbol{w}^{(i)}$ from $q_{\\boldsymbol{\\theta}}(\\boldsymbol{w})$,}\n\n\\new{\\begin{align}\n \\mathcal{L} & \\approx \\frac{1}{N} \\sum_{i = 1}^N \\left[ -\\log q_{\\boldsymbol{\\theta}} \\left(\\boldsymbol{w}^{(i)} \\right) - \\log p\\left(\\boldsymbol{w}^{(i)} \\right) \\right. \\nonumber \\\\\n & \\quad \\quad \\left . - \\log p\\left(\\mathcal{D} \\vert \\boldsymbol{w}^{(i)} \\right) \\right] \n \\label{eq:loss2} \n\\end{align}}\n\n\\noindent \\new{During training, for every forward model pass, the variational posterior distribution for the model parameters is estimated. Specifically, we use stochastic gradient descent to estimate $\\boldsymbol \\theta$ of $q_{\\boldsymbol{\\theta}}(\\boldsymbol{w} )$ by minimizing Eq.~\\eqref{eq:loss2}. In testing or inference mode, for input waveform $\\boldsymbol{x}^*$, our approximate predictive distribution is given by,}\n\\new{\n\\begin{align}\n\\label{uncertainty_approx_fun}\n q( \\boldsymbol{y}^* | \\boldsymbol{x}^* ) & = \\int p(\\boldsymbol{y}^*| \\boldsymbol{x}^*, \\boldsymbol{w}) q_{\\boldsymbol{\\theta}}(\\boldsymbol{w}) \\, d{\\boldsymbol{w}}\\,.\n\\end{align}\n}\n\n\\noindent \\new{ We use sampling to compute the statistics of the corresponding estimated physical parameters, e.g., median and 90\\% confidence interval. In addition to the aleatoric uncertainty, the uncertainty in the predictions arises from uncertainty in the weights, the so-called `epistemic uncertainty.'}\n\n\n\\new{In this probabilistic modeling, we apply the following simplifications: (1) the likelihood function is assumed to be Gaussian, and (2) neural network weight distributions are assumed to be independent Gaussians. Under these assumptions, the loss in Eq.~\\eqref{eq:loss2} is simplified and tractable. 
The statistical models and the VI method are implemented with the computing framework TensorFlow Probability (TFP) \\cite{2017arXiv171110604D,2018arXiv181203973T}, with a modified sampling scheme, and are distributed across nodes in a data-parallel fashion using Horovod~\\cite{2018arXiv180205799S}. Details of the model training at scale are discussed in Section~\\ref{bnn_scale}.}\n\n\\subsection{Dataset Preparation}\n\nTo demonstrate the use of deep learning for parameter estimation, we consider the catalog of BBH mergers presented in~\\cite{o1o2catalog}. Based on the Bayesian analyses presented in that study, we consider the following parameter space to produce our training dataset: \\(m_1\\in[9{\\rm M}_{\\odot},\\,65{\\rm M}_{\\odot}]\\), \\(m_2\\in[5.2{\\rm M}_{\\odot},\\, 42{\\rm M}_{\\odot}]\\). The spins of the binary components span the range \\(a_{\\{1,\\,2\\}}\\in[-0.8,\\,0.8]\\). By uniformly sampling this parameter space we produce a dataset with 300,180 waveforms. These waveforms are produced with the surrogate waveform family~\\cite{blackman:2015}, considering the last second of the evolution, which includes the late inspiral, merger, and ringdown. The waveforms are produced using a sample rate of 8192~Hz.\n\n\nFor training purposes, we label the waveforms using the masses and spins of the binary components, and then use this information to also enable the neural network to estimate the final spin of the BH remnant using the formulae provided in~\\cite{Hofmann:2016yih}, and the QNMs of the ringdown following~\\cite{Berti:2006b}. 
In essence, we are training our neural network models to identify, within a unified framework, the key features that determine the properties of the BBHs before and after merger.\n\nIn order to encapsulate the true properties of advanced LIGO noise, we whiten all the training templates using real LIGO noise from the Hanford and Livingston detectors gathered during the first and second observing runs~\\cite{losc}.\n\nWe use 70\\% of these waveform samples for training, 15\\% for validation, and 15\\% for testing. The training samples are randomly and uniformly chosen. Throughout the training, we use the ADAM optimizer with default hyper-parameter setups~\\cite{journals\/corr\/KingmaB14} to minimize the mean squared error of the predicted parameters. We set the batch size to 64, the learning rate to 0.0008, and the maximum number of iterations to 120,000. We use a dropout rate of 0.1 during training; no dropout is applied for testing and validation. To simulate the environment in which true GWs are embedded, we use real advanced LIGO noise to compute the power spectral density, which is then used to whiten the templates. In addition, we apply random left or right shifts of 0\\% to 6\\%. This endows the neural networks with time-invariance, and improves their ability to estimate the parameters of the signal irrespective of its position in the data stream. This technique also helps prevent overfitting: since the locations are randomly shifted and independent noise is injected, the training data are different at each epoch. 
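The whitening and time-shift augmentation described above can be sketched in a few lines. The following is a minimal NumPy illustration (our own sketch, not the actual training pipeline), assuming a one-second template sampled at 8192 Hz and a PSD already evaluated at the template's rfft frequencies:

```python
import numpy as np

def whiten(template, psd):
    """Whiten a time-domain template by a noise PSD in the frequency domain.

    `psd` must be sampled at the rfft frequencies of `template`.
    """
    spectrum = np.fft.rfft(template)
    white = np.fft.irfft(spectrum / np.sqrt(psd), n=len(template))
    return white / np.std(white)  # normalize to unit variance

def random_shift(template, max_frac=0.06, rng=None):
    """Shift the template left or right by up to `max_frac` of its length."""
    rng = np.random.default_rng() if rng is None else rng
    max_shift = int(max_frac * len(template))
    return np.roll(template, rng.integers(-max_shift, max_shift + 1))
```

Note that `np.roll` wraps the template around; a production pipeline would instead pad the shifted template with noise, but the wrap-around version suffices to illustrate the augmentation.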
\n\n\\begin{figure*}\n\t\\centerline{\n\t\t\\includegraphics[width=0.48\\linewidth]{masses_plot.png}\n\t\t\\hspace{5mm}\n\t\t\\includegraphics[width=0.48\\linewidth]{spin_omegas_plot.png}\n\t}\n\t\\caption{Relative error with which our deep learning algorithm can measure the masses, final spin, \\(a_f\\), and quasi-normal modes (QNMs), \\((\\omega_R\\,,\\omega_I)\\) of the binary black hole components as a function of optimal matched-filtering signal-to-noise ratio (SNR). \\textit{Left panel:} For waveforms with \\(\\textrm{SNR}\\geq15\\), the primary and secondary masses can be constrained with relative errors less than \\((7\\%,\\,12\\%)\\), respectively. \\textit{Right panel:} For signals with \\(\\textrm{SNR}\\geq15\\), \\((a_f\\,,\\omega_R\\,,\\omega_I)\\) can be recovered with relative errors less than \\((13\\%,\\, 5\\%,\\,3\\%)\\), respectively.}\n\t\\label{relative_errors_mass}\n\\end{figure*}\n\n\n\\subsection{Curriculum learning with decreasing signal-to-noise ratio}\n\\label{dec_snr}\n\n\n\n\\begin{table}[t]\n\t\\setlength{\\tabcolsep}{7pt}\n\t\\caption{Decreasing peak SNR (pSNR) setup. The pSNR is uniformly chosen within the indicated range. Notice that the early stopping criterion is also applied if the number of iterations is greater than 60,000 and the relative error threshold is met. 
The relation between matched-filtering SNR and pSNR is: 1.0 pSNR $\\approx$ 13.0 SNR.}\n\t\\label{CL_BNN}\n\t\\vskip -0.25in\n\t\\begin{center}\n\t\t\\begin{small}\n\t\t\t\\begin{sc}\n\t\t\t\t\\begin{tabular}{c|c}\n\t\t\t\t\t\\hline\n\t\t\t\t\tIterations & pSNRs \\\\\n\t\t\t\t\t\\hline\n\t\t\t\t\t1-12000 & 2.0-3.0\\\\\n\t\t\t\t\t12001-24000 & 1.5-3.0\\\\ \n\t\t\t\t\t24001-36000 & 1.0-3.0 \\\\\n\t\t\t\t\t36001-60000 & 0.5-3.0\\\\\n\t\t\t\t\t60001-90000 & 0.3-3.0\\\\\n\t\t\t\t\t90001-120000 & 0.2-3.0\\\\\n\t\t\t\t\t120001- & 0.1-3.0 \\\\\n\t\t\t\t\t\\hline\n\t\t\t\t\\end{tabular}\n\t\t\t\\end{sc}\n\t\t\\end{small}\n\t\\end{center}\n\t\\vskip -0.1in\n\\end{table}\n\n\nIn realistic detection scenarios, GWs have moderate SNRs, and are contaminated by non-Gaussian and non-stationary noise. In order to ensure that neural networks identify GWs over a broad range of astrophysically motivated SNRs, we start training them with large SNRs, and gradually reduce the SNRs to a lower level. This idea is taken from the curriculum learning literature~\\cite{Bengio:2009:CL:1553374.1553380}, and allows the network to transfer the more accurate information it learns from high-SNR signals to signals with lower SNRs. This approach has been demonstrated for classification, regression and denoising of GW signals~\\cite{geodf:2017a,geodf:2017b,geodf:2017c,Rebei:2018R,hshen:2017,dgNIPS,wei:2019W,George:2017qtr}. Specifically, each waveform is normalized to have maximum amplitude 1, and then we use curriculum learning with the decreasing SNR scheme detailed in Table~\\ref{CL_BNN} (the strategy for the BNN models is the same). The noisy data are then normalized to have unit variance. We normalize the data to ensure that the trained model can characterize true BBH signals in realistic detection scenarios, covering a broad range of SNRs.\n\nThe different steps followed in our curriculum learning scheme are presented in Table~\\ref{CL_BNN}. 
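The curriculum schedule in Table~\ref{CL_BNN} amounts to a simple lookup from training iteration to a pSNR sampling range. A minimal sketch follows; the stage boundaries mirror the table, and the function names are illustrative rather than those of the actual code:

```python
import numpy as np

# (last iteration of each stage, lower pSNR bound); the upper bound is fixed at 3.0
PSNR_STAGES = [(12000, 2.0), (24000, 1.5), (36000, 1.0),
               (60000, 0.5), (90000, 0.3), (120000, 0.2)]

def psnr_range(iteration):
    """Return the (low, high) peak-SNR range for a given training iteration."""
    for last_iteration, low in PSNR_STAGES:
        if iteration <= last_iteration:
            return (low, 3.0)
    return (0.1, 3.0)  # iterations beyond 120,000

def sample_psnr(iteration, rng=None):
    """Draw a peak SNR uniformly from the current curriculum stage."""
    rng = np.random.default_rng() if rng is None else rng
    return rng.uniform(*psnr_range(iteration))
```

Each training batch would then be injected into noise scaled so that the waveform's peak SNR matches the drawn value.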
In addition, we use an early stopping criterion with a relative error threshold of 0.026 for \\((m_1\\,,m_2)\\) and 0.0016 for \\((a_f,\\,\\omega_R\\,,\\omega_I)\\). One additional change to the mass model is that we rescale the masses by a factor of 1\/20 to make the optimization converge faster. In the evaluation, we simply scale the predictions back to their original amplitude.\n\n\\subsection{Training of the Bayesian Neural Network Model}\n\\label{bnn_scale}\n\n\\new{For the probabilistic layers, as the effective number of parameters to be optimized is double that of standard layers, we examine the impact of scaling the BNN code across nodes on the pre-exascale Cray XC40 system, Theta, at Argonne National Laboratory. Using an optimized build of both TensorFlow and Horovod for the Intel Xeon-Phi [code-named Knights Landing (KNL)] architecture, we distribute the code using one MPI rank per node and 128 hardware threads per node, and scale up to 1024 nodes. Results for the number of samples processed per second during training are shown in Figure~\\ref{bnn_nn_scaling}. We achieve $\\sim 75$\\% efficiency up to 1024 nodes on Theta. As the number of nodes is increased, there is increased communication of the gradients at each iteration, which causes an expected decrease in performance away from the ideal scaling. As the BNN layers have in effect twice the parameters of the standard layers, the communication cost is slightly higher, which can be seen as a decrease in the number of samples processed per second.}\n\n\\new{In addition to evaluating the efficiency on Theta, we fully trained the two BNN models on the Hardware-Accelerated Learning (HAL) cluster at the National Center for Supercomputing Applications. Each model was trained on 4 NVIDIA V100 GPUs with a batch size of 64. The parameter $\\epsilon$ in the likelihood function Eq.~\\eqref{eq:likelihood} is chosen to be 0.1 for the mass model and $10^{-3}$ for the final spin and QNMs model. 
We draw $N = 100$ samples and $M = 1600$ samples from $q_{\\boldsymbol{\\theta}}(\\boldsymbol{w})$ at training and testing, respectively. The learning rate for the two BNN models is $8 \\times 10^{-6}$. The total number of iterations is 200,000 to guarantee convergence.}\n\n\\begin{figure}[t!]\n\\includegraphics[width=90mm]{bnn_nn_scaling.pdf}\n\t\\caption{Samples processed per second with increasing number of nodes during training of the neural network. The results for the BNN are shown in cyan and for the standard neural network in blue. Ideal scaling is shown as a dashed black bar at each node count. Error bars are the variance from all iterations during training.}\n\t\\label{bnn_nn_scaling}\n\\end{figure}\n\n\n\\section{Experimental Results}\n\\label{experiments}\nUsing the signal manifold described in the previous section, we present results of the accuracy with which our neural network models can measure the masses of the binary components, and the properties of the corresponding remnant. \n\n\\indent Figure~\\ref{relative_errors_mass} presents the accuracy with which the binary components \\((m_1,\\,m_2)\\) can be recovered over a broad range of SNRs. We notice that for signals with \\(\\textrm{SNR}\\geq15\\), the primary and secondary masses can be constrained with relative errors~\\cite{relerror:1965} less than \\((7\\%,\\,12\\%)\\), respectively. These results represent a major improvement over the analysis we reported in the context of a 2-D signal manifold in~\\cite{geodf:2017a,geodf:2017b}. Furthermore, we can also see from the same figure that for signals with \\(\\textrm{SNR}\\geq15\\) our neural network models can measure the triplet \\((a_f\\,,\\omega_R\\,,\\omega_I)\\) with relative errors less than \\((13\\%,\\, 5\\%,\\,3\\%)\\), respectively. 
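For reference, the relative errors quoted here and in Figure~\ref{relative_errors_mass} follow the standard definition $|\hat{y}-y|/|y|$; a one-line helper (our own illustration, assuming that definition):

```python
import numpy as np

def relative_error(predicted, true):
    """Element-wise relative error |predicted - true| / |true|."""
    predicted = np.asarray(predicted, dtype=float)
    true = np.asarray(true, dtype=float)
    return np.abs(predicted - true) / np.abs(true)
```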
To the best of our knowledge, this is the first time that deep learning has been used to infer the properties of BH remnants directly from GW signals.\n\n\n\\section{Deep learning parameter estimation of detected binary black hole mergers}\n\\label{discussion}\n\n\n\\begin{table*}[!htp]\n\t\\setlength{\\tabcolsep}{4.5pt}\n\t\\renewcommand{\\arraystretch}{2.0}\n\t\\caption{\\new{Parameter estimation results for the catalog of binary black hole mergers reported in~\\cite{o1o2catalog} using our deterministic deep learning models. We report median values with the 90\\% confidence interval, which was computed by whitening gravitational wave strain data that contain real gravitational wave signals with up to 240 different power spectral densities.}}\n\t\\label{tab_real_event_results}\n\t\\vskip -0.25in\n\t\\begin{center}\n\t\t\\begin{small}\n\t\t\t\\begin{sc}\n\t\t\t\t\\begin{tabular}{c|ccccc}\n\t\t\t\t\t\\hline\n\t\t\t\t\tEvent Name & $m_1\\, [{\\rm M}_{\\odot}]$ & $m_2\\, [{\\rm M}_{\\odot}]$ & $a_f$ & $\\omega_{R}$ & $\\omega_{I}$ \\\\\n\t\t\t\t\t\\hline\n\t\t\t\t\tGW150914 & $35.64_{-5.55}^{+5.19}$ & $29.74_{-3.90}^{+2.12}$ & $0.658_{-0.006}^{+0.039}$ & $0.5253_{-0.0026}^{+0.0186}$ & $0.0820_{-0.0009}^{+0.0002}$ \\\\\n\t\t\t\t\tGW151012 & $25.01_{-9.09}^{+12.00}$ & $16.45_{-6.01}^{+4.50}$ & $0.637_{-0.015}^{+0.011}$ & $0.5155_{-0.0086}^{+0.0028}$ & $0.0824_{-0.0002}^{+0.0002}$ \\\\ \n\t\t\t\t\tGW151226 & $12.39_{-0.25}^{+3.57}$ & $7.70_{-0.48}^{+5.77}$ & $0.725_{-0.140}^{+0.051}$ & $0.5558_{-0.0611}^{+0.0241}$ & $0.0776_{-0.0002}^{+0.0055}$ \\\\ \n\t\t\t\t\tGW170104 & $32.28_{-6.33}^{+4.31}$ & $22.31_{-3.06}^{+7.01}$ & $0.684_{-0.035}^{+0.014}$ & $0.5157_{-0.0068}^{+0.0071}$ & $0.0854_{-0.0015}^{+0.0004}$ \\\\\n\t\t\t\t\tGW170608 & $12.90_{-0.31}^{+3.27}$ & $9.93_{-0.09}^{+2.08}$ & $0.716_{-0.077}^{+0.017}$ & $0.5385_{-0.0154}^{+0.0057}$ & $0.0827_{-0.0004}^{+0.0006}$ \\\\\n\t\t\t\t\tGW170729 & $45.32_{-0.98}^{+2.23}$ & $24.41_{-2.32}^{+3.16}$ & $0.737_{-0.058}^{+0.036}$ 
& $0.5682_{-0.0303}^{+0.0038}$ & $0.0739_{-0.0016}^{+0.0054}$ \\\\\n\t\t\t\t\tGW170809 & $35.71_{-8.46}^{+7.53}$ & $24.09_{-2.44}^{+5.80}$ & $0.632_{-0.010}^{+0.008}$ & $0.5123_{-0.0041}^{+0.0034}$ & $0.0826_{-0.0002}^{+0.0001}$ \\\\\n\t\t\t\t\tGW170814 & $30.54_{-8.78}^{+2.01}$ & $22.33_{-7.96}^{+0.07}$ & $0.679_{-0.003}^{+0.002}$ & $0.5364_{-0.0030}^{+0.0009}$ & $0.0812_{-0.0001}^{+0.0003}$ \\\\\n\t\t\t\t\tGW170818 & $31.52_{-1.95}^{+2.15}$ & $25.97_{-0.87}^{+1.21}$ & $0.716_{-0.021}^{+0.015}$ & $0.5474_{-0.0104}^{+0.0062}$ & $0.0786_{-0.0013}^{+0.0013}$ \\\\\n\t\t\t\t\tGW170823 & $46.98_{-3.89}^{+0.58}$ & $33.01_{-5.92}^{+2.03}$ & $0.626_{-0.023}^{+0.014}$ & $0.5067_{-0.0057}^{+0.0070}$ & $0.0827_{-0.0003}^{+0.0006}$ \\\\\n\t\t\t\t\t\\hline\n\t\t\t\t\\end{tabular}\n\t\t\t\\end{sc}\n\t\t\\end{small}\n\t\\end{center}\n\t\\vskip -0.1in\n\\end{table*}\n\n\\begin{table*}[!htp]\n\t\\setlength{\\tabcolsep}{4.5pt}\n\t\\renewcommand{\\arraystretch}{2.0}\n\t\\caption{\\new{As Table~\\ref{tab_real_event_results}, but now using our probabilistic deep learning models. 
The uncertainty for these models is captured by randomness in the network weights, not from various noisy realizations of the signals.}}\n\t\\label{tab_real_event_results_BNN}\n\t\\vskip -0.25in\n\t\\begin{center}\n\t\t\\begin{small}\n\t\t\t\\begin{sc}\n\t\t\t\t\\begin{tabular}{c|ccccc}\n\t\t\t\t\t\\hline\n\t\t\t\t\tEvent Name & $m_1 [{\\rm M}_{\\odot}]$ & $m_2 [{\\rm M}_{\\odot}]$ & $a_f$ & $\\omega_{R}$ & $\\omega_{I}$ \\\\\n\t\t\t\t\t\\hline\n\t\t\t\t\tGW150914 & $36.08_{-4.45}^{+4.77}$ & $27.42_{-3.92}^{+3.49}$ & $0.689_{-0.032}^{+0.017}$ & $0.5390_{-0.0269}^{+0.0124}$ & $0.0797_{-0.0022}^{+0.0011}$ \\\\\n\t\t\t\t\tGW151012 & $21.56_{-2.12}^{+3.07}$ & $15.46_{-2.32}^{+2.44}$ & $0.681_{-0.032}^{+0.016}$ & $0.5365_{-0.0266}^{+0.0130}$ & $0.0804_{-0.0018}^{+0.0008}$ \\\\ \n\t\t\t\t\tGW151226 & $18.04_{-2.49}^{+1.98}$ & $11.96_{-2.89}^{+1.67}$ & $0.715_{-0.035}^{+0.017}$ & $0.5533_{-0.0280}^{+0.0142}$ & $0.0763_{-0.0036}^{+0.0017}$ \\\\ \t\n\t\t\t\t\tGW170104 & $31.23_{-3.26}^{+4.09}$ & $23.27_{-3.25}^{+3.62}$ & $0.692_{-0.033}^{+0.016}$ & $0.5358_{-0.0302}^{+0.0052}$ & $0.0796_{-0.0052}^{+0.0026}$ \\\\\n\t\t\t\t\tGW170608 & $16.73_{-2.19}^{+2.38}$ & $12.44_{-2.21}^{+2.03}$ & $0.673_{-0.036}^{+0.019}$ & $0.5235_{-0.0297}^{+0.0149}$ & $0.0818_{-0.0012}^{+0.0005}$ \\\\\n\t\t\t\t\tGW170729 & $45.28_{-6.42}^{+6.63}$ & $32.34_{-5.48}^{+3.91}$ & $0.751_{-0.038}^{+0.019}$ & $0.5776_{-0.0309}^{+0.0151}$ & $0.0756_{-0.0048}^{+0.0023}$ \\\\\n\t\t\t\t\tGW170809 & $32.88_{-3.45}^{+4.49}$ & $26.56_{-3.91}^{+3.54}$ & $0.714_{-0.034}^{+0.016}$ & $0.5492_{-0.0271}^{+0.0060}$ & $0.0760_{-0.0060}^{+0.0030}$ \\\\\n\t\t\t\t\tGW170814 & $32.40_{-3.60}^{+4.77}$ & $25.22_{-4.34}^{+4.38}$ & $0.675_{-0.033}^{+0.016}$ & $0.5329_{-0.0272}^{+0.0140}$ & $0.0794_{-0.0024}^{+0.0011}$ \\\\\n\t\t\t\t\tGW170818 & $33.49_{-3.19}^{+4.51}$ & $29.71_{-4.63}^{+4.59}$ & $0.631_{-0.032}^{+0.015}$ & $0.5159_{-0.0277}^{+0.0135}$ & $0.0829_{-0.0016}^{+0.0008}$ \\\\\n\t\t\t\t\tGW170823 & 
$38.24_{-5.38}^{+4.78}$ & $28.16_{-3.67}^{+4.63}$ & $0.664_{-0.036}^{+0.018}$ & $0.5321_{-0.0302}^{+0.0156}$ & $0.0757_{-0.0088}^{+0.0043}$ \\\\\n\t\t\t\t\t\\hline\n\t\t\t\t\\end{tabular}\n\t\t\t\\end{sc}\n\t\t\\end{small}\n\t\\end{center}\n\t\\vskip -0.1in\n\\end{table*}\n\n\nIn this section we use our neural network models to measure \\((m_1,\\,m_2,\\,a_f,\\,\\omega_R,\\,\\omega_I)\\) from all the BBH mergers detected to date by the advanced LIGO and Virgo observatories~\\cite{o1o2catalog}. We present results for two types of neural network models, namely, deterministic and probabilistic. \n\n\\subsection{Parameter estimation with deterministic neural networks}\n\n\\new{To gain insight into the ability of our deterministic neural network models to infer the astrophysical parameters of BBH mergers, we begin by evaluating them for a given BBH system whose ground truth parameters are \\((m_1,\\,m_2,\\,a_f,\\,\\omega_{R},\\,\\omega_{I})= (31.10M_{\\odot},\\,20.46M_{\\odot},\\,0.718,\\,0.5412,\\, 0.0800)\\). Using 1,600 different noise realizations, we have constructed the model predictions for two different SNR cases, as shown in Figure~\\ref{fig_multiple_noise_check}. We notice that these distributions capture the ground-truth values of the BBH system under consideration, and that the reconstruction of the actual parameters of the system improves for larger SNR values, in agreement with results from traditional Bayesian GW parameter estimation~\\cite{bambiann:2015PhRvD}. Having conducted similar experiments for other BBH systems, we then went on to use these deep learning models for the parameter reconstruction of real BBH mergers.}\n\n\nIn Table~\\ref{tab_real_event_results} we present the median and 90\\% confidence intervals for the astrophysical parameters \\((m_1,\\,m_2,\\,a_f,\\,\\omega_{R},\\,\\omega_{I})\\) of all the BBH mergers presented in~\\cite{o1o2catalog}. 
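The medians and 90\% confidence intervals reported in these tables can be extracted from an ensemble of predictions (one per PSD/noise realization for the deterministic models, or one per weight sample for the BNN) with simple percentiles. A sketch, with illustrative names:

```python
import numpy as np

def summarize(predictions):
    """Median with asymmetric 90% interval (5th--95th percentiles)."""
    lo, med, hi = np.percentile(predictions, [5.0, 50.0, 95.0])
    return med, med - lo, hi - med  # reported as med_{-minus}^{+plus}
```

Applying `summarize` to the 240 whitened-data predictions for a given event yields the entries quoted in the tables.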
These values are computed by whitening the data containing a putative signal with 240 different Power Spectral Densities (PSDs), half of which are constructed using LIGO Hanford noise and the rest with LIGO Livingston noise. Through this approach we are effectively measuring the impact of PSD variations on the measurements of the astrophysical parameters of BBH mergers. We find that these estimates are in very good agreement with the results obtained with the Bayesian analyses presented in Table III of~\\cite{o1o2catalog}. \n\n\n\\begin{figure*}\n\t\\centering\n\t\\subfigure[SNR = 13.0]{\n\t\t\\includegraphics[width=0.49\\linewidth]{mass_seaborn_det_multi_noise_real_1_9.png}}\n\t\t\\subfigure[SNR = 19.5]{\n\t\t\\includegraphics[width=0.49\\linewidth]{mass_seaborn_det_multi_noise_real_1_14.png}}\n \\subfigure[SNR = 13.0]{\n\t\t\\includegraphics[width=0.49\\linewidth]{spin_ome_test_GW170104_sim_3_9.png}}\n\t\t\\subfigure[SNR = 19.5]{\n\t\t\\includegraphics[width=0.49\\linewidth]{spin_ome_test_GW170104_sim_3_14.png}}\n\t\n\t\\caption{\\new{Model predictions obtained by evaluating our deterministic models with 1,600 different noise realizations for a binary black hole system with ground truth parameters \\((m_1,\\,m_2,\\,a_f,\\,\\omega_{R},\\,\\omega_{I})= (31.10M_{\\odot},\\,20.46M_{\\odot},\\,0.718,\\,0.5412,\\, 0.0800)\\). The panels show results for the distribution of the estimates for (\\(m_1,\\,m_2,\\,a_f,\\,w_R,\\,w_I\\)) assuming \\(\\textrm{SNR}=\\{13,\\,19.5\\}\\).}}\n\t\\label{fig_multiple_noise_check}\n\\end{figure*}\n\n\\subsection{Bayesian neural network parameter estimation}\n\n\\new{In addition to the parameter estimation results obtained with our deterministic models, based on varying the noise realization with different PSDs, we also evaluated our BNN models on two types of signals. First, on simulated signals, to quantify the performance of our probabilistic models. Results of this exercise are presented in Figure~\\ref{bnn_dist}. 
We carried out an exhaustive study to confirm that our BNN models provide consistent results for different random initializations, and that the results exhibit strong convergence for the optimal choice of hyperparameters.} \n\n\\new{Upon confirming that our probabilistic models perform well, we used them to estimate the astrophysical parameters of the entire catalog of BBH signals reported in~\\cite{o1o2catalog}. These results, which provide the median and the 90\\% confidence intervals, are summarized in Table~\\ref{tab_real_event_results_BNN}.}\n\n\n\\begin{figure*}\n\t\\centering\n\t\\subfigure[SNR = 13.0]{\n\t\t\\includegraphics[width=0.49\\linewidth]{mass_seaborn_BNN_newloss_newdata_multi_noise_real_1_9.png}}\n\t\t\\subfigure[SNR = 19.5]{\n\t\t\\includegraphics[width=0.49\\linewidth]{mass_seaborn_BNN_newloss_newdata_multi_noise_real_1_14.png}}\n \\subfigure[SNR = 13.0]{\n\t\t\\includegraphics[width=0.49\\linewidth]{spin_ome_BNN_test_GW170104_sim_3_9.png}}\n\t\t\\subfigure[SNR = 19.5]{\n\t\t\\includegraphics[width=0.49\\linewidth]{spin_ome_BNN_test_GW170104_sim_2_14.png}}\n\t\n\t\\caption{\\new{Variational inference distributions produced by our Bayesian Neural Network models for a binary black hole system with ground truth parameters \\((m_1,\\,m_2,\\,a_f,\\,\\omega_{R},\\,\\omega_{I})= (31.10M_{\\odot},\\,20.46M_{\\odot},\\,0.718,\\, 0.5412,\\, 0.0800)\\). The panels show results for the distribution of the estimates for (\\(m_1,\\,m_2,\\,a_f,\\,w_R,\\,w_I\\)). As in Figure~\\ref{fig_multiple_noise_check}, we consider \\(\\textrm{SNR}=\\{13,\\,19.5\\}\\).}}\n\t\\label{bnn_dist}\n\\end{figure*}\n\n\n The deep learning parameter estimation results presented in Table~\\ref{tab_real_event_results_BNN} are consistent with those obtained with established, Bayesian parameter estimation pipelines~\\cite{o1o2catalog}. 
\\new{The reliable astrophysical information inferred in low latency (less than 2 milliseconds per BBH signal) by these deep learning algorithms warrants the extension of this framework to characterize other GW sources, including eccentric compact binary mergers, and sources such as BBH systems with significant spin and asymmetric mass ratios that require the inclusion of higher-order modes for accurate GW source modeling. This work is under active development and will be presented shortly.} \n\n\n\nHaving demonstrated the application of deep learning at scale for the characterization of BBH mergers, a natural next step is to design deep neural networks for real-time detection and characterization of GW sources that are expected to have electromagnetic and astro-particle counterparts, i.e., BNS and NSBH systems. For that study, we expect no computational challenges beyond the ones we have already addressed in this analysis. The central development for such an effort, however, will consist of designing a clever algorithm to readily identify BNS or NSBH systems in a hierarchical manner, i.e., in principle one does not need to train neural networks on minute-long waveforms. Rather, we need to figure out how much information is needed to accurately reconstruct the astrophysical parameters of one of these events in real time. These studies should be pursued in the future. \n\n\\section{Conclusion}\n\\label{conclusion}\n\nWe have presented the first application of deep learning at scale to characterize the astrophysical properties of BHs whose spins are aligned or anti-aligned, and which evolve on quasi-circular orbits. Using over \\(10^7\\) waveforms to densely sample this parameter space, and encoding time- and scale-invariance, we have demonstrated that deep learning enables real-time GW parameter estimation. 
These studies mark the first time that BNNs are trained using 1,024 nodes on a supercomputer platform tuned for deep learning research, and, when applied to the analysis of real advanced LIGO data, they maintain an accuracy similar to models trained on 4 V100 GPUs. Our results are consistent with established, compute-intensive, Bayesian methods that are routinely used for GW parameter estimation. \n\nThe approach we have presented herein provides the means to constrain the parameters of BBHs before and after the merger event. We have shown that deep learning can directly infer the final spin and QNMs of BH remnants, thereby paving the way to directly use QNMs to assess whether BH remnants are accurately described by general relativity. In future work, we will study how accurately these neural network models can tell apart ringdown waveforms described by astrophysically motivated alternative theories of gravity in realistic detection scenarios. The extension of this work to enable real-time detection and parameter estimation of GW sources that are central for Multi-Messenger Astrophysics discovery campaigns, and other astrophysically motivated sources, such as eccentric BBH mergers, should also be investigated. \n\n\n\n\\section{Acknowledgements}\nDue to the size of the data, the datasets utilized in this study are available from the corresponding author on reasonable request. The codes will be made available before the official publication. \n\n\nThis work utilized the Hardware-Accelerated Learning (HAL) cluster, supported by the NSF Major Research Instrumentation program, grant \\#1725729, as well as the University of Illinois at Urbana-Champaign. \n\nThis research used resources of the Argonne Leadership Computing Facility, which is a DOE Office of Science User Facility supported under Contract DE-AC02-06CH11357. 
\n\nThis research is part of the Blue Waters sustained-petascale computing project, \nwhich is supported by the NSF (awards OCI-0725070 and ACI-1238993) \nand the State of Illinois. Blue Waters is a joint effort of the University of Illinois at \nUrbana-Champaign and its National Center for Supercomputing Applications (NCSA). We acknowledge support from the NCSA, and thank the \\href{http:\/\/gravity.ncsa.illinois.edu}{NCSA Gravity Group} for useful feedback. Tesla P100 and V100 GPUs used for this project were donated by NVIDIA to the \\href{http:\/\/gravity.ncsa.illinois.edu}{NCSA Gravity Group}, and are hosted by Vlad Kindratenko at the Innovative Systems Lab at NCSA. This work also made use of the Extreme Science and Engineering Discovery Environment (XSEDE), which is supported by National Science Foundation grant number ACI-1548562. Specifically, it used the Bridges system, which is supported by NSF award ACI-1445606, at the Pittsburgh Supercomputing Center (PSC); the TG-PHY160053 grant is gratefully acknowledged. \n\nThis paper was reviewed and approved by the LIGO P\\&P committee.\n\n\\vspace{-0.0cm}\n\\section{Contribution}\n\nEAH envisioned this study, and directed the construction of the data sets used to train\/validate\/test the neural network models. H~Shen developed the neural network structure and carried out the training and evaluation. ZZ supervised the evaluation of the neural network performance. EJ created the BNN, implemented it to run at scale, and advised on its use for parameter predictions. H~Shen created the BNN code with a new sampling approach and training objective, and trained and evaluated the BNN model on the HAL machine for parameter predictions on real GW events. H~Sharma developed the BNN code and carried out extensive scaling tests on Theta. All co-authors contributed to drafting and editing the manuscript. 
\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section*{Abstract}\nThe possible existence of a liquid-liquid critical point in deeply supercooled water has been a subject of debate in part due to the challenges associated with providing definitive experimental evidence.\nPioneering work by Mishima and Stanley [Nature 392, 164 (1998) and Phys.~Rev.~Lett. 85, 334 (2000)] sought to shed light on this problem by studying the melting curves of different ice polymorphs and their metastable continuation in the vicinity of the expected location of the liquid-liquid transition and its associated critical point.\nBased on the continuous or discontinuous changes in slope of the melting curves, Mishima suggested that the liquid-liquid critical point lies between the melting curves of ice III and ice V.\nHere, we explore this conjecture using molecular dynamics simulations with a purely-predictive machine learning model based on \\textit{ab initio} quantum-mechanical calculations.\nWe study the melting curves of ices III, IV, V, VI, and XIII using this model and find that the melting lines of all the studied ice polymorphs are supercritical and do not intersect the liquid-liquid transition locus.\nWe also find a pronounced, yet continuous, change in slope of the melting lines upon crossing of the locus of maximum compressibility of the liquid.\nFinally, we analyze critically the literature in light of our findings, and conclude that the scenario in which melting curves are supercritical is favored by the most recent computational and experimental evidence.\nThus, although the preponderance of experimental and computational evidence is consistent with the existence of a second critical point in water, the behavior of the melting lines of ice polymorphs does not provide strong evidence in support of this viewpoint, according to our calculations.\n\n \n\n\n\\newpage\n\\parindent 1em\n\n\\section*{Introduction}\n\\label{sec:introduction}\nWater continues to 
be the focus of intense scientific inquiry, not only because of its importance in the biological and physical sciences, but also on account of its distinctive thermophysical properties and phase behavior. Water exhibits at least 17 different crystalline phases (with new ones continuing to be uncovered) \\cite{Salzmann19,Hansen21}, multiple glassy states \\cite{Loerting11}, and possibly also a liquid-liquid phase transition (LLT) between high-density and low-density liquids (HDL and LDL, respectively) under supercooled conditions \\cite{Poole92,Gallo16}. As such, water provides a rich proving ground to stretch our understanding of diverse thermophysical phenomena including complex phase equilibria, metastable phase transitions, and glass physics \\cite{DebenedettiBook}, as well as the possible relationships between them \\cite{Handle17,Debenedetti03}. \n\nThe possibility of an LLT in water has been the focus of numerous studies \\cite{Gallo16}, and a preponderance of both experimental and computational evidence points to the existence of water's LLT at positive pressures ($P$) and supercooled temperatures ($T$) (i.e., below the melting $T$ of the stable ice I phase) \\cite{Kim20,NILSSON22,Palmer14,Debenedetti20,Palmer18,Gartner22,weis2022liquid}. However, there remain many unresolved questions around the LLT and its relationship to water's properties and various solid phases. A set of observations instrumental to the development of the argument in favor of the LLT came about when Mishima and Stanley characterized the melting of various ice polymorphs to liquid water upon decompression at different $T$ \\cite{Mishima98,Mishima00}. They observed that the melting curve of ice III exhibited a notable but continuous change in slope in the $T-P$ plane, while ice V and ice IV exhibited sharp and seemingly discontinuous changes in slope. 
Recall that, by the Clausius-Clapeyron equation \\cite{callen1998thermodynamics},\n\\begin{equation}\n\\frac{dP}{dT} = \\frac{\\Delta H}{T_m \\Delta V},\n\\label{eq:Clausius-Clapeyron}\n\\end{equation}\nthe slope $dP \/dT$ of a line of phase coexistence $T_m(P)$ is related to the change in enthalpy $\\Delta H$ and volume $\\Delta V$ across the transition. This idea suggests that if a melting curve exhibits a discontinuous change in slope, it correspondingly reflects a discontinuous change in the properties of ice and\/or liquid water at that point. Given that the enthalpy and volume of crystalline solids are only weakly dependent on $T$ and $P$, Mishima and Stanley concluded that the properties of the liquid phase were changing discontinuously (i.e., evidence of an LLT). This argument, if correct, would place the liquid-liquid critical point (LLCP) somewhere in between the ice V and ice III melting lines, with the LLT coexistence line intersecting the ice V and ice IV melting curves at the point of discontinuous change in slope. Mishima also probed the melting lines of ices VI and XIII, but was unable to extend those curves far enough to intersect with the possible LLT line.\n\nThis rationalization for the observed trends, while plausible, remains difficult to definitively explore experimentally due to rapid crystallization of the stable ice I phase upon melting of the other polymorphs. Similar practical challenges also hamper direct experimental demonstration of the LLT. Thus, open questions remain about the true relationship between a possible LLT and the metastable melting of the ice phases. Molecular modeling represents an attractive route to probe these ideas, as one can design simulation methodologies free from unwanted crystallization, which allow us to directly study the relationship between the LLT and the various ices. 
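The Clausius-Clapeyron relation above is straightforward to evaluate numerically. The following Python sketch is purely illustrative (the numerical values are placeholders, not data from this study); it shows how a density increase on melting ($\Delta V < 0$) produces a negative slope $dP/dT$:

```python
# Illustrative use of the Clausius-Clapeyron relation dP/dT = dH / (T_m * dV).
# All numerical values below are placeholders, not data from the paper.

def clausius_clapeyron_slope(delta_h, t_m, delta_v):
    """Slope dP/dT (Pa/K) of a coexistence line from the enthalpy change
    delta_h (J/mol), coexistence temperature t_m (K), and volume change
    delta_v (m^3/mol) across the transition."""
    return delta_h / (t_m * delta_v)

# An ice melting into a denser liquid (delta_v < 0) gives a negative slope,
# i.e. the melting temperature decreases with increasing pressure.
slope = clausius_clapeyron_slope(delta_h=6.0e3, t_m=270.0, delta_v=-1.6e-6)
print(slope)  # negative, on the order of -1e7 Pa/K for these toy values
```

A discontinuity in the liquid's $\Delta H$ or $\Delta V$ at a fixed point on the curve would translate directly into a kink in $T_m(P)$, which is the logic behind Mishima and Stanley's interpretation.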
In parallel, advances in machine learning (ML)-based interaction potentials \\cite{Noe20,Wen22} allow us to develop predictive intermolecular potential models that describe water's interactions at the level of an \\textit{ab initio} reference calculation (e.g., density functional theory), thus enabling purely-predictive simulations of complex collective properties and phase behavior at tractable computational cost \\cite{Gartner20,reinhardt2021quantum,Zhang21,Piaggi21,Schran21,piaggi2022homog}. In this study, we coupled one such ML-potential method (Deep Potential Molecular Dynamics, DPMD) \\cite{Zhang18,Wang18} with several advanced simulation techniques to shed further light on the possible relationship between the LLT and water's liquid-solid phase behavior. \n\n\\section*{Potential scenarios}\n\\label{sec:potential_scenarios}\n\n\\begin{figure*}\n\\includegraphics[width=\\textwidth]{Figure1.pdf}\n\\caption{\\label{fig:Fig1} Hypothetical scenarios describing the possible relationship between ice polymorph melting curves and the LLT. The upper plots show the melting curve of a hypothetical ice polymorph (red solid line), the LLT line (gray solid line), the LLCP (gray circle), and the Widom line (gray dashed line). The lower plots show hypothetical free energy surfaces for the liquid density along the melting curves at the three points marked by \\textbf{+} signs. 
Scenario 1 (left) shows a case where the melting curve is significantly supercritical, Scenario 2 (center) shows a case where the melting curve is slightly supercritical, and Scenario 3 (right) shows a case where the melting curve is subcritical.}\n\\end{figure*}\n\nBefore describing the details of our approach and results, we illustrate schematically the possible classes of behavior in FIG.~\\ref{fig:Fig1}.\nIn this discussion, we assume the existence of an LLT.\nThe elements that we consider in our analysis are the melting curve of an ice polymorph, the liquid-liquid critical point, the liquid-liquid coexistence line (or binodal), and the Widom line. The Widom line can be regarded as an extension of the liquid-liquid coexistence line to supercritical conditions and is defined by the locus of maxima of the correlation length.\nResponse functions, such as the heat capacity at constant pressure $C_P$ and the isothermal compressibility $\\kappa_T$, also have pronounced maxima at supercritical conditions even far from the critical point, and the values of the response functions diverge as the critical point is approached \\cite{xu2005relation}.\nFurthermore, the lines of maxima of the response functions in the $T-P$ plane asymptotically converge to the Widom line as the critical point is approached from supercritical conditions \\cite{xu2005relation}.\n$C_P=(\\partial H\/ \\partial T)_P$ and $\\kappa_T=-(1\/V)(\\partial V\/\\partial P)_T$ are derivatives of the enthalpy $H$ and volume $V$, and thus we expect the fastest change in these liquid-state properties in the immediate vicinity of the Widom line.\nIn turn, a pronounced change in the enthalpy and volume of the liquid at the Widom line will lead to correspondingly pronounced changes in slope of the ice melting line as predicted by Eq.~\\eqref{eq:Clausius-Clapeyron}.\n\nWe now analyze three possible scenarios.\nIf the melting curve of a particular polymorph were to be significantly supercritical (Scenario 1, 
FIG.~\\ref{fig:Fig1} left), the impact of the critical point would be minimal.\nTherefore, we would expect to observe a modest change in slope of the melting curve and the free energy surface of the liquid state would have a single basin that smoothly moves from high to low density as temperature decreases along the melting curve. If the melting curve passed near the critical point but still at supercritical conditions (Scenario 2, FIG.~\\ref{fig:Fig1} center), a more significant but still continuous change in slope might be observed as the liquid properties change swiftly but continuously upon crossing the Widom line. In this case, the free energy surfaces would still show only a single minimum at a given state point, yet they can show significant asymmetry \\cite{Gartner22}, and broadening at the intersection of the melting curve with the Widom line.\nThe broadening of the free energy surface of the liquid as a function of density at the Widom line follows from the fact that density fluctuations $\\sigma_{\\rho}$ are related to $\\kappa_T$ via $\\sigma_{\\rho}^2=\\rho^2 k_B T \\kappa_T\/ V$ where $\\rho$ is the density and $k_B$ the Boltzmann constant \\cite{pathria2016statistical}.\nFinally, if the melting curve were subcritical (Scenario 3, FIG.~\\ref{fig:Fig1} right), a discontinuous change in liquid properties across the LLT would result in a discontinuous change in the slope of the melting curve, and a free energy surface with two basins of equal depth would develop at the point of liquid-liquid phase coexistence (i.e., where the ice melting line meets the LLT line). Moving forward, we will situate our simulation results in the context of these three potential scenarios.\n\n\\section*{Calculation of melting curves}\n\\label{sec:methods1}\n\nOur molecular dynamics simulations were driven by a deep potential model\\cite{Zhang18} of water developed by Zhang et al. 
\\cite{Zhang21}\nThe model has been carefully trained to reproduce with high fidelity the potential energy surface of water based on density functional theory (DFT) calculations with the Strongly Constrained and Appropriately Normed (SCAN) exchange and correlation functional \\cite{Sun15}.\nSCAN is one of the best semilocal functionals available and describes with good accuracy many properties of water and ice, and their anomalies \\cite{Sun16,Chen17,Piaggi21}.\nEven though the model is short-ranged with a cutoff of 6 \\AA, it can capture subtle physical effects, such as polarization \\cite{piaggi2022homog} and many-body correlations \\cite{Zhang18}.\nFurthermore, this model describes qualitatively the behavior of water and ice polymorphs in a region of the phase diagram spanning temperatures 0-500 K and pressures 0-50 GPa \\cite{Zhang21}.\nIt is thus suitable to represent ice III, IV, V, VI, and XIII at the conditions of interest for this work.\nAnother aspect of critical importance is whether the model has a liquid-liquid transition at deeply supercooled conditions.\nWe recently proved rigorously using free energy calculations that this model has a liquid-liquid transition with a critical point at $T_c = 242 \\pm 5$ K and $P_c = 0.295 \\pm 0.015$ GPa \\cite{Gartner22}.\nIt is important to note that SCAN also has limitations.\nLargely due to the self-interaction error in semilocal functionals \\cite{sharkas2020self}, the strength of the hydrogen bond is overestimated, resulting in an upward displacement of melting temperatures of about 40 K with respect to experiments \\cite{Piaggi21}. Additionally, the solid polymorphs ice III and ice XV are incorrectly predicted by SCAN to be metastable at all ($T$, $P$) \\cite{Zhang21}. 
However, given the complexity of water's phase diagram, SCAN predicts the relative location of the various phase boundaries in good agreement with experiment \\cite{Zhang21}.\n\n\\begin{figure*}\n\\includegraphics[width=0.95\\textwidth]{Figure2.pdf}\n\\caption{\\label{fig:Fig2} Overview of the methodology to calculate melting curves of ice polymorphs. The procedure is illustrated using the case of ice III. (A) Number of ice III-like molecules as a function of time in the biased coexistence simulations at various $T$ and $P$. The colors of the curves correspond to the $T$, as labeled to the right of the figure. Empty plots denote that no simulations were run at that ($T$, $P$). The range (324,378) that is reversibly sampled corresponds to one layer of ice III. (B) Free energy surfaces as a function of number of ice III-like molecules, where the dashed line is a linear fit to the free energy surface and the shaded region denotes the uncertainty. Colors match the same $T$ reported in panel (A) above. (C) Chemical potential difference between ice III and liquid at various $T$ and $P$. The gray dashed line is a linear fit to the data, and the shaded region represents one standard deviation of uncertainty in the fit parameters. (D) Melting curve obtained by this procedure, where the blue points represent the $T$ and $P$ of zero chemical potential difference between ice and liquid obtained in panel (C). Error bars represent one standard deviation errors in the fit parameters as shown in (C). The dashed line is the melting curve obtained from the integration of the Clausius-Clapeyron equation. 
(E,F) Simulation snapshots illustrating ice III and the molecular environments used to generate the order parameter \\cite{Piaggi19,Bore22} to drive the biased coexistence (E), and the ice III-liquid coexistence geometry (F).}\n\\end{figure*}\n\nHerein, we computed the melting lines of the ice polymorphs in two stages.\nIn the first stage, we calculated a few points along the liquid-solid coexistence lines using a biased coexistence approach \\cite{Bore22} in which we simulate a particular ice polymorph and liquid water in direct coexistence (FIG.~\\ref{fig:Fig2}F), and use a bias potential to reversibly crystallize and melt a layer of solid (FIG.~\\ref{fig:Fig2}A).\nThis approach was used in a recent work to calculate the phase diagram of the state-of-the-art empirical model of water TIP4P\/Ice \\cite{Abascal05}, and can be regarded as a generalization of the interface pinning approach \\cite{Pedersen13}.\nFrom biased coexistence simulations carried out at different temperatures and pressures, we extract the difference in chemical potential between the liquid and ice from the slope of the free energy surfaces\\cite{Pedersen13,Bore22} (FIG.~\\ref{fig:Fig2}B), and locate the liquid-ice coexistence temperature at a given pressure as the temperature at which this difference is zero (FIG.~\\ref{fig:Fig2}C-D).\nWe applied this procedure to ice III, IV, V, and XIII to obtain a few coexistence points for each polymorph.\nSee FIG.~\\ref{fig:Fig2} for an overview of this procedure for the case of ice III.\nWe show the results for ice IV, V, and XIII in the Supplementary Material \\cite{SI}.\nWe also validated the coexistence points obtained via the biased coexistence method for ice IV and V using standard direct-coexistence simulations (see the Supplementary Material \\cite{SI}).\nWe subsequently obtained continuous and smooth coexistence lines by integrating the Clausius-Clapeyron equation as first proposed by Kofke \\cite{Kofke93}.\nThis technique is based on the 
numerical integration of Eq.~\\eqref{eq:Clausius-Clapeyron} using the enthalpy and volume obtained from constant temperature and pressure simulations of each phase (see Methods section and the Supplementary Material\\cite{SI} for further details).\n\n\\section*{Results}\n\nUsing the techniques described above, we calculated the coexistence points and lines shown in FIG.~\\ref{fig:Fig3}A for ice III, IV, V, and XIII.\nThe circles and error bars correspond to biased coexistence simulations, and the lines were computed by integrating the Clausius-Clapeyron equation.\nWe also show in FIG.~\\ref{fig:Fig3}A the data for the liquid-liquid critical point, liquid-liquid coexistence line, and Widom line reported recently by us \\cite{Gartner22}.\nAccording to these calculations, the melting curves of all ice polymorphs as predicted by the SCAN functional are supercritical, \\textit{i.e.}, they pass above the liquid-liquid critical point.\nThe melting line of ice VI is also supercritical and is shown in the Supplementary Material \\cite{SI}.\nThus, all of them intersect the Widom line rather than the LLT line.\nOur simulations result in melting curves that show a pronounced, yet continuous, change of slope upon crossing the Widom line.\nThis behavior is compatible with the expected change of properties of liquid water from HDL-like to LDL-like as the Widom line is traversed from high to low pressures.\nMoreover, the change in slope is smoother for ice III than for the other polymorphs, consistent with an increasingly abrupt change in the properties of the liquid closer to the critical point.\nThe smoother change in slope of the melting curve of ice III resembles the behavior hypothesized in Scenario 1 described in FIG.~\\ref{fig:Fig1} while the more abrupt change shown by ice V, IV, and XIII is reminiscent of Scenario 2. 
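The Kofke-style integration of the Clausius-Clapeyron equation mentioned above can be sketched in a few lines of Python. This is a minimal, hypothetical illustration: the constant enthalpy and volume differences are placeholder values standing in for the averages obtained from constant-$(T,P)$ simulations of each phase, and a simple midpoint scheme stands in for whatever integrator one prefers.

```python
# Minimal sketch of Gibbs-Duhem (Clausius-Clapeyron) integration of a melting
# line: starting from one known coexistence point (T0, P0), step in temperature
# and update the pressure using dP/dT = dH / (T * dV). The property functions
# dh(t, p) and dv(t, p) are toy stand-ins for simulation averages.

def integrate_melting_line(t0, p0, dh, dv, t_end, n_steps=100):
    """Trace the coexistence line T_m(P) with a midpoint (RK2) scheme.
    Returns lists of temperatures (K) and pressures (Pa) along the line."""
    ts, ps = [t0], [p0]
    step = (t_end - t0) / n_steps
    t, p = t0, p0
    for _ in range(n_steps):
        k1 = dh(t, p) / (t * dv(t, p))              # slope at current point
        t_mid, p_mid = t + 0.5 * step, p + 0.5 * step * k1
        k2 = dh(t_mid, p_mid) / (t_mid * dv(t_mid, p_mid))  # midpoint slope
        t += step
        p += step * k2
        ts.append(t)
        ps.append(p)
    return ts, ps

# Toy example with constant property differences (placeholder values):
ts, ps = integrate_melting_line(
    t0=270.0, p0=1.0e5,
    dh=lambda t, p: 6.0e3,    # J/mol, assumed constant
    dv=lambda t, p: -1.6e-6,  # m^3/mol, assumed constant
    t_end=260.0)
```

With a negative $\Delta V$, lowering the temperature raises the coexistence pressure, as expected for a negatively sloped melting line; in the actual calculation, $\Delta H$ and $\Delta V$ are re-evaluated from simulations at each new state point, which is how the pronounced change in liquid properties at the Widom line feeds back into the slope.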
\n\n\\begin{figure*}\n\\includegraphics[width=\\textwidth]{Figure3.pdf}\n\\caption{\\label{fig:Fig3} Melting curves of ice polymorphs III, IV, V, and XIII, and their location relative to the liquid-liquid critical point. A) Results obtained using a machine learning model based on the SCAN DFT functional. Circles represent melting points calculated using biased coexistence simulations \\cite{Bore22}, crosses were obtained by integrating the Clausius-Clapeyron equation, and lines are spline interpolations of the latter results. We also show the location of the critical point, the liquid-liquid coexistence line, and the Widom line (line of maxima of $\\kappa_T$) as calculated in our previous work \\cite{Gartner22}. B) Melting curves reported by Mishima \\cite{Mishima00} for heavy water based on decompression-induced melting experiments. The approximate location of the discontinuous change in slope in the melting curves of ice IV and V is marked with an X. The shaded region is the location of the critical point estimated by Bachler et al.~\\cite{Bachler21}. We also show the location of the critical point obtained by Shi and Tanaka using experimental measurements \\cite{Shi20}, by Debenedetti et al.\\ using molecular simulations with the empirical water models TIP4P\/2005 and TIP4P\/Ice \\cite{Debenedetti20}, and by Mishima and Sumita\\cite{mishima2023equation} using an extrapolation based on polynomial fits to equation of state data. 
On the left, we show atomic configurations representative of ices III, IV, V, and XIII.}\n\\end{figure*}\n\nOur results also show good agreement between the biased coexistence simulations and the integration of the Clausius-Clapeyron equation in the HDL-like region.\nOn the other hand, it was not possible to perform biased coexistence simulations in the LDL-like region due to the long relaxation times of the LDL-like liquid at those thermodynamic conditions.\nIndeed, even for the comparatively less expensive bulk liquid simulations for the Clausius-Clapeyron integration procedure, we needed long simulations (100 ns) in the LDL-like region for robust statistical certainty.\n\nThe analysis of the melting curves shown in FIG.~\\ref{fig:Fig3}A does not constitute proof of a continuous change in slope since the curves are obtained from a set of points interpolated with a spline, which is by construction smooth and differentiable.\nIn order to provide evidence for the continuous change in slope, we now analyze in detail the properties of liquid water along the melting curves of ice polymorphs.\nIn FIG.~\\ref{fig:Fig4} we show the enthalpy and density of liquid water as a function of pressure.\nBoth properties exhibit a swift change upon crossing of the Widom line and the change is more abrupt as the melting curves approach the critical point, following the sequence ice III $\\rightarrow$ V $\\rightarrow$ IV $\\rightarrow$ XIII.\nWe ruled out that this behavior is a result of ice crystallization by analyzing configurations at regular intervals of 5 ps.\nWe calculated the structural fingerprints CHILL+ \\cite{nguyen2015identification} and Identify Diamond Structure \\cite{Larsen16}, as implemented in Ovito\\cite{Stukowski09}, and we did not find atomic environments compatible with ice I in any of our simulations.\nWe also show in FIG.~\\ref{fig:Fig4} the free energy surfaces (FES) as a function of the liquid water density for selected points along the coexistence 
lines.\nThe FES of the liquid along the melting curves of all studied ice polymorphs show a behavior reminiscent of Scenario 2 of FIG.~\\ref{fig:Fig1}.\nFor all ices, the FES at the state point closest to the Widom line shows clear broadening.\nFurthermore, the FES in the vicinity of the Widom line exhibits deviations from a quadratic form with significant asymmetry and a shoulder suggestive of the metastable free energy minimum that would appear below the critical point.\nTaken together, this behavior provides strong evidence of a continuous crossover from HDL-like to LDL-like liquids as the melting curves of ice III, V, IV, and XIII are traversed towards lower pressures.\nWe remark that none of the melting lines analyzed here have properties of the liquid compatible with subcritical Scenario 3 of FIG.~\\ref{fig:Fig1} that would lead to a discontinuous change in the slope of the melting line.\nBased on the analysis of the liquid properties described above, we conclude that the changes in slope of the melting curves shown in FIG.~\\ref{fig:Fig3} are indeed continuous.\n\n\\begin{figure*}\n\\includegraphics[width=0.9\\textwidth]{Figure4.pdf}\n\\caption{\\label{fig:Fig4} Properties of liquid water along melting curves of several ice polymorphs. Panels A, B, C, and D correspond to ice III, V, IV, and XIII, respectively. For each ice polymorph, we show the enthalpy of liquid water $H_L$, the density of liquid water $\\rho_L$, and the melting temperature $T$ as a function of pressure $P$. The locus of maxima of isothermal compressibility \\cite{Gartner22} is shown in the $T-P$ pane with a dashed line. We also show the free energy surfaces $F$ as a function of the density of liquid water $\\rho_L$. 
The free energy surfaces are color-coded to match the color of points along the $T$ vs $P$ coexistence line to specify the thermodynamic conditions at which they were calculated.}\n\\end{figure*}\n\nWe have so far focused on the properties of the liquid phase.\nHowever, according to Eq.~\\eqref{eq:Clausius-Clapeyron}, the properties of ice can also affect the slope of melting curves.\nIn the Supplementary Material \\cite{SI}, we show the change in enthalpy and density of ice polymorphs along the melting lines.\nThe data show that the changes experienced by the bulk ice polymorphs are much more subtle than the corresponding changes in the properties of the liquid phase.\nIn the pressure range shown in FIG.~\\ref{fig:Fig4}, the densities of ice polymorphs change by less than 1\\% while the density of liquid water changes by 10\\%.\nFurthermore, the enthalpy of ices varies by around 8\\% while the enthalpy of liquid water has a significantly larger variation of around 17\\%.\nThis analysis indicates that the changes of the properties of the liquid phase are the main factor driving the sharp changes in slope observed in FIG.~\\ref{fig:Fig3}.\n\nThe results described above correspond to a purely-predictive model derived from first principles calculations.\nAn alternative approach is to evaluate the melting lines of ice polymorphs using semi-empirical water models that are fit to experimental information.\nFor this reason, we calculated the melting line of ice V in the TIP4P\/Ice model \\cite{Abascal05}, which is a state-of-the-art semi-empirical model for the study of ice polymorphs.\nThe location of the liquid-liquid critical point for this model has been determined accurately by Debenedetti et al.~\\cite{Debenedetti20}.\nWe find that the melting curve of ice V within the TIP4P\/Ice model (shown in the Supplementary Material \\cite{SI}) is also supercritical, in agreement with the SCAN calculations reported above.\n\n\\section*{Discussion}\n\nThe picture that emerges 
from our present results is in contrast with Mishima and Stanley's interpretation \\cite{Mishima98,Mishima00}.\nAs described above, Mishima's interpretation of the experiments considers that the melting curve of ice III is supercritical, and the melting lines of ice IV, V, and XIII are subcritical \\cite{Mishima00}.\nOn the other hand, our calculations based on an \\textit{ab initio} model predict supercritical behavior for all the studied ice polymorphs.\nTo evaluate this discrepancy, we analyze the consistency of each of these two interpretations in light of the most recent evidence for the location of the critical point.\nThe decompression-induced melting curves measured by Mishima \\cite{Mishima00} are shown in FIG.~\\ref{fig:Fig3}B together with recent estimates of the location of the liquid-liquid critical point.\nThe estimates include an extrapolation by Bachler et al. based on experimental data for the high- and low-density spinodals obtained from compression\/decompression experiments on glassy water \\cite{Bachler21}, an analysis by Shi and Tanaka using experimental measurements \\cite{Shi20}, calculations based on molecular simulations with the two realistic empirical water models TIP4P\/Ice and TIP4P\/2005 \\cite{Debenedetti20}, and a very recent extrapolation based on polynomial fits to equation of state data by Mishima and Sumita\\cite{mishima2023equation}.\nIt follows from FIG.~\\ref{fig:Fig3}B that, if such estimates are correct, all melting curves would be supercritical in experiments.\nFurthermore, the relative positions of the ice polymorph melting curves and the critical point provided by SCAN in FIG.~\\ref{fig:Fig3}A seem to be in excellent qualitative agreement with the experimental results shown in FIG.~\\ref{fig:Fig3}B, i.e., the relative stability of all phases is captured qualitatively.\nHowever, the quantitative positions of the melting curves and critical point in the $T-P$ plane differ significantly from experiments, which we 
attribute to the known limitations of SCAN \\cite{Gartner20,Piaggi21}. We note that it is possible that SCAN somehow shifts the location of the critical point relative to the ice melting curves; however, given the qualitative correspondence between FIG.~\\ref{fig:Fig3}A and FIG.~\\ref{fig:Fig3}B, we do not expect this to be the case.\nMoreover, the calculations described above based on a semi-empirical model also show that the melting line of ice V is supercritical, in disagreement with the original interpretation of the experiments and supporting the picture provided by the SCAN functional. \n\nIn FIG.~\\ref{fig:Fig3}B we have combined experimental melting curves for heavy water \\cite{Mishima00} with estimates of the critical point based on experiments carried out using light water \\cite{Bachler21,Shi20} and simulations that ignore nuclear quantum effects \\cite{Debenedetti20}.\nA figure equivalent to FIG.~\\ref{fig:Fig3}B, replacing the melting curves of heavy water ice polymorphs with melting curves of light water \n ices \\cite{mishima2021liquid} is shown in the Supplementary Material \\cite{SI}.\nThe isotopic effect on the melting lines is rather small, with melting temperatures of heavy water around 5 K higher than in light water \\cite{mishima2021liquid}.\nOn the other hand, the isotopic effect on the location of the critical point has recently been estimated by Eltareb et al.~\\cite{eltareb2022evidence} using path integral molecular dynamics and a semiempirical model of water.\nThey found a critical point location for heavy water 18 K and 9 MPa higher than in light water.\nThe combined isotopic effect on the melting curves and the location of the critical point may lead to a relative shift of around 12 K in light water compared to heavy water.\nTherefore, isotopic effects are unlikely to affect the picture shown in FIG.~\\ref{fig:Fig3}.\nWe also stress that our simulation results shown in FIG.~\\ref{fig:Fig3}A ignore nuclear quantum effects.\nThey are 
thus more representative of heavy water than of light water.\nThe discrepancy between our simulation results and Mishima's experiments leads to the question of why a sharp discontinuity in slope was observed in the experimental melting curves for ice V and ice IV. Such behavior could perhaps be explained by immediate crystallization of ice I rather than melting to a metastable (relaxed) liquid state, which of course is not an issue in the simulations due to the separation of time scales of ice nucleation and liquid-like equilibration\/relaxation. In this context, it should be noted that Mishima's hypothesized liquid-liquid phase transition is located very close to the homogeneous nucleation locus. Furthermore, the behavior reported by Mishima for the melting curves past the hypothesized LLT\\cite{Mishima00} is remarkably noisy on the low-pressure side. Experimental studies explicitly targeted towards this issue are needed to definitively evaluate this hypothesis.\n\n\\section*{Conclusions}\n\nOur results suggest that experiments reported by Mishima and Stanley that pointed to the existence of a liquid-liquid critical point at $\\sim$0.1 GPa and $\\sim$220 K \\cite{Mishima98}, and subcritical melting curves for ice IV, V, and XIII \\cite{Mishima00}, might call for a different interpretation.\nWhile our first principles calculations do support the existence of a liquid-liquid critical point \\cite{Gartner22}, they suggest that it is located at lower temperatures than had been hitherto assumed, such that the melting curves of ice III, IV, V, VI, and XIII are in reality supercritical.\nThe relative stability of phases reported here is in excellent agreement with experiments, yet from a quantitative point of view our simulations are limited by the accuracy of our chosen semilocal DFT functional.\nFuture work could test our findings using more sophisticated DFT functionals or higher levels of electronic-structure theory.\nConsidering the plethora of known ice 
polymorphs, and the ones that continue to be discovered and characterized \\cite{gasser2021structural}, the search for ices with subcritical melting curves may be a fruitful endeavor.\nWe also hope that our work will stimulate further experimental efforts to elucidate the behavior of melting curves in the vicinity of the liquid-liquid critical point and definitively explain the discrepancies between the experimental and computational results.\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} diff --git a/data_all_eng_slimpj/shuffled/split2/finalzzdloi b/data_all_eng_slimpj/shuffled/split2/finalzzdloi new file mode 100644 index 0000000000000000000000000000000000000000..3546eb7b943898f4ef838d8069e947bde694f267 --- /dev/null +++ b/data_all_eng_slimpj/shuffled/split2/finalzzdloi @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction and Preliminary Discussion}\nAs for standard terminology and other terminology used in this paper, we refer to the book by Bondy and Murty, \\cite{Bon}, and to the papers quoted in the references. Let $G$ be a connected graph. A \\emph{2-block} is a 2-connected graph or a block of $G$ containing more than two vertices. The square of a graph $G$, denoted $G^2$, is the graph obtained from $G$ by joining any two nonadjacent vertices which have a common neighbor, by an edge.\n\nIt was shown in 1970 and published in 1974 that the square of every 2-block contains a hamiltonian cycle, \\cite{Fle2}. Key in proving this was the existence of EPS-graphs $S$ in connected bridgeless graphs $G$, where $S$ is the edge-disjoint union of a not necessarily connected eulerian subgraph $E$ and a linear forest $P$, and $S$ is connected and spans $G$, \\cite{Fle1}. In subsequent papers \\cite{Fle}, \\cite{FleHob} the existence of various types of EPS-graphs was established. Their relevance was based on the fact that the total graph $T(G)$ of any connected graph $G$ other than $K_1$ is hamiltonian if and only if $G$ has an EPS-graph, \\cite{FleHob}. 
This and the theory of EPS-graphs led to a description of the most general structure the block-cutvertex graph $\\mbox{bc}(G)$ of a graph $G$ may have such that $T(G)$ is hamiltonian; if $\\mbox{bc}(G)$ does not have the corresponding structure, then exchanging certain 2-blocks in $G$ with some special 2-blocks yields a graph $G^*$ such that $\\mbox{bc}(G)$ and $\\mbox{bc}(G^*)$ are isomorphic but $T(G^*)$ is not hamiltonian, \\cite{FleHob}. In dealing with hamiltonian cycles and hamiltonian paths by methods developed up to that point, it was shown in \\cite{Fle} that in the square of graphs hamiltonicity and vertex-pancyclicity are equivalent concepts, and so are hamiltonian connectedness and panconnectedness. In this context Theorem~\\ref{2-blockcycle} stated below was established as a tool needed to prove the equivalences just mentioned.\n\nHowever, in the course of time much shorter proofs of Fleischner's Theorem were developed \\cite{Geo}, \\cite{Riha}; the same applies to Theorem \\ref{2-blockcycle} below, \\cite{MutRau}. More recently, an algorithm yielding a hamiltonian cycle in the square of a 2-block in linear time was developed, \\cite{AlsGeo}. The methods developed in these much shorter proofs (including the algorithm just mentioned) do not seem to yield short proofs of Theorems \\ref{H_4} and \\ref{F_4} below, \\cite{EkFle}, \\cite{FleChia}. These latter theorems are, on the other hand, instrumental in proving the central results of this paper, i.e., Theorems \\ref{hamiltonian} and \\ref{hamconnected}, and related algorithms. \n\nLet $\\mbox{bc}(G)$ denote the \\emph{block-cutvertex graph} of $G$. Blocks corresponding to leaves of $\\mbox{bc}(G)$ are called \\emph{endblocks}; all other blocks are called \\emph{innerblocks}. Note that a block in a graph $G$ is either a 2-block or a bridge of $G$. 
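Since all results below live in the square $G^2$ defined in the opening paragraph, a minimal sketch of the squaring operation itself may help fix notation. The adjacency-list representation and the function name below are ours, not from the paper; this is an illustrative sketch, not part of the algorithms discussed later.

```python
from itertools import combinations

def graph_square(adj):
    """Square of a graph: join every pair of distinct vertices at distance
    at most 2.  adj maps each vertex to an iterable of its neighbours
    (every vertex must appear as a key)."""
    sq = {v: set(ns) for v, ns in adj.items()}
    for v, ns in adj.items():
        for u, w in combinations(ns, 2):   # u and w share the neighbour v
            sq[u].add(w)
            sq[w].add(u)
    return {v: sorted(ns) for v, ns in sq.items()}
```

For instance, the square of the path on four vertices gains the two edges joining vertices at distance two, so its square contains a hamiltonian cycle even though the path itself does not.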
For each cutvertex $i$ of $G$, let $k_{i}$ be the number of 2-blocks of $G$ which include vertex $i$ and let $\\mbox{bn}(i)$ be the number of nontrivial bridges of $G$ which are incident with vertex $i$. In what follows, a bridge is called nontrivial if it is not incident to a leaf.\n \nIn Theorem \\ref{hamiltonian}, we introduce an array $m_{i}(B)$ of numbers with an entry for each pair consisting of a cutvertex $i$ and a 2-block $B$ of $G$. We may think of this number $m_{i}(B)$ as the number of edges of $B$ incident with $i$ which are possibly contained in a hamiltonian cycle in $G^{2}$. \n\nThe statement of Theorem \\ref{hamiltonian} describes the most general block-cutvertex structure a graph $G$ may have in order to guarantee that $G^2$ is hamiltonian, using parameters $m_{i}(B)$ as in \\cite{FleHob}.\n\n\\begin{theorem}\n \\label{hamiltonian}\n Let $G$ be a connected graph with at least three vertices. Let the 2-blocks of $G$ be labelled $B_{1},B_{2},...,B_{n}$. Let the cutvertices of $G$ be labelled $1,2,...,s$. Suppose there is a labelling $m_{i}(B_{t})$ for each $i\\in\\{1,2,...,s\\}$ and each $t\\in\\{1,2,...,n\\}$ such that the following conditions are fulfilled.\n \\begin{mathitem}\n \\item[1)] $0\\leq m_{i}(B_t)\\leq2$ for all $i$ and all 2-blocks $B_t$;\n \\item[2)] for a 2-block $B_t$, $m_{i}(B_t)=0$ if and only if cutvertex $i$ is not in $V(B_t)$;\n \\item[3)] for a 2-block $B_t$, $m_{i}(B_t)\\geq\\mbox{bn}(i)$ if cutvertex $i\\in V(B_t)$;\n \\item[4)] $\\mbox{bn}(i)\\leq2$ for all $i\\in\\{1,2,...,s\\}$; \n \\item[5)] $\\sum_{i=1}^{s}m_{i}(B_t)\\leq4$ for each 2-block $B_t$ of $G$ and, if $m_i(B_t)=2$ \n for some $i$, then $\\sum_{i=1}^sm_{i}(B_t)\\leq3$; and\n \\item[6)] $\\sum_{t=1}^{n}m_i(B_t)\\geq2k_{i}+\\mbox{bn}(i)-2$ for each $i\\in\\{1,2,...,s\\}$.\n \\end{mathitem}\n Then $G^{2}$ is hamiltonian. 
\n\nMoreover, if the labelling $m_{i}(B_{t})$ satisfying conditions 1), 2) and 3) is given and at least one of conditions 4), 5), 6) is violated by some $G$, then there exists a class of graphs $G'$ with non-hamiltonian square such that $\\mbox{bc}(G')$ and $\\mbox{bc}(G)$ are isomorphic.\n\\end{theorem}\n\nAlso, we obtain a similar result for hamiltonian connectedness (Theorem~\\ref{hamconnected}). Quite surprisingly, its formulation is much simpler than that of Theorem \\ref{hamiltonian}.\n\n\\begin{theorem}\n \\label{hamconnected}\n Let $G$ be a connected graph such that the following conditions are fulfilled:\n \\begin{itemize}\n \\item[1)] there is no nontrivial bridge of $G$;\n \\item[2)] every block contains at most 2 cutvertices.\n \\end{itemize}\n Then $G^2$ is hamiltonian connected.\n\nMoreover,\n\\begin{itemize}\n \\item[$\\cdot$] if a graph $G$ contains a nontrivial bridge, then $G^2$ is not hamiltonian connected; \n \\item[$\\cdot$] if $G$ contains a block containing more than 2 cutvertices, then there is a graph $G'$ such that $\\mbox{bc}(G)$ and $\\mbox{bc}(G')$ are isomorphic but $(G')^2$ is not hamiltonian connected. \n\\end{itemize}\n\\end{theorem}\n\nA fundamental result regarding hamiltonicity in the square of a 2-block is the following theorem.\n\n\\begin{theorem}\\emph{\\textbf{\\cite{Fle}}}\n\\label{2-blockcycle}\nSuppose $v$ and $w$ are two arbitrarily chosen vertices of a $2$-block $G$.\nThen $G^2$ contains a hamiltonian cycle $C$ such that the edges of $C$ incident to $v$ are in $G$ and at least one of the edges of $C$ incident to $w$ is in $G$. Furthermore, if $v$ and $w$ are adjacent in $G$, then these are three different edges. \n\\end{theorem}\n\nThe hamiltonian theme in the square of a 2-block has been recently revisited (\\cite{EkChiaFle}, \\cite{EkFle}, \\cite{FleChia}), yielding the following results which are essential for this paper. 
\n\nA graph $G$ is said to have the \\emph{$\\mathcal{H}_{k}$ property} if for any given vertices $x_1,...,x_k$ there is a hamiltonian cycle in $G^2$ containing distinct edges $x_1y_1,...,x_ky_k$ of~$G$.\n \n\\begin{theorem}\\emph{\\textbf{\\cite{EkFle}}}\n \\label{H_4}\n Every 2-block $G$ on at least 4 vertices has the $\\mathcal{H}_{4}$ property, and there are 2-blocks of arbitrary order greater than 4 without the $\\mathcal{H}_{5}$ property.\n\\end{theorem}\n\nBy a \\emph{$uv$-path} we mean a path from $u$ to $v$ in $G$. If a $uv$-path is hamiltonian, we call it a \\emph{$uv$-hamiltonian path}. Let $A=\\{x_{1},x_{2},...,x_{k}\\}$ be a set of $k\\geq 3$ distinct vertices in $G$. An $x_{1}x_{2}$-hamiltonian path in $G^{2}$ which contains $k-2$ distinct edges $x_{i}y_{i}\\in E(G), i=3,...,k$, is said to be $\\mathcal{F}_{k}$. A graph $G$ is said to have the \\emph{$\\mathcal{F}_{k}$ property} if, for any set $A=\\{x_{1},x_{2},...,x_{k}\\}\\subseteq V(G)$, there is an $\\mathcal{F}_{k}$ $x_{1}x_{2}$-hamiltonian path in $G^{2}$.\n\n\\begin{theorem}\\emph{\\textbf{\\cite{FleChia}}}\n \\label{F_4}\n Every 2-block on at least 4 vertices has the $\\mathcal{F}_{4}$ property. \n\\end{theorem}\n\nA graph $G$ is said to have the \\emph{strong $\\mathcal{F}_{3}$ property} if, for any set of 3 vertices $\\{x_{1},x_{2},x_{3}\\}$ in $G$, there is an $x_{1}x_{2}$-hamiltonian path in $G^{2}$ containing distinct edges $x_{3}z_{3},x_{i}z_{i}\\in E(G)$ for a given $i\\in\\{1,2\\}$. Such an $x_{1}x_{2}$-hamiltonian path in $G^{2}$ is called a strong $\\mathcal{F}_{3}$ $x_{1}x_{2}$-hamiltonian path.\n\n\\begin{theorem}\\emph{\\textbf{\\cite{FleChia}}}\n \\label{strongF_3}\n Every 2-block has the strong $\\mathcal{F}_{3}$ property.\n\\end{theorem}\n\n\\begin{theorem}\\emph{\\textbf{\\cite{FleChia}}}\n\t\\label{strongF_3ends}\nLet $G$ be a $2$-connected graph and let $x, y$ be two vertices in $G$. 
Then $G^2$ has an $xy$-hamiltonian path $P(x, y)$ such that\n\n(i) $xz \\in E(G) \\cap E(P(x,y))$ for some $z \\in V(G)$, \nand\n\n(ii) either $yw \\in E(G) \\cap E(P(x,y))$ for some $w\\in V(G)$, or else $P(x, y)$ contains an edge $uv$ for some vertices $u, v \\in N(y)$.\n\\end{theorem} \n\n\\section{Proofs and algorithms}\n\nPROOF OF THEOREM \\ref{hamiltonian}\n\n\\begin{proof}\n Set $P_0=G-\\cup_{t=1}^{n}B_t$. Then every component of $P_0$ is a tree. Since $\\mbox{bn}(i)\\leq 2$ by condition 4), every component of $P_0$ is in fact a caterpillar. \n \n For every caterpillar component $T$ of $P_0$ except $T=K_2$, we have the following observation, which can be proved easily. \n\n\\medskip\n\n\\noindent\n\\emph{Observation: Let $T$ be a caterpillar with at least three vertices and $P=x_1x_2...x_m$ be some longest path in $T$. Then $T^2$ contains a hamiltonian cycle containing edges $x_1x_2, x_{m-1}x_m$ and different edges $u_jv_j$, where $u_j,v_j\\in N_G(x_j)$ for $j=2,3,...,m-1$.}\n\nSee Figure \\ref{Catter} for an illustration, in which for $x_3$ we have $u_3=x_2$ and $v_3=x_4$.\n\n\\begin{figure}[ht]\n\\begin{center}\n\\end{center}\\caption{Hamiltonian cycle in a caterpillar for $m=7$ (bold edges)}\\label{Catter}\n\\end{figure} \n\n\\medskip\n\n For every 2-block $B_t$, the square $(B_t)^2$ contains a hamiltonian cycle which is of one of two types, depending on the labelling $m_{i}(B_{t})$:\n \nIf $m_i(B_t)\\neq 2$ for every $i=1,2,...,s$, then for at most 4 cutvertices $a,b,c,d$ it holds that $m_j(B_t)=1$ for $j=a,b,c,d$ by condition 5). By Theorem \\ref{H_4}, $(B_t)^2$ has a hamiltonian cycle $C_t$ containing 4 different edges $aa',bb',cc',dd'$ of~$B_t$. \n \nIf $m_i(B_t)=2$ for some $i\\in\\{1,2,...,s\\}$, then at most one cutvertex $a$ has $m_a(B_t)=1$ by condition 5). By Theorem \\ref{2-blockcycle}, $(B_t)^2$ has a hamiltonian cycle $C_t$ containing 3 different edges $ii',ii'',aa'$ of $B_t$. 
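The weaving construction behind the Observation can be made concrete. Below is a minimal Python sketch under our own conventions (the function name and the spine/legs input format are ours, not from the paper): the cycle starts with the tree edge $x_1x_2$, runs forward through alternate spine vertices and the intermediate leaf groups, turns at the far end so that the tree edge $x_{m-1}x_m$ is used, and returns through the remaining spine vertices and leaf groups.

```python
def caterpillar_square_hamcycle(spine, legs):
    """Hamiltonian cycle in the square of a caterpillar T (sketch).

    spine -- a longest path x_1,...,x_m of T as a list (m >= 3);
    legs  -- dict mapping a 0-based spine index j to the list of leaves
             hanging off spine[j] (only internal indices 1..m-2 occur).
    Returns a vertex list; consecutive vertices (cyclically) are at
    distance <= 2 in T, and the tree edges x_1x_2 and x_{m-1}x_m are used.
    """
    m = len(spine)
    leg = lambda j: legs.get(j, [])
    cyc = [spine[0], spine[1]]            # start with the tree edge x_1 x_2
    j = 2                                  # forward pass: leaf groups at even
    while j + 1 <= m - 1:                  # 0-based indices, spine at odd ones
        cyc += leg(j) + [spine[j + 1]]
        j += 2
    if cyc[-1] != spine[-1]:               # m odd: reach x_m via the tree edge
        cyc.append(spine[-1])
        k = m - 2
    else:                                  # m even: turn back along x_m x_{m-1}
        cyc.append(spine[-2])
        k = m - 3
    while k >= 1:                          # backward pass: the remaining leaf
        cyc += leg(k) + [spine[k - 1]]     # groups and spine vertices
        k -= 2
    return cyc[:-1]                        # last vertex is x_1 again; drop it
```

Each hop is legal in $T^2$ because consecutive entries either share a tree edge or share a common neighbour on the spine; in particular, the edge skipping an internal spine vertex $x_j$ plays the role of $u_jv_j$ in the Observation.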
\n\nThe union of hamiltonian cycles $C_t$ in $(B_t)^2$, for $t=1,2,...,n$, hamiltonian cycles in the square of each caterpillar (nontrivial component of $P_0$) and trivial components of $P_0$ is a connected spanning subgraph $S$ of $G^2$. \n\nWe construct a hamiltonian cycle $C$ in $G^2$ from $S$ repeating step by step the following procedure for every cutvertex $i$ of $G$ with $m_{i}(B)\\geq 1$ for some 2-block $B$. \n\nIf no such cutvertex $i$ exists, then $n=0$ and $G=P_0$ is a caterpillar. Hence $S$ is a hamiltonian cycle in $G^2$. Otherwise we join all hamiltonian cycles from $S$ containing $i$ together with trivial components of $P_0$ containing $i$ to one cycle in the following way.\n\n\\medskip\n\n\\noindent\nFirst assume that $\\mbox{bn}(i)=0$.\n\nBy condition 6) we have $\\sum_{t=1}^{n}m_i(B_t)\\geq 2k_i-2$. Without loss of ge\\-nerality for $k_i>1$ we may assume that $m_i(B_1)\\geq 1$, $m_i(B_2)\\geq 1$ and $m_i(B_3)=m_i(B_4)=...=m_i(B_{k_i})=2$, where $m_i(B_t)$ corresponds to the number of edges of $B_t$ incident to $i$ in $C_t$. If $k_i=1$, then by condition 2) we have $m_i(B_1)\\geq 1$.\n\nWe find a cycle $C^i$ on $\\cup_{r=1}^{k_i}V(C_r)\\cup L$, where $L$ is the set of all leaves incident to $i$, by appropriately replacing edges of $C_r\\cap B_r$, $r=1,2,...,k_i$, incident to $i$ (guaranteed by the definition of $m_i(B_t)$) with edges of $G^2$ joining vertices in different $C_r$ adjacent to $i$ and leaves adjacent to $i$. Note that we preserve the properties given by $m_j(B_t)$ for all $j\\neq i$.\n\n\\noindent\nNow assume that $\\mbox{bn}(i)=1$.\n\nBy condition 6) we have $\\sum_{t=1}^{n}m_i(B_t)\\geq 2k_i+1-2=2k_i-1$. 
Without loss of generality we may assume that $m_i(B_1)\\geq 1$ and $m_i(B_2)=m_i(B_3)=...=m_i(B_{k_i})=2$, where $m_i(B_t)$ corresponds to the number of edges of $B_t$ incident to $i$ in $C_t$.\nLet $T$ be the component of $P_0$ containing $i$.\n\nIf $T=K_2=ii'$, where $i'$ is also a cutvertex of $G$ with $m_{i'}(B)\\geq 1$ ($T$ is a trivial component of $P_0$), then we find a cycle $C^i$ on $\\cup_{r=1}^{k_i}V(C_r)\\cup T$ containing the edge $ii'$ by appropriately replacing edges of $C_r\\cap B_r$, $r=1,2,...,k_i$, incident to $i$ (guaranteed by the definition of $m_i(B_t)$) with edges of $G^2$ joining $i'$ and vertices in different $C_r$ adjacent to $i$. Also here we preserve the properties given by $m_j(B_t)$ for all $j\\neq i$.\n\nIf $T$ is a nontrivial component of $P_0$, then $T^2$ contains a hamiltonian cycle $C_T$ containing the end-edges of a fixed longest path $P$ in $T$ (we choose end-edges containing cutvertices of $G$ with $m_{i}(B_t)\\geq 1$); see the Observation above. Again we find a cycle $C^i$ on $\\cup_{r=1}^{k_i}V(C_r)\\cup V(C_T)$ by appropriately replacing edges of $C_r\\cap B_r$, $r=1,2,...,k_i$, incident to $i$ (guaranteed by the definition of $m_i(B_t)$) and the end-edge $ii^*$ of $P$ with edges of $G^2$ joining $i^*$ and vertices in different $C_r$ adjacent to $i$. Again we preserve the properties given by $m_j(B_t)$ for all $j\\neq i$ and by $C_T$.\n\n\\noindent\nFinally assume that $\\mbox{bn}(i)=2$.\n\nBy condition 6) we have $\\sum_{t=1}^{n}m_i(B_t)\\geq 2k_i+2-2=2k_i$. It follows necessarily that $m_i(B_1)=m_i(B_2)=...=m_i(B_{k_i})=2$, where $m_i(B_t)$ corresponds to the number of edges of $B_t$ incident to $i$ in $C_t$.\n\nLet $T$ be the nontrivial component of $P_0$ containing $i$. Note that $i$ is not an endvertex of $T$ since $\\mbox{bn}(i)=2$. 
Then $T^2$ contains a hamiltonian cycle $C_T$ containing the end-edges of a fixed longest path in $T$ (we choose end-edges containing cutvertices of $G$ with $m_{i}(B_t)\\geq 1$) and an edge $u_iv_i$ of $G^2$ where $u_i,v_i\\in N_G(i)$ (see the Observation above). We find a cycle $C^i$ on $\\cup_{r=1}^{k_i}V(C_r)\\cup V(C_T)$ by appropriately replacing edges of $C_r\\cap B_r$, $r=1,2,...,k_i$, incident to $i$ (guaranteed by the definition of $m_i(B_t)$) and the edge $u_iv_i$ of $C_T$ with edges of $G^2$ joining $u_i, v_i$ and vertices in different $C_r$ adjacent to $i$ if $k_i>1$. If, however, $k_i=1$, then $u_i$ and $v_i$ are joined to the neighbors of $C_r\\cap B_r$ in $N_G(i)$. Also here we preserve the properties given by $m_j(B_t)$ for all $j\\neq i$ and by $C_T$.\n\nNow we successively choose the next cutvertex $i$ with $m_{i}(B)\\geq 1$ for some 2-block $B$, and we use all cycles formed in the previous steps instead of the previously formed cycles. Note that we preserve all properties given by $m_j(B)$ for all $j\\neq i$ in every case. We stop once we obtain a hamiltonian cycle in $G^2$, as required.\n\n\\vskip 8mm\n\nNow assume that there is no labelling satisfying conditions 1) - 6), that is, the labelling $m_i(B_t)$ satisfying conditions 1), 2) and 3) is given and at least one of conditions 4), 5), 6) is violated. We show that there exists a class of graphs $G'$ with non-hamiltonian square such that $\\mbox{bc}(G')$ and $\\mbox{bc}(G)$ are isomorphic.\n\n\\bigskip\n\n\\begin{figure}[ht]\n\\begin{center}\n\\end{center}\\caption{Graphs without hamiltonian square}\\label{Counter}\n\\end{figure} \n\n\\noindent\n\\emph{Condition 4) does not hold.} \n\nHence $\\mbox{bn}(i)\\geq 3$ for at least one $i\\in\\{1,2,...,s\\}$. Clearly this yields a class of graphs $G'$ such that the square of every such graph $G'$ does not contain a hamiltonian cycle (if we try to construct a hamiltonian cycle in the square, then the degree of the cutvertex $i$ is at least 3, a contradiction), e.g. 
see graphs in Figure \\ref{Counter} a), where $H_1$ is an arbitrary connected graph, $H_2$, $H_3$, $H_4$ are arbitrary connected graphs with at least one edge each and $\\mbox{bn}(i)=3$. Note that conditions 5) and 6) may hold.\n\n\\vskip 0.9cm\n\n\\noindent\n\\emph{Condition 5) does not hold.}\n\nHence $\\sum_{i=1}^{s}m_{i}(B)\\geq 5$ for some 2-block $B$ and $m_i(B)<2$ for all $i$, or $\\sum_{j=1}^{s}m_{j}(B)\\geq 4$ for some 2-block $B$ and $m_i(B)=2$ for some $i\\in\\{1,2,...,s\\}$.\n\nFirst suppose that $k=\\sum_{i=1}^{s}m_{i}(B)\\geq 5$ for some 2-block $B$ of $G$ and $m_i(B)<2$ for all $i$. Clearly $B$ has exactly $k$ cutvertices by condition 2). Then we exchange $B$ with $K_{2,k}$, where the $k$ 2-valent vertices are cutvertices of $G$, and all other blocks with arbitrary blocks to get a class of graphs $G'$ such that $\\mbox{bc}(G')$ and $\\mbox{bc}(G)$ are isomorphic. The square of every such graph $G'$ does not contain a hamiltonian cycle (if we try to construct a hamiltonian cycle in the square, then the degree of at least one of the two $k$-valent vertices of $K_{2,k}$ is at least 3, a contradiction), e.g. see graphs in Figure \\ref{Counter} b), where $k=5$ and $H_1,...,H_5$ are arbitrary connected graphs with at least one edge each. Note that conditions 4), 6) and the second part of condition 5) may hold.\n\nNow suppose that $\\sum_{j=1}^{s}m_{j}(B)\\geq 4$ for some 2-block $B$ and $m_i(B)=2$ for some $i$. If $B$ contains at least 5 cutvertices of $G$, then we continue similarly as above. If $B$ contains $k$ cutvertices of $G$ where $2\\leq k \\leq 4$, then without loss of generality we may assume that we tried to set the labelling $m_i(B_t)$ satisfying first condition 5) and subsequently condition 6). 
Hence $\\mbox{bn}(i)\\geq 2$ and $\\mbox{bn}(j)\\geq 2$ where $j$ is the second cutvertex of $G$ in $B$ if $k=2$, otherwise we find a labelling $m_i(B_t)$ satisfying condition 5), a contradiction (see Algorithm~1 cases e) and f) below).\n\nFor $k=3,4$ we exchange $B$ with a cycle $C_k$ to get a class of graphs $G'$ such that $\\mbox{bc}(G')$ and $\\mbox{bc}(G)$ are isomorphic. The square of every such graph $G'$ does not contain a hamiltonian cycle (if we try to construct a hamiltonian cycle in the square, then the degree of the cutvertex $i$ is at least 3, a contradiction), e.g. see graphs in Figure \\ref{Counter} c), where $k=3$ and $H_1,...,H_4$ are arbitrary connected graphs with at least one edge each. Note that conditions 4), 6) and the first part of condition 5) may hold.\n\nFor $k=2$, we exchange $B$ with $K_{2,3}$, where two of the three 2-valent vertices are $i$ and $j$, to get a class of graphs $G'$ such that $\\mbox{bc}(G')$ and $\\mbox{bc}(G)$ are isomorphic. The square of every such graph $G'$ does not contain a hamiltonian cycle (it is not possible to find a hamiltonian cycle in the square containing the third 2-valent vertex different from $i$, $j$, a contradiction), e.g. see graphs in Figure \\ref{Counter} d), where $H_1,...,H_4$ are arbitrary connected graphs with at least one edge each. Note that conditions 4), 6) and the first part of condition 5) may hold.\n\n\\medskip\n\n\\noindent\n\\emph{Condition 6) does not hold.}\n \nHence $\\sum_{t=1}^{n}m_{i}(B_t)<2k_i+\\mbox{bn}(i)-2$ for some $i$ and consequently $m_i(B_t)=1$ for at least $3-\\mbox{bn}(i)$ 2-blocks containing $i$. Note that, clearly, $\\mbox{bn}(i)<2$ with respect to condition 3).\n\nLet $r$ be the number of 2-blocks with $m_i(B_t)=1$. Each of these 2-blocks contains either exactly 2 cutvertices of $G$ or at least 3 cutvertices of $G$. Note that for 2-blocks containing only the cutvertex $i$ we have $m_i(B_t)=2$ (see Algorithm 1 case d) below). 
We exchange every 2-block containing exactly 2 cutvertices of $G$ with a cycle $C_3$ and every 2-block containing $k$ cutvertices of $G$, $k\\geq 3$, with a cycle $C_k$. In the first case note that we assume without loss of generality that there is no labelling such that we switch the values 1 and 2 for both cutvertices of this 2-block to get a permissible labelling (again see Algorithm~1 case e) below).\n\nSince $r\\geq 3-\\mbox{bn}(i)$, by the exchange of 2-blocks mentioned above we get a class of graphs $G'$ such that $\\mbox{bc}(G')$ and $\\mbox{bc}(G)$ are isomorphic. The square of every such graph $G'$ does not contain a hamiltonian cycle (if we try to construct a hamiltonian cycle in the square, then the degree of the cutvertex $i$ is at least 3, a contradiction), e.g. see graphs in Figure \\ref{Counter} $e_1$) and $e_2$). For the graph in Figure \\ref{Counter} $e_1$) it holds that $r=3-\\mbox{bn}(i)=3-1=2$, the 2-block $B_1$ has exactly 2 cutvertices of $G$, the 2-block $B_2$ has $k=3$ cutvertices of $G$ (and hence $B_1$, $B_2$ are isomorphic to $C_3$) and $H_1,...,H_5$ are arbitrary connected graphs with at least one edge. For the graph in Figure \\ref{Counter} $e_2$) it holds that $r=3-\\mbox{bn}(i)=3-0=3$, the 2-block $B_1$ has exactly 2 cutvertices of $G$, the 2-block $B_2$ has $k=3$ cutvertices of $G$, the 2-block $B_3$ has $k=4$ cutvertices of $G$ (hence $B_1$, $B_2$ are isomorphic to $C_3$ and $B_3$ is isomorphic to $C_4$) and $H_1,...,H_7$ are arbitrary connected graphs with at least one edge. Note that conditions 4) and 5) may hold.\n\nThis finishes the proof of Theorem \\ref{hamiltonian}.\n\\end{proof}\n\nIf there is a graph $G$ such that every labelling $m_i(B_t)$ violates at least one of the conditions 4) - 6) of Theorem \\ref{hamiltonian}, then there is a graph $G'$ with $\\mbox{bc}(G')=\\mbox{bc}(G)$ such that $(G')^2$ is not hamiltonian, as has been shown in the proof of Theorem~\\ref{hamiltonian}. 
On the other hand, if we are able to construct a labelling $m_i(B_t)$ satisfying conditions 1) - 6) using the following algorithm, then $G^2$ is hamiltonian, as has been shown in the proof of Theorem~\\ref{hamiltonian}.\n\n\\bigskip\n\n\\noindent\n\\emph{ALGORITHM 1:} \n\nSet $P_0=G-\\cup_{t=1}^nB_t$. If any component of $P_0$ is not a caterpillar, then $\\mbox{bn}(i)\\geq 3$ for some $i\\in\\{1,2,...,s\\}$, contradicting condition 4) in Theorem \\ref{hamiltonian}, and $G^2$ is not hamiltonian (e.g. see Figure \\ref{Counter} a)). STOP.\n\nIf $G=P_0$, then $G$ is a caterpillar, $G^2$ is hamiltonian (see the Observation in the proof of Theorem \\ref{hamiltonian}) and $m_i(B_t)$ is not defined ($n=0$). STOP.\n\nIf $G$ is a 2-block, then $G^2$ is hamiltonian by Theorem \\ref{2-blockcycle} and $m_i(B_t)$ is not defined ($s=0$ and $n=1$). STOP.\n\nWe set $G_0=G-P_0$ and $m_i(B_t)=0$ if $i\\notin V(B_t)$ for $i\\in\\{1,2,...,s\\}$ and $t\\in\\{1,2,...,n\\}$.\n\n\\bigskip\n\n\\noindent\nSTART\n\nWe choose a 2-block $B$ containing at most 1 cutvertex of $G_0$. Note that $B$ is either a component of $G_0$ or an endblock of some component of $G_0$. If such an endblock does not exist, we choose a 2-block $B$ as a component of $G_0-H$ or an endblock of $G_0-H$ where $H$ is the union of all 2-blocks for which the labelling $m_i(B_t)$ is already set. Let $c_1,c_2,...,c_k$ be all cutvertices of $G$ contained in $B$, $k\\geq 1$. \n\n\\medskip\n\n\\begin{itemize}\n \\item[a)] If $k\\geq 5$, then by condition 2) $m_{c_i}(B)\\geq 1$ for $i=1,2,...,k$. Hence condition 5) in Theorem \\ref{hamiltonian} does not hold and $G^2$ may not be hamiltonian (e.g. see Figure \\ref{Counter} b)). STOP.\n\n \\item[b)] If $k\\geq 3$ and $\\mbox{bn}(c_i)=2$ for some $i\\in\\{1,2,...,k\\}$, then by condition 3) $m_{c_i}(B)=2$ and by 2) $m_{c_j}(B)\\geq 1$ for $j=1,2,...,k$. Hence condition~5) in Theorem \\ref{hamiltonian} does not hold and $G^2$ may not be hamiltonian (e.g. see Figure \\ref{Counter} c)). 
STOP.\n\n \\item[c)] If $k=2$ and $\\mbox{bn}(c_1)=\\mbox{bn}(c_2)=2$, then by condition 3) $m_{c_1}(B)=2$ and $m_{c_2}(B)=2$. Hence condition 5) in Theorem \\ref{hamiltonian} does not hold and $G^2$ may not be hamiltonian (e.g. see Figure \\ref{Counter} d)). STOP.\n\n \\item[d)] If $k=1$, then we set $m_{c_1}(B)=2$ (we maximize the values $m_i(B_t)$ with respect to condition 6) in Theorem \\ref{hamiltonian}). Note that, if the labelling $m_{i}(B_t)$ is set for all 2-blocks incident with $c_1$, then condition 6) holds for cutvertex $c_1$ with respect to the choice of $B$. \n \n If the labelling $m_{i}(B_t)$ is set for all 2-blocks of $G$, then the labelling $m_i(B_t)$ satisfies the conditions of Theorem \\ref{hamiltonian} and $G^2$ is hamiltonian. STOP.\n \n Otherwise we go to START. \n\n \\item[e)] If $k=2$ and $\\mbox{bn}(c_i)\\leq 1$ for some $i\\in\\{1,2\\}$ (without loss of generality $i=1$), then we set $m_{c_1}(B)$ and $m_{c_2}(B)$ in the following way.\n \n Let $\\mbox{bn}(c_2)=2$. Then we set $m_{c_1}(B)=1$ and $m_{c_2}(B)=2$ with respect to conditions 2), 3) and 5). \n \n Let $\\mbox{bn}(c_2)\\leq 1$. Then for at least one of $c_1$, $c_2$ it holds that $m_{c_j}(B_t)$ for $j\\in\\{1,2\\}$ is set for all 2-blocks $B_t$ except $B$ with respect to the choice of $B$ (again without loss of generality $j=1$). We set $m_{c_1}(B)=1$ and we verify condition 6) for $c_1$. If it holds, then we set $m_{c_2}(B)=2$ (again we maximize the values $m_i(B_t)$ with respect to condition 6)). If condition~6) for $c_1$ does not hold for $m_{c_1}(B)=1$, then we set $m_{c_1}(B)=2$ and $m_{c_2}(B)=1$. \n \n Now in both cases we verify condition 6) for $c_1$ and $c_2$ if the labelling $m_{c_1}(B_t)$ and $m_{c_2}(B_t)$ is set for all 2-blocks $B_t$. \n \n If condition 6) does not hold in at least one case, then $G^2$ may not be hamiltonian (e.g. see Figure \\ref{Counter} $e_1)$). STOP. 
\n \n Hence suppose that condition 6) holds for $c_1$, $c_2$ if $m_{c_1}(B_t)$, $m_{c_2}(B_t)$ is set for all $B_t$, respectively.\n \n If the labelling $m_{i}(B_t)$ is set for all 2-blocks, then the labelling $m_i(B_t)$ satisfies the conditions of Theorem \\ref{hamiltonian} and $G^2$ is hamiltonian. STOP.\n \n Otherwise we go to START. \n \n \\item[f)] If $k\\in\\{3,4\\}$ and $\\mbox{bn}(c_i)\\leq 1$ for $i=1,2,...,k$, then we set $m_{c_i}(B)=1$ for $i=1,2,...,k$. We verify condition 6) for all $c_i$ if the labelling $m_{c_i}(B_t)$ is set for all 2-blocks $B_t$. \n \n If condition 6) does not hold in at least one case, then $G^2$ may not be hamiltonian (e.g. see Figure \\ref{Counter} $e_2)$). STOP.\n \n Hence suppose that condition 6) holds for all $c_i$, $i=1,2,...,k$, for which $m_{c_i}(B_t)$ is set for all $B_t$.\n \n If the labelling $m_{i}(B_t)$ is set for all 2-blocks, then the labelling $m_i(B_t)$ satisfies the conditions of Theorem \\ref{hamiltonian} and $G^2$ is hamiltonian. STOP.\n \n Otherwise we go to START. \n\\end{itemize}\n\n\\noindent\nPROOF OF THEOREM \\ref{hamconnected}\n\n\\begin{proof}\n Let $x,y\\in V(G)$. First we prove that there exists an $xy$-hamiltonian path $P$ in $G^2$ if there is no nontrivial bridge of $G$ and every block contains at most 2 cutvertices.\n\n\\bigskip\n \n (A) Suppose that $x$ and $y$ are in the same block $B$ of $G$. We proceed by induction on $n$, where $n$ is the number of blocks of $G$, $n\\geq 1$. \n \n For $n=1$, clearly $G=B$. If $B=K_2=xy$, then $G$ itself is the required $xy$-hamiltonian path in $G^2$. If $B$ is a 2-block, then by Theorem \\ref{strongF_3}, $G^2=B^2$ contains an $xy$-hamiltonian path $P$ as required.\n\n\n\\bigskip\n\nNow suppose that the statement of Theorem \\ref{hamconnected} is true for every graph with $n$ blocks and $G$ is a graph with $n+1$ blocks, $n\\geq 1$. We distinguish 2 cases.\n\n\\begin{itemize}\n \\item $B$ has exactly one cutvertex $c$.\n \n Without loss of generality we assume that $x\\neq c$. 
If $B$ is a 2-block, then by Theorem \\ref{strongF_3}, $B^2$ contains an $xy$-hamiltonian path $P_B$ containing an edge $cy'$ where $y'$ is a neighbor of $c$ in $B$. Note that $y'=x$ or $c=y$ is possible. If $B=K_2$, then $B=xy=y'c$ and $P_B=xy$ is an $xc$-hamiltonian path in $B^2$. By the induction hypothesis $(G-B)^2$ contains a $cc'$-hamiltonian path $P_G$ where $c'$ is a neighbor of $c$ in $G-B$. Then $P=P_B\\cup P_G-cy'+y'c'$ is an $xy$-hamiltonian path in $G^2$ as required.\n \n \\item $B$ has two cutvertices $c_1$, $c_2$.\n \n We denote by $G_1$, $G_2$ the two components of $G-B$ such that $c_i\\in V(G_i)$ and let $c'_i$ be a neighbor of $c_i$ in $G_i$, $i=1,2$. By the induction hypothesis $(G_i)^2$ contains a $c_ic'_i$-hamiltonian path $P_{G_i}$, $i=1,2$.\n \n \\begin{itemize}\n \\item[a)] $c_i\\notin \\{x,y\\}$ ($x$ and $y$ are not cutvertices).\n\n By Theorem \\ref{F_4}, $B^2$ contains an $xy$-hamiltonian path $P_B$ containing the edges $c_iz_i$ where $z_i$ is a neighbor of $c_i$ in $B$, $i=1,2$. Note that $z_i\\in\\{x,y\\}$ is possible. \n \n \\item[b)] Up to symmetry $c_1=x$ and $c_2\\neq y$ (either $x$ or $y$ is a cutvertex of $G$).\n \n By Theorem \\ref{strongF_3}, $B^2$ contains an $xy$-hamiltonian path $P_B$ containing the edges $c_iz_i$ where $z_i$ is a neighbor of $c_i$ in $B$, $i=1,2$. Note that $z_1=c_2$ or $z_2=y$ is possible. \n \n \\item[c)] $c_1=x$ and $c_2=y$ (similarly $c_1=y$ and $c_2=x$).\n \n By Theorem \\ref{strongF_3ends}, $B^2$ contains an $xy$-hamiltonian path $P_B$ containing either the edges $c_iz_i$ where $z_i$ is a neighbor of $c_i$ in $B$, $i=1,2$, or the edges $c_1z_1$, $uv$ where $z_1$ is a neighbor of $c_1$ in $B$ and $u,v$ are neighbors of $c_2$ in $B$. \n \\end{itemize}\n \n In all cases, except in case c) when $uv$ is the edge of $P_B$, $$P=P_{G_1}\\cup P_B\\cup P_{G_2}-\\{c_1z_1,c_2z_2\\}\\cup\\{c'_1z_1,c'_2z_2\\}$$ is an $xy$-hamiltonian path in $G^2$ as required. 
\n \n It remains to find an $xy$-hamiltonian path in $G^2$ if $uv$ is the edge of $P_B$. \n \n If $G_2=K_2=c_2u_2$, then $$P=P_{G_1}\\cup P_B-\\{c_1z_1,uv,c_2u_2\\}\\cup\\{c'_1z_1,u_2u, u_2v\\}$$ is an $xy$-hamiltonian path in $G^2$ as required. \n \n If $G_2\\neq K_2$, then we prove that $(G_2)^2$ contains a hamiltonian cycle $C$ containing edges $c_2u_2$, $c_2v_2$ of $G_2$. Let $B_1,B_2,...,B_k$ be all 2-blocks of $G_2$ containing $c_2$. By Theorem \\ref{2-blockcycle}, for $i=1,2,...,k$, $(B_i)^2$ contains a hamiltonian cycle $C'_i$ containing three different edges $c_2u_2^i$, $c_2v_2^i$, $y_iy'_i$ of $B_i$ where $y_i$ is the second cutvertex of $G_2$ in $B_i$ if it exists.\n \n If $y_i$ exists, then we denote by $H_i$ a component of $G_2-(B_i-y_i)$ containing $y_i$. By the induction hypothesis $(H_i)^2$ contains a $y_id_i$-hamiltonian path $P_i$ where $d_i$ is a neighbor of $y_i$ in $H_i$. Then we set $C_i=C'_i\\cup P_i-y_iy'_i+y'_id_i$. If $y_i$ does not exist, then we set $C_i=C'_i$.\n \n Let $T$ be the set of all leaves of $G_2$ adjacent to $c_2$. Then we find a cycle $C$ on $\\cup_{i=1}^{k}V(C_i)\\cup T$ by appropriately replacing edges $c_2u_2^i$, $c_2v_2^i$ with edges of $G^2$ joining $u_2^i$, $v_2^i$ in different $C_i$ and leaves adjacent to $c_2$ (similarly as in the proof of Theorem \\ref{hamiltonian}) such that we preserve two edges ($c_2u_2^i$, $c_2v_2^i$ or $c_2l_1$, $c_2l_2$ where $l_1,l_2$ are two leaves of $G_2$ adjacent to $c_2$) as $c_2u_2$, $c_2v_2$. \n \n Now $$P=P_{G_1}\\cup P_B\\cup C-\\{c_1z_1,uv,c_2u_2,c_2v_2\\}\\cup\\{c'_1z_1,u_2u, v_2v\\}$$ is an $xy$-hamiltonian path in $G^2$ as required. \n\n\n\\end{itemize}\n\n\\bigskip \n \n (B) Suppose that $x$ and $y$ are in different blocks of $G$.\n\n Let $P_G$ be any $xy$-path in $G$ and $c\\in V(P_G)\\setminus\\{x,y\\}$ be a cutvertex of $G$. Let $K$ be the component of $G-c$ containing $x$, $G_y=G-V(K)$ and $G_x=G-G_y$. Clearly $G_x\\cup G_y=G$ and $G_x\\cap G_y=c$. 
If $G_x$, $G_y$ are isomorphic to $K_2$, then we set $P_x=G_x$, $P_y=G_y$, respectively. If $G_x$, $G_y$ are 2-blocks, then $(G_x)^2$, $(G_y)^2$ contains an $xc$-hamiltonian path $P_x$, a $cy$-hamiltonian path $P_y$ by Theorem \\ref{strongF_3}, respectively. We proceed by induction on $n$, where $n$ is the number of blocks of $G$, $n\\geq 2$.\n \n First assume that $G$ has exactly 2 blocks. Hence $G_x$, $G_y$ are isomorphic to $K_2$ or 2-blocks and $P=P_x\\cup P_y$ is an $xy$-hamiltonian path in $G^2$ as required.\n \n Now suppose that the statement of Theorem \\ref{hamconnected} is true for every graph with $n$ blocks and $G$ is a graph with $n+1$ blocks, $n\\geq 2$. If $G_x$, $G_y$ is not a block, then by the induction hypothesis $(G_x)^2$, $(G_y)^2$ contains an $xc$-hamiltonian path $P_x$, a $cy$-hamiltonian path $P_y$, respectively. Then $P=P_x\\cup P_y$ is an $xy$-hamiltonian path in $G^2$ as required.\n \n\\bigskip \n\n Now it remains to prove that if there is a nontrivial bridge of $G$, then $G^2$ is not hamiltonian connected and if $G$ contains a block containing more than 2 cutvertices, then there is a graph $G'$ such that $\\mbox{bc}(G)$ and $\\mbox{bc}(G')$ are isomorphic but $(G')^2$ is not hamiltonian connected. \n \n Clearly, if there exists a nontrivial bridge $xy$ in $G$, then there is no $xy$-hamiltonian path in $G^2$ and $G^2$ is not hamiltonian connected.\n \n \\begin{figure}[ht]\n\\begin{center}\n\\end{center}\\caption{Graphs without $xy$-hamiltonian path in the square}\\label{Counter2}\n\\end{figure} \n\n Finally assume that $G$ contains a block $B$ containing $r$ cutvertices, where $r>2$. Then we exchange $B$ with a cycle $C_r$ and all other blocks with arbitrary blocks to get a class of graphs $G'$ such that $\\mbox{bc}(G')$ and $\\mbox{bc}(G)$ are isomorphic. Clearly the square of every such graph $G'$ does not contain a hamiltonian path between two arbitrary cutvertices of $G'$ in $C_r$ and hence $(G')^2$ is not hamiltonian connected, e.g. 
with Figure \\ref{Counter2}, where $r=3$ and $H_1,...,H_3$ are arbitrary connected graphs with at least one edge.\n\\end{proof}\n\nSimilarly as for Theorem \\ref{hamiltonian}, we state the following algorithm to verify conditions of Theorem \\ref{hamconnected}.\n\n\\bigskip\n\n\\noindent\n\\emph{ALGORITHM 2:} \n\nLet $G'=G-S$ where $S$ is the set of all endblocks of $G$. Let $\\mbox{cvn}_G(B)$ be the number of cutvertices of $G$ in $B$.\n\n\\noindent\nSTART\n\nFind an endblock $B$ of $G'$.\n\n\\begin{itemize}\n \\item If $B$ is a bridge of $G'$, then $B$ is a nontrivial bridge of $G$ and $G^2$ is not hamiltonian connected. STOP.\n \\item Let $B$ be a 2-block. \n \\begin{itemize}\n \\item if $\\mbox{cvn}_G(B)>2$, then $G^2$ may not be hamiltonian connected (e.g. see Figure \\ref{Counter2}). STOP.\n \\item if $\\mbox{cvn}_G(B)\\leq 2$, then $G'=G'-B$.\n \\begin{itemize}\n \\item if $G'=\\emptyset$, then $G^2$ is hamiltonian connected. STOP.\n \\item if $G'\\neq\\emptyset$, then go to START.\n \\end{itemize}\n \\end{itemize}\n\\end{itemize}\n\nIn both algorithms in this paper, blocks (especially endblocks and bridges), cutvertices, block-cutvertex graphs, and the parameters $\\mbox{bn}(i)$, $\\mbox{cvn}_G(B)$ can be determined in polynomial time. \n\nAs a consequence, polynomial running time of Algorithm 2 is guaranteed: deciding that $G^2$ is (potentially) not hamiltonian connected can be done instantly once a nontrivial bridge or a block with more than 2 cutvertices has been found, and deleting an endblock reduces the size of $G'$ linearly. \n\nNow consider the running time of Algorithm 1. The first decision to be made is whether $P_0$ is a forest of caterpillars \u2013 this can be done in linear time. After that, at every step 'one chooses a 2-block $B$ as a component of $G_0-H$ or an endblock of $G_0-H$ where $H$ is the union of all 2-blocks for which the labelling $m_i(B_t)$ is already set'. 
Clearly, identifying such $B$ can be done in linear time. The same applies to working through the cases for defining the various values of $m_i(B)$. \n\nSummarizing, it follows that both algorithms run in polynomial time. We note, however, that these algorithms can only decide the existence or potential non-existence of hamiltonian cycles or hamiltonian paths in the square of graphs under consideration; they do not construct any such cycle or path.\n\n\n\\section{Conclusion}\nThe main results of this paper are Theorem \\ref{hamiltonian} and Theorem \\ref{hamconnected}.\nAs we mentioned in the Introduction, Fleischner~\\cite{Fle} proved that in the square of graphs hamiltonicity and vertex-pancyclicity are equivalent concepts, and so are hamiltonian connectedness and panconnectedness. Hence we have in fact proved that for graphs satisfying the assumptions of Theorem \\ref{hamiltonian} or Theorem \\ref{hamconnected}, the square of these graphs is vertex-pancyclic or panconnected, respectively.\n\\medskip\n\n As an easy corollary of Theorem \\ref{hamconnected} we get the following result.\n\n\\begin{corollary}\n\\label{Block-chain}\n Let $G$ be a block-chain. Then $G^2$ is panconnected if and only if every innerblock of $G$ is a 2-block. \n\\end{corollary}\n\nMoreover, Corollary \\ref{Block-chain} also answers Problem 1 stated by Chia et al. in \\cite{ChiaOngTan}: for a graph $G$ with only two cutvertices, $G^2$ is panconnected if and only if the unique block containing the two cutvertices is not the complete graph on two vertices.\n\n\\medskip\n\n\\noindent {\\bf Acknowledgements}.\nThis work was partly supported by the European Regional Development Fund (ERDF), project NTIS - New Technologies for Information Society, European Centre of Excellence, CZ.1.05\/1.1.00\/02.0090.\n\nThe first author was partly supported by project GA20-09525S of the Czech Science Foundation. 
The second author was supported in part by FWF grant P27615.\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Basic Motivation and Structure for the Generalized Uncertainty~Principle}\n\n In this opening section, we review the ideas and motivations underlying the generalized uncertainty principle (GUP) approach to quantum gravity. GUP is a phenomenological approach to quantum gravity which introduces an absolute minimal length in the theory. Many different approaches to combining quantum mechanics and gravity are thought to require a minimal length~\\cite{vene,amati,amati2,gross,maggiore,garay,KMM,adler,scardigli}. There is a simple physical argument for this. From~the Heisenberg uncertainty relationship (i.e.,~$\\Delta x \\Delta p \\ge \\frac{\\hbar}{2}$), one sees that quantum mechanics gives the following relationship between uncertainty in position and momentum $\\Delta x \\sim \\frac{Const.}{\\Delta p}$. From~the gravity side, one argues that as one tries to probe smaller distances, one needs to go to higher center of mass energies\/momenta. At~some point, the energy\/momentum will be large enough that one will form a micro-black hole whose event horizon size can be estimated by the Schwarzschild radius $r_{Sch} = 2 G \\Delta E \/ c^2$, where in this expression, $\\Delta E$ has replaced the conventional mass of the black hole, $M$. Now, further setting $c=1$, replacing $r_{Sch}$ with $\\Delta x$ and $\\Delta E$ with $\\Delta p$, this relationship becomes $\\Delta x = 2 G \\Delta p$, i.e.,~there is now a linear relationship between $\\Delta x$ and $\\Delta p$. It should be noted that since we have set $c=1$, mass, energy and momentum are interchangeable. If~one combines this linear relationship from gravity with the inverse relationship from quantum mechanics, one finds $\\Delta x \\sim \\frac{\\hbar}{\\Delta p} + G \\Delta p$ (ignoring the factors of 2). 
The~interplay between the linear term and the inverse term leads to a minimum in $\\Delta x$ at $\\Delta p_m \\sim \\sqrt {\\hbar \/G}$ of $\\Delta x_m \\sim \\sqrt{\\hbar G}$.\n \n The minimal $\\Delta x$ that comes from the GUP, as~described above, is one way to avoid the point singularities that occur in certain solutions in general relativity such as black hole spacetimes. If~one cannot resolve distances smaller than $\\Delta x_m$, this may lead to the avoidance of the singularities of general relativity. This is the hope for theories of quantum gravity---that they will allow one to avoid the singularities of classical general relativity. Another approach to avoid these singularities of classical black hole solutions is non-commutative geometry~\\cite{nicolini}. In~this approach, one proposes that coordinates do not commute with one another. For~example, $[X,Y] \\ne 0$ or $[Y,Z]\\ne 0$. We will later show that the types of GUP models favored by our analysis are also connected to non-commutative geometry~theories. \n \n One of the strengths of the phenomenological GUP approach to quantum gravity is that it offers the possibility to make experimental tests of quantum gravity---to experimentally check whether there is a minimal distance resolution, $\\Delta x_m$, as~implied by the above arguments, and, if~so, what the size of this minimal distance resolution is. Some tests of the GUP scenario rely on astrophysical phenomena. For~example, reference~\\cite{AC-nature} proposed a test of minimal lengths based on the dispersion of high-energy photons coming from short gamma ray bursts. The~idea of~\\cite{AC-nature} was that having a minimal distance scale in one's theory would alter the standard energy--momentum relationship of special relativity, $E^2 = p^2 c^2 + m^2 c^4$. 
This altered energy--momentum relationship would then lead to an energy-dependent speed of light in the vacuum, which in turn would lead to photons of different energies dispersing or spreading out as they travel long distances through the vacuum. In~2009~\\cite{abdo}, the Fermi gamma ray satellite detected high energy photons coming from a distant gamma ray burst. Using the analysis of~\\cite{AC-nature}, the observation by the Fermi satellite was able to place bounds on the deviations from $E^2 = p^2 c^2 + m^2 c^4$ due to a minimal distance scale. Surprisingly, if the deviations from the special relativistic photon energy and momentum relationship were linear in energy (i.e.,~$p^2 c^2= E^2 [1 + \\zeta (E\/E_{QG})+\\ldots]$ with $E_{QG}$ being the quantum gravity scale and $\\zeta$ a parameter of order $1$) then the observations of~\\cite{abdo} implied the bound $E_{QG} > E_{Planck}$, i.e.,~that there was no deviation up to energies beyond the Planck energy scale. Or~putting this in terms of length, $l_{QG} < l_{Planck}$, which is counter to the expectation that hints of quantum gravity should occur before reaching the~Planck-scale.\n \n There are also tabletop and small-scale laboratory test proposals for testing for effects connected with the minimal distance scale coming from GUP. In~the works~\\cite{vagenas,vagenas2}, the proposal was made to use the Lamb shift, Landau levels, and quantum tunneling in scanning tunneling microscopes. This work showed that using these tabletop experiments, one could put bounds on the parameter $\\beta$ which were, in~units of Planck momentum squared, $\\beta < 10^{36}$, $\\beta < 10^{50}$ and $\\beta < 10^{21}$ for the Lamb shift, Landau levels and tunneling, respectively. 
There is also a proposal~\\cite{bekenstein} to test Planck scale physics with a tabletop cryogenic optical setup where the optical photon's momentum is coupled to the center of mass motion of a macroscopic transparent block in such a way that the block is displaced in space by approximately a Planck length. In~the works~\\cite{sujoy,sujoy2,sujoy3}, it was shown that one could use detailed studies of wavefunctions of large molecules to probe Planck-scale physics. Finally, there is recent work~\\cite{bosso} which looked at using gravitational waves to put bounds on the parameters coming from GUP models. All of these various experimental approaches are welcome since they hold out the hope that one may experimentally probe Planck-scale~physics.\n\nWe now briefly review some of the basic background behind GUP models and particularly focus on the role that modified operators have in determining whether or not there is a minimal length. The~uncertainty relationship between two physical quantities is closely tied to the commutation relationship between the operators which represent these quantities. In~general, for two operators ${\\hat A}$ and ${\\hat B}$, one has the following relationship between the uncertainties and the commutator:\n\\begin{equation}\n \\label{dAdB}\n \\Delta A \\Delta B \\ge \\frac{1}{2i} \\langle [ {\\hat A} , {\\hat B}] \\rangle ~,\n\\end{equation}\nwhere the uncertainties are defined as $\\Delta A = \\sqrt{\\langle {\\hat A} ^2 \\rangle - \\langle {\\hat A} \\rangle ^2 }$ and similarly for $\\Delta B$.\nFor standard position and momentum operators in position space ${\\hat x} = x$ and ${\\hat p} = -i \\hbar \\partial _x$, one obtains the usual commutator $[{\\hat x} , {\\hat p}] = i \\hbar$ which then implies the standard uncertainty~principle, \n$$\\Delta x \\Delta p \\ge \\frac{\\hbar}{2}$$ \nThe same commutator and uncertainty principle follow from the momentum-space operators ${\\hat x} = i \\hbar \\partial _p$ and ${\\hat p} = p$. 
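Before turning to modified commutators, the heuristic bound $\Delta x \sim \hbar/\Delta p + G\,\Delta p$ quoted in the opening paragraph can be minimized symbolically. The following is a throwaway consistency check, not part of the original derivation; the symbol names are our own choices.

```python
import sympy as sp

# Symbols assumed positive so that solve() returns only the physical root.
hbar, G, dp = sp.symbols('hbar G Deltap', positive=True)
dx = hbar / dp + G * dp          # heuristic GUP bound, factors of 2 dropped

crit = sp.solve(sp.diff(dx, dp), dp)[0]
# Minimum sits at Delta p_m ~ sqrt(hbar/G) with Delta x_m ~ sqrt(hbar G),
# as stated in the text (up to the dropped numerical factors).
assert sp.simplify(crit - sp.sqrt(hbar / G)) == 0
assert sp.simplify(dx.subs(dp, crit) - 2 * sp.sqrt(hbar * G)) == 0
```

The factor of 2 in $\Delta x_m$ is an artifact of keeping both terms with unit coefficients; it disappears once the dropped order-one factors are restored.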
\n\nTo obtain a GUP characterized by $\\Delta x \\sim \\frac{\\hbar}{\\Delta p} + \\beta \\Delta p$, \\cite{KMM} proposed the following modified commutator:\n\\begin{equation}\n \\label{KMMxp}\n [ {\\hat X} , {\\hat p}] = i \\hbar (1 + \\beta {\\hat p}^2) ~.\n \\end{equation}\n \nNote that following~\\cite{KMM}, we replace $G$ by a phenomenological parameter $\\beta$ which characterizes the scale where quantum gravity is important. Naively, the~quantum gravity scale would be set by $G, \\hbar$ and $c$, but~in large extra dimension models~\\cite{ADD,ADD2} or brane world models~\\cite{RS,RS2,Gog,Gog2,Gog3}, the quantum gravity scale can be lower than the typical Planck scale. In~these brane world models, $\\beta$ would be set by the value of the higher dimensional Newton's constant and the size of the extra dimensions. Furthermore, in \\eqref{KMMxp}, the position operator is capitalized while the momentum operators, on~both the left and right sides of \\eqref{KMMxp}, are not. This is because in reference~\\cite{KMM}, the choice was made that in order to obtain the modified commutator in \\eqref{KMMxp}, they would modify the position and momentum operators as\n\\begin{equation}\n\\label{xp1}\n{\\hat X} =i \\hbar (1 + \\beta p^2) \\partial_p ~~~{\\rm and}~~~ {\\hat p} = p \n\\end{equation}\nwhere only the position operator is modified. This choice of operators in \\eqref{xp1}\nis absolutely crucial to obtaining a minimal length. Using \\eqref{KMMxp} in \\eqref{dAdB} gives:\n\\begin{equation}\n \\label{min-Xp}\n \\Delta X \\Delta p \\ge \\frac{\\hbar}{2}(1+ \\beta \\Delta p ^2)\n\\end{equation}\nIn arriving at \\eqref{min-Xp}, we are taking the center of mass coordinates with $\\langle {\\hat p} \\rangle =0$. 
Dividing both sides by $\\Delta p$ gives:\n\\begin{equation}\n \\label{min-x}\n \\Delta X \\ge \\frac{\\hbar}{2}\\left( \\frac{1}{\\Delta p} + \\beta \\Delta p \\right) ~,\n\\end{equation}\nso that one arrives at the type of GUP described in the opening paragraph which definitely leads to a minimal length. Minimizing $\\Delta X$ in \\eqref{min-x} shows it has a minimal length at \\mbox{$\\Delta p = \\sqrt{1\/\\beta}$} which then yields $\\Delta X_{min} = \\hbar \\sqrt{\\beta}$.\n\n\nIn contrast to the work on GUP from reference~\\cite{KMM}, as outlined above, most recent works have followed a different approach, such as that found in \\cite{pedram} where the {\\it same} modified commutator of \\eqref{KMMxp} was obtained by modifying the momentum operator {\\it but} not the position operator. By~taking the operators to be of the form:\n\\begin{equation}\n\\label{xp2}\n {\\hat x}= i \\hbar \\partial _p ~~~{\\rm and}~~~ {\\hat P} = p\\left( 1 + \\frac{\\beta}{3}p^2 \\right)~, \n\\end{equation}\none can see that plugging these into the commutator immediately leads to the same right hand side of the commutator as in \\eqref{KMMxp} where capitals indicate the operator is modified. However, the~uncertainty between the two operators in \\eqref{xp2} is subtly different from the two operators in \\eqref{xp1}. Using the operators in \\eqref{xp2} to calculate the uncertainty relationship via \\eqref{dAdB}~gives:\n\\begin{equation}\n\\label{min-xP} \n\\Delta x \\Delta P \\ge \\frac{\\hbar}{2}(1+ \\beta \\Delta p ^2).\n\\end{equation}\nAlthough this relationship superficially appears to be the same as before, now the left hand side has $\\Delta P$ for the modified momentum, while the right hand side has $\\Delta p$ for the standard momentum. Thus, instead of \\eqref{min-x} one obtains:\n\\begin{equation}\n \\label{min-x2}\n \\Delta x \\ge \\frac{\\hbar}{2} \\left( \\frac{1}{\\Delta P} + \\frac{\\beta \\Delta p ^2}{ \\Delta P} \\right). 
\n\\end{equation}\nIn contrast to \\eqref{min-x}, it is not clear whether the interplay between $\\Delta p$ and $\\Delta P$ on the right hand side of \\eqref{min-x2} will lead to a minimum. If~the second term $\\beta \\frac{\\Delta p^2}{\\Delta P}$ is increasing with $\\Delta p$, then there is a minimum; but if this term decreases with $\\Delta p$, then there is no minimum. Since the highest power of $p$ in ${\\hat P} =p (1 + \\frac{1}{3} \\beta p^2)$ is $p^3$, one can show that $\\Delta P \\approx \\beta \\Delta p ^3$, which implies that the second term $\\beta \\frac{\\Delta p^2}{\\Delta P}$ will go like $\\frac{1}{\\Delta p}$, i.e.,~decreasing with $\\Delta p$, and thus there is no minimum in~position. \n\nTo explicitly see the difference between the GUP from \\eqref{min-x} and the GUP from \\eqref{min-x2}, one can plot the uncertainty in position versus the uncertainty in momentum for a family of test functions for each operator pair. In~Figure~\\ref{fig1}, we plot $\\Delta X$ versus $\\Delta p$ from \\eqref{min-x} for $\\beta = 0.01$. One can see that there is a minimum in $\\Delta X$ around $\\Delta p = 1\/\\sqrt{\\beta}$ as~expected.\n\\begin{figure}[H]\n \\includegraphics[scale=0.8]{kmm.jpg}\n \\caption{The relationship between $\\Delta X$ and $\\Delta p$ for the GUP from \\eqref{min-x} using $\\beta =0.01$. As~expected, a minimum length occurs at approximately $\\Delta p = 1\/\\sqrt{\\beta}$, and~after this minimum, $\\Delta X$ increases with $\\Delta p$.}\\label{fig1}\n\\end{figure}\nIn Figure~\\ref{fig2}, we plot $\\Delta x$ versus $\\Delta p$ from \\eqref{min-x2} again with $\\beta = 0.01$. In~contrast to Figure~\\ref{fig1}, Figure~\\ref{fig2} has no minimum $\\Delta x$ and in fact looks like the standard relationship between $\\Delta x$ and $\\Delta p$ from quantum~mechanics. 
\n\\begin{figure}[H]\n \\includegraphics[scale=0.8]{pendram.jpg}\n \\caption{The relationship between $\\Delta x$ and $\\Delta p$ for the GUP from \\eqref{min-x2} using $\\beta =0.01$. This GUP has no minimal length and essentially behaves in a manner resembling the standard uncertainty~principle. }\\label{fig2}\n\\end{figure}\nThere are some subtleties in how Figures~\\ref{fig1} and \\ref{fig2} were obtained. First, for~both figures, we used the lower bound (i.e.,~the equal sign) of Equations \\eqref{min-x} and \\eqref{min-x2}. Second, in~calculating $\\Delta P$ for use with \\eqref{min-x2}, we used a test Gaussian wavefunction in momentum space, $\\Psi (p) \\propto e^{-p^2\/2 \\sigma}$. The~reason for using a specific test wave function, rather than making a general argument that would apply regardless of the form of the wavefunction, was the complicated relationship between ${\\hat P}$ and ${\\hat p}$. For~a GUP like that in \\eqref{min-Xp}, one has the same $\\Delta p$ on both the left and right hand side of the equation, regardless of the wavefunction. This is the result of not modifying the momentum operator in this GUP model. On~the other hand, for a GUP such as \\eqref{min-xP}, the momentum uncertainty is different between the left and right hand sides of the equation, and~there is not a simple relationship between $\\Delta P$ and $\\Delta p$. Thus, for this case, we picked a specific wavefunction to calculate $\\Delta P$ and $\\Delta p$ and check how the uncertainty principle worked out. One could choose a different test wavefunction instead of the Gaussian, but~the results would be qualitatively similar to that shown in Figure~\\ref{fig2}. Note that the effects for the type of GUP shown in Figure~\\ref{fig2} are most pronounced in the $\\Delta p \\to 0$ limit. The~above calculations are a prelude and check for the examples of GUPs given in the following~section. 
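The qualitative behavior behind Figures 1 and 2 can be reproduced numerically with Gaussian test states, as described above. This is a rough sketch in our own notation (the grid sizes, the width parametrization, and the helper name `bounds` are our choices; $\beta = 0.01$ matches the figures):

```python
import numpy as np

hbar, beta = 1.0, 0.01
p, h = np.linspace(-400.0, 400.0, 200001, retstep=True)  # momentum grid

def bounds(sigma):
    # Gaussian test state in momentum space; sigma sets the width of |Psi|^2.
    w = np.exp(-p**2 / sigma**2)
    w /= w.sum() * h                        # normalized |Psi|^2
    dp = np.sqrt(np.sum(p**2 * w) * h)      # Delta p  (<p> = 0 by symmetry)
    # Delta P for the modified momentum P = p (1 + beta p^2 / 3) of Eq. (xp2)
    dP = np.sqrt(np.sum((p * (1 + beta * p**2 / 3))**2 * w) * h)
    kmm = 0.5 * hbar * (1.0 / dp + beta * dp)      # lower bound of Eq. (min-x)
    ped = 0.5 * hbar * (1.0 + beta * dp**2) / dP   # lower bound of Eq. (min-x2)
    return kmm, ped

kmm, ped = map(np.array, zip(*(bounds(s) for s in np.linspace(0.5, 60.0, 60))))
# Eq. (min-x): interior minimum, i.e. a minimal length (Figure 1).
assert np.argmin(kmm) not in (0, len(kmm) - 1)
# Eq. (min-x2): monotone decreasing, i.e. no minimal length (Figure 2).
assert np.all(np.diff(ped) < 0)
```

Changing the Gaussian to another smooth test family changes the numbers but not the two qualitative conclusions, in line with the remark in the text.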
\n\nFinally, even though the momentum operator associated with \\eqref{min-x} is the standard one from quantum mechanics, the~change in the position operator forces one to change the measure of integration in $p$ in order to ensure that ${\\hat X}$ and ${\\hat p}$ are symmetric~\\cite{KMM}. This change in the measure of the momentum integration amounts to $\\int \\ldots dp \\to \\int \\ldots \\frac{dp}{1+ \\beta p^2}$. This modifies the normalization constants but does not qualitatively affect the behavior of $\\Delta X$ in the limit as $\\Delta p \\to \\infty$. All of these issues are more fully examined in~\\cite{BLS}.\n\nThe main point of this section is to show that many works on GUP following the seminal work of KMM~\\cite{KMM} make a different choice for how the operators are modified, which does not necessarily result in a minimum length scale. KMM~\\cite{KMM} modified the position operator, but~left the momentum operator unchanged, whereas many other works chose to modify the momentum but leave the position operator unchanged. Thus, certain choices of modifying the operators are wrong in the sense that they do not lead to a minimal length. From~the above, we conclude that, when it comes to determining whether a theory has a minimum length scale, {\\bf how the operators are modified is more important than how the commutator is modified}. In~short, a modified commutator is insufficient to guarantee a minimum length scale; in fact, a~modified commutator is not necessary at all, as shown in the next~section.\n \n\n\\section{A GUP with an Unmodified~Commutator}\n\nWe now move on to give details of how a modified commutator is not necessary to achieve a minimum length scale. If~we modify neither the commutator nor the operators, we would obviously end up with ordinary quantum mechanics, which does not have a minimal length. 
Thus, we want modified position and momentum operators ${\\hat X}$ and ${\\hat P}$ which give rise to the usual commutator $[{\\hat X} , {\\hat P}] = i \\hbar$, \\cite{BJLS,mlake}. This in turn leads to a standard looking uncertainty relationship but now in terms of the modified operators $\\Delta X \\Delta P \\ge \\frac{\\hbar}{2}$. In~order to have a minimum $\\Delta X$, one needs to modify ${\\hat P}$ so that $\\Delta P$ is either capped at some constant value or decreases. In~reference~\\cite{BJLS}, a~GUP of this kind was constructed by defining a modified momentum by\n\\begin{equation}\n \\label{p-tanh}\n {\\hat P} = p_M \\tanh \\left( \\frac{{\\hat p}}{p_M} \\right)~,\n\\end{equation}\nwhere $p_M$ is the maximum cap on the modified momentum. This maximum momentum, $p_M$, is an upper limit to the momentum in the relationship $E^2 = p^2 + m^2$. This cap in momentum also implies a cap in the energy $E$. This connection between a cap on momentum and a cap on the energy of a single particle, as~well as how this relates to modified position and time operators, is discussed in greater detail in~\\cite{BJLS}. Picking the modified position to have the form:\n\\begin{equation}\n\\label{x-tanh}\n {\\hat X} = i\\hbar\\cosh^2{\\left(\\frac{{\\hat p}}{p_M}\\right)}\\partial_p \n\\end{equation}\nthen leads to the standard looking commutator $[{\\hat X} , {\\hat P}] = i \\hbar$ as can be verified by the substitution of \\eqref{p-tanh} and \\eqref{x-tanh} into the commutator. 
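That this pair of modified operators really leaves the commutator unmodified can be checked by letting $[\hat X, \hat P]$ act on an arbitrary test function. The short symbolic computation below is our own consistency check (not code from the original reference):

```python
import sympy as sp

p, pM, hbar = sp.symbols('p p_M hbar', positive=True)
f = sp.Function('f')(p)                     # arbitrary test function of p

P = pM * sp.tanh(p / pM)                                         # Eq. (p-tanh)
X = lambda g: sp.I * hbar * sp.cosh(p / pM)**2 * sp.diff(g, p)   # Eq. (x-tanh)

# [X, P] f = X(P f) - P X(f); the cosh^2 prefactor cancels sech^2 = P'(p),
# leaving exactly i*hbar*f, i.e. the standard commutator.
comm = X(P * f) - P * X(f)
assert sp.simplify(comm - sp.I * hbar * f) == 0
```

The same check goes through for the arctan variant of Eqs. (p-tan) and (x-tan), since there too the derivative of the modified momentum is the reciprocal of the prefactor in the modified position operator.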
Another variant with a capped momentum is given in~\\cite{BJLS} by\n\\begin{equation}\n \\label{p-tan}\n {\\hat P} = \\frac{2 p_M}{\\pi } \\arctan \\left( \\frac{\\pi p}{2 p_M} \\right)~,\n\\end{equation}\nand for the modified position operator, one has:\n\\begin{equation}\n \\label{x-tan}\n {\\hat X} = i \\hbar \\left[ 1+ \\left( \\frac{\\pi p}{2 p_M}\\right)^2 \\right] \\partial _p ~.\n\\end{equation}\nBoth forms of the modified momenta given in \\eqref{p-tanh} and \\eqref{p-tan} lead to the result $\\Delta P \\le p_M$ which then gives $\\Delta X \\ge \\frac{\\hbar}{2 p_M}$. Despite modified operators \\eqref{p-tanh} and \\eqref{x-tanh} or \\eqref{p-tan} and \\eqref{x-tan} leading to a minimum $\\Delta X$, there is a difference with how this is achieved compared to the KMM modified operators of \\eqref{xp1}. Plotting $\\Delta X$ versus $\\Delta p$ coming from \\eqref{p-tanh}, \\eqref{x-tanh}, the~associated uncertainty principle gives the result shown in Figure~\\ref{fig3}; the graph of $\\Delta X$ versus $\\Delta p$ for the operators \\eqref{p-tan} and \\eqref{x-tan} looks very similar. In~Figure~\\ref{fig1}, which shows the plot of the Kempf--Mangano--Mann GUP~\\cite{KMM}, the~minimum in $\\Delta X$ is reached at some $\\Delta p$ and thereafter $\\Delta X$ linearly increases with $\\Delta p$. In~contrast, the~GUP shown in Figure~\\ref{fig3} asymptotically approaches the minimum $\\Delta X$. \n\\begin{figure}[H]\n \\includegraphics[scale=0.8]{mup.jpg}\n \\caption{The relationship between $\\Delta X$ and $\\Delta p$ for the modified operators \\eqref{p-tanh} and \\eqref{x-tanh} and the associated GUP using $p_M =0.25$}\\label{fig3}\n\\end{figure}\nThe GUP model given by Figure~\\ref{fig1} has a physical motivation for its form: $\\Delta x$ inversely proportional to $\\Delta p$ for small momentum where quantum mechanics dominates and has $\\Delta x$ proportional to $\\Delta p$ for a large momentum where quantum gravity is thought to dominate. 
The~motivation for this behavior was laid out in the introduction and relied on the formation of micro black holes at large momentum\/small distances. In~contrast the GUP model given by Figure~\\ref{fig3}, which has $\\Delta x$ inversely proportional to $\\Delta p$ for small momentum, but~at a large momentum has $\\Delta x$ approach a constant value asymptotically, does not have a simple, physical~motivation. \n\nThe main take-away message of this section was to emphasize that the most important factor in whether or not a given GUP leads to a minimum length is how the operators are~modified. \n\n\\section{Connection to Non-Commutative Geometry and Running of \\boldmath$G$}\n\nIn this section, we tie the above considerations about GUPs to another approach to quantizing gravity and a potential physical consequence of quantum gravity. The~other approach to quantizing gravity is non-commutative geometry and the physical consequence is the running of the coupling constant of a theory---in this case, the running of Newton's $G$.\n\nIn general, for non-commutative spacetimes, one has a non-trivial commutation relationship between coordinates of the form $[X_i, X_j] = i \\theta _{ij}$ where $\\theta _{ij}$ is an anti-symmetric matrix~\\cite{nicolini}. This non-commutativity between coordinates implies an uncertainty relationship $\\Delta X_i \\Delta X_j \\ge \\frac{1}{2} |\\theta _{ij}|$, which in turn implies that there is a minimal area and volume---one cannot make $\\Delta X \\Delta Y$ or $\\Delta X \\Delta Y \\Delta Z$ (for example) arbitrarily small due to the non-commutativity between the coordinates. This has the effect, similar to GUP models, of~preventing the formation of point singularities that occur in a black hole and other solutions of classical general relativity. 
It has previously been noted~\\cite{KMM} that certain GUP models lead to modified position operators in three dimensions (3D) which naturally result in the non-commutativity of the modified position operators. In~one spatial dimension (1D), one does not encounter this non-commutativity since a single operator will always commute with itself. However, in~three dimensions, different coordinates may not commute with one another, e.g.,~$[{\\hat X} , {\\hat Z}] \\ne 0$. This is the reason behind the matrix $\\theta _{ij}$ being~antisymmetric. \n\nAs a specific example of the GUP-non-commutative geometry connection, one can look at the 3D version of the modified position operators of~\\cite{KMM}. Letting the position and momentum in \\eqref{xp1} go from 1D to 3D (i.e.,~${\\hat X} \\to {\\hat X}_i$, ${\\hat p} \\to {\\hat p}_i$ and $\\partial_p \\to \\partial _{p_i}$) gives a 3D version of \\eqref{xp1} of the form:\n\\begin{equation}\n\\label{x13d}\n{\\hat X}_i =i \\hbar (1 + \\beta |{\\vec p}|^2) \\partial_{p_i} ~.\n\\end{equation}\nWith this 3D version of \\eqref{xp1}, the coordinate commutator becomes:\n\\begin{eqnarray}\n\\label{x13d-a}\n[{\\hat X}_i , {\\hat X}_j ] &=& -2 \\beta \\hbar ^2 (1 + \\beta |{\\vec p}|^2 ) (p_i \\partial _{p_j} - p_j \\partial _{p_i} )\\nonumber \\\\\n&=& 2 i \\hbar \\beta ({\\hat p}_i {\\hat X}_j -{\\hat p}_j {\\hat X}_i ) ~,\n\\end{eqnarray}\nwhich implies a connection to the non-commutative parameter of $\\theta_{ij} = 2 \\hbar \\beta ({\\hat p}_i {\\hat X}_j -{\\hat p}_j {\\hat X}_i )$. As~required, this $\\theta_{ij}$ is antisymmetric. In~the second line in \\eqref{x13d-a}, we used \\eqref{x13d} to turn this back into an expression in terms of the (modified) position operator and (unmodified) momentum operator. 
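The coordinate commutator of Eq. (x13d-a) can likewise be verified by acting on a test function $f(p_1,p_2,p_3)$. Again this is only a consistency check in our own notation (the helper `X` is not from the original reference):

```python
import sympy as sp

hbar, beta = sp.symbols('hbar beta', positive=True)
p1, p2, p3 = sp.symbols('p1 p2 p3', real=True)
f = sp.Function('f')(p1, p2, p3)            # arbitrary test function
psq = p1**2 + p2**2 + p3**2                 # |p|^2

def X(i, g):
    # 3D KMM position operator of Eq. (x13d) acting on g
    return sp.I * hbar * (1 + beta * psq) * sp.diff(g, [p1, p2, p3][i])

# [X_1, X_2] f: the mixed second derivatives cancel, leaving only the
# first-derivative terms of Eq. (x13d-a).
lhs = X(0, X(1, f)) - X(1, X(0, f))
rhs = 2 * sp.I * hbar * beta * (p1 * X(1, f) - p2 * X(0, f))
assert sp.simplify(sp.expand(lhs - rhs)) == 0
```

The right hand side is manifestly antisymmetric under $1 \leftrightarrow 2$, which is the origin of the antisymmetry of $\theta_{ij}$ noted in the text.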
The~result in \\eqref{x13d-a} was previously derived in~\\cite{KMM}.\n\nOne can also create 3D versions of the modified position operators from \\eqref{x-tanh} and \\eqref{x-tan} which take the form:\n\\begin{equation}\n\\label{x-tanh3d}\n {\\hat X}_i = i\\hbar\\cosh^2{\\left(\\frac{|{\\vec p}|}{p_M}\\right)}\\partial_{p_i} ~,\n\\end{equation}\nand:\n\\begin{equation}\n \\label{x-tan3d}\n {\\hat X}_i = i \\hbar \\left[ 1+ \\left( \\frac{\\pi |{\\vec p}|}{2 p_M}\\right)^2 \\right] \\partial _{p_i} ~.\n\\end{equation}\nJust as in the case of the 3D modified position operators in \\eqref{x13d}, the~modified 3D position operators in \\eqref{x-tanh3d} and \\eqref{x-tan3d} also lead to a non-commutativity of the coordinates, $[{\\hat X}_i, {\\hat X}_j] \\ne 0$. We do not give the specific form of $\\theta_{ij}$ for the modified 3D position operators from \\eqref{x-tanh3d} and \\eqref{x-tan3d} since we only want to make the point here that some GUPs (particularly those studied in this work) lead to non-commutative~geometry.\n\nIn quantum field theory, when interactions are quantized, one encounters the phenomenon that the coupling ``constants'' of the interaction become dependent on the energy\/momentum scale at which the effects of the interaction are measured. Colloquially, one talks about coupling constants ``running'' with the energy\/momentum scale. For~example, in~quantum electrodynamics, the fine structure constant, $\\alpha = \\frac{e^2}{\\hbar c}$, which measures the strength of the electromagnetic interaction, depends on the energy\/momentum scale, $E\/p$, at~which it is measured. Similarly, the weak and strong nuclear interactions have couplings which scale with energy\/momentum. \nOne can view the GUP approach to quantum gravity as giving a running Newton's constant $G$. 
The~commutator from \\eqref{KMMxp} implies that the GUP parameter $\\beta$ depends on the momentum of the interaction as $\\beta (p) = \\beta p^2$, i.e.,~the parameter $\\beta$ scales quadratically with momentum in this~case. \n\nTo translate this running of the phenomenological parameter $\\beta$ into a running of Newton's constant $G$, we simply recall that from the heuristic arguments $\\Delta x_m \\sim \\sqrt{\\hbar G}$, while in terms of $\\beta$ one has $\\Delta x_m \\sim \\hbar \\sqrt{\\beta}$. Thus, we have the connection between $G$ and $\\beta$ of $\\beta \\sim \\frac{G}{\\hbar}$. Hence, a $\\beta$ which ``runs'' with the momentum directly implies a running $G$. There are two things to note: (i) the usual running of a coupling like the fine structure parameter $\\alpha = \\frac{e^2}{\\hbar c}$ is usually logarithmic in perturbative quantum field theory; (ii) there are differences between gravity and the other interactions that make it unclear that one can consistently define a running gravitational coupling, at~least in the usual approach of quantum field theory, as detailed in the arguments of reference~\\cite{anber}. \n\n\\section{Summary and Conclusions}\n\nIn this short note, we examined the role that modifying the position and momentum operators plays in determining a minimum length, and showed that focusing on modifying the commutator is insufficient. We examined models that had modified commutators with the same right hand sides, as~in \\eqref{KMMxp}, but~where the operators on the left hand side were different. We considered the case where the position was modified and the momentum remained unchanged (see \\eqref{xp1}) and we considered the case where the position remained the same and the momentum was modified (see \\eqref{xp2}). The~former case led to a minimum length while the latter case did not. This can be explicitly seen by comparing \\mbox{Figures~\\ref{fig1} and \\ref{fig2}}. 
\nFinally, we presented a case where the position and momentum were modified, but~the commutator remained the same as the standard one from quantum mechanics---\\mbox{Equations \\eqref{p-tanh} and \\eqref{x-tanh}}. This system shows that there can be a minimum length scale without modifying the commutation~relation.\n\nA general conclusion that can be distilled from the work in this paper is the following. {\\bf In order for a GUP to have a minimal length, the key factor is the modification of the operators rather than the modification of the commutator.} \n\nThe thrust of this paper was to argue that there are constraints on the specifics of how one can formulate a GUP to obtain a minimal length. This is welcome since, while the phenomenological approach of GUPs has strengths (e.g.,~the possibility to confront ideas about quantum gravity with experiments and observations), one would also like a way to constrain the range of variations in the operators and commutator. Other recent works~\\cite{mlake1,mlake2} have also looked at ways to constrain the form of~GUPs. \n\nFinally, in the last section, we tied the type of GUP models discussed here to other models of quantum gravity and to other potential consequences of quantum gravity, namely non-commutative geometry and the ``running'' of the coupling strengths of interactions. 
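The statement that modified operators can leave the commutator unchanged can be checked symbolically. The following sketch assumes the 1D forms $\hat p = p_M\tanh(p/p_M)$ and $\hat X = i\hbar\cosh^2(p/p_M)\,\partial_p$ for \eqref{p-tanh} and \eqref{x-tanh} (reconstructed here by analogy with the 3D operator \eqref{x-tanh3d}; these exact 1D forms are an assumption of the sketch) and verifies that the canonical commutator $[\hat X,\hat p]=i\hbar$ survives:

```python
import sympy as sp

p, pM, hbar = sp.symbols('p p_M hbar', positive=True)
psi = sp.Function('psi')(p)

# Assumed 1D forms of Eqs. (p-tanh) and (x-tanh) (reconstructed, see text):
p_hat = pM * sp.tanh(p / pM)          # modified momentum acts multiplicatively

def X_hat(f):
    # modified position: i*hbar*cosh^2(p/p_M) d/dp
    return sp.I * hbar * sp.cosh(p / pM)**2 * sp.diff(f, p)

# commutator [X, p] acting on a test function psi(p)
comm = X_hat(p_hat * psi) - p_hat * X_hat(psi)
residual = sp.simplify((comm - sp.I * hbar * psi).rewrite(sp.exp))
assert residual == 0   # the standard commutator [X, p] = i*hbar is unchanged
```

The factor $\cosh^2(p/p_M)$ in $\hat X$ exactly cancels the $\mathrm{sech}^2(p/p_M)$ produced by differentiating $\tanh$, which is why the right-hand side of the commutator stays canonical.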
\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nQuantum computers have attracted much attention recently, mainly due to the rapid development of actual hardware \\cite{Barends2014, Bernien2017, Wright2019}.\nThe quantum computer that is to appear shortly is called noisy intermediate scale quantum devices, or in short, NISQ devices \\cite{Preskill2018}.\nWe expect NISQ devices to have $\\sim$100 of qubits with non-negligible noise in the near future.\nSuch devices are believed to be not simulatable by classical computers when the control precision of the qubits is sufficiently high \\cite{Harrow2017, Boixo2018, Neill2018, Bravyi2018}.\nIn this sense, NISQ devices have computational power that exceeds classical computers.\nMany researchers are actively developing ways to exploit their power for practical applications \\cite{Peruzzo2014, Kandala2017, Nam2019, Farhi2014, Otterbach2017, Mitarai2018, Havlicek2019}.\nHowever, we still suffer from the limited number of qubits available on actual devices and the limited depth of circuits that can be run while maintaining the resultant quantum state meaningful.\n\nIf techniques to decompose a quantum circuit to a smaller one are developed, they can extend the applicability of such devices.\nSmaller quantum circuits may refer to ones with the smaller number of qubits or gates.\nPeng \\textit{et al.} recently proposed a clustering approach based on a tensor network representation of a quantum circuit~\\cite{Peng2019}, which greatly progressed the technical development.\nThey showed that we can ``cut'' an identity gate, by sampling measure-and-prepare channels on a qubit according to a certain quasi-probability distribution.\nIn Ref.~\\cite{Mitarai2019}, we proposed methods to construct quantum circuits equivalent to the Hadamard test, which successfully reduces the depth of certain quantum circuits.\nThese techniques share a same idea in that they reconstruct a result of a coherent 
quantum operation from certain incoherent operations by combining the results obtained from them.\n\nAn approach with the same flavor as the above has been utilized in the context of memory-efficient classical simulation of quantum circuits.\nSince the direct simulation of a quantum circuit with over 50 qubits breaks down due to the need to store $2^{50}$ complex numbers in memory, the classical simulator must decompose the given quantum circuit into smaller ones, especially in the number of qubits.\nRefs. \\cite{Chen2018, Pednault2017} have provided one way to perform such a decomposition, which ``cuts'' controlled-Z gates by separately simulating the two cases where the control qubit is $\\ket{0}$ or $\\ket{1}$ and then combining them, and they performed classical simulation of over 50-qubit quantum circuits.\nA similar technique has been utilized by Bravyi \\textit{et al.} in Ref. \\cite{Bravyi2016} to remove a relatively small number of qubits from a large quantum circuit by replacing the qubits with a classical simulator.\nTheir approach can be viewed as a ``space-like'' cut rather than the ``time-like'' cut proposed by Peng \\textit{et al.} \\cite{Peng2019}.\nHowever, their techniques are intended to run on a classical computer and cannot be utilized for simulating a large quantum circuit with a small quantum computer.\n\nIn this work, we present a technique to perform a ``space-like'' cut on a quantum computer.\nMore specifically, we present a way to decompose a controlled gate into a sequence of single-qubit operations which consist of projective measurements of Pauli $X$, $Y$, and $Z$ operators, and single-qubit rotations around the x, y, and z-axes.\nWe note that our method does not generate any entanglement between the qubits, as it is impossible to do so with such single-qubit operations.\nOur method only ``simulates'' the effects of entanglement using classical post-processing and sampling.\nMore concretely, although entangling gates cannot be performed with local 
operations and classical communication in single-shot experiments, as is widely known \\cite{PhysRevA.52.3457}, we show that it is possible to perform the computational task of evaluating expectation values of the output of entangling circuits by sampling certain sets of gates and applying classical post-processing.\nThe overhead required for our proposed technique, which scales exponentially with the number of decompositions performed, gives a characterization of the entangling gates from a computational viewpoint, which is different from the existing theories of entanglement quantification in, e.g., \\cite{Vidal_2000}.\n\nThe method proposed here can be considered as a generalization of our previous work \\cite{Mitarai2019} and a variant of the quantum circuit decomposition presented in Ref.~\\cite{Peng2019}.\nIt can also be viewed as a fully quantum version of the technique utilized in efficient classical simulation schemes \\cite{Chen2018, Pednault2017, Bravyi2016}. \nIn some cases, our method provides a better scaling than Ref. 
\\cite{Peng2019} when simulating a large quantum circuit with smaller ones.\nThe proposed technique is also useful when we want to apply two-qubit gates between a distant pair of qubits, which would otherwise require many swap operations to perform.\nThis work extends the applicability of NISQ devices whose circuit depth and connectivity are limited.\n\n\\section{Gate decomposition}\n\\subsection{Tensor network representation of quantum circuits}\nQuantum computation is completely specified by a quantum circuit, $U$, an initial state with its density matrix representation, $\\rho$, and an observable, $O$, measured at the output.\nGiven $U$, $\\rho$, and $O$, any quantum computation can be represented by a tensor network \\cite{Shi2006,Markov2008,Vidal2003}.\nWe define the tensor representation of $U$, $\\rho$, and $O$ in the following manner.\n\nSuppose that our quantum computer has $n$ qubits.\nWe define a complete basis set of the space of $2 \\times 2$ complex matrices and its dual as $\\{\\kket{e_i}\\}_{i=1}^{4}$ and $\\{\\bbra{e_i}\\}_{i=1}^{4}$, respectively.\nWe use the trace inner product, that is, for matrices $A$ and $B$, $\\bbraket{A|B} = \\mathrm{Tr}(A^\\dagger B)$, and assume orthonormality of the basis, $\\bbraket{e_i|e_j}=\\delta_{ij}$.\nA density matrix $\\rho$ can be decomposed into the sum of $\\kket{e_{j_1}}\\otimes \\kket{e_{j_2}} \\otimes \\cdots \\otimes \\kket{e_{j_n}} = \\kket{e_{j_1}e_{j_2}\\cdots e_{j_n}}$ as \n\\begin{align}\n \\kket{\\rho} = \\sum_{j_1,\\cdots, j_n} \\rho_{\\bm{j}} \\kket{e_{j_1}e_{j_2}\\cdots e_{j_n}},\n\\end{align}\nwhere $\\bm{j} = (j_1,j_2,\\cdots,j_n)$. 
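As a concrete single-qubit illustration (a minimal sketch; the normalized Paulis $\{I,X,Y,Z\}/\sqrt{2}$ are one valid choice of the basis $\{\kket{e_i}\}$), the components $\rho_{\bm{j}}$ can be computed and the state reconstructed as follows:

```python
import numpy as np

# Orthonormal Hermitian basis {|e_i>>} for 2x2 matrices under the trace
# inner product: the normalized Paulis {I, X, Y, Z}/sqrt(2) (one valid choice).
I = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.diag([1.0, -1.0]).astype(complex)
basis = [P / np.sqrt(2) for P in (I, X, Y, Z)]

def vectorize(rho):
    """Components rho_j = <<e_j|rho>> = Tr(e_j^dag rho)."""
    return np.array([np.trace(e.conj().T @ rho) for e in basis])

# A single-qubit density matrix, here |+><+|
plus = np.array([[1], [1]]) / np.sqrt(2)
rho = plus @ plus.conj().T

coeffs = vectorize(rho)
assert np.allclose(coeffs.imag, 0)          # Hermitian rho, Hermitian basis => real
recon = sum(c * e for c, e in zip(coeffs, basis))
assert np.allclose(recon, rho)              # rho = sum_j rho_j e_j
```

For $n$ qubits the same construction applies with tensor products of the basis elements.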
\nWe refer to the elements $\\rho_{\\bm{j}} = \\bbraket{e_{j_1}e_{j_2}\\cdots e_{j_n}|\\rho}$ as the tensor representation of $\\rho$.\nAn observable $O$ can also be decomposed in the same form.\nNote that we can naturally assume the tensor representations of observables and density matrices consist of real numbers, because they are always Hermitian and we can choose the basis $\\{\\kket{e_i}\\}_{i=1}^{4}$ to be Hermitian, e.g., we can use the Pauli matrices $\\{I, X, Y, Z\\}$ as the basis.\nTherefore, we assume $\\rho_i$ and $O_i$ are real henceforth.\nThe quantum circuit, $U$, transforms $\\rho$ into $U\\rho U^\\dagger$.\nWe define a corresponding superoperator $\\mathcal{S}(U)$ whose action is defined by $\\mathcal{S}(U) \\rho = U\\rho U^\\dagger$.\nThe superoperator can be decomposed as\n\\begin{align}\n \\mathcal{S}(U) = \\sum_{j_1,\\cdots, j_n}\\sum_{k_1,\\cdots, k_n} \\mathcal{S}(U)_{\\bm{j}, \\bm{k}} \\kket{e_{j_1}\\cdots e_{j_n}}\\bbra{e_{k_1}\\cdots e_{k_n}}.\n\\end{align}\nNote that this decomposition is not limited to superoperators of unitary matrices, but is also applicable to any linear operator that acts on a density matrix.\nWe call $\\mathcal{S}(U)_{\\bm{j}, \\bm{k}} = \\bbra{e_{j_1}\\cdots e_{j_n}}\\mathcal{S}(U)\\kket{e_{k_1}\\cdots e_{k_n}}$ the tensor representation of $\\mathcal{S}(U)$. 
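A minimal single-qubit sketch of this tensor representation (gate choices are illustrative): it computes $\mathcal{S}(U)_{\bm{j},\bm{k}}$ in the normalized Pauli basis, checks that composing circuits multiplies the tensors, and checks that contracting $\bbra{O}\mathcal{S}(U)\kket{\rho}$ reproduces $\mathrm{Tr}(OU\rho U^\dagger)$:

```python
import numpy as np

# Normalized Pauli basis (Hermitian, orthonormal under Tr(A^dag B)).
paulis = [np.eye(2, dtype=complex),
          np.array([[0, 1], [1, 0]], dtype=complex),
          np.array([[0, -1j], [1j, 0]]),
          np.array([[1, 0], [0, -1]], dtype=complex)]
basis = [P / np.sqrt(2) for P in paulis]

def ptm(U):
    """Tensor representation S(U)_{jk} = Tr(e_j^dag U e_k U^dag)."""
    return np.array([[np.trace(ej.conj().T @ U @ ek @ U.conj().T).real
                      for ek in basis] for ej in basis])

def vec(M):
    return np.array([np.trace(e.conj().T @ M) for e in basis])

# Illustrative circuit: Hadamard followed by the phase gate S.
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
S = np.diag([1, 1j])

# Composing circuits multiplies the tensor representations.
assert np.allclose(ptm(S @ H), ptm(S) @ ptm(H))

# <<O| S(U) |rho>> = Tr(O U rho U^dag), with O = Z and rho = |0><0|.
rho = np.diag([1.0, 0.0]).astype(complex)
O = paulis[3]
U = S @ H
lhs = vec(O).conj() @ ptm(U) @ vec(rho)
rhs = np.trace(O @ U @ rho @ U.conj().T)
assert np.allclose(lhs, rhs)
```

With the Pauli choice of basis this is exactly the Pauli-transfer-matrix picture used below.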
\nWhen we use the Pauli operators as the basis set, $\\mathcal{S}(U)_{\\bm{j}, \\bm{k}}$ is referred to as the Pauli transfer matrix.\n\nQuantum computation ends with measuring the observable $O$.\nThis output can be written down as\n\\begin{align}\n \\bbra{O}\\mathcal{S}(U)\\kket{\\rho} &= \\mathrm{Tr}(O U\\rho U^\\dagger) \\\\\n &= \\sum_{j_1,\\cdots, j_n}\\sum_{k_1,\\cdots, k_n} O_{\\bm{j}}\\mathcal{S}(U)_{\\bm{j}, \\bm{k}} \\rho_{\\bm{k}}.\n\\end{align}\nIn many cases, $U$ is a product of elementary gates $\\{U_i\\}_{i=1}^L$, that is, $U=U_L\\cdots U_1$.\nThe tensor representation of the overall gate, $\\mathcal{S}(U)$, is also a product of $\\mathcal{S}(U_i)$; $\\mathcal{S}(U) = \\mathcal{S}(U_L)\\cdots\\mathcal{S}(U_1)$.\nAn important note is that as long as the tensor representation of each element is unchanged, the result of the overall computation is also unchanged.\nIf $\\mathcal{S}(U)$ can be represented by a sum of some simple operations as $\\mathcal{S}(U) = \\sum_i c_i \\mathcal{S}(V_i)$ with coefficients $\\{c_i\\}$, the expectation value of an observable $O$ can be computed with the following equality,\n\\begin{equation}\\label{eq:superop_sum_decomposition}\n \\bbra{O}\\mathcal{S}(U)\\kket{\\rho} = \\sum_i c_i \\bbra{O}\\mathcal{S}(V_i)\\kket{\\rho}.\n\\end{equation}\nNote that $c_i$ can, in general, depend on the state $\\kket{\\rho}$.\nWe use this scheme to perform the ``decomposition'' of a circuit in this work.\n\nIt is noteworthy that as we perform decompositions of a superoperator rather than an operator such as $U$ itself, the method becomes friendly to a realistic quantum device.\nA direct decomposition of $U$ into some simple operators $\\{V_i\\}$, i.e. 
$U=\\sum_{i} c_i V_i$, can also be utilized for the same task; however, as expectation values are calculated as $\\bra{0}U^\\dagger O U\\ket{0}$ where $\\ket{0}$ is an initial state, this approach requires us to evaluate $\\sum_{i,j} c_ic_j^* \\bra{0}V_j^\\dagger O V_i\\ket{0}$, which is rather hard for NISQ devices.\nThis fact demonstrates the advantage of using the above formalism.\nThe tensor network representation of the superoperator formalism allows us to graphically understand the decompositions.\n\n\n\\subsection{Virtual two-qubit gate}\nWe can show the following, which can then be utilized to decompose any two-qubit gate into a sequence of single-qubit operations.\n\\begin{lemma}\\label{thm:two_to_single}\n For operators $A_1$ and $A_2$ such that $A_1^2=I$ and $A_2^2=I$,\n \\begin{align}\n &\\mathcal{S}(e^{i\\theta A_1\\otimes A_2}) = \\cos^2\\theta \\mathcal{S}(I\\otimes I) + \\sin^2\\theta \\mathcal{S}(A_1\\otimes A_2)+ \\nonumber\\\\\n &\\frac{1}{8}\\cos\\theta\\sin\\theta\\sum_{(\\alpha_1,\\alpha_2)\\in\\{\\pm 1\\}^2} \\alpha_1 \\alpha_2\\left[\\mathcal{S}((I+ \\alpha_1 A_1)\\otimes (I+ i\\alpha_2 A_2)) \\right. \\nonumber\\\\\n &\\qquad\\qquad\\qquad\\qquad\\qquad \\left.+ \\mathcal{S}((I+ i\\alpha_1 A_1)\\otimes (I+ \\alpha_2 A_2))\\right]\n \\end{align}\n\\end{lemma}\n\n\\begin{figure*}\n \\includegraphics[width=\\linewidth]{two_qubit_operations_cut_wide_crop.pdf}\n \\caption{\\label{fig:two_qubit_gate_cut} Decomposition of (a) a non-local gate and (b) a non-local non-destructive measurement into a sequence of local operations. $A_1$ and $A_2$ are operators such that $A_1^2 = I$ and $A_2^2 = I$.\n }\n\\end{figure*}\n\nTo prove this, we can directly check that the tensor representations of both sides are equivalent. For the detailed calculation, see Appendix \\ref{app:two_to_single}.\nThis theorem is schematically depicted in Fig. 
\\ref{fig:two_qubit_gate_cut} (a).\nNotice that the operations proportional to $I\\pm A$ and $I \\pm iA$ for $A \\in \\{X, Y, Z\\}$ can be performed by a projective measurement and a single-qubit rotation, respectively.\n\nThe correspondence with a single-qubit rotation is clear from the formula, $e^{\\pm i\\pi A\/4} = \\frac{1}{\\sqrt{2}}(I\\pm iA)$, which is a rotation by angle $\\pi\/2$ around the $A$ axis.\nLet $\\mathcal{M}_A$ be the projective measurement on the $A$ basis ($A\\in \\{X,Y,Z\\}$), that is, $\\mathcal{M}_A$ acts on a density matrix $\\rho$ as,\n\\begin{align}\n \\mathcal{M}_A\\rho &= \\frac{1}{\\mathrm{Tr}\\left(\\rho \\frac{I + \\alpha A}{2}\\right)} \\left(\\frac{I+\\alpha A}{2}\\right)\\rho \\left(\\frac{I+\\alpha A}{2}\\right),\n\\end{align}\ndepending on the result of the measurement $\\alpha \\in \\{1,-1\\}$.\nThis is equivalent to $\\mathcal{S}(I\\pm A)$ up to the factor of $4\\mathrm{Tr}\\left(\\rho \\frac{I + \\alpha A}{2}\\right)$, that is,\n\\begin{align}\\label{eq:corresp_measurement}\n \\mathcal{S}(I + \\alpha A) &= 4\\mathrm{Tr}\\left(\\rho \\frac{I + \\alpha A}{2}\\right) \\mathcal{M}_{A,\\alpha},\n\\end{align}\nwhere $\\mathcal{M}_{A,\\alpha}$ is a measurement operation postselected with the measurement outcome $\\alpha$.\n$\\mathrm{Tr}\\left(\\rho \\frac{I + \\alpha A}{2}\\right)$ is the probability of getting the result $\\alpha$ by measuring $\\rho$ on the $A$ basis.\nLemma \\ref{thm:two_to_single} with this fact implies that the gate $e^{i\\theta A_1\\otimes A_2}$ can be decomposed, in the sense of Eq.~(\\ref{eq:superop_sum_decomposition}), into a sum of $I\\otimes I$, $A_1\\otimes A_2$, $\\mathcal{M}_{A_1}\\otimes e^{\\pm i\\pi A_2\/4}$, and $e^{\\pm i\\pi A_1\/4}\\otimes \\mathcal{M}_{A_2}$, which can be stated as the Lemma below.\nNotably, this technique can be applied for any $\\theta$, which enables us to perform continuous two-qubit gates.\n\\begin{lemma}\\label{thm:two_qubit_gate_decomposition}\n A quantum gate 
$e^{i\\theta A_1\\otimes A_2}$ with operators $A_1$ and $A_2$ such that $A_1^2=I$ and $A_2^2=I$ can be decomposed into 6 single-qubit operations.\n For any quantum state $\\kket{\\rho}$, to achieve the error $\\epsilon$ of the decomposition with respect to the trace distance with probability at least $1-\\delta$, the required number of circuit runs is $O(\\log (1\/\\delta)\/\\epsilon^2)$.\n\\end{lemma}\nThe detailed proof is given in Appendix \\ref{appsec:proof_of_two_qubit_gate_decomp}.\nIntuitively, since the error comes from the probabilistic part of the decomposition, that is, the renormalization factor $\\mathrm{Tr}\\left(\\rho \\frac{I + \\alpha A}{2}\\right)$ in Eq. (\\ref{eq:corresp_measurement}), if we want to estimate $\\mathrm{Tr}\\left(\\rho \\frac{I + \\alpha A}{2}\\right)$ within error $\\epsilon$, $O(1\/\\epsilon^2)$ repetitions would suffice.\n\nLet us finally mention the case of the controlled-Z gate, which we denote by $CZ$.\n$CZ$ can be decomposed into\n\\begin{equation}\n CZ = e^{i\\pi I\\otimes Z\/4}e^{i\\pi Z\\otimes I\/4}e^{-i\\pi Z\\otimes Z\/4},\n\\end{equation}\nignoring the global phase.\nThis means we can decompose a CZ gate using Lemma \\ref{thm:two_qubit_gate_decomposition}.\nThe decomposition is shown in Fig.~\\ref{fig:cz_gate_cut}.\nSimilar decompositions can be performed on some basic two-qubit gates such as CNOT.\nEndo \\textit{et al.} \\cite{Endo2018} also provide such a decomposition (Ref. 
\\cite{Endo2018}, Appendix B).\nHowever, our protocol above is slightly advantageous in that it requires 6 single-qubit operations, compared to 9 in theirs.\n\n\\begin{figure*}\n \\includegraphics[width=0.8\\linewidth]{cz_gate_cut_wide_crop.pdf}\n \\caption{\\label{fig:cz_gate_cut} Decomposition of the controlled-Z gate into a sequence of single-qubit operations.\n }\n\\end{figure*}\n\n\n\n\n\\subsection{Virtual non-destructive measurement of two-qubit operators}\nIn the previous subsection, we showed that any two-qubit rotation can be decomposed into a sum of single-qubit operations.\nHere, we extend the strategy to construct a virtual non-destructive measurement\nof two-qubit operators.\nSimilarly to the previous section, we can show the following.\nThis theorem is schematically shown in Fig.~\\ref{fig:two_qubit_gate_cut}~(b).\n\\begin{lemma}\\label{thm:two_to_single_meas}\n For operators $A_1$ and $A_2$ such that $A_1^2=I$ and $A_2^2=I$,\n \\begin{align}\n &\\mathcal{S}(I + A_1\\otimes A_2) = \\mathcal{S}(I\\otimes I) + \\mathcal{S}(A_1\\otimes A_2)+ \\nonumber\\\\\n &\\frac{1}{8}\\sum_{(\\alpha_1, \\alpha_2)\\in\\{\\pm 1\\}^2} \\alpha_1 \\alpha_2\\left[\\mathcal{S}((I+ \\alpha_1 A_1)\\otimes (I+ \\alpha_2 A_2)) \\right. \\nonumber\\\\\n &\\qquad\\qquad\\qquad \\left.- \\mathcal{S}((I+ i\\alpha_1 A_1)\\otimes (I+ i\\alpha_2 A_2))\\right]\n \\end{align}\n\\end{lemma}\nThis can also be shown by the direct calculation of both sides. 
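The identity of Lemma \ref{thm:two_to_single_meas} can also be checked numerically. The following sketch (an illustration, not part of the proof) applies both sides to a random two-qubit matrix for $A_1 = X$, $A_2 = Z$, using $\mathcal{S}(K)\rho = K\rho K^\dagger$ for the generally non-unitary operators that appear:

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.diag([1.0, -1.0]).astype(complex)
I2 = np.eye(2, dtype=complex)

def S(K, rho):
    """Action of the superoperator S(K): rho -> K rho K^dag."""
    return K @ rho @ K.conj().T

# Random Hermitian positive-semidefinite probe state; the identity is
# linear in rho, so one generic probe already gives a strong check.
rng = np.random.default_rng(0)
M = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
rho = M @ M.conj().T

A1, A2 = X, Z
lhs = S(np.kron(I2, I2) + np.kron(A1, A2), rho)
rhs = S(np.kron(I2, I2), rho) + S(np.kron(A1, A2), rho)
for a1 in (1, -1):
    for a2 in (1, -1):
        K_real = np.kron(I2 + a1 * A1, I2 + a2 * A2)
        K_imag = np.kron(I2 + 1j * a1 * A1, I2 + 1j * a2 * A2)
        rhs += (a1 * a2 / 8) * (S(K_real, rho) - S(K_imag, rho))
assert np.allclose(lhs, rhs)
```

The analogous check for Lemma \ref{thm:two_to_single} is a one-line variation (replace the left-hand operator by $e^{i\theta A_1\otimes A_2}$ and use the coefficients of that lemma).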
See Appendix \\ref{app:two_to_single_meas} for the detailed calculation.\n\nThe above Lemma can be utilized to show the following.\n\\begin{lemma}\\label{thm:twoq_meas_decomp}\n A non-local projection $\\frac{\\red{I}+A_1\\otimes A_2}{2}$ with operators $A_1$ and $A_2$ such that $A_1^2=I$ and $A_2^2=I$ can be decomposed into 6 single-qubit operations.\n For any quantum state $\\kket{\\rho}$, to achieve the error $\\epsilon$ of the decomposition with respect to the trace distance with probability at least $1-\\delta$, the required number of circuit runs is $O(\\log (1\/\\delta)\/\\epsilon^2)$.\n\\end{lemma}\nThis can be shown with exactly the same approach taken to prove Lemma \\ref{thm:two_qubit_gate_decomposition}, which is provided in Appendix \\ref{appsec:proof_of_two_qubit_gate_decomp}.\n\n\n\n\n\\section{Application}\n\\subsection{Simulation of large quantum circuits}\\label{sec:simulation}\nThe idea of simulating a large quantum circuit by a small quantum computer has been put forward in Ref.~\\cite{Peng2019}.\nPeng \\textit{et al.} utilized the equivalence shown in Fig. \\ref{fig:identitygatecut}.\nIn the figure,\n\\begin{equation}\\label{eq:Pengcut_pairs}\n\\begin{array}{lll}\n O_1=I, & \\rho_1 = \\ket{0}\\bra{0}, & c_1 = +1\/2, \\\\\n O_2=I, & \\rho_2 = \\ket{1}\\bra{1}, & c_2 = +1\/2, \\\\\n O_3=X, & \\rho_3 = \\ket{+}\\bra{+}, & c_3 = +1\/2, \\\\\n O_4=X, & \\rho_4 = \\ket{-}\\bra{-}, & c_4 = -1\/2, \\\\\n O_5=Y, & \\rho_5 = \\ket{+i}\\bra{+i}, & c_5 = +1\/2, \\\\\n O_6=Y, & \\rho_6 = \\ket{-i}\\bra{-i}, & c_6 = -1\/2, \\\\\n O_7=Z, & \\rho_7 = \\ket{0}\\bra{0}, & c_7 = +1\/2, \\\\\n O_8=Z, & \\rho_8 = \\ket{1}\\bra{1}, & c_8 = -1\/2, \n\\end{array}\n\\end{equation}\nwhere $\\ket{\\pm}=(\\ket{0}\\pm \\ket{1})\/\\sqrt{2}$ and $\\ket{\\pm i}=(\\ket{0}\\pm i\\ket{1})\/\\sqrt{2}$. 
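The eight measure-and-prepare terms above resolve the identity channel: for any single-qubit state, $\rho = \sum_i c_i\,\mathrm{Tr}(O_i\rho)\,\rho_i$. A minimal numerical check of this resolution (the triples are transcribed from Eq. \eqref{eq:Pengcut_pairs}):

```python
import numpy as np

I = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.diag([1.0, -1.0]).astype(complex)

def proj(v):
    """Rank-1 projector |v><v| / <v|v>."""
    v = np.asarray(v, dtype=complex).reshape(-1, 1)
    return v @ v.conj().T / (v.conj().T @ v).real

s = 1 / np.sqrt(2)
# The eight (O_i, rho_i, c_i) triples of Eq. (eq:Pengcut_pairs).
triples = [
    (I, proj([1, 0]), +0.5), (I, proj([0, 1]), +0.5),
    (X, proj([s, s]), +0.5), (X, proj([s, -s]), -0.5),
    (Y, proj([s, 1j * s]), +0.5), (Y, proj([s, -1j * s]), -0.5),
    (Z, proj([1, 0]), +0.5), (Z, proj([0, 1]), -0.5),
]

# rho = sum_i c_i Tr(O_i rho) rho_i: cutting an identity wire into
# measure-and-prepare channels weighted by a quasi-probability.
rng = np.random.default_rng(1)
v = rng.normal(size=2) + 1j * rng.normal(size=2)
rho = proj(v)
recon = sum(c * np.trace(O @ rho) * r for O, r, c in triples)
assert np.allclose(recon, rho)
```

The check is just the Pauli decomposition $\rho = \frac{1}{2}\sum_{P} \mathrm{Tr}(P\rho)P$ rewritten with states in place of operators, which is why negative weights ($c_4, c_6, c_8$) are unavoidable.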
\nThe symbols $\\triangleright$ and $\\triangleleft$ denote the measurement of a certain observable and the preparation of a certain state, respectively.\nContrasting this technique with ours, we refer to the former and the latter as ``time-like'' and ``space-like'' cuts, respectively.\nMore concretely, \\textit{a time-like cut} of a quantum channel can be defined as a decomposition of the channel in the sense of Eq. (\\ref{eq:superop_sum_decomposition}) using measure-and-prepare channels only.\nIn contrast, \\textit{a space-like cut} of a non-local quantum channel is a decomposition of the channel using local quantum channels only.\n\n\n\\begin{figure}\n \\includegraphics[width=0.7\\linewidth]{identitygatecut_crop.pdf}\n \\caption{\\label{fig:identitygatecut} Time-like cut employed in Ref.~\\cite{Peng2019}.}\n\\end{figure}\n\n\\begin{figure}\n \\includegraphics[width=\\linewidth]{simpleclustering_crop.pdf}\n \\caption{\\label{fig:simpleclustering} Two decomposition approaches compared in the main text. 
The top-right approach is the one presented here, and the bottom-right approach is that of Ref.~\\cite{Peng2019}.}\n\\end{figure}\n\nThe decomposition presented in the previous section can also be used in this direction.\nLet us compare the cost scaling of our decomposition scheme with that of Peng \\textit{et al.} using a simple example.\nWe consider the case where we have an $n$-qubit quantum computer to simulate the $2n$-qubit quantum circuit of Fig.~\\ref{fig:simpleclustering}, which has only one CZ gate between $n$-qubit ``clusters''.\nThe task is to estimate the expectation value of a final observable $O_f$ by measuring it in the computational basis.\nTo simplify the discussion, we assume $O_f$ is a string of Pauli $Z$'s.\n\nLet $v$ be the desired variance of the estimation of the expectation value of $O_f$.\nWe can show that a naive algorithm, which runs an equal number of circuits for each term appearing in the decomposition, requires in the worst case $2048\/v$ runs of the $n$-qubit circuit when the decomposition is performed with time-like cuts, while the space-like cut approach takes $\\frac{15}{2v}$ runs.\nThe analysis of this simple example is given in Appendix \\ref{app:simpleexample}.\nAlthough the analysis given here is based on a naive algorithm and there are possibilities to improve it, this analysis illustrates the enhancement provided by our space-like cut protocol.\n\n\n\n\\subsubsection*{General case}\nWe can consider a general case where we perform the time-like and space-like cuts simultaneously to make a given $m$-qubit quantum circuit runnable on an $n$-qubit quantum computer.\nLet the number of time-like and space-like cuts be $M_t$ and $M_s$, respectively.\nFor space-like cuts, we assume they are performed only on CZ gates.\nThe input state $\\rho$ is initialized in $\\ket{0}\\bra{0}^{\\otimes m}$ and $O_f$ is an output (diagonal) observable calculated from some output function $f:\\{0,1\\}^m\\to [-1,1]$.\nOur task here is to estimate the expectation 
$\\mathbb{E}[f(y)]$ for a random bitstring $y\\in\\{0,1\\}^m$ sampled from the original circuit.\nThis model is adopted from Ref. \\cite{Peng2019}, which originates in Ref. \\cite{Bravyi2016}.\nWith this definition, we can get the following.\n\n\\begin{theorem}\\label{thm:multiple_twoqgate_cut}\n The number of $n$-qubit circuit runs required to estimate $\\mathbb{E}[f(y)]$ within accuracy $\\epsilon$ with a high probability $1-\\delta$ is $O\\left(\\frac{9^{M_s} 16^{M_t}}{\\epsilon^2}\\log\\left(\\frac{1}{2\\delta}\\right)\\right)$.\n\\end{theorem}\nThis implies that the decomposition of the circuit should be performed so as to minimize $9^{M_s} 16^{M_t}$.\nA detailed proof is given in Appendix \\ref{appsec:proof_of_multiple_twoqgate_cut}; however, the above can roughly be explained as follows.\nAt each space-like cut, we get 6 different sets of single-qubit operations, so $M_s$ cuts induce $6^{M_s}$ terms.\nLikewise, $M_t$ time-like cuts induce $8^{M_t}$ terms, which makes the total number of circuits in the decomposition $6^{M_s}8^{M_t}$.\nWith this decomposition, we can take a Monte-Carlo approach to estimate the sum, that is, we randomly choose circuits to run and average them.\nHoeffding's inequality can be used to bound the error of such a protocol: it states that if the magnitude of a random variable is always bounded by some constant $a$, then $O(a^2\/\\epsilon^2)$ samples suffice to obtain an accuracy of $\\epsilon$.\nIn this case, we are to estimate $\\mathbb{E}[f(y)] = \\sum_{i=1}^{6^{M_s}8^{M_t}} c_i \\bbra{O_f}\\mathcal{S}(V_i)\\kket{\\rho}$ with $i$ randomly drawn from $\\{1,\\cdots,6^{M_s}8^{M_t}\\}$ and $|c_i|=1\/2^{M_s+M_t}$, that is, $\\mathbb{E}[f(y)]$ is estimated by $\\mathbb{E}_i[6^{M_s}8^{M_t} c_i\\bbra{O_f}\\mathcal{S}(V_i)\\kket{\\rho}]$.\nThe magnitude of the random variable $6^{M_s}8^{M_t} c_i\\bbra{O_f}\\mathcal{S}(V_i)\\kket{\\rho}$ is roughly $3^{M_s}4^{M_t}$, thus we can apply the Hoeffding bound to get the result.\n\n\\begin{figure}\n 
\\includegraphics[width=\\linewidth]{combination_crop.pdf}\n \\caption{\\label{fig:combination}Schematic illustration of performing the space-like cut and the time-like cut simultaneously.}\n\\end{figure}\n\n\\subsection{Distant two-qubit gates}\nThe theorem introduced above can be utilized to ``virtually'' perform a two-qubit gate between qubits at a distance.\nFigure \\ref{fig:distant_2qubit_gate} shows an example of such a virtual two-qubit gate.\nNotice that this protocol works irrespective of the distance between the qubits.\nMany swap gates are otherwise necessary for performing such gates, which makes them impractical on NISQ devices due to the non-negligible amount of decoherence and gate error of such devices.\n\n\\begin{figure}\n \\includegraphics[width=\\linewidth]{distant_2qubit_gate_crop.pdf}\n \\caption{\\label{fig:distant_2qubit_gate} Decomposition of a distant two-qubit gate on a square lattice. \n Each vertex of the graph represents a qubit, and the edges represent the connectivity of the qubits.\n $S$ is the set of pairs of single-qubit operations which appears in the formula in Lemma \\ref{thm:two_to_single}, and $c_s$ is the corresponding coefficient for each pair.}\n\\end{figure}\n\nThis protocol might be useful for variational algorithms such as the variational quantum eigensolver (VQE) \\cite{Peruzzo2014} and the quantum approximate optimization algorithm (QAOA) \\cite{Farhi2014}.\nHere, we describe an example in the QAOA.\nIn the QAOA, we seek to find a ground state of a Hamiltonian $H$ on $n$ qubits which is a sum of Pauli $Z$'s and their products.\nFor example, a Hamiltonian may have the form of,\n\\begin{align}\n H = \\sum_{ij} J_{ij} Z_iZ_j.\n\\end{align}\nThe QAOA tries to solve the problem by converting it to an optimization problem over continuous variables $\\bm{\\beta}$ and $\\bm{\\gamma}$.\nThe optimization of $\\bm{\\beta}$ and $\\bm{\\gamma}$ is performed so as to minimize the function,\n\\begin{align}\n \\expect{H(\\bm{\\beta}, 
\\bm{\\gamma})} =\\bra{+}^{\\otimes n} U^\\dagger(\\bm{\\beta}, \\bm{\\gamma}) H U(\\bm{\\beta}, \\bm{\\gamma})\\ket{+}^{\\otimes n},\n\\end{align}\nwhere,\n\\begin{align}\\label{eq:qaoa-circuit}\n U(\\bm{\\beta}, \\bm{\\gamma}) = e^{i\\beta_p \\sum_i X_i}e^{i\\gamma_p H} \\cdots e^{i\\gamma_2 H}e^{i\\beta_1 \\sum_i X_i}e^{i\\gamma_1 H}.\n\\end{align}\nThis algorithm has been experimentally demonstrated \\cite{Otterbach2017} with the connectivity of the target Hamiltonian being equivalent to the connectivity of the actual device.\n\nThe equivalence of the connectivities is almost necessary for performing $e^{i\\gamma H}$.\nThis requirement can somewhat be relaxed by our protocol, which enables qubits to virtually interact irrespective of the distance between them.\nLet us now assume that an available device has the square-lattice connectivity of Fig. \\ref{fig:distant_2qubit_gate}, and a Hamiltonian of the QAOA which we aim to solve has an interaction between one pair of qubits that is not included in the hardware connectivity graph.\nIn this case, to execute the QAOA circuit (Eq. 
(\\ref{eq:qaoa-circuit})), we can use our space-like technique $p$ times to virtually apply the unitary.\nThe scaling of the cost can be bounded by setting $M_t=0$ and $M_s=p$ in Theorem \\ref{thm:multiple_twoqgate_cut}, which gives us a scaling of $9^p\\epsilon^{-2}\\log[1\/(2\\delta)]$.\nThe time-like cut approach of Peng \\textit{et al.} \\cite{Peng2019} can also be utilized in this direction.\nHowever, as this approach would require 4 cuts per gate, the cost scaling is bounded by $16^{4p}\\epsilon^{-2}\\log[1\/(2\\delta)]$, obtained by setting $M_t=4p$ and $M_s=0$ in Theorem \\ref{thm:multiple_twoqgate_cut}.\nThis demonstrates an advantage, albeit in this special setting, of our technique over the previous result.\n\nIn the context of the VQE, which is also an algorithm to find a ground state of a Hamiltonian but mainly targets concrete physical systems such as molecules, it has been proposed to use the same kind of quantum circuits as the QAOA \\cite{Wecker2015, Mitarai2019g}.\nOur result may also be applicable in constructing such circuits.\n\n\\section{Discussion and conclusion}\nWe described a technique to decompose a non-local operation into a sequence of local operations.\n\\red{As the single-qubit operations are generally more accurate on NISQ devices, the proposed technique can be used to enhance their capability. 
We believe intrinsic noise on single-qubit operations can be compensated by recent sophisticated error mitigation techniques \\cite{Endo2018}.}\nIn particular, our technique of the space-like cut of two-qubit gates can improve the simulation of a large quantum circuit with a small quantum computer in some cases.\nIt would be interesting to investigate the best strategy to perform ``cuts'' to reduce the number of qubits so as to be compatible with an available device.\nAlso, the algorithm we have given to bound the cost scaling is rather straightforward, and we believe it can be improved with a more sophisticated strategy.\n\n\\red{The proposed algorithm can also be compared to the classical simulation strategy that splits a large circuit by decomposing two-qubit gates. For example, a controlled-NOT gate can be split using a tensor-network-based technique \\cite{biamonte2017tensor}. However, such techniques generally do not focus on decompositions of $\\mathcal{S}(U)$ as considered in this work, but rather on the two-qubit unitary $U$ itself, which makes them difficult to use on NISQ devices as Eq. (\\ref{eq:superop_sum_decomposition}) cannot be utilized anymore. 
}\n\nOur technique can induce an entanglement-like effect without performing any two-qubit gate, with the cost mentioned in Lemmas \\ref{thm:two_qubit_gate_decomposition} and \\ref{thm:twoq_meas_decomp}.\nThis connects this work to areas like quantum communication.\nThis ``virtual'' entanglement creation could be done with the time-like cut proposed by Peng \\textit{et al.}, but our work lowered the cost to perform the task.\nIt is interesting to know whether ours is the optimal protocol or whether there is a more efficient way.\n\nTo summarize, our technique allows qubits to virtually interact irrespective of the physical distance between them.\nThe result is useful for applying a two-qubit gate to a distant pair of qubits.\nIn particular, when applied to NISQ devices, this may be employed to enhance their power.\nA future direction is to explore whether we can lower the resources needed to perform such virtual operations.\n\n\\begin{acknowledgements}\n KM thanks the METI and IPA for their support through the MITOU Target program.\n KM is also supported by JSPS KAKENHI No. 19J10978 and No. 20K22330, and JST PRESTO JPMJPR2019.\n\tKF is supported by KAKENHI No.16H02211, JST PRESTO JPMJPR1668, JST ERATO JPMJER1601, and JST CREST JPMJCR1673.\n\tThe authors thank Suguru Endo for fruitful discussions and letting us become aware of Ref. \\cite{Endo2018}.\n This work is supported by MEXT Quantum Leap Flagship Program (MEXT Q-LEAP) Grant Number JPMXS0118067394.\n\\end{acknowledgements}\n\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\\label{sect:intro} \nThe exclusion process is a paradigm for non-equilibrium behaviour \\cite{bib:M1}. \nIn the one-dimensional totally asymmetric simple exclusion process (TASEP),\neach site $i$ $ ( i=1,2,\\dots, L) $ is either occupied by one particle ($ \\tau_i =1 $) or empty ($ \\tau_i =0$),\nand a particle at site $ i$ hops to $i+1 $ with rate 1 if the target site is empty. 
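These dynamics can be sketched in a few lines (a minimal illustration using a discrete-time random-sequential approximation of the rate-1 hopping, on a ring for simplicity; parameters are illustrative):

```python
import random

def tasep_sweep(tau, rng):
    """One random-sequential Monte Carlo sweep of the TASEP on a ring:
    a particle at site i hops to i+1 (mod L) if that site is empty."""
    L = len(tau)
    for _ in range(L):
        i = rng.randrange(L)
        j = (i + 1) % L
        if tau[i] == 1 and tau[j] == 0:
            tau[i], tau[j] = 0, 1

rng = random.Random(0)
L = 100
tau = [1] * (L // 2) + [0] * (L - L // 2)   # half filling
n0 = sum(tau)
for _ in range(200):
    tasep_sweep(tau, rng)
assert sum(tau) == n0                  # particle number is conserved on the ring
assert all(t in (0, 1) for t in tau)   # hard-core exclusion is never violated
```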
Investigation of shocks is one of the central issues in the TASEP \\cite{bib:BCFG,bib:FF,bib:DJLS,bib:DEM,bib:M2,bib:KSKS,bib:CHA}. Let us recall a known result for the TASEP with open boundaries (in short, the open TASEP), where a particle is injected at site $i=1$ with rate $\\alpha$, and extracted at site $i=L$ with rate $ \\beta $ \\cite{bib:DEHP}. These rates are regarded as reservoir densities $\\alpha$ and $ 1-\\beta $, respectively. The case $ \\alpha = \\beta < 1\/2 $ is called the co-existence line. Two plateaus with densities $ \\alpha$ and $ 1-\\alpha$ co-exist in the domains\n $ 1 < x < S(t) $ and $ S(t) < x < L $, respectively. The position $ S (t) $ of the shock (domain wall) is \\textit{dynamical}, and its behaviour is diffusive \\cite{bib:DEM,bib:KSKS}, \\textit{i.e.} \n\\begin{align} \\label{eq:D=} \n \\big\\langle \\big( S (t) - S (0) \\big)^2 \\big\\rangle_{\\mathrm E} \\simeq 2 D (\\alpha ) t, \\ \n D (\\alpha ) = \\frac{ \\alpha(1-\\alpha) }{ 1 - 2\\alpha } , \n\\end{align} \nas $t\\to \\infty$. Here $ \\big\\langle \\cdot \\big\\rangle_{\\mathrm E} $ denotes the ensemble average. \nThis asymptotic diffusion coefficient also holds on $ \\mathbb Z $ \\cite{bib:FF}.\n\n\nThe so-called second-class particle ($ \\tau_i =2 $) microscopically \\textit{defines} \n the positions of shocks \\cite{bib:BCFG}. It behaves as a hole for particles, and as a particle for holes, \\textit{i.e.} the hops $ 20 \\to 02 $ and $ 12 \\to 21 $ occur between sites $i$ and $ i+1 $, with the same rate as for $ 10 \\to 01 $. We place only one second-class particle in the system in the initial state. It is not extracted from the system and we do not inject another second-class particle, \\textit{i.e.} we impose a \n``semi-permeable boundary condition'' \\cite{bib:A1,bib:A2,bib:U,bib:ALS}. Figure \\ref{fig:openTASEP} (a,b) shows simulation results for the mean-squared displacement (MSD) of the second-class particle, which agree with the formula \\eqref{eq:D=}. 
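The asymptotic diffusion coefficient in eq.~\eqref{eq:D=} is straightforward to evaluate. The following short helper is our own illustrative sketch (not part of the paper's simulation code); it also shows how $D(\alpha)$ grows as $\alpha$ approaches $1/2$, consistent with the strong finite-time effects noted below:

```python
def shock_diffusion_coefficient(alpha):
    """Asymptotic diffusion coefficient D(alpha) of the shock (domain wall)
    on the co-existence line alpha = beta < 1/2 of the open TASEP."""
    if not 0.0 < alpha < 0.5:
        raise ValueError("formula holds for 0 < alpha < 1/2")
    return alpha * (1.0 - alpha) / (1.0 - 2.0 * alpha)

# D(alpha) diverges as alpha -> 1/2:
print(shock_diffusion_coefficient(0.25))  # 0.375
print(shock_diffusion_coefficient(0.45))  # 2.475
```

For instance, $D(0.25) = 0.25 \cdot 0.75 / 0.5 = 0.375$, matching the curve plotted in fig.~\ref{fig:openTASEP}(b).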
We notice that finite-time effects become strong when $\\alpha$ approaches $1\/2$. The density profile averaged over a large time interval \n\\begin{align}\n \\rho_i := \\langle \\tau_i ( 2-\\tau_i ) \\rangle_{\\mathrm T} \n \\label{eq:rho_i:=}\n\\end{align}\nlooks different from snapshots in simulations. \nSince the shock position moves evenly along the chain, $ \\rho_i $ is given by \n\\begin{align} \\rho_i \\simeq ( 1 - 2\\alpha ) \\frac i L + \\alpha, \\label{eq:rho_i=open} \\end{align}\nwhich was shown by means of the exact stationary state \\cite{bib:DEHP}.\n\n\n\\begin{figure}\\begin{center}\n \\includegraphics[width=0.24\\textwidth]{MSD-open.pdf} \\ \n \\includegraphics[width=0.24\\textwidth]{D-vs-rho.pdf} \\end{center}\n\\caption{ \n (a) MSD of the second-class particle vs time, and\n (b) diffusion coefficient vs boundary rate \n on the co-existence line of the open TASEP with $ L=10^4 $. \n The markers in (a) are simulation results averaged over $ 10^3 $ runs.\n We used simulation data $ ( S(t) -S(0) )^2 \/ (2t ) $ in $ 5\\times 10^3 \\le t \\le 10^4 $ to plot the markers in (b). The lines in (a,b) correspond to eq.~\\eqref{eq:D=}. \n \\label{fig:openTASEP}} \n\\end{figure}\n\n\nOn the other hand, static shocks are realized in the TASEP by imposing attachment and detachment of particles in the bulk of the chain (the Langmuir kinetics \\cite{bib:PFF}). Static shocks were also found in TASEPs with inhomogeneous hopping rates on a ring. One of the simplest cases is the Janowsky-Lebowitz (JL) model \\cite{bib:JL}: \n \\begin{align} p_i = 1 \\ ( 1 \\le i < L ) , \\ r<1 \\ ( i=L ) , \\end{align} \n where $ p_i $ denotes the hopping rate from site $i$ to $i+1$ ($ L+1 := 1 $). \n Due to the inhomogeneity of the bond between sites $ L $ and 1, the JL model exhibits a shock profile in a certain parameter region. A mean-field theory qualitatively explains a phase transition between shock and flat density profiles, which is still a fascinating problem \\cite{bib:CLST,bib:SPS}. 
\n \nIn \\cite{bib:TB}, another inhomogeneous TASEP was introduced: \n\\begin{align}\\label{eq:two_pi=} p_i = 1 \\ ( 1 \\le i \\le \\ell ) , \\ r \\ ( \\ell < i \\le 2 \\ell=L ) .\\end{align}\n We refer to this model as the 2-segment TASEP. In a certain region of the parameter space $ (r,\\rho) $ (where $ \\rho$ is the global density, \\textit{i.e.} the number of particles $\/L$), this model also exhibits a static shock. Recently the authors of \\cite{bib:BSB} investigated TASEPs with three and four parts, generalizing \\eqref{eq:two_pi=}. Here, we mainly study a specific 4-segment TASEP \n\\begin{align}\\label{eq:four_pi=}\n p_i = \n \\begin{cases} \n 1 & ( 1 \\le i \\le \\ell \\ \\vee \\ 2\\ell < i \\le 3\\ell ) , \\\\ \n r & ( \\ell < i \\le 2\\ell \\ \\vee\\ 3\\ell < i \\le 4\\ell =L) . \\\\ \n \\end{cases}\n\\end{align}\nThere can exist two shocks, in the 1st and 3rd segments. The positions of the shocks cannot be fixed even at the macroscopic level, but they are related to each other by an equation derived from particle number conservation \\cite{bib:BSB}. \n The purpose of this work is to perform more detailed Monte Carlo simulations (in continuous time), in order to understand this synchronization phenomenon in more depth. \n\nBefore investigating the 4-segment TASEP, we reconsider the case of 2 segments via the second-class particle. As evidence that the second-class particle indicates the shock position, we check that its spatial distribution becomes Gaussian, corresponding to a density profile described by the error function. We also examine properties of the standard deviation of the shock position. \nThen we turn to the 4-segment TASEP. We find that the MSD exhibits various behaviours, depending on the time scale that we focus on. The open boundary condition that we have already reviewed is the reference case. 
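For concreteness, the piecewise hopping rates \eqref{eq:four_pi=} can be encoded in a few lines. This is our own illustrative sketch (function and variable names are not from \cite{bib:BSB}):

```python
def hop_rate(i, ell, r):
    """Hopping rate p_i of the 4-segment TASEP:
    rate 1 in the 1st and 3rd segments (sites 1..ell and 2*ell+1..3*ell),
    rate r in the 2nd and 4th segments. Sites are 1-based, i in 1..4*ell."""
    segment = (i - 1) // ell  # 0, 1, 2, 3 for segments 1..4
    return 1.0 if segment % 2 == 0 else r
```

Setting `ell = 1000` and `r = 0.5`, for example, gives `hop_rate(1000, ...) == 1.0` (last site of segment 1) and `hop_rate(1001, ...) == 0.5` (first site of segment 2).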
We quantify the interaction between the shocks by a correlation function.\n Furthermore we introduce a crossover time distinguishing between the time scales of independence and synchronization of the shocks. We also numerically estimate the dynamical exponent of the crossover time. Finally we give the conclusions of this work and some remarks, including possible future studies. Throughout this work, we use the following definition of the macroscopic density profile, coarse-grained over a \\textit{mesoscopic} lattice length $ 2 \\delta +1 $ ($ 1 \\ll \\delta \\ll \\ell $), which is in general different from the microscopic density profile \\eqref{eq:rho_i:=}:\n\\begin{align}\n \\rho (x) = \\rho ( i\/\\ell ) = \n \\frac{1}{ 2 \\delta + 1 } \\sum_{ i' = i - \\delta }^{ i + \\delta } \\tau_{i'} ( 2 - \\tau_{i'} ) . \\label{eq:macro-density}\n\\end{align}\n \n\n\n\n\n\\section{2-segment TASEP}\\label{sect:2-seg}\n Let us consider the model \\eqref{eq:two_pi=}. \n We begin with the assumption that the global density $ \\rho $ is small enough, and \n each segment has a flat density profile \n\\begin{align}\\label{eq:rho(x)=12}\n \\rho(x) = \\alpha_1 ( 0 < x < 1 ) , \\ \\alpha_2 ( 1 < x < 2 ) . \n\\end{align}\nThe conservation laws of the number of particles and the stationary current $ J $ are written as \n\\begin{align}\n \\label{eq:2rho=a1+a2}\n 2 \\rho &= \\alpha_1 + \\alpha_2 , \\\\ \n J &= \\alpha_1 (1-\\alpha_1) = r \\alpha_2 (1-\\alpha_2) , \n \\label{eq:J=J1=J2}\n\\end{align}\nrespectively. \nThese are easy to solve \\cite{bib:TB}: \n\\begin{align}\n\\label{eq:alpha1=,alpha2=}\n \\alpha_1 &= \\frac{ 1 + r -4r\\rho - R }{ 2 (1-r) } , \\ \n \\alpha_2 = \\frac{ 4\\rho - (1+r) + R}{ 2 (1-r) } , \\\\\n J &= \\frac{ r(1-2\\rho)(R-(1+r)(1-2\\rho) ) }{ (1-r)^2 } \n \\label{eq:J:LDLD} \n\\end{align} \n with $ R = \\sqrt{(1+r)^2-16r\\rho (1-\\rho)} $. 
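The closed-form solution \eqref{eq:alpha1=,alpha2=}, \eqref{eq:J:LDLD} can be checked directly against the conservation laws \eqref{eq:2rho=a1+a2} and \eqref{eq:J=J1=J2}. The following sketch (our own illustration, not from the paper) performs this check for one parameter pair:

```python
import math

def ld_ld_solution(r, rho):
    """Flat densities (alpha1, alpha2) and current J in the LD-LD phase
    of the 2-segment TASEP, eqs. (alpha1=,alpha2=) and (J:LDLD)."""
    R = math.sqrt((1 + r) ** 2 - 16 * r * rho * (1 - rho))
    alpha1 = (1 + r - 4 * r * rho - R) / (2 * (1 - r))
    alpha2 = (4 * rho - (1 + r) + R) / (2 * (1 - r))
    J = r * (1 - 2 * rho) * (R - (1 + r) * (1 - 2 * rho)) / (1 - r) ** 2
    return alpha1, alpha2, J

a1, a2, J = ld_ld_solution(r=0.5, rho=0.2)
assert abs(a1 + a2 - 2 * 0.2) < 1e-12        # particle conservation, eq. (2rho=a1+a2)
assert abs(a1 * (1 - a1) - J) < 1e-12        # current through the 1st segment
assert abs(0.5 * a2 * (1 - a2) - J) < 1e-12  # current through the 2nd segment
```

Both conservation laws hold to machine precision, confirming the algebra of eqs.~\eqref{eq:alpha1=,alpha2=} and \eqref{eq:J:LDLD}.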
\nThe density profile \\eqref{eq:rho(x)=12} is realized as long as $ \\alpha_2 < 1\/2 $\nwith \\eqref{eq:alpha1=,alpha2=}, \\textit{i.e.} \n\\begin{align} \\label{eq:rho_c=} \\rho < \\rho_c : = \\frac { 2-\\sqrt{1- r} }{4} .\n \\end{align}\nSince both $ \\alpha_1 $ and $ \\alpha_2 $ are less than $ 1\/2 $, \n the case $ \\rho < \\rho_c $ is called the LD-LD phase (LD$=$low density). \n\n\n\\begin{figure}\\begin{center}\n \\includegraphics[width=0.24\\textwidth]{Phase-diagram.pdf} \\ \n \\includegraphics[width=0.24\\textwidth]{Fundamental-diagram.pdf} \\end{center} \n\\caption{ \n (a) Phase diagram and\n (b) fundamental diagram \n of the 2- and 4-segment TASEPs \\cite{bib:TB, bib:BSB}. \nThe phase boundaries in (a) are given by $ \\rho = \\rho_c ,1 - \\rho_c$ \n\\textit{i.e.} the parabola $ r = 1 - 16 ( \\rho - 1\/2 )^2 $. \nFor (b), we have set $ \\ell = 10^3 $ and $ r =1\/2$. To obtain each marker, we counted the number of flowing particles in a single simulation run, and took the average over $ 10^6 \\le t \\le 10^7 $. The line in (b) corresponds to the predictions \\eqref{eq:J:LDLD}, \\eqref{eq:J:SMC}. \n \\label{fig:diagrams}} \n\\end{figure}\n\n\n\n\nWhen the total density $ \\rho $ exceeds the critical density, \nthe 2nd segment maintains the density $1\/2 $ and\na shock appears in the 1st segment: \n\\begin{align}\\label{eq:rho(x)=1lh2}\n \\rho(x) = \n \\begin{cases} \n \\alpha_1 \\ ( 0 < x < s ) , \\ \n 1- \\alpha_1 \\ ( s < x < 1 ) , \\\\\n \\alpha_2 = \\frac{1}{2} \\ ( 1 < x < 2 ) . \n \\end{cases}\n\\end{align}\nWe refer to this case as the S-MC (shock-maximal current) phase. The shock position $s$ is macroscopically static, \\textit{i.e.} localized. 
By solving the conservation laws of the number of particles and the stationary current \n\\begin{align}\n 2 \\rho =& s \\alpha_1 + (1-s)(1 - \\alpha_1 ) + \\ \\alpha_2 , \\\\\n J ( \\alpha_1 ) =& \\alpha_1 (1-\\alpha_1) = r \\alpha_2 (1-\\alpha_2 ) = r \/ 4 \\label{eq:J:SMC} , \n\\end{align}\nthe densities and the shock position in the 1st segment are determined as \n\\begin{align}\n \\alpha_1 = \\frac{1-\\sqrt{1-r}}{2} , \\ s = \\frac{1}{2} + \\frac{1-2\\rho}{ \\sqrt{1-r}} . \n \\label{eq:alpha1=,s=}\n\\end{align}\nThe parameters $ ( r , \\rho ) $ are, inversely, specified by $ (\\alpha_1,s) $ in the S-MC phase. We have written the current explicitly as a function of the density $ \\alpha_1 $. \n \nWhen the global density $\\rho$ exceeds $1-\\rho_c $, the shock position reaches site $i=1 $. The densities in both segments are flat, and larger than $ 1\/2 $, \\textit{i.e.} the HD-HD phase (HD$=$high density). Figure \\ref{fig:diagrams} (a) summarizes the three phases. In fig.~\\ref{fig:diagrams} (b), we check that the predicted currents \\eqref{eq:J:LDLD} and \\eqref{eq:J:SMC} are realized in simulations. Because of the particle-hole symmetry, we restrict our consideration to $ \\rho \\le 1\/2 $. \n\n\nLet us investigate the properties of the shock in the S-MC phase in more detail. There is a microscopic deviation of the density profile near the shock position. It obeys a Gaussian distribution, \\textit{i.e.} the probability distribution $ P ( S ) := \\frac 1 2 \\langle \\tau_S (\\tau_S-1) \\rangle_{\\mathrm T} $ of the position $S$ of the second-class particle is given as \n\\begin{align}\n\\label{eq:P(S)=Gauss}\n P (S) = \\frac{1}{ \\sqrt{2 \\pi \\sigma^2 } } \n \\exp \\bigg[ - \\frac{1}{ 2\\sigma^2 } ( S - \\langle S \\rangle_{\\mathrm T} ) ^2 \\bigg] \n\\end{align}\nwith $ \\langle S \\rangle_{\\mathrm T} \\simeq \\ell s $ $ ( \\ell \\to \\infty ) $. We see good agreement of the simulations with this distribution in fig.~\\ref{fig:2-seg} (a). 
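Equation \eqref{eq:alpha1=,s=} is easy to verify numerically. The snippet below (an illustrative sketch with our own function names) evaluates it for $(r,\rho)=(0.64,0.44)$ and checks that the resulting current indeed equals the maximal current $r/4$ of eq.~\eqref{eq:J:SMC}:

```python
import math

def smc_phase(r, rho):
    """Low density alpha1 and shock position s in the S-MC phase
    of the 2-segment TASEP, eq. (alpha1=,s=)."""
    sq = math.sqrt(1 - r)
    alpha1 = (1 - sq) / 2
    s = 0.5 + (1 - 2 * rho) / sq
    return alpha1, s

alpha1, s = smc_phase(r=0.64, rho=0.44)
assert abs(alpha1 - 0.2) < 1e-12 and abs(s - 0.7) < 1e-12
# The current in the 1st segment equals the maximal current r/4,
# since alpha1*(1-alpha1) = (1 - (1-r))/4 = r/4 exactly:
assert abs(alpha1 * (1 - alpha1) - 0.64 / 4) < 1e-12
```

The identity $\alpha_1(1-\alpha_1) = r/4$ holds exactly for $\alpha_1=(1-\sqrt{1-r})/2$, so the check is parameter-independent.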
This corresponds to the fact that the (microscopic) density profile \\eqref{eq:rho_i:=} in the 1st segment is well described in terms of the error function $\\mathrm{erf}[\\cdot]$ \\cite{bib:BSB}, see fig.~\\ref{fig:2-seg} (b): \n\\begin{align}\n \\label{eq:=erf}\n \\rho_i = \\frac{\\sqrt{ 1-r } }{2} \\ \\mathrm{erf} \n \\bigg[ \\frac{ i - ( \\langle S \\rangle_{\\mathrm T} + \\frac 1 2 ) }{ \\sqrt{2 \\sigma^2 } } \\bigg] \n + \\frac{1}{2} .\n\\end{align} \nWe chose the parameter values such that the corresponding densities and shock positions are given as \n\\begin{align}\n \\nonumber \n & ( r , \\rho ) = (0.96, 0.5 ) , ( 0.84 , 0.48 ) , ( 0.64,0.44 ) , (0.36,0.38 ) \\Leftrightarrow \\\\ \n & ( \\alpha_1 , s ) = ( 0.4 , 0.5 ) , (0.3, 0.6 ) , ( 0.2 , 0.7 ) , ( 0.1, 0.8 ) , \n\\end{align}\nrespectively, according to eq.~\\eqref{eq:alpha1=,s=}. \n(In the 2nd segment, we expect that the density profile, which deviates from $ 1\/2 $, is well described by the exact finite-size effect in the maximal current phase of the open TASEP \\cite{bib:DEHP}.)\n\n\n\\begin{figure}\\begin{center}\n \\includegraphics[width=0.24\\textwidth]{gauss.pdf} \\ \n \\includegraphics[width=0.24\\textwidth]{erf.pdf} \\ \n \\includegraphics[width=0.24\\textwidth]{sigma-vs-ell.pdf} \\ \n \\includegraphics[width=0.24\\textwidth]{sigma-vs-r.pdf} \n \\end{center} \n\\caption{ \n (a) Distributions of the second-class particle, \n (b) density profiles,\n (c) standard deviation $ \\sigma $ vs segment length $ \\ell $, and \n (d) $\\sigma$ vs hopping rate $r$. \n For (a) and (b) we have set the system length as $ \\ell =200$, and for (d) $ \\ell =10^3 $. \n Each plot marker was obtained by averaging data of a single simulation run over \n $ 10^6 \\le t \\le 10^7$ \n [exceptionally $ 10^6 \\le t \\le 5\\times 10^7$ for the cases $ \\ell > 10^3 $ in (c)]. 
\n The curves in (a) and (b) correspond to \n eqs.~\\eqref{eq:P(S)=Gauss} and \\eqref{eq:=erf}, \n respectively, with $ \\langle S \\rangle $ and $ \\sigma$ obtained from simulation data. Each line in (c) corresponds to $ ( \\ell \/10^4 )^{1\/2} \\times \\sigma |_{\\ell=10^4} $ with $ \\sigma |_{\\ell=10^4} $ obtained by simulations. \n\\label{fig:2-seg} } \n\\end{figure}\n\n In fig.~\\ref{fig:2-seg} (c), we observe $ \\sigma \\propto \\sqrt \\ell $ \\cite{bib:BSB}. Furthermore $ \\sigma \/ \\sqrt \\ell $ vs $r $ is shown in fig.~\\ref{fig:2-seg} (d) with different global densities $ \\rho $. So far we have not found an explicit formula of $ \\sigma \/ \\sqrt \\ell $, but it seems independent of $ \\rho $. \n\n\n\n \n \n\\section{4-segment TASEP}\\label{sect:4-seg}\nNow we turn to the 4-segment TASEP \\eqref{eq:four_pi=} \\cite{bib:BSB}. By the same argument as for the 2-segment case, the density $\\alpha_j $ of each segment $j$ is flat, when $ \\rho < \\rho_c $. (The phase transition line $ \\rho = \\rho_c $ is identical to that of the 2-segment case, fig.~\\ref{fig:diagrams} (a).)\n From the translational invariance, we have $ \\alpha_1 = \\alpha_3 $ and $ \\alpha_2 = \\alpha_4 $. Furthermore these densities have the same form \\eqref{eq:alpha1=,alpha2=} as for the 2-segment case \\cite{bib:BSB}. \n After the global density exceeds $ \\rho_c $, the 2nd and 4th segments maintain the density $\\alpha_2 =\\alpha_4 = 1\/2 $, and shocks appear in the 1st and 3rd segments \\cite{bib:BSB}:\n with $ \\alpha_1 = \\alpha_3 $, \n\\begin{align}\\label{eq:rho(x)_SMCSMC}\n \\rho(x) = \n \\begin{cases} \n \\alpha_1 \\ ( 0 < x < s_1 ) , \\ 1- \\alpha_1 \\ ( s_1 < x < 1 ) , \\\\ \n \\frac{1}{2} \\ ( 1 < x < 2 ) , \\\\ \n \\alpha_3 \\ ( 2 < x < s_3' ) , \\ 1- \\alpha_3 \\ ( s_3' < x < 3 ) , \\\\ \n \\frac{1}{2} \\ ( 3 < x < 4 ) . \n \\end{cases}\n\\end{align}\nThe shock positions are denoted by $ s_1 $ and $ s_3' = s_3 +2$ [$ s_3=0 $ (resp. 
$ s_3=1 $) corresponds to the boundary between the 2nd and 3rd (resp. 3rd and 4th) segments]. \n The conservation of the number of particles is written as \n\\begin{align}\n 4 \\rho = \n s_1 \\alpha_1 + (1-s_1) (1-\\alpha_1 ) + \\alpha_2 \n + s_3 \\alpha_3 + (1-s_3)(1- \\alpha_3 ) + \\alpha_4 .\n\\end{align}\nSolving this together with the current conservation \\eqref{eq:J=J1=J2}, \nwe find a restriction on the shock positions \\cite{bib:BSB} \n \\begin{align} s_1 + s_3 = 2 s , \\label{eq:s1+s3=s} \\end{align} with $s$ given by \\eqref{eq:alpha1=,s=}. \nThe form of the density $ \\alpha_1 $ is the same as for the 2-segment case \\eqref{eq:alpha1=,s=}.\nThe current $J$ is also unchanged \\cite{bib:BSB}, see fig.~\\ref{fig:diagrams}(b). One of the interesting findings in \\cite{bib:BSB} is that we cannot fix the shock positions $s_1$ and $s_3$ even at the macroscopic level, but they are synchronized through the restriction \\eqref{eq:s1+s3=s}, see fig.~\\ref{fig:4-seg} (a). The right bound of $ s_j $ is 1, and the left bound $ \\lambda $ is given by solving $ 2 s - \\lambda = 1 $: \n\\begin{align} \\label{eq:lambda=} \\lambda = 2 s - 1 . \\end{align}\n\n\\begin{figure}\\caption{ For each marker in (b), we averaged $ M(t) \/ (2t) $ of $ 10^{-3} \\le t\\le 10^{-2} $ over $ 10^6 $ simulation runs. \n\\label{fig:4-MSD} } \n\\end{figure}\n\nLet us investigate the correlation of the shocks. We expect that, on a very short time scale, the two shocks move independently, since they are far from each other. On the other hand, they are synchronized on a larger time scale, e.g. as in fig.~\\ref{fig:4-seg} (a). In order to quantify the (in)dependence of the two shocks, we introduce a correlation function \n\\begin{align}\n\\label{eq:C(t)=}\n C(t) = - \\big\\langle \\big( S_1 (t) - S_1 (0) \\big) \\big( S_3 (t) - S_3 (0) \\big) \\big\\rangle_{\\mathrm E} . 
\n\\end{align}\nDue to the synchronization $ S_1 (t) + S_3 (t) \\approx \\ell s $, we have \n\\begin{align} C(t) \\simeq M (t) \\quad ( t \\to + \\infty), \\end{align} \n which is observed in fig.~\\ref{fig:4-Correlation} (a).\nOn the other hand, their initial behaviours are completely different, see fig.~\\ref{fig:4-Correlation} (b,c). In the vicinity of $t= 0$, we conjecture that $C(t)$ increases more slowly than any power function $t^a (a>0)$. \n \n\n\\begin{figure}\\begin{center} \n \\includegraphics[width=0.24\\textwidth]{long.pdf} \\ \n \\includegraphics[width=0.24\\textwidth]{medium.pdf} \\ \n \\includegraphics[width=0.24\\textwidth]{short.pdf} \\ \n \\includegraphics[width=0.24\\textwidth]{t-vs-ell.pdf} \\end{center} \n \\caption{ \n (a,b,c) Comparison between the MSD and the correlation function \n in different time windows, and\n (d) crossover time vs segment length. \n For (a,b,c), we have set the values of the parameters as $ (\\ell,r, \\rho) = (1000,0.64,0.44) $, \n and averaged over $ 10^3,10^4 $ and $ 10^6 $ simulation runs, respectively.\n In (d), each marker was plotted by averaging over $ 10^3 $ simulation runs, \n and the lines are fitting curves $ c \\, \\ell^z$. \n\\label{fig:4-Correlation} } \n\\end{figure}\n\nLet us define the time $T$ as \n \\begin{align} T = \\inf \\big\\{ t > 0 \\big| 2 C(t) > M(t) \\big\\} , \\end{align} \ncharacterizing the crossover between the time scales of independence and synchronization. \nFigure \\ref{fig:4-Correlation} (d) shows $T$ vs segment length $\\ell$. Assuming the power law $ T \\simeq c\\, \\ell^z $, we fit the available simulation data. We write the results directly in fig.~\\ref{fig:4-Correlation} (d), and draw the corresponding straight lines as well. \n Under the further assumption that the exponent is independent of the parameters, \nwe simply average the four obtained exponents: $z \\approx 1.71$. 
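The fit $T \simeq c\,\ell^z$ amounts to ordinary least squares on log-log axes. The paper does not specify the fitting routine, so the sketch below is one plausible implementation, demonstrated on synthetic data with a known exponent:

```python
import math

def fit_power_law(ells, Ts):
    """Least-squares fit of T = c * ell**z on log-log axes, returning (c, z).
    A plausible reconstruction of the fitting step, not the authors' code."""
    xs = [math.log(l) for l in ells]
    ys = [math.log(t) for t in Ts]
    n = len(xs)
    xbar, ybar = sum(xs) / n, sum(ys) / n
    z = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) \
        / sum((x - xbar) ** 2 for x in xs)
    c = math.exp(ybar - z * xbar)
    return c, z

# Synthetic crossover times with a known exponent z = 1.71:
ells = [250, 500, 1000, 2000]
Ts = [2.0 * ell ** 1.71 for ell in ells]
c, z = fit_power_law(ells, Ts)
assert abs(z - 1.71) < 1e-9 and abs(c - 2.0) < 1e-6
```

On noiseless power-law data the regression recovers $(c,z)$ exactly; with simulation noise, the fitted $z$ carries an error bar that could be estimated, e.g., by bootstrapping over runs.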
\n\n\n\n\\section{Discussion}\\label{sect:disc}\nIn this work, we investigated the synchronization of shocks in the 4-segment TASEP, by means of two second-class particles. We found that the behaviour of the MSD $ M(t) $ of the shocks is far from simple. In the initial regime (very short time scale), it is diffusive and the coefficient is nothing but the formula for the particle current. In the intermediate diffusive regime, the coefficient $D$ known from the open TASEP provides a good estimate. This fact indicates that, up to the intermediate diffusive regime, the shocks' motions are determined by the values of the low and high densities, and are independent of the details of the boundary conditions. After passing through a sub-diffusive regime, the MSD reaches the asymptotic diffusive regime, where the diffusion coefficient is different from $D$. From the log-log graph and the numerical estimation, fig.~\\ref{fig:4-MSD} (a,c), it seems that $ M(t)\\propto \\sqrt t$ in the sub-diffusive regime. In some random walks, similar changes of the diffusivity have been found \\cite{bib:PM,bib:TSJS}. In our case, the \\textit{regime change} is spontaneously induced by particle number conservation, without directly imposing any interaction between the two second-class particles in the definition of the model. 
The exponent $z$, as well as $ \\mathcal D $ and $ \\epsilon $, should be estimated more precisely in larger systems, with longer simulation times and more simulation runs. \n\n\n\nThe generalization to the $2n $-segment case is straightforward: the $j$th segment has the rate 1 (resp. $r$) when $j$ is odd (resp. even). When $ \\rho < \\rho_c $, the density profile $ \\rho (x) \\ ( j - 1 < x < j ) $ of the $j$th segment is given as $\\rho (x) = \\alpha_1$ for odd $j$, and $\\rho (x) = \\alpha_2$ for even $j$, with the same forms as in the 2-segment case \\eqref{eq:alpha1=,alpha2=}. When the global density satisfies $ \\rho_c < \\rho < 1- \\rho_c $, the density profiles are flat with density $ 1\/2 $ in the even-numbered segments. On the other hand, $n$ shocks appear in the segments $j=1,3, \\dots,2n-1 $. Denoting their positions by $ s_j + j - 1 $, we find \n $ s_1 + s_3 + \\cdots + s_{2n-1 } = n s , $ from the conservation of the number of particles. As an analogue of the $n=2$ case, we expect that they are not static but synchronized. Since only one equation governs the synchronization of the $ n $ shocks, we naturally expect that the asymptotic diffusion constant $ \\mathcal D_n( \\alpha )$ satisfies \n $ \\mathcal D_1 ( \\alpha ) < \\mathcal D_2 ( \\alpha ) < \\mathcal D_3 ( \\alpha ) < \\cdots $\n with $ \\mathcal D_1 \\equiv 0 $ and $ \\mathcal D_2 ( \\alpha ) = \\mathcal D ( \\alpha ) $. \nA further intuitive conjecture is that the sub-diffusive regime vanishes as $ n \\to + \\infty $, and \n$\\lim_{ n\\to\\infty } \\mathcal D_n ( \\alpha ) = D(\\alpha)$. \n \n It is known that two shocks are synchronized in the model of \\cite{bib:CCB}. A simple generalization of the JL model \\cite{bib:SB}, e.g. $ p_i = r ( i \\in \\{\\ell , 2\\ell\\} ) , 1 ( i \\notin \\{\\ell , 2\\ell\\} ) $ with $L =2 \\ell $, also exhibits the same type of synchronization. 
One of the important questions is whether the MSDs of the shocks in these models behave like fig.~\\ref{fig:4-MSD} (a), and if so, whether the asymptotic diffusion coefficient is the same as for the 4-segment TASEP. Note that the positions of the two synchronized shocks are, in general, far from each other in these models as well as in the 4-segment TASEP. (See e.g. \\cite{bib:MH,bib:JWW,bib:J,bib:DG} for other types of synchronization.) While the motions of the second-class particles are locally defined, the shocks move as if they \\textit{knew} each other's position. We believe that this viewpoint offers hints for studying self-organization phenomena, e.g. in biological cells, as the exclusion process is one of the basic models in biophysics as well \\cite{bib:CMZ}. Application to traffic flows would also be an interesting problem. The local inhomogeneities in the JL and generalized JL models are very similar to traffic lights, and the combinations of segments that we studied here are reminiscent of different speed limits for cars.\n\n\n\n\n\\section*{Acknowledgements}\n The author thanks Ludger Santen and M Reza Shaebani for useful discussions.\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}
\nIn the context of cellular populations within organisms, the evolution of neoplasms and tumor cells \\cite{merlo2006cancer,kareva2015cancer,smart2021roles}, interactions within the immune system ~\\cite{tauber2008immune,schmid2003evolutionary}, as well as the appearance of dominant clones during cell reprogramming~\\cite{shakiba2019cell}, exhibit phenomenology akin to ecological competition. \nBeyond biology \\cite{tilman1982resource,morin2009community,tuljapurkar2013population}, competitive interactions shape behaviors in a vast array of systems such as competition economics \\cite{budzinski2007monoculture} and social networks \\cite{koura2017competitive}.\n\nA classical example of the effects of inter-species competition - which inspired important ecological competition paradigms - is the differentiation in beak forms of finches in the Gal\\'apagos islands \\cite{lewin1983finches,lack1983darwin}. \nOn these islands, dissimilar finch species possess beaks of varying shapes and sizes allowing them to consume different food sources and thus occupy distinct niches; this type of ecosystem structure is commonly referred to as an ecological niche model \\cite{grant1979darwin,pocheville2015ecological}. \nVarious niche models have been used to describe the community structures observed in diverse ecosystems such as plant grassland communities \\cite{zuppinger2014selection,silvertown2004plant}, marine plankton~\\cite{cullen1998behavior} and conservation ecology~\\cite{melo2020ecological,aguirre2015similar}.\nCommonly, niche specialization results in weaker competition for resources between individuals occupying separate niches (inter-species competition) compared to the competition between individuals of the same kind residing in the same niche (intra-species competition). The competition strength (determining the niche overlap \\cite{badali2020effects,capitan2015similar}) defined as the ratio between the inter-specific and intra-specific competition strengths. 
\n\nAnother paradigmatic class of ecological models comprises neutral models that are often used to describe noisy ecosystems wherein individuals from distinct species are functionally equivalent. \nIn contrast to niche models, interactions between all individuals in neutral models are identical regardless of their species \\cite{bell2001neutral,hubbell2001unified,chave2004neutral}.\nThus, neutral models have commonly served as the null hypotheses for the exploration of ecological processes in various settings where the differences between inter-specific and intra-specific interaction are functionally negligible \\cite{bell2001neutral,gotelli2006null,blythe2012neutral}.\nNeutral theories can be viewed as a limit of niche theories where inter-specific and intra-specific interactions are equal: in other words, all species reside in completely overlapping niches ~\\cite{grover1997resource,begon2006ecology,pocheville2015ecological}.\n\nIn multi-species communities, the intra- and inter-species interactions as well as interactions with the environment, can lead to complex community composition and population dynamics; some species survive in the long term, while others are driven to extinction.\nHowever, in large communities with high numbers of competing species, it is often impractical or impossible to characterize the entire system composition by the assemblage of abundances for each species.\nHence, coarse-grained paradigmatic descriptions are often used to provide general insights into the common behavior of these ecological communities.\n\nTwo variables commonly used to characterize complex ecological communities are 1) the richness, reflecting the number of co-occurring species \\cite{adams2009species,kery2020applied}, and 2) the species abundance distributions (SAD) - the number of species present at a given abundance. 
The latter is closely related to the species rank abundance (SRA) - the species ranked in terms of their abundance \\cite{nias1968clone, rulands2018universality, de2020naive, mcgill2007species, matthews2015species}.\nThese aggregate variables are observable experimentally and serve as the reporters on the underlying community structure, dynamics and the interaction network \\cite{rahbek2001multiscale,hong2006predicting,adler2011productivity,valencia2020synchrony}. \nRichness, for example, is commonly considered to be an indicator of the competition strength and stability of the ecosystem \\cite{pimm1984complexity, ives2000stability, jousset2011intraspecific,mallon2015microbial,capitan2017stochastic}.\n\n\nThe shape of the SAD is also used as a proxy for the structure of the underlying interactions' network. For high immigration or weak inter-species competition, the SAD commonly has a peak at high species abundance, away from extinction.\nThis community structure is closely related to the niche models whereby different species co-exist: most species inhabit their own niches with their species abundance fluctuating around the peak of the SAD. Conversely, other ecosystems, such as many microbial communities and T-cell repertoires, exhibit few high-abundance species alongside highly diverse populations of low-abundance species \\cite{lynch2015ecology, de2020naive}.\nThis `rare biosphere' or `hollow-curved distribution' is described by a unimodal, monotonically decreasing SAD. Interestingly, this unimodal behaviour is empirically observed in many different ecosystems and is often considered universal (see \\cite{leidinger2017biodiversity} and references therein). 
Neutral models have been championed to describe the emergence of this universality, although other theoretical explanations for the `rare biosphere' SAD in competitive ecosystems have been suggested \\cite{mcgill2007species,magurran2013measuring}.\n\n\n\n\n\n\n\n\n\nMany theoretical studies that aim to quantify the competitive dynamics, the richness and the abundance distributions in ecological populations commonly employ a small number of paradigmatic models.\nOne common model of ecological competition is the deterministic, competitive Lotka-Volterra (LV) model, which has been especially useful in characterizing the niche regime by describing stable species coexistence as stable fixed points of the model. \nDepending on the ratios of inter- and intra-species competition strengths, deterministic LV models provide examples of both the `niche-like' regimes of multiple species coexistence, and the competitive exclusion where species with weaker intra-species interactions drive others to extinction \\cite{hardin1960competitive,macarthur1967limiting,MacArthur1969species, gause2019struggle}.\nIn complex scenarios, such as when the strengths of inter-specific interactions are randomly distributed among different species pairs, multi-species deterministic LV models can exhibit not only deterministic fixed-point coexistence but also chaotic behavior, reflected in the SAD shapes and richness \\cite{scheffer2006self,vergnon2012emergent,kessler2015generalized,bunin2016interaction,roy2020complex}. 
\nBeyond disorder in the interaction network, dynamical noise from various sources - both extrinsic and intrinsic - can have important effects on the system composition and dynamics, especially in the neutral regime.\nIn order to capture experimentally observed stochastic fluctuations of population abundances, environmental noise is often introduced into the mathematical models \\cite{fisher2014transition,lynch2015ecology,verberk2011explaining,fowler2013colonization,barabas2016effect}.\nIn particular, by tuning the strength of environmental noise, the shape of the SAD can change from unimodal to bimodal \\cite{fisher2014transition}, indicating a transition between `niche-like' and `neutral-like' regimes.\n\nRegardless of the presence of external environmental noise or randomness in the interaction network, the demographic noise - the inherent randomness of birth and death events - is ever-present and has a fundamental impact on the community structure and stochastic population dynamics \\cite{hubbell2001unified,alonso2006merits,haegeman2011mathematical}. \n\nIn particular, demographic noise has been suggested to be responsible for the SAD shape in neutral systems; these are characterized by a power-law decay with an exponential cutoff that may account for the `rare biosphere' abundance distributions observed in many experimental systems \\cite{hubbell2001unified,baxter2007exact,mckane2004analytic}. On the other hand, birth-death-immigration processes with demographic noise have also been shown to exhibit bimodal SADs at very low immigration rates \\cite{xu2018immigration}, breaking from the paradigm wherein neutrality is synonymous with an SAD of the `rare biosphere' type.\nAlthough demographic noise models have been shown to reproduce observed features of a number of ecological systems \\cite{haegeman2011mathematical,capitan2015similar,capitan2017stochastic,capitan2020competitive}, a complete picture of the different regimes of community structures is still missing. 
\nIn particular, it remains to be fully understood how the interplay of the competition strength, the immigration rate and demographic noise, together with the resulting dynamics of species turnover, shapes the transitions between these different community structure regimes.\n\n\n\n\\begin{figure}[t!]\n \\begin{flushleft}\n A\n \\end{flushleft}\n \\includegraphics[width=\\columnwidth]{figures\/neutral-vs-niche.pdf}\n \\begin{flushleft}\n B\n \\end{flushleft}\n \\includegraphics[width=\\columnwidth]{figures\/island-2.pdf}\n \\caption{Island model. Panel A: Conventionally, weak competition is associated with a `niche-like' bimodal SAD, while strong competition is linked to a `neutral-like' monotonically decreasing SAD. However, this paradigm is incomplete, since the dependence on other parameters, such as the immigration rate $\\mu$ or the diversity $S$, has not been fully investigated; the full phase space, e.g. $(\\mu, \\rho)$ or $(S, \\rho)$, remains unexplored. Panel B: Illustration of the model. An island hosts $J$ individuals from $S^*$ species. Each individual may proliferate or die with rates determined by the inter- and intraspecific interactions within the island. Here we consider a deterministic, symmetric, fully connected interspecific interaction network governed by a single parameter: the competitive overlap $\\rho$. Additionally, individuals may migrate from a cloud\/mainland containing $S$ species into the island at a constant rate $\\mu$. }\n \\label{fig:fig1}\n\\end{figure}\n\nIn this paper, we systematically study the full parameter space of the community composition and structure using a competitive LV model with demographic noise and an interaction network of minimal complexity; more complex scenarios may be examined by building on this paradigmatic null model. 
We show that, beyond the dichotomous picture of neutral and niche regimes, many different regimes of richness and abundance-distribution shape emerge from the interplay between the competition strength and the immigration in the presence of stochasticity, as illustrated in Fig.~\\ref{fig:fig1}.\nThese regimes exhibit contrasting dynamics that underpin the differences in community structure and the transitions between regimes. \n\nThe paper is structured as follows. In Sec.~\\ref{sec:model} we introduce the minimal model. In Sec.~\\ref{sec:results} we present our main results, including the regime boundaries, the richness and abundance distributions of each regime, the associated underlying dynamics, and the species correlation structure. Lastly, in Sec.~\\ref{sec:discussion}, we discuss our results in the context of experimental observations. \n\n\n\n\n\\section{Mathematical models and methods}\n\\label{sec:model}\n\n\n\nThe minimal model studied in this paper incorporates three essential ecological processes: competitive interactions, immigration and intrinsic demographic noise~\\cite{black2012stochastic,haegeman2011mathematical}.\nIn the model, illustrated in Fig.~\\ref{fig:fig1}B, the community composition is characterized by the species abundances, $\\vec{n}=(n_1,\\dots n_i\\dots n_S)$, where the discrete random variable $n_i$ represents the number of individuals of the $i$-th species, and $S$ is the total number of species.\nThe dynamics of the system are described by a birth-death process with interactions, whereby the abundance (number of individuals) of any species can increase by one with the birth rate $q^+$ or decrease by one with the death rate $q^-$, defined as \n\\begin{align}\nq_i^+(\\vec{n})&=r^+ n_i +\\mu, \\\\\nq_i^-(\\vec{n})&=r^- n_i + \\frac{r}{K} n_i \\left(n_i +\\sum_{j\\neq i} \\rho _{j,i} n_j\\right) \\nonumber\n\\end{align}\nfor each species $i\\in \\{1,2,\\dots,S\\}$.\n\nThe birth rate 
incorporates two factors: the per-capita birth rate $r^+$ corresponding to procreation, and the constant, positive immigration rate $\\mu$ from an external basin, which ensures that the system possesses no global absorbing extinction state \\cite{capitan2015similar}. \nThe death rate includes the `bare' per-capita death rate of the organisms, $r^-$, and the effect of the competitive interactions, which increase the mortality at high population numbers and are incorporated through a quadratic term in the death rate. The parameter $\\rho_{j,i}$ quantifies the competition strength between species $i$ and $j$.\nThe carrying capacity for each species is represented by $K$.\nThe per-capita turnover rate is $r=r^+-r^-$.\n\nThese aggregate coarse-grained parameters are determined by a variety of system factors, such as the efficiency of resource consumption, interactions with the environment and external forces. Although it is possible to derive these rates from explicit resource competition models in several special cases~\\cite{macArthur1970species,chesson1990macarthur,o2018whence}, the expressions are highly model-dependent, and we do not model them explicitly here.\nFor biological reasons, $K$, $r^+$ and $r^-$ are all positive, which results in strictly positive transition rates for all $ n_i\\geq 0$. In this paper, we focus on the homogeneous case where the parameters ($\\mu$, $K$, $\\rho$, $r^+$, and $r^-$) are identical for all species and the competitive interactions are uniform, $\\rho_{j,i}=\\rho$ for all species pairs $i\\neq j$. \nThis symmetric and homogeneous interaction network \nhas been used in \\cite{badali2020effects,capitan2017stochastic,capitan2020competitive,haegeman2011mathematical}, in contrast to models wherein the competition strengths are inhomogeneous and drawn from a distribution \\cite{fisher2014transition,allesina2012stability}. 
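The analytics in this paper are verified against Gillespie simulations (see SM). A minimal, illustrative sampler of the birth-death process defined by the rates above might look as follows; the parameter values are arbitrary demonstration choices, not those used in the figures.

```python
import random

def gillespie_lv(S=5, K=20.0, rho=0.5, mu=0.1, r_plus=2.0, r_minus=1.0,
                 steps=20000, seed=0):
    """Gillespie sampling of the competitive birth-death process with
    q_i^+ = r_plus*n_i + mu  and
    q_i^- = r_minus*n_i + (r/K)*n_i*(n_i + rho*sum_{j != i} n_j).
    Returns the abundance vector after `steps` events."""
    rng = random.Random(seed)
    r = r_plus - r_minus              # per-capita turnover rate
    n = [0] * S                       # start from an empty island
    for _ in range(steps):
        total = sum(n)
        qp = [r_plus * ni + mu for ni in n]
        qm = [r_minus * ni + (r / K) * ni * (ni + rho * (total - ni))
              for ni in n]
        # choose the next event with probability proportional to its rate
        u = rng.random() * (sum(qp) + sum(qm))
        acc = 0.0
        for i in range(S):
            acc += qp[i]
            if u < acc:
                n[i] += 1
                break
            acc += qm[i]
            if u < acc:
                n[i] -= 1
                break
    return n
```

Since the death rate vanishes at $n_i=0$, abundances never become negative, while the immigration term $\\mu$ keeps the zero state non-absorbing.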
\nThis minimal-complexity model allows us to investigate the full phase space of the system and examine the underlying principles without impractical multi-parameter sweeps.\n\nThe stochastic evolution of the system is described by the master equation\n\\begin{multline}\n\\label{master-eq}\n\\partial_t {\\rm \\mathcal{P}}(\\vec{n};t)= \\sum_{i}\\left\\{ -\\left[q^+_i(\\vec{n})+q^-_i(\\vec{n})\\right]{\\rm \\mathcal{P}}(\\vec{n};t) \\vphantom{\\left[ \\sum q^+_i\\right]} \\right.\\\\\n\\left. \\vphantom{\\left[ \\sum q^+_i\\right]} +q^+_i (\\vec{n}-\\vec{e}_i){\\rm \\mathcal{P}}(\\vec{n}-\\vec{e}_i;t)+q^-_i (\\vec{n}+\\vec{e}_i){\\rm \\mathcal{P}}(\\vec{n}+\\vec{e}_i;t)\\right\\},\n\\end{multline}\nwhere $\\vec{e}_i$ is the standard basis vector and ${\\rm \\mathcal{P}}(\\vec{n};t)$ is the joint probability density function for the system to exhibit the species composition $\\vec{n}$ at time $t$ \\cite{gardiner1985handbook}. 
In the long time limit, the system reaches a stationary state where $\\partial_t \\mathcal{P}=0$.\n\nThe species abundance distribution (SAD), describing the mean fraction of species with $n$ individuals, can be related to the marginal single-species probability distribution $P(n)$:\n\\begin{align}\n {\\rm SAD}&(n) = \\frac{1}{S}\\left\\langle \\sum_{i=1}^{S}\\delta (n_i -n) \\right\\rangle\\\\\n \\nonumber &=\\frac{1}{S}\\sum_{i=1}^S\\left[ \\sum_{n_1=0}^{\\infty}\\cdots \\sum_{n_{i-1}=0}^{\\infty}\\sum_{n_{i+1}=0}^{\\infty}\\cdots \\sum_{n_S=0}^{\\infty}{\\rm \\mathcal{P}}(\\vec{n})|_{n_i=n}\\right] \\\\ &=\\nonumber P_i(n) \\equiv P(n),\n\\end{align}\nwhere $\\delta$ is the Kronecker delta function, and we have used the fact that in this homogeneous system the marginal distributions $P_i(n)=P(n)$ of the population abundance are identical for all species. \n\nIn the Fokker-Planck approximation, the continuous deterministic limit of the master equation (\\ref{master-eq}) recovers the well-known competitive Lotka-Volterra (LV) equations\n\\begin{align}\n \\frac{\\partial x_i}{\\partial t}&= q_i^+(\\vec{x}) - q_i^-(\\vec{x})\\nonumber\\\\\n &=r x_i \\left( 1 - \\frac{x_i}{K} - \\sum_{j\\neq i} \\rho \\frac{x_j}{K} \\right) + \\mu \n \\label{eq:LV}\n\\end{align}\nfor the variable $x_i$, which corresponds to the continuous deterministic limit\nof the discrete variable $n_i$ \\cite{gardiner1985handbook}; see the Supplementary Materials (SM) for further details. 
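As a quick numerical consistency check, the symmetric stationary solution of these deterministic dynamics (Eq.~(\\ref{eq:fixed-LV}) below) must be a root of $r x \\left(1 - [1+\\rho(S-1)]\\, x\/K\\right) + \\mu = 0$. A sketch with illustrative parameter values:

```python
import math

def x_tilde(S, K=100.0, rho=0.5, mu=0.1, r=1.0):
    """Symmetric deterministic fixed point of the competitive LV dynamics."""
    A = 1.0 + rho * (S - 1)
    return K / (2.0 * A) * (1.0 + math.sqrt(1.0 + 4.0 * mu * A / (r * K)))

S, K, rho, mu, r = 30, 100.0, 0.5, 0.1, 1.0
x = x_tilde(S, K, rho, mu, r)
# the fixed point solves r*x*(1 - (1 + rho*(S-1))*x/K) + mu = 0
residual = r * x * (1.0 - (1.0 + rho * (S - 1)) * x / K) + mu
assert abs(residual) < 1e-9
# with immigration, the abundance sits slightly above the mu = 0 value K/A
assert x > K / (1.0 + rho * (S - 1))
```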
\n\nThe deterministic steady state is given by\n\\begin{equation}\n \\tilde{x}(S) = \\frac{K}{2[1+\\rho(S-1)]}\\left\\{1 + \\sqrt{1+\\frac{4\\mu[1+\\rho(S -1)]}{r K}}\\right\\}.\n \\label{eq:fixed-LV}\n\\end{equation}\nNote that in the deterministic LV process all species survive with abundance $\\tilde{x}$ as long as $\\rho\\leq 1$ and $\\mu>0$ \\cite{capitan2015similar}.\nConversely, in the stochastic competitive environment the number of individuals of each species fluctuates, occasionally reaching extinction. \nThus, the number of (co-)existing species $S^*$ is a stochastic variable as well, and may be smaller than the overall number of species $S$ in the immigration flux from the larger basin, $S^*\\leq S$.\nThe richness, denoted $\\langle S^* \\rangle$, is defined as the average number of (co-)existing species and is related to the SAD via\n\\begin{equation}\n\\label{eq:richness}\n\\langle S^* \\rangle = S(1-P(0)),\n\\end{equation}\nwhich, intuitively, is determined by the probability that a species is present in the system, $1-P(0)$.\n\nNo exact analytical solution of the high-dimensional master equation \\eqref{master-eq} is known for general competition strength $\\rho$. To understand the principles of community organization and the impact of competition, immigration and demographic noise, we develop approximate analytical solutions to the master equation, verified by Gillespie simulations (see SM for details).\n\n\\section{Results}\n\\label{sec:results}\n\n\\subsection{Mean-Field Approximation}\nThe full master equation \\eqref{master-eq} can be reduced to a one-dimensional approximation for the marginal distribution $P(n)$ with effective birth-death rates (see SM). 
The SAD, $P(n)$, is obtained as a self-consistent stationary solution of this equation,\n\\begin{multline}\n P(n) \\equiv P_i(n_i=n)\\\\\n =P(0)\\frac{(r^+)^{n}(\\mu\/r^+)_{n}}{n!\\prod_{n_i=1}^{n}\\left(r^-+r n_i\/K+r\\rho \\sum_{j\\neq i}^S\\langle n_j |n_i \\rangle \/K\\right)}.\\\\\n \\label{eq:mean-field}\n\\end{multline}\nTo obtain an analytical approximation to $P(n)$, we use a mean-field closure for the unknown conditional averages $\\langle n_j |n_i\\rangle$:\n$\\left\\langle \\sum_{j\\neq i} n_j |n_i\\right\\rangle\\approx (S-1) \\langle n \\rangle$.\nThus, \\eqref{eq:mean-field} becomes a closed-form implicit equation for the probability distribution $P(n)$, which can be solved numerically.\nWe have found good agreement between exact stochastic simulation results and this mean-field approximation for most of the parameter space examined (see SM). \n\nFollowing \\eqref{eq:richness}, the average richness in the mean-field approximation is (see SM)\n\\begin{eqnarray}\n\\langle S^*\\rangle = S \\left( 1 - \\frac{1}{{_1}F_1[a,b+1;c]}\\right ) , \n\\label{eq:mf-richness}\n\\end{eqnarray}\nwhere $P(0)=1\/{_1F_1}[a,b+1;c]$ is the normalization constant of $P(n)$, ${_1F_1}[a,b;c]$ is Kummer's confluent hypergeometric function, $a=\\mu\/r^+$, $b=[r^-K+r\\rho (S-1)\\langle n\\rangle ]\/r$, and $c={r^+ K}\/{r}$.\n\n\n\n\n\\subsection{The system exhibits rich behavior with distinct regimes of population structure controlled by the competition strength, the immigration rate and the species number} \n\\label{sec:Phases}\n\nDepending on the competition strength, the immigration rate, the number of species and the system size, the population can exhibit a number of different regimes of behavior, which can be categorized by their richness and the shape of their SAD, as visualized in Fig.~\\ref{fig:phases_sim} and described below.\n\n\\subsubsection{Richness regimes}\nIn the classical deterministic LV model, the system 
exhibits either an interior fixed point with full coexistence of all species at abundances given by Eq.~\\ref{eq:fixed-LV}, or mass extinction with a single surviving species, in agreement with the well-known Gause's law of deterministic competitive exclusion \\cite{capitan2015similar}. By contrast, the stochastic model may also exhibit partial coexistence due to the abundance fluctuations arising from the demographic noise, whereby a subset of the species is driven to temporary extinction. Overall, the number of co-existing species and their abundances are determined by the balance between the immigration and the stochastic competitive exclusion events.\nThree distinct richness regimes can be discerned, as shown in Fig.~\\ref{fig:phases_sim}, based on the variation of the richness $\\langle S^* \\rangle$ across the ($\\rho$,$\\mu$,$S$) parameter space.\n\nAt low competition strength - region (a) in Fig.~\\ref{fig:phases_sim}A - all species co-exist, so that the richness of the system is equal to the number of species, as in the deterministic case: $\\langle S^*\\rangle \\approx S$. 
\nIn this regime, each species effectively inhabits its own niche because the inter-species competition is not sufficiently strong to drive any of the species to extinction, even in the presence of abundance fluctuations arising from the demographic noise.\nThe probability for a species to be present is determined by the balance of its immigration rate and its death rate.\nAt higher immigration rates this regime extends into regions with higher competition strength $\\rho$: high immigration rates stabilize full-richness populations even at relatively high competition strength.\n\n\n\nIn the second regime - region (b) in Fig.~\\ref{fig:phases_sim}A - only a fraction of the species are simultaneously present on average; we denote this as the partial coexistence regime.\nIn this regime, the immigration influx is not high enough to prevent temporary stochastic extinctions of some species resulting from the competition.\n\nAt very high competition strengths, a complete exclusion regime - region (c) in Fig.~\\ref{fig:phases_sim}A - is found. \nHigh competition and very low immigration rates act in unison to reduce the richness to fewer than two species.\nAlthough regime (c) may appear similar to regime (b), since both exhibit partial coexistence, they are distinguished by key behavioral features, as explained below. 
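These richness regimes can be probed numerically with the mean-field closure: solve the self-consistency condition on $\\langle n \\rangle$ implied by Eq.~(\\ref{eq:mean-field}) and evaluate the richness via Eq.~(\\ref{eq:richness}). A minimal sketch, using bisection because plain iteration of the closure can be unstable at strong competition; parameter values are illustrative.

```python
def sad_mean_field(mean_n, S=30, K=100.0, rho=0.5, mu=0.1,
                   r_plus=2.0, r_minus=1.0, n_max=400):
    """Stationary P(n) of the effective 1D birth-death chain for a
    given value of the closure average <n> (truncated at n_max)."""
    r = r_plus - r_minus
    w = [1.0]  # unnormalized weights, w(n) = prod q+(m-1)/q-(m)
    for m in range(1, n_max + 1):
        qp = r_plus * (m - 1) + mu                                      # q+(m-1)
        qm = m * (r_minus + r * m / K + r * rho * (S - 1) * mean_n / K)  # q-(m)
        w.append(w[-1] * qp / qm)
    Z = sum(w)
    return [wi / Z for wi in w]

def self_consistent_sad(S=30, K=100.0, rho=0.5, mu=0.1,
                        r_plus=2.0, r_minus=1.0, n_max=400):
    """Solve <n> = sum_n n P(n | <n>) by bisection on the excess."""
    def excess(mean_n):
        P = sad_mean_field(mean_n, S, K, rho, mu, r_plus, r_minus, n_max)
        return sum(m * Pm for m, Pm in enumerate(P)) - mean_n
    lo, hi = 0.0, 2.0 * K        # excess(lo) > 0, excess(hi) < 0
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if excess(mid) > 0.0:
            lo = mid
        else:
            hi = mid
    mean_n = 0.5 * (lo + hi)
    return sad_mean_field(mean_n, S, K, rho, mu, r_plus, r_minus, n_max), mean_n
```

At weak competition the resulting $P(0)$ is small and $\\langle S^* \\rangle \\approx S$, as in regime (a); at strong competition $P(0)$ grows and the richness drops.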
\n\nNote that the stochasticity is central to the effect of the competition on the observed richness.\nStochastic fluctuations increase the risk of extinction with increasing competitive overlap, unlike in the deterministic case where the richness is independent of the interaction strength for $\\rho < 1$ \\cite{capitan2015similar}.\n\n\n\n\n\n\n\n\n\\subsubsection{SAD shape and modality regimes}\n\\begin{figure*}[ht!]\n \\begin{minipage}{.30\\linewidth}\n \\begin{flushleft}\n A\n \\end{flushleft}\n \\includegraphics[width=\\textwidth]{figures\/Fig3-A.pdf}\n \\end{minipage}\n \\begin{minipage}{.65\\linewidth}\n \\begin{flushleft}\n B\n \\end{flushleft}\n \\includegraphics[width=\\textwidth]{figures\/Fig3-B.pdf}\n \\end{minipage}\n \\begin{minipage}[t]{.48\\textwidth}\n \\begin{flushleft}\n C\n \\end{flushleft}\n \\vspace{0pt}\n \\includegraphics[width=\\textwidth]{figures\/Fig3-C.pdf}\n \\end{minipage}\n \\hfill\n \\begin{minipage}[t]{.48\\textwidth}\n \\begin{flushright}\n \\begin{flushleft}\n D\n \\end{flushleft}\n \\vspace{8pt}\n \\includegraphics[width=\\textwidth]{figures\/Fig3-D.pdf}\n \\end{flushright} \n \\end{minipage}\n \\caption{Phenomenology of the population structures. Panel \\textbf{A}: The system possesses three distinct richness phases. (a): full coexistence of all the species $\\langle S^* \\rangle \\approx S$; (b): partial coexistence with $\\langle S^* \\rangle < S$; (c): a single species exists on average. Panel \\textbf{B}: Different population regimes are distinguished by different SAD modalities. 
(I): immigration-dominated regime with a unimodal SAD peaked at the typical abundance given by the positive root of $\\tilde{n}$; (II): bimodal regime with species at non-zero abundance $\\tilde{n}$ and a rapid species-turnover peak at zero abundance; (III): `rare biosphere' regime with a unimodal SAD peaked at zero abundance, resulting from the rapid turnover of the temporarily extinct species; (IV): multimodal regime.\n Panels \\textbf{C} and \\textbf{D}: Intersection of the modality and richness regimes in the $(\\mu,\\rho)$ plane (\\textbf{C}) and the $(S,\\rho)$ plane (\\textbf{D}); see text for discussion. In panels \\textbf{A}, \\textbf{B} and \\textbf{C} the number of species is $S=30$. In panel \\textbf{D} the immigration rate is $\\mu=10^{-1}$. For all panels: colored regions represent simulation data (see Methods), whereas solid black lines show the boundaries obtained from the mean-field approximation.\n \n The master equation \\eqref{master-eq} is simulated using the Gillespie algorithm with $6 \\cdot 10^8$ time steps, $r^+=2$, $r^-=1$, and $K=100$.\n \n }\n \\label{fig:phases_sim}\n\\end{figure*}\nBesides the richness, the balance between immigration and stochastic competitive extinctions also dictates the mean abundances of individual species and the species abundance distribution (SAD).\nWhen the immigration influx of individuals into the system is higher than the average out-flux due to the transient extinctions - shown in Fig.~\\ref{fig:phases_sim}B as region (I) - most species are forced away from extinction.\nIn this regime, the SAD is unimodal with a peak at relatively high species abundance $\\tilde{n}$, approximately located at \n\\begin{equation}\n \\label{eq:dom-level}\n \\tilde{n} =\n \\frac{K-\\rho(S-1)\\langle n\\rangle}{2}\\left\\{1 \\pm \\sqrt{1+4\\frac{(\\mu-r^+) K}{r(K-\\rho(S-1)\\langle n\\rangle)^2}}\\right\\},\n\\end{equation}\nwhich agrees with the simulation results (see Fig.~\\ref{fig:Fig3} and SM).\n\n\n\nAt lower 
immigration rates - regime (II) in Fig.~\\ref{fig:phases_sim}B - immigration is not strong enough to overcome the competition-driven temporary extinctions of some of the species, and the SAD develops an additional peak around $n=0$ corresponding to the temporarily extinct species. \nThe subset of `quasi-stably' co-existing species, whose abundances fluctuate around $\\tilde{n}$ within the high `niche-like' abundance peak, dominates the total population, punctuated by rare fluctuation-driven extinctions and the occasional invasion of a temporarily extinct species into the dominant population.\nBy contrast, the dynamics of species in the peak at $n=0$ are characterized by the rapid turnover of the remaining species close to extinction.\n\n\n\n\nAt low immigration rates, the peak at \\eqref{eq:dom-level} coincides with the deterministic stable solution in \\eqref{eq:fixed-LV} (see SM):\n\\begin{equation}\n\\lim_{\\mu\\rightarrow 0}\\tilde{n}=\\lim_{\\mu\\rightarrow 0}\\tilde{x}\\left(\\langle S^* \\rangle \\right) = \\frac{K}{1+\\rho(\\langle S^*\\rangle -1)}.\n\\label{eq:Lim_ss}\n\\end{equation}\nNamely, in the bimodal regime the coexisting dominant species fluctuate around $\\tilde{n}$ which, at low immigration, is the deterministic fixed point with $\\langle S^* \\rangle$ species. \nThus, the dynamics of abundant species around $\\tilde{n}$ can be heuristically understood as abundance-dependent diffusion in an effective potential well of the Fokker-Planck equation (see Sec.~\\ref{sec:model} and SM). 
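For reference, \\eqref{eq:dom-level} is the root formula of the quadratic $\\tilde{n}^2 - A\\tilde{n} - (\\mu - r^+)K\/r = 0$ with $A = K - \\rho(S-1)\\langle n \\rangle$; this is our reading of the peak condition (the derivation is in the SM). A quick numerical sanity check of both branches, with an illustrative value of $\\langle n \\rangle$:

```python
import math

def n_tilde(mean_n, S=30, K=100.0, rho=0.1, mu=1.0, r_plus=2.0, r_minus=1.0):
    """Both branches of the SAD peak location formula (eq:dom-level)."""
    r = r_plus - r_minus
    A = K - rho * (S - 1) * mean_n
    disc = math.sqrt(1.0 + 4.0 * (mu - r_plus) * K / (r * A * A))
    return (A / 2.0) * (1.0 + disc), (A / 2.0) * (1.0 - disc)

# both branches must solve n^2 - A*n - (mu - r_plus)*K/r = 0
mean_n = 5.0
n_hi, n_lo = n_tilde(mean_n)
A = 100.0 - 0.1 * 29 * mean_n
for n in (n_hi, n_lo):
    assert abs(n * n - A * n - (1.0 - 2.0) * 100.0 / 1.0) < 1e-6
```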
\n\nSomewhat unexpectedly, at low immigration rates $\\mu \\lesssim 0.05$, the bimodal regime extends onto the neutral line at $\\rho = 1$, where the SAD has been commonly believed to have the monotonically decreasing `rare biosphere' shape \\cite{hubbell2001unified,baxter2007exact}.\nHowever, in this regime the competition is so strong that most of the time either no species is present at high abundance, or only one species survives in a kinetically `frozen', long-lived quasi-stable state with abundance $\\tilde{n}\\simeq K$, as observed previously \\cite{xu2018immigration}. \n\nFurthermore, at intermediate immigration rates and relatively high competition strengths we observe unimodal behaviour with a peak at zero rather than at finite $\\tilde{n}$ - region (III) in Fig.~\\ref{fig:phases_sim}B.\nIn this regime, the competition is strong enough that the fluctuations competitively drive populations to temporary extinction before any species is able to establish a `quasi-stable' state at high abundance.\nAll species undergo rapid turnover around zero, resulting from the balance between random immigration and extinction events.\nThis regime corresponds to what was previously described as the `rare biosphere': progressively fewer species are found at higher abundances, resulting in a monotonically decreasing SAD. \nThis SAD shape is classically recognized as a hallmark of a `neutral-like' regime. 
However, as shown in Fig.~\\ref{fig:phases_sim}, the unimodal regime (III) unexpectedly extends substantially beyond the neutral manifold $\\rho=1$ into the non-neutral regions with $\\rho < 1$, and the monotonically decreasing SAD persists even for competition strengths as low as $\\rho \\approx 0.1$ - an order of magnitude weaker than the classical neutral regime.\nThis challenges the common perception that the SAD of the `rare biosphere' is necessarily closely related to neutrality.\n\nFinally, we have found an entirely novel multimodal regime with more than two peaks - regime (IV) in Fig.~\\ref{fig:phases_sim} - which possesses one rapid-turnover peak around extinction and multiple peaks at non-zero abundances. \nSimilar to region (IIc), the peak at $n=0$ comprises species which rapidly turn over around extinction. However, in addition to the peak at positive abundance $\\sim K$ formed by one surviving species ($S^*=1$) in a meta-stable frozen state, this regime possesses a second peak at $\\sim K\/(1+\\rho)$ with two simultaneously surviving quasi-stable species ($S^*=2$), following \\eqref{eq:Lim_ss}. The slow fluctuations between the states with $S^*=1$ and $S^*=2$ result in an SAD with two non-zero modes at the quasi-stable dominance abundances, $\\tilde{n} \\sim K$ and $\\tilde{n}\\sim K\/(1+\\rho)$, observed in region (IV); these two peaks are only visibly separated when the richness is low and the carrying capacity is high, as follows from \\eqref{eq:Lim_ss}.\n \n\n\n\n\n\nThe transitions between the different regimes and the corresponding changes in the SAD shapes are illustrated in Fig.~\\ref{fig:Fig3}. Generally, at low competition strength $\\rho$, where the species are practically independent of each other and reside in largely non-overlapping niches, their typical abundance $\\tilde{n}$ is close to the carrying capacity $K$. 
Increasing competition strength $\\rho$ makes it harder to sustain co-existing species at high abundances, and accordingly $\\tilde{n}$ decreases, as illustrated in the top panels of Fig.~\\ref{fig:Fig3}A and Fig.~\\ref{fig:Fig3}B. With further increase in $\\rho$, the system behavior bifurcates depending on the immigration rate $\\mu$. \nAt high immigration rates, $\\mu \\gtrsim 0.05$, the competition-driven decrease in $\\tilde{n}$ continues up to the critical competition strength (calculated in the next section) where the peak around $\\tilde{n}$ disappears (top right panel of Fig.~\\ref{fig:Fig3}A and Fig.~\\ref{fig:Fig3}B), as the system is not able to sustain `quasi-stable' niche-like species co-existence. This corresponds to the transition from the bimodal region (II) to the `rare biosphere' neutral-like region (III) in Fig.~\\ref{fig:phases_sim}.\nAt lower immigration rates (top left panel of Fig.~\\ref{fig:Fig3}A and Fig.~\\ref{fig:Fig3}B), further increases in the competition strength eventually cause mass species extinctions, which allows the remaining few dominant species to maintain higher abundances (region (III) of Fig.~\\ref{fig:phases_sim}). As $\\rho \\rightarrow 1$, the system transitions to region (IIc) of Fig.~\\ref{fig:phases_sim}: only one dominant species remains, as described in \\cite{xu2018immigration}, with abundance $K$. \n\n\n\\subsubsection{Global Phase Diagram and Regime Boundaries}\n\\label{sec:regime-bound}\n\n\\begin{figure}[t!]\n \\begin{flushleft}\n A\n \\end{flushleft}\n \n \n \\includegraphics{figures\/Fig2-A.pdf}\n \\begin{flushleft}\n B\n \\end{flushleft}\n \n \\includegraphics[width=\\columnwidth, trim= 0 0 0 15,clip]{figures\/Fig2-B.pdf}\n \\caption{SAD changes between the different regimes. Panel \\textbf{A}: (upper left) Simulation results for species abundance distributions (SADs) at fixed $\\mu=10^{-3}$ as a function of $\\rho$; (upper right) same for $\\mu=1$. 
Different values of the interaction strength $\\rho$ are emphasized with different colors, indicated in the color-bar. (lower left) Simulation results for SADs as a function of $\\mu$ at fixed $\\rho=0.5$; (lower right) same for $\\rho=1$. Different immigration rates $\\mu$ are emphasized with different colors, shown in the color-bar. \n \\textbf{B}: The non-zero mode of the SAD, given by the positive solution of $\\tilde{n}$ and representing the dominant species abundance, as a function of $\\rho$ for different values of $\\mu$. Markers and dotted lines represent simulation results, while solid lines show the analytical prediction, \\eqref{eq:dom-level}.}\n \\label{fig:Fig3}\n\\end{figure}\n\nIn this section we describe the complete phase diagram of the system, defined by the intersection of the different richness and SAD shape\/modality regimes, derive the regime boundaries and discuss the transitions between them, as shown in the ($\\mu,\\rho$) plane in Fig.~\\ref{fig:phases_sim}C and in the ($S,\\rho$) plane in Fig.~\\ref{fig:phases_sim}D.\nWe show that the boundaries between the different regimes observed in simulations can be understood within simple mean-field theories, and discuss the underlying physical factors responsible for the transitions between different regimes.\n\nWe define the boundary between the full coexistence (a) and partial coexistence (b) regimes to be at $\\langle S^* 
\\rangle=S-1\/2$: the midpoint between full richness, $S^*=S$, and the loss of one species on average.\nSimilarly, the boundary between the partial coexistence (b) and exclusion (c) regimes is located at $\\langle S^* \\rangle=3\/2$, the midpoint between one and two species, such that on average only a single species is present in regime (c).\n\nTo derive the boundaries corresponding to the transitions of the SAD modality regimes, we use discrete derivatives of the approximated SAD to determine the existence of peaks and their locations (see SM).\nThe immigration-dominated regime (I) is characterized by a unimodal SAD with a peak at the positive root of $\\tilde{n}$ given in \\eqref{eq:dom-level}.\nCompared to this immigration-dominated regime, the neighboring bimodal and monotonically decreasing unimodal regimes - regions (II) and (III), respectively - differ by the emergence of a new mode at zero abundance. \n\nThus, the boundary that defines transitions from the immigration-dominated regime (I) to either regime (II) or (III) is described by a flattening of the SAD at $n=0$: $\\partial P(n)\/ \\partial n|_{n=0} = 0$, which, in the discrete case, heuristically corresponds to $P(0)=P(1)$. Combining this condition with the global balance of the master equation \\eqref{master-eq} results in the rate balance equation\n$\\langle q_i^+(\\vec{n})|n_i=0 \\rangle=\\langle q_i^-(\\vec{n})|n_i=1 \\rangle$.\n\nIn the mean-field approximation, this boundary is found at\n\\begin{equation}\n \\label{eq:boundary-I}\n \\mu = r^- +\\frac{r}{K}[1+\\rho(S-1)\\langle n \\rangle ].\n\\end{equation}\nThis equation recovers the analogous transition for $\\rho=1$ derived independently in \\cite{xu2018immigration}. 
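Eq.~(\\ref{eq:boundary-I}) is implicit through $\\langle n \\rangle$; closing it with the low-$\\mu$ deterministic estimate $\\langle n \\rangle \\approx K\/[1+\\rho(S-1)]$ (used again below) gives an explicit boundary curve. A sketch with illustrative parameters:

```python
def boundary_I(rho, S=30, K=100.0, r_plus=2.0, r_minus=1.0):
    """Boundary of the immigration-dominated regime (eq:boundary-I),
    closed with the low-mu deterministic estimate <n> ~ K/[1+rho*(S-1)]."""
    r = r_plus - r_minus
    mean_n = K / (1.0 + rho * (S - 1))
    return r_minus + (r / K) * (1.0 + rho * (S - 1) * mean_n)

# at rho = 0 the boundary reduces to mu = r^- + r/K
assert abs(boundary_I(0.0) - (1.0 + 1.0 / 100.0)) < 1e-12
# the boundary rises with competition strength
mus = [boundary_I(rho) for rho in (0.0, 0.25, 0.5, 0.75, 1.0)]
assert all(a < b for a, b in zip(mus, mus[1:]))
```

The rise with $\\rho$ reflects the crowding term $\\rho(S-1)\\langle n \\rangle = K u\/(1+u)$, $u=\\rho(S-1)$, which grows monotonically with the competition.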
\n\nThe boundary between the bimodal regime (II) and the `rare biosphere' regime (III) is characterized by the disappearance of the peak at high abundance $\\tilde{n}$ in \\eqref{eq:dom-level}.\nIn the bimodal regime, at least one solution for $\\tilde{n}$ is real and positive, so that a real peak location exists. \nConversely, in the `rare biosphere' regime, both solutions for $\\tilde{n}$ are negative or imaginary. \nWe find that the boundary between real and imaginary $\\tilde{n}$ is\n\\begin{equation}\n\\label{eq:boundary-IIIa}\nr(K-\\rho (S-1) \\langle n \\rangle )^2=4(r^+-\\mu){K},\n\\end{equation}\nand the transition line between positive and negative solutions, $\\tilde{n}=0$, is\n\\begin{equation}\n \\frac{(K- \\rho (S-1) \\langle n \\rangle )^4}{16}=1+ \\frac{K(\\mu-r^+)}{r}.\n \\label{eq:boundary-IIIb}\n\\end{equation}\nThe intersection of these two conditions defines the `rare biosphere' regime and is shown as the blue line in Fig.~\\ref{fig:phases_sim}B,C.\n\nThe modality and the richness of the system are also affected by the number of species $S$, as shown in Fig.~\\ref{fig:phases_sim}D.\nIn brief, the frequency of immigration events rises as more species are present in the immigration flux.\nIncreased immigration causes the total population to rise without providing more room for each species in the system; this increases the stochastic competition, driving more species to extinction.\nHence, as $S$ increases, the transition from the bimodal regime (II) to the unimodal regime (III) occurs at lower values of the competition strength $\\rho$, and the fraction of concurrently surviving species decreases. This effect has been observed qualitatively in experiments \\cite{gore2021}, and we return to it in the Discussion.\n\nThese analytical expressions for the regime boundaries - confirmed by stochastic simulations - provide insights into the effects of the different control parameters on the regime boundaries. 
In particular, using the low $\\mu$ deterministic approximation for $\\langle n \\rangle \\approx K \/ \\left[ 1+\\rho (S-1) \\right]$, shows that the location of the boundary of the `rare biosphere' regime grows proportionally to the carrying capacity and is a decreasing function of the number of species $S$. Thus, the size of the `rare biopshere' neutral-like regime increases with the number of species $S$ as shown in Fig.~\\ref{fig:Fig3}D,\nwhereas increasing the carrying capacity shrinks this regime (See SM).\n\n\n\n\n\n\n\n\n\n\\subsection{Kinetics of the species turnover, extinction and recovery underlie the transitions between different regimes}\n\\label{sec:Dynamics}\n\nSo far we have focused on the steady-state properties of the system, such as the dominant species abundance, SAD modality, and richness to categorize the different regimes. \nHowever, the transitions between different regimes are closely related to the underlying kinetics of species turnover and fluctuations, which are investigated in this section.\n\nThe kinetics of an individual species change drastically between the unimodal `rare biosphere' regime (III) and the regimes that exhibit a peak in SAD at non-zero abundance as shown in Fig.~\\ref{fig:turnover}A that contrasts the kinetics in these cases. \nIn regime (III), all species undergoes rapid turnover in the relatively broad range of abundances around extinction.\nIn contrast, in the `niche-like' regimes (I, II, and IV) the `quasi-stable' dominant species undergo fast fluctuations around the co-existence peak at $\\tilde{n}$ in addition to fast turnover of the remaining species near extinction. 
The `niche-like' regimes also possess slow timescales corresponding to individual species leaving the high abundance peak as they are temporarily driven towards extinction, and to the reverse invasions of temporarily extinct species into the dominant `niche-like' peak.\n\nTo characterize the differences in the kinetics in different regimes, we calculate the mean first-passage times (MFPT) of the transitions between the abundance levels characterizing different regimes, denoting the MFPT for a species transition from an abundance $a$ to another abundance $b$ as $T(a \\rightarrow b)$. Similarly, $T(a\\rightarrow a)$ refers to the mean time of return to an abundance $a$ having left that same abundance.\nThe MFPT is calculated from the one-dimensional backward master equation (see SM).\n\nAlthough these times are important indicators of the system dynamics, the ratio of the MFPTs of two processes\/events is more informative than the MFPT of each event separately; this ratio measures the discrepancy between the timescales at which these events occur.
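To make the MFPT computation concrete, the sketch below solves the backward equation $q^+(n)\\,[T(n{+}1)-T(n)] + q^-(n)\\,[T(n{-}1)-T(n)] = -1$ for the mean time to reach extinction from each abundance of a generic one-dimensional birth-death chain (Python with NumPy). The logistic-type rates and the reflecting truncation at $n_{\\max}$ are illustrative assumptions, not the paper's exact effective rates.

```python
import numpy as np

def mfpt_to_zero(birth, death, n_max):
    """Mean first-passage time T(n -> 0) for a 1D birth-death chain.

    Solves the backward master equation as a linear system, with an
    absorbing state at n = 0 and a reflecting cap at n = n_max.
    birth(n), death(n): per-state transition rates.
    Returns T[0..n_max-1], where T[k] is the MFPT from abundance k+1.
    """
    N = n_max                                 # unknowns: T(1), ..., T(n_max)
    A = np.zeros((N, N))
    b = -np.ones(N)
    for i, n in enumerate(range(1, n_max + 1)):
        bp = birth(n) if n < n_max else 0.0   # reflecting upper boundary
        dm = death(n)
        A[i, i] = -(bp + dm)
        if n < n_max:
            A[i, i + 1] = bp
        if n > 1:
            A[i, i - 1] = dm                  # T(0) = 0 drops out at n = 1
    return np.linalg.solve(A, b)

# Illustrative logistic-type rates: birth r+ * n, death (r- + r*n/K) * n.
T = mfpt_to_zero(lambda n: 2.0 * n, lambda n: (1.0 + n / 100.0) * n, 200)
```

For a pure-death chain ($q^+ = 0$) this reduces to the textbook result $T(n \\rightarrow 0) = \\sum_{k=1}^{n} 1\/q^-(k)$, which provides a simple consistency check.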
\n\nTo understand the intrinsic kinetics that give rise to the different regimes in Fig.~\\ref{fig:phases_sim}, we first focus on the ratio of the MFPT of the transition from dominance to exclusion over the mean time of return to the dominant abundance level (starting from the dominant abundance level), $T(\\tilde{x}(\\langle S^* \\rangle )\\rightarrow 0) \/ T(\\tilde{x}(\\langle S^* \\rangle )\\rightarrow \\tilde{x}(\\langle S^* \\rangle ))$, shown in Fig.~\\ref{fig:turnover}B.\nRecall that $\\tilde{x}$ is the solution given in \\eqref{eq:fixed-LV}; this deterministic solution is a natural extension of the peak $\\tilde{n}$ in regimes where positive modes are non-existent.\nAs explained below, this ratio underlies the changes in the modality of the SAD as a function of the immigration rate and the competition strength.\n\nLarge values of the ratio $T(\\tilde{x}(\\langle S^* \\rangle )\\rightarrow 0)\/ T(\\tilde{x}(\\langle S^* \\rangle )\\rightarrow \\tilde{x}(\\langle S^* \\rangle ))$\nimply that the extinction rate from $\\tilde{x}(\\langle S^* \\rangle )$ is much lower than the rate of local fluctuations in the effective potential well around $\\tilde{x}(\\langle S^* \\rangle )$. Accordingly, Fig.~\\ref{fig:turnover}B shows that this ratio is high in the bimodal and immigration-dominated regimes.
Conversely, this ratio is lower within the `rare biosphere' regime, which does not possess a high abundance peak with `quasi-stable' co-existing species.\nThis ratio approximately delineates the `rare biosphere' neutral-like regime from the `niche-like' regimes, as shown in Fig.~\\ref{fig:turnover}B: the contour lines qualitatively recover the boundaries of region IIIb in Fig.~\\ref{fig:phases_sim}C.\n\nThe second ratio, which underlies the richness transitions in the system, \n$T(0\\rightarrow \\tilde{x}(\\langle S^* \\rangle ))\/T(0\\rightarrow0)$ (Fig.~\\ref{fig:turnover}C), relates the invasion time from extinction at zero abundance to dominance at $\\tilde{x}$ to the mean return time back to extinction. \nHigh values of this ratio indicate that, for an extinct species, the mean invasion times are longer than the return times back to extinction.\nThis MFPT ratio approximates the ratio of the average number of temporarily extinct species, $S - \\langle S^* \\rangle$, to the average number of existing species, $\\langle S^* \\rangle$; see Fig.~\\ref{fig:turnover}C. \nConsequently, the ratio quantitatively recovers the boundaries of the richness regimes in Fig.~\\ref{fig:phases_sim} in most regions of the parameter space.\nThese MFPT ratios can be understood as reciprocals of ratios of rates, which describe how much more frequently one event occurs than the other.\nFurther discussion of the dynamical features is presented in the SM.\n\n\\begin{figure*}[t!]\n \\centering\n \\begin{minipage}{11cm}\n \\begin{flushleft}\n A\n \\end{flushleft}\\includegraphics{figures\/Fig4-A.pdf}\n \\begin{flushleft}\n B\n \\end{flushleft}\n \\includegraphics{figures\/Fig4-B.pdf}\n \\begin{flushleft}\n C\n \\end{flushleft}\n \\includegraphics{figures\/Fig4-C.pdf}\n \\end{minipage}\n \\caption{Kinetics of species extinction, invasion and turnover. Panel \\textbf{A}: Sample of the trajectories of the species abundances (grey lines).
We highlight five species' trajectories for visibility.\n Upper panel: stable `niche-like' dynamics, where the abundances of the dominant species mostly fluctuate in the vicinity of $\\tilde{n}$, with occasional transitions from dominance to nearly-extinct states. The red curve represents the corresponding bimodal SAD. The lower panel shows the erratic dynamics obtained in the `rare biosphere' regime, where all species fluctuate rapidly close to extinction. The SAD in this case is a unimodal, monotonically decreasing function. Panel \\textbf{B}: The MFPT ratio $ T(\\tilde{x}\\rightarrow 0) \/ T(\\tilde{x}\\rightarrow \\tilde{x})$ as a function of $\\rho$ (left). For very weak immigration rates, $\\mu\\approx 10^{-3}$, the ratio is non-monotonic in the competition strength, revealing regime (c) in Fig.~\\ref{fig:phases_sim}. The MFPT ratio as a function of both $\\mu$ and $\\rho$ is represented with a color map (right). Panel \\textbf{C}: The MFPT ratio $ T(0\\rightarrow \\tilde{x}) \/ T(0\\rightarrow 0)$ as a function of the competition overlap (left) and as a function of both $\\rho$ and $\\mu$ (right). This ratio qualitatively captures the richness behaviour in Fig.~\\ref{fig:phases_sim}.
In both panels (B) and (C), the values are represented on a logarithmic color scale.}\n \\label{fig:turnover}\n \n\\end{figure*}\n\nOverall, the `niche-like' regimes I, II and IV are characterized by relatively stable behavior: species generally remain near the dominant abundance for long periods,\npunctuated by occasional crossings from dominance to nearly-extinct states and the reverse invasions from extinction back into dominance.\nThe transition from partial coexistence (b) to the competitive exclusion regime (c) is captured by the non-monotonic behaviour of the ratio, as shown in Fig.~\\ref{fig:turnover}C.\nSurprisingly, unlike in the other regimes, increasing the competition strength in the competitive exclusion regime (c) increases the stability of the dominant species abundance: return times to the dominant abundance are much shorter than the time to extinction for the single species in the frozen `quasi-stable' state.\nConversely, the `rare biosphere' regime (III) features erratic dynamics in which species cycle rapidly between extinction and a broad range of abundances without establishing `quasi-stable' states with slow turnover. \nThese different types of dynamics are illustrated by the sample trajectories in Fig.~\\ref{fig:turnover}A. \n \n\n\n\n\n\n\n\n\n\\subsection{The abundances of different species are weakly anti-correlated}\n\\label{sec:Correlation}\n\nSo far, we have investigated the single-species marginal abundance distribution $P(n)$ and the average richness $\\langle S^*\\rangle $. However,\nthe species are not independent of each other due to the inter-species competitive interactions.
It has been suggested that inter-species correlations reflect the underlying community structure and phase space \\cite{carr2019use,chance2019native}.\nTo investigate the connection between the population structure and the cross-species correlations, we calculated the cross-species abundance correlations, quantified via the Pearson correlation coefficient, as shown in Fig.~\\ref{fig:correlation}A. The population exhibits weak cross-species anti-correlation that increases with the competition strength $\\rho$. This is expected given that the death rate of each species increases with the abundance of the other species and, consequently, these cross-species influences are more pronounced at high competition strengths.\nConversely, at higher immigration rates the abundance fluctuations of different species are less likely to be correlated with each other.\nThus, the anti-correlation is most pronounced in the high competition and low immigration regime.\n\n\nFurthermore, the impact of individual species on the total population size varies between community structures, which can be quantified by the correlation between the total population size $J$ and the individual abundances.\nWe found that the individual species abundances are positively correlated with the total abundance $J=\\sum_i n_i$, which also fluctuates as the individuals of all species undergo birth and death events.\nInterestingly, as shown in Fig.~\\ref{fig:correlation}B, the magnitude of this correlation $\\text{cov}(J,n_i)\/\\sigma_n \\sigma_J$ exhibits inverse \ntrends compared to the inter-species anti-correlation: the correlation $\\text{cov}(J,n_i)\/\\sigma_n \\sigma_J$ is weaker when the cross-species anti-correlation is stronger.\nThe magnitude of the correlation between the total population size $J$ and a species abundance $n$ exhibits similar behavior to the average richness: $\\text{cov}(J,n_i)\/\\sigma_n \\sigma_J$ is high in the high immigration, low competitive overlap
regime and is low otherwise.\nThis behaviour may be understood heuristically:\neach species in a system with $S^*$ dominant species contributes only $\\sim J\/S^*$ to the total population size. \n\nSomewhat unexpectedly, neither the inter-species correlations nor the correlations between the species abundance and the total abundance distinguish between the different modality regimes; rather, both increase with the richness. As expected, our mean-field approximation works best at very low ${\\rm cov}(n_i,n_j)\/\\sigma_{n_i}\\sigma_{n_j}$, whereas it deviates from the exact solution as the anti-correlation gets stronger (see SM). \n\n\\begin{figure}[t!]\n \\centering\n \\begin{flushleft}\n A\n \\end{flushleft}\n \\includegraphics[width=0.75\\columnwidth,trim= 10 10 10 10, clip]{figures\/Fig5-A.pdf}\n \\begin{flushleft}\n B\n \\end{flushleft}\n \\includegraphics[width=0.75\\columnwidth,trim= 10 10 10 10, clip]{figures\/Fig5-B.pdf}\n \\caption{Abundance correlations. Panel \\textbf{A}: Pearson correlation coefficient between the abundances of any two different species. Panel \\textbf{B}: Pearson correlation coefficient between the total population size $J=\\sum n_j$ and the abundance of any single species. Correlations were calculated from Gillespie simulation time course data with $6 \\cdot 10^8$ time steps.\n }\n \\label{fig:correlation}\n\\end{figure}\n\n\\section{Summary and Discussion}\n\\label{sec:discussion}\n\nEcological systems display a wide variety of behavior regimes that have commonly been analysed through a limited number of paradigmatic models such as the `niche' and `neutral' theories. However, it remains incompletely understood which features of ecological population structure and dynamics are universal and which are system specific, how different models relate to each other, and what behavior is expected in the full range of the parameter space.
Using a minimal model of competitive population dynamics with demographic noise, we have investigated\nthe different regimes of the population structures and dynamics as a function of the immigration rate $\\mu$, competitive overlap $\\rho$, and the number of species $S$.\nAlthough this minimal model may not fully capture the more complex interaction structures of many ecological communities, it already exhibits rich and unexpected behaviours paralleling many experimentally observed ones (see Table \\ref{tab:my_label} below and Table S1 in SM), and illuminates the underlying mechanisms that shape population structures in different ecosystems.\n\nWe have focused on the system richness, reflecting the number of co-existing species, and the SAD shape as the characteristics of the different population regimes, using a combination of simulations and analytical mean-field approaches. Our analysis shows that the ecosystem behaviors can be partitioned into different regimes of richness and SAD shape\/modality, parameterized by the immigration rate and the competitive overlap.\n\nOur model recovers the expected limits of the well-known `neutral-like' and `niche-like' regimes. In particular, at $\\rho = 1$ and intermediate values of $\\mu$, the SAD has the `rare-biosphere' monotonically decreasing shape characteristic of the classical neutral regime. On the other hand, at low competition strength, the system SAD exhibits a peak at high species abundance where all species co-exist, effectively occupying distinct ecological niches. \n\nNote that even independent species without inter-species competition ($\\rho=0$) may present either a unimodal or a bimodal SAD depending on the immigration rate; see Fig.~\\ref{fig:phases_sim}B\\&C.
Unlike the immigration-dominated high abundance peak at high immigration rates, at very low immigration rates the SAD is peaked around zero due to the high extinction probability arising solely from the intra-species competition.\n\nChanges in the SAD between different regimes, reflected in the changes in the locations and heights of the SAD peaks as a function of the immigration and competition strengths, occur through different routes.\nFor instance, starting at the bimodal regime (II) at $\\rho=1$, as the immigration rate increases, the SAD peak height gradually decreases without significantly changing its location, until it finally disappears at the boundary of the `rare biosphere' regime (III). \nBy contrast, at lower competition strengths $\\rho<1$, the transition from bi-modality to the `rare biosphere' regime occurs via simultaneous changes in the peak's height and location. This is discussed in Sec.~\\ref{sec:results}B.\n\nWe found that for intermediate immigration rates the system maintains the monotonically decaying `neutral-like' SAD in regime (III) even at rather low competition strength ($\\rho \\approx 0.1$), contrary to the common expectation that different species inhabit separate niches away from neutrality. Conversely, at low immigration rates, the system SAD unexpectedly maintains the peak at non-zero abundance characteristic of `niche-like' regimes even for the high values of the competition strength $\\rho$ usually considered to be in the `neutral-like' domain (regime (IIc)); see Sec.~\\ref{sec:results}B and Figure \\ref{fig:phases_sim}.\n\nWe have also uncovered an unusual regime, to the best of our knowledge not previously described, characterized by a multi-modal SAD with more than one positive, `quasi-stable' abundance peak (regime (IV) in Fig.~\\ref{fig:phases_sim}).
This multi-modality arises from the richness fluctuations in this regime: the number of co-existing species switches randomly between two relatively long-lasting states with $S^*=1$ and $S^*=2$. Thus, one peak of the SAD is found around $\\sim K$ and the other in the vicinity of $\\sim K\/2$, as explained in Sec.~\\ref{sec:results}B. We observe that for low $K$ the multimodal regime is absent, and it appears as $K$ increases; see the corresponding phase diagrams in SM.\n\nWe show that the population structures in different regimes stem from the underlying dynamics of species fluctuations, extinctions and invasions. In the `rare biosphere' neutral-like regimes, all species undergo relatively fast turnover around extinction. This is reflected in the low ratio of the turnover to the extinction mean first-passage times. Conversely, in the `niche-like' regimes the system develops two additional time scales: relatively fast fluctuations about the high abundance peak, and long waiting times for the transitions from the `quasi-stable' co-existence at high abundance to extinction. Accordingly, the ratio of the mean extinction time to the mean time of return to dominance is higher in the `niche-like' regimes, as discussed in Sec.~\\ref{sec:results}C. \n\nInterestingly, ecological regimes akin to those predicted by our demographic noise model (except for the multimodal SAD regime) have also been found using deterministic, noiseless LV models with a random matrix of inter-species competitive interaction strengths \\cite{may1972will,allesina2008network,allesina2012stability,kessler2015generalized,gore2021}.\nHowever, the underlying mechanisms that give rise to the apparently similar regimes in the two model types are very different. In the demographic noise model, the partial richness `niche-like' regime is formed by the coexistence of some species at a positive abundance and temporary extinctions of other species induced by the stochastic abundance fluctuations.
By contrast, in the deterministic LV models with random asymmetric interactions, this regime is formed from a large number of fixed points where different sets of species are deterministically excluded. At higher interaction strengths, the system transitions to deterministically chaotic behavior that resembles the `neutral-like' regimes; however, the nature of the species turnover and the SAD shape are very different from the behaviour we described above \\cite{bunin2017ecological,kessler2015generalized,gore2021}.\n\nThe existence of the predicted regimes and the transitions between them can be tested experimentally by measuring the SAD and the dynamics of the species abundances in ecosystems with varying immigration and competition strengths.\nLong-term observations may provide measurements of the stationary species abundance distributions \\cite{weigelt2010jena}, although the steady-state SAD may be difficult to estimate due to the limited amount of data.\nIt may be difficult to experimentally determine and control the immigration rate, the competition strength, and the carrying capacity, but practically useful proxies for these parameters exist. For example, the flow rate carrying bacteria into a chamber of a microfluidic device is a well-controlled quantity that closely approximates the immigration rate for populations encased in the chamber~\\cite{duran2021slipstreamin}.\nFurthermore, measurements of the SADs and the community compositions have become more attainable due to the advances in single-cell gene sequencing techniques \\cite{ratzke2020strength, gore2021,shakiba2019cell}. Another commonly used and robustly estimated experimental observable is the species rank abundance (SRA), which can be used to infer the SAD, to which it is closely related mathematically, although in practice the conversion may be constrained by noise and the limited quantity of experimental data.
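As a minimal illustration of the SRA--SAD relation mentioned above, both are simple transformations of the same abundance sample (Python; the abundance values below are invented for illustration):

```python
from collections import Counter

# Invented abundance sample: one entry per observed species.
abundances = [120, 80, 77, 30, 12, 11, 11, 5, 2, 1, 1, 1]

# Species rank abundance (SRA): abundance versus rank, largest first.
sra = sorted(abundances, reverse=True)

# Species abundance distribution (SAD): number of species per abundance.
sad = Counter(abundances)

print(sra[:3])  # -> [120, 80, 77]
print(sad[1])   # -> 3 (singleton species)
```

Recovering the SAD from a measured SRA amounts to inverting this binning, which is where noise and limited sample size enter in practice.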
\n\nDespite these difficulties, the asymptotic behaviour of the SADs may provide an indication of qualitative dissimilarities between the various regimes, allowing different regimes of behavior to be discerned among the experimental observations. In the mean-field approximation of our model, the asymptotic behaviour of the SAD on the neutral line $\\rho=1$ approximates a power law with an exponential cutoff (see SM), similar to the commonly used neutral birth-death models with a fixed total population size \\cite{baxter2007exact,mckane2004analytic}.\nNotably, the Yule process, often used to model neutral processes, also results in an SAD of a similar form. However, the Yule process is substantially different from the model of this paper because it does not include inter-species interactions and reaches a steady-state SAD only if $r^+ < r^-$.\n\nIn Table~\\ref{tab:my_label}, we qualitatively compare the family of regimes predicted by our model to the various behaviors inferred from experimental findings based on SAD measurements and population abundance time series.\nThe notable abundance of the neutral ecosystems observed experimentally may pertain to our finding (Section~\\ref{sec:results}B) that the `neutral-like' `rare biosphere' regime extends substantially beyond the neutral line $\\rho =1$: non-neutral communities appear neutral as they exhibit SADs characteristic of neutral communities, such as gastrointestinal microbiomes \n\\cite{jeraldo2012quantification}. Furthermore, the multimodal SADs predicted by our model, which are related to the richness fluctuations, may provide an explanation for the multimodal SADs observed in some ecological data, complementary to the current explanations such as spatial heterogeneity or emergent neutrality \\cite{dornelas2008multiple,vergnon2012emergent}.\n\n\n\\begin{table}[ht]\n \\centering\n \\begin{tabular}{|l|l|}\n System (Ref.)
& Regimes \\\\ \\hline \nmicrobial competition~\\cite{gore2021} & stable full coexistence (IIa) \\\\ & stable partial coexistence (IIb)\\\\ & persistent fluctuation (IIIb) \\\\ \nglobal bird species \\cite{callaghan2021global} & unimodal, log skew (I) \\\\ \nplankton \\cite{ser2018ubiquitous} & power-law decay (III) \\\\ \ncoral \\cite{dornelas2008multiple} & multimodal (IV)\n\\\\ \narthropods \\cite{matthews2014multimodal} & multimodal (IV)\n\\\\\nT-cell receptors \\cite{oakes2017quantitative} & bimodal (II)\\\\\n& unimodal (III)\n\\\\\nmicrobial competition \\cite{descheemaeker2020stochastic} & `neutral-like' (III) \\\\ & `niche-like' (I \\& II)\n\\\\ \ngastrointestinal & `neutral-like' (III)\n\\\\ \nmicrobiomes \\cite{jeraldo2012quantification} & \\\\\n \\end{tabular}\n \\caption{Qualitative classification of observed population regimes in various ecological systems.}\n \\label{tab:my_label}\n\\end{table}\n\n\n\n\nOne quantity that is experimentally relatively easy to control is the total number of species $S$. The regimes predicted by the model and the transitions between them, shown in Fig.~\\ref{fig:phases_sim}D, show similarities with the experimentally observed ones, which were previously explained within deterministic LV models with a random interaction matrix~\\cite{gore2021}. As shown in Fig.~\\ref{fig:phases_sim}D, our model yields `neutral-like' regimes, characterized by erratic dynamics, for high $S$ and $\\rho$, and `niche-like' regimes with more stable dynamics for either low $S$ or $\\rho$. This qualitatively agrees with the phase space observed in \\cite{gore2021}, in which, for strong competition and a large pool of species, the system presents a `persistent fluctuation' regime, while for a small species pool or weak competition it exhibits `stable full\/partial coexistence'.
\nThe fact that both the deterministic LV model with a random interaction matrix and the homogeneous LV model with demographic noise are in qualitative agreement with the experimental data raises interesting and important questions concerning the roles of stochastic and deterministic dynamics in shaping community composition.\n\nAnother quantity that may enable qualitative and quantitative testing of different models is the carrying capacity $K$, which may be controllable experimentally in some systems.\nAs shown in the SM, the `rare biosphere' regime shrinks in size with increasing $K$ because a higher carrying capacity sustains a higher average abundance, and larger (less likely) fluctuations are needed for extinction events to occur.\nA higher average abundance, together with insufficiently\nstrong fluctuations,\nresults in longer MFPTs from\ndominance to extinction and vice versa.\nThis might be captured by the dependence on $K$ in \\eqref{eq:boundary-I}, but further work on the mean-field approximations is needed.\nUnfortunately, rarer turnover events imply longer times to reach the steady state; thus, testing our analytical predictions at very large $K$ becomes unfeasible using simulations.\n\nWe expect that the minimal model of this paper can be extended to more complicated scenarios, including incorporating speciation to probe the interplay of natural selection, inter-species interactions, and population diversity and structure.\nFurthermore, the deterministic models inspire further extension\nof our framework to more complex distributions of the interaction network $\\rho_{i,j}$.\nFinally, our examination of the local ecosystem within an island in the mainland-island model (see Fig.~\\ref{fig:fig1})\ncan be expanded to a many-island model, which allows studying differences in dynamics between the local community and the metacommunity, a prominent topic for conservation ecologists and the study of the human microbiome, amongst others.\n \n\n\n\n \n
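The stochastic simulations referred to throughout (see Methods) follow the standard Gillespie scheme. A self-contained Python sketch is given below; the logistic Lotka--Volterra form of the birth and death rates and all parameter values are assumptions for illustration, not the paper's exact implementation.

```python
import random

def gillespie(S=10, K=100, r_plus=2.0, r_minus=1.0, rho=0.5, mu=0.1,
              steps=10_000, seed=0):
    """Simulate S competing species with demographic noise and immigration.

    Assumed rates (illustrative): birth mu + r+ n_i; death
    (r- + (r+ - r-)/K * (n_i + rho * sum_{j!=i} n_j)) * n_i.
    """
    rng = random.Random(seed)
    n = [0] * S          # species abundances
    t = 0.0
    for _ in range(steps):
        total_n = sum(n)
        births = [mu + r_plus * n_i for n_i in n]
        deaths = [(r_minus + (r_plus - r_minus) / K *
                   (n_i + rho * (total_n - n_i))) * n_i for n_i in n]
        rates = births + deaths
        total = sum(rates)
        t += rng.expovariate(total)          # exponential waiting time
        x = rng.uniform(0.0, total)          # pick reaction by its rate
        for i, rate in enumerate(rates):
            if x < rate:
                if i < S:
                    n[i] += 1                # birth of species i
                else:
                    n[i - S] -= 1            # death of species i - S
                break
            x -= rate
    return t, n
```

From the resulting abundances one can estimate, e.g., the instantaneous richness as `sum(1 for ni in n if ni > 0)`, or accumulate time-weighted histograms to approximate the SAD.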
\\iffalse\n\\section{Deterministic Resilience}\n\nIn this section we examine how fast the deterministic system approaches its fixed point. The time for an ecosystem to return to its steady-state, also known as the system resilience, is one of the properties associated with the system stability. As was mentioned, when $0\\leq \\rho\\leq 1$ the deterministic fixed point, given in ~\\eqref{eq:solstat}, is stable. The direction and pace\/rate\\textcolor{red}{\/another term instead of pace\/rate? } of the flow toward the fixed point are determined by the eigenvalues and eigenvectors of the considered system. The eigenvalues are obtained as \n\\begin{equation}\n\\lambda_i =\n\\begin{cases}\n\\frac{r}{K}\\left( K - 2 \\tilde{x}(1+\\rho(S-1)) \\right) & \\text{, if } i=1 \\\\\n\\frac{r}{K}\\left( K - \\tilde{x}(2+\\rho(S-2)) \\right) & \\text{, otherwise}.\n\\end{cases}\n\\end{equation}\nand the eigenvectors are \n\\begin{equation}\n\\vec{v}_i =\n\\begin{cases}\n(1,1,\\cdots,1,1)^T & \\text{, if } i=1 \\\\\n(-1,\\delta_{2,i},\\delta_{3,i},\\cdots,\\delta_{S-1,i},\\delta_{S,i})^T & \\text{, otherwise}.\n\\end{cases}\n\\end{equation}\nFor analyzing $\\{\\lambda_i,\\vec{v}_i\\}|_{\\tilde{x}}$ we can deduce the following behaviour of the flow toward the fixed point. \n\nFirst, we found that $\\lambda_1|_{\\tilde{x}} \\leq \\lambda_{i\\neq 1}|_{\\tilde{x}}$ (the equality is for $\\rho=0$), which means that the system approach faster to constant $J$. Then, in the vicinity of the circle $||\\vec{n}||_1=J$ in $\\ell_1$, it approaches slower toward its fixed point. \n\nSecond, $|\\lambda_1|$ increases with $S$. It means that for highly diverse system, the process reaches faster to the vicinity of its stable community size. In addition, increasing the immigration rate $\\mu$ increase the pace to approaching constant $J$. 
\n\nThird, the other (degenerated) eigenvalues describe the behaviour of the flow in the vicinity of constant $J$ (note that it is not necessarily on the {\\em exact} constant J surface). Similarly to $|\\lambda_1|$, the flow in the neighborhood of $||\\vec{n}||_1=J$ toward the fixed point is faster for higher immigration rate. \n\nForth, we have found that $\\lambda_{i\\neq 1}|_{\\tilde{x}}(\\rho,S)$ is not a necessarily a monotone function. For example, when $\\mu=0.01, K=100$ and $r=1$, we find that for $\\rho=0.1$ higher $S$ gives slower approach on the direction of $\\vec{v}_{i\\neq 1}$. However, for $\\rho=1$ an opposite effect is found; high diversity gives faster flow to the fixed point. \n\n\n\n\\section{Extinction and Invasion Rates}\n\n\\subsection{Stochastic Stability}\n\nHow to define stability? In stochastic models it is not well defined (depends on the paper you read). Note that the boundaries (or any other point) are not absorbing ($\\mu>0$). Stability might mean: recurrence (with probability 1) for any point, $\\rho$-stability, ergodicity, absorbing region, Lyapanov stability for the mean. \n\n\\textcolor{red}{Q:can we state something about, the bimodality of the abundance distribution, the level of dominate species, and the mean time of extinction? }\n\n\\subsubsection{Mean First Passage Time}\nOne of the properties associated with stability is the first passage time. One may ask how long would it take for a species with low abundance to become dominant (or vice versa). \n\nAssume that a species almost surely reaches $x=a$. In addition, we assume that the species level is described by the one-dimensional probability. 
Then, the mean first passage time from point $x$ to $a$ (where $x>a$) is given by \n\\begin{equation}\n \\langle T_a(x) \\rangle = \\sum_{y=a}^{x-1}\\frac{1}{F_{\\rm right}(y)}{\\rm Prob}\\left[z\\geq y+1\\right] \\label{eq:MFPT}\n\\end{equation}\nwhere $F_{\\rm right}(y)\\equiv q^+(y)P(y)$ is the flux right from point $y$ and ${\\rm Prob}[z\\geq y+1]\\equiv \\sum_{z=y+1}^{\\infty}P(z)$ is the probability to be found on larger (or equal) level than $y+1$.\n\n\\begin{figure}\n \\centering\n \\includegraphics[width=\\columnwidth,trim= 130 300 120 300,clip]{figures\/MFPT_differentMu.pdf}\n \\caption{Mean first passage time from level $x_0$ to extinction. Here the total number of species is $S=20$. We choose $r^+=2$, $r^-=1$ and $\\rho=0.5$. The immigration rate is $\\mu=1$ (blue circles), $\\mu=0.1$ (green rectangles) and $\\mu=0.01$ (purple diamonds). The results are generated from $10^5$ statistically similar systems. The initial position is uniformly distributed in $[1,60]$, i.e. $x_0\\in U[1,60]$. }\n \\label{fig:MFPT_differentMu}\n\\end{figure}\n\n\\begin{figure}\n \\centering\n \\includegraphics[width=\\columnwidth,trim= 130 300 120 300,clip]{figures\/MFPT_rho.pdf}\n \\caption{Mean first passage time from level $x_0$ to extinction. Here the total number of species is $S=20$. We choose $r^+=2$, $r^-=1$, $\\mu=1$, and $\\rho=1$ (blue circles), $\\rho=0.5$ (green rectangles) and $\\rho=0.1$ (purple diamonds). The results are generated from $10^5$ statistically similar systems. The initial position is uniformly distributed in $[1,60]$, i.e. $x_0\\in U[1,60]$. \\textcolor{red}{no agreement with $\\rho=1$} }\n \\label{fig:MFPT_rho}\n\\end{figure}\n\n\\subsubsection{Mean Extinction Time from Level $n_0$}\n\nIn fig.~\\ref{fig:MFPT_differentMu} we present the mean time to extinction (i.e. $a=0$) where the species has started at the level $x_0$. We find that higher immigration rate $\\mu$ give lower time to extinction. 
This is due to the fact that the influx is proportional to $\\mu$, and the process is stationary (i.e. inbound flux, from zero to positive numbers, is equal to outbound flux, from positive number to extinction). Moreover, In Fig.~\\ref{fig:MFPT_rho} we fixed the immigration rate, and we have found that for small $\\rho$ (weak mutual competition) the MFPT is higher than high $\\rho$. \n\nImportantly, is some cases, the abundance (one dimension) distribution is not sufficient to evaluate the mean first passage time (see SM). \n\n\\subsubsection{Mean Extinction Time for the Core Species}\n\n\\begin{figure}\n \\centering\n \\includegraphics[width=\\columnwidth,trim= 130 280 130 300,clip]{figures\/DominantSpecies_differentS1.pdf}\n \\caption{The mean extinction time of the core species \\textcolor{red}{(approximated analytic solution, no simulation yet)} versus competition strength $\\rho$. The number of species varies between $S=10$ (blue circles), $S=50$ (green rectangles), $S=100$ (pink crosses) and $S=200$ (yellow starts). }\n \\label{fig:DeterminsticVsStochastic}\n\\end{figure}\n\n\n\\begin{figure}\n \\centering\n \\includegraphics[width=\\columnwidth,trim= 130 270 110 300]{figures\/BimodalUnimodalRegionAnalytic_3regions_sim_K50_S30}\n \\includegraphics[trim= 150 270 130 300,width=\\columnwidth]{figures\/BimodalUnimodalRegionAnalytic_3regions_conv_K50_S30.pdf}\n\\includegraphics[trim= 150 270 130 300,width=\\columnwidth]{figures\/BimodalUnimodalRegionAnalytic_3regions_MeanField_K50_S30.pdf}\n \\caption{Unimodality and bimidality of SAD depending in immigration rate $\\mu$ and competition $\\rho$. The upper panel is obtain from simulation, and the middle and button panels are given from the approximations. This results are obtained from the approximated abundance distribution given using \\textcolor{red}{ 1st approximation (upper panel) and mean-field approximation (lower panel)}. Here we choose the following parameters $S=30$, $r^+=2$, $r^-=1$, $K=50$. 
The yellow, turquoise and blue regions represent the values of $(\\mu, \\rho)$ where $P(n)$ is a unimoal distribution with maximum at zero, a bimodal distribution with two maxima, or unimodal distribution with maximum at an existence level, respectively. \\textcolor{red}{More or less all have similar map, accept $\\rho=1$ where 1st approx cannot find bimodality there. } }\n \\label{fig:BimodalUnimodal}\n\\end{figure}\n\n \\begin{figure}\n \\centering\n \\includegraphics[trim=150 270 150 270,width=\\columnwidth]{figures\/14Oct2.pdf}\n \\caption{Simulation results for the location and height of the `most-right' peak (2nd peak in bimodality and the only peak at the unimodal phases) . Upper:Location of the peak. Lower:Height of the peak. }\n\\end{figure}\n\n\\begin{figure}\n \\centering\n \\includegraphics[width=\\columnwidth,trim= 130 270 120 300,clip]{figures\/Richness.pdf} \n \\caption{Richness vs competition.\n The total number of species change between $S=50$ (red stars), $S=30$ (green squares) and $S=10$ (blue circles). The lines correspond to the approximated analytic solution: 1st and 2nd methods are represented with black and pink curves (respectively). The immigrating rate is $\\mu=1$,\n $r^+=50$, $r^-=0.1$, and $K=50$.\n \\textcolor{red}{To run the same figures for $r^+=2$ and $r^-=1$. $K=50$\n }\n }\n \\label{fig:Ricness}\n\\end{figure}\n\n \\begin{figure}\n \\centering\n \\includegraphics[width=\\columnwidth,trim= 130 270 120 280,clip]{figures\/Richness_10to7_mu.pdf}\n \\includegraphics[width=\\columnwidth,trim= 130 270 120 280,clip]{figures\/Richness_10to7_rho.pdf}\n \\caption{Upper: The richness versus $\\rho$ for different $\\mu$. Lower: The richness vs $\\mu$ for different $\\rho$. Here $K=50$, $S=30$, $r^+=2$ and $r^-=1$. 
The simulation results are obtained from $10^7$ reactions.}\n \\label{fig:Richness_mu}\n \\end{figure}\n \n \\begin{figure}\n \\centering\n \\includegraphics[width=\\columnwidth,trim= 140 280 140 300,clip]{figures\/Richness_10to7_sim.pdf}\n \\caption{Simulation results for the richness for different $\\mu$ and $\\rho$. The values of the richness are represented with colors corresponding to the colorbar, from high richness (yellow) to low richness (dark blue). Here $K=50$, $S=30$, $r^+=2$ and $r^-=1$. The simulation results are obtained from $10^7$ reactions. }\n \\label{fig:Richness_heatMap_sim}\n \\end{figure}\n \n \\fi\n \n\\section*{Methods}\nThe master equation \\eqref{master-eq} is simulated using the Gillespie algorithm with $10^8$ time steps. We use $r^+=2$, $r^-=1$, $K=100$. \nThe modality classification is performed numerically after smoothing the simulated SAD. The MFPT is evaluated via the simulated SAD ($\\tilde{x}(S^*)$ is rounded), where a one-dimensional approximation of the process is considered; see details in SM.\n\n\n\\begin{acknowledgments}\nThe authors acknowledge helpful discussions and comments from all the members of the Goyal and Zilman Groups. AZ acknowledges the support from the Natural Sciences and Engineering Research Council of Canada (NSERC) through the Discovery Grant Program. 
SG acknowledges the support from the Natural Sciences and Engineering Research Council of Canada (NSERC) through the Discovery Grant Program and from the Medicine by Design Program at the University of Toronto.\n\\end{acknowledgments}\n\n\n\\section*{Approximations of Species Abundance Distribution }\n\n\\subsection*{Derivation of zero flux in $n_i$ -- global balance equation and the exact SAD}\n\\label{App:ZeroFlux}\n\n\nConsider the multi-dimensional master equation \n\\begin{eqnarray}\n \\partial_tP(n_1,n_2,\\dots,n_S) &=& \\sum_{i}\\left\\{ q_{i}^+(\\vec{n}-\\vec{e_i})P(\\vec{n}-\\vec{e_i})+q^-_{i}(\\vec{n}+\\vec{e_i}) P(\\vec{n}+\\vec{e_i})-\\left[q_{i}^+(\\vec{n})+q_{i}^-(\\vec{n}) \\right]P(\\vec{n}) \\right\\}\n\\end{eqnarray}\n where $q^+_{i}(\\vec{n})$ and $q^-_{i}(\\vec{n})$ represent the birth and death rates of species $i$, respectively, which generally depend on $\\vec{n}=(n_1,\\dots, n_S)$. Here, $\\vec{e_i}=\\{0, \\dots, 1, \\dots , 0\\}$ (the 1 is located in the $i$-th component). \n To find the master equation for $n_1$, we sum over all other components; i.e., 
\n\\begin{eqnarray}\n &&\\sum_{n_2=0}^{\\infty}\\dots \\sum_{n_S=0}^{\\infty} \\partial_tP(n_1,n_2,\\dots,n_S) = \\\\ \\nonumber &&= \\sum_{n_2=0}^{\\infty}\\dots \\sum_{n_S=0}^{\\infty} \\left\\{\\sum_{i}\\left\\{ q_{i}^+(\\vec{n}-\\vec{e_i})P(\\vec{n}-\\vec{e_i})+q^-_{i}(\\vec{n}+\\vec{e_i}) P(\\vec{n}+\\vec{e_i})-\\left[q_{i}^+(\\vec{n})+q_{i}^-(\\vec{n}) \\right]P(\\vec{n}) \\right\\}\\right\\} \n \\end{eqnarray}\n thus\n \\begin{equation}\n \\partial_t P_1(n_1) = \\sum_{n_2=0}^{\\infty}\\dots \\sum_{n_S=0}^{\\infty} \\left\\{\\sum_{i}\\left\\{ q_{i}^+(\\vec{n}-\\vec{e_i})P(\\vec{n}-\\vec{e_i})+q^-_{i}(\\vec{n}+\\vec{e_i}) P(\\vec{n}+\\vec{e_i})-\\left[q_{i}^+(\\vec{n})+q_{i}^-(\\vec{n}) \\right]P(\\vec{n}) \\right\\}\\right\\} .\n \\end{equation}\n We can now use the fact that for every $n_i$:\n \\begin{eqnarray}\n \\sum_{n_i=0}^{\\infty} q_{i}^+(n_1, \\dots, n_i-1, \\dots, n_S)P(n_1, \\dots, n_i-1, \\dots, n_S) &=& \\sum_{n_i=0}^{\\infty} q_{i}^+(n_1, \\dots, n_i, \\dots, n_S)P(n_1, \\dots, n_i, \\dots, n_S)\n, {\\rm \\ \\ and} \\\\ \\nonumber\n \\sum_{n_i=0}^{\\infty} q_{i}^-(n_1, \\dots, n_i+1, \\dots, n_S)P(n_1, \\dots, n_i+1, \\dots, n_S)&=& \\sum_{n_i=0}^{\\infty} q_{i}^-(n_1, \\dots, n_i, \\dots, n_S)P(n_1, \\dots, n_i, \\dots, n_S)\n \\end{eqnarray}\n [note that $q_{i}^+(n_1, \\dots, -1, \\dots, n_S)P(n_1, \\dots, -1, \\dots, n_S)=q_{i}^-(n_1, \\dots, 0, \\dots, n_S)P(n_1, \\dots, 0, \\dots, n_S)=0$]. Thus, the above equation is given by\n\\begin{eqnarray}\n \\partial_t P_1(n_1) &=& \\sum_{n_2=0}^{\\infty}\\dots \\sum_{n_S=0}^{\\infty} \\left\\{ q_{1}^+(\\vec{n}-\\vec{e_1})P(\\vec{n}-\\vec{e_1})+q^-_{1}(\\vec{n}+\\vec{e_1}) P(\\vec{n}+\\vec{e_1})-\\left[q_{1}^+(\\vec{n})+q^-_{1}(\\vec{n}) \\right]P(\\vec{n})\\right\\}.\n\\end{eqnarray}\nFor simplicity, we define $F^{+}(\\vec{n})\\equiv q_1^+(\\vec{n})P(\\vec{n})$ and $F^{-}(\\vec{n})\\equiv q_1^-(\\vec{n})P(\\vec{n})$; thus the above equation 
\ncan be written as \n\\begin{eqnarray}\n \\partial_t P_1(n_1) &=& \\sum_{n_2=0}^{\\infty}\\dots \\sum_{n_S=0}^{\\infty} \\left\\{ F^{+}(n_1-1,n_2,\\dots)-F^{+}(n_1,n_2,\\dots)\n +F^{-}(n_1+1,n_2,\\dots)-F^{-}(n_1,n_2,\\dots) \\right\\}.\n\\end{eqnarray}\nBy using the z-transform ($n_1\\rightarrow z$), which is defined for a function $k(n_1)$ as $K(z)=\\sum_{n_1=0}^{\\infty} k(n_1) z^{-n_1} $, we obtain\n\\begin{eqnarray}\n \\partial_t P(z) = \\sum_{n_2=0}^{\\infty}\\dots \\sum_{n_S=0}^{\\infty} \\hat{F}^{+}(z,n_2,\\dots)(z^{-1}-1)+\\hat{F}^{-}(z,n_2,n_3,\\dots)(z-1) \n\\end{eqnarray}\n\\{we used ${\\cal Z}[g(n-1)-g(n)]=[z^{-1}-1]\\hat{G}(z)$ and ${\\cal Z}[g(n+1)-g(n)]=[z-1]\\hat{G}(z)-zg(0)$; the boundary term vanishes here since $F^{-}(0,n_2,\\dots)=0$\\}. Setting the stationary solution $\\partial_t P(z)=0 $ and rearranging the equation yields \n\\begin{eqnarray}\n \\sum_{n_2=0}^{\\infty}\\dots \\sum_{n_S=0}^{\\infty}\\hat{F}^{+}(z,n_2,n_3,\\dots)=\\sum_{n_2=0}^{\\infty}\\dots \\sum_{n_S=0}^{\\infty} \\hat{F}^{-}(z,n_2,n_3,\\dots)\\frac{z-1}{1-z^{-1}}=\\sum_{n_2=0}^{\\infty}\\dots \\sum_{n_S=0}^{\\infty}\\hat{F}^{-}(z,n_2,n_3,\\dots)z .\n\\end{eqnarray}\nThen, we use the inverse z-transform ($z\\rightarrow n_1$), and find\n\\begin{eqnarray}\n \\sum_{n_2=0}^{\\infty}\\dots \\sum_{n_S=0}^{\\infty}q^+_{1}(\\vec{n})P(\\vec{n})= \\sum_{n_2=0}^{\\infty}\\dots \\sum_{n_S=0}^{\\infty}q^-_{1}(\\vec{n}+\\vec{e_1})P(\\vec{n}+\\vec{e_1}). \n\\end{eqnarray}\nWe use Bayes' formula, $P(n_1,n_2,n_3,\\dots,n_S)=P(n_2,n_3,\\dots, n_S|n_1)P(n_1)$, and obtain\n\\begin{eqnarray}\n \\langle q_{1}^+(\\vec{n})|n_1\\rangle_{n_2,n_3,\\dots,n_S}P_1(n_1) = \\langle q_{1}^-(\\vec{n}+\\vec{e_1}) |n_1+1\\rangle_{n_2,\\dots n_S} P_1(n_1+1), \n\\end{eqnarray}\nwhere $ \\langle *|n_1\\rangle_{n_2, \\dots, n_S}\\equiv\\langle *|n_1\\rangle \\equiv\\sum_{n_2=0}^{\\infty}\\dots \\sum_{n_S=0}^{\\infty} (*)P(n_2, \\dots, n_S|n_1) $. \nUp to this point no assumptions have been made, and the above derivation is general. 
For our case, as specified in the main text, $\\langle q_{1}^+(\\vec{n})|n_1\\rangle=\\mu + r^+ n_1$ (note that $q^+_{i}$ depends solely on $n_i$) and $\\langle q_{1}^-(\\vec{n})|n_1\\rangle=n_1\\left(r^-+r n_1\/K + r \\rho \\sum_{j\\neq 1 }\\langle n_j |n_1\\rangle \/K \\right)$. Additionally, from symmetry, $P_i(n_i)=P_j(n_j) = P(n)$ for every $i,j$.\nThus, solving the recursive equation, we obtain\n\\begin{eqnarray}\n P(n)&=&P(0)\\prod_{n'=1}^{n}\\frac{q^{+}(n'-1)}{\\langle q^-(\\vec{n})|n'\\rangle}= \\label{eq:exact_appendix}\n \\\\ \\nonumber\n &=&P(0)\\prod_{n'=1}^{n}\\frac{r^+(n'-1+a)}{n'\\left(r^-+r n'\/K + r\\rho \\sum_{j\\neq 1 }\\langle n_j |n'\\rangle \/K \\right)}= P(0)\\frac{(r^+)^{n}(a)_{n}}{n!\\prod_{n'=1}^{n}\\left(r^-+r n'\/K + r\\rho \\sum_{j\\neq 1 }\\langle n_j |n'\\rangle \/K \\right)},\n\\end{eqnarray}\nwhere\n $a=\\mu\/r^+$ and $(a)_{n} \\equiv a(a+1)\\dots (a+n-1)$ is the Pochhammer symbol. Here, $P(0)$ is given by normalization.\nWe emphasize that the above abundance distribution $P(n)$ in \\eqref{eq:exact_appendix} is exact, meaning that no approximations have been made so far.\n\nNote that the denominator in the exact solution depends on the effect of the interactions of all other species on $n_1$, through the term $\\sum_{j\\neq 1}\\langle n_j |n_1 \\rangle $. Therefore, in order to provide an explicit expression for $P_1(n_1)$, we need to use some approximations. Three approximation approaches, and a discussion of their limitations, are given in the following subsections. We note that all the approximations presented below provide decent results. However, we find that none of them provides an adequate fit for every set of parameters; see the figures. \n\n\\subsection*{Approximation Approach I: Estimating $\\sum_{j\\neq i}\\langle n_j|n_i\\rangle $ Using a Mean-Field Approximation }\nWe assume $\\langle n_j |n_i \\rangle = \\langle n_j \\rangle $. 
Thus, \n\\begin{equation}\n P(n)\\approx P(0)\\frac{(r^+)^{n}(a)_{n}}{n!\\prod_{n'=1}^{n}\\left(r^-+r n'\/K + r\\rho \\sum_{j\\neq 1 }\\langle n_j\\rangle \/K \\right)} = P(0)\\frac{(r^+)^{n}(a)_{n}}{n!\\prod_{n'=1}^{n}\\left(r^-+r n'\/K + r\\rho (S-1)\\langle n\\rangle \/K \\right)},\n \\label{eq:MF1}\n\\end{equation}\nwhere the last equality follows from symmetry; $\\langle n_j \\rangle = \\langle n_i \\rangle = \\langle n\\rangle $ for every species $i,j$. In addition, by definition, $\\langle n \\rangle = \\sum_{n=0}^{\\infty }n P(n)$, hence\n\\begin{equation}\n \\langle n \\rangle \\approx P(0)\\sum_{n=0}^{\\infty}n\\frac{(r^+)^{n}(a)_{n}}{n!\\prod_{n'=1}^{n}\\left(r^-+r n'\/K + r\\rho (S-1)\\langle n\\rangle \/K \\right)}\n \\label{eq:MF1_closure}\n\\end{equation}\nwhere $P(0)=1\/{_1}F_1[a,b;c]$ is the normalization coefficient, with ${_1}F_1[a,b;c]$ the Kummer confluent hypergeometric function, $a=\\frac{\\mu}{r^+}$, $b=\\frac{r^- K + r \\rho (S-1) \\langle n \\rangle }{r}+1$ and $c=\\frac{r^+ K}{r}$. We solve the above implicit equation numerically to evaluate $\\langle n\\rangle$. The last step is to substitute the numerical solution of $\\langle n\\rangle$, obtained from \\eqref{eq:MF1_closure}, into \\eqref{eq:MF1}. \n\n\\subsection*{Approximation Approach II: Estimating $\\langle J|n\\rangle $ using Convolution }\nThe exact solution can be written as \n\\begin{eqnarray}\n P_1(n_1)=P(n)= P(0)\\frac{(r^+)^{n}(a)_{n}}{n!\\prod_{n'=1}^{n}\\left(r^-+r (1-\\rho)n'\/K + r\\rho \\langle J |n'\\rangle \/K \\right)},\n\\end{eqnarray}\nwhere $J=\\sum_{i=1}^{S} n_i $ is the total population size. Here we assume that the total number of individuals in the system, $J$, depends only weakly on $n_1$; thus, we treat $J$ as an independent random variable. Hence, \n\\begin{equation} \n P_1(n_1)=P(n_1|\\langle J |n_1 \\rangle ) \\approx P(n_1|J)=P(0)\n \\frac{(a)_{n_1} \\Tilde{c}^{n_1}}{n_1 ! 
(\\Tilde{b}+1)_{n_1} } \n\\end{equation}\nwith $a=\\frac{\\mu}{r^+}$, $\\tilde{b}= \\frac{r^-K+r\\rho J}{r(1-\\rho)}$, and $\\tilde{c}=\\frac{r^+ K}{r(1-\\rho)}$ [note that both $\\tilde{b}$ and $\\tilde{c}$ differ from $b$ and $c$ defined in the previous subsection].\nMoreover, we assume that the species levels are mutually independent, meaning that ${\\cal P}(n_1,\\dots n_S) \\approx \\prod_i P_i(n_i)$. Thus, the PDF of $\\sum_i n_i$ reads \n\\begin{equation}\n P\\left(\\left.\\sum_i n_i\\right|J\\right)=\\underbrace{P_1(n_1|J)*P_2(n_2|J)* \\dots * P_S(n_S|J)}_{S {\\rm \\ times}}\n\\end{equation}\nwhere $A*B$ means the convolution of $A$ with $B$. $P\\left(\\sum_i n_i|J\\right)$ is the `analytical' PDF of having $\\sum_i n_i$ individuals, assuming that the single-species PDF is $P_1(n_1|J)$ with a given $J$. \nTo capture the fact that $J$ itself has the meaning of a number of individuals, we consider\n\\begin{equation}\n P(J)\\approx \\frac{{\\rm Prob}\\left(\\left.\\sum_i n_i=J\\right|J\\right)}{\\sum_J {\\rm Prob}\\left(\\left.\\sum_i n_i=J\\right|J\\right)},\n\\end{equation}\nwhere $P(J)$ is the approximated distribution of $J$. \nThen\n\\begin{equation}\n P_1(n_1) = \\sum_{J}P_1(n_1|J) P(J)\n\\end{equation}\nis the approximated PDF. \n\nNote that when $S$ is large, we find\n\\begin{equation}\n P\\left(\\left.\\sum_i n_i\\right|J\\right) \\sim {\\cal N}\\left(S\\langle n_1 |J \\rangle, S \\cdot {\\rm Var}(n_i) \\right),\n\\end{equation}\nthus $P(J)\\approx {\\rm Prob}(\\sum_i n_i =J|J)$ reaches its maximum in the vicinity of $J$ which satisfies $J\\approx S \\langle n_i |J \\rangle = \\left\\langle \\sum_i n_i |J \\right\\rangle $. 
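Putting the convolution construction together, a minimal numerical sketch of Approximation II is given below (in Python). All parameter values and the truncation cutoffs (`nmax`, `Jmax`) are illustrative assumptions, not the values used for the figures; the only requirement is that $S\cdot n_{\max} \geq J_{\max}$ so the self-convolution covers every $J$ on the grid.

```python
import numpy as np

def single_species_pdf(J, mu=1.0, rp=2.0, rm=1.0, r=1.0, K=50.0, rho=0.5, nmax=80):
    """P(n|J) = P(0) (a)_n c^n / (n! (b+1)_n), computed via the term-ratio recursion."""
    a = mu / rp
    b = (rm * K + r * rho * J) / (r * (1.0 - rho))   # b-tilde (requires rho < 1)
    c = rp * K / (r * (1.0 - rho))                   # c-tilde
    p = np.empty(nmax + 1)
    p[0] = 1.0
    for n in range(1, nmax + 1):
        p[n] = p[n - 1] * (a + n - 1) * c / (n * (b + n))
    return p / p.sum()

def approximation_II(S=10, Jmax=200, **kw):
    """P(J) ~ Prob(sum_i n_i = J | J); then P_1(n_1) = sum_J P(n_1|J) P(J)."""
    pJ = np.empty(Jmax + 1)
    for J in range(Jmax + 1):
        q = single_species_pdf(J, **kw)
        conv = q
        for _ in range(S - 1):            # S-fold self-convolution of P(n|J)
            conv = np.convolve(conv, q)
        pJ[J] = conv[J]                   # Prob(sum_i n_i = J | J)
    pJ /= pJ.sum()
    p1 = sum(pJ[J] * single_species_pdf(J, **kw) for J in range(Jmax + 1))
    return p1 / p1.sum()
```

Because the recursion works with probability ratios rather than factorials directly, no special-function library is needed for this sketch.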
Furthermore, for the approximation $P(J)\\approx {\\rm Prob}(\\sum_i n_i =J|J)$, the values of $J$ where $J\\ll S\\langle n_i |J \\rangle $ or $J\\gg S\\langle n_i |J \\rangle $ are highly improbable, due to the Gaussian nature of $P(\\sum_i n_i|J)$ for large $S$.\n \n\\subsection*{Approximation Approach III: Estimating $\\langle J|n\\rangle $ using a Mean-Field Approximation } In a similar fashion to the previous approximation approaches, we assume $\\langle J|n\\rangle \\approx \\langle J\\rangle $, thus\n\\begin{eqnarray}\n P(n) \\approx P(0)\\frac{(r^+)^{n}(a)_{n}}{n!\\prod_{n'=1}^{n}\\left(r^-+r (1-\\rho)n'\/K + r\\rho \\langle J \\rangle \/K \\right)}= P(0)\\frac{(r^+)^{n}(a)_{n}}{n!\\prod_{n'=1}^{n}\\left(r^-+r (1-\\rho)n'\/K + r\\rho S\\langle n\\rangle \/K \\right)}.\n\\end{eqnarray}\nThen, $\\langle n \\rangle$ is given by the numerical solution of\n\\begin{eqnarray}\n \\langle n \\rangle = \\sum_{n=0}^{\\infty} n P(0)\\frac{(r^+)^{n}(a)_{n}}{n!\\prod_{n'=1}^{n}\\left(r^-+r (1-\\rho)n'\/K + r\\rho S\\langle n\\rangle \/K \\right)},\n\\end{eqnarray}\nwhere the normalization factor is $P(0)=1\/{_1}F_1[a,\\tilde{\\tilde{b}}+1;\\tilde{c}]$ with $a=\\frac{\\mu}{r^+}$, $\\tilde{\\tilde{b}}= \\frac{r^-K+r\\rho \\langle J\\rangle }{r(1-\\rho)}$, and $\\tilde{c}=\\frac{r^+ K}{r(1-\\rho)}$.\n\n\\begin{figure}[h!]\n \\centering\n \\includegraphics[width=0.45\\columnwidth,trim=100 100 100 100]{figures\/Quality.pdf}\n \\includegraphics[width=0.45\\columnwidth,trim=100 100 100 100]{figures\/examples_approx.pdf}\n \\caption{Quality of Approximation}\n \\label{fig:quality}\n\\end{figure}\n\n\\subsection*{Interspecies Correlations and Limitations of the Approximation Approaches}\nFor the approximation approaches described above, we assume the species are mutually independent, meaning $P(\\vec{n})=\\prod_{i=1}^S P(n_i)$.\nOf course, this mutual independence cannot hold exactly, except for $\\rho=0$, since for every $\\rho>0$ the dynamics of a species, i.e., its birth and death rates, depend on other 
species levels. Therefore, we expect deviations of our approximations from the true simulated SAD. In particular, we expect a relation between the interspecies correlations and the quality of the approximation. \n\nWe quantify the quality of approximation using two metrics. The first one, the Kolmogorov-Smirnov (KS) statistic, is defined as $KS(P,Q)\\equiv\\max \\left|{\\rm CDF}(P)-{\\rm CDF}(Q)\\right|$ for two PDFs, $P$ and $Q$. The second metric we use is the Kullback\u2013Leibler divergence, defined by $KL(P,Q)\\equiv \\sum_x (P-Q)\\ln\\left(P\/Q\\right)$. \nIntuitively, the KS metric captures the difference between the approximation and the simulation results, while the KL divergence captures the ratio between the distributions. \nFig.~\\ref{fig:quality} shows the KS and KL metrics comparing the three approximations presented above with the simulation results.\n\nIn the main text, we choose to present the results for the regime boundaries obtained from Approximation I above. Even so, all approximations show reasonable agreement with the simulation. \nHowever, in some regimes, some approximations work better than others. \nAdditionally, one approximation may better capture some features of the system, while another shows better agreement with other features. \nFor example, only Approximation I captures the bimodality at very low $\\mu$ and $\\rho=1$, where the other approximations otherwise seem to align with the simulated SAD slightly better.\n\n\\subsection*{Tail-end of distributions}\n\nThe above model does not describe an ecological zero-sum game, as the total population size $J=\\sum_i n_i$ may fluctuate in this formulation.\nConversely, classical Moran models describe stochastic processes in which the finite total population is fixed. 
\nThe species abundance distribution can be solved exactly in certain cases where this assumption of a fixed total population size holds.\n\nIn this limit,\n\\begin{equation}\n P(n) = P(0)\\frac{(a)_{n}}{n!}\\approx P(0)\\frac{n^{a-1}}{\\Gamma[a]}.\n\\end{equation}\nIn the case where $\\langle J|n_1\\rangle \\approx \\langle J \\rangle $: \\begin{equation}\n P(n)= P(0) \\left(\\frac{r^+}{r^- + r \\langle J \\rangle \/K}\\right)^{n}\\frac{ (a)_{n}}{n!} \\approx P(0) \\left(\\frac{r^+}{r^- + r \\langle J \\rangle \/K}\\right)^{n}\\frac{ n^{a-1}}{\\Gamma[a]}.\n\\end{equation}\nNote that this is valid only when $n \\gg a$, that is to say for the tail-end of the distributions.\n\n\nMoran models are generally neutral ($\\rho = 1$); however, our demographic noise model allows us to observe non-neutral systems. \nWe can approximate the tail of the distribution for non-neutral models ($0 < \\rho < 1$) using similar arguments as above, that is to say approximating $\\langle J |n \\rangle \\approx J$.\nUsing our previous solution and this approximation, we find that\n\\begin{equation}\n P(n) = P(0)\\frac{(r^+)^{n}(a)_{n}}{n!\\prod_{n'=1}^{n}(r^-+r(1-\\rho)n'\/K+r\\rho \\langle J |n' \\rangle \/K)}\\approx P(0)\\frac{(a)_n \\tilde{c}^n}{n!(\\tilde{b}+1)_n}\n\\end{equation}\nwhere $a=\\frac{\\mu}{r^+}$, $\\tilde{b}= \\frac{r^-K+r\\rho J}{r(1-\\rho)}$, and $\\tilde{c}=\\frac{r^+ K}{r(1-\\rho)}$.\nLooking at the tail of the distribution ($n \\gg a, \\tilde{b}, \\tilde{c}$) we find that \n\\begin{equation}\n P(n) \\xrightarrow{n \\rightarrow \\infty} P(0)\\frac{\\Gamma[\\tilde{b}+1]}{\\sqrt{2 \\pi} \\Gamma[a]} n^{a-\\tilde{b}-\\frac{3}{2}}(\\tilde{c}\/n)^{n}e^n\n\\end{equation} \nwhere we have used Stirling's approximation that $n! 
\\approx \\sqrt{2\\pi n}n^n e^{-n}$.\nIn the high $n$ limit, we can further approximate $J\\sim n$, such that $P(n)\\sim n^{-n\/(1-\\rho)}e^n$.\n\nIn the case where $\\rho \\rightarrow 0$, there is no need to approximate $\\langle J |n\\rangle$, as that term disappears in the exact solution.\nWe are left with the solution\n\\begin{equation}\n P(n) \\xrightarrow{\\rho\\rightarrow 0}P(0) \n \\frac{(\\mu\/r^+)_{n} (r^+ K\/r)^{n}}{n ! (r^-K\/r+1)_{n} }\n\\end{equation}\nwhich is exact.\nThe tail of this distribution goes as\n\\begin{equation}\n P(n) \\xrightarrow{\\rho\\rightarrow 0,n\\rightarrow \\infty} P(0)\\frac{\\Gamma[r^-K\/r+1]}{\\sqrt{2 \\pi} \\Gamma[\\mu\/r^+]} n^{\\frac{\\mu}{r^+}-\\frac{r^-K}{r}-\\frac{3}{2}}\\left(\\frac{r^+K}{rn}\\right)^{n}e^n.\n\\end{equation}\n\nNote that the denominators of $\\tilde{b}$ and $\\tilde{c}$ go to 0 as $\\rho \\rightarrow 1$, so the limit must be taken carefully: $\\tilde{b} \\xrightarrow{\\rho \\rightarrow 1} \\infty $.\nWe use the fact that $(x+1)_n \\xrightarrow{ x \\rightarrow\\infty} x^{n}$ to write\n\\begin{equation}\n P(n) \\xrightarrow{\\rho\\rightarrow 1}P(0) \n \\frac{(a)_{n} \\{r^+ K\/[r(1-\\rho)]\\}^{n}}{n ! \\{(r^-K+rJ)\/[r(1-\\rho)]\\}^{n} }= P(0) \n \\frac{(a)_{n} (r^+ K)^{n}}{n ! (r^-K+rJ)^{n} } \\xrightarrow{n \\rightarrow \\infty } P(0) \\left( \\frac{r^+K}{r^-K + rJ} \\right)^n \\frac{n^{a-1}}{\\Gamma[a]},\n\\end{equation}\nwhich agrees with what we found earlier for $\\rho=1$ and constant $J$. For $\\rho = 1$, we know that if any one species' abundance gets large, it should dominate the system. Therefore, we can approximate $J\\approx n$ for large $n$ in the neutral regime.\n\nThis asymptotic behaviour may be compared to analytical solutions for which $J$ is held constant.\nThese Moran-type models are often exactly solvable; we show their results with $J=S n_{det}$, where $n_{det}$ is the solution of the deterministic mean-field Lotka-Volterra equation. 
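The Stirling-based tails above can be checked numerically by comparing them, in log space, against the exact product form. A short Python sketch for the $\rho\rightarrow 0$ case follows; the parameter values are illustrative only, with $a=\mu/r^+$, $b=r^-K/r$, and $c=r^+K/r$.

```python
import math

def log_pn_exact(n, a, b, c):
    # log[(a)_n c^n / (n! (b+1)_n)]: the exact rho -> 0 solution, up to the constant P(0)
    return (math.lgamma(a + n) - math.lgamma(a) + n * math.log(c)
            - math.lgamma(n + 1) - math.lgamma(b + 1 + n) + math.lgamma(b + 1))

def log_pn_tail(n, a, b, c):
    # log of the Stirling asymptotic:
    # Gamma(b+1)/(sqrt(2*pi)*Gamma(a)) * n^(a-b-3/2) * (c/n)^n * e^n
    return (math.lgamma(b + 1) - math.lgamma(a) - 0.5 * math.log(2.0 * math.pi)
            + (a - b - 1.5) * math.log(n) + n * (math.log(c) - math.log(n) + 1.0))
```

Working with `math.lgamma` avoids the overflow that the raw factorials and Pochhammer symbols would cause; for $n$ much larger than $b$ and $c$, the two expressions converge.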
\nIn~\\cite{mckane2004analytic}, an analytical solution to the Hubbell model with immigration is found such that\n\\begin{equation}\n P(n)={J \\choose n}\\frac{\\beta (n+p,n^*-n)}{\\beta (p^*,n^*-J)} \n\\end{equation}\nwhere $p=1\/S$, $n^*=(J-m)\/(1-m) - p$, and $p^*=m p (J-1)\/(1-m)$.\nIn this model, $m$ is defined as the probability of immigration at any step.\nThis is different from our immigration rate; however, we find a suitable transformation to be $m \\approx \\mu \/ \\langle r^+_n + r^-_n \\rangle$: the probability of immigration is the rate of immigration divided by the mean rate of a reaction.\nNote that $\\beta(a,b)=\\Gamma (a) \\Gamma (b)\/ \\Gamma (a+b)$ is the beta function.\n\nIn~\\cite{baxter2007exact}, a continuum Fokker-Planck equation is solved to evaluate the abundance in a similar multi-allelic diffusion model. \nHowever, in this formalism, immigration is replaced by mutations, wherein $u_i$ is the mutation rate of allele $i$.\nAssuming all the mutation rates are equal, $u_i=u$, the steady-state joint probability distribution is \n\\begin{equation}\n P(\\vec{x})=\\Gamma (2 S u ) \\delta (1-\\sum_i x_i)\\prod^S_{i=0} \\frac{x_i^{2u-1}}{\\Gamma (2 u )}\n\\end{equation}\nwhich may be integrated to find the SAD\n\\begin{equation}\n SAD(n) = \\langle \\sum_j \\delta (x_j - n\/J) \\rangle_{P(\\vec{x})} \\approx \\left( \\frac{n}{J} \\right)^{2 u - 1 } e^{- (2 u (S-1) -1 ) n \/ J } .\n\\end{equation}\nAlthough mutations and immigration are not completely equivalent, mutations may take on a heuristic role similar to immigration, preventing any species from being truly extinct.\nAs such, we assume $u=\\mu\/r$.\nComparisons of these different asymptotic behaviours are shown in Fig.~\\ref{fig:asymptotic}.\n\n\\begin{figure}[h!]\n \\centering\n \\includegraphics[width=0.7\\columnwidth]{figures\/asymptotic_neutral.pdf}\n \\caption{Asymptotic behaviour of various neutral models compared to simulations. 
(top panels) Using $\\langle J |n \\rangle \\approx n $, our approximation does not recover a bimodality; however, the analytical approximation clearly follows the simulation's power law with exponential cutoff. (bottom panels) Moran-like models from the literature with power-law SADs with exponential cutoff. Here, the total population size used is the total population size of the steady-state Lotka-Volterra equation, $J=S n_{LV}$. The continuous Fokker-Planck diffusion model of Baxter, Blythe \\& McKane\\cite{baxter2007exact} shows the immigration-dominated peak. However, the Hubbell community model solved by Alonso, McKane \\& Sole\\cite{mckane2004analytic} shows a bimodality in the low-immigration regime. Both have power laws with exponential cutoffs in different regimes.}\n \\label{fig:asymptotic}\n\\end{figure}\n\n\\section*{Derivations of Boundary Equations}\n\n\\subsection*{Boundaries for Richness Regimes} For the boundaries defined from richness, we use\n$ \\langle S^* \\rangle = S \\left(1-P(0)\\right) \n$, where $P(0)$ is obtained numerically from the approximated SAD. Note that in the mean-field approximations $P(0)$ is explicitly given in terms of the Kummer confluent hypergeometric function. Then, the transition from full richness to partial coexistence is given by $S P(0) = 1\/2$ (as the arithmetic mean between the two boundaries). Similarly, the transition boundary between partial coexistence and the excluded regime is drawn where $SP(0)=S-3\/2$. \n\n\\subsection*{Derivation of $\\tilde{n}$ } The boundaries defined by the modalities can be obtained directly from the above approximations; see the figures. However, we have found that using the mean-field approximation allows us to derive a closed-form expression for the boundaries. \n\nThe transition from the neutral-like to the bimodal regime is defined by the presence or absence of a local maximum at a positive abundance. 
In other words, in the neutral-like regime $P(n)>P(n+1)$ for all $n$, since the SAD is monotonically decreasing, while in the bimodal regime there is $n>0$ where $P(n)<P(n+1)$, so that the SAD has a local maximum at a positive abundance (together with a local minimum below it).\n\n\\section*{Fokker-Planck and Langevin Approximations}\nSince $K\\gg 1$ is the system size, we define $x_i$ to be the corresponding continuous limit of $n_i$.\nThe variable may be rescaled by the characteristic system size, $y_i = x_i \/ K$.\nThis continuous approximation has probability density $ P(\\vec{n},t) = p(\\vec{y},t)\/K$, and we may write the scaled rates as $q_i^{+\/-}(\\vec{n})=K Q_i^{+\/-}(\\vec{y})$, where\n\\begin{align*}\n Q_i^{+}(\\vec{y}) &= r^+ y_i + \\mu \/ K \\\\\n Q_i^{-}(\\vec{y}) &= r^- y_i + r y_i \\left( y_i + \\sum_{j\\neq i}\\rho_{j,i} y_j \\right)\n\\end{align*}\nare defined on the continuum.\nThen we can write the corresponding master equation as\n\\begin{equation}\n \\label{eq:master-eq-cont}\n \\partial_t p(\\vec{y};t) =\n K \\sum_{i}\\left\\{Q^+_i(\\vec{y}-\\vec{e}_i)p(\\vec{y}-\\vec{e}_i,t)+ \n Q^-_i (\\vec{y}+\\vec{e}_i)p(\\vec{y}+\\vec{e}_i,t) - \\left[Q^+_i(\\vec{y})+Q^-_i(\\vec{y})\\right]p(\\vec{y},t)\\vphantom{\\vec{e}_i} \\right\\}\n\\end{equation}\nwherein $\\vec{e}_i$ is the change in abundance $\\vec{y}$ from the respective event, which in our single birth-death process is $\\vec{e}_i=(\\delta_{1i}\/K, \\delta_{2i}\/K, ..., \\delta_{Si}\/K)$, in other words a vector of zeros except for $1\/K$ located at species $i$.\nTo go from the master equation to the Fokker-Planck equation, we Taylor expand each of the expressions on the right-hand side of \\eqref{eq:master-eq-cont}.\nAs such,\n\\begin{multline*}\n Q^+_i(\\vec{y}-\\vec{e}_i)p(\\vec{y}-\\vec{e}_i,t)= Q^+_i(\\vec{y})p(\\vec{y},t)+\\sum_j (-(\\vec{e}_i)_j) \\frac{\\partial}{\\partial y_j}\\left( Q^+_i(\\vec{y})p(\\vec{y},t)\\vphantom{\\frac{1}{1}}\\right)\n + \\frac{1}{2!}\\sum_j \\sum_k (\\vec{e}_i)_j(\\vec{e}_i)_k \\frac{\\partial^2}{\\partial y_j \\partial y_k}\\left( 
Q^+_i(\\vec{y})p(\\vec{y},t)\\vphantom{\\frac{1}{1}}\\right)+...\n\\end{multline*}\n\\begin{multline*}\n Q^-_i(\\vec{y}+\\vec{e}_i)p(\\vec{y}+\\vec{e}_i,t)= Q^-_i(\\vec{y})p(\\vec{y},t)+\\sum_j (\\vec{e}_i)_j \\frac{\\partial}{\\partial y_j}\\left( Q^-_i(\\vec{y})p(\\vec{y},t)\\vphantom{\\frac{1}{1}}\\right)\n + \\frac{1}{2!}\\sum_j \\sum_k (\\vec{e}_i)_j (\\vec{e}_i)_k \\frac{\\partial^2}{\\partial y_j \\partial y_k}\\left( Q^-_i(\\vec{y})p(\\vec{y},t)\\vphantom{\\frac{1}{1}}\\right)+...\n\\end{multline*}\nNote that $(\\vec{e}_i)_j=\\delta_{ij}\/K$ in our single birth-death event process, which simplifies these equations considerably. \nNow, substituting these expressions into our master equation \\eqref{eq:master-eq-cont}, we note that we can write the equation in orders of $1\/K$.\nThus, we obtain\n\\begin{equation}\n\\label{eq:fokker-planck}\n \\partial_t p(\\vec{y};t) =\n -\\sum_j \\frac{\\partial}{\\partial y_j}\\left[\\left( Q^+_j(\\vec{y})-Q^-_j(\\vec{y})\\right)p(\\vec{y},t)\\vphantom{\\frac{1}{1}}\\right]\n + \\frac{1}{2 K}\\sum_j \\frac{\\partial^2}{\\partial y_j^2}\\left[\\left( Q^+_j(\\vec{y})+Q^-_j(\\vec{y})\\right)p(\\vec{y},t)\\vphantom{\\frac{1}{1}}\\right] + \\mathcal{O}(1\/K^2).\n\\end{equation}\nUp to order $1\/K$, \\eqref{eq:fokker-planck} is the Fokker-Planck equation (FPE) of the process.\nUsing It\\^o's prescription for SDEs, this corresponds to the Langevin equation\n\\begin{equation}\n \\label{eq:langevin}\n d y_i = \\left( Q^+_i(\\vec{y})-Q^-_i(\\vec{y}) \\right)dt + \\sqrt{ \\frac{Q^+_i(\\vec{y})+Q^-_i(\\vec{y})}{K} } dW_i\n\\end{equation}\nwhere $W_i$ is a standard Wiener process. 
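This Langevin equation can be integrated with a simple Euler--Maruyama scheme; a minimal Python sketch is given below. The time step, the clipping of negative excursions back to zero at the extinction boundary, the uniform competitive overlap $\rho_{j,i}=\rho$, and all parameter values are illustrative choices, not the settings used for the figures.

```python
import numpy as np

def euler_maruyama(S=30, K=100.0, mu=1.0, rp=2.0, rm=1.0, r=1.0, rho=0.5,
                   dt=1e-3, steps=10_000, seed=0):
    """Integrate dy_i = (Q_i^+ - Q_i^-) dt + sqrt((Q_i^+ + Q_i^-)/K) dW_i."""
    rng = np.random.default_rng(seed)
    y = np.full(S, 0.5)                                  # rescaled abundances y_i = x_i/K
    for _ in range(steps):
        Qp = rp * y + mu / K                             # birth rates Q_i^+
        Qm = rm * y + r * y * ((1 - rho) * y + rho * y.sum())  # death rates Q_i^- (uniform rho)
        y = y + (Qp - Qm) * dt + np.sqrt((Qp + Qm) * dt / K) * rng.standard_normal(S)
        y = np.clip(y, 0.0, None)                        # demographic noise vanishes at y = 0
    return y
```

The clipping at zero is one simple way to handle the boundary; more careful treatments (reflecting or absorbing schemes) change the statistics near extinction, which is exactly the regime where the Fokker-Planck approximation is least reliable.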
Multiplying both sides of this equation by the characteristic size $K$ (so that $K\\, dy_i = dx_i$), we obtain\n\\begin{equation}\n \\label{eq:langevin-LV}\n d x_i = \\left( q^+_i(\\vec{x})-q^-_i(\\vec{x}) \\right)dt + \\sqrt{ q^+_i(\\vec{x})+q^-_i(\\vec{x}) } dW_i\n\\end{equation}\nwhere the force term in the Langevin equation recovers the Lotka-Volterra equation.\n\nIn particular, a common choice is for the diffusion term to be proportional to the square root of the abundance, such that the noise is independent of other species' abundances, as in \n\\begin{equation}\n \\label{eq:langevin-LV-sqrt-noise}\n d x_i = \\left( q^+_i(\\vec{x})-q^-_i(\\vec{x}) \\right)dt + \\sqrt{ K r x_i } dW_i.\n\\end{equation}\nHowever, not every choice of birth and death rates recovers this form.\n\nUsing an Euler integration method, we simulate the Langevin equation to assess how well the Fokker-Planck approximation reproduces the SAD. \nWe find that the Fokker-Planck approximation does not reproduce the complete phase space of the modality regimes for either of the noise terms specified in \\eqref{eq:langevin-LV} and \\eqref{eq:langevin-LV-sqrt-noise}; see Fig.~\\ref{fig:langevin}.\nIn both cases, the `neutral-like' regime at high competitive overlap ($\\rho > 2\\cdot10^{-1}$) is present even at low immigration, such that no bimodality is observed on the neutral manifold ($\\rho = 1$).\nThe boundary between the immigration-dominated unimodal regime and the other regimes is recovered; in this regime, few fluctuations are strong enough to bring any species close to the excluded state, $x_i=0$.\nNote that the force term is always positive for small $x_i>0$, which implies that species are deterministically pushed back from exclusion.\nAdditionally, the multimodal regime is absent in all Langevin results.\n\n\\begin{figure}[h!]\n \\centering\n \\includegraphics[width=0.7\\columnwidth]{figures\/supp-langevin.pdf}\n \\caption{Langevin numerical simulations. 
Left: Modality regimes with the noise \\eqref{eq:langevin-LV} from the rates defined in the main text. Right: Modality regimes with $\\sqrt{n}$ noise as in \\eqref{eq:langevin-LV-sqrt-noise}. Colours correspond to the modality regimes presented in the main text: yellow is the `neutral-like' regime, teal is the bimodal regime and purple is the immigration-dominated unimodal regime.}\n \\label{fig:langevin}\n\\end{figure}\n\n\\section*{Species Rank Abundances vs Species Abundance Distribution }\nIn the vast majority of the manuscript we use species abundance distributions (SADs), together with some dynamical properties, in order to examine and classify processes into different regimes. However, in many experimental studies, the species rank abundances (SRAs) are frequently reported instead; e.g., see \\cite{ser2018ubiquitous,mora2018quantifying,hubbell1979tree,descheemaeker2020stochastic,jeraldo2012quantification}. The SRA is closely related to the cumulative distribution corresponding to the SAD, as described in the following. First, the cumulative abundance distribution is computed as ${\\rm CAD}(n)\\equiv\\sum_{n'=0}^n P(n')$. Then, we note that the most abundant species, namely the species with rank 1, has abundance between ${\\rm CAD}^{-1}(1-1\/S)$ and ${\\rm CAD}^{-1}(1)$. The second most abundant species, i.e., the one with rank 2, has abundance between ${\\rm CAD}^{-1}(1-2\/S)$ and ${\\rm CAD}^{-1}(1-1\/S)$, and so on. Therefore, the x-axis in Fig.~\\ref{fig:SRAvsSAD_rho1} is computed as $1+S(1-{\\rm CAD}(n))$ and the y-axis shows the abundances $n$. Using this approach, we generated the SRAs corresponding to the SADs; see Fig.~\\ref{fig:SRAvsSAD_rho1} for the results for $\\rho=1$. 
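The SAD-to-SRA construction described above amounts to inverting the cumulative abundance distribution; a minimal Python sketch is given below (the input SAD in the usage example is an arbitrary illustrative choice, not one of the simulated distributions).

```python
import numpy as np

def sra_from_sad(P, S):
    """Map a (possibly unnormalized) SAD P(n) to rank--abundance pairs.

    As in the text, abundance n is assigned the fractional rank 1 + S*(1 - CAD(n)),
    so the rank-1 species sits between CAD^{-1}(1 - 1/S) and CAD^{-1}(1)."""
    P = np.asarray(P, dtype=float)
    cad = np.cumsum(P) / P.sum()          # cumulative abundance distribution CAD(n)
    ranks = 1 + S * (1 - cad)             # rank assigned to abundance n
    return ranks, np.arange(len(P))       # plot as (x, y) = (rank, abundance)
```

Plotting `ranks` on the x-axis against the abundances on the y-axis reproduces the construction used for the right panel of the figure.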
However, as is shown in Fig.~\\ref{fig:SRAvsSAD_rho1}, classification through the SRAs is less clear-cut.\n\n\n\\begin{figure}[h!]\n \\centering\n \\includegraphics[trim=140 220 100 230, width=0.49\\columnwidth]{figures\/SAD_rho1.pdf}\n \\includegraphics[trim=140 220 100 230, width=0.49\\columnwidth]{figures\/SRA_rho1.pdf}\n \\caption{A comparison between the species abundance distribution (left panel) and its corresponding species rank abundances (right panel). Here $\\rho=1$ is fixed, and the immigration rate $\\mu$ varies. Solid lines represent $\\mu\\in [10^{-3}, 10^{-1}, 10]$ following the legend. The dashed, dotted and dashed-dotted lines correspond to intermediate values of $\\mu$. The color scheme corresponds to the modality classification of the SAD: teal (bimodal, very low $\\mu$), yellow (rare-biosphere, intermediate $\\mu$) and blue (unimodal, high $\\mu$). }\n \\label{fig:SRAvsSAD_rho1}\n\\end{figure}\n\n\\section*{Experimental Studies}\n\n\\begin{table}[h!]\n \\centering\n \\begin{tabular}{|p{3cm}|p{4.5cm}|p{5cm}|p{3cm}|}\n System (Ref.) & Regimes &\tConclusions & Observations \\\\ \\hline \nMicrobial competition \\cite{gore2021} & Stable full coexistence (IIa), stable partial coexistence (IIb), persistent fluctuations (IIIb) &\tPositive correlation between species and instability (rapid fluctuations)& \tCommunity composition\/ richness\/ fluctuating communities \n\\\\\nGlobal bird species \\cite{callaghan2021global} &\tUnimodal - log skew &\t&\tSAD \\\\\nPlankton \\cite{ser2018ubiquitous} &\tAbundance shows power-law decay, neutral-like &\t& SAD and SRA \\\\\nCoral \\cite{dornelas2008multiple}\t& Multimodal distribution &\tSAD not from habitat preferences; most likely due to spatial effects. 
Common in large samples &\tSAD\t\n\\\\\nLymphocyte repertoire \\cite{mora2018quantifying} &\tPower law distributions&\tFunctional repertoire is more relevant than actual repertoire, overlap in antigen coverage reduces the repertoire size &\tSRA \n\\\\\nArthropods \\cite{matthews2014multimodal} &\tMulti-modal distribution\t& Propose that multiple modes come from ecologically distinct communities &\tSAD \n\\\\\nTrees \\cite{hubbell1979tree} &\tPower law distributions &\tDifferent forest types show curves with different slopes, explained by a random walk model &\tSRA \n\\\\\nBacteria \\cite{bell2005larger}\t& Increasing K, related to the Species Area Relation (SAR) &\tLarger islands have more bacterial taxa on them (increased diversity)\t& Diversity per island size\t\n\\\\\nT-cell receptors \\cite{oakes2017quantitative}\t& Bimodal and unimodal (exponential) &\tAlthough the total population (CD8 naive cells) has a bimodal distribution, subpopulations have exponential clonotype distributions & SAD\t\n\\\\\nMicrobial competition \\cite{descheemaeker2020stochastic} &\tNeutral-like and niche-like & Heavy-tailed rank abundance (the tongue microbiome seems to have 2 slopes). A model with linear (extrinsic) noise reproduces this & SRA and time series \n\\\\ \nCompetition in gastrointestinal microbiomes \\cite{jeraldo2012quantification} &\tNon-neutral (niche) &\tThe species abundance patterns are seemingly well fit by the neutral theory, however the operational taxonomic units (OTUs) classify it as niche &\tSRA and OTUs\n\\\\ \n \\end{tabular}\n \\caption{Experimental studies considering competing species, and their reported results.}\n \\label{tab:my_label}\n\\end{table}\n\n\n\\iffalse\n\\begin{table}\\centering\n\\caption{This is a table}\n\n\\begin{tabular}{lrrr}\nSpecies & CBS & CV & G3 \\\\\n\\midrule\n1. Acetaldehyde & 0.0 & 0.0 & 0.0 \\\\\n2. Vinyl alcohol & 9.1 & 9.6 & 13.5 \\\\\n3.
Hydroxyethylidene & 50.8 & 51.2 & 54.0\\\\\n\\bottomrule\n\\end{tabular}\n\\end{table}\n\\fi\n\n\\iffalse\n\\movie{Type legend for the movie here.}\n\n\\movie{Type legend for the other movie here. Adding longer text to show what happens, to decide on alignment and\/or indentations.}\n\n\\movie{A third movie, just for kicks.}\n\n\\dataset{dataset_one.txt}{Type or paste legend here.}\n\n\\dataset{dataset_two.txt}{Type or paste legend here. Adding longer text to show what happens, to decide on alignment and\/or indentations for multi-line or paragraph captions.}\n\\fi\n\n\\newpage\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} diff --git a/data_all_eng_slimpj/shuffled/split2/finalzzewcm b/data_all_eng_slimpj/shuffled/split2/finalzzewcm new file mode 100644 index 0000000000000000000000000000000000000000..5852e83b5a893ed5324b1aa19a4f9d8cab2568c7 --- /dev/null +++ b/data_all_eng_slimpj/shuffled/split2/finalzzewcm @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}\\label{sec:introduction}\nLet $q\\geq 2$ be a positive integer. Then every real $\\theta\\in[0,1)$\nadmits a unique expansion of the form\n\\[\\theta=\\sum_{k\\geq1}a_kq^{-k}\\quad(a_k\\in\\{0,\\ldots,q-1\\})\\]\ncalled the $q$-ary expansion. We denote by\n$\\mathcal{N}(\\theta,d_1\\cdots d_\\ell,N)$ the number of occurrences of the block\n$d_1\\cdots d_\\ell$ amongst the first $N$ digits, \\textit{i.e.}\n\\[\\mathcal{N}(\\theta,d_1\\cdots d_\\ell,N):=\\#\\{0\\leq i< N\\colon\na_{i+1}=d_1,\\ldots,a_{i+\\ell}=d_\\ell\\}.\\] Then we call a number normal\nof order $\\ell$ in base $q$ if for each block of length $\\ell$ the\nfrequency of occurrences tends to $q^{-\\ell}$. As a quantitative\nmeasure of the distance of a number from being normal we introduce for\nintegers $N$ and $\\ell$ the discrepancy of $\\theta$\nby \\[\\mathcal{R}_{N,\\ell}(\\theta)=\\sup_{d_1\\ldots\n d_\\ell}\\lvert\\frac{\\mathcal{N}(\\theta,d_1\\cdots\n d_\\ell,N)}{N}-q^{-\\ell}\\rvert,\\] where the supremum is over all blocks\nof length $\\ell$.
Then a number $\\theta$ is normal to base $q$ if for\neach $\\ell\\geq1$ we have that $\\mathcal{R}_{N,\\ell}(\\theta)=o(1)$ for\n$N\\to\\infty$. Furthermore we call a number absolutely normal if it is\nnormal in all bases $q\\geq2$.\n\nBorel \\cite{borel1909:les_probabilites_denombrables} used a slightly\ndifferent, but equivalent (\\textit{cf.} Chapter 4 of \\cite{bugeaud2012:distribution_modulo_one}), definition of normality to show that almost\nall real numbers are normal with respect to the Lebesgue\nmeasure. Despite their omnipresence it is not known whether numbers\nsuch as $\\log2$, $\\pi$, $e$ or $\\sqrt2$ are normal to any base. The\nfirst construction of a normal number is due to Champernowne\n\\cite{champernowne1933:construction_decimals_normal} who showed that\nthe number\n\\begin{align*}\n0.1\\,2\\,3\\,4\\,5\\,6\\,7\\,8\\,9\\,10\\,11\\,12\\,13\\,14\\,15\\,16\\,17\\,18\\,19\\,20\\dots\n\\end{align*}\nis normal in base $10$.\n\nThe construction of Champernowne laid the base for a class of\nnormal numbers which are of the form\n\\begin{gather*}\n\\sigma_q=\\sigma_q(f)=\n 0.\\left\\lfloor f(1)\\right\\rfloor_q\\left\\lfloor f(2)\\right\\rfloor_q\\left\\lfloor f(3)\\right\\rfloor_q \\left\\lfloor f(4)\\right\\rfloor_q \\left\\lfloor f(5)\\right\\rfloor_q \\left\\lfloor f(6)\\right\\rfloor_q \\dots,\n\\end{gather*}\nwhere $\\left\\lfloor\\cdot\\right\\rfloor_q$ denotes the expansion in base $q$ of the integer\npart. Davenport and Erd{\\H o}s\n\\cite{davenport_erdoes1952:note_on_normal} showed that $\\sigma(f)$ is\nnormal for $f$ being a polynomial such that $f(\\mathbb{N})\\subset\\mathbb{N}$. This\nconstruction was extended by Schiffer\n\\cite{schiffer1986:discrepancy_normal_numbers} to polynomials with\nrational coefficients. Furthermore he showed that for these\npolynomials the discrepancy $\\mathcal{R}_{N,\\ell}(\\sigma(f))\\ll (\\log\nN)^{-1}$ and that this is best possible. 
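These definitions are easy to probe numerically. The sketch below is an illustration added here, not part of the original argument: it generates the first $N$ digits of Champernowne's number in base 10 and measures block frequencies. In line with the $\mathcal{O}((\log N)^{-1})$ discrepancy bound, the agreement with $q^{-\ell}$ is visible but still loose at this modest $N$.

```python
# First N digits of Champernowne's number 0.123456789101112... in base 10.
N = 100_000
chunks, total, i = [], 0, 0
while total < N:
    i += 1
    d = str(i)
    chunks.append(d)
    total += len(d)
digits = "".join(chunks)[:N]

def block_frequency(digits, block):
    """N(theta, d_1...d_l, N) / N, counting overlapping occurrences."""
    L = len(block)
    hits = sum(1 for j in range(len(digits) - L + 1) if digits[j:j + L] == block)
    return hits / len(digits)

f1 = block_frequency(digits, "7")    # should approach 10**-1
f2 = block_frequency(digits, "42")   # should approach 10**-2
```

The single-digit frequency lands close to $0.1$, while blocks containing small leading digits (such as "1") are still visibly over-represented at this $N$, reflecting the slow convergence.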
These results were extended\nby Nakai and Shiokawa\n\\cite{nakai_shiokawa1992:discrepancy_estimates_class} to polynomials\nhaving real coefficients. Madritsch, Thuswaldner and Tichy\n\\cite{madritsch_thuswaldner_tichy2008:normality_numbers_generated}\nconsidered transcendental entire functions of bounded logarithmic\norder. Nakai and Shiokawa\n\\cite{nakai_shiokawa1990:class_normal_numbers} used\npseudo-polynomial functions, \\textit{i.e.} these are functions of the\nform\n\\begin{gather}\\label{mani:pseudopoly}\n f(x)=\\alpha_0 x^{\\beta_0}+\\alpha_1x^{\\beta_1}+\\cdots+\\alpha_dx^{\\beta_d}\n\\end{gather}\nwith $\\alpha_0,\\beta_0,\\alpha_1,\\beta_1,\\ldots,\\alpha_d,\\beta_d\\in\\mathbb{R}$,\n$\\alpha_0>0$, $\\beta_0>\\beta_1>\\cdots>\\beta_d>0$ and at least one\n$\\beta_i\\not\\in\\mathbb{Z}$. Since we often only need the leading term we write\n$\\alpha=\\alpha_0$ and $\\beta=\\beta_0$ for short. They were also able\nto show that the discrepancy is $\\mathcal{O}((\\log N)^{-1})$. We refer\nthe interested reader to the books of Kuipers and Niederreiter\n\\cite{kuipers_niederreiter1974:uniform_distribution_sequences}, Drmota\nand Tichy \\cite{drmota_tichy1997:sequences_discrepancies_and} or\nBugeaud \\cite{bugeaud2012:distribution_modulo_one} for a more complete\naccount on the construction of normal numbers.\n\nThe present method of construction by concatenating function values is in\nstrong connection with properties of $q$-additive functions.
We call a\nfunction $f$ strictly $q$-additive, if $f(0)=0$ and the function\noperates only on the digits of the $q$-ary representation, i.e.,\n\\[\n f(n)=\\sum_{h=0}^\\ell f(d_h)\\quad\\text{ for }\\quad n=\\sum_{h=0}^\\ell d_hq^h.\n\\]\nA very simple example of a strictly $q$-additive function is the sum of digits\nfunction $s_q$, defined by\n\\[\n s_q(n)=\\sum_{h=0}^\\ell d_h\\quad\\text{ for }\\quad n=\\sum_{h=0}^\\ell d_hq^h.\n\\]\n\nRefining the methods of Nakai and Shiokawa\n\\cite{nakai_shiokawa1990:class_normal_numbers} the author obtained the\nfollowing result.\n\\begin{thm}[{\\cite[Theorem 1.1]{madritsch2012:summatory_function_q}}]\n Let $q\\geq2$ be an integer and $f$ be a strictly $q$-additive\n function. If $p$ is a pseudo-polynomial as defined in\n (\\ref{mani:pseudopoly}), then there exists $\\eta>0$ such that\n\\begin{gather*}\n \\sum_{n\\leq N}f\\left(\\left\\lfloor p(n)\\right\\rfloor\\right)\n =\\mu_fN\\log_q(p(N))\n +NF\\left(\\log_q(p(N))\\right)\n +\\mathcal{O}\\left(N^{1-\\eta}\\right),\n\\end{gather*}\nwhere\n\\[\n\\mu_f=\\frac1q\\sum_{d=0}^{q-1}f(d)\n\\]\nand $F$ is a $1$-periodic function depending only on $f$ and $p$.\n\\end{thm}\n\nIn the present paper, however, we are interested in a variant of\n$\\sigma_q(f)$ involving primes. As a first example, Champernowne\n\\cite{champernowne1933:construction_decimals_normal} conjectured and\nlater Copeland and Erd{\\H o}s\n\\cite{copeland_erdoes1946:note_on_normal} proved that the number\n\\begin{align*}\n0.2\\,3\\,5\\,7\\,11\\,13\\,17\\,19\\,23\\,29\\,31\\,37\\,41\\,43\\,47\\,53\\,59\\,61\\,67\\dots\n\\end{align*}\nis normal in base $10$. 
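Both the strict $q$-additivity of $s_q$ and the shape of the summatory asymptotics in the theorem above can be checked numerically. The following sketch is illustrative only: it takes $q=10$, the pseudo-polynomial $p(x)=x^{3/2}$ and a small $N$, so the lower-order terms of the theorem are still visible in the ratio.

```python
import math

def s_q(n, q=10):
    """Strictly q-additive digit sum: sum of the base-q digits of n."""
    total = 0
    while n > 0:
        n, d = divmod(n, q)
        total += d
    return total

# Strict q-additivity: s_q(a * q^k + b) = s_q(a) + s_q(b) whenever b < q^k,
# because the digits of a and b occupy disjoint positions.
a, b, k = 57, 304, 3
assert s_q(a * 10**k + b) == s_q(a) + s_q(b)

# Leading term of the summatory function for p(x) = x^{3/2}:
# sum_{n <= N} s_q(floor(p(n))) ~ mu_f * N * log_q(p(N)), with
# mu_f = (1/10) * (0 + 1 + ... + 9) = 4.5 for the base-10 digit sum.
N, beta = 10_000, 1.5
lhs = sum(s_q(math.floor(n ** beta)) for n in range(1, N + 1))
main = 4.5 * N * beta * math.log10(N)
ratio = lhs / main   # tends to 1 as N grows
```

At $N=10^4$ the ratio is already close to 1, with the gap accounted for by the periodic and error terms of the theorem.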
Similar to the construction above we want to\nconsider the number\n\\begin{gather*}\n\\tau_q=\\tau_q(f)=0.\\left\\lfloor f(2)\\right\\rfloor_q \\left\\lfloor f(3)\\right\\rfloor_q \\left\\lfloor f(5)\\right\\rfloor_q \\left\\lfloor f(7)\\right\\rfloor_q \\left\\lfloor f(11)\\right\\rfloor_q \\left\\lfloor f(13)\\right\\rfloor_q \\dots,\n\\end{gather*}\nwhere the arguments of $f$ run through the sequence of primes.\n\nThen the paper of Copeland and Erd{\\H o}s corresponds to the function\n$f(x)=x$. Nakai and Shiokawa\n\\cite{nakai_shiokawa1997:normality_numbers_generated} showed that the\ndiscrepancy for polynomials having rational coefficients is\n$\\mathcal{O}((\\log N)^{-1})$. Furthermore Madritsch, Thuswaldner and\nTichy\n\\cite{madritsch_thuswaldner_tichy2008:normality_numbers_generated}\nshowed, that transcendental entire functions of bounded logarithmic\norder yield normal numbers. Finally in a recent paper Madritsch and\nTichy \\cite{madritsch_tichy2013:construction_normal_numbers}\nconsidered pseudo-polynomials of the special form $\\alpha x^\\beta$\nwith $\\alpha>0$, $\\beta>1$ and $\\beta\\not\\in\\mathbb{Z}$.\n\nThe aim of the present paper is to extend this last construction to\narbitrary pseudo-polynomials. Our first main result is the following\n\\begin{thm}\\label{thm:normal}\nLet $f$ be a pseudo-polynomial as in (\\ref{mani:pseudopoly}). Then\n\\[\n\\mathcal{R}_N(\\tau_q(f))\\ll(\\log N)^{-1}.\n\\]\n\\end{thm}\n\n\nIn our second main result we use the connection of this construction\nof normal numbers with the arithmetic mean of $q$-additive functions\nas described above. Known results are due to Shiokawa\n\\cite{shiokawa1974:sum_digits_prime} and Madritsch and Tichy\n\\cite{madritsch_tichy2013:construction_normal_numbers}. 
Similar\nresults concerning the moments of the sum of digits function over\nprimes have been established by K\\'atai\n\\cite{katai1977:sum_digits_primes}.\n\nLet $\\pi(x)$ stand for the number of primes less than or equal to\n$x$. Then adapting these ideas to our method we obtain the following\n\\begin{thm}\\label{thm:summatoryfun}\nLet $f$ be a pseudo-polynomial as in (\\ref{mani:pseudopoly}). Then \n\\[\n\\sum_{p\\leq P}s_q(\\left\\lfloor f(p)\\right\\rfloor)=\\frac{q-1}2\\pi(P)\\log_qP^\\beta+\\mathcal{O}(\\pi(P)),\n\\]\nwhere the sum runs over the primes and the implicit $\\mathcal{O}$-constant may\ndepend on $q$ and $\\beta$.\n\\end{thm}\n\n\\begin{rem}\nWith simple modifications Theorem \\ref{thm:summatoryfun} can be extended to\ncompletely $q$-additive functions replacing $s_q$.\n\\end{rem}\n\n\nThe proof of the two theorems is divided into four parts. In the\nfollowing section we rewrite both statements in order to obtain as a\ncommon base the central theorem -- Theorem \\ref{mani:centralthm}. In\nSection~\\ref{sec:proof-prop-refm1} we start with the proof of this\ncentral theorem by using an indicator function and its Fourier\nseries. These series contain exponential sums which we treat by\ndifferent methods (with respect to the position in the expansion) in\nSection \\ref{sec:expon-sum-estim}. Finally, in\nSection~\\ref{sec:proof-prop-refm2} we put the estimates together in\norder to prove the central theorem and therefore our two statements.\n\n\\section{Preliminaries}\\label{sec:preliminaries}\n\nThroughout the rest $p$ will always denote a prime. The implicit\nconstant of $\\ll$ and $\\mathcal{O}$ may depend on the\npseudo-polynomial $f$ and on the parameter\n$\\varepsilon>0$. Furthermore we fix a block $d_1\\cdots d_\\ell$ of\nlength $\\ell$ and $N$, the number of digits we consider.\n\nIn the first step we want to know in the\nexpansion of which prime the $N$-th digit occurs.
This can be seen as\nthe translation from the digital world to the world of blocks. To this\nend let $\\ell(m)$ denote the length of the $q$-ary\nexpansion of an integer $m$. Then we define an integer $P$ by\n\\begin{gather*}\n\\sum_{p\\leq P-1}\\ell\\left(\\lfloor f(p)\\rfloor\\right) P^{\\gamma}.\n \\end{cases}\n\\end{gather*}\nEstimating all summands with $\\lvert\\nu\\rvert>P^{\\gamma}$ trivially we get\n\\begin{gather*}\n\\sum_{\\substack{\\nu=-\\infty\\\\\\nu\\neq0}}^\\infty\n A_\\pm(\\nu)e\\left(\\frac{\\nu}{q^j}f(p)\\right)\n\\ll\\sum_{\\nu=1}^{P^{\\gamma}}\\nu^{-1}e\\left(\\frac{\\nu}{q^j}f(p)\\right)+P^{-\\gamma}.\n\\end{gather*}\nUsing this in \\eqref{mani:0.5} yields\n\\begin{gather*}\n\\lvert\\sum_{p\\leq P}\\mathcal{I}(q^{-j}f(p))-\\frac{\\pi(P)}{q^{\\ell}}\\rvert\n\\ll\\pi(P)P^{-\\gamma}+\\sum_{\\nu=1}^{P^{\\gamma}}\n\\nu^{-1}S(P,j,\\nu),\n\\end{gather*}\nwhere we have set \n\\begin{gather}\\label{S_Pjnu}\nS(P,j,\\nu):=\\sum_{p\\leq P}e\\left(\\frac{\\nu}{q^j}f(p)\\right).\n\\end{gather}\n\n\\section{Exponential sum estimates}\\label{sec:expon-sum-estim}\nIn the present section we will focus on the estimation of the sum\n$S(P,j,\\nu)$ for different ranges of $j$. Since $j$ describes the\nposition within the $q$-ary expansion of $f(p)$ we will call these\nranges the ``most significant digits'', the ``least significant\ndigits'' and the ``digits in the middle'', respectively.\n\nNow, if $\\theta_r>k\\geq0$, \\textit{i.e.} the leading coefficient of $f$ originates\nfrom the pseudo-polynomial part $g$, then we consider the two ranges\n$$1\\leq q^j\\leq P^{\\theta_r-\\rho}\\quad\\text{and}\\quad\nP^{\\theta_r-\\rho}\\theta_r>0$, meaning that the leading coefficient of\n$f$ originates from the polynomial part $h$, then we have an additional\npart. In particular, in this case we will consider the three ranges\n$$1\\leq q^j\\leq P^{\\theta_r-\\rho},\\quad\nP^{\\theta_r-\\rho}0$.
Then\n$$S(P,j,\\nu)\\ll\\frac1{\\log P}\\Lambda^{-\\frac1k}+\\frac{P}{(\\log P)^G}.$$\n\\end{prop}\n\nThe main idea of the proof is to use Riemann-Stieltjes integration\ntogether with \n\\begin{lem}[{\\cite[Lemma 8.10]{iwaniec_kowalski2004:analytic_number_theory}}]\n\\label{ik:lem8.10}\nLet $F\\colon[a,b]\\to\\mathbb{R}$ and suppose that for some $k\\geq1$\nwe have $\\lvert F^{(k)}(x)\\rvert\\geq\\Lambda$ for any $x$ on $[a,b]$\nwith $\\Lambda>0$. Then\n\\[\n\\lvert\\int_a^be(F(x))\\mathrm{d}x\\rvert\n\\leq k2^k\\Lambda^{-1\/k}.\n\\]\n\\end{lem}\n\n\n\\begin{proof}[Proof of Proposition \\ref{prop:most_significant}]\nWe rewrite the sum into a Riemann-Stieltjes integral:\n\\begin{align*}\nS(P,j,\\nu)=\\sum_{p\\leq P}e\\left(\\frac{\\nu}{q^j}f(p)\\right)\n=\\int_{2}^{P}e\\left(\\frac{\\nu}{q^j}f(t)\\right)\\mathrm{d}\\pi(t)+\\mathcal{O}(1).\n\\end{align*}\nThen we apply the prime number theorem in the form\n\\eqref{pnt} to gain the usual integral back. Thus\n\\begin{align*}\nS(P,j,\\nu)\n=\\int_{P(\\log P)^{-G}}^{P}\ne\\left(\\frac{\\nu}{q^j}f(t)\\right)\n\\frac{\\mathrm{d}t}{\\log t}\n+\\mathcal{O}\\left(\\frac{P}{(\\log P)^G}\\right).\n\\end{align*}\nNow we use the second mean-value theorem to get\n\\begin{equation}\\label{mani:res:most}\n\\begin{split}\nS(P,j,\\nu)\\ll\\frac1{\\log P}\\sup_{\\xi}\n \\lvert\\int_{P(\\log P)^{-G}}^{\\xi}e\\left(\\frac{\\nu}{q^j}f(t)\\right)\\mathrm{d}t\\rvert\n +\\frac{P}{(\\log P)^G}.\n\\end{split}\n\\end{equation}\nFinally an application of Lemma \\ref{ik:lem8.10} proves the proposition.\n\\end{proof}\n\n\\subsection{Least significant digits} Now we turn our attention to the\nlowest range of $j$. In particular, the goal is the proof of the\nfollowing\n\\begin{prop}\\label{prop:least_significant}\nLet $P$ and $\\rho$ be positive reals and $f$ be a pseudo-polynomial as in\n\\eqref{pseudo:poly:split}.
If $j$ is such that\n\\begin{gather}\\label{mani:gammarange}\n 1\\leq q^j\\leq P^{\\theta_r-\\rho}\n\\end{gather}\nholds, then for $1\\leq\\nu\\leq P^\\gamma$ there exists $\\eta>0$\n(depending only on $f$ and $\\rho$) such that\n\\begin{gather*}\n S(P,j,\\nu)\\ll(\\log P)^8P^{1-\\eta}.\n\\end{gather*}\n\\end{prop}\n\n\nBefore we launch into the proof we collect some tools that will be\nnecessary in the sequel. A standard idea for estimating exponential\nsums over the primes is to rewrite them into ordinary exponential sums\nover the integers having von Mangoldt's function as weights and then\nto apply Vaughan's identity. We denote by\n\\[\n\\Lambda(n)=\\begin{cases}\n\\log p,&\\text{if $n=p^k$ for some prime $p$ and an integer $k\\geq1$;}\\\\\n0,&\\text{otherwise}.\n\\end{cases}\n\\]\nvon Mangoldt's function. For the rewriting process we use the following\n\\begin{lem}\n\\label{mr:lem11}\nLet $g$ be a function such that $\\lvert g(n)\\rvert\\leq 1$ for all\nintegers $n$. Then\n\\[\n\\lvert\\sum_{p\\leq P}g(p)\\rvert\\ll\\frac1{\\log P}\\max_{t\\leq P}\n\\lvert\\sum_{n\\leq t}\\Lambda(n)g(n)\\rvert+\\mathcal{O}(\\sqrt{P}).\n\\]\n\\end{lem}\n\n\\begin{proof}\nThis is Lemma 11 of \\cite{mauduit_rivat2010:sur_un_probleme}. However,\nthe proof is short and we will need part of it later.\n\nWe start with a summation by parts yielding\n\\[\\sum_{p\\leq P}g(p)=\\frac1{\\log P}\\sum_{p\\leq\n P}\\log(p)g(p)+\\int_2^P\\left(\\sum_{p\\leq\n t}\\log(p)g(p)\\right)\\frac{\\mathrm{d}t}{t(\\log t)^2}.\\]\n\nNow we cut the integral at $\\sqrt{P}$ and use Chebyshev's inequality\n(\\textit{cf.} \\cite[Th\\'eor\\`eme 1.3]{tenenbaum1995:introduction_la_theorie})\nin the form $\\sum_{p\\leq\n t}\\log(p)\\leq\\log(t)\\pi(t)\\ll t$ for the lower part.
Thus\n\\begin{align*}\n\\lvert\\sum_{p\\leq P}g(p)\\rvert\n&\\leq\\left(\\frac1{\\log\n P}+\\int_{\\sqrt{P}}^P\\frac{\\mathrm{d}t}{t(\\log t)^2}\\right)\n \\max_{\\sqrt{P} Z, \\frac{X}{x} < y < \\frac{2X}{x}} e\\left(\\frac{\\nu}{q^j}(g(xy)+h(xy))\\right)\\\\\nS_2&=\\sum_{\\frac XVZ}}e\\left(\\frac{\\nu}{q^j}(g(xy)+h(xy))\\right)\\rvert.\n\\end{align*}\nFor estimating the inner sum we fix $x$ and denote $Y=\\frac Xx$. Since\n$\\theta_r\\not\\in\\mathbb{Z}$ and $\\theta_r>k\\geq0$, we have that\n\\[\\lvert\\frac{\\partial^\\ell g(xy)}{\\partial y^\\ell}\\rvert\n\\asymp X^{\\theta_r}Y^{-\\ell}.\\]\n\nNow on the one hand, since $q^j\\leq P^{\\theta_r-\\rho}$, we have $\\nu\nq^{-j}X^{\\theta_r}\\gg X^{\\rho}$. On the other hand for\n$\\ell\\geq5(\\lfloor\\theta_r\\rfloor+1)$ we get\n\\[\\frac{\\nu}{q^j}X^{\\theta_r}Y^{-\\ell}\\leq P^\\gamma X^{\\theta_r-\\frac25\\ell}\\ll X^{-\\frac12}.\\] \n\nThus an application of Lemma \\ref{bkmst:lem25} yields the following\nestimate:\n\\begin{equation}\\label{mani:estim:S1}\n\\begin{split}\n\\lvert S_1\\rvert &\\ll X^{\\varepsilon}\\sum_{x \\leq 2X\/Z} Y \\left[\n Y^{-\\frac{1}{K}} + (\\log Y)^kX^{-\\frac{\\rho}{K}} + X^{-\\frac{1}{2} \\frac{1}{4K \\cdot 8L^5 - 2K}} \\right] \\\\\n&\\ll X^{1+\\varepsilon}(\\log X)\\left(X^{-\\rho} + X^{-\\frac{1}{64L^5-4} } \\right)^{\\frac1K}, \n\\end{split}\\end{equation}\nwhere we have used that $\\frac kK<1$ and $\\rho<\\frac13$.\n\nFor the second sum $S_2$ we start by splitting the interval $(\n\\frac{X}{V} , \\frac{2X}{U} ]$ into $\\leq \\log X$ subintervals of the\nform $(X_1, 2X_1]$. 
Thus\n\\begin{align*}\n\\lvert S_2\\rvert\n&\\leq (\\log X)X^{\\varepsilon}\\sum_{X_10$ and $H\\leq X$\n$$\\sum_{X0$,\n$(a,b)=1$,\n\\[1\\leq b\\leq H^{k-\\rho}\n\\quad\\text{and}\\quad\n\\lvert\\frac{\\nu\\alpha_k}{q^j}-\\frac ab\\rvert\\leq\n\\frac{H^{\\rho-k}}{b}.\\]\nWe distinguish three cases according to the size of $b$.\n\\begin{itemize}\n\\item[] \\textbf{Case 1.} $H^\\rhok-1+2\\rho\\geq\\theta_r\\] yielding a\n contradiction.\n \\item[] \\textbf{Case 3.2.2}\n $P^{1-\\theta_r}\\lvert\\nu\\rvert^{-1}q^j>X$. Then $H=X\\geq\n P^{1-2\\rho}$ and \\eqref{case3.2}\n becomes \\[P^{k-1+\\rho}\\geq\\lvert\\nu\\alpha_k\\rvert\n P^{(1-2\\rho)(k-\\rho)}\\] yielding a similar contradiction as in\n \\textbf{Case 3.2.1}.\n \\end{itemize}\n \\end{itemize}\n\\end{itemize}\nTherefore \\textbf{Case 1} is the only possible and we may always apply\nLemma \\ref{lem:exponential_sum_primes_poly} together with\n\\eqref{mani:log_Mangoldt_equivalence}. Plugging this\ninto~\\eqref{mani:eq_1} yields\n\\begin{align*}\n\\sum_{X< n\\leq X+H}\\Lambda(n)\ne\\left(\\frac{\\nu}{q^j}(g(n)+h(n))\\right)\n&\\ll H^{1-\\frac{\\rho}{4^{k-1}}+\\varepsilon}\\left(1+\\sum_{X< n\\leq X+H}\\left|\\varphi(n)-\\varphi(n+1)\\right|\\right)\n\\end{align*}\n\nNow by our choice of $H$ together with an application of the mean\nvalue theorem we have that\n$$\\sum_{X\\leq n\\leq X+H}\\lvert \\varphi(n)-\\varphi(n+1)\\rvert\n\\ll H\\frac{\\nu}{q^j}P^{\\theta-1}\\ll 1.$$\nThus \n\\begin{align*}\n\\sum_{X\\leq n\\leq X+H} \\Lambda(n) \ne\\left(\\frac{\\nu}{q^j}(g(n)+h(n))\\right)\n\\ll H^{1-\\frac{\\rho}{4^{k-1}}+\\varepsilon}.\n\\end{align*}\n\n\\end{proof}\n\n\\section{Proof of Theorem \\ref{mani:centralthm}, Part II}\\label{sec:proof-prop-refm2}\nNow we use all the tools from the section above in order to estimate\n\n\\begin{gather}\\label{distance_from_mean}\n\\sum_{j=\\ell}^J\\lvert\\sum_{p\\leq 
P}\\mathcal{I}(q^{-j}f(p))-\\frac{\\pi(P)}{q^{\\ell}}\\rvert\n\\ll\\pi(P)H^{-1}J+\\sum_{\\nu=1}^{H}\n\\nu^{-1}\\sum_{j=\\ell}^JS(P,j,\\nu).\n\\end{gather}\n\nAs indicated in the section above, we split the sum over $j$ into two\nor three parts according to whether $\\theta_r>k$ or not. In any case\nan application of Proposition \\ref{prop:least_significant} yields for the\nleast significant digits that\n\\begin{gather}\\label{estimate:least}\n\\sum_{1\\leq \\nu\\leq P^\\gamma}\\nu^{-1}\\sum_{1\\leq q^{j}\\leq\n P^{\\theta_r-\\rho}} S(P,j,\\nu)\n\\ll (\\log P)^9JP^{1-\\eta}.\n\\end{gather}\n\nNow let us suppose that $\\theta_r>k$. Then an application of Proposition\n\\ref{prop:most_significant} yields\n\\begin{equation}\\label{estimate:most_non_integer}\n\\begin{split}\n\\sum_{1\\leq \\nu\\leq P^\\gamma}\\nu^{-1}&\\sum_{P^{\\theta_r-\\rho}< q^{j}\\leq\n P^{\\theta_r}}S(P,j,\\nu)\\\\\n&\\ll \\sum_{1\\leq \\nu\\leq P^\\gamma}\\nu^{-1}\\sum_{P^{\\theta_r-\\rho}< q^{j}\\leq\n P^{\\theta_r}}\\frac1{\\log\n P}\\left(\\frac{\\nu}{q^j}\\right)^{-\\frac1{\\left\\lfloor\\theta_r\\right\\rfloor}}+\\frac{P}{(\\log P)^{G-2}}\\\\\n&\\ll \\frac{P}{\\log P}.\n\\end{split}\n\\end{equation}\n\nPlugging the estimates \\eqref{estimate:least} and\n\\eqref{estimate:most_non_integer} into~\\eqref{distance_from_mean} we\nget that\n$$\\sum_{j=\\ell}^J\\lvert\\sum_{p\\leq P}\\mathcal{I}(q^{-j}f(p))-\\frac{\\pi(P)}{q^{\\ell}}\\rvert\n\\ll\\frac{P}{\\log P},$$\nwhich together with \\eqref{mani:NthetatoNstar} proves Theorem\n\\ref{mani:centralthm} in the case that $\\theta_r>k$.\n\nOn the other side if $\\theta_r0$, for which Pontryagin spaces are\nrequired, appears to be new. Our methods allow the entire class of\nGPI Hamiltonians to be constructed, along with their spectral\nrepresentations. A particularly interesting subclass of the models\nconstructed corresponds to the case $L=\\infty$, with scattering theory\n$\\cot\\delta_0(k)=kM$. 
Such models reproduce the leading order\nbehaviour of non-point interactions exhibiting a zero energy\nresonance. We refer to these models as {\\em resonance point\ninteractions} (RPI).\n\nWe also discuss how these GPI models may be used as models for\nSchr\\\"{o}dinger operators with spherically symmetric potentials of\ncompact support. To do this, we employ a general methodology for\ndiscussing the `large scale effects of small objects' developed by\nKay and the author \\cite{KF}. In particular, we develop {\\em fitting\nformulae} (analogous to those given in \\cite{KF}) for matching a\ngiven potential $V(r)$ to the `best fit' GPI model. Finally, in\nSection 6, we conclude by discussing various extensions to our\nmethod.\n\nThe motivation for the present work arose in a consideration of the\nscattering of charged particles off magnetic flux tubes of small\nradius \\cite{FK}, in which it was found that the scattering lengths\nfor spin-$\\frac{1}{2}$ particles generically take the values $0$ or\n$\\infty$ in certain angular momentum sectors. In consequence, the\nanalogue of PI models representing dynamics in\nthe background of an infinitesimally thin wire of flux fails to\ndescribe the leading order scattering theory in these sectors, and\nshould be replaced by models analogous to the RPI models mentioned\nabove. The special nature of this system can be attributed to the\nfact that it is an example of supersymmetric quantum mechanics.\nElsewhere \\cite{F}, we will construct the appropriate class of RPI\nfor this system.\n\n\n\\sect{Preliminaries}\n\\subsection{Unitary Dilations}\n\nWe begin by describing the unitary dilation theory required in the\nsequel. Let ${\\cal H}_1,\\ldots,{\\cal H}_4$ be Hilbert spaces and\n$T\\in{\\cal L}({\\cal H}_1,{\\cal H}_2)$. 
Then\n$\\hat{T}\\in{\\cal L}({\\cal H}_1\\oplus{\\cal H}_3,{\\cal H}_2\\oplus{\\cal H}_4)$ is called a\n{\\em dilation} of $T$ if\n$T= P_{{\\cal H}_2}\\hat{T}|_{{\\cal H}_1}$\nwhere $P_{{\\cal H}_2}$ is the orthogonal projector onto ${\\cal H}_2$. In\nblock matrix form, $\\hat{T}$ takes form\n\\begin{equation}\n\\hat{T} = \\left(\\begin{array}{cc} T & P \\\\ Q & R \\end{array}\\right).\n\\label{eq:dilfm}\n\\end{equation}\nOur nomenclature follows that of Halmos \\cite{Halmos}. Elsewhere\n(e.g., in the work of Davis \\cite{Davis}), the term `dilation' (or\n`dilatation') often means that $\\hat{T}^n$ is a dilation of $T^n$\nand $(\\hat{T}^*)^n$ is a dilation of $(T^*)^n$ for each $n=1,2,\\ldots$\n(in addition, ${\\cal H}_1={\\cal H}_2$, and ${\\cal H}_3={\\cal H}_4$). We refer to such\noperators as {\\em power dilations}: in the block\nform~(\\ref{eq:dilfm}), this requires $PR^nQ=0$ for each\n$n=0,1,2,\\ldots$.\n\nAccording to a result of Sz.-Nagy \\cite{Nagy}, any contraction $T$\nfrom one Hilbert space to another (i.e., a bounded operator\nsatisfying $\\|T\\|\\le 1$) has a unitary dilation between larger\nHilbert spaces. Subsequently, Davis \\cite{Davis} extended this\nresult to arbitrary closed densely defined operators at the\ncost of introducing indefinite inner product spaces. (It is clear\nthat if $\\|T\\|>1$, no Hilbert space unitary dilation is possible.)\nIn fact, Davis' construction yields a unitary {\\em power} dilation\nof the original operator. This has no physical relevance in our\nconstruction, and so we use a more economical `cut-down' version of\nDavis' result, described below. First, we briefly review the salient\nfeatures of analysis in indefinite inner product spaces. Full\ntreatments can be found in the monographs of Bogn\\'ar \\cite{Bognar}\nand Azizov and Iokhvidov~\\cite{Azizov}.\n\nWe employ a particular class of indefinite inner product\nspaces known as {\\em $J$-spaces}. 
Let ${\\cal H}$ be a Hilbert space with\n(positive definite) inner product $\\inner{\\cdot}{\\cdot}$,\nequipped with a unitary involution, $J$. We define a non-degenerate\nindefinite inner product $[\\cdot,\\cdot]$ on ${\\cal H}$ by\n\\begin{equation}\n[x,y]=\\inner{x}{Jy},\n\\end{equation}\nwhich we call the {\\em $J$-inner product}. ${\\cal H}$ equipped with the\n$J$-inner product is called a $J$-space. ${\\cal H}$ admits decomposition\n${\\cal H}={\\cal H}_+\\oplus{\\cal H}_-={\\cal H}_+[+]{\\cal H}_-$ into the eigenspaces ${\\cal H}_\\pm$\nof $J$ with eigenvalue $\\pm 1$, where $[+]$ denotes the orthogonal\ndirect sum in the $J$-inner product. If at least one of the\n${\\cal H}_\\pm$ is finite dimensional, then ${\\cal H}$ is a {\\em Pontryagin\nspace} with respect to $[\\cdot,\\cdot]$ .\n\nThe topology of a $J$-space is determined by the Hilbert space norm;\nhowever, operator adjoints and the notion of unitarity are defined\nrelative to the $J$-inner product. Thus if ${\\cal H}_i$ ($i=1,2$) are\n$J_i$-spaces, and $T\\in{\\cal L}({\\cal H}_1,{\\cal H}_2)$, the {\\em\n$(J_1,J_2)$-adjoint} $T^\\dagger$ of $T$ is defined in terms of the\nHilbert space adjoint $T^*$ by\n\\begin{equation}\nT^\\dagger = J_1T^*J_2.\n\\end{equation}\nEquivalently, $[T^\\dagger x,y]_{{\\cal H}_1}=[x,Ty]_{{\\cal H}_2}$ for all\n$x\\in{\\cal H}_2$, $y\\in{\\cal H}_1$. If $[Ux,Uy]_{{\\cal H}_2}=[x,y]_{{\\cal H}_1}$ for all\n$x,y\\in {\\cal D}\\subset{\\cal H}_1$, $U$ is said to be {\\em\n$(J_1,J_2)$-isometric}; if in addition $U$ is a linear isomorphism of\n${\\cal H}_1$ and ${\\cal H}_2$, and ${\\cal D}={\\cal H}_1$, $U$ is said to be {\\em\n$(J_1,J_2)$-unitary}. Equivalently, $UU^\\dagger = \\leavevmode\\hbox{\\small1\\kern-3.8pt\\normalsize1}_{{\\cal H}_1}$ and\n$U^\\dagger U = \\leavevmode\\hbox{\\small1\\kern-3.8pt\\normalsize1}_{{\\cal H}_2}$. 
If ${\\cal H}_1={\\cal H}_2$ with $J_1=J_2=J$, terms\nsuch as $(J_1,J_2)$-isometric are abbreviated to $J$-isometric etc.\n\nReturning to the construction of unitary dilations, let $T$ be any\nbounded operator $T\\in{\\cal L}({\\cal H}_1,{\\cal H}_2)$, and define operators\n$M_1 = \\leavevmode\\hbox{\\small1\\kern-3.8pt\\normalsize1}-TT^*$ and $M_2=\\leavevmode\\hbox{\\small1\\kern-3.8pt\\normalsize1}-T^*T$. It is trivial to show that the\nrespective closures ${\\cal M}_i=\\overline{{\\rm Ran}\\, M_i}$ of their ranges are\n${\\rm sgn}\\, (M_i)$-spaces, and hence that ${\\cal K}_i={\\cal H}_i\\oplus{\\cal M}_i$ are\n$J_i$-spaces, where $J_i=\\leavevmode\\hbox{\\small1\\kern-3.8pt\\normalsize1}_{{\\cal H}_i}\\oplus{\\rm sgn}\\,(M_i)$. We now define\na dilation $\\hat{T}$ of $T$ by\n\\begin{equation}\n\\hat{T} = \\left(\\begin{array}{cc} T & -{\\rm sgn}\\,(M_1)|M_1|^{1\/2} \\\\\n |M_2|^{1\/2} & T^*|_{{\\cal M}_1}\n \\end{array} \\right),\n\\end{equation}\nwhich has $(J_1,J_2)$-adjoint $\\hat{T}^\\dagger$ equal to\n\\begin{equation}\n\\hat{T}^\\dagger = J_1\\hat{T}^*J_2=\n \\left(\\begin{array}{cc}\n T^* & {\\rm sgn}\\, (M_2) |M_2|^{1\/2} \\\\\n - |M_1|^{1\/2} & T|_{{\\cal M}_2}\n \\end{array} \\right).\n\\end{equation}\nHere, we have used the intertwining relations $Tf(T^*T)=f(TT^*)T$ and\n$T^*f(TT^*)=f(T^*T)T^*$, which hold for any continuous Borel function\n$f$. It is now easy to show that $\\hat{T}^\\dagger\\hat{T}=\\leavevmode\\hbox{\\small1\\kern-3.8pt\\normalsize1}_{{\\cal K}_1}$\nand $\\hat{T}\\hat{T}^\\dagger=\\leavevmode\\hbox{\\small1\\kern-3.8pt\\normalsize1}_{{\\cal K}_2}$, thus verifying that\n$\\hat{T}$ is a $(J_1,J_2)$-unitary dilation of $T$. In our\napplication, $M_1$ and $M_2$ are finite rank, and so the $J$-spaces\nconstructed above are Pontryagin spaces.\n\nWe briefly consider the uniqueness of the unitary dilations\nconstructed above. 
Suppose ${\\cal N}_i$ are $J_i$\nspaces ($i=1,2$) and that $\\tilde{T}:{\\cal H}_1\\oplus{\\cal N}_1 \\rightarrow\n{\\cal H}_2\\oplus{\\cal N}_2$ is a unitary dilation of $T$ with matrix\nform~(\\ref{eq:dilfm}). Then, provided that the $M_i$ are finite\nrank, one may show that\n\\begin{equation}\nP_{{\\cal H}_2\\oplus {\\cal Q}}\n\\tilde{T}|_{{\\cal H}_1\\oplus {\\cal P}}=\n\\left(\\begin{array}{cc} \\leavevmode\\hbox{\\small1\\kern-3.8pt\\normalsize1} & 0 \\\\ 0 & U_2 \\end{array}\\right)\n\\hat{T}\n\\left(\\begin{array}{cc} \\leavevmode\\hbox{\\small1\\kern-3.8pt\\normalsize1} & 0 \\\\ 0 & U_1^\\dagger\n\\end{array}\\right),\n\\end{equation}\nwhere ${\\cal P}=P^\\dagger\\overline{{\\rm Ran}\\, M_1}$, ${\\cal Q}=Q\\overline{{\\rm Ran}\\,\nM_1}$, and $U_1$ and $U_2$ are unitaries (with respect to the\n$J$-inner products) from ${\\cal M}_1$ and ${\\cal M}_2$ to ${\\cal P}$ and ${\\cal Q}$\nrespectively. In addition, $P_{{\\cal H}_2\\oplus {\\cal Q}}$ is an orthogonal\nprojection onto ${\\cal H}_2\\oplus{\\cal Q}$ in ${\\cal H}_2\\oplus{\\cal N}_2$.\n\nThus $\\hat{T}$ is unique up to further dilation and unitary\nequivalence of the above form. If the $M_i$ are not of finite rank,\nthis statement also holds if the $M_i$ are strictly positive. More\ngenerally, it is not clear whether ${\\cal Q}$ is necessarily\northocomplemented, and therefore whether $P_{{\\cal H}_2\\oplus {\\cal Q}}$ exists.\n\n\\subsection{Abstract Setting}\n\\label{sect:abs}\n\nIn this section, we sketch our construction in a general setting,\nwhich makes clear how it may be extended. In particular, we show how\nthe domain and action of the Hamiltonian is determined.\n\nLet ${\\cal H}_i$ ($i=1,2$) be Hilbert spaces and let $A$ be a densely\ndefined symmetric operator with domain ${\\cal D}\\subset {\\cal H}_1$. 
Suppose\nthat $A$ possesses two self-adjoint extensions $A_\\pm$ such that\n\\begin{equation}\nA_\\pm = {\\cal T}_\\pm^* \\tilde{A}{\\cal T}_\\pm\n\\end{equation}\nwhere $\\tilde{A}$ is a self-adjoint operator on ${\\cal H}_2$ with\n$(\\tilde{A}+\\omega)^{-1}$ bounded for some $\\omega\\in{\\rm I\\! R}$, and\n${\\cal T}_\\pm$ are unitary operators ${\\cal T}_\\pm :{\\cal H}_1\\rightarrow{\\cal H}_2$. Let\n$a_+$ and $a_-$ be bounded operators on ${\\cal H}_2$ which commute with\n$\\tilde{A}$ and define\n\\begin{equation}\n{\\cal T} = a_+{\\cal T}_+ + a_-{\\cal T}_-.\n\\end{equation}\nIn our application, $a_\\pm$ are determined by the scattering\ndata. We define $M_1$ and $M_2$ as above, for simplicity\nassuming that they are finite rank (as they are in our application).\nThe unitary dilation $\\hat{{\\cal T}}$ derived above is then used to\ndefine a self-adjoint operator $B$ on the Pontryagin space $\\Pi_1\n={\\cal H}_1\\oplus{\\cal M}_1$ by\n\\begin{equation}\nB = \\hat{{\\cal T}}^\\dagger\n\\left(\\begin{array}{cc} \\tilde{A} & 0 \\\\ 0 & \\Lambda \\end{array}\n\\right)\n\\hat{{\\cal T}},\n\\end{equation}\nwhere $\\Lambda$ is a self-adjoint operator on ${\\cal M}_2$ (with respect\nto its inner product). Thus\n\\begin{equation}\nB\\left(\\begin{array}{c} \\varphi\\\\ \\Phi\\end{array}\\right)=\n\\left(\\begin{array}{c}\n{\\cal T}^*\\tilde{A}({\\cal T}\\varphi-\\Theta)\n+{\\rm sgn}\\, M_2|M_2|^{1\/2}\\Lambda(|M_2|^{1\/2}\\varphi+{\\cal T}^*|_{{\\cal M}_1}\\Phi) \\\\\n-|M_1|^{1\/2}\\tilde{A}({\\cal T}\\varphi-\\Theta)\n+{\\cal T}|_{{\\cal M}_2}\\Lambda (|M_2|^{1\/2}\\varphi+{\\cal T}^*|_{{\\cal M}_1}\\Phi)\n\\end{array}\\right),\n\\label{eq:actB}\n\\end{equation}\nwhere $\\Theta={\\rm sgn}\\, M_1 |M_1|^{1\/2}\\Phi$ (considered as an element of\n${\\cal H}_2$), and $B$ has domain\n\\begin{equation}\nD(B) = \\{ (\\varphi,\\Phi)^T\\mid {\\cal T}\\varphi-\\Theta\n\\in D(\\tilde{A})\\}. 
\\label{eq:domB}\n\\end{equation}\n{}To gain a more explicit description of $D(B)$, we impose the\nrequirement that $B$ be a self-adjoint extension of the\n{\\em non-densely defined} operator $A\\oplus 0$ on ${\\cal D}\\oplus\n0\\subset\\Pi_1$, i.e., $B(\\varphi,0)^T=(A\\varphi,0)^T$ for all\n$\\varphi\\in{\\cal D}$. Later this will carry the physical interpretation of\na locality condition. It is easy to show that this requirement is\nsatisfied if and only if ${\\cal M}_2$ is invariant under $A^*$ and\n\\begin{equation}\n\\Lambda=(|M_2|^{-1\/2}A^*|_{{\\cal M}_2}|M_2|^{1\/2})^*.\n\\end{equation}\n\nAs a consequence of locality, we note that if $(\\varphi,\\Phi)^T\\in\nD(B)$ with $B (\\varphi,\\Phi)^T=(\\tilde{\\varphi},\\tilde{\\Phi})^T$,\nthen $\\varphi\\in D(A^*)$, and $\\tilde{\\varphi}=A^*\\varphi$. For take\nany $\\psi\\in {\\cal D}$. Then\n\\begin{equation}\n\\inner{\\tilde{\\varphi}}{\\psi}_{{\\cal H}_1} =\n\\left[\n\\left(\\begin{array}{c}\\tilde{\\varphi}\\\\\n\\tilde{\\Phi}\\end{array}\\right), \\left(\\begin{array}{c}\\psi\\\\\n0\\end{array}\\right)\\right]_{\\Pi_1} =\n\\left[\n\\left(\\begin{array}{c}\\varphi\\\\ \\Phi\\end{array}\\right),\nB\\left(\\begin{array}{c}\\psi\\\\ 0\\end{array}\\right)\n\\right]_{\\Pi_1}= \\inner{\\varphi}{A\\psi}_{{\\cal H}_1}.\n\\end{equation}\nWe may therefore re-write~(\\ref{eq:domB}) as\n\\begin{equation}\nD(B)=\\left\\{ \\left(\\begin{array}{c} \\varphi \\\\ \\Phi \\end{array}\n\\right) \\mid \\varphi\\in D(A^*), \\quad \\Theta_1 \\in D(\\tilde{A})\n\\right\\},\n\\end{equation}\nwhere $\\Theta_1 =a_+{\\cal T}_+\\chi_+ + a_-{\\cal T}_-\\chi_-+\\Theta$ and\n$\\chi_\\pm=(A_\\pm+\\omega)^{-1}(A^*+\\omega)\\varphi-\\varphi$. The\nadvantage of this expression is that $\\chi_\\pm$ can be shown to be the\nunique element of $\\ker (A^*+\\omega)$ such that $\\varphi+\\chi_\\pm\\in\nD(A_\\pm)$. 
In our application, $\\chi_\\pm$ may be expressed in terms\nof the value of $\\varphi$ and its first derivative at the origin.\n\n{}To determine the action of $B$ more explicitly, we use the fact that\nthe upper component of the right-hand side of~(\\ref{eq:actB}) is\nequal to $A^*\\varphi$ in order to compute $\\tilde{\\Theta}={\\rm sgn}\\,\nM_1|M_1|^{1\/2} \\tilde{\\Phi}$. We obtain\n\\begin{equation}\n\\tilde{\\Theta}=-M_1\\tilde{A}({\\cal T}\\varphi-\\Theta)+\n{\\cal T}(A^*\\varphi-{\\cal T}^*\\tilde{A}({\\cal T}\\varphi-\\Theta))=\n{\\cal T} A^*\\varphi-\\tilde{A}({\\cal T}\\varphi-\\Theta)\n\\end{equation}\nUsing the fact that $\\Theta_1\\in D(\\tilde{A})$, this becomes\n\\begin{equation}\n\\tilde{\\Theta}=\\tilde{A}\\Theta_1 +\\omega(\\Theta_1-\\Theta) +\n{\\cal T}(A^*+\\omega)\\varphi-(\\tilde{A}+\\omega)\n({\\cal T}\\varphi+\\Theta_1-\\Theta).\n\\end{equation}\nThe last two terms cancel by definition of $\\chi_\\pm$ and we\nconclude that\n\\begin{equation}\nB\\left(\\begin{array}{c}\\varphi\\\\ \\Phi\\end{array}\\right)=\n\\left(\\begin{array}{c} A^*\\varphi \\\\\n({\\rm sgn}\\, M_1 |M_1|^{1\/2})^{-1}\\tilde{\\Theta}\n\\end{array}\\right)\n\\end{equation}\nwhere $\\tilde{\\Theta}=\\tilde{A}\\Theta_1 + \\omega(a_+{\\cal T}_+\\chi_+ +\na_-{\\cal T}_-\\chi_-)$.\n\n\\sect{Determination of $M_1$ and $M_2$}\n\nIn this section, we determine the operators $M_1=\\leavevmode\\hbox{\\small1\\kern-3.8pt\\normalsize1}-{\\cal T}\\Tt^*$ and\n$M_2=\\leavevmode\\hbox{\\small1\\kern-3.8pt\\normalsize1}-{\\cal T}^*{\\cal T}$, where ${\\cal T}$ is an integral transformation\narising from the scattering data in the Shondin $R$ class\n\\cite{Shond1} given by\n\\begin{equation}\n\\cot\\delta_0(k) = k^{-1}\\frac{p(k^2)}{q(k^2)},\\qquad\n\\delta_\\ell(k)\\equiv 0\\quad {\\rm for}~\\ell\\ge 1,\n\\label{eq:GPIlow}\n\\end{equation}\nwhere $p(z)$ and $q(z)$ are coprime polynomials in ${\\rm I\\! R}[z]$, the\nring of polynomials with real coefficients. 
In particular, we will\nshow how the rank and signature of the $M_i$ are determined by two\n`Levinson indices' defined below. We emphasise that our methods are\nvery different to those of Shondin.\n\nThe scattering amplitude corresponding to $\\delta_0(k)$ is\n\\begin{equation}\nf_0(k) = \\frac{1}{k}e^{i\\delta_0(k)}\\sin\\delta_0(k)=\n\\frac{q(k^2)}{p(k^2) - ikq(k^2)}.\n\\end{equation}\nDefining the polynomial $W(z)$ by\n\\begin{equation}\nW(z) =\\left\\{\\begin{array}{cl} p(-z^2)-zq(-z^2) & p(0)\\not=0 \\\\\n p(-z^2)\/z-q(-z^2) & p(0)=0,\n\\end{array}\\right. \\label{eq:W}\n\\end{equation}\nwe note that $f_0(k)$ exhibits poles where $W(ik)=0$. The set\n$\\Omega$ of zeros of $W(z)$ in the left-hand half-plane ${\\rm Re}\\,\nz<0$ corresponds to poles of $f_0(k)$ such that $k^2$ lies on the\nphysical sheet. We refer to the situation where these poles\n(and hence the corresponding zeros of $W(z)$) are simple as the\n{\\em generic case}. In Theorem~\\ref{Thm:local}, we will show\nthat the discrete spectrum of the GPI Hamiltonian is precisely\n$\\{E=-\\omega^2\\mid\\omega\\in\\Omega\\}$ under the requirement of\nlocality.\\footnote{These eigenvalues can be complex: we will return\nto this point in section~5.3.}\n\nThe qualitative features of the scattering data~(\\ref{eq:GPIlow})\nare described by the degrees of $p$ and $q$, two indices $I_L^\\pm$\ndefined below, and the asymptotic behaviour of $\\cot\\delta_0(k)$\ngiven by\n\\begin{equation}\n\\sigma_0 = {\\rm sgn}\\,\\lim_{k\\rightarrow 0^+}\n\\cot\\delta_0(k)\\qquad {\\rm and}\\qquad \\sigma_\\infty\n={\\rm sgn}\\,\\lim_{k\\rightarrow\\infty}\\cot\\delta_0(k),\n\\end{equation}\nwhere the limits are allowed to be $\\pm\\infty$. 
The indices\n$I_L^\\pm$ are defined by\n\\begin{equation}\nI_L^+= \\frac{\\delta_0(0)-\\delta_0(\\infty)}{\\pi}\\qquad{\\rm and}\n\\qquad\nI_L^-=\\frac{\\zeta(0)-\\zeta(\\infty)}{\\pi},\n\\end{equation}\nwhere the auxiliary scattering data $\\zeta(k)$ is defined\nas a continuous function on ${\\rm I\\! R}^+$ by\n\\begin{equation}\n\\cot\\zeta(k) = -k^{-1}\\frac{p(-k^2)}{q(-k^2)}. \\label{eq:zeta}\n\\end{equation}\nWe refer to $I_L^\\pm$ as the Levinson indices (although Levinson's\ntheorem \\cite{Newt} will not hold in its usual form).\n\nWe now define the integral transform\n${\\cal T}=\\cos\\delta_0(k){\\cal S}+\\sin\\delta_0(k){\\cal C}$, which is suggested by\nthe\nna\\\"{\\i}ve generalised eigenfunctions $u_k(r) =\n(2\/\\pi)^{1\/2}\\sin (kr+\\delta_0(k))$.\nHere, ${\\cal S}$ and ${\\cal C}$ are the sine and cosine transforms,\ndefined by\n\\begin{eqnarray}\n({\\cal S} \\psi)(k) =\n\\sqrt{\\frac{2}{\\pi}}\\int_0^\\infty dr\\, \\psi(r)\\sin kr & {\\rm and}\n& ({\\cal C} \\psi)(k) =\n\\sqrt{\\frac{2}{\\pi}}\\int_0^\\infty dr\\, \\psi(r)\\cos kr\n\\end{eqnarray}\n(the integrals are intended as limits in $L^2$-norm).\nBoth are unitary maps from ${\\cal H}_r$ to ${\\cal H}_k$;\ntheir inverses have the same form, with $r$ and $k$ exchanged.\nThus ${\\cal T}$ is given explicitly by\n\\begin{equation}\n{\\cal T} = \\frac{p(k^2)}{(p(k^2)^2+k^2q(k^2)^2)^{1\/2}}{\\cal S} +\n\\frac{kq(k^2)}{(p(k^2)^2+k^2q(k^2)^2)^{1\/2}} {\\cal C}.\n\\end{equation}\nBecause ${\\cal S}$ and ${\\cal C}$ furnish the spectral representations of\n$-d^2\/dr^2$ on $L^2({\\rm I\\! R}^+)$ with Dirichlet and Neumann boundary\nconditions respectively at the origin, we are in the general situation\nof Section~2.2.\n\nWe now restrict to the generic case and explicitly construct the\n$M_i$ and compute their rank and signature. 
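As a concrete illustration of these quantities (a toy example of our own, not taken from \cite{Shond1}), take $p(z)=z+2$ and $q(z)=1$, so that $\cot\delta_0(k)=(k^2+2)\/k$ and, since $p(0)\neq 0$, $W(z)=p(-z^2)-zq(-z^2)=-z^2-z+2=-(z+2)(z-1)$. A short numerical sketch locates $\Omega$ and the residues $\alpha_\omega$ defined in Proposition~1 below:

```python
import numpy as np

# Toy scattering data (our own example): p(z) = z + 2, q(z) = 1,
# so W(z) = p(-z^2) - z q(-z^2) = -z^2 - z + 2 = -(z + 2)(z - 1).
W = np.poly1d([-1.0, -1.0, 2.0])
dW = W.deriv()

# Omega = zeros of W in the open left half-plane Re z < 0.
Omega = [z for z in W.roots if np.real(z) < 0]
assert len(Omega) == 1 and np.isclose(Omega[0], -2.0)

# At a simple zero w, the residue of 2 z f_0(-iz) = 2 z q(-z^2)/W(z) is
# alpha_w = 2 w q(-w^2) / W'(w).
w = Omega[0]
alpha = 2.0 * w * 1.0 / dW(w)
assert np.isclose(alpha, -4.0 / 3.0)

# Under locality, the discrete spectrum is E = -w^2 = -4.
assert np.isclose(-w**2, -4.0)
```

Here $\alpha_{-2}=-4\/3<0$, anticipating that $M_2$ acquires a negative direction in this example and that the bound state at $E=-4$ will carry negative norm.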
$M_2$ is given by the\nfollowing proposition, whose proof is given later in this section.\n\\begin{Prop}\n\\label{Prop:M2} In the generic case,\n\\begin{equation}\nM_2 = \\sum_{\\omega\\in\\Omega} \\alpha_\\omega\n\\ket{\\xi_\\omega}\\bra{\\xi_{\\overline{\\omega}}}, \\label{eq:M2}\n\\end{equation}\nwhere $\\xi_\\omega(r) =e^{\\omega r}$, and $\\alpha_\\omega$ is the\nresidue\n\\begin{equation}\n\\alpha_\\omega= {\\rm Res}_\\omega\n2zf_0(-iz). \\label{eq:aw}\n\\end{equation}\nIn addition, ${\\rm Ran}\\,\nM_2={\\rm span}\\,\\{\\xi_\\omega\\mid\\omega\\in\\Omega\\}$,\nand\n\\begin{eqnarray}\n{\\rm rank}\\, M_2 &=& \\frac{1}{2}\\deg W + I_L^+ \\label{eq:M2rnk} \\\\\n{\\rm sig}\\, M_2 &=& \\frac{1}{2}\\left(\\sigma_0^2-\\sigma_\\infty^2\\right)\n-I_L^- . \\label{eq:M2sig}\n\\end{eqnarray}\n\\end{Prop}\n\nNext, define ${\\cal M}_1$ to be the space of all\n$L^2$-vectors of form $Q(k^2)k(p(k^2)^2+k^2q(k^2)^2)^{-1\/2}$,\nsuch that $Q(z)\\in \\relax{\\hbox{$\\inbar\\kern-.3em{\\rm C}$}}[z]$ is a polynomial with complex\ncoefficients. 
Thus\n\\begin{equation}\n{\\cal M}_1 =\n(p(k^2)^2+k^2q(k^2)^2)^{-1\/2}k\\relax{\\hbox{$\\inbar\\kern-.3em{\\rm C}$}}_{\\mho-1}[k^2]\n\\end{equation}\nwhere $\\relax{\\hbox{$\\inbar\\kern-.3em{\\rm C}$}}_r[z]$ is the $r+1$-dimensional complex vector space of\npolynomials with complex coefficients and degree at most $r$,\nand $\\mho=\\dim {\\cal M}_1$ is given by\n\\begin{equation}\n\\mho=\\frac{1}{2}\\deg W+\\frac{1}{2}(\\sigma_\\infty^2-\\sigma_0^2) =\n\\max\\{\\deg p ,\\deg q\\}.\n\\end{equation}\n$M_1$ is described by\n\\begin{Prop} \\label{Prop:M1}\nIn the generic case, $M_1$ vanishes on\n${{\\cal M}_1}^\\perp$, and its action on ${\\cal M}_1$ is given by\n$M_1 Q(k^2)k(p(k^2)^2+k^2q(k^2)^2)^{-1\/2}=\\tilde{Q}(k^2)k\n(p(k^2)^2+k^2q(k^2)^2)^{-1\/2}$, where\n\\begin{equation}\n\\tilde{Q}(k^2)=\nQ(k^2)+\\sum_{\\omega\\in\\Omega}\n\\frac{Q(-\\omega^2)\\alpha_\\omega}{q(-\\omega^2)}\n\\frac{p(k^2)-\\omega q(k^2)}{\\omega^2+k^2}.\n\\end{equation}\nMoreover, ${\\rm Ran}\\, M_1={\\cal M}_1$ and\n\\begin{eqnarray}\n{\\rm rank}\\, M_1 & = &\\frac{1}{2}\\deg W +\n\\frac{1}{2}(\\sigma_\\infty^2-\\sigma_0^2) \\label{eq:M1rnk} \\\\\n{\\rm sig}\\, M_1 &=& -(I_L^+ +I_L^-). \\label{eq:M1sig}\n\\end{eqnarray}\n\\end{Prop}\n\nAs an example, let us consider the sub-class of the $R$ class\nconsidered by Shondin \\cite{Shond1}; namely, the case where\n$r(z)=p(z)\/q(z)$ has negative imaginary part in the upper half-plane.\nIn this case, it is easy to show that there can be no solutions to\n$r(-z^2)=z$ and hence to $W(z)=0$ in the left-hand half-plane,\nexcept on the real axis. Moreover, one can show that the residues\n$\\alpha_\\omega$ at these zeros are necessarily positive, so $M_2$ is\na positive operator as a result of~(\\ref{eq:M2}). Accordingly, ${\\cal T}$\nis contractive, and our method yields a unitary dilation defined on\nHilbert spaces. 
This explains why Shondin was able to construct\nthese GPI models on enlarged {\\em Hilbert} spaces.\n\nWe now prove the above propositions.\n\n{\\noindent\\em Proof of Proposition~\\ref{Prop:M2}:}\n$M_2$ may be written in two equivalent forms:\n\\begin{eqnarray}\nM_2 &=& {\\cal S}^{-1}\\sin^2\\delta_0(k){\\cal S}\n -{\\cal C}^{-1}\\sin^2\\delta_0(k){\\cal C} \\nonumber \\\\\n& &-{\\cal C}^{-1}\\sin\\delta_0(k)\\cos\\delta_0(k){\\cal S} -\n{\\cal S}^{-1}\\sin\\delta_0(k)\\cos\\delta_0(k){\\cal C} \\label{eq:ker1}\\\\\n&=& {\\cal C}^{-1}\\cos^2\\delta_0(k){\\cal C} -{\\cal S}^{-1}\\cos^2\\delta_0(k){\\cal S}\n\\nonumber \\\\\n& &-{\\cal C}^{-1}\\sin\\delta_0(k)\\cos\\delta_0(k){\\cal S} -\n{\\cal S}^{-1}\\sin\\delta_0(k)\\cos\\delta_0(k){\\cal C} . \\label{eq:ker2}\n\\end{eqnarray}\n{}To convert this into an integral kernel we use the following Lemma,\nwhich may be proved by standard means (cf. Theorem IX.29 in\n\\cite{RSii}). Here, $v(x)$ and $w(x)$ stand for either $\\sin x$ or\n$\\cos x$, and ${\\cal V}$ and ${\\cal W}$ are the corresponding integral\ntransforms from ${\\cal H}_r$ to ${\\cal H}_k$.\n\\begin{Lem} \\label{Lem:ker}\nLet $g(k)\\in L^2({\\rm I\\! R}^+)\\cap L^\\infty({\\rm I\\! R}^+)$ and define\n$G={\\cal V}^{-1}g(k){\\cal W}$. 
Then $G$ has integral kernel\n\\begin{equation}\nG(r,r^\\prime) = \\frac{2}{\\pi} \\int_0^\\infty v(kr)w(kr^\\prime)g(k)dk,\n\\end{equation}\n(where the integral is a limit in $L^2$-norm).\n\\end{Lem}\n\nIn the case $\\deg p>\\deg q$, $\\sin^2\\delta_0(k)$ and\n$\\sin\\delta_0(k)\\cos\\delta_0(k)$ are $L^2\\cap L^\\infty$ and so,\napplying Lemma~\\ref{Lem:ker} to~(\\ref{eq:ker1}) and combining\nterms, $M_2$ has integral kernel\n\\begin{equation}\nM_2(r,r^\\prime) =\n\\frac{i}{\\pi}\\int_{-\\infty}^\\infty e^{i\\delta_0(k)}\n\\sin\\delta_0(k) e^{ik(r+r^\\prime)}dk\n=\\frac{1}{\\pi}\\int_{-\\infty}^\\infty\n\\frac{ikq(k^2)e^{ik(r+r^\\prime)}}{p(k^2)-ikq(k^2)} dk.\n\\end{equation}\nMaking the substitution $z=ik$ and closing the contour in the\nleft-hand half-plane, the integrand has a simple pole at each\n$\\omega\\in\\Omega$ and~(\\ref{eq:M2}) follows. If $\\deg q\\ge\\deg p$, we\nargue similarly using~(\\ref{eq:ker2}) to obtain the same result as\nbefore.\n\nBy linear independence of the $\\xi_\\omega$ and non-vanishing of the\n$\\alpha_\\omega$, it follows that ${\\rm Ran}\\,\nM_2={\\cal M}_2={\\rm span}\\,\\{\\xi_\\omega\\mid\\omega\\in\\Omega\\}$, so ${\\rm rank}\\,\nM_2=|\\Omega|$, the cardinality of $\\Omega$.\nUsing residue calculus, one may show that\n\\begin{equation}\n|\\Omega| = \\frac{1}{2}\\deg W +\n\\frac{1}{2\\pi}\\int_{-\\infty}^\\infty \\frac{W^\\prime(ik)}{W(ik)} dk.\n\\end{equation}\nBy rewriting the second term as an integral over $(0,\\infty)$, a\nsmall amount of algebra shows that the integrand is\n$-\\pi^{-1}\\delta^\\prime_0(k)$. Thus~(\\ref{eq:M2rnk}) is established.\n\n{}To compute ${\\rm sig}\\, M_2$, we define the hermitian\nform $m_2(\\varphi,\\psi): {\\cal M}_2\\times{\\cal M}_2\\rightarrow \\relax{\\hbox{$\\inbar\\kern-.3em{\\rm C}$}}$ by\n$m_2(\\varphi,\\psi)=\\inner{\\varphi}{M_2\\psi}$. 
Labelling the\nelements of $\Omega$ as $\omega_1,\ldots,\omega_{|\Omega|}$, and\nwriting $\psi=\sum_i c_i\xi_{\omega_i}$, we have\n\begin{equation}\nm_2(\psi,\psi) = \sum_{i,j,k} \overline{c_i}\n\inner{\xi_{\omega_i}}{\xi_{\omega_j}}\alpha_{\omega_j}\n\inner{\xi_{\overline{\omega_j}}}{\xi_{\omega_k}}c_k\n= c^\dagger\Xi^\dagger A\Xi c,\n\end{equation}\nwhere $A$ and $\Xi$ are hermitian. $\Xi$ has components\n$\Xi_{ij}=\inner{\xi_{\omega_i}}{\xi_{\omega_j}}$,\nand is non-singular by\nlinear independence of the $\xi_\omega$. By Sylvester's Law of\nInertia \cite{Cohn}, the signature of $M_2$ equals that of $A$,\nwhich has components\n\begin{equation}\nA_{ij} = \left\{ \begin{array}{cl}\n\alpha_{\omega_i} & \omega_i=\overline{\omega_j} \\\n0 & {\rm otherwise}. \end{array}\right.\n\end{equation}\n$A$ has eigenvalues $\{\alpha_\omega\mid \omega\in\Omega\cap{\rm I\! R}\}\cup \{\pm\n|\alpha_\omega|\mid\omega\in\Omega\setminus{\rm I\! R}\}$.\nLabelling the $\omega_i$ so that $\omega_1,\ldots,\omega_r$ are\nthe real elements of $\Omega$, we therefore have\n${\rm sig}\, M_2={\rm sig}\,{\rm diag}\, (\alpha_{\omega_1},\ldots,\alpha_{\omega_r})$.\n(We have used the fact that\n$\alpha_{\overline{\omega}}=\overline{\alpha_\omega}$, and\nin particular that $\omega_r\in{\rm I\! R}$ implies\n$\alpha_{\omega_r}\in{\rm I\! R}$.) Defining $\zeta(k)$ by~(\ref{eq:zeta}), it is easy\nto show that $\cot\zeta(-\omega)=1$ for $\omega\in\Omega$, and that\n\begin{equation}\n\alpha_\omega =2\lim_{z\rightarrow -\omega}\n\frac{z+\omega}{1-\cot\zeta(z)} = \frac{1}{\zeta^\prime(-\omega)}.\n\end{equation}\nThus ${\rm sig}\,{\rm diag}\,(\alpha_{\omega_1},\ldots,\alpha_{\omega_r})$ is equal to the number of\ntimes that $\zeta(k)\equiv\pi\/4 \pmod\pi$ as $k$\ntraverses ${\rm I\! R}^+$, counted according to the sign of\n$\zeta^\prime(k)$ at such points. 
This is related to the Levinson\nindex $I_L^-$ by~(\ref{eq:M2sig}). $\vrule height 1.5ex width 1.2ex depth -.1ex $\n\n{\noindent\em Proof of Proposition~\ref{Prop:M1}:} We compute\n\begin{eqnarray}\nM_1 &=& -\frac{p(k^2)}{(p(k^2)^2+k^2q(k^2)^2)^{1\/2}}{\cal S}{\cal C}^{-1}\n\frac{kq(k^2)}{(p(k^2)^2+k^2q(k^2)^2)^{1\/2}} \nonumber \\\n&& -\frac{kq(k^2)}{(p(k^2)^2+k^2q(k^2)^2)^{1\/2}}{\cal C}{\cal S}^{-1}\n\frac{p(k^2)}{(p(k^2)^2+k^2q(k^2)^2)^{1\/2}},\n\end{eqnarray}\nwhich vanishes identically on the closure of\n${\cal D}=(p(k^2)^2+k^2q(k^2)^2)^{1\/2}{\cal S} C_0^\infty(0,\infty)$ as a result\nof elementary properties of the sine and cosine transforms.\nFurthermore, $\overline{{\cal D}}^\perp$ is precisely the space ${\cal M}_1$\ndefined above, because $\psi\perp{\cal D}$ if and only if\n$(p(k^2)^2+k^2q(k^2)^2)^{1\/2}\psi$ is the sine transform of a\ndistribution supported at the origin and therefore an odd polynomial\n(cf. Theorem~V.11 in \cite{RSi}). Hence $M_1$ vanishes on\n${\cal M}_1^\perp$ and ${\rm Ran}\, M_1\subset {\cal M}_1$.\n\nNext, we compute the action of $M_1$ on ${\cal M}_1$. By contour\nintegration,\n\begin{equation}\n{\cal T}^*\frac{kQ(k^2)}{(p(k^2)^2+k^2q(k^2)^2)^{1\/2}}\n=-\left(\frac{\pi}{2}\right)^{1\/2}\sum_{\omega\in\Omega}\n\frac{Q(-\omega^2)\alpha_\omega}{q(-\omega^2)}\xi_\omega(r),\n\label{eq:Tstr}\n\end{equation}\nfor polynomials $Q(z)$ such that the operand is in $L^2$. 
Moreover,\nit is easy to show that\n\begin{equation}\n{\cal T}\xi_{\omega} = \left(\frac{2}{\pi}\right)^{1\/2}\n\frac{k}{(p(k^2)^2+k^2q(k^2)^2)^{1\/2}}\n\frac{p(k^2)-\omega q(k^2)}{\omega^2+k^2},\n\label{eq:Txi}\n\end{equation}\nfrom which the action of $M_1$ can be read off as required.\n\n{}To compute the rank and signature of $M_1$, we use the fact that\n\begin{equation}\n{\rm rank}\, M_1 - {\rm rank}\, M_2 = {\rm sig}\, M_1 -{\rm sig}\, M_2 =\n\dim\ker {\cal T}^*-\dim\ker{\cal T} ,\n\end{equation}\nwhich follows from the intertwining relations $M_1{\cal T}={\cal T} M_2$ and\n$M_2{\cal T}^*={\cal T}^*M_1$. It therefore remains to determine the dimensions\nof the relevant kernels. Firstly, note that $\ker {\cal T}^*\subset{\cal M}_1$\nand that (from~(\ref{eq:Tstr}))\n$\psi=Q(k^2)k(p(k^2)^2+k^2q(k^2)^2)^{-1\/2}\in\ker {\cal T}^*$ if and only\nif $\psi\in{\cal M}_1$ and $Q(-\omega^2)=0$ for each $\omega\in\Omega$.\nThus $\prod_{\omega\in\Omega} (z+\omega^2)$ divides $Q(z)$ and so\n\begin{equation}\n\dim\ker{\cal T}^* = \max\{\mho-|\Omega|,0\}.\n\end{equation}\n\nNow consider $\ker{\cal T}$. 
We note that~(\\ref{eq:Txi}) may be rewritten\n\\begin{equation}\nq(-\\omega_i^2){\\cal T}\\xi_{\\omega_i}=\\left(\\frac{2}{\\pi}\\right)^{1\/2}\n\\frac{k}{(p(k^2)^2+k^2q(k^2)^2)^{1\/2}}\n\\frac{p(k^2)q(-\\omega_i^2)-p(-\\omega_i^2)q(k^2)}{k^2+\\omega_i^2},\n\\end{equation}\nand apply the following abstract algebraic result:\n\\begin{Lem}\n\\label{Lem:FW}\nLet $Q,R\\in \\relax{\\hbox{$\\inbar\\kern-.3em{\\rm C}$}}[z]$ be coprime with $\\max\\{\\deg Q,\\deg R\\}=k\\ge 0$,\nand let $\\lambda_1,\\ldots,\\lambda_m$ be distinct elements of $\\relax{\\hbox{$\\inbar\\kern-.3em{\\rm C}$}}$.\nThen the polynomials $P_1(z),\\ldots P_m(z)$, defined by\n\\begin{equation}\n(z-\\lambda_i) P_i(z) = R(\\lambda_i)Q(z)-Q(\\lambda_i)R(z)\n\\end{equation}\nspan a $\\min\\{k,m\\}$-dimensional subspace of $\\relax{\\hbox{$\\inbar\\kern-.3em{\\rm C}$}}_{k-1}[z]$.\n\\end{Lem}\n{\\em Proof:} Let $n=\\min\\{k,m\\}$. Then it is enough to show that\n$P_1,\\ldots,P_n$ are linearly independent. Assuming that $\\deg Q=k$,\nwe note that $P_i(z)=R(z) \\tilde{Q}_i(z)-Q(z)\\tilde{R}_i(z)$, where\n$\\tilde{Q}_i(z)=(Q(z)-Q(\\lambda_i))\/(z-\\lambda_i)$ and\n$\\tilde{R}_i(z)=(R(z)-R(\\lambda_i))\/(z-\\lambda_i)$. Suppose the $P_i$\nare linearly dependent. Then $R(z) S(z) = Q(z) T(z)$ where $S(z)\n=\\sum_i \\alpha_i \\tilde{Q}_i(z)$ and $T(z) =\\sum_i\\alpha_i\n\\tilde{R}_i(z)$, for some $0\\not= (\\alpha_1,\\ldots,\\alpha_n)^T\\in\n\\relax{\\hbox{$\\inbar\\kern-.3em{\\rm C}$}}^n$. Because $Q$ and $R$ are coprime, this implies that $S$ and\n$T$ vanish identically. But one may easily show that the\n$\\tilde{Q}_i$ are linearly independent, by explicitly considering\ntheir coefficients. We therefore obtain a contradiction. $\\vrule height 1.5ex width 1.2ex depth -.1ex $\n\nIn our application, $m=|\\Omega|$ with $\\lambda_i=-\\omega_i^2$ for\neach $i=1,\\ldots,m$ and $k=\\max\\{\\deg p,\\deg q\\}=\\mho$. 
Thus\n$\dim{\cal T}{\rm Ran}\, M_2=\min\{|\Omega|,\mho\}$ and so\n\begin{equation}\n\dim\ker{\cal T}= \max\{|\Omega|-\mho,0\}.\n\end{equation}\nIt follows that ${\rm rank}\, M_1-{\rm rank}\, M_2={\rm sig}\, M_1-{\rm sig}\,\nM_2=\mho-|\Omega|$, from which~(\ref{eq:M1rnk}) and~(\ref{eq:M1sig})\nfollow. $\vrule height 1.5ex width 1.2ex depth -.1ex $\n\n\sect{The GPI Hamiltonian}\n\label{sect:GHm}\n\subsection{Locality and Spectral Properties}\n\nThe results of the previous two sections allow the construction of a\nunitary dilation $\hat{{\cal T}}$ of the integral transform ${\cal T}$. Here,\nwe employ $\hat{{\cal T}}$ to define a GPI Hamiltonian consistent with\nthe scattering data~(\ref{eq:GPIlow}). We denote\n$\Pi_r={\cal H}_r\oplus{\cal M}_1$ and $\Pi_k={\cal H}_k\oplus{\cal M}_2$ with $J$-inner\nproducts specified by $J_r=\leavevmode\hbox{\small1\kern-3.8pt\normalsize1}_{{\cal H}_r}\oplus{\rm sgn}\, (M_1)$, and\n$J_k=\leavevmode\hbox{\small1\kern-3.8pt\normalsize1}_{{\cal H}_k}\oplus{\rm sgn}\, (M_2)$. In terms of our general\ndiscussion in Section~2.2, we set $A=-d^2\/dr^2$ on domain\n$C_0^\infty(0,\infty)$, and define ${\cal T}_+={\cal S}$, ${\cal T}_-={\cal C}$, setting\n$a_+$ and $a_-$ to be multiplication by $\cos\delta_0(k)$ and\n$\sin\delta_0(k)$ respectively. Thus\n$A_+={\cal S}^*k^2{\cal S}$, the self-adjoint extension of $A$ with Dirichlet\nboundary conditions at the origin, whilst $A_-={\cal C}^*k^2{\cal C}$ is the\nextension with Neumann boundary conditions at the origin. The\noperators $A_\pm+1$ both have bounded inverse.\n\nThe $S$-wave GPI Hamiltonian is defined by\n\begin{equation}\nh_{\rm gpi} = \hat{{\cal T}}^\dagger\left(\n\begin{array}{cc} k^2 & 0 \\ 0 & \Lambda \end{array}\right)\n\hat{{\cal T}}, \label{eq:GPIham}\n\end{equation}\nwhere $\Lambda$ is a ${\rm sgn}\, (M_2)$-self-adjoint operator\n$\Lambda^\dagger=\Lambda$ on ${\cal M}_2$. 
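The ${\rm sgn}\,(M_2)$-self-adjointness of $\Lambda$ is exactly what makes an operator of this form self-adjoint in the $J_r$-inner product on $\Pi_r$. In finite dimensions this is easy to confirm numerically; the sketch below (our own illustration, with random symmetric matrices standing in for $k^2$ and $\Lambda$ and a random matrix for ${\cal T}$) builds the dilation as in Section~2.1 and checks $J_r$-self-adjointness:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2
I = np.eye(n)
T = 1.5 * rng.standard_normal((n, n))   # stand-in for the transform T

def sgn_and_root(M):
    """Spectral sgn(M) and |M|^{1/2} of a symmetric matrix M."""
    w, V = np.linalg.eigh(M)
    return V @ np.diag(np.sign(w)) @ V.T, V @ np.diag(np.sqrt(np.abs(w))) @ V.T

S1, A1 = sgn_and_root(I - T @ T.T)      # sgn(M_1), |M_1|^{1/2}
S2, A2 = sgn_and_root(I - T.T @ T)      # sgn(M_2), |M_2|^{1/2}
That = np.block([[T, -S1 @ A1], [A2, T.T]])
J1 = np.block([[I, 0 * I], [0 * I, S1]])
J2 = np.block([[I, 0 * I], [0 * I, S2]])
Tdag = J1 @ That.T @ J2                 # (J_1, J_2)-adjoint dilation

K = rng.standard_normal((n, n)); K = K + K.T        # stand-in for k^2
Sym = rng.standard_normal((n, n)); Sym = Sym + Sym.T
Lam = S2 @ Sym          # sgn(M_2)-self-adjoint: S2 @ Lam.T @ S2 == Lam

h = Tdag @ np.block([[K, 0 * I], [0 * I, Lam]]) @ That
assert np.allclose(S2 @ Lam.T @ S2, Lam)    # Lambda^dagger = Lambda
assert np.allclose(J1 @ h.T @ J1, h)        # h^dagger = h on Pi_r
```

The final assertion holds for any ${\rm sgn}\,(M_2)$-self-adjoint $\Lambda$; no commutation with $K$ is needed.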
To fix $\\Lambda$, we require\nthat $h_{\\rm gpi}(\\psi,0)^T=(-\\psi^{\\prime\\prime},0)^T$ for\nall $\\psi\\in C_0^\\infty(0,\\infty)$ as a locality requirement.\nFor general $\\psi\\in{\\cal M}_2$, we have\n\\begin{equation}\nA^*\\psi = -\\sum_{\\omega\\in\\Omega}\\alpha_\\omega \\omega^2\n\\ket{\\xi_\\omega}\\inner{\\xi_{\\overline{\\omega}}}{M_2^{-1}\\psi},\n\\end{equation}\nso ${\\cal M}_2$ is invariant under $A^*$ and it follows immediately from\nSection~2.2 that\n\\begin{Thm} \\label{Thm:local}\nIn the generic case, the unique choice of $\\Lambda$ consistent with\nlocality is\n\\begin{equation}\n\\Lambda= -\\left({\\rm sgn}\\,(M_2) |M_2|^{1\/2}\\right)^{-1}\n\\sum_{\\omega\\in\\Omega} \\alpha_\\omega \\omega^2\\ket{\\xi_\\omega}\n\\bra{\\xi_{\\overline{\\omega}}} |M_2|^{-1\/2}.\n\\end{equation}\n\\end{Thm}\n\nWe proceed to determine the eigenvectors and eigenvalues of\n$\\Lambda$. First note that\n$\\inner{\\xi_{\\overline{\\omega_j}}}{M_2^{-1}\\xi_{\\omega_i}}=\n\\alpha_{\\omega_{i}}^{-1}\\delta_{ij}$, which follows from the identity\n$\\xi_{\\omega_i}=\\sum\\alpha_\\omega\\ket{\\xi_\\omega}\n\\inner{\\xi_{\\overline{\\omega}}}{M_2^{-1}\\xi_{\\omega_i}}$. It is then\na matter of computation to see that $\\varphi_i=\\left({\\rm sgn}\\,(M_2)\n|M_2|^{1\/2}\\right)^{-1}\\xi_{\\omega_i}$ is an eigenvector of $\\Lambda$\nwith eigenvalue $-\\omega_i^2$ for each $i=1,\\ldots,|\\Omega|$. Because\n$\\Lambda$ has rank $|\\Omega|$, this exhausts the discrete spectrum of\n$h_{\\rm gpi}$. The following is then immediate.\n\\begin{Thm}\nIn the generic case, and with\n$\\Lambda$ is defined as above, $h_{\\rm gpi}$ has the following spectral\nproperties: $\\sigma(h_{\\rm gpi})=\\sigma_{\\rm ac}(h_{\\rm gpi}) \\cup\\sigma_{\\rm\npp}(h_{\\rm gpi})$ where $\\sigma_{\\rm ac}(h_{\\rm gpi})={\\rm I\\! 
R}^+$ and $\\sigma_{\\rm\npp}(h_{\\rm gpi})$ consists of the $|\\Omega|$ eigenvalues $-\\omega_i^2$, whose\ncorresponding eigenvectors are\n\\begin{equation}\n\\psi_i = \\hat{{\\cal T}}^\\dagger\\varphi_i=\n\\left(\\begin{array}{c} \\xi_{\\omega_i} \\\\\n{\\cal T} \\left({\\rm sgn}\\,(M_2) |M_2|^{1\/2}\\right)^{-1}\\xi_{\\omega_i}\n\\end{array}\\right).\n\\end{equation}\nThe absolutely continuous subspace is the Hilbert space\n$\\hat{{\\cal T}}^\\dagger{\\cal H}_k$.\n\\end{Thm}\n\nThis bears out our earlier statement that the poles of the\nscattering amplitude on the physical sheet correspond to the discrete\nenergy spectrum, if locality is imposed.\n\nThe physical Hilbert space is required to be a positive definite\ninvariant subspace of $\\Pi_r$ relative to $h_{\\rm gpi}$.\\footnote{An\ninvariant subspace ${\\cal L}$ of a $J$-space ${\\cal K}$ relative to a linear\noperator $A$ on ${\\cal K}$ is a subspace of ${\\cal K}$ such that\n$\\overline{D(A)\\cap{\\cal L}}={\\cal L}$ and ${\\rm Ran}\\, A|_{{\\cal L}}\\subset {\\cal L}$, where\nthe closure is taken in the norm topology of ${\\cal K}$.} In $\\Pi_k$, we\nhave the $[\\cdot,\\cdot]_{\\Pi_k}$-orthogonal decomposition\n$\\Pi_k={\\cal H}_k [+] {\\cal M}_2$, where ${\\cal M}_2$ is spanned by the eigenvectors\n$\\varphi_i$ of $\\Lambda$. 
We compute\n\begin{equation}\n[\varphi_i,\varphi_j]_{{\cal M}_2}\n=\inner{\xi_{\omega_i}}{M_2^{-1}\xi_{\omega_j}}\n= \left\{\begin{array}{cl} 0 & \omega_i\not=\overline{\omega_j} \\\n\alpha_{\omega_j}^{-1} & \omega_i=\overline{\omega_j}.\n\end{array}\right.\n\end{equation}\nHence $\Pi_k$ is decomposable as\n$\Pi_k = {\cal H}_k [+] E_+ [+] E_- [+] H$\nwhere $E_+$ is spanned by the $\varphi_i$ with\n$[\varphi_i,\varphi_i]_{{\cal M}_2}>0$ ($\alpha_{\omega_i}>0$),\n$E_-$ is spanned\nby those with $[\varphi_i,\varphi_i]_{{\cal M}_2}<0$\n($\alpha_{\omega_i}<0$), and\n$H$ is the {\em hyperbolic invariant subspace} spanned by those\n$\varphi_i$ with $\omega_i\not\in{\rm I\! R}$. Moreover, this is a\ndecomposition into invariant subspaces, because $D(k^2)$ is dense in\n${\cal H}_k$. The physical Hilbert space ${\cal H}_{\rm phys}$ is therefore\ndefined by\n\begin{equation}\n{\cal H}_{\rm phys} = \hat{{\cal T}}^\dagger ({\cal H}_k[+]E_+).\n\end{equation}\n\nWe briefly discuss the uniqueness of the GPI Hamiltonian constructed\nin this way. As noted in Section~2.1, $\hat{{\cal T}}$ is unique up to\nfurther unitary dilation and unitary equivalence because the $M_i$\nare of finite rank. Further dilation merely corresponds to the\n(trivial) freedom to form the direct sum of $h_{\rm gpi}$ with the\nHamiltonian of an arbitrary independent system. 
On the other hand,\nreplacing $\\hat{{\\cal T}}$ by $(\\leavevmode\\hbox{\\small1\\kern-3.8pt\\normalsize1}\\oplus U_2)\\hat{{\\cal T}}(\\leavevmode\\hbox{\\small1\\kern-3.8pt\\normalsize1}\\oplus U_1)$\nwhere $U_i$ is a ${\\rm sgn}\\, M_i$-unitary operator on ${\\cal M}_i$ for $i=1,2$,\nit is easy to show that the local GPI Hamiltonian $h_{\\rm gpi}^\\prime$\nobtained is given by\n\\begin{equation}\nh_{\\rm gpi}^\\prime=\n\\left(\\begin{array}{cc} \\leavevmode\\hbox{\\small1\\kern-3.8pt\\normalsize1} & 0\\\\0 & U_1\\end{array}\\right)^\\dagger\nh_{\\rm gpi}\\left(\\begin{array}{cc} \\leavevmode\\hbox{\\small1\\kern-3.8pt\\normalsize1} & 0\\\\0 & U_1\\end{array}\\right).\n\\end{equation}\nWe have therefore constructed a family of unitarily equivalent GPI\nHamiltonians on $\\Pi_r$ corresponding to the same scattering data. It\nis clearly sufficient to study $h_{\\rm gpi}$ alone in order to determine the\ndomain and scattering properties of $h_{\\rm gpi}^\\prime$.\n\n\\subsection{Domain and Resolvent}\n\nWe now determine the domain and explicit action of the operator\n$h_{\\rm gpi}$ under the locality assumption. Our result is the following:\n\\begin{Thm} \\label{Thm:dom}\nLet $\\Theta_0\n=(2\/\\pi)^{1\/2}k^{2\\mho-1}(p(k^2)^2+k^2q(k^2)^2)^{-1\/2}$. 
Then in\nthe generic case,\n\\begin{eqnarray}\nD(h_{\\rm gpi})&=&\\left\\{\n\\left( \\begin{array}{c} \\varphi \\\\ \\Phi\\end{array}\\right)\n\\mid \\varphi,\\varphi^\\prime\\in\nAC_{\\rm loc}(0,\\infty),~\\varphi,\\varphi^{\\prime\\prime}\\in L^2;\\quad\n\\Phi\\in{\\cal M}_1, \\right.\\nonumber \\\\\n&&\\qquad\\qquad\n\\left.\\begin{array}{c} \\ \\\\ \\ \\end{array}\n\\Theta -\\lambda[\\varphi]\\Theta_0 \\in\nD(k^2)\\cap {\\cal M}_1\\right\\} ,\n\\end{eqnarray}\nwhere $\\Theta={\\rm sgn}\\, M_1|M_1|^{1\/2}\\Phi$ and\n\\begin{equation}\n\\lambda[\\varphi] = \\left\\{\\begin{array}{cl}\nP\\varphi(0) & \\deg p>\\deg q \\\\\nP\\varphi(0)-Q\\varphi^\\prime(0) & \\deg p=\\deg q \\\\\n-Q\\varphi^\\prime(0) & \\deg p< \\deg q, \\end{array}\\right.\n\\end{equation}\nand $P$ and $Q$ are the leading coefficients of $p(z)$ and $q(z)$\nrespectively. (In the case $M_1=0$,\n$D(h_{\\rm gpi})=\\{\\varphi\\mid\\varphi,\\varphi^\\prime\\in\nAC_{\\rm loc}(0,\\infty),~\\varphi,\\varphi^{\\prime\\prime}\\in\nL^2;~\\lambda[\\varphi]=0\\}$.) Moreover,\n\\begin{equation}\nh_{\\rm gpi}\\left(\\begin{array}{c} \\varphi \\\\ \\Phi\\end{array}\\right)=\n\\left(\\begin{array}{c} -\\varphi^{\\prime\\prime}\\\\ \\tilde{\\Phi}\n\\end{array}\\right) ,\n\\end{equation}\nwhere $\\tilde{\\Phi}$ is given in terms of $\\tilde{\\Theta}={\\rm sgn}\\, M_1\n|M_1|^{1\/2}\\tilde{\\Phi}$ by\n\\begin{equation}\n\\tilde{\\Theta}= k^2(\\Theta-\\lambda[\\varphi]\\Theta_0) +\n\\left(\\frac{2}{\\pi}\\right)^{1\/2}\n\\frac{k(\\lambda[\\varphi] k^{2\\mho} -\\varphi(0)p(k^2)\n+\\varphi^\\prime(0)q(k^2))}{(p(k^2)^2+k^2q(k^2)^2)^{1\/2}} .\n\\end{equation}\n\\end{Thm}\n{\\em Proof:} The result is a direct application of the discussion in\nSection~2.2. 
The key point is that, for each $\\varphi\\in\nD(-d^2\/dr^2|_{C_0^\\infty(0,\\infty)}^*)$, the vectors $\\chi_+$ and\n$\\chi_-$ are given by\n\\begin{equation}\n\\chi_+ = -\\varphi(0)e^{-r}\\qquad {\\rm and} \\qquad\n\\chi_-=\\varphi^\\prime(0)e^{-r},\n\\end{equation}\nwhich follows because $\\chi_+$ ($\\chi_-$) is the unique element of\n$\\ker (-d^2\/dr^2|_{C_0^\\infty(0,\\infty)}^*+1)$ such that\n$\\varphi+\\chi_+$ ($\\varphi+\\chi_-$) is in the domain of the Laplacian\nwith Dirichlet (Neumann) boundary conditions at the origin. $\\vrule height 1.5ex width 1.2ex depth -.1ex $\n\nThe resolvent of $h_{\\rm gpi}$ may be written in the form of Krein's formula\nas\n\\begin{equation}\n(h_{\\rm gpi}-z)^{-1} =\n\\left(\\begin{array}{cc} R_0(z) & 0 \\\\ 0 & R_1(z)\\end{array}\\right)\n+\\frac{q(z)}{p(z)+(-z)^{1\/2}q(z)}F(z)F(\\overline{z})^\\dagger.\n\\label{eq:reso}\n\\end{equation}\nHere, $R_0(z)={\\cal S}^{-1}(k^2-z)^{-1}{\\cal S}$ is the free resolvent\nand the defect element $F(z)\\in\\Pi_r$ is given by\n\\begin{equation}\nF(z)=\\left( \\begin{array}{c} e^{-(-z)^{1\/2}r} \\\\\n({\\rm sgn}\\, M_1|M_1|^{1\/2})^{-1}\\Psi(z) \\end{array}\\right),\n\\end{equation}\nwhere $\\Psi(z)\\in{\\cal M}_1$ is\n\\begin{equation}\n\\Psi(z) = \\left(\\frac{2}{\\pi}\\right)^{1\/2}\n\\frac{k(p(k^2)q(z)-p(z)q(k^2))}{(k^2-z)(p(k^2)^2+k^2q(k^2)^2)^{1\/2}},\n\\end{equation}\nand the operator $R_1(z)$ is defined on ${\\cal M}_1$ by\n\\begin{equation}\nR_1(z)\\Phi=({\\rm sgn}\\, M_1|M_1|^{1\/2})^{-1}\\left(\\frac{2}{\\pi}\\right)^{1\/2}\n\\frac{k(Q(k^2)-Q(z)q(k^2)\/q(z))}{(k^2-z)(p(k^2)^2+k^2q(k^2)^2)^{1\/2}},\n\\end{equation}\nwhere $Q(z)$ is defined in terms of $\\Phi$ by\n\\begin{equation}\n\\Theta={\\rm sgn}\\, M_1|M_1|^{1\/2}\\Phi=\\left(\\frac{2}{\\pi}\\right)^{1\/2}\n\\frac{kQ(k^2)}{(p(k^2)^2+k^2q(k^2)^2)^{1\/2}}.\n\\end{equation}\nThe above expression for $R(z)$ may be verified directly using\nTheorem~\\ref{Thm:dom}, and the fact that\n\\begin{equation}\n[({\\rm sgn}\\, 
M_1|M_1|^{1\/2})^{-1}\\Psi(\\overline{z}),\\Phi]_{{\\cal M}_1}=\n-\\frac{Q(z)}{q(z)},\n\\label{eq:clm}\n\\end{equation}\nwhich is required when one takes inner products with\n$F(\\overline{z})$. Using this result, it follows that~(\\ref{eq:reso})\nholds for elements of form $(0,\\Phi)^T$ with $Q(z)=0$; direct\ncomputation establishes it for $Q(z)\\equiv 1$ and also for vectors\nof form $(\\varphi,0)^T$ with $\\varphi\\in{\\cal H}_r$.\\footnote{Here, it is\nuseful to employ the decomposition ${\\cal H}_r= \\overline{{\\rm Ran}\\,\n(-d^2\/dr^2-z)|_{C_0^\\infty(0,\\infty)}}\\oplus \\relax{\\hbox{$\\inbar\\kern-.3em{\\rm C}$}}\ne^{-(-\\overline{z})^{1\/2}r}$.} Thus~(\\ref{eq:reso}) holds on the whole\nof $\\Pi_r$. It remains to establish equation~(\\ref{eq:clm}).\nMultiplying through by $q(z)$, the LHS of~(\\ref{eq:clm}) is equal to\n\\begin{equation}\n\\inner{q(\\overline{z})\\Psi(\\overline{z})}{M_1^{-1}\\Theta}=\n\\inner{{\\cal T}^*q(\\overline{z})\n\\Psi(\\overline{z})}{M_2^{-1}{\\cal T}^*\\Theta}+\\inner{q(\\overline{z})\n\\Psi(\\overline{z})}{\\Theta}. 
\\end{equation}\nUsing the identity\n$\\inner{\\xi_{\\overline{\\omega_j}}}{M_2^{-1}\\xi_{\\omega_i}}=\n\\alpha_{\\omega_{i}}^{-1}\\delta_{ij}$ and the results of Section~3,\nthe first term is\n\\begin{equation}\n\\inner{{\\cal T}^*q(\\overline{z})\n\\Psi(\\overline{z})}{M_2^{-1}{\\cal T}^*\\Theta}=\n\\sum_{\\omega\\in\\Omega}\\frac{p(z)-\\omega\nq(z)}{q(-\\omega^2)(\\omega^2+z)}Q(-\\omega^2)\\alpha_\\omega.\n\\end{equation}\nThe required result then follows from the calculation\n\\begin{eqnarray}\n\\inner{q(\\overline{z})\n\\Psi(\\overline{z})}{\\Theta} &=&\n\\frac{1}{\\pi}\\int_{-\\infty}^\\infty dk\\frac{ikQ(k^2)\n(p(z)-ikq(z))}{(k^2-z)(p(k^2)-ikq(k^2))} \\nonumber \\\\\n&=& -Q(z)-\\sum_{\\omega\\in\\Omega}\\frac{p(z)-\\omega\nq(z)}{q(-\\omega^2)(\\omega^2+z)}Q(-\\omega^2)\\alpha_\\omega.\n\\end{eqnarray}\n\n\n\\subsection{Scattering Theory}\n\\label{sect:GPIsc}\n\nIn this section, we construct M{\\o}ller wave operators for $h_{\\rm gpi}$\nrelative to the free Hamiltonian $h_0={\\cal S}^{-1} k^2{\\cal S}$ on ${\\cal H}_r$ in\norder to check that $h_{\\rm gpi}$ actually exhibits the required scattering\nbehaviour. Because scattering is a function of the continuous\nspectrum only, our results in this section are actually independent\nof the precise form of $\\Lambda$, and therefore of the locality\nrequirement.\n\nWe work in the $S$-wave, and employ a two space setting: let $B$\nbe self-adjoint on ${\\cal H}_1$, $A$ be self-adjoint on ${\\cal H}_2$ and ${\\cal J}$\nbe a bounded operator from ${\\cal H}_1$ to ${\\cal H}_2$. 
Then the M{\\o}ller\noperators $\\Omega^\\pm(A,B;{\\cal J})$ are defined by\n\\begin{equation}\n\\Omega^\\pm(A,B;{\\cal J}) = \\lim_{t\\rightarrow\\mp\\infty}\ne^{iAt}{\\cal J} e^{-iBt}P_{\\rm ac}(B),\n\\end{equation}\nand are said to be complete if the closure of ${\\rm\nRan}\\Omega^\\pm(A,B;{\\cal J})$ is equal to ${\\rm Ran} P_{\\rm ac}(A)$.\n\nIn the following,\n${\\cal J}_r$ and ${\\cal J}_k$ are the natural embeddings of ${\\cal H}_r$ and\n${\\cal H}_k$ into $\\Pi_r$ and $\\Pi_k$ respectively.\n\n\\begin{Thm} Let ${\\cal J}:{\\cal H}_r\\rightarrow\\Pi_r$ be given by\n${\\cal J}=\\hat{{\\cal T}}^\\dagger {\\cal J}_k{\\cal T}$. Then\n$\\Omega^\\pm(h_{\\rm gpi},h_0;{\\cal J})$ exist, are complete, and given by\n\\begin{equation}\n\\Omega^\\pm(h_{\\rm gpi},h_0;{\\cal J}) = \\hat{{\\cal T}}^\\dagger {\\cal J}_k\ne^{\\pm i\\delta_0(k)}{\\cal S},\n\\label{eq:Mops}\n\\end{equation}\nwhere $\\delta_0(k)$ is given by~(\\ref{eq:GPIlow}).\n \\end{Thm}\n{\\em Proof:} Writing $U_t$ for multiplication by $e^{-ik^2t}$ on\n${\\cal H}_k$, we have\n\\begin{eqnarray}\ne^{ih_{\\rm gpi} t} {\\cal J} e^{-ih_0t}P_{\\rm ac}(h_0) &=& \\hat{{\\cal T}}^\\dagger\n\\left(\\begin{array}{cc} U_{-t} & 0 \\\\ 0 & \\exp i\\Lambda t\n\\end{array}\\right) \\hat{{\\cal T}} {\\cal J} {\\cal S}^{-1} U_t {\\cal S}\n\\nonumber \\\\\n&=& \\hat{{\\cal T}}^\\dagger\n{\\cal J}_k U_{-t} {\\cal T} {\\cal S}^{-1} U_t {\\cal S}.\n\\end{eqnarray}\nNow, for any $u(k)\\in C_0^\\infty(0,\\infty)$,\n\\begin{eqnarray}\n\\| U_{-t}{\\cal T}{\\cal S}^{-1}U_{t}u(k)-e^{\\pm i\\delta_0(k)} u(k)\\|^2 & = &\n\\|\\sin\\delta_0(k){\\cal C} ({\\cal C}^{-1}\\pm i{\\cal S}^{-1})U_t u(k)\\|^2\n\\nonumber\\\\ & \\le & \\frac{2}{\\pi}\\int_0^\\infty dr\n\\left|\\int_0^\\infty dk e^{i(\\pm kr-k^2t)} u(k)\\right|^2,\n\\end{eqnarray}\nwhich vanishes as $t\\rightarrow\\mp\\infty$ by (non)-stationary phase\narguments (see the Corollary to Theorem XI.14 in \\cite{RSiii}). 
Thus\n$U_{-t}{\\cal T}{\\cal S}^{-1}U_t \rightarrow e^{\\pm i\\delta_0(k)}$ strongly as\n$t\\rightarrow\\mp\\infty$. The existence and form of the M{\\o}ller\noperators are then immediate. One easily checks that they are unitary\nmaps from ${\\cal H}_r$ to $P_{\\rm ac}(h_{\\rm gpi})=\\hat{{\\cal T}}^\\dagger{\\cal J}_k{\\cal H}_k$,\nto establish completeness. $\\vrule height 1.5ex width 1.2ex depth -.1ex $\n\nWe conclude that our construction does indeed yield the required\nscattering theory, and also that -- as a by-product of the\nconstruction -- complete M{\\o}ller operators may easily and\nexplicitly be determined.\n\n\\sect{Examples}\n\nAs an application, we construct the class of GPI models with\nscattering data\n\\begin{equation}\n\\cot\\delta_0(k) = -\\frac{1}{kL}+kM, \\label{eq:ERlow}\n\\end{equation}\nwhere $L$ is the scattering length, and $M$ is twice the effective\nrange. These models therefore represent the effective range\napproximation to the behaviour of a non-point interaction in the\n$S$-wave. This class of models has been partially studied by\nShondin~\\cite{Shond1}, who considered the case $M<0$ (`models of type\n$B_2$'), and also appears as a special case of the models considered\nby Pavlov in \\cite{Pav1}. (We also note that van Diejen and Tip\n\\cite{Diejen} have constructed models of type $\\cot\\delta_0(k) = (ak\n+ bk^3+ck^5)^{-1}$ using the distributional method.) The case $M>0$\ndoes not appear to have been treated before. Our approach provides a\nunified construction for all models in the above class, and also\nprovides the spectral representation of such models as a by-product\nof the construction (although we will not state this explicitly).\n\nThe above class of GPI models contains two interesting\nsub-families: the ordinary point interactions ($M=0$) and also the\nresonance point interactions (RPI) arising formally by setting $L=\\infty$,\ni.e., $\\cot\\delta_0(k)=kM$ with $M\\in{\\rm I\\! R}\\cup\\{\\infty\\}$. 
Such models\nare required in situations where the scattering length is\ngenerically forced to be infinite, for example in certain systems of\nsupersymmetric quantum mechanics.\n\nWe begin by briefly treating the point interactions, both for\ncompleteness and also to demonstrate how this class arises in our\nformalism. We then turn to the general case, obtaining RPI models\nin the limit $L\\rightarrow -\\infty$.\n\n\\subsection{Point Interactions}\n\nThe required integral transform is\n\\begin{equation}\n{\\cal T} = (1+(kL)^2)^{-1\/2}{\\cal S} - kL(1+(kL)^2)^{-1\/2}{\\cal C}.\n\\end{equation}\nIn the cases $L=0,\\infty$, ${\\cal T}$ reduces to ${\\cal S}$ and ${\\cal C}$\nrespectively, and the Hamiltonian is given immediately by\n${\\cal T}^*k^2{\\cal T}$. We exclude these cases from the rest of our discussion.\n\nWe therefore apply the construction of Section 3, with $p(z)\\equiv\n-L^{-1}$ and $q(z)\\equiv 1$. We find that $\\mho=0$, so $M_1=0$ (i.e.,\n${\\cal T}\\Tt^*=\\leavevmode\\hbox{\\small1\\kern-3.8pt\\normalsize1}$). Straightforward application of\nProposition~\\ref{Prop:M2} yields\n\\begin{equation}\nM_2=\\left\\{\\begin{array}{cl}\n\\ket{\\chi_L}\\bra{\\chi_L} & L>0 \\\\ 0 & L<0, \\end{array}\\right.\n\\end{equation}\nwhere $\\chi_L(r)=(2\/L)^{1\/2}e^{-r\/L}$ is normalised to unity.\nHence if $L<0$, ${\\cal T}$ is unitary and the Hamiltonian is\n$h_L = {\\cal T}^* k^2 {\\cal T}$,\nwith purely absolutely continuous spectrum ${\\rm I\\! R}^+$. 
In the case\n$L>0$, the momentum Hilbert space is extended to ${\\cal H}_k\\oplus\\relax{\\hbox{$\\inbar\\kern-.3em{\\rm C}$}}$,\nrepresenting a single bound state, and the unitary dilation\n$\\hat{{\\cal T}}:{\\cal H}_r\\rightarrow{\\cal H}_k\\oplus\\relax{\\hbox{$\\inbar\\kern-.3em{\\rm C}$}}$ takes the form\n\\begin{equation}\n\\hat{{\\cal T}} =\n\\left(\\begin{array}{c} {\\cal T} \\\\ \\bra{\\chi_L} \\end{array}\\right);\n\\qquad \\hat{{\\cal T}}^*=\n\\left(\\begin{array}{cc} {\\cal T}^* & \\ket{\\chi_L}\\end{array}\\right).\n\\end{equation}\n(${\\cal H}_k\\oplus\\relax{\\hbox{$\\inbar\\kern-.3em{\\rm C}$}}$ has the obvious inner product.)\nThe Hamiltonian is\n\\begin{equation}\nh_L = \\hat{{\\cal T}}^{-1}\\left(\\begin{array}{cc} k^2 & 0 \\\\ 0 & \\lambda\n\\end{array}\\right) \\hat{{\\cal T}} = {\\cal T}^* k^2{\\cal T} +\n\\lambda\\ket{\\chi_L}\\bra{\\chi_L},\n\\end{equation}\nand the locality requirement fixes $\\lambda= -L^{-2}$, which is, of\ncourse, the usual value. Finally, the domain of $h_L$ is given by\nTheorem~\\ref{Thm:dom} as the space of $\\varphi$ with\n$\\varphi,\\varphi^\\prime\\in AC_{\\rm loc}(0,\\infty)$,\n$\\varphi^{\\prime\\prime}\\in L^2$ and satisfying\nthe well known boundary condition\n\\begin{equation}\n\\varphi(0)+L\\varphi^\\prime(0) = 0.\n\\end{equation}\n{}To summarise, all the well known properties of point interactions\nmay be derived within our formalism.\n\n\\subsection{Effective Range Approximation}\n\nIn this section, we assume throughout that $M\\not=0$ and $L\\not=0$, setting\n$p(z)=-L^{-1}+zM$ and $q(z)\\equiv 1$. 
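With this choice of $p$ and $q$ it is worth checking directly that the scattering data~(\ref{eq:ERlow}) is recovered. The following one-line computation is a sketch assuming the relation $\cot\delta_0(k)=p(k^2)/(k\,q(k^2))$ between the phase shift and the pair $(p,q)$, which is implicit in the construction of Section~3 but not restated here:

```latex
% Assumed from Section 3: \cot\delta_0(k) = p(k^2)/(k\,q(k^2)).
% With p(z) = -L^{-1} + zM and q(z) \equiv 1:
\cot\delta_0(k) = \frac{p(k^2)}{k\,q(k^2)}
                = \frac{-L^{-1} + k^2 M}{k}
                = -\frac{1}{kL} + kM .
```

Setting $M=0$ recovers the point-interaction data $-1/(kL)$, while letting $L\rightarrow\infty$ formally yields the RPI data $kM$.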
We will not explicitly\nconstruct the dilation (although this follows immediately from our\ndiscussion), but will use the results of Section~4 to read off the\ndomain and action of the GPI Hamiltonian $h_{L,M}$.\n\nUsing the results of Section~3, we find\n\\begin{equation}\n\\mho=1;\\qquad |\\Omega|=\n\\left\\{\n\\begin{array}{cl} 1+\\frac{1}{2}({\\rm sgn}\\, M+{\\rm sgn}\\, L) & L\\not=\\infty \\\\\n\\frac{1}{2}(1+{\\rm sgn}\\, M) & L=\\infty. \\end{array}\n\\right.\n\\end{equation}\nWriting $W(z)=-M(z-\\omega_1)(z-\\omega_2)$, $\\Omega$ is\nthe subset of $\\{\\omega_1,\\omega_2\\}$ lying in the left-hand\nhalf-plane, and we have $\\omega_1+\\omega_2=-M^{-1}$,\n$\\omega_1\\omega_2=(ML)^{-1}$. The residues $\\alpha_\\omega$ are\n\\begin{equation}\n\\alpha_{\\omega_1}=-\\frac{2\\omega_1}{M(\\omega_1-\\omega_2)},\\qquad\n\\alpha_{\\omega_2}=\\frac{2\\omega_2}{M(\\omega_1-\\omega_2)}.\n\\end{equation}\nIn addition, the space ${\\cal M}_1={\\rm Ran}\\, M_1$ is equal to $\\relax{\\hbox{$\\inbar\\kern-.3em{\\rm C}$}}\\ket{\\eta}$,\nwhere\n\\begin{equation}\n\\eta(k) = {\\cal N} \\frac{k}{(k^2+(k^2M-L^{-1})^2)^{1\/2}},\n\\end{equation}\nand the normalisation constant is\n\\begin{equation}\n{\\cal N}=\\left\\{\\begin{array}{cl}\n(2|M|\/\\pi)^{1\/2} & ML>0 \\\\\n(2|M|\/\\pi)^{1\/2}(1-4ML^{-1})^{1\/4} & ML<0. \\end{array}\n\\right.\n\\end{equation}\nUsing Proposition~\\ref{Prop:M1}, we obtain\n\\begin{equation}\nM_1 = \\lambda\\ket{\\eta}\\bra{\\eta} ;\n\\qquad \\lambda = \\left\\{\\begin{array}{cl}\n+1 & M<0, L<0 \\\\\n-{\\rm sgn}\\, M (1-4ML^{-1})^{-1\/2} & ML<0 \\\\\n-1 & M>0, L>0.\n\\end{array}\\right.\n\\end{equation}\n\nAccordingly, the extended position inner product space is\n$\\Pi_r={\\cal H}_r\\oplus\\relax{\\hbox{$\\inbar\\kern-.3em{\\rm C}$}}$ with $J$-inner product specified by\n$J=\\leavevmode\\hbox{\\small1\\kern-3.8pt\\normalsize1}\\oplus (-{\\rm sgn}\\, M)$. The scalar component is the coefficient of\n$\\ket{\\eta}$ in ${\\cal M}_1$. 
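For orientation, the roots $\omega_1$ and $\omega_2$ can be written out explicitly; this elementary computation (ours, from the stated sum and product of the roots) is consistent with~(\ref{eq:ome}) below:

```latex
% \omega_1 and \omega_2 solve \omega^2 + M^{-1}\omega + (ML)^{-1} = 0, so
\omega_{1,2} = \frac{-1 \pm (1-4M/L)^{1/2}}{2M} .
% The sign of the discriminant 1 - 4M/L determines whether the roots are
% real or form a complex conjugate pair; in particular 0 < L < 4M gives a
% complex pair, and the non-generic case L = 4M > 0 gives a double root.
```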
For all generic cases (i.e., all cases\nother than $L=4M>0$) Theorem~\\ref{Thm:dom} entails that the domain\nof $h_{L,M}$ is\n\\begin{equation}\nD(h_{L,M}) = \\left\\{\\left(\\begin{array}{c} \\varphi \\\\ \\Phi\n\\end{array}\\right) \\mid \\varphi,\\varphi^\\prime\\in AC_{\\rm\nloc}(0,\\infty),~\\varphi,\\varphi^{\\prime\\prime}\\in L^2;\\quad \\Phi =\n-|M|^{1\/2}\\varphi(0) \\right\\},\n\\label{eq:DhLM}\n\\end{equation}\nand that the action is\n\\begin{equation}\nh_{L,M} \\left(\\begin{array}{c} \\varphi \\\\ -|M|^{1\/2}\\varphi(0)\n\\end{array}\\right) =\n\\left(\\begin{array}{c}\n-\\varphi^{\\prime\\prime}\n\\\\ -{\\rm sgn}\\, M |M|^{-1\/2}(\\varphi^\\prime(0)+L^{-1}\\varphi(0))\n\\end{array}\\right).\n\\end{equation}\nMoreover, one may show that these equations also hold in the\nnon-generic case $L=4M>0$.\n\nIt is worth noting how this domain and action correspond to the\nscattering data~(\\ref{eq:ERlow}). Solving the equation\n$h_{L,M}(\\varphi,\\Phi)^T=k^2(\\varphi,\\Phi)^T$ for the generalised\neigenfunctions of $h_{L,M}$, we find\n$\\varphi(r)\\propto\\sin(kr+d(k))$ for some $d(k)$, and also obtain the\nrelation\n\\begin{equation}\n-{\\rm sgn}\\, M |M|^{-1\/2}(\\varphi^\\prime(0)+L^{-1}\\varphi(0))=\n-k^2|M|^{1\/2}\\varphi(0),\n\\end{equation}\nwhich entails that $k\\cot d(k) = \\varphi^\\prime(0)\/\\varphi(0) =\n-L^{-1}+k^2M$. Thus $d(k)$ is precisely the scattering data\n$\\delta_0(k)$.\n\nThe RPI models, which have scattering data $\\cot\\delta_0(k) =kM$ are\nobtained in the same way. The space ${\\cal M}_1$ is spanned by\n$\\psi_M(k) =(2|M|\/\\pi)^{1\/2}(1+(kM)^2)^{-1\/2}$, and the operator\n$M_1$ is found to be $M_1=-({\\rm sgn}\\, M)\\ket{\\psi_M}\\bra{\\psi_M}$. Thus\nthe inner product space is $\\Pi_r={\\cal H}_r\\oplus\\relax{\\hbox{$\\inbar\\kern-.3em{\\rm C}$}}$ with\n$J=\\leavevmode\\hbox{\\small1\\kern-3.8pt\\normalsize1}\\oplus(-{\\rm sgn}\\, M)$. 
They have the domain~(\\ref{eq:DhLM}) and\naction\n\\begin{equation}\nh_{{\\rm rpi},M} \\left(\\begin{array}{c} \\varphi \\\\ -|M|^{1\/2}\\varphi(0)\n\\end{array}\\right) =\n\\left(\\begin{array}{c}\n-\\varphi^{\\prime\\prime}\n\\\\ -{\\rm sgn}\\, M |M|^{-1\/2}\\varphi^\\prime(0)\n\\end{array}\\right).\n\\end{equation}\n\nLet us consider the physical Hilbert space for these models. From\nSection~4, this is constructed by projecting out the hyperbolic\ninvariant subspace, and also those eigenfunctions with negative norm\nsquared (if present). The bound states of $h_{L,M}$ are clearly\nvectors of form $(\\xi_\\omega,|M|^{1\/2})^T$ with norm squared equal to\n$-(2{\\rm Re}\\,\\omega)^{-1}-M$, where $\\omega$ is a root of\n$\\omega^2+M^{-1}\\omega+(ML)^{-1}=0$. There are four cases to consider:\n\n{\\noindent \\em Case (i): $M<0$}. $\\Pi_r$ is positive definite so no\nprojection is required.\n\n{\\noindent \\em Case (ii): $M>0$, $L<0$}. There is a unique bound\nstate with\n\\begin{equation}\n\\omega=\\frac{1+(1-4M\/L)^{1\/2}}{-2M}\n\\label{eq:ome}\n\\end{equation}\nand negative norm squared. Projecting this state out, we obtain\n\\begin{equation}\n{\\cal H}_{\\rm phys} =\n\\left\\{\\left(\\begin{array}{c} \\varphi \\\\\nM^{-1\/2}\\inner{\\xi_\\omega}{\\varphi}\\end{array}\\right)\\mid\n\\varphi\\in{\\cal H}_r \\right\\}.\n\\label{eq:HP}\n\\end{equation}\n\n{\\noindent \\em Case (iii): $M>0$, $0<L<4M$}. The roots form a complex\nconjugate pair, so the corresponding states span a hyperbolic\ninvariant subspace, which is projected out.\n\n{\\noindent \\em Case (iv): $M>0$, $L>4M$}. There are two bound\nstates with real eigenvalues. However, only the state specified\nby~(\\ref{eq:ome}) has negative norm. Projecting this out, we arrive\nat the same expression for ${\\cal H}_{\\rm phys}$ as in case (ii).\n\nRPI models are covered by Case (i) for $M<0$, and have ${\\cal H}_{\\rm\nphys}$ given by~(\\ref{eq:HP}) for $M>0$, with $\\omega=-1\/M$.\n\nThe GPI Hamiltonian acts on ${\\cal H}_{\\rm phys}$ by restriction. 
For\nexample, in case (ii) above, we have\n\\begin{eqnarray}\nD(h_{L,M}|_{{\\cal H}_{\\rm phys}}) &=& \\left\\{\n\\left(\\begin{array}{c} \\varphi \\\\\nM^{-1\/2}\\inner{\\xi_\\omega}{\\varphi}\\end{array}\\right)\\mid\n\\varphi,\\varphi^\\prime\\in AC_{\\rm loc}(0,\\infty),~\n\\varphi,\\varphi^{\\prime\\prime}\\in L^2;\\right. \\nonumber \\\\\n&&\\qquad\\qquad\\qquad\\qquad\\quad\n\\left.\\begin{array}{c} \\ \\\\ \\ \\end{array}\nM\\varphi(0)=-\\inner{\\xi_\\omega}{\\varphi} \\right\\}\n\\end{eqnarray}\non which $h_{L,M}|_{{\\cal H}_{\\rm phys}}$ acts as before. The restricted\noperator has the same continuum spectrum as $h_{L,M}$, but has no\nbound states in this case. Moreover, the property of locality is\npartially lost: it is clear that vectors of form $(\\varphi,0)^T$\nwith $\\varphi\\in C_0^\\infty(0,\\infty)$ are in ${\\cal H}_{\\rm phys}$ only if\n$\\varphi\\perp\\xi_\\omega$. However, for elements of this form in\n${\\cal H}_{\\rm phys}$, it remains the case that $h_{L,M}|_{{\\cal H}_{\\rm\nphys}}(\\varphi,0)^T= (-\\varphi^{\\prime\\prime},0)^T$. Thus the\nproperties of locality and `positivity' are not entirely compatible.\n\n\n\\subsection{Physical Interpretation}\n\nIn this section, we discuss how the effective range models\nconstructed above may be used to model Schr\\\"{o}dinger operators\n$H=-\\triangle+V$, where $V$ is smooth, spherically symmetric and\ncompactly supported within radius $a$ of the origin. Our methodology\nextends that described in~\\cite{KF}, in which the scattering length\napproximation is discussed.\n\nGiven a smooth spherically symmetric potential $V(r)$ supported\nwithin radius $a$ of the origin, we may find the `best fit' GPI model\n$h_{L,M}$ as follows. Let $u_0$ be the $S$-wave zero energy\neigenfunction, i.e., the solution to $-u_0^{\\prime\\prime}+Vu_0=0$\nwith regular boundary conditions at the origin. 
Then the arguments\nof Section 11.2 of \\cite{Newt} give the low energy parameters $L$ and\n$M$ as\n\\begin{equation}\nL= a - \\left.\\frac{u_0}{u_0^\\prime}\\right|_{r=a};\n\\label{eq:L}\n\\end{equation}\nand\n\\begin{equation}\nM = a\\left\\{ 1-\\frac{a}{L}\n+\\frac{1}{3}\\left(\\frac{a}{L}\\right)^{2}-\n\\left( 1-\\frac{a}{L}\\right)^2\n\\frac{\\int_0^a |u_0(r)|^2 dr}{a|u_0(a)|^2} \\right\\}.\n\\label{eq:M}\n\\end{equation}\nThus the scattering behaviour is $\\cot\\delta_0(k)=\n-(kL)^{-1}+kM+O(k^3)$ and the best fit GPI model in our class is\n$h_{L,M}$. We refer to equations~(\\ref{eq:L}) and~(\\ref{eq:M}) as\n{\\em fitting formulae}; equation~(\\ref{eq:L}) is the fitting formula\nemployed in \\cite{KF}. The range of energies for which the\napproximation is valid can be determined by a `believability'\nanalysis analogous to that described in \\cite{KF}. We will not do\nthis here.\n\nNote that $M$ obeys the bound\n\\begin{equation}\n-\\infty \\le M < a\\left\\{ 1-\\frac{a}{L}\n+\\frac{1}{3}\\left(\\frac{a}{L}\\right)^{2}\\right\\}.\n\\end{equation}\nMoreover, this bound is best possible: for any\n$L\\in{\\rm I\\! R}\\cup\\{\\infty\\}$ and any $M$ in the above range, one can\nclearly find a smooth function $u_0(r)$ satisfying regular boundary\nconditions at the origin, $u_0\\propto(1-r\/L)$ for $r>a$ and such\nthat~(\\ref{eq:M}) holds. Then the potential defined by\n$V(r)=u_0^{\\prime\\prime}(r)\/u_0(r)$ has $S$-wave scattering behaviour\napproximated to second order by $h_{L,M}$. The contribution to the\ntotal scattering cross section from the effective range term\ngenerally outweighs that from higher angular momenta, so the $S$-wave\nGPI model provides a second order approximation to the full\nscattering behaviour.\n\nFinally, we discuss the interpretation of the discrete spectrum of\n$h_{L,M}$. We have constructed $h_{L,M}$ so that its\nscattering behaviour matches that of a given Schr\\\"{o}dinger\noperator at low energies, $E$. 
For larger $|E|$, the approximation\nbreaks down -- in the language of \\cite{KF} we say that it is no\nlonger `believable'. Thus, deeply bound states are unlikely to be\nbelievable.\n\nComputing the exact value of $h_{top}(X_\\mathcal H)$ for $d>1$ is a hard problem and very little is known \\cite{pavlovhardsquare2012}; however, there are algorithms to compute approximating upper and lower bounds of the topological entropy of the hom-shifts \\cite{symmtricfriedlan1997,louidor2010improved}. Further, if $\\mathcal H$ is a finite connected graph with at least two edges, then $h_{top}(X_\\mathcal H)>0$:\n\n\\begin{prop}\\label{proposition: hom-space positive entropy}\nLet $\\mathcal H$ be a finite graph with distinct vertices $a, b$ and $c$ such that $a\\sim_\\mathcal H b$ and $b\\sim_\\mathcal H c$. Then $h_{top}(X_{\\mathcal H})\\geq\\frac{\\log{2}}{2}$.\n\\end{prop}\n\\begin{proof}\n\nIt is sufficient to see this for a graph $\\mathcal H$ with exactly three vertices $a$, $b$ and $c$ such that $a\\sim_\\mathcal H b$ and $b\\sim_\\mathcal H c$, since $X_{\\mathcal H^\\prime}\\subset X_{\\mathcal H}$ whenever $\\mathcal H^\\prime$ is a subgraph of $\\mathcal H$. For such a graph any configuration in $X_\\mathcal H$ is composed of $b$ on one partite class of $\\mathbb{Z}^d$ and a free choice between $a$ and $c$ for vertices on the other partite class. Then\n$$|\\mathcal L_{B_n}(X_\\mathcal H)|=2^{{\\lfloor\\frac{(2n+1)^d}{2}\\rfloor}}+2^{{\\lceil\\frac{(2n+1)^d}{2}\\rceil}},$$\nso that\n$$h_{top}(X_{\\mathcal H})=\\lim_{n\\rightarrow\\infty}\\frac{1}{(2n+1)^d}\\log|\\mathcal L_{B_n}(X_\\mathcal H)|=\\frac{\\log{2}}{2}.$$\n\\end{proof}\n\nA shift space $X$ is called \\emph{entropy minimal} if for all shift spaces $Y\\subsetneq X$, $h_{top}(X)>h_{top}(Y)$. In other words, a shift space $X$ is entropy minimal if forbidding any word causes a drop in entropy. 
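The pattern count appearing in the proof of Proposition~\ref{proposition: hom-space positive entropy} can be sanity-checked numerically in dimension $d=1$ via a transfer-matrix computation. The following is an illustrative sketch (the code and helper names are ours, not from the paper):

```python
import numpy as np

# Adjacency matrix of the three-vertex path a - b - c (so a ~ b and b ~ c).
A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]])

def num_patterns(m):
    # Number of patterns on an interval of m sites in dimension d = 1,
    # i.e. the number of walks visiting m vertices: the sum of all
    # entries of A^(m-1).
    return int(np.linalg.matrix_power(A, m - 1).sum())

def closed_form(m):
    # b occupies one partite class; a free choice of a or c on the other.
    return 2 ** (m // 2) + 2 ** ((m + 1) // 2)

for m in range(1, 16):
    assert num_patterns(m) == closed_form(m)
```

The normalised logarithm $\log(\texttt{num\_patterns}(m))/m$ then converges to $\frac{\log 2}{2}$, in line with the proposition.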
From \\cite{quastrow2000} we know that every shift space contains an entropy minimal shift space with the same entropy; that paper also provides a characterisation of same entropy factor maps on entropy minimal shifts of finite type.\n\nOne of the main results of this paper is the following:\n\\begin{thm}\\label{theorem:four cycle free entropy minimal}\nLet $\\mathcal H$ be a connected four-cycle free graph. Then $X_\\mathcal H$ is entropy minimal.\n\\end{thm}\nFor $d=1$ all irreducible shifts of finite type are entropy minimal \\cite{LM}. A necessary condition for the entropy minimality of $X_\\mathcal H$ is that $\\mathcal H$ is connected.\n\\begin{prop}\\label{proposition:entropy requires connectivity}\nSuppose $\\mathcal H$ is a finite graph with connected components $\\mathcal H_1, \\mathcal H_2, \\ldots \\mathcal H_r$. Then $h_{top}(X_\\mathcal H)=\\max_{1\\leq i \\leq r}h_{top}(X_{\\mathcal H_i})$.\n\\end{prop}\nThis follows from the observation that\n$$\\max_{1\\leq i \\leq r}|\\mathcal L_{B_n}({X_{\\mathcal H_i}})|\\leq |\\mathcal L_{B_n}({X_\\mathcal H})|= \\sum_{i=1}^r|\\mathcal L_{B_n}({X_{\\mathcal H_i}})|\\leq r\\max_{1\\leq i \\leq r}|\\mathcal L_{B_n}({X_{\\mathcal H_i}})|.$$\n\n\n\n\n\n\\section{Thermodynamic Formalism}\\label{section:thermodynamic formalism}\nHere we give a brief introduction to thermodynamic formalism. For more details one can refer to \\cite{Rue,walters-book}.\n\nBy $\\mu$ we will always mean a shift-invariant Borel probability measure on a shift space $X$. The \\emph{support} of $\\mu$, denoted by $supp(\\mu)$, is the intersection of all closed sets $Y \\subset X$ for which $\\mu(Y)= 1$. 
Note that $supp(\\mu)$ is a shift space as well.\nThe \\emph{measure theoretic entropy} is\n\\begin{equation*}\nh_\\mu:=\\lim_{i \\rightarrow \\infty}\\frac{1}{|D_i|}H^{D_i}_{\\mu},\n\\end{equation*}\n\\noindent where $H^{D_i}_{\\mu}$ is the Shannon-entropy of $\\mu$ with respect to the partition of $X$ generated by the cylinder sets on $D_i$, which is defined by\n\\begin{equation*}\nH^{D_i}_{\\mu}:=\\sum_{a\\in \\mathcal L_{D_i}(X)}-\\mu([a]_{D_i})\\log{\\mu([a]_{D_i})},\n\\end{equation*} with the understanding that $0\\log 0=0$.\n\nA shift-invariant probability measure $\\mu$ is a \\emph{measure of maximal entropy} of $X$ if the maximum of $\\nu \\mapsto h_\\nu$ over all shift-invariant probability measures on $X$ is attained at $\\mu$. The existence of measures of maximal entropy follows from upper semi-continuity of the function $\\nu \\mapsto h_\\nu$ with respect to the weak-$*$ topology.\n\n\nFurther, the well-known \\emph{variational principle} for topological entropy of $\\mathbb{Z}^d$-actions asserts that if $\\mu$ is a measure of maximal entropy for a $\\mathbb{Z}^d$-shift space $X$, then $h_{top}(X)=h_\\mu$.\n\nThe following is a well-known characterisation of entropy minimality (it is used for instance in the proof of Theorem 4.1 in \\cite{meestersteif2001}):\n\\begin{prop}\n\\label{proposition:entropyviamme}\nA shift space $X$ is entropy minimal if and only if every measure of maximal entropy for $X$ is fully supported.\n\\end{prop}\nWe sketch the argument: Suppose $X$ is entropy minimal and $\\mu$ is a measure of maximal entropy for $X$. Then by the variational principle for $X$ and $supp(\\mu)$ we get\n$$h_{top}(X)=h_\\mu\\leq h_{top}(supp(\\mu))\\leq h_{top}(X)$$\nproving that $supp(\\mu)=X$. To prove the converse, suppose for contradiction that $X$ is not entropy minimal and consider $Y\\subsetneq X$ such that $h_{top}(X)= h_{top}(Y)$. 
Then by the variational principle there exists a measure $\\mu$ on $Y$ such that $h_\\mu= h_{top}(X)$. Thus $\\mu$ is a measure of maximal entropy for $X$ which is not fully supported.\n\nMore is known if $X$ is a nearest neighbour shift of finite type; this brings us to Markov random fields, which we introduce next.\n\nGiven a set $A\\subset \\mathbb{Z}^d$ we denote the \\emph{$r$-boundary} of $A$ by $\\partial_r A$, that is,\n$$\\partial_r A=\\{w\\in \\mathbb{Z}^d\\setminus A\\:\\Big \\vert\\: \\|w-v\\|_1\\leq r \\text{ for some }v\\in A\\}.$$\nThe \\emph{1-boundary} will be referred to as the \\emph{boundary} and denoted by $\\partial A$.\nA \\emph{Markov random field} on $\\mathcal{A}^{\\mathbb{Z}^d}$ is a Borel probability measure $\\mu$ with the property that\nfor all finite $A, B \\subset \\mathbb{Z}^d$ such that $\\partial A \\subset B \\subset A^{c}$ and $a \\in {\\mathcal A}^A, b \\in {\\mathcal A}^B$ satisfying $\\mu([b]_B)>0$,\n\\begin{equation*}\n\\mu([a]_A\\;\\Big\\vert\\;[b]_B)= \\mu([a]_A\\;\\Big\\vert\\;[b]_{ \\partial A}).\n\\end{equation*}\n\nIn general Markov random fields are defined over graphs much more general than $\\mathbb{Z}^d$; however, we restrict to the $\\mathbb{Z}^d$ setting in this paper.\n\nA \\emph{uniform Markov random field} is a Markov random field $\\mu$ such that, in addition,\n\\begin{equation*}\n\\mu([a]_A\\;\\Big\\vert\\;[b]_{ \\partial A})=\\frac{1}{n_{A,b|_{\\partial A}}}\n\\end{equation*}\nwhere $n_{A,b|_{\\partial A}}=|\\{a\\in {\\mathcal A}^A\\:|\\: \\mu([a]_A\\cap [b]_{\\partial A})>0\\}|$.\n\nFollowing \\cite{petersen_schmidt1997, schmidt_invaraint_cocycles_1997}, we denote by $\\Delta_X$ the \\emph{homoclinic equivalence relation} of a shift space $X$, which is given by\n\\begin{equation*}\n\\Delta_X := \\{(x,y)\\in X\\times X\\;|\\; x_{\\vec i}=y_{\\vec i} \\text{ for all but finitely many } \\vec i\\in \\mathbb{Z}^d\\}.\n\\end{equation*}\n\nWe say that a measure $\\mu$ is \\emph{adapted} with respect to a shift space 
$X$ if\n$supp(\\mu)\\subset X$ and\n\\begin{equation*}\nx\\in supp(\\mu) \\Longrightarrow \\{y\\in X\\:|\\: (x,y)\\in \\Delta_X\\}\\subset supp(\\mu).\n\\end{equation*}\n\nTo illustrate this definition, let $X\\subset \\{0,1\\}^{\\mathbb{Z}}$ consist of configurations in $X$ in which at most a single $1$ appears. $X$ is uniquely ergodic; the delta-measure $\\delta_{0^{\\infty}}$ is the only shift-invariant measure on $X$. But\n$$supp(\\delta_{0^\\infty})= \\{0^\\infty\\}\\subsetneq \\{y\\in X\\;|\\; 0^\\infty_i= y_i \\text{ for all but finitely many } i\\in \\mathbb{Z}\\}=X,$$\nproving that it is not adapted. On the other hand, since the homoclinic relation of ${\\mathcal A}^{\\mathbb{Z}^d}$ is minimal, meaning that for all $x\\in {\\mathcal A}^{\\mathbb{Z}^d}$\n$$\\overline{\\{y\\in {\\mathcal A}^{\\mathbb{Z}^d}\\;|\\; y_{\\vec i}=x_{\\vec i} \\text{ for all but finitely many } {\\vec{i}}\\in \\mathbb{Z}^d\\}}= {\\mathcal A}^{\\mathbb{Z}^d},$$\nit follows that a probability measure on ${\\mathcal A}^{\\mathbb{Z}^d}$ is adapted if and only if it is fully supported.\n\n\n\nThe relationship between measures of maximal entropy and Markov random fields is established by the following theorem. This is a special case of the Lanford-Ruelle theorem \\cite{lanfruell,Rue}.\n\n\\begin{thm} All measures of maximal entropy on a nearest neighbour shift of finite type $X$ are shift-invariant uniform Markov random fields $\\mu$ adapted to $X$.\\label{thm:equiGibbs}\n\\end{thm}\n\nThe converse is also true under further mixing assumptions on the shift space $X$ (called the D-condition). The full strength of these statements is obtained by looking at \\emph{equilibrium states} instead of measures of maximal entropy. The measures obtained there are not uniform Markov random fields, rather Markov random fields where the conditional probabilities are weighted via an \\emph{interaction} giving rise to \\emph{Gibbs states}. 
Uniform Markov random fields are Gibbs states with interaction zero.\n\nWe will often restrict our proofs to the ergodic case. We can do so via the following standard facts implied by Theorem $14.15$ in \\cite{Georgii} and Theorem 4.3.7 in \\cite{kellerequ1998}:\n\n\\begin{thm}\\label{theorem: ergodic decomposition of markov random fields}\nLet $\\mu$ be a shift-invariant uniform Markov random field adapted to a shift space $X$. Let its ergodic decomposition be given by a measurable map $x\\longrightarrow \\mu_x$ on $X$, that is, $\\mu= \\int_X \\mu_x d\\mu$. Then $\\mu$-almost everywhere the measures $\\mu_x$ are shift-invariant uniform Markov random fields adapted to $X$ such that $supp(\\mu_x)\\subset supp(\\mu)$. Moreover $\\int h_{\\mu_x} d\\mu(x)= h_\\mu$.\n\\end{thm}\n\n\nWe will prove the following:\n\\begin{thm}\\label{theorem: MRF fully supported }\nLet $\\mathcal H$ be a connected four-cycle free graph. Then every ergodic probability measure adapted to $X_\\mathcal H$ with positive entropy is fully supported.\n\\end{thm}\n\nThis implies Theorem \\ref{theorem:four cycle free entropy minimal} by the following: The Lanford-Ruelle theorem implies that every measure of maximal entropy on $X_\\mathcal H$ is a uniform shift-invariant Markov random field adapted to $X_\\mathcal H$. By Proposition \\ref{proposition: hom-space positive entropy} and the variational principle we know that these measures have positive entropy. By Theorems \\ref{theorem: ergodic decomposition of markov random fields} and \\ref{theorem: MRF fully supported } they are fully supported. Finally by Proposition \\ref{proposition:entropyviamme}, $X_\\mathcal H$ is entropy minimal.\n\nAlternatively, the conclusion of Theorem \\ref{theorem: MRF fully supported } can be obtained via some strong mixing conditions on the shift space; we will describe one such assumption. 
A shift space $X$ is called \\emph{strongly irreducible} if there exists $g>0$ such that for all $x, y \\in X$ and $A, B\\subset \\mathbb{Z}^d$ satisfying $\\min_{\\vec i \\in A, \\vec j \\in B}\\|\\vec i - \\vec j\\|_1\\geq g$, there exists $z\\in X$ such that $z|_{A}= x|_A$ and $z|_B= y|_B$. For such a space, the homoclinic relation is minimal implying the conclusion of Theorem \\ref{theorem: MRF fully supported } and further, that every probability measure adapted to $X$ is fully supported. Note that this does not prove that $X$ is entropy minimal unless we assume that $X$ is a nearest neighbour shift of finite type. Such an argument is used in the proof of Lemma 4.1 in \\cite{meestersteif2001} which implies that every strongly irreducible shift of finite type is entropy minimal. A more combinatorial approach was used in \\cite{Schraudner2010minimal} to show that general shift spaces with a weaker mixing property called uniform filling are entropy minimal.\n\n\n\\section{The Pivot Property}\\label{section: the pivot property}\nA \\emph{pivot} in a shift space $X$ is a pair of configurations $(x,y)\\in X$ such that $x$ and $y$ differ exactly at a single site. A subshift $X$ is said to have \\emph{the pivot property} if for all distinct $(x,y)\\in \\Delta_X$ there exists a finite sequence of configurations $x^{(1)}=x, x^{(2)},\\ldots, x^{(k)}=y \\in X$ such that each $(x^{(i)}, x^{(i+1)})$ is a pivot. 
In this case we say $x^{(1)}=x, x^{(2)},\\ldots, x^{(k)}=y$ is a \\emph{chain of pivots} from $x$ to $y$.\nHere are some examples of subshifts which have the pivot property:\n\n\\begin{enumerate}\n\\item Any subshift with a trivial homoclinic relation, that is, the homoclinic classes are singletons.\n\\item Any subshift with a safe symbol\\footnote{A shift space $X\\subset {\\mathcal A}^{\\mathbb{Z}^d}$ has a \\emph{safe symbol} $\\star$ if for all $x\\in X$ and $A\\subset \\mathbb{Z}^d$ the configuration $z\\in {\\mathcal A}^{\\mathbb{Z}^d}$ given by\n\\begin{equation*}\nz_{\\vec i}:=\\begin{cases}\nx_{\\vec i} &\\text{ if } \\vec i \\in A\\\\\n\\star &\\text{ if } \\vec i \\in A^c\n\\end{cases}\n\\end{equation*}\nis also an element of $X$.}.\n\\item The hom-shifts $X_{C_r}$. This was proved for $r\\neq 4$ in \\cite{chandgotia2013Markov}; the result for $r=4$ is a special case of Proposition \\ref{proposition: frozenfoldpivot}.\n\\item $r$-colorings of $\\mathbb{Z}^d$ with $r\\geq 2d+2$. (This is well known; see for instance Subsection 3.2 of \\cite{chandgotia2013Markov}.)\n\\item\\label{item: pivot property list number 5} $X_\\mathcal H$ when $\\mathcal H$ is dismantlable~\\cite{brightwell2000gibbs}.\n\\end{enumerate}\nWe generalise the class of examples given by (\\ref{item: pivot property list number 5}) in Proposition \\ref{proposition: frozenfoldpivot}. It is not true that all hom-shifts have the pivot property.\n\\begin{figure}[h]\n\\centering\n\\includegraphics[angle=0,\nwidth=.1\\textwidth]{fivecolouring.pdf}\\caption{Frozen Pattern}\n\\label{Figure: Five colour}\n\\end{figure}\nThe following was observed by Brian Marcus: Recall that $K_n$ denotes the complete graph with $n$ vertices. $X_{K_4}$ and $X_{K_5}$ do not possess the pivot property when the dimension is two. For instance consider a configuration in $X_{K_5}$ which is obtained by tiling the plane with the pattern given in Figure \\ref{Figure: Five colour}. 
It is clear that the symbols in the box can be interchanged but no individual symbol can be changed. Therefore $X_{K_5}$ does not have the pivot property. However both $X_{K_4}$ and $X_{K_5}$ satisfy a more general property as discussed in Subsection \\ref{subsection: Hom-shifts and the pivot property}.\n\nThe following theorem is another main result in this paper.\n\\begin{thm}\\label{theorem: pivot property for four cycle free}\nFor all four-cycle free graphs $\\mathcal H$, $X_\\mathcal H$ has the pivot property.\n\\end{thm}\nIt is sufficient to prove this theorem for four-cycle free graphs $\\mathcal H$ which are connected because of the following proposition:\n\\begin{prop}\\label{proposition: pivot for disconnected}\nLet $X_1, X_2, \\ldots, X_n$ be shift spaces on disjoint alphabets such that each of them has the pivot property. Then $\\cup_{i=1}^n X_i$ also has the pivot property.\n\\end{prop}\nThis is true since $(x, y)\\in \\Delta_{\\cup_{i=1}^n X_i}$ implies $(x, y)\\in \\Delta_{X_i}$ for some $1\\leq i \\leq n$.\n\n\\section{Folding, Entropy Minimality and the Pivot Property}\\label{section:Folding, Entropy Minimality and the Pivot Property} Given a graph $\\mathcal H$ we say that a vertex $v$ \\emph{folds} into a vertex $w$ if and only if $u \\sim_\\mathcal H v$ implies $u \\sim_\\mathcal H w$. In this case the graph $\\mathcal H\\setminus \\{v\\}$ is called a \\emph{fold} of $\\mathcal H$. The folding gives rise to a `retract' from $\\mathcal H$ to $\\mathcal H\\setminus\\{v\\}$, namely the graph homomorphism from $\\mathcal H$ to $\\mathcal H\\setminus \\{v\\}$ which is the identity on $\\mathcal H\\setminus \\{v\\}$ and sends $v$ to $w$. This was introduced in \\cite{nowwinkler} to help characterise cop-win graphs and used in \\cite{brightwell2000gibbs} to establish many properties which are preserved under `folding' and `unfolding'. 
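To make the definition of a fold concrete, here is a small worked example (a sketch; the vertex labels $0, 1, 2, 3$ are ours) showing that the four-cycle $C_4$ folds down to a single edge in two steps:

```latex
% Folding C_4 (vertices 0, 1, 2, 3; edges 01, 12, 23, 30) to an edge.
N_{C_4}(3)=\\{0,2\\}=N_{C_4}(1)
  \\quad\\Longrightarrow\\quad 3 \\text{ folds into } 1, \\text{ leaving the path } 0-1-2;
N(0)=\\{1\\}=N(2) \\text{ in this path}
  \\quad\\Longrightarrow\\quad 0 \\text{ folds into } 2, \\text{ leaving the edge } (1,2).
```

This is consistent with the observation, used later in this section, that $C_4$ folds into an edge.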
Given a finite tree $\\mathcal H$ with more than two vertices, note that a leaf vertex (vertex of degree $1$) can always be folded to some other vertex of the tree. Thus, starting with $\\mathcal H$, there exists a sequence of folds resulting in a single edge. In fact, using a similar argument, we can prove the following proposition.\n\n\\begin{prop}\\label{proposition:folding trees into other trees}\nLet $\\mathcal H\\subset \\mathcal H^\\prime$ be trees. Then there is a graph homomorphism $f: \\mathcal H^\\prime \\longrightarrow \\mathcal H$ such that $f|_{\\mathcal H}$ is the identity map.\n\\end{prop}\n\nTo show this, first note that if $\\mathcal H\\subsetneq\\mathcal H^\\prime$ then there is a leaf vertex in $\\mathcal H^\\prime$ which is not in $\\mathcal H$. This leaf vertex can be folded into some other vertex in $\\mathcal H^\\prime$. Thus, by induction on $|\\mathcal H^\\prime \\setminus \\mathcal H|$, we can prove that there is a sequence of folds from $\\mathcal H^\\prime$ to $\\mathcal H$. Corresponding to this sequence of folds we obtain a graph homomorphism from $\\mathcal H^\\prime$ to $\\mathcal H$ which is the identity on $\\mathcal H$.\n\nHere we consider a related notion for shift spaces. Given a nearest neighbour shift of finite type $X\\subset {\\mathcal A}^{\\mathbb{Z}^d}$, \\emph{the neighbourhood} of a symbol $v\\in {\\mathcal A}$ is given by\n$$N_X(v):=\\{a \\in {\\mathcal A}^{\\partial \\vec 0}\\:|\\: [v]_{\\vec 0}\\cap [a]_{\\partial \\vec 0}\\in \\mathcal L_{D_1}(X)\\},$$\nthat is, the collection of all patterns which can `surround' $v$ in $X$. We will say that $v$ \\emph{config-folds} into $w$ in $X$ if $N_X(v)\\subset N_X(w)$. In such a case we say that $X$ \\emph{config-folds} to $X\\cap({\\mathcal A}\\setminus \\{v\\})^{\\mathbb{Z}^d}$. Note that $X\\cap({\\mathcal A}\\setminus \\{v\\})^{\\mathbb{Z}^d}$ is obtained by forbidding $v$ from $X$ and hence it is also a nearest neighbour shift of finite type.
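For a quick illustration of config-folding (a sketch using the hard square shift, i.e.\\ the nearest neighbour shift of finite type on the alphabet $\\{0,1\\}$ in which two adjacent $1$s are forbidden): no constraint is placed on the neighbours of a $0$, while every neighbour of a $1$ must be $0$, so

```latex
N_X(1)=\\{a\\in \\{0,1\\}^{\\partial \\vec 0}\\:|\\: a_{\\vec j}=0 \\text{ for all } \\vec j\\in \\partial \\vec 0\\}
\\subset \\{0,1\\}^{\\partial \\vec 0}= N_X(0).
```

Thus $1$ config-folds into $0$, and $X$ config-folds to the fixed point $\\{0\\}^{\\mathbb{Z}^d}$; here $0$ is a safe symbol.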
Also, if $X=X_\\mathcal H$ for some graph $\\mathcal H$, then $v$ config-folds into $w$ in $X_\\mathcal H$ if and only if $v$ folds into $w$ in $\\mathcal H$. Thus if $\\mathcal H$ is a tree then there is a sequence of folds starting at $X_\\mathcal H$ resulting in the two checkerboard configurations with two symbols (the vertices of the edge which $\\mathcal H$ folds into). This property is weaker than the notion of folding introduced in \\cite{chandgotiahammcliff2014}.\n\nThe main thrust of this property in our context is the following: if $v$ config-folds into $w$ in $X$, then given any $x\\in X$, every appearance of $v$ in $x$ can be replaced by $w$ to obtain another configuration in $X$. This replacement defines a factor (surjective, continuous and shift-invariant) map $f: X\\longrightarrow X\\cap({\\mathcal A}\\setminus \\{v\\})^{\\mathbb{Z}^d}$ given by\n\\begin{equation*}\n(f(x))_{\\vec i}:=\\begin{cases}\nx_{\\vec i}&\\text{ if } x_{\\vec i}\\neq v\\\\\nw&\\text{ if } x_{\\vec i}= v.\n\\end{cases}\n\\end{equation*}\nNote that the map $f$ defines a `retract' from $X$ to $X\\cap({\\mathcal A}\\setminus \\{v\\})^{\\mathbb{Z}^d}$. Frequently we will config-fold more than one symbol at once (especially in Section \\ref{section: Proof of the main theorems}):\n\nDistinct symbols $v_1, v_2, \\ldots, v_n$ \\emph{config-fold disjointly} into $w_1, w_2, \\ldots, w_n$ in $X$ if $v_i$ config-folds into $w_i$ and $v_i\\neq w_j$ for all $1\\leq i, j \\leq n$. In this case the symbols $v_1, v_2, \\ldots, v_n$ can be replaced by $w_1, w_2, \\ldots, w_n$ simultaneously for all $x \\in X$. Suppose $v_1, v_2, \\ldots, v_n$ is a maximal set of symbols which can be config-folded disjointly in $X$.
Then $X\\cap({\\mathcal A}\\setminus \\{v_1, v_2, \\ldots, v_n\\})^{\\mathbb{Z}^d}$ is called a \\emph{full config-fold} of $X$.\n\nFor example, consider a tree $\\mathcal H:=(\\mathcal V,\\mathcal E)$ where $\\mathcal V:=\\{v_1, v_2, v_3, \\ldots, v_{n+1}\\}$ and $\\mathcal E:=\\{(v_i, v_{n+1})\\:|\\: 1\\leq i \\leq n\\}$. For all $1\\leq i \\leq n$, $\\mathcal V\\setminus \\{v_i, v_{n+1}\\}$ is a maximal set of symbols which config-folds disjointly in $X_\\mathcal H$, resulting in the checkerboard patterns with the symbols $v_i$ and $v_{n+1}$. Thus the full config-fold of a shift space is not necessarily unique. However, it is unique up to conjugacy:\n\n\\begin{prop}\\label{Proposition: Uniqueness of full config-fold}\nThe full config-fold of a nearest neighbour shift of finite type is unique up to conjugacy via a change of the alphabet.\n\\end{prop}\nThe ideas for the following proof come essentially from the proof of Theorem 4.4 in \\cite{brightwell2000gibbs} and discussions with Prof.\\ Brian Marcus.\n\\begin{proof}\nLet $X\\subset {\\mathcal A}^{\\mathbb{Z}^d}$ be a nearest neighbour shift of finite type and\n$$M:=\\{v\\in {\\mathcal A} \\:|\\: \\text{ for all }w\\in {\\mathcal A},\\ v \\text{ config-folds into }w \\Longrightarrow w \\text{ config-folds into }v\\}.$$\nThere is a natural equivalence relation $\\equiv$ on $M$ given by $v\\equiv w$ if $v$ and $w$ config-fold into each other. Let $A_1, A_2, A_3, \\ldots, A_r\\subset M$ be the corresponding partition. Clearly for all distinct $v, w\\in M$, $v$ can be config-folded into $w$ if and only if $v, w\\in A_i$ for some $i$. It follows that $A\\subset A_i$ can be config-folded disjointly if and only if $\\emptyset\\neq A\\neq A_i$.\n\nLet $v\\in {\\mathcal A}\\setminus M$. We will prove that $v$ config-folds to a symbol in $M$. By the definition of $M$ there exists $v_1\\in {\\mathcal A}$ such that $N_X(v)\\subsetneq N_X(v_1)$.
If $v_1\\in M$ then we are done; otherwise, choose $v_2\\in {\\mathcal A}$ such that $N_X(v_1)\\subsetneq N_X(v_2)$. Continuing this process recursively, we can find a sequence $v= v_0, v_1, v_2, \\ldots, v_n$ such that $N_X(v_{i-1})\\subsetneq N_X(v_i)$ for all $1\\leq i \\leq n$ and $v_n\\in M$. Thus $v$ config-folds into $v_n$, a symbol in $M$. Further, if $v$ config-folds to a symbol in $A_i$, it can config-fold to all the symbols in $A_i$. Therefore $B$ is a maximal subset of symbols in ${\\mathcal A}$ which can be config-folded disjointly if and only if $B=\\cup_{i=1}^rB_i\\cup ({\\mathcal A}\\setminus M)$ where $B_i\\subset A_i$ and $|A_i\\setminus B_i|=1$. Let $B'\\subset {\\mathcal A}$ be another such maximal subset, ${\\mathcal A}\\setminus B:=\\{b_1, b_2, \\ldots, b_r\\}$ and ${\\mathcal A}\\setminus B':=\\{b'_1, b'_2, \\ldots, b'_r\\}$ where $b_i, b'_i\\in A_i$. Then the map\n$$f: X\\cap ({\\mathcal A}\\setminus B)^{\\mathbb{Z}^d} \\longrightarrow X\\cap ({\\mathcal A}\\setminus B')^{\\mathbb{Z}^d} \\text{ given by } f(x) := y\\text{ where } y_{\\vec i}=b'_j \\text{ whenever }x_{\\vec i}=b_j$$\nis the required change of alphabet between the two full config-folds of $X$. \\end{proof}\n\nLet $X\\cap({\\mathcal A}\\setminus \\{v_1, v_2, \\ldots, v_n\\})^{\\mathbb{Z}^d}$ be a \\emph{full config-fold} of $X$ where $v_i$ config-folds into $w_i$ for all $1\\leq i\\leq n$. Consider $f_X: {\\mathcal A} \\longrightarrow{\\mathcal A}\\setminus \\{v_1, v_2, \\ldots, v_n\\}$ given by\n\\begin{equation*}\nf_X(v):=\\begin{cases}\nv&\\text{ if } v\\neq v_j \\text{ for all }1\\leq j\\leq n\\\\\nw_j&\\text{ if } v= v_j\\text{ for some }1\\leq j \\leq n.\n\\end{cases}\n\\end{equation*}\n\\noindent This defines a factor map $f_X: X\\longrightarrow X\\cap({\\mathcal A}\\setminus \\{v_1, v_2, \\ldots, v_n\\})^{\\mathbb{Z}^d}$ given by $(f_X(x))_{{\\vec{i}}}:= f_X(x_{\\vec{i}})$ for all ${\\vec{i}} \\in \\mathbb{Z}^d$.
$f_X$ denotes both the factor map and the map on the alphabet; it should be clear from the context which function is being used.\n\n\n\nIn many cases we will fix a configuration on a set $A\\subset \\mathbb{Z}^d$ and apply a config-fold on the rest. Hence we define the map $f_{X,A}: X\\longrightarrow X$ given by\n\\begin{equation*}\n(f_{X,A}(x))_{\\vec i}:=\\begin{cases}\nx_{\\vec i}&\\text{ if } \\vec i \\in A\\\\\nf_X(x_{\\vec i})&\\text{ otherwise.}\n\\end{cases}\n\\end{equation*}\n\nThe map $f_{X,A}$ can be extended beyond $X$:\n\n\\begin{prop}\\label{prop: folding_fixing_a_set}\nLet $X\\subset Y$ be nearest neighbour shifts of finite type, $Z$ be a full config-fold of $X$ and $y\\in Y$ such that for some $A\\subset \\mathbb{Z}^d$, $y|_{A^c\\cup\\partial (A^c)}\\in \\mathcal L_{A^c\\cup\\partial (A^c)}(X)$. Then the configuration $z$ given by\n\\begin{equation*}\nz_{\\vec i}:=\\begin{cases}\ny_{\\vec i}&\\text{ if } \\vec i \\in A\\\\\nf_X(y_{\\vec i})&\\text{ otherwise}\n\\end{cases}\n\\end{equation*}\nis an element of $Y$. Moreover, $z|_{A^c}\\in \\mathcal L_{A^c}(Z)$.\n\\end{prop}\nAbusing the notation, in such cases we shall denote the configuration $z$ by $f_{X, A}(y)$.\n\nIf $A^c$ is finite, then $f_{X,A}$ changes only finitely many coordinates. These changes can be applied one by one, that is, there is a chain of pivots in $Y$ from $y$ to $f_{X,A}(y)$.\n\nA nearest neighbour shift of finite type which cannot be config-folded is called a \\emph{stiff shift}. We know from Theorem 4.4 in \\cite{brightwell2000gibbs} that all the stiff graphs obtained by a sequence of folds of a given graph are isomorphic.
By Proposition \\ref{Proposition: Uniqueness of full config-fold}, the corresponding result for nearest neighbour shifts of finite type immediately follows:\n\\begin{prop}\\label{proposition:uniqueness of stiff shifts}\nThe stiff shift obtained by a sequence of config-folds starting with a nearest neighbour shift of finite type is unique up to conjugacy via a change of the alphabet.\n\\end{prop}\n\nStarting with a nearest neighbour shift of finite type $X$, the \\emph{fold-radius} of $X$ is the smallest number of full config-folds required to obtain a stiff shift. If $\\mathcal H$ is a tree, then the fold-radius of $X_\\mathcal H$ is equal to\n$$\\left\\lfloor\\frac{diameter(\\mathcal H)}{2}\\right\\rfloor.$$\nThus, for every nearest neighbour shift of finite type $X$, there is a sequence of full config-folds (not necessarily unique) which starts at $X$ and ends at a stiff shift of finite type. Let the fold-radius of $X$ be $r$ and $X= X_0, X_1, X_2, \\ldots, X_r$ be a sequence of full config-folds where $X_r$ is stiff. This generates a sequence of maps $f_{X_i}:X_{i}\\longrightarrow X_{i+1}$ for all $0\\leq i \\leq r-1$. In many cases we will fix a pattern on $D_n$ or $D_n^c$ and apply these maps on the rest of the configuration.
Consider the maps $I_{X,n}:X\\longrightarrow X$ and $O_{X,n}:X\\longrightarrow X$ (for $n>r$) given by\n\\begin{equation*}\nI_{X,n}(x):=f_{X_{r-1},D_{n+r-1} }\\left(f_{X_{r-2}, D_{n+r-2}}\\left(\\ldots\\left(f_{X_{0}, D_n}(x)\\right)\\ldots\\right)\\right)\\text{ (Inward Fixing Map)}\n\\end{equation*}\nand\n\\begin{eqnarray*}\nO_{X,n}(x):=f_{X_{r-1}, D_{n-r+1}^c }\\left(f_{X_{r-2}, D_{n-r+2}^c}\\left(\\ldots\\left(f_{X_{0}, D_n^c}(x)\\right)\\ldots\\right)\\right)\\text{ (Outward Fixing Map)}.\n\\end{eqnarray*}\nSimilarly, we consider a map which does not fix anything, $F_X: X\\longrightarrow X_r$, given by\n\\begin{eqnarray*}\nF_X(x):= f_{X_{r-1}}\\left(f_{X_{r-2}}\\left(\\ldots\\left(f_{X_{0}}(x)\\right)\\ldots\\right)\\right).\n\\end{eqnarray*}\nNote that $D_k\\cup \\partial D_k= D_{k+1}$ and $D_k^c\\cup \\partial (D_k^c)=D_{k-1}^c$. This, along with repeated application of Proposition \\ref{prop: folding_fixing_a_set}, implies that the images of $I_{X,n}$ and $O_{X,n}$ lie in $X$. This also implies the following proposition:\n\n\\begin{prop}[The Onion Peeling Proposition]\\label{prop: folding_ to _ stiffness_fixing_a_set}\nLet $X\\subset Y$ be nearest neighbour shifts of finite type, $r$ be the fold-radius of $X$, $Z$ be a stiff shift obtained by a sequence of config-folds starting with $X$ and $y^1, y^2\\in Y$ such that $y^1|_{D_{n-1}^c}\\in \\mathcal L_{D_{n-1}^c}(X)$ and $y^2|_{D_{n+1}}\\in \\mathcal L_{D_{n+1}}(X)$. Let $z^1, z^2\\in Y$ be given by\n\\begin{eqnarray*}\nz^1&:=&f_{X_{r-1},D_{n+r-1} }\\left(f_{X_{r-2}, D_{n+r-2}}\\left(\\ldots\\left(f_{X_{0}, D_n}(y^1)\\right)\\ldots\\right)\\right)\\\\\nz^2&:=&f_{X_{r-1}, D_{n-r+1}^c }\\left(f_{X_{r-2},D_{n-r+2}^c}\\left(\\ldots\\left(f_{X_{0}, D_n^c}(y^2)\\right)\\ldots\\right)\\right)\\text{ for }n>r.\n\\end{eqnarray*}\nThen $z^1|_{D_{n+r-1}^c}\\in \\mathcal L_{D_{n+r-1}^c}(Z)$ and $z^2|_{D_{n-r+1}}\\in \\mathcal L_{D_{n-r+1}}(Z)$.
If $y^1, y^2\\in X$, then in addition\n\\begin{eqnarray*}\nz^1|_{D_{n+r-1}^c}&=&F_X(y^1)|_{D_{n+r-1}^c}\\text{ and}\\\\\nz^2|_{D_{n-r+1}}&=&F_X(y^2)|_{D_{n-r+1}}.\n\\end{eqnarray*}\n\\end{prop}\n\nAbusing the notation, in such cases we shall denote the configurations $z^1$ and $z^2$ by $I_{X, n}(y^1)$ and $O_{X,n}(y^2)$ respectively. Note that $I_{X, n}(y^1)|_{D_n}= y^1|_{D_n}$ and $O_{X,n}(y^2)|_{D_n^c}= y^2|_{D_n^c}$. Also, $O_{X,n}$ is a composition of maps of the form $f_{X,A}$ where $A^c$ is finite; hence there is a chain of pivots in $Y$ from $y^2$ to $O_{X,n}(y^2)$.\nThere are two kinds of stiff shifts which will be of interest to us: A configuration $x\\in {\\mathcal A}^{\\mathbb{Z}^d}$ is called \\emph{periodic} if there exists $n \\in \\mathbb N$ such that $\\sigma^{n \\vec e_{i}}(x)=x$ for all $1\\leq i \\leq d$. A configuration $x\\in X$ is called \\emph{frozen} if its homoclinic class is a singleton. This notion coincides with the notion of frozen coloring in \\cite{brightwell2000gibbs}. A subshift $X$ will be called \\emph{frozen} if it consists of frozen configurations, equivalently, if $\\Delta_X$ is the diagonal. A measure on $X$ will be called \\emph{frozen} if its support is frozen. Note that any shift space consisting just of periodic configurations is frozen. All frozen nearest neighbour shifts of finite type are stiff: Suppose $X$ is a nearest neighbour shift of finite type which is not stiff. Then there is a symbol $v$ which can be config-folded to a symbol $w$. This means that any appearance of $v$ in a configuration $x\\in X$ can be replaced by $w$. Hence the homoclinic class of a configuration containing $v$ is not a singleton. Therefore $X$ is not frozen.
Then every shift-invariant probability measure adapted to $X$ is fully supported.\n\\end{prop}\n\\begin{prop}\\label{proposition: frozenfoldpivot}\nLet $X$ be a nearest neighbour shift of finite type such that a sequence of config-folds starting from $X$ results in a frozen shift. Then $X$ has the pivot property.\n\\end{prop}\n\n\\noindent\\textbf{Examples:}\n\\begin{enumerate}\n\\item\n$X:=\\{0\\}^{\\mathbb{Z}^d}\\cup \\{1\\}^{\\mathbb{Z}^d}$ is a frozen shift space but not the orbit of a periodic configuration. Clearly the delta measure $\\delta_{\\{0\\}^{\\mathbb{Z}^d}}$ is a shift-invariant probability measure adapted to $X$ but not fully supported. A more non-trivial example of a nearest neighbour shift of finite type which is frozen but not the orbit of a periodic configuration is the set of the Robinson tilings $Y$ \\cite{Robinson1971}. There are configurations in $Y$ which have the so-called ``fault lines''; they can occur at most once in a given configuration. Consequently for all shift-invariant probability measures on $Y$, the probability of seeing a fault line is zero. Thus no shift-invariant probability measure (and hence no adapted shift-invariant probability measure) on $Y$ is fully supported.\n\n\\item\\label{Example: Safe Symbol}\nLet $X$ be a shift space with a safe symbol $\\star$. Then any symbol in $X$ can be config-folded into the safe symbol. By config-folding the symbols one by one, we obtain a fixed point $\\{\\star\\}^{\\mathbb{Z}^d}$. Thus any nearest neighbour shift of finite type with a safe symbol satisfies the hypothesis of both the propositions.\n\n\\item \\label{Example: Folds to an edge}Suppose $\\mathcal H$ is a graph which folds into a single edge (denoted by $Edge$) or a single vertex $v$ with a loop. Then the shift space $X_\\mathcal H$ can be\nconfig-folded to $X_{Edge}$ (which consists of two periodic configurations) or the fixed point $\\{v\\}^{\\mathbb{Z}^d}$ respectively. 
In the latter case, the graph $\\mathcal H$ is called \\emph{dismantlable} \\cite{nowwinkler}. Note that finite non-trivial trees and the graph $C_4$ fold into an edge. For dismantlable graphs $\\mathcal H$, Theorem 4.1 in \\cite{brightwell2000gibbs} implies the conclusions of Propositions \\ref{proposition: periodicfoldentropy} and \\ref{proposition: frozenfoldpivot} for $X_\\mathcal H$ as well.\n\\end{enumerate}\n\n\\begin{proof}[Proof of Proposition \\ref{proposition: periodicfoldentropy}] Let $\\mu$ be a shift-invariant probability measure adapted to $X$. To prove that $supp(\\mu)= X$, it is sufficient to prove that $\\mu([x]_{D_n})>0$ for all $n\\in \\mathbb N$ and $x\\in X$. Let $X_0=X, X_1, X_2, \\ldots, X_r$ be a sequence of full config-folds where $X_r:=\\{ \\sigma^{\\vec i_1}(p), \\sigma^{\\vec i_2}(p),\\ldots, \\sigma^{\\vec i_{k-1}}(p) \\}$ is the orbit of a periodic point. For any two configurations $z,w\\in X$ there exists $\\vec i\\in \\mathbb{Z}^d$ such that $F_X(z)= F_X(\\sigma^{\\vec i}(w)).$\nSince $\\mu$ is shift-invariant, we can choose $y \\in supp (\\mu)$ such that $F_X(x)= F_X(y).$\nConsider the configurations $I_{X,n}(x)$ and $O_{X,n+2r-1}(y)$. By Proposition \\ref{prop: folding_ to _ stiffness_fixing_a_set} they satisfy the equations\n\\begin{eqnarray*}\nI_{X,n}(x)|_{D_{n+r-1}^c}&=&F_X(x)|_{D_{n+r-1}^c}\\text{ and }\\\\\nO_{X,n+2r-1}(y)|_{D_{n+r}}&=&F_X(y)|_{D_{n+r}}.\n\\end{eqnarray*}\n\\noindent Then $I_{X,n}(x)|_{\\partial D_{n+r-1}}= O_{X,n+2r-1}(y)|_{\\partial D_{n+r-1}}$. Since $X$ is a nearest neighbour shift of finite type, the configuration $z$ given by\n\\begin{eqnarray*}\nz|_{D_{n+r}}&:=&I_{X,n}(x)|_{D_{n+r}}\\\\\nz|_{D_{n+r-1}^c}&:=&O_{X,n+2r-1}(y)|_{D_{n+r-1}^c}\n\\end{eqnarray*}\n\\noindent is an element of $X$. Moreover\n\\begin{eqnarray*}\nz|_{D_{n}}&=&I_{X,n}(x)|_{D_{n}}=x|_{D_n}\\\\\nz|_{D_{n+2r-1}^c}&=&O_{X,n+2r-1}(y)|_{D_{n+2r-1}^c}=y|_{D^c_{n+2r-1}}.\n\\end{eqnarray*}\nThus $(y, z)\\in \\Delta_X$.
Since $\\mu$ is adapted we get that $z\\in supp(\\mu)$. Finally\n$$\\mu([x]_{D_n})=\\mu([z]_{D_n})>0.$$\n\\end{proof}\n\nNote that all the maps being discussed here, $f_X$, $f_{X,A}$, $F_X$, $I_{X,n}$ and $O_{X,n}$ are (not necessarily shift-invariant) single block maps, that is, maps $f$ where $\\left(f(x)\\right)_{\\vec i}$ depends only on $x_{\\vec i}$. Thus if $f$ is one such map and $x|_A= y|_A$ for some set $A\\subset \\mathbb{Z}^d$ then $f(x)|_A=f(y)|_A$; they map homoclinic pairs to homoclinic pairs.\n\n\\begin{proof}[Proof of Proposition \\ref{proposition: frozenfoldpivot}] Let $X_0=X$, $X_1$, $X_2$$,\\ldots,$ $X_r$ be a sequence of full config-folds where $X_r$ is frozen. Let $(x, y) \\in \\Delta_X$. Since $X_r$ is frozen, $F_{X}(x)= F_X(y)$. Suppose $x|_{D_n^c}= y|_{D_n^c}$ for some $n\\in \\mathbb N$. Then $O_{X,n+r-1}(x)|_{D_n^c}=O_{X,n+r-1}(y)|_{D_n^c}$. Also by Proposition \\ref{prop: folding_ to _ stiffness_fixing_a_set},\n$$O_{X, n+r-1}(x)|_{D_n}=F_X(x)|_{D_n}=F_X(y)|_{D_n}= O_{X, n+r-1}(y)|_{D_n}.$$\nThis proves that $O_{X,n+r-1}(x)=O_{X,n+r-1}(y)$. In fact it completes the proof since for all $z\\in X$ there exists a chain of pivots in $X$ from $z$ to $O_{X,n+r-1}(z)$.\n\\end{proof}\n\n\n\n\\section{Universal Covers}\\label{section:universal covers}\nMost cases will not be as simple as in the proof of Propositions \\ref{proposition: periodicfoldentropy} and \\ref{proposition: frozenfoldpivot}. We wish to prove the conclusions of these propositions for hom-shifts $X_\\mathcal H$ when $\\mathcal H$ is a connected four-cycle free graph. Many ideas carry over from the proofs of these results because of the relationship of such graphs with their universal covers; we describe this relationship next. The results in this section are not original; look for instance in \\cite{Stallingsgraph1983}. We mention them for completeness.\n\nLet $\\mathcal H$ be a finite connected graph with no self-loops. 
We denote by $d_\\mathcal H$ the ordinary graph distance on $\\mathcal H$ and by $D_\\mathcal H(u)$, the \\emph{ball of radius 1} around $u$. A graph homomorphism $\\pi:\\mathcal C\\longrightarrow \\mathcal H$ is called a \\emph{covering map} if for some $n \\in \\mathbb N \\cup \\{\\infty\\}$ and all $u \\in \\mathcal H$, there exist disjoint sets $\\{C_i\\}_{i=1}^n\\subset \\mathcal C$ such that $\\pi^{-1}\\left(D_\\mathcal H(u)\\right)= \\cup_{i=1}^n C_i $ and $\\pi|_{C_i}: C_i\\longrightarrow D_\\mathcal H(u)$ is an isomorphism of the induced subgraphs for $1\\leq i\\leq n$. A \\emph{covering space} of a graph $\\mathcal H$ is a graph $\\mathcal C$ such that there exists a covering map $\\pi: \\mathcal C\\longrightarrow \\mathcal H$.\n\nA \\emph{universal covering space} of $\\mathcal H$ is a covering space of $\\mathcal H$ which is a tree. Unique up to graph isomorphism \\cite{Stallingsgraph1983}, these covers can be described in multiple ways. Their standard construction uses non-backtracking walks \\cite{Angluin80}: A \\emph{walk} on $\\mathcal H$ is a sequence of vertices $(v_1, v_2, \\ldots, v_n)$ such that $v_i\\sim_\\mathcal H v_{i+1}$ for all $1\\leq i \\leq n-1$. The \\emph{length} of a walk $p=(v_1, v_2, \\ldots, v_n)$ is $|p|=n-1$, the number of edges traversed on that walk. It is called \\emph{non-backtracking} if $v_{i-1}\\neq v_{i+1}$ for all $2\\leq i \\leq n-1$, that is, successive steps do not traverse the same edge. Choose a vertex $u \\in \\mathcal H$. The vertex set of the universal cover is the set of all non-backtracking walks on $\\mathcal H$ starting from $u$; there is an edge between two such walks if one extends the other by a single step. The choice of the starting vertex $u$ is arbitrary; choosing a different vertex gives rise to an isomorphic graph. We denote the universal cover by $E_\\mathcal H$. The covering map $\\pi: E_\\mathcal H\\longrightarrow \\mathcal H$ maps a walk to its terminal vertex. 
Usually, we will denote by $\\tilde u, \\tilde v$ and $\\tilde w$ vertices of $E_\\mathcal H$ such that $\\pi(\\tilde u)= u$, $\\pi(\\tilde v)= v$ and $\\pi(\\tilde w)= w$.\n\nThis construction shows that the universal cover of a graph is finite if and only if the graph is a finite tree. To see this, note that if the graph has a cycle, then the finite segments of a walk looping around the cycle give infinitely many vertices for the universal cover. If the graph is a finite tree, then all walks must terminate at the leaves and their length is bounded by the diameter of the tree. In fact, the universal cover of a tree is the tree itself, while the universal cover of a cycle (for instance $C_4$) is $\\mathbb{Z}$, obtained from finite segments of the walks $(1, 2, 3, 0, 1, 2, 3, 0, \\ldots )$ and $(1, 0, 3, 2, 1, 0, 3, 2, \\ldots )$.\n\nFollowing the idea of homotopies in algebraic topology, there is a natural operation on the set of walks: two walks can be joined together if one begins where the other one ends. More formally, given two walks $p=(v_1, v_2, \\ldots, v_n)$ and $q=(w_1, w_2, \\ldots, w_m)$ where $v_n=w_1$, consider $p\\star q=(v_1, v_2, \\ldots, v_n, w_2, w_3, \\ldots, w_m)$. However, even when $p$ and $q$ are non-backtracking, $p\\star q$ need not be non-backtracking. So we consider instead the walk $[p\\star q]$, which erases the backtracking segments of $p \\star q$; that is, if for some $i>1$, $v_{n-i+1}\\neq w_{i}$ and $v_{n-j+1}=w_j$ for all $1\\leq j \\leq i-1$, then\n$$[p\\star q]:=(v_1, v_2, \\ldots, v_{n-i+1}, w_{i-1}, w_{i}, \\ldots, w_m).$$\n\nThis operation of erasing the backtracking segments is called \\emph{reduction}; see, for instance, \\cite{Stallingsgraph1983}.\nThe following proposition is well-known (Section 4 of \\cite{Stallingsgraph1983}) and shall be useful in our context as well:\n\\begin{prop}\\label{proposition:isomorphism_of_universal_covering_space}\nLet $\\mathcal H$ be a finite connected graph without any self-loops.
Then for all $\\tilde{v}, \\tilde w \\in E_\\mathcal H$ satisfying $\\pi(\\tilde v)= \\pi(\\tilde w)$ there exists a graph isomorphism $\\phi: E_\\mathcal H\\longrightarrow E_\\mathcal H$ such that $\\phi(\\tilde v)= \\tilde w$ and $\\pi \\circ \\phi = \\pi$.\n\\end{prop}\n\nTo see how to construct this isomorphism, consider as an example $ (u)$, the empty walk on $\\mathcal H$ and $(v_1, v_2, \\ldots, v_n)$, some non-backtracking walk such that $v_1=v_n=u$. Then the map $\\phi: E_\\mathcal H\\longrightarrow E_\\mathcal H$ given by\n$$\\phi(\\tilde w):= [(v_1, v_2, \\ldots, v_n) \\star \\tilde w].$$\nis a graph isomorphism which maps $(u)$ to $(v_1, v_2, \\ldots, v_n)$; its inverse is $\\psi: E_\\mathcal H\\longrightarrow E_\\mathcal H$ given by\n$$\\psi(\\tilde w):= [(v_n, v_{n-1}, \\ldots, v_1)\\star \\tilde w].$$\n\nThe maps $\\phi, \\pi$ described above give rise to natural maps, also denoted by $\\phi$ and $\\pi$ where $$\\phi:X_{E_\\mathcal H}\\longrightarrow X_{E_\\mathcal H}$$\nis given by $\\phi(\\tilde x)_{\\vec{i}} := \\phi(\\tilde x_{\\vec{i}})$ and\n$$\\pi: X_{E_\\mathcal H} \\longrightarrow X_{\\mathcal H}$$\nis given by $\\pi(\\tilde x)_{\\vec{i}}:=\\pi(\\tilde x_{\\vec{i}})$ for all ${\\vec{i}} \\in \\mathbb{Z}^d$ respectively. A \\emph{lift} of a configuration $x\\in X_\\mathcal H$ is a configuration $\\tilde{x}\\in X_{E_\\mathcal H}$ such that $\\pi \\circ \\tilde{x}= x$.\n\nNow we shall analyse some consequences of this formalism in our context. More general statements (where $\\mathbb{Z}^d$ is replaced by a different graph) are true (under a different hypothesis on $\\mathcal H$), but we restrict to the four-cycle free condition. We noticed in Section \\ref{section:Folding, Entropy Minimality and the Pivot Property} that if $\\mathcal H$ is a tree then $X_{\\mathcal H}$ satisfies the conclusions of Theorems \\ref{theorem: MRF fully supported } and \\ref{theorem: pivot property for four cycle free}. 
Now we will draw a connection between the four-cycle free condition on $\\mathcal H$ and the formalism in Section \\ref{section:Folding, Entropy Minimality and the Pivot Property}.\n\n\\begin{prop}[Existence of Lifts]\\label{proposition:covering_space_lifting}\nLet $\\mathcal H$ be a connected four-cycle free graph. For all $x\\in X_\\mathcal H$ there exists $\\tilde{x}\\in X_{E_\\mathcal H}$ such that $\\pi(\\tilde{x})=x$. Moreover the lift $\\tilde{x}$ is unique up to a choice of $\\tilde{x}_{\\vec 0}$.\n\\end{prop}\n\n\\begin{proof}\nWe will begin by constructing a sequence of graph homomorphisms $\\tilde{x}^n:D_n \\longrightarrow E_\\mathcal H$ such that $\\pi \\circ\\tilde{x}^n =x|_{D_n}$ and $\\tilde{x}^m|_{D_n}= \\tilde{x}^n$ for all $m>n$. Then by taking the limit of these graph homomorphisms we obtain a graph homomorphism $\\tilde{x}\\in X_{E_\\mathcal H}$ such that $\\pi \\circ\\tilde{x}=x$. It will follow that given $\\tilde{x}^0$ the sequence $\\tilde{x}^n$ is completely determined proving that the lifting is unique up to a choice of $\\tilde{x}_{\\vec {0}}$.\n\nThe recursion is the following: Let $\\tilde{x}^n: D_n \\longrightarrow E_\\mathcal H$ be a given graph homomorphism for some $n\\in \\mathbb N\\cup \\{0\\}$ such that $\\pi \\circ\\tilde{x}^n=x|_{D_n}$. For any ${\\vec{ i}}\\in D_{n+1}\\setminus D_n$, choose a vertex ${\\vec{j}}\\in D_n$ such that $\\vec{j}\\sim \\vec{i}$. Then $\\pi(\\tilde{x}^n_{\\vec{j}})=x_{\\vec{j}}\\sim x_{\\vec{i}}$. Since $\\pi$ defines a local isomorphism between $E_\\mathcal H$ and $\\mathcal H$, there exists a unique vertex $\\tilde v_{\\vec{i}}\\sim \\tilde{x}^n_{\\vec{j}} \\in E_\\mathcal H$ such that $\\pi(\\tilde v_{\\vec{i}})= x_{\\vec{i}}$. 
Define $\\tilde{x}^{n+1}: D_{n+1}\\longrightarrow E_\\mathcal H$ by\n\\begin{equation*}\n\\tilde{x}^{n+1}_{\\vec{i}}:=\\begin{cases}\\tilde{x}^{n}_{\\vec{i}} &\\text{if } \\vec{i}\\in D_n\\\\\\tilde v_{\\vec{i}} & \\text{if } \\vec{i}\\in D_{n+1}\\setminus D_n.\\end{cases}\n\\end{equation*}\nThen clearly $\\pi \\circ \\tilde{x}^{n+1}= x|_{D_{n+1}}$ and $\\tilde{x}^{n+1}|_{D_n}=\\tilde{x}^n$. Note that the extension $\\tilde{x}^{n+1}$ is uniquely defined given $\\tilde{x}^n$.\nWe need to prove that this defines a valid graph homomorphism from $D_{n+1}$ to $E_\\mathcal H$. Let $\\vec{i}\\in D_{n+1}\\setminus D_n$ and $\\vec{j}\\in D_n$ be chosen as described above. Consider, if possible, any $\\vec{j}^\\prime \\in D_n$ with $\\vec{j}^\\prime\\neq \\vec{j}$ such that $\\vec{j}^\\prime \\sim \\vec{i}$. To prove that $\\tilde{x}^{n+1}$ is a graph homomorphism, we need to verify that $\\tilde{x}^{n+1}_{\\vec{j}^\\prime}\\sim \\tilde{x}^{n+1}_{\\vec{i}}$.\n\nConsider $\\vec{i}^\\prime\\in D_n$ such that $\\vec{i}^\\prime\\sim \\vec{j}$ and $\\vec{i}^\\prime\\sim \\vec{j}^\\prime$. Then $\\vec{i}^\\prime, \\vec{j}, \\vec{i}$ and $\\vec{j}^\\prime$ form a four-cycle. Since $\\mathcal H$ is four-cycle free, either $x_{\\vec{i}^\\prime}=x_{\\vec{i}}$ or $x_{\\vec{j}^\\prime}= x_{\\vec{j}}$.\n\nSuppose $x_{\\vec{i}^\\prime}=x_{\\vec{i}}$; the other case is similar. Since $\\pi$ is a local isomorphism and $\\tilde{x}^{n+1}_{\\vec{i}},\\tilde{x}^{n+1}_{\\vec{i^\\prime}} \\sim \\tilde{x}^{n+1}_{\\vec{j}}$, we get that $\\tilde{x}^{n+1}_{\\vec{i}}=\\tilde{x}^{n+1}_{\\vec{i}^\\prime}$. But ${\\vec{i}}', {\\vec{j}}' \\in D_n$ and $\\tilde x^{n+1}|_{D_n}= \\tilde x^{n}$ is a graph homomorphism; therefore $\\tilde{x}^{n+1}_{\\vec{i}}=\\tilde{x}^{n+1}_{\\vec{i}^\\prime}\\sim \\tilde{x}^{n+1}_{\\vec{j}^\\prime}$.\n\\end{proof}\n\n\\begin{corollary}\\label{corollary:covering_space_lifting_homoclinic}\nLet $\\mathcal H$ be a connected four-cycle free graph and $x, y\\in X_\\mathcal H$.
Consider some lifts $\\tilde{x}, \\tilde{y} \\in X_{E_\\mathcal H}$ such that $\\pi(\\tilde{x})= x $ and $\\pi(\\tilde{y})=y$. If for some $\\vec{i}_0 \\in \\mathbb{Z}^d$, $\\tilde{x}_{\\vec{i}_0}= \\tilde{y}_{\\vec{i}_0}$ then $\\tilde{x}= \\tilde{y}$ on the connected subset of\n$$\\{\\vec{j} \\in \\mathbb{Z}^d\\:|\\: x_{\\vec{j}}= y_{\\vec{j}}\\}$$ which contains $\\vec{i}_0$.\n\\end{corollary}\n\n\\begin{proof}\nLet $D$ be the connected component of $\\{\\vec{i} \\in \\mathbb{Z}^d \\:|\\: x_{\\vec{i}} = y_{\\vec{i}}\\}$ and $\\tilde D$ be the connected component of $\\{\\vec{i} \\in \\mathbb{Z}^d \\:|\\: \\tilde x_{\\vec{i}} = \\tilde y_{\\vec{i}}\\}$ which contain $\\vec{i}_0$.\n\nClearly $\\tilde D \\subset D$. Suppose $\\tilde D\\neq D$. Since both $D$ and $\\tilde D$ are non-empty, connected sets there exist $\\vec{i} \\in D \\setminus \\tilde D$ and $\\vec{j} \\in \\tilde D$ such that $\\vec{i} \\sim \\vec{j}$. Then $x_{\\vec{i}}= y_{\\vec{i}}$, $x_{\\vec{j}}= y_{\\vec{j}}$ and $\\tilde x_{\\vec{j}}= \\tilde y_{\\vec{j}}$. Since $\\pi $ is a local isomorphism, the lift must satisfy $\\tilde x_{\\vec{i}} = \\tilde y_{\\vec{i}}$ implying $\\vec{i} \\in \\tilde D$. This proves that $D= \\tilde D$.\n\\end{proof}\nThe following corollary says that any two lifts of the same graph homomorphism are `identical'.\n\\begin{corollary}\\label{corollary:lift_are_isomorphic}\nLet $\\mathcal H$ be a connected four-cycle free graph. Then for all $\\tilde x^1,\\tilde x^2 \\in X_{E_\\mathcal H}$ satisfying $\\pi(\\tilde x^1) = \\pi(\\tilde x^2)= x$ there exists an isomorphism $\\phi: E_\\mathcal H \\longrightarrow E_\\mathcal H$ such that $\\phi\\circ \\tilde x^1= \\tilde x^2$.\n\\end{corollary}\n\\begin{proof} By Proposition \\ref{proposition:isomorphism_of_universal_covering_space} there exists an isomorphism\n$\\phi: E_\\mathcal H\\longrightarrow E_\\mathcal H$ such that $\\phi(\\tilde x^1_{\\vec{0}})= \\tilde x^2_{\\vec{0}}$ and $\\pi \\circ \\phi = \\pi$. 
Then $(\\phi\\circ\\tilde x^1)_{\\vec{0}} = \\tilde x^2_{\\vec{0}}$ and $\\pi (\\phi\\circ\\tilde x^1)= (\\pi \\circ \\phi)(\\tilde x^1)= \\pi (\\tilde x^1)= x$. By Proposition \\ref{proposition:covering_space_lifting}, $\\phi\\circ\\tilde x^1= \\tilde x^2$.\n\\end{proof}\n\nIt is worth noting at this point the relationship of the universal cover described here with the universal cover in algebraic topology. Undirected graphs can be identified with $1$-dimensional CW-complexes where the set of vertices corresponds to the $0$-cells, the edges to the $1$-cells of the complex and the attaching map sends the end-points of the edges to their respective vertices. With this correspondence in mind, the (topological) universal covering space coincides with the (combinatorial) universal covering space described above; indeed a $1$-dimensional CW-complex is simply connected if and only if it does not have any loops, that is, the corresponding graph does not have any cycles; it is a tree. The existence, uniqueness and many such facts about the universal covering space follow from purely topological arguments; see, for instance, Chapter $13$ of \\cite{MunkresTopology75} or Chapters $5$ and $6$ of \\cite{Masseyanintroduction1977}.\n\n\n\n\\section{Height Functions and Sub-Cocycles}\\label{Section:heights}\nExistence of lifts as described in the previous section enables us to measure the `rigidity' of configurations. In this section we define height functions and subsequently the slope of configurations, where steepness corresponds to this `rigidity'. The general method of using height\nfunctions is usually attributed to J. H. Conway \\cite{ThurstontilinggroupAMM}.\n\nFix a connected four-cycle free graph $\\mathcal H$.
Given $x\\in X_\\mathcal H$, we can define the corresponding \\emph{height function} $h_x:\\mathbb{Z}^d\\times \\mathbb{Z}^d\\longrightarrow \\mathbb{Z}$ given by $h_x({\\vec{i}},{\\vec{j}}):=d_{E_\\mathcal H}(\\tilde{x}_{\\vec{i}},\\tilde{x}_{\\vec{j}} )$ where $\\tilde{x}$ is a lift of $x$. It follows from Corollary \\ref{corollary:lift_are_isomorphic} that $h_x$ is independent of the lift $\\tilde{x}$.\n\n\nGiven a finite subset $A\\subset \\mathbb{Z}^d$ and $x\\in X_\\mathcal H$ we define the \\emph{range of $x$ on $A$} as\n\\begin{equation*}\nRange_A(x):=\\max_{{\\vec{j}}_1, {\\vec{j}}_2\\in A} h_x({\\vec{j}}_1, {\\vec{j}}_2).\n\\end{equation*}\nFor all $x\\in X_\\mathcal H$\n\\begin{equation*}\nRange_A(x)\\leq Diameter(A)\n\\end{equation*}\nand more specifically\n\\begin{equation} \\label{equation:diameter_bounds_height}\nRange_{D_n}(x)\\leq 2n\n\\end{equation}\nfor all $n \\in \\mathbb N$. Since $\\tilde x\\in X_{E_\\mathcal H}$ is a map between bipartite graphs, it preserves the parity of the distance function, that is, if $\\vec i, \\vec j \\in \\mathbb{Z}^d$ and $x\\in X_\\mathcal H$ then the parity of $\\|\\vec i - \\vec j\\|_1$ is the same as that of $h_x(\\vec i, \\vec j)$. As a consequence, it follows that $Range_{\\partial D_n}(x)$ is even for all $x\\in X_{\\mathcal H}$ and $n \\in \\mathbb N$. We note that\n$$Range_{A}(x)= Diameter(Image(\\tilde x|_{A})).$$\n\nThe height function $h_x$ is subadditive, that is,\n$$h_x(\\vec i,\\vec j)\\leq h_x(\\vec i, \\vec k)+ h_x(\\vec k, \\vec j)$$\nfor all $x\\in X_\\mathcal H$ and $\\vec i ,\\vec j$ and $\\vec k \\in \\mathbb{Z}^d$. This is in contrast with the usual height function (as in \\cite{chandgotia2013Markov} and \\cite{peled2010high}) where there is an equality instead of the inequality.
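Since $h_x(\vec i,\vec j)$ is the tree distance $d_{E_\mathcal H}(\tilde x_{\vec i},\tilde x_{\vec j})$, its subadditivity is exactly the triangle inequality for the graph metric. As a quick sanity check, the following sketch (a hypothetical five-vertex tree and a plain BFS helper, not part of the paper) verifies the inequality exhaustively and exhibits a strict instance:

```python
from collections import deque

def tree_dist(adj, a, b):
    """BFS distance between vertices a and b in an undirected graph."""
    dist = {a: 0}
    q = deque([a])
    while q:
        u = q.popleft()
        if u == b:
            return dist[u]
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    raise ValueError("vertices lie in different components")

# A small hypothetical tree (a star with a path attached):
#       1
#       |
#   2 - 0 - 3 - 4
adj = {0: [1, 2, 3], 1: [0], 2: [0], 3: [0, 4], 4: [3]}

# Triangle inequality d(a,c) <= d(a,b) + d(b,c) for every triple:
for a in adj:
    for b in adj:
        for c in adj:
            assert tree_dist(adj, a, c) <= tree_dist(adj, a, b) + tree_dist(adj, b, c)

# strict inequality: d(1,2) + d(2,4) = 2 + 3 > 3 = d(1,4)
assert tree_dist(adj, 1, 4) == 3
assert tree_dist(adj, 1, 2) + tree_dist(adj, 2, 4) == 5
```

On a tree the inequality is strict precisely when the intermediate vertex lies off the geodesic between the endpoints, which is one way to see why the cocycle equality enjoyed by the usual height functions fails here.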
This raises some technical difficulties which are partly handled by the subadditive ergodic theorem.\n\nThe following terminology is not completely standard: Given a shift space $X$, a \\emph{sub-cocycle} is a measurable map $c: X\\times \\mathbb{Z}^d \\longrightarrow \\mathbb N\\cup \\{0\\}$ such that for all $\\vec i, \\vec j \\in \\mathbb{Z}^d$\n$$c(x, \\vec i +\\vec j)\\leq c(x, \\vec i)+ c(\\sigma^{\\vec i}(x), \\vec j).$$\nSub-cocycles arise in a variety of situations; see, for instance, \\cite{Hammersleyfirst1965}. We are interested in the case $c(x, \\vec i)= h_x(\\vec 0, \\vec i)$ for all $x\\in X_\\mathcal H$ and $\\vec i \\in \\mathbb{Z}^d$. The measure of `rigidity' lies in the asymptotics of this sub-cocycle, the existence of which is provided by the subadditive ergodic theorem. Given a set $X$, if $f: X\\longrightarrow \\mathbb R$ is a function then let $f^+:=\\max(0,f)$.\n\n\\begin{thm}[Subadditive Ergodic Theorem]\\label{theorem:Subadditive_ergodic_theorem}\\cite{walters-book}\nLet $(X, \\mathcal B, \\mu)$ be a probability space and let $T: X\\longrightarrow X$ be measure preserving. Let $\\{f_n\\}_{n=1}^\\infty$ be a sequence of measurable functions $f_n: X\\longrightarrow \\mathbb R\\cup \\{-\\infty\\}$ satisfying the conditions:\n\\begin{enumerate}[(a)]\n\\item\n$f_1^+ \\in L^1(\\mu)$\n\\item\nfor each $m$, $n \\geq 1$, $f_{n+m }\\leq f_n + f_m \\circ T^n$ $\\mu$-almost everywhere.\n\\end{enumerate}\nThen there exists a measurable function $f: X\\longrightarrow \\mathbb R\\cup \\{-\\infty \\}$ such that $f^+\\in L^1(\\mu)$, $f\\circ T=f$, $\\lim_{n\\rightarrow \\infty} \\frac{1}{n}f_n =f$, $\\mu$-almost everywhere\nand\n$$\\lim_{n \\longrightarrow \\infty}\\frac{1}{n}\\int f_n d\\mu = \\inf_{n}\\frac{1}{n}\\int f_n d \\mu= \\int f d\\mu.$$\n\\end{thm}\n\nGiven a direction $\\vec{ i} =(i_1, i_2, \\ldots, i_d)\\in \\mathbb R^d$ let $\\lfloor\\vec{ i}\\rfloor=(\\lfloor i_1\\rfloor, \\lfloor i_2\\rfloor, \\ldots, \\lfloor i_d\\rfloor)$.
We define for all $x \\in X_\\mathcal H$ the \\emph{slope of $x$ in the direction $\\vec{ i}$} as\n$$sl_{\\vec {i}}(x):= \\lim_{n \\longrightarrow \\infty}\\frac{1}{n} h_x(\\vec 0, \\lfloor n \\vec{ i}\\rfloor)$$\nwhenever it exists.\n\nIf $\\vec i\\in \\mathbb{Z}^d$, we note that the sequence of functions $f_n: X_\\mathcal H\\longrightarrow \\mathbb N\\cup \\{0\\}$ given by\n$$f_n(x)=h_x(\\vec 0, n\\vec i)$$\nsatisfies the hypothesis of this theorem for any shift-invariant probability measure on $X_\\mathcal H$: $|f_1|\\leq \\|\\vec i\\|_1$ and the subadditivity condition in the theorem is just a restatement of the sub-cocycle condition described above, that is, if $T= \\sigma^{\\vec i}$ then\n$$f_{n+m }(x)= h_x(\\vec 0, (n+m)\\vec i)\\leq h_x(\\vec 0, n \\vec i)+ h_{\\sigma^{n \\vec i}x}(\\vec 0,m \\vec i ) =f_n(x) + f_m(T^n(x)).$$\nThe asymptotics of the height functions (or more generally the sub-cocycles) are a consequence of the subadditive ergodic theorem as we will describe next. In the following, by an ergodic measure on $X_\\mathcal H$ we mean a probability measure on $X_\\mathcal H$ which is ergodic with respect to the $\\mathbb{Z}^d$-shift action on $X_\\mathcal H$.\n\n\n\n\\begin{prop}[Existence of Slopes]\\label{prop:existence_of_slopes}\nLet $\\mathcal H$ be a connected four-cycle free graph and $\\mu$ be an ergodic measure on $X_\\mathcal H$. Then for all $\\vec{ i}\\in \\mathbb{Z}^d$\n$$sl_{\\vec {i}}(x)=\\lim_{n \\longrightarrow \\infty}\\frac{1}{n} h_x({\\vec{0}}, n \\vec{ i})$$\nexists almost everywhere and is independent of $x$. Moreover, if $\\vec {i}= (i_1, i_2,\\ldots, i_d)$ then\n$$sl_{\\vec {i}}(x)\\leq \\sum_{k=1}^d |i_k| sl_{\\vec {e}_k}(x).$$\n\\end{prop}\n\\begin{proof}\nFix a direction $\\vec{ i}\\in \\mathbb{Z}^d$. Consider the sequence of functions $\\{f_n\\}_{n=1}^\\infty$ and the map $T: X_\\mathcal H\\longrightarrow X_\\mathcal H$ as described above.
By the subadditive ergodic theorem, there exists a function $f: X_\\mathcal H\\longrightarrow \\mathbb R\\cup \\{-\\infty\\}$ such that\n$$\\lim_{n \\rightarrow \\infty}\\frac{1}{n}f_n=f$$\nalmost everywhere.\nNote that $f= sl_{\\vec{i}}$. Since $0\\leq f_n\\leq n\\|{\\vec{i}}\\|_1$ for all $x\\in X_\\mathcal H$ and $n \\in\\mathbb N$, we have $0\\leq f\\leq \\|\\vec i\\|_1$ wherever $f$ is defined. Fix any $\\vec{j}\\in \\mathbb{Z}^d$. Then\n\\begin{eqnarray*}\nf_n(\\sigma^{\\vec{j}}(x))&=& h_{\\sigma^{\\vec{j}}(x)}({\\vec{0}}, n \\vec{i})= h_x(\\vec{j}, n \\vec{ i}+\\vec{ j})\n\\end{eqnarray*}\nand hence\n\\begin{eqnarray*}\n-h_x(\\vec{j}, {\\vec{0}}) + h_x({\\vec{0}}, n \\vec{i})- h_x(n \\vec{i}, n \\vec{i}+\\vec{j})&\\leq& f_n(\\sigma^{\\vec{j}}(x))\\\\\n&\\leq& h_x(\\vec{j},{\\vec{0}}) + h_x({\\vec{0}}, n \\vec{i})+ h_x(n \\vec{i}, n \\vec{i}+\\vec{j})\n\\end{eqnarray*}\nimplying\n\\begin{eqnarray*}\n-2\\|\\vec{j}\\|_1+ f_n(x)\\leq & f_n(\\sigma^{\\vec{j}}(x))& \\leq 2\\|\\vec{j}\\|_1+ f_n(x)\n\\end{eqnarray*}\nand therefore\n$$f(x)=\\lim_{n \\longrightarrow \\infty} \\frac{1}{n} f_n(x)= \\lim_{n \\longrightarrow \\infty}\\frac{1}{n} f_n(\\sigma^{\\vec{j}} x)= f(\\sigma^{\\vec{j}}(x))$$\nalmost everywhere. Since $\\mu$ is ergodic, $sl_{{\\vec{i}}}= f$ is constant almost everywhere. Let $\\vec{i}^{(k)} = (i_1, i_2, \\ldots, i_k, 0, \\ldots, 0)\\in \\mathbb{Z}^d$.
By the subadditive ergodic theorem\n\\begin{eqnarray*}\nsl_{\\vec{i}}(x)= \\int sl_{\\vec{i}}(x) d\\mu&=& \\lim_{n \\longrightarrow \\infty}\\frac{1}{n}\\int h_x({\\vec{0}}, n \\vec{i}) d\\mu\\\\\n&\\leq&\\sum_{k=1}^d \\lim_{n \\longrightarrow\\infty}\\frac{1}{n} \\int h_{\\sigma^{n \\vec{i}^{(k-1)}}(x)}({\\vec{0}}, ni_{k}\\vec{e}_k ) d \\mu\\\\\n&=&\\sum_{k=1}^d \\lim_{n \\longrightarrow\\infty}\\frac{1}{n} \\int h_x({\\vec{0}}, ni_{k}\\vec{e}_k ) d \\mu\\\\\n&\\leq&\\sum_{k=1}^d|i_k|\\lim_{n \\longrightarrow\\infty}\\frac{1}{n} \\int h_x({\\vec{0}}, n\\vec{e}_k ) d \\mu\\\\\n&=&\\sum_{k=1}^d |i_k| sl_{\\vec {e}_k}(x).\n\\end{eqnarray*}\nalmost everywhere.\n\\end{proof}\n\\begin{corollary}\\label{corollary: existence_of _slopes_in_reality}\nLet $\\mathcal H$ be a connected four-cycle free graph. Suppose $\\mu$ is an ergodic measure on $X_\\mathcal H$. Then for all $\\vec{i}\\in \\mathbb R^d$\n$$sl_{\\vec{i}}(x)=\\lim_{n \\longrightarrow \\infty}\\frac{1}{n} h_x({\\vec{0}}, \\lfloor n \\vec{i}\\rfloor)$$\nexists almost everywhere and is independent of $x$. Moreover if $\\vec{i}= (i_1, i_2,\\ldots, i_d)$ then\n$$sl_{\\vec{i}}(x)\\leq \\sum_{k=1}^d |i_k| sl_{\\vec{e}_k}(x).$$\n\\end{corollary}\n\\begin{proof}\nLet $\\vec{i}\\in \\mathbb Q^d$ and $N\\in \\mathbb N$ such that $N \\vec{i} \\in \\mathbb{Z}^d$. For all $n \\in \\mathbb N$ there exists $k \\in \\mathbb N\\cup\\{0\\}$ and $0\\leq m\\leq N-1$ such that $n = kN+m$. Then\nfor all $x\\in X_\\mathcal H$\n$$h_x({\\vec{0}}, k N\\vec{i})- N\\|\\vec{i}\\|_1\\leq h_x({\\vec{0}}, \\lfloor n\\vec{i} \\rfloor)\\leq h_x({\\vec{0}}, k N\\vec{i})+ N\\|\\vec{i}\\|_1$$\nproving\n$$sl_{\\vec{i}}(x)=\\lim_{n\\longrightarrow\\infty}\\frac{1}{n}h_x({\\vec{0}}, \\lfloor n\\vec{ i} \\rfloor) = \\frac{1}{N}\\lim_{k \\longrightarrow \\infty} \\frac{1}{k}h_x({\\vec{0}}, k N\\vec{i}) = \\frac{1}{N}sl_{N\\vec{i}}(x)$$\nalmost everywhere.
Since $sl_{N\\vec{i}}$ is constant almost everywhere, we have that $sl_{\\vec{i}}$ is constant almost everywhere as well; denote the constant by $c_{\\vec{i}}$. Also\n$$sl_{\\vec{i}}(x)\\leq \\frac{1}{N}\\sum _{l=1}^d|N i_l|sl_{\\vec{e}_l}(x)=\\sum _{l=1}^d|i_l|sl_{\\vec{e}_l}(x).$$\n\nLet $X\\subset X_\\mathcal H$ be the set of configurations $x$ such that\n$$\\lim_{n \\longrightarrow \\infty}\\frac{1}{n} h_x({\\vec{0}}, \\lfloor n \\vec{i}\\rfloor)=c_{\\vec{i}}$$\nfor all ${\\vec{i}} \\in \\mathbb Q^d$. We have proved that $\\mu(X)=1$.\n\nFix $x\\in X$.\nLet $\\vec i, \\vec{j}\\in \\mathbb R^d$ be such that $\\|\\vec{i}- \\vec{j}\\|_1<\\epsilon$. Then\n$$\\left|\\frac{1}{n} h_x({\\vec{0}}, \\lfloor n \\vec{i}\\rfloor)-\\frac{1}{n} h_x({\\vec{0}}, \\lfloor n \\vec{j}\\rfloor)\\right|\\leq\\frac{1}{n}\\|\\lfloor n \\vec{i}\\rfloor-\\lfloor n \\vec{j}\\rfloor\\|_1\\leq\\epsilon+\\frac{2d}{n}.$$\nThus we can approximate $\\frac{1}{n} h_x({\\vec{0}}, \\lfloor n \\vec{i}\\rfloor)$ for ${\\vec{i}} \\in \\mathbb R^d$ by $\\frac{1}{n} h_x({\\vec{0}}, \\lfloor n \\vec{j}\\rfloor)$ for ${\\vec{j}} \\in \\mathbb Q^d$ to prove that $\\lim_{n \\longrightarrow \\infty}\\frac{1}{n} h_x({\\vec{0}}, \\lfloor n \\vec{i}\\rfloor)$ exists for all $\\vec i \\in \\mathbb R^d$, is independent of $x\\in X$ and satisfies\n$$sl_{\\vec{i}} (x)\\leq \\sum _{k=1}^d|i_k|sl_{\\vec{e}_k}(x).$$\n\\end{proof}\n\nThe existence of slopes can be generalised from height functions to continuous sub-cocycles; the same proofs work:\n\\begin{prop}Let $c:X\\times \\mathbb{Z}^d \\longrightarrow \\mathbb R$ be a continuous sub-cocycle and $\\mu$ be an ergodic measure on $X$. Then for all $\\vec{i}\\in \\mathbb R^d$\n$$sl^c_{\\vec{i}}(x):=\\lim_{n \\longrightarrow \\infty}\\frac{1}{n} c(x, \\lfloor n \\vec{i}\\rfloor)$$\nexists almost everywhere and is independent of $x$.
Moreover, if $\\vec{i}= (i_1, i_2,\\ldots, i_d)$ then\n$$sl^c_{\\vec{i}}(x)\\leq \\sum_{k=1}^d |i_k| sl^c_{\\vec{e}_k}(x).$$\n\\end{prop}\n\nLet $C_X$ be the space of continuous sub-cocycles on a shift space $X$. $C_X$ has a natural vector space structure: given $c_1, c_2\\in C_X$, $(c_1 +\\alpha c_2)$ is also a continuous sub-cocycle on $X$ for all $\\alpha\\in \\mathbb R$, where addition and scalar multiplication are point-wise. The following is not hard to prove and follows directly from the definition.\n\\begin{prop}\\label{proposition: sub-cocycles under conjugacy}\nLet $X, Y$ be conjugate shift spaces. Then every conjugacy $f: X \\longrightarrow Y$ induces a vector-space isomorphism $ f^\\star: C_Y\\longrightarrow C_X$ given by\n$$f^\\star(c)(x, \\vec {i}):= c(f(x), \\vec{i})$$\nfor all $c\\in C_Y$, $x\\in X$ and $\\vec i \\in \\mathbb{Z}^d$. Moreover $sl^c_{\\vec i}(y)=sl^{f^\\star(c)}_{\\vec i}(f^{-1}(y))$ for all $y\\in Y$ and $\\vec i \\in \\mathbb R^d$ for which the slope $sl^c_{\\vec i}(y)$ exists.\n\\end{prop}\n\\section{Proofs of the Main Theorems} \\label{section: Proof of the main theorems}\n\\begin{proof}[Proof of Theorem \\ref{theorem: MRF fully supported }] If $\\mathcal H$ is a single edge, then $X_\\mathcal H$ is the orbit of a periodic configuration; the result follows immediately. Suppose this is not the case. The proof loosely follows the proof of Proposition \\ref{proposition: periodicfoldentropy} and morally the ideas from \\cite{lightwoodschraudnerentropy}: We prove existence of two kinds of configurations in $X_\\mathcal H$, ones which are `poor' (Lemma \\ref{lemma:slope 1 is frozen}), in the sense that they are frozen, and others which are `universal' (Lemma \\ref{lemma:patching_various_parts}), for which the homoclinic class is dense.\n\nIdeas for the following proof were inspired by discussions with Anthony Quas.
A similar result in a special case is contained in Lemma 6.7 of \\cite{chandgotia2013Markov}.\n\n\\begin{lemma}\\label{lemma:slope 1 is frozen} Let $\\mathcal H$ be a connected four-cycle free graph and $\\mu$ be an ergodic probability measure on $X_\\mathcal H$ such that $sl_{\\vec e_k}(x)=1$ almost everywhere for some $1\\leq k \\leq d$. Then $\\mu$ is frozen and $h_\\mu=0$.\n\\end{lemma}\n\\begin{proof}\nWithout loss of generality assume that $sl_{\\vec e_1}(x)=1$ almost everywhere. By the subadditivity of the height function, for all $k, n \\in \\mathbb N$ and $x\\in X_\\mathcal H$ we know that\n$$\\frac{1}{kn}h_x(\\vec 0, kn\\vec{e}_1) \\leq \\frac{1}{kn}\\sum_{m=0}^{n-1}h_x(km\\vec{e}_1, k(m+1)\\vec{e}_1)=\\frac{1}{n}\\sum_{m=0}^{n-1}\\frac{1}{k}h_{\\sigma^{km \\vec e_1} (x)}(\\vec 0, k\\vec{e}_1) \\leq 1.$$\nSince $sl_{\\vec{e}_1}(x)= 1$ almost everywhere, we get that\n$$\\lim_{n\\longrightarrow \\infty} \\frac{1}{n}\\sum_{m=0}^{n-1}\\frac{1}{k}h_{\\sigma^{km \\vec e_1} (x)}(\\vec 0, k\\vec{e}_1)=1$$\nalmost everywhere. By the ergodic theorem\n$$\\int \\frac{1}{k}h_{x}(\\vec 0, k\\vec{e}_1) d \\mu= 1.$$\nTherefore $h_{x}(\\vec 0, k\\vec{e}_1)=k$ almost everywhere, which implies that\n\\begin{equation}\nh_x(\\vec i, \\vec i+k\\vec{e}_1)=k \\label{eq:slopeoneheightconstantrise}\n\\end{equation}\nfor all $\\vec i \\in \\mathbb{Z}^d$ and $k \\in \\mathbb N$ almost everywhere. Let $X\\subset supp(\\mu)$ denote the set of such configurations.\n\nFor some $n \\in \\mathbb N$ consider two patterns $a,b \\in \\mathcal L_{B_n\\cup \\partial_2 B_n}(supp(\\mu))$ such that $a|_{\\partial_2 B_n}= b|_{\\partial_2 B_n}$. We will prove that $a|_{B_n}= b|_{B_n}$. This will prove that $\\mu$ is frozen, and $|\\mathcal L_{B_n}(supp(\\mu))|\\leq|\\mathcal L_{\\partial_2 B_n}(supp(\\mu))|\\leq |{\\mathcal A}|^{|\\partial_2 B_n|}$, implying that $h_{top}(supp(\\mu))=0$.
By the variational principle this implies that $h_\\mu=0$.\n\nConsider $x, y \\in X$ such that $x|_{B_n\\cup \\partial_2 B_n}= a$ and $y|_{B_n\\cup \\partial_2 B_n}= b$. Noting that $\\partial_2 B_n$ is connected, by Corollary \\ref{corollary:covering_space_lifting_homoclinic} we can choose lifts $\\tilde x, \\tilde y\\in X_{E_\\mathcal H}$ such that $\\tilde x|_{\\partial_2 B_n}= \\tilde y|_{\\partial_2 B_n}$. Consider any $\\vec i \\in B_n$ and choose $k\\in - \\mathbb N$ such that $\\vec i + k \\vec e_1, \\vec i + (2n+2+k)\\vec e_1 \\in \\partial B_n$. Then by Equation \\ref{eq:slopeoneheightconstantrise} $d_{E_\\mathcal H}(\\tilde x_{\\vec i + k \\vec e_1}, \\tilde x_{\\vec i + (2n+2+k) \\vec e_1})= 2n+2$. But\n$$(\\tilde x_{\\vec i + k \\vec e_1},\\tilde x_{\\vec i + (k+1)\\vec e_1}, \\ldots,\\tilde x_{\\vec i + (2n+2+k) \\vec e_1} )\\text{ and }$$\n$$(\\tilde y_{\\vec i + k \\vec e_1},\\tilde y_{\\vec i + (k+1)\\vec e_1}, \\ldots,\\tilde y_{\\vec i + (2n+2+k) \\vec e_1} )$$\nare walks of length $2n+2$ from $\\tilde x_{\\vec i + k \\vec e_1}$ to $\\tilde x_{\\vec i + (2n+2+k) \\vec e_1}$. Since $E_\\mathcal H$ is a tree and the walks are of minimal length, they must be the same. Thus $\\tilde x|_{B_n}=\\tilde y|_{B_n}$. Taking the image under the map $\\pi$ we derive that\n$$a|_{B_n}=x|_{B_n}=y|_{B_n}= b|_{B_n}.$$\n\\end{proof}\n\nThis partially justifies the claim that steep slopes lead to greater `rigidity'. We are left to analyse the case where the slope is submaximal in every direction. As in the proof of Proposition 7.1 in \\cite{chandgotia2013Markov} we will now prove a certain mixing result for the shift space $X_\\mathcal H$.\n\n\\begin{lemma}\\label{lemma:patching_various_parts} Let $\\mathcal H$ be a connected four-cycle free graph and $|\\mathcal H|= r$. Consider any $x\\in X_\\mathcal H$ and some $y \\in X_\\mathcal H$ satisfying $Range_{\\partial D_{(d+1)n+3r+k}}(y)\\leq 2k$ for some $n \\in \\mathbb N$. 
Then\n\\begin{enumerate}\n\\item\\label{case:not_bipartite}\nIf either $\\mathcal H$ is not bipartite or $x_{\\vec 0}, y_{\\vec 0}$ are in the same partite class of $\\mathcal H$ then there exists $z\\in X_\\mathcal H$ such that\n$$z_{\\vec i}=\n\\begin{cases}\nx_{\\vec i}& \\ if \\ \\vec i \\in D_n\\\\\ny_{\\vec i} &\\ if \\ \\vec i \\in D_{(d+1)n+3r+k}^c.\n\\end{cases}$$\n\\item \\label{case:bipartite}\nIf $\\mathcal H$ is bipartite and $x_{\\vec 0}, y_{\\vec 0}$ are in different partite classes of $\\mathcal H$ then there exists $z\\in X_\\mathcal H$ such that\n$$z_{\\vec i}=\n\\begin{cases}\nx_{\\vec i+\\vec e_1} &if \\ \\vec i \\in D_n\\\\\ny_{\\vec i} & if \\ \\vec i \\in D_{(d+1)n+3r+k}^c.\n\\end{cases}$$\n\\end{enumerate}\n\\end{lemma}\nThe separation $dn+3r+k$ between the induced patterns of $x$ and $y$ is not optimal, but sufficient for our purposes.\n\\begin{proof} We will construct the configuration $z$ only in the case when $\\mathcal H$ is not bipartite. The construction in the other cases is similar; the differences will be pointed out in the course of the proof.\n\\begin{enumerate}\n\\item\n\\textbf{Boundary patterns with non-maximal range to monochromatic patterns inside.}\nLet $\\tilde y$ be a lift of $y$ and $\\mathcal T^\\prime$ be the image of $\\tilde y|_{ D_{(d+1)n+3r+k+1}}$. Let $\\mathcal T$ be a minimal subtree of $E_\\mathcal H$ such that\n$$Image(\\tilde y|_{\\partial D_{(d+1)n+3r+k}})\\subset \\mathcal T\\subset \\mathcal T^\\prime.$$\nSince $Range_{\\partial D_{(d+1)n+3r+k}}(y)\\leq 2k$, $diameter(\\mathcal T)\\leq 2k$. By Proposition \\ref{proposition:folding trees into other trees} there exists a graph homomorphism $f:\\mathcal T^\\prime \\longrightarrow \\mathcal T$ such that $f|_\\mathcal T$ is the identity.
Consider the configuration $\\tilde y^1$ given by\n$$\\tilde y^1_{\\vec i}= \\begin{cases}\nf(\\tilde y_{\\vec i}) &\\text{ if }\\vec i \\in D_{(d+1)n+3r+k+1}\\\\\n\\tilde y_{\\vec i} &\\text{ otherwise.}\n\\end{cases}$$\nThe pattern\n$$\\tilde y^1|_{D_{(d+1)n+3r+k+1}}\\in \\mathcal L_{D_{(d+1)n+3r+k+1}}(X_{\\mathcal T})\\subset \\mathcal L_{D_{(d+1)n+3r+k+1}}(X_{E_\\mathcal H}).$$\nMoreover, since $f|_\\mathcal T$ is the identity map,\n$$\\tilde y^1|_{D_{(d+1)n+3r+k}^c}=\\tilde y|_{D_{(d+1)n+3r+k}^c}\\in \\mathcal L_{D_{(d+1)n+3r+k}^c}(X_{E_\\mathcal H}).$$\nSince $X_{E_\\mathcal H}$ is given by nearest neighbour constraints, $\\tilde y^1\\in X_{E_\\mathcal H}$.\n\nRecall that the fold-radius of a nearest neighbour shift of finite type (in our case $X_\\mathcal T$) is the total number of full config-folds required to obtain a stiff shift. Since $diameter(\\mathcal T)\\leq 2k$, the fold-radius of $X_{\\mathcal T}$ is at most $k$. Let a stiff shift obtained by a sequence of config-folds starting at $X_{\\mathcal T}$ be denoted by $Z$. Since $\\mathcal T$ folds into a graph consisting of a single edge, $Z$ consists of two checkerboard patterns in the vertices of an edge in $\\mathcal T$, say $\\tilde v_1$ and $\\tilde v_2$. Corresponding to such a sequence of full config-folds, we had defined in Section \\ref{section:Folding, Entropy Minimality and the Pivot Property} the outward fixing map $O_{X_\\mathcal T, (d+1)n+3r+k}$.
By Proposition \\ref{prop: folding_ to _ stiffness_fixing_a_set} the configuration $O_{X_\\mathcal T,(d+1)n+3r+k}(\\tilde y^1)\\in X_{E_{\\mathcal H}}$ satisfies\n\\begin{eqnarray*}\nO_{X_\\mathcal T,(d+1)n+3r+k}(\\tilde y^1)|_{D_{(d+1)n+3r+1}}\\in \\mathcal L_{D_{(d+1)n+3r+1}}(Z) \\\\\nO_{X_\\mathcal T,(d+1)n+3r+k}(\\tilde y^1)|_{D_{(d+1)n+3r+k}^c}=\\tilde y^1|_{D_{(d+1)n+3r+k}^c}=\\tilde y|_{D_{(d+1)n+3r+k}^c}.\n\\end{eqnarray*}\n\\noindent Note that the pattern $O_{X_\\mathcal T,(d+1)n+3r+k}(\\tilde y^1)|_{\\partial D_{(d+1)n+3r}}$ uses a single symbol, say $\\tilde v_1$. Let $\\pi (\\tilde v_1)= v_1$. Then the configuration $y^\\prime= \\pi(O_{X_\\mathcal T,(d+1)n+3r+k}(\\tilde y^1))\\in X_\\mathcal H$ satisfies\n\\begin{eqnarray*}\ny^\\prime|_{\\partial D_{(d+1)n+3r}} &=& v_1\\\\\ny^\\prime|_{D_{(d+1)n+3r+k}^c}&=&y|_{D_{(d+1)n+3r+k}^c}.\n\\end{eqnarray*}\n\\item\n\\textbf{Constant extension of an admissible pattern.}\nConsider some lift $\\tilde x$ of $x$. We begin by extending $\\tilde x|_{B_n}$ to a periodic configuration $\\tilde x^1\\in X_{E_\\mathcal H}$. Consider the map $f: [-n, 3n]\\longrightarrow [-n, n]$ given by\n\\begin{equation*}\nf(k)=\\begin{cases}\nk &\\text{ if } k \\in [-n,n]\\\\\n2n-k &\\text{ if }k \\in [n,3n].\n\\end{cases}\n\\end{equation*}\n\\noindent Then we can construct the pattern $\\tilde a\\in \\mathcal L_{[-n, 3n]^d}(X_{E_\\mathcal H})$ given by\n$$\\tilde a_{i_1, i_2, \\ldots, i_d}= \\tilde x_{f(i_1), f(i_2), \\ldots, f(i_d)}.$$\nGiven $k, l \\in [-n, 3n]$ if $|k-l|=1$ then $|f(k)-f(l)|=1$. Thus $\\tilde a$ is a locally allowed pattern in $X_{E_\\mathcal H}$. Moreover since $f(-n)= f(3n)$ the pattern $\\tilde a$ is `periodic', meaning,\n$$\\tilde a_{i_1, i_2,\\ldots, i_{k-1}, -n, i_{k+1}, \\ldots, i_d }= \\tilde a_{i_1, i_2, \\ldots, i_{k-1}, 3n, i_{k+1}, \\ldots, i_d }$$\nfor all $i_1, i_2, \\ldots, i_d \\in [-n,3n]$. Also $\\tilde a|_{B_n}=\\tilde x|_{B_n}$.
Then the configuration $\\tilde x^1$ obtained by tiling $\\mathbb{Z}^d$ with $\\tilde a|_{[-n,3n-1]^d}$, that is,\n$$\\tilde x^1_{\\vec i}= \\tilde a_{(i_1\\!\\!\\!\\mod 4n,\\ i_2\\!\\!\\!\\mod 4n,\\ \\ldots,\\ i_d\\!\\!\\!\\mod 4n)-(n, n, \\ldots, n)}\\text{ for all }\\vec i \\in \\mathbb{Z}^d$$\nis an element of $X_{E_\\mathcal H}$. Moreover $\\tilde x^1|_{B_n}= \\tilde a|_{B_n}= \\tilde x|_{B_n}$ and $Image(\\tilde x^1)= Image(\\tilde x|_{B_n})$. Since $diameter(B_n)=2dn$, $diameter(Image(\\tilde x^1))\\leq 2dn$. Let $\\tilde \\mathcal T= Image(\\tilde x^1)$. Then the fold-radius of $X_{\\tilde \\mathcal T}$ is less than or equal to $dn$. Let a stiff shift obtained by a sequence of config-folds starting at $X_{\\tilde \\mathcal T}$ be denoted by $Z'$. Since $\\tilde \\mathcal T$ folds into a graph consisting of a single edge, $Z^\\prime$ consists of two checkerboard patterns in the vertices of an edge in $\\tilde \\mathcal T$, say $\\tilde w_1$ and $\\tilde w_2$. Then by Proposition \\ref{prop: folding_ to _ stiffness_fixing_a_set}\n\\begin{eqnarray*}\nI_{X_{\\tilde\\mathcal T},n}(\\tilde x^1)|_{D_n}= \\tilde x^1|_{D_n}= \\tilde x|_{D_n}\\\\\nI_{X_{\\tilde\\mathcal T},n}(\\tilde x^1)|_{D_{(d+1)n-1}^c} \\in \\mathcal L_{D_{(d+1)n-1}^c}(Z^\\prime).\n\\end{eqnarray*}\n\\noindent We note that $I_{X_{\\tilde\\mathcal T},n}(\\tilde x^1)|_{\\partial D_{(d+1)n-1}}$ consists of a single symbol, say $\\tilde w_1$. Let $\\pi(\\tilde w_1)= w_1$.
Then the configuration $x^\\prime=\\pi(I_{X_{\\tilde\\mathcal T},n}(\\tilde x^1)) \\in X_{\\mathcal H}$ satisfies\n\\begin{eqnarray*}\nx^\\prime|_{D_n}= x|_{D_n}\\text{ and}\\\\\nx^\\prime|_{\\partial D_{(d+1)n-1}}=w_1.\n\\end{eqnarray*}\n\\item \\textbf{Patching of an arbitrary pattern inside a configuration with non-maximal range.}\nWe will first prove that there exists a walk on $\\mathcal H$ from $w_1$ to $v_1$, $((w_1= u_1), u_2, \\ldots, (u_{3r+2}= v_1))$.\nSince the graph is not bipartite, it has an odd cycle $p_1$ such that $|p_1|\\leq r-1$. Let $v^\\prime$ be a vertex in $p_1$. Then there exist walks $p_2$ and $p_3$ from $w_1$ to $v^\\prime$ and from $v^\\prime$ to $v_1$ respectively, such that $|p_2|, |p_3|\\leq r-1$. Consider any vertex $w^\\prime\\sim_\\mathcal H v_1$. If\n$3r+1-|p_2|- |p_3|$ is even then the walk\n$$p_2\\star p_3 (\\star (v_1, w^\\prime, v_1))^{\\frac{3r+1-|p_2|- |p_3|}{2}}$$\nand if not, then the walk\n$$p_2\\star p_1 \\star p_3 (\\star (v_1, w^\\prime, v_1))^{\\frac{3r+1-|p_1|-|p_2|- |p_3|}{2}}$$\nis a walk of length $3r+1$ in $\\mathcal H$ from $w_1$ to $v_1$. This is the only place where we use the fact that $\\mathcal H$ is not bipartite. If it were bipartite, we would need $x^\\prime_{\\vec 0}$ and $y^\\prime_{\\vec 0}$ to be in the same partite class to construct such a walk.\n\nGiven such a walk, the configuration $z$ given by\n\\begin{eqnarray*}\nz|_{D_{(d+1)n}}&=& x^\\prime|_{D_{(d+1)n}}\\\\\nz|_{D^c_{(d+1)n +3r}}&= &y^\\prime|_{D^c_{(d+1)n +3r}}\\\\\nz|_{\\partial D_{(d+1)n+i-2}}&=& u_i\\text{ for all } 1\\leq i \\leq 3r+2\n\\end{eqnarray*}\n\\noindent is an element of $X_\\mathcal H$ for which $z|_{D_n}=x^\\prime|_{D_n}=x|_{D_n}$ and $z|_{D_{(d+1)n+3r+k}^c}=y^\\prime|_{D_{(d+1)n+3r+k}^c}=y|_{D_{(d+1)n+3r+k}^c}.$\n\\end{enumerate}\n\n\\end{proof}\n\nWe now return to the proof of Theorem \\ref{theorem: MRF fully supported }.
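The only use made of non-bipartiteness in the patching lemma is that an odd cycle lets one correct the parity of a walk, so any two vertices of a connected non-bipartite graph are joined by walks of every sufficiently large length, in particular of length exactly $3r+1$. This can be sanity-checked computationally; the sketch below (hypothetical graph and helper names, not from the paper) runs a BFS over states (vertex, parity):

```python
from collections import deque

def walk_lengths(adj, u, v, max_len):
    """Return the set of lengths L <= max_len for which a walk of length
    exactly L from u to v exists, via BFS on states (vertex, parity)."""
    # best[(w, p)] = shortest walk u -> w whose length has parity p
    best = {(u, 0): 0}
    q = deque([(u, 0, 0)])
    while q:
        w, p, l = q.popleft()
        for x in adj[w]:
            if (x, 1 - p) not in best:
                best[(x, 1 - p)] = l + 1
                q.append((x, 1 - p, l + 1))
    # a walk can always be padded by 2 (traverse an edge back and forth)
    return {L for L in range(max_len + 1)
            for p in (0, 1)
            if (v, p) in best and best[(v, p)] <= L and L % 2 == p}

# hypothetical non-bipartite H: triangle 0-1-2 with a pendant vertex 3
adj = {0: [1, 2, 3], 1: [0, 2], 2: [0, 1], 3: [0]}
r = len(adj)

# every ordered pair of vertices is joined by a walk of length exactly 3r+1
assert all(3 * r + 1 in walk_lengths(adj, u, v, 3 * r + 1)
           for u in adj for v in adj)
```

In a bipartite graph the parity of any walk between two fixed vertices is forced, which is exactly why the lemma's second case shifts $x$ by $\vec e_1$.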
Let $\\mu$ be an ergodic probability measure adapted to $X_\\mathcal H$ with positive entropy.\n\nSuppose $sl_{\\vec e_i}(x)= \\theta_i$ almost everywhere. By Lemma \\ref{lemma:slope 1 is frozen}, $\\theta_i<1$ for all $1\\leq i \\leq d$. Let $\\theta= \\max_i \\theta_i$ and $0<\\epsilon<\\frac{1}{4}\\left(1- \\theta\\right)$. Denote by $S^{d-1}$ the sphere of radius $1$ in $\\mathbb R^d$ for the $l^1$ norm. By Corollary \\ref{corollary: existence_of _slopes_in_reality} for all $\\vec{v}\\in S^{d-1}$\n$$\\lim_{n\\longrightarrow \\infty }\\frac{1}{n}h_x({\\vec{0}}, \\lfloor n \\vec{v}\\rfloor) \\leq \\theta$$\nalmost everywhere. Since $S^{d-1}$ is compact in $\\mathbb R^d$, we can choose a finite set $\\{\\vec{v}_1, \\vec{v}_2, \\ldots, \\vec{v}_t\\} \\subset S^{d-1}$ such that for all $\\vec{v}\\in S^{d-1}$ there exists some $1\\leq i\\leq t$ satisfying $\\|\\vec{v}_i -\\vec v\\|_1<\\epsilon$. By Egoroff's theorem \\cite{Follandreal1999}, given $\\epsilon$ as above, there exists $N_0\\in \\mathbb N$ such that for all $n\\geq N_0$ and $1\\leq i \\leq t$\n\\begin{equation}\n\\mu(\\{x\\in X_\\mathcal H\\:|\\:h_x({\\vec{0}}, \\lfloor n \\vec{v}_i\\rfloor)\\leq n\\theta + n\\epsilon\\ for\\ all\\ 1\\leq i\\leq t\\}) >1-\\epsilon.\\label{equation:uniform_continuity_of_heights}\n\\end{equation}\nLet $\\vec{v} \\in \\partial D_{n-1}$ and $1\\leq i_0\\leq t$ be such that\n$\\|\\frac{1}{n}\\vec{v}-\\vec{v}_{i_0}\\|_1<\\epsilon$.
If for some $x\\in X_\\mathcal H$ and $n \\in \\mathbb N$\n$$h_x({\\vec{0}}, \\lfloor n \\vec{v}_{i_0}\\rfloor)\\leq n\\theta + n\\epsilon$$\nthen\n$$h_x({\\vec{0}}, \\lfloor \\vec{v}\\rfloor)\\leq h_x({\\vec{0}}, \\lfloor n \\vec{v}_{i_0}\\rfloor) +\\lceil n \\epsilon \\rceil \\leq n\\theta + 2n\\epsilon+1.$$\nBy Inequality \\ref{equation:uniform_continuity_of_heights} we get\n$$\\mu\\left(\\{x\\in X_\\mathcal H\\:|\\: h_x\\left({\\vec{0}}, \\lfloor \\vec{v}\\rfloor\\right)\\leq n\\theta + 2n\\epsilon+1\\ for\\ all\\ \\vec{v}\\in \\partial D_{n-1}\\}\\right) >1-\\epsilon$$\nfor all $n\\geq N_0$. Therefore for all $n\\geq N_0$ there exists $x^{(n)}\\in supp(\\mu)$ such that\n$$Range_{\\partial D_{n-1}}\\left(x^{(n)}\\right)\\leq 2n\\theta + 4 n \\epsilon +2< 2n(1- \\epsilon)+2.$$\nLet $x \\in X_\\mathcal H$ and $n_0\\in \\mathbb N$. It is sufficient to prove that $\\mu([x]_{D_{n_0-1}})>0$. Let $r:=|\\mathcal H|$. Choose $k \\in \\mathbb N$ such that\n\\begin{eqnarray*}\nn_0(d+1)+3r+k+1&\\geq&N_0\\\\\n2\\left(n_0(d+1)+3r+k+1\\right)(1-\\epsilon)+2&\\leq&2k.\n\\end{eqnarray*}\nThen by Lemma \\ref{lemma:patching_various_parts} there exists $z\\in X_\\mathcal H$ such that either\n\\begin{equation*}\nz_{\\vec{j}}=\n\\begin{cases}\nx_{\\vec{j}} \\quad\\quad\\quad\\quad \\quad\\quad\\: &if \\ {\\vec{j}} \\in D_{n_0}\\\\\nx^{\\left(n_0(d+1)+3r+k+1\\right)}_{\\vec{j}} \\ &if \\ {\\vec{j}} \\in D_{n_0(d+1)+3r+k}^c\n\\end{cases}\n\\end{equation*}\nor\n\\begin{equation*}\nz_{\\vec{j}}=\n\\begin{cases}\nx_{{\\vec{j}} +\\vec e_1}\\quad\\quad\\quad\\quad \\quad\\quad \\:& if \\ {\\vec{j}} \\in D_{n_0}\\\\\nx^{\\left(n_0(d+1)+3r+k+1\\right)}_{\\vec{j}} \\ &if \\ {\\vec{j}} \\in D_{n_0(d+1)+3r+k}^c.\n\\end{cases}\n\\end{equation*}\nIn either case $(z, x^{\\left(n_0(d+1)+3r+k+1\\right)})\\in \\Delta_{X_\\mathcal H}$. Since $\\mu$ is adapted to $X_\\mathcal H$, $z\\in supp(\\mu)$. In the first case we get that $\\mu([x]_{D_{n_0-1}})=\\mu([z]_{D_{n_0-1}})>0$.
In the second case we get that $$\\mu([x]_{D_{n_0-1}})=\\mu(\\sigma^{\\vec e_1}([x]_{D_{n_0-1}}))=\\mu([z]_{D_{n_0-1}-\\vec e_1})>0.$$\nThis completes the proof.\n\n\\end{proof}\n\nEvery shift space conjugate to an entropy minimal shift space is entropy minimal. However, a shift space $X$ which is conjugate to $X_\\mathcal H$ for $\\mathcal H$ which is connected and four-cycle free need not even be a hom-shift. By following the proof carefully, it is possible to extract a condition for entropy minimality which is conjugacy-invariant:\n\n\\begin{thm}\\label{theorem:conjugacy_invariant_entropy minimality condition}\nLet $X$ be a shift of finite type and $c$ a continuous sub-cocycle on $X$ with the property that $c(\\cdot, {\\vec{i}})\\leq \\|{\\vec{i}}\\|_1$ for all ${\\vec{i}} \\in \\mathbb{Z}^d$. If every ergodic probability measure $\\mu$ adapted to $X$ satisfies:\n\\begin{enumerate}\n\\item\nIf $sl^c_{\\vec e_i}(x)=1$ almost everywhere for some $1\\leq i \\leq d$ then $h_\\mu< h_{top}(X)$.\n\\item\nIf $sl^c_{\\vec e_i}(x)<1$ almost everywhere for all $1\\leq i \\leq d$ then $supp(\\mu)=X$.\n\\end{enumerate}\nthen $X$ is entropy minimal.\n\\end{thm}\n\nHere is a sketch: By Proposition \\ref{proposition:entropyviamme} and Theorems \\ref{thm:equiGibbs}, \\ref{theorem: ergodic decomposition of markov random fields} it is sufficient to prove that every ergodic measure of maximal entropy is fully supported. If $X$ is a shift of finite type satisfying the hypothesis of Theorem \\ref{theorem:conjugacy_invariant_entropy minimality condition} then it is entropy minimal because every ergodic measure of maximal entropy of $X$ is an ergodic probability measure adapted to $X$; its entropy is either smaller than $h_{top}(X)$ or it is fully supported. To see why the condition is conjugacy invariant suppose that $f:X\\longrightarrow Y$ is a conjugacy and $c\\in C_Y$ satisfies the hypothesis of the theorem.
Then by Proposition \\ref{proposition: sub-cocycles under conjugacy} it follows that ${f^\\star}(c)\\in C_X$ satisfies the hypothesis as well.\n\n\\begin{proof}[Proof of Theorem \\ref{theorem: pivot property for four cycle free}] By Proposition \\ref{proposition: pivot for disconnected} we can assume that $\\mathcal H$ is connected. Consider some $(x, y)\\in \\Delta_{X_\\mathcal H}$. By Corollary \\ref{corollary:covering_space_lifting_homoclinic} there exist $(\\tilde x, \\tilde y)\\in \\Delta_{X_{E_\\mathcal H}}$ such that $\\pi(\\tilde x)= x$ and $\\pi(\\tilde y)=y$. It is sufficient to prove that there is a chain of pivots from $\\tilde x$ to $\\tilde y$. We will proceed by induction on $\\sum_{{\\vec{i}} \\in \\mathbb{Z}^d} d_{E_\\mathcal H}(\\tilde x_{{\\vec{i}}}, \\tilde y_{\\vec{i}})$. The induction hypothesis (on $M$) is : If $\\sum_{{\\vec{i}} \\in \\mathbb{Z}^d} d_{E_\\mathcal H}(\\tilde x_{\\vec{i}}, \\tilde y_{\\vec{i}})= 2M$ then there exists a chain of pivots from $\\tilde x$ to $\\tilde y$.\n\nWe note that $d_{E_\\mathcal H}(\\tilde x_{\\vec{i}}, \\tilde y_{\\vec{i}})$ is even for all ${\\vec{i}} \\in \\mathbb{Z}^d$ since there exists ${\\vec{i}}^\\prime\\in \\mathbb{Z}^d$ such that $\\tilde x_{{\\vec{i}}^\\prime}= \\tilde y_{{\\vec{i}}^{\\prime}}$ and hence $\\tilde x_{\\vec{i}}$ and $\\tilde y_{\\vec{i}}$ are in the same partite class of $E_\\mathcal H$ for all ${\\vec{i}} \\in \\mathbb{Z}^d$.\n\nThe base case $(M=1)$ occurs exactly when $\\tilde x$ and $\\tilde y$ differ at a single site; there is nothing to prove in this case. Assume the hypothesis for some $M\\in \\mathbb N$.\n\nConsider $(\\tilde x, \\tilde y)\\in \\Delta_{X_{E_\\mathcal H}}$ such that\n$$\\sum_{{\\vec{i}} \\in \\mathbb{Z}^d} d_{E_\\mathcal H}(\\tilde x_{\\vec{i}}, \\tilde y_{\\vec{i}})=2M+2.$$\nLet\n$$B=\\{{\\vec{j}} \\in \\mathbb{Z}^d\\:|\\: \\tilde x_{\\vec{j}} \\neq \\tilde y_{\\vec{j}}\\}$$\nand a vertex $\\tilde v\\in E_\\mathcal H$. 
Without loss of generality we can assume that\n\\begin{equation}\n\\max_{{\\vec{i}} \\in B} d_{E_\\mathcal H}(\\tilde v, \\tilde x_{\\vec{i}})\\geq \\max_{{\\vec{i}} \\in B} d_{E_\\mathcal H}(\\tilde v, \\tilde y_{\\vec{i}}).\\label{equation:assumption_for_pivot}\n\\end{equation}\nConsider some ${\\vec{i}}_0 \\in B$ such that\n$$d_{E_\\mathcal H}(\\tilde v, \\tilde x_{{\\vec{i}}_0})= \\max_{{\\vec{i}} \\in B}d_{E_\\mathcal H}(\\tilde v, \\tilde x_{{\\vec{i}}}).$$\nConsider the shortest walks $(\\tilde v= \\tilde v_1, \\tilde v_2, \\ldots, \\tilde v_n=\\tilde x_{{\\vec{i}}_0})$ from $\\tilde v$ to $\\tilde x_{{\\vec{i}}_0}$ and $(\\tilde v= \\tilde v^\\prime_1, \\tilde v^\\prime_2, \\ldots, \\tilde v^\\prime_{n^\\prime}=\\tilde y_{{\\vec{i}}_0})$ from $\\tilde v$ to $\\tilde y_{{\\vec{i}}_0}$. By Assumption \\ref{equation:assumption_for_pivot}, $n^\\prime\\leq n$. Since these are the shortest walks on a tree, if $\\tilde v^\\prime_k=\\tilde v_{k^\\prime}$ for some $1\\leq k\\leq n^\\prime$ and $1\\leq k^{\\prime} \\leq n$ then $k =k^{\\prime}$ and $\\tilde v_l = \\tilde v_l^\\prime$ for $1\\leq l \\leq k$. Let\n$$k_0= \\max\\{1\\leq k\\leq n^\\prime\\:|\\: \\tilde v_k^\\prime= \\tilde v_k\\}.$$\nThen the shortest walk from $\\tilde x_{{\\vec{i}}_0}$ to $\\tilde y_{{\\vec{i}}_0}$ is given by $\\tilde x_{{\\vec{i}}_0}=\\tilde v_n, \\tilde v_{n-1}, \\tilde v_{n-2}, \\ldots, \\tilde v_{k_0}, \\tilde v^\\prime_{k_0+1}, \\ldots, \\tilde v^\\prime_{n^\\prime}= \\tilde y_{{\\vec{i}}_0}$.\n\nWe will prove that for all $\\vec i \\sim \\vec i_{0}$, $\\tilde x_{\\vec i}= \\tilde v_{n-1}$. 
This is sufficient to complete the proof since then the configuration\n$$\\tilde x^{(1)}_{{\\vec{j}}}=\n\\begin{cases}\n\\tilde x_{\\vec{j}} \\quad\\ \\:if\\ {\\vec{j}} \\neq {\\vec{i}}_0\\\\\n\\tilde v_{n-2}\\ \\ if\\ {\\vec{j}} = {\\vec{i}}_0,\n\\end{cases}$$\n\\noindent is an element of $X_{E_\\mathcal H}$, $(\\tilde x,\\tilde x^{(1)})$ is a pivot and\n$$n+n^\\prime -2 k_0 -2=d_{E_\\mathcal H}{(\\tilde x^{(1)}_{{\\vec{i}}_0}, \\tilde y_{{\\vec{i}}_0})}< d_{E_\\mathcal H}(\\tilde x_{{\\vec{i}}_0}, \\tilde y_{{\\vec{i}}_0})=n+n^\\prime -2 k_0$$\n\\noindent giving us a pair $(\\tilde x^{(1)}, \\tilde y)$ such that\n$$\\sum_{{\\vec{i}} \\in \\mathbb{Z}^d} d_{E_\\mathcal H}(\\tilde x^{(1)}_{\\vec{i}}, \\tilde y_{\\vec{i}})=\\sum_{{\\vec{i}} \\in \\mathbb{Z}^d} d_{E_\\mathcal H}(\\tilde x_{\\vec{i}}, \\tilde y_{\\vec{i}})-2= 2M$$\n\nto which the induction hypothesis applies. There are two possible cases:\n\\begin{enumerate}\n\\item\n${\\vec{i}} \\in B$: Then $d_{E_\\mathcal H}(\\tilde v, \\tilde x_{\\vec i})=d_{E_\\mathcal H}(\\tilde v, \\tilde x_{\\vec i_0})-1$ and $\\tilde x_{\\vec{i}}\\sim_{E_\\mathcal H} \\tilde x_{\\vec i_0}$. Since $E_\\mathcal H$ is a tree, $\\tilde x_{\\vec{i}}= \\tilde v_{n-1}$.\n\n\n\\item ${\\vec{i}} \\notin B$: Then $\\tilde x_{\\vec{i}}= \\tilde y_{\\vec{i}}$ and we get that $d_{E_\\mathcal H}(\\tilde x_{{\\vec{i}}_0}, \\tilde y_{{\\vec{i}}_0})=2$. Since $\\tilde x_{\\vec{i}}\\sim_{E_\\mathcal H} \\tilde x_{{\\vec{i}}_0}$, the shortest walk joining $\\tilde v$ and $\\tilde x_{{\\vec{i}}}$ must either be $\\tilde v= \\tilde v_1, \\tilde v_2, \\ldots, \\tilde v_{n-1}= \\tilde x_{{\\vec{i}}}$ or $\\tilde v= \\tilde v_1, \\tilde v_2,\\ldots,\\tilde v_{n}= \\tilde x_{{\\vec{i}}_0}, \\tilde v_{n+1}= \\tilde x_{{\\vec{i}}}$. We want to prove that the former is true. 
Suppose not.\n\nSince $\\tilde y_{{\\vec{i}}_0}\\sim_{E_\\mathcal H} \\tilde x_{{\\vec{i}}}$ and ${\\vec{i}}_0\\in B$, the shortest walk from $\\tilde v$ to $\\tilde y_{{\\vec{i}}_0}$ is $\\tilde v= \\tilde v_1, \\tilde v_2,\\ldots,\\tilde v_{n}= \\tilde x_{{\\vec{i}}_0}, \\tilde v_{n+1}= \\tilde x_{{\\vec{i}}}, \\tilde v_{n+2}=\\tilde y_{{\\vec{i}}_0} $. This contradicts Assumption \\ref{equation:assumption_for_pivot} and completes the proof.\n\n\\end{enumerate}\n\n\\end{proof}\n\n\n\n\\section{Further Directions}\n\\subsection{Getting Rid of the Four-Cycle Free Condition}\n\nIn the context of the results in this paper, the four-cycle free condition seems a priori artificial; we feel that in many cases it is a mere artifact of the proof. To the author, getting rid of this condition is an important and interesting topic for future research. Here we will illustrate what goes wrong when we try to apply our proofs for the simplest possible example with four-cycles, that is, $C_4$.\n\n\nWe have shown (Example \\ref{Example: Folds to an edge}) that $X_{C_4}$ satisfies the hypothesis of Propositions \\ref{proposition: periodicfoldentropy} and \\ref{proposition: frozenfoldpivot} and thus it also satisfies the conclusions of Theorems \\ref{theorem: MRF fully supported } and \\ref{theorem: pivot property for four cycle free}. The proofs of Theorems \\ref{theorem: MRF fully supported } and \\ref{theorem: pivot property for four cycle free} however rely critically on the existence of lifts to the universal cover, that is, Proposition \\ref{proposition:covering_space_lifting}. However the conclusion of this proposition does not hold for $X_{C_4}$: The universal cover of $C_4$ is $\\mathbb{Z}$ and the corresponding covering map $\\pi: \\mathbb{Z} \\longrightarrow C_4$ is given by $\\pi(i)= i \\mod 4$. 
By the second remark following Theorem 4.1 in \\cite{chandgotia2013Markov} it follows that the induced map $\\pi: X_{\\mathbb{Z}}\\longrightarrow X_{C_4}$ is not surjective disproving the conclusion of Proposition \\ref{proposition:covering_space_lifting} for $X_{C_4}$.\n\n\\subsection{Identification of Hom-Shifts}\n\\noindent\\textbf{Question 1:} Given a shift space $X$, are there some nice decidable conditions which imply that $X$ is conjugate to a hom-shift?\n\nBeing conjugate to a hom-shift lays many restrictions on the shift space, for instance on its periodic configurations. Consider a conjugacy $f:X\\longrightarrow X_\\mathcal H$ where $\\mathcal H$ is a finite undirected graph. Let $Z\\subset X_\\mathcal H$ be the set of configurations invariant under $\\{\\sigma^{2\\vec e_i}\\}_{i=1}^d$. Then there is a bijection between $Z$ and $\\mathcal L_A(X_\\mathcal H)$ where $A$ is the rectangular shape\n$$A:=\\{\\sum_{i=1}^{d}\\delta_i \\vec e_i\\:|\\: \\delta_i\\in \\{0,1\\}\\}$$\nbecause every pattern in $\\mathcal L_A(X_\\mathcal H)$ extends to a unique configuration in $Z$. More generally given a graph $\\mathcal H$ it is not hard to compute the number of periodic configurations for a specific finite-index subgroup of $\\mathbb{Z}^d$. Moreover periodic points are dense in these shift spaces and there are algorithms to compute approximating upper and lower bounds of their entropy \\cite{symmtricfriedlan1997,louidor2010improved}. Hence the same then has to hold for the shift space $X$ as well. 
We are not familiar with nice decidable conditions which imply that a shift space is conjugate to a hom-shift.\n\n\\subsection{Hom-Shifts and Strong Irreducibility}\\label{subsection: homSI}\n\n\\noindent\\textbf{Question 2:} Which hom-shifts are strongly irreducible?\n\nWe know two such conditions:\n\\begin{enumerate}\n\\item\\cite{brightwell2000gibbs}\nIf $\\mathcal H$ is a finite graph which folds into $\\mathcal H^\\prime$ then $X_\\mathcal H$ is strongly irreducible if and only if $X_{\\mathcal H'}$ is strongly irreducible. This reduces the problem to graphs $\\mathcal H$ which are stiff. For instance if $\\mathcal H$ is dismantlable, then $X_\\mathcal H$ is strongly irreducible.\n\\item\\cite{Raimundo2014}\n$X_\\mathcal H$ is single site fillable. A shift space $X_{\\mathcal F}\\subset {\\mathcal A}^{\\mathbb{Z}^d}$ is said to be \\emph{single site fillable} if for all patterns $a\\in {\\mathcal A}^{\\partial\\{\\vec 0\\}}$ there exists a locally allowed pattern in $X_{\\mathcal F}$, $b\\in {\\mathcal A}^{D_1}$ such that\n$b|_{\\partial\\{\\vec 0\\}}=a$. In case $X_{\\mathcal F}= X_\\mathcal H$ for some graph $\\mathcal H$ then it is single site fillable if and only if given vertices $v_1, v_2, \\ldots, v_{2d}\\in \\mathcal H$ there exists a vertex $v\\in \\mathcal H$ adjacent to all of them.\n\\end{enumerate}\nIt follows that $X_{K_5}$ is single site fillable and hence strongly irreducible for $d=2$. In fact strong irreducibility has been proved in \\cite {Raimundo2014} for shifts of finite type with a weaker mixing condition called TSSM. This does not cover all possible examples. For instance it was proved in \\cite{Raimundo2014} that $X_{K_4}$ is strongly irreducible for $d=2$ even though it is not TSSM and $K_4$ is stiff. 
We do not know if it is possible to verify whether a given hom-shift is TSSM.\n\n\\subsection{Hom-Shifts and Entropy Minimality}\n\\noindent\\textbf{Question 3:} Given a finite connected graph $\\mathcal H$, when is $X_\\mathcal H$ entropy minimal?\n\nWe have provided some examples in the paper:\n\\begin{enumerate}\n\\item\n$\\mathcal H$ can be folded to a single vertex with a loop or a single edge. (Proposition \\ref{proposition: periodicfoldentropy})\n\\item $\\mathcal H$ is four-cycle free. (Theorem \\ref{theorem:four cycle free entropy minimal})\n\\end{enumerate}\nAgain this does not provide the full picture. For instance, $X_{K_4}$ is strongly irreducible when $d=2$ and hence entropy minimal even though $K_4$ is stiff and not four-cycle free.\nA possible approach might be via identifying the right sub-cocycle and Theorem \\ref{theorem:conjugacy_invariant_entropy minimality condition}.\n\n\\noindent\\textbf{Conjecture:} Let $d=2$ and $\\mathcal H$ be a finite connected graph. Then $X_\\mathcal H$ is entropy minimal.\n\n\n\\subsection{Hom-Shifts and the Pivot Property}\\label{subsection: Hom-shifts and the pivot property}\nWe have given a list of examples of graphs $\\mathcal H$ for which the shift space $X_\\mathcal H$ has the pivot property in Section \\ref{section: the pivot property}. In this paper we have provided two further sets of examples:\n\\begin{enumerate}\n\\item\n$\\mathcal H$ can be folded to a single vertex with a loop or a single edge. (Proposition \\ref{proposition: frozenfoldpivot})\n\\item\n$\\mathcal H$ is four-cycle free. (Theorem \\ref{theorem: pivot property for four cycle free})\n\\end{enumerate}\n\nWe saw in Section \\ref{section: the pivot property} that $X_{K_4}, X_{K_5}$ do not have the pivot property when $d=2$. 
However they do satisfy a weaker property which we will describe next.\n\nA shift space $X$ is said to have the \\emph{generalised pivot property} if there is an $r\\in \\mathbb N$ such that for all $(x,y)\\in \\Delta_X$ there exists a chain $x^1=x, x^2, x^3, \\ldots, y=x^n\\in X$ such that $x^i$ and $x^{i+1}$ differ at most on some translate of $D_r$.\n\nIt can be shown that any nearest neighbour shift of finite type $X\\subset {\\mathcal A}^\\mathbb{Z}$ has the generalised pivot property. In higher dimensions this is not true without any hypothesis; look for instance in Section 9 in \\cite{chandgotia2013Markov}. It is not hard to prove that any single site fillable nearest neighbour shift of finite type has the generalised pivot property. This can be generalised further: in \\cite{Raimundo2014} it is proven that every shift space satisfying TSSM has the generalised pivot property.\n\n\\noindent\\textbf{Question 4:} For which graphs $\\mathcal H$ does $X_\\mathcal H$ satisfy the pivot property? What about the generalised pivot property?\n\n\\section{Acknowledgments}\nI would like to thank my advisor, Prof. Brian Marcus for dedicated reading of a million versions of this paper, numerous suggestions, insightful discussions and many other things. The line of thought in this paper was begot in discussions with Prof. Tom Meyerovitch, his suggestions and remarks have been very valuable to me. I will also like to thank Prof. Ronnie Pavlov, Prof. Sam Lightwood, Prof. Michael Schraudner, Prof. Anthony Quas, Prof. Klaus Schmidt, Prof. Mahan Mj, Prof. Peter Winkler and Raimundo Brice\\~no for giving a patient ear to my ideas and many useful suggestions. Lastly, I will like to thank Prof. Jishnu Biswas; he had introduced me to universal covers, more generally to the wonderful world of algebraic topology. This research was partly funded by the Four-Year Fellowship at the University of British Columbia. 
Lastly I would like to thank the anonymous referee for giving many helpful comments and corrections largely improving the quality of the paper.\n\n\\bibliographystyle{abbrv}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\\label{introduction}\nNeural network based approaches have become popular frameworks in many machine learning research fields, showing its advantages over traditional methods. In NLP tasks, two types of neural networks are widely used: Recurrent Neural Network (RNN) and Convolutional Neural Network (CNN). \n\nRNNs are powerful models in various NLP tasks, such as machine translation \\citep{cho-etal-2014}, sentiment classification \\citep{wang-and-tian-2016,liu-emnlp-2016,wang-etal-2016,zhang-etal-2016,liang-etal-2016}, reading comprehension \\citep{kadlec-etal-2016,dhingra-etal-2016,sordoni-etal-2016,cui-etal-2016,cui-etal-2017-aoa,yang-etal-2016}, etc. \nThe recurrent neural networks can flexibly model different lengths of sequences into a fixed representation. \nThere are two main implementations of RNN: Long Short-Term Memory (LSTM) \\citep{hochreiter-1997} and Gated Recurrent Unit (GRU) \\citep{cho-etal-2014}, which solve the gradient vanishing problems in vanilla RNNs. \n\nCompared to RNN, the CNN model also shows competitive performances in some tasks, such as text classification \\citep{kim-2014}, etc.\nHowever, different from RNN, CNN sets a pre-defined convolutional kernel to ``summarize'' a fixed window of adjacent elements into blended representations, showing its ability of modeling local context.\n\nAs both global and local information is important in most of NLP tasks \\citep{luong-etal-2015}, in this paper, we propose a novel recurrent unit, called Contextual Recurrent Unit (CRU). The proposed CRU model adopts advantages of RNN and CNN, where CNN is good at modeling local context, and RNN is superior in capturing long-term dependencies. 
We propose three variants of our CRU model: {\\em shallow fusion}, {\\em deep fusion} and {\\em deep-enhanced fusion}. \n\nTo verify the effectiveness of our CRU model, we utilize it into two different NLP tasks: sentiment classification and reading comprehension, where the former is sentence-level modeling, and the latter is document-level modeling. \nIn the sentiment classification task, we build a standard neural network and replace the recurrent unit by our CRU model.\nTo further demonstrate the effectiveness of our model, we also tested our CRU in reading comprehension tasks with a strengthened baseline system originated from Attention-over-Attention Reader (AoA Reader) \\citep{cui-etal-2017-aoa}.\nExperimental results on public datasets show that our CRU model could substantially outperform various systems by a large margin, and set up new state-of-the-art performances on related datasets.\nThe main contributions of our work are listed as follows.\n\\begin{itemize}[leftmargin=*]\n\t\\item We propose a novel neural recurrent unit called Contextual Recurrent Unit (CRU), which effectively incorporate the advantage of CNN and RNN. Different from previous works, our CRU model shows its excellent flexibility as GRU and provides better performance.\n\t\\item The CRU model is applied to both sentence-level and document-level modeling tasks and gives state-of-the-art performances.\n\t\\item The CRU could also give substantial improvements in cloze-style reading comprehension task when the baseline system is strengthened by incorporating additional features which will enrich the representations of unknown words and make the texts more readable to the machine.\n\\end{itemize}\n\n\\section{Related Works}\\label{related-work}\n\nGated recurrent unit (GRU) has been proposed in the scenario of neural machine translations \\citep{cho-etal-2014}. It has been shown that the GRU has comparable performance in some tasks compared to the LSTM. 
Another advantage of GRU is that it has a simpler neural architecture than LSTM, showing a much efficient computation.\n\nHowever, convolutional neural network (CNN) is not as popular as RNNs in NLP tasks, as the texts are formed temporally. But in some studies, CNN shows competitive performance to the RNN models, such as text classification \\citep{kim-2014}.\n\nVarious efforts have been made on combining CNN and RNN.\n\\citet{wang-etal-2016} proposed an architecture that combines CNN and GRU model with pre-trained word embeddings by word2vec. \n\\citet{liang-etal-2016} proposed to combine asymmetric convolution neural network with the bidirectional LSTM network. \n\\citet{zhang-etal-2016} presented Dependency Sensitive CNN, which hierarchically construct text by using LSTMs and extracting features with convolution operations subsequently. \n\\citet{cai-etal-2016} propose to make use of dependency relations information in the shortest dependency path (SDP) by combining CNN and two-channel LSTM units. \n\\citet{kim-etal-2016} build a neural network for dialogue topic tracking where the CNN used to account for semantics at individual utterance and RNN for modeling conversational contexts along multiple turns in history.\n\n\nThe difference between our CRU model and previous works can be concluded as follows.\n\\begin{itemize}[leftmargin=*]\n \\item Our CRU model could adaptively control the amount of information that flows into different gates, which was not studied in previous works.\n \\item Also, the CRU does not introduce a pooling operation, as opposed to other works, such as CNN-GRU \\citep{wang-etal-2016}. Our motivation is to provide flexibility as the original GRU, while the pooling operation breaks this law (the output length is changed), and it is unable to do exact word-level attention over the output. However, in our CRU model, the output length is the same as the input's and can be easily applied to various tasks where the GRU used to. 
\n \\item We also observed that using only a CNN to summarize contextual information is not strong enough. So we incorporate the original word embeddings to form a \"word + context\" representation for enhancement.\n\\end{itemize}\n\n\n\\section{Our approach}\\label{cru}\nIn this section, we will give a detailed introduction to our CRU model.\nFirstly, we will give a brief introduction to GRU \\citep{cho-etal-2014} as preliminaries, and then three variants of our CRU model will be illustrated.\n\n\\subsection{Gated Recurrent Unit}\nGated Recurrent Unit (GRU) is a type of recurrent unit that models sequential data \\citep{cho-etal-2014}, which is similar to LSTM but much simpler and more computationally efficient than the latter. We will briefly introduce the formulation of GRU.\nGiven a sequence $x = \\{x_1, x_2, ..., x_n\\}$, GRU will process the data in the following ways. For simplicity, the bias term is omitted in the following equations.\n\\begin{gather}\nz_t = \\sigma(W_z x_t+U_z h_{t-1}) \\\\\nr_t = \\sigma(W_r x_t+U_r h_{t-1}) \\\\\n\\widetilde{h_t} = \\tanh(W x_t+U [r_t \\odot h_{t-1}]) \\\\\nh_t = z_t h_{t-1} + (1-z_t) \\widetilde{h_t}\n\\end{gather}\n\nwhere $z_t$ is the update gate, $r_t$ is the reset gate, and the non-linear function $\\sigma$ is often chosen as the $sigmoid$ function.\nIn many NLP tasks, we often use a bi-directional GRU, which takes both forward and backward information into account.\n\n\n\\subsection{Contextual Recurrent Unit}\nModeling only word-level representations has drawbacks in representing a word whose meaning changes as the context varies. \nHere is an example that shows this problem.\n\n\\begin{quote}\n\\begin{scriptsize}\\begin{verbatim}\nThere are many fan mails in the mailbox. 
\nThere are many fan makers in the factory.\n\\end{verbatim}\\end{scriptsize}\n\\end{quote}\n\nAs we can see, though the two sentences share the same beginning before the word {\\em fan}, the meanings of the word {\\em fan} itself are totally different when we meet the following words {\\em mails} and {\\em makers}. The first {\\em fan} means ``a person that has strong interests in a person or thing\", and the second one means ``a machine with rotating blades for ventilation\".\nHowever, the embedding of the word {\\em fan} does not discriminate according to the context. \nAlso, as the two sentences have the same beginning, when we apply a recurrent operation (such as GRU) up to the word {\\em fan}, the output of the GRU does not change, though the sentences have entirely different meanings when we see the following words.\n\nTo enrich the word representation with local contextual information and diminish word ambiguities, we propose a model as an extension to the GRU, called Contextual Recurrent Unit (CRU).\nIn this model, we take full advantage of the convolutional neural network and the recurrent neural network, where the former is good at modeling local information, and the latter is capable of capturing long-term dependencies.\nMoreover, in the experiment part, we will show that our bidirectional CRU significantly outperforms the bidirectional GRU model.\n\nIn this paper, we propose three different types of CRU models: {\\em shallow fusion}, {\\em deep fusion} and {\\em deep-enhanced fusion}, from the most fundamental one to the most expressive one. \nWe will describe these models in detail in the following sections.\n\n\\subsubsection{Shallow Fusion}\nThe simplest one is to directly apply a CNN layer after the embedding layer to obtain blended contextual representations. Then a GRU layer is applied afterward. We call this model {\\em shallow fusion}, because the CNN and RNN are applied linearly without changing inner architectures of both. 
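The shallow fusion idea (a CNN layer over the embeddings first, then a GRU over the blended contextual representations) can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation; all parameter names, shapes, and the dictionary layout `P` are our own assumptions, and the gate equations follow the GRU formulation given earlier.

```python
import numpy as np

def same_length_conv(E, W, b):
    """Embedding-wise 'same-length' convolution: each of the m filters in W
    (shape k x d x m) spans k consecutive d-dimensional embeddings; the input
    is zero-padded so the output keeps the sequence length n."""
    n, d = E.shape
    k, _, m = W.shape
    pad_front = k // 2
    Ep = np.vstack([np.zeros((pad_front, d)), E, np.zeros((k - 1 - pad_front, d))])
    C = np.empty((n, m))
    for i in range(n):
        window = Ep[i:i + k]                                           # k x d local context
        C[i] = np.maximum(0.0, np.einsum('kd,kdm->m', window, W) + b)  # ReLU
    return C

def gru_step(h, x, P):
    """One GRU step: update gate z, reset gate r, candidate state h~."""
    sigmoid = lambda a: 1.0 / (1.0 + np.exp(-a))
    z = sigmoid(P['Wz'] @ x + P['Uz'] @ h)
    r = sigmoid(P['Wr'] @ x + P['Ur'] @ h)
    h_tilde = np.tanh(P['W'] @ x + P['U'] @ (r * h))
    return z * h + (1.0 - z) * h_tilde   # z gates the previous state, as in Eq. 4

def shallow_fusion(E, W, b, P):
    """Run the CNN over the embeddings first, then feed the blended
    contextual representations c_1..c_n into the GRU, one per time step."""
    C = same_length_conv(E, W, b)
    h = np.zeros(P['Uz'].shape[0])
    states = []
    for t in range(C.shape[0]):
        h = gru_step(h, C[t], P)
        states.append(h)
    return np.stack(states)
```

Because the convolution is same-length, the GRU still sees exactly one vector per input token, so word-level attention over the outputs remains possible, as emphasized in the text.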
\n\nFormally, given sequential data $x = \\{x_1, x_2, ..., x_n\\}$, a shallow fusion of CRU can be illustrated as follows.\n\\begin{gather} \ne_t = W_e \\cdot x_t ~~;~~ c_t = \\phi(\\widetilde{e_t}) \\\\\nh_t = GRU(h_{t-1}, c_t) \n\\end{gather}\n\nWe first transform word $x_t$ into word embeddings through an embedding matrix $W_e$.\nThen a convolutional operation $\\phi$ is applied to the context of $e_t$, denoted as $\\widetilde{e_t}$, to obtain contextual representations.\nFinally, the contextual representation $c_t$ is fed into GRU units.\n\nFollowing \\cite{kim-2014}, we apply an embedding-wise convolution operation, which is commonly used in natural language processing tasks.\nLet $e_{i:j} \\in\\mathbb{R}^{(j-i+1)\\times d}$ denote the concatenation of $j-i+1$ consecutive $d$-dimensional word embeddings.\n\\begin{equation} e_{i:j} = concat[e_i, e_{i+1}, ..., e_j] \\end{equation}\n\nThe embedding-wise convolution is to apply a convolution filter {\\bf w} $\\in\\mathbb{R}^{k\\times d}$ to a window of $k$ word embeddings to generate a new feature, i.e., summarizing a local context of $k$ words. \nThis can be formulated as\n\\begin{equation} c_i = f({\\bf w} \\cdot e_{i:i+k-1} + b) \\end{equation}\nwhere $f$ is a non-linear function and $b$ is the bias.\n\nBy applying the convolutional filter to all possible windows in the sentence, a feature map $c$ will be generated.\nIn this paper, we apply a {\\em same-length} convolution (length of the sentence does not change), i.e. 
$c \\in\\mathbb{R}^{n\\times 1}$.\nThen we apply $d$ filters with the same window size to obtain multiple feature maps.\nSo the final output of CNN has the shape of $C \\in\\mathbb{R}^{n\\times d}$, which is exactly the same size as the $n$ word embeddings and enables us to do exact word-level attention in various tasks.\n\n\\subsubsection{Deep Fusion}\nThe contextual information that flows into the update gate and reset gate of GRU is identical in shallow fusion.\nIn order to let the model adaptively control the amount of information that flows into these gates, we can embed CNN into GRU in a deep manner. We can rewrite the Equation 1 to 3 of GRU as follows.\n\\begin{gather}\nz_t = \\sigma(\\phi_z(\\widetilde{e_t}) + U_z h_{t-1}) \\\\\nr_t = \\sigma(\\phi_r(\\widetilde{e_t}) + U_r h_{t-1}) \\\\\n\\widetilde{h_t} = \\tanh(\\phi(\\widetilde{e_t})+U [r_t \\odot h_{t-1}])\n\\end{gather}\n\nwhere $\\phi_z, \\phi_r, \\phi$ are three different CNN layers, i.e., the weights are not shared.\nWhen the weights are shared across these CNNs, the deep fusion will be degraded to shallow fusion.\n\n\\subsubsection{Deep-Enhanced Fusion}\nIn shallow fusion and deep fusion, we used the convolutional operation to summarize the context.\nHowever, one drawback of them is that the original word embedding might be blurred by blending the words around it, i.e., applying the convolutional operation on its context. \n\nFor better modeling the original word and its context, we enhanced the deep fusion model with original word embedding information, with an intuition of ``enriching word representation with contextual information while preserving its basic meaning''.\nFigure \\ref{deep-e-example} illustrates our motivations.\n\n\\begin{figure}[tp]\n \\centering\n \\includegraphics[width=0.5\\textwidth]{example2.pdf}\n \\caption{\\label{deep-e-example} An intuitive illustration of variants of the CRU model. The gray scale represents the amount of information. 
a) original sentence; b) original representation of word ``shortcut''; c) applying convolutional filter (length=3); d) adding original word embedding; }\n\\end{figure}\n\nFormally, Equations 9 to 11 can be further rewritten into\n\\begin{gather}\nz_t = \\sigma(W_z(\\phi_z(\\widetilde{e_t}) + e_t) + U_z h_{t-1}) \\\\\nr_t = \\sigma(W_r(\\phi_r(\\widetilde{e_t}) + e_t) + U_r h_{t-1}) \\\\\n\\widetilde{h_t} = \\tanh(W(\\phi(\\widetilde{e_t}) + e_t)+U [r_t \\odot h_{t-1}])\n\\end{gather}\n\nwhere we add the original word embedding $e_t$ after the CNN operation, to ``enhance'' the original word information while not losing the contextual information that has been learned from the CNNs.\n\n\n\\section{Applications}\\label{application}\nThe proposed CRU model is a general neural recurrent unit, so we could apply it to various NLP tasks.\nAs we wonder whether the CRU model could give improvements in both sentence-level modeling and document-level modeling tasks, in this paper, we applied the CRU model to two NLP tasks: sentiment classification and cloze-style reading comprehension.\nIn the sentiment classification task, we build a simple neural model and apply our CRU.\nIn the cloze-style reading comprehension task, we first present some modifications to a recent reading comprehension model, called AoA Reader \\cite{cui-etal-2017-aoa}, and then replace the GRU part by our CRU model to see if our model could give substantial improvements over strong baselines.\n\n\\subsection{Sentiment Classification}\\label{sentiment-classification}\nIn the sentiment classification task, we aim to classify movie reviews, where one movie review will be classified into the positive\/negative or subjective\/objective category.\nA general neural network architecture for this task is depicted in Figure \\ref{sc-arch}.\n\n\\begin{figure}[tp]\n \\centering\n \\includegraphics[width=0.48\\textwidth]{sc-arch2.pdf}\n \\caption{\\label{sc-arch} A general neural network architecture of sentiment classification 
task.}\n\\end{figure}\n\nFirst, the movie review is transformed into word embeddings.\nAnd then, a sequence modeling module is applied, in which we can adopt LSTM, GRU, or our CRU, to capture the inner relations of the text.\nIn this paper, we adopt bidirectional recurrent units for modeling sentences, and then the final hidden outputs are concatenated.\nAfter that, a fully connected layer will be added after sequence modeling.\nFinally, the binary decision is made through a single $sigmoid$ unit.\n\nAs shown, we employed a straightforward neural architecture to this task, as we purely want to compare our CRU model against other sequential models.\nThe detailed experimental result of sentiment classification will be given in the next section.\n\n\n\\subsection{Reading Comprehension}\\label{rc-task}\nBesides the sentiment classification task, we also tried our CRU model in cloze-style reading comprehension, which is a much complicated task. \nIn this paper, we strengthened the recent AoA Reader \\cite{cui-etal-2017-aoa} and applied our CRU model to see if we could obtain substantial improvements when the baseline is strengthened.\n\n\\subsubsection{Task Description}\nThe cloze-style reading comprehension is a fundamental task that explores relations between the document and the query.\nFormally, a general cloze-style query can be illustrated as a triple $\\langle {\\mathcal D}, {\\mathcal Q}, {\\mathcal A} \\rangle$, where $\\mathcal D$ is the document, $\\mathcal Q$ is the query and the answer $\\mathcal A$. 
\nNote that the answer is a {\\em single} word in the document, which requires us to exploit the relationship between the document and query.\n\n\n\\subsubsection{Modified AoA Reader}\nIn this section, we briefly introduce the original AoA Reader \\cite{cui-etal-2017-aoa}, and illustrate our modifications.\nWhen a cloze-style training triple $\\langle \\mathcal D, \\mathcal Q, \\mathcal A \\rangle$ is given, the Modified AoA Reader will be constructed in the following steps.\nFirst, the document and query will be transformed into continuous representations with the embedding layer and recurrent layer.\nThe recurrent layer can be the simple RNN, GRU, LSTM, or our CRU model.\n\nTo further strengthen the representation power, we show a simple modification in the embedding layer, where we found strong empirical results in performance.\nThe main idea is to utilize additional sparse features of the word and add (concatenate) these features to the word embeddings to enrich the word representations. The additional features have shown effective in various models \\cite{dhingra-etal-2016,pengli-etal-2016,yang-etal-2016}.\nIn this paper, we adopt two additional features in document word embeddings (no features applied to the query side).\n\n\\noindent{{$\\bullet$~~ \\bf Document word frequency}}: Calculate each document word frequency. This helps the model to pay more attention to the important (more mentioned) part of the document.\n\\begin{equation} freq(d) = \\frac{word\\_count(d)}{length(\\mathcal D)}, d\\in \\mathcal D \\end{equation}\n\n\\noindent{{$\\bullet$~~ \\bf Count of query word}}: Count the number of each document word appeared in the query. For example, if a document word appears three times in the query, then the feature value will be 3. 
We empirically find that indicating the count of the word, instead of using binary features (appear=1, otherwise=0) \\cite{pengli-etal-2016}, provides more information, suggesting that the more often a word occurs in the query, the less likely it is to be the answer.\nWe replace Equation 16 with the following formulation (the query side is unchanged), \n\\begin{equation} \\small e(x) = concat[W_e \\cdot x, freq(x), CoQ(x)] , x\\in \\mathcal D \\end{equation}\nwhere $freq(x)$ and $CoQ(x)$ are the features introduced above. \n\\begin{gather}\n{\\small \\overrightarrow{h_s(x)}} = {\\small \\overrightarrow{RNN}(e(x)) ; \\overleftarrow{h_s(x)} = \\overleftarrow{RNN}(e(x))} \\\\\nh_s(x) = [\\overrightarrow{h_s(x)}; \\overleftarrow{h_s(x)}]\n\\end{gather}\n\nOther parts of the model remain the same as in the original AoA Reader. For simplicity, we omit this part; detailed illustrations can be found in \\citet{cui-etal-2017-aoa}.\n\n\\section{Experiments: Sentiment Classification}\\label{experiments-sc}\n\n\\subsection{Experimental Setups}\nIn the sentiment classification task, we tried our model on the following public datasets.\n\\begin{itemize}[leftmargin=*]\n \\item {\\bf MR}\\footnote{\\url{http:\/\/www.cs.cornell.edu\/People\/pabo\/movie-review-data\/}} Movie reviews with one sentence each. Each review is classified as positive or negative \\cite{pang-and-lee-2005}.\n \\item {\\bf IMDB}\\footnote{\\url{http:\/\/ai.stanford.edu\/~amaas\/data\/sentiment\/}} Movie reviews from the IMDB website, where each movie review is labeled with one of two classes, positive or negative \\cite{maas-etal-2011}. Note that each movie review may contain several sentences.\n \\item {\\bf SUBJ}$^1$ Movie reviews labeled as subjective or objective \\cite{pang-and-lee-2004}. 
\n\\end{itemize}\n\nThe statistics and hyper-parameter settings of these datasets are listed in Table \\ref{imdb-stats}.\n \n \\begin{table}[htp]\n \\begin{center}\n \n \\begin{tabular}{lccc}\n \\toprule\n & \\bf MR & \\bf IMDB & \\bf SUBJ \\\\\n \\midrule\n Train \\# & 10,662 & 25,000 & 10,000 \\\\\n \n Test \\# & 10-CV & 25,000 & 10-CV \\\\\n \\midrule\n Embed. size & 200 & 256 & 200 \\\\\n Hidden size & 200 & 256 & 200 \\\\\n Dropout & 0.3 & 0.3 & 0.4 \\\\\n Pre-train Embed. & GloVe & - & GloVe \\\\\n Initial LR & 0.0005 & 0.001 & 0.0005 \\\\\n Vocab truncation & - & 50,000 & - \\\\\n \\bottomrule\n \\end{tabular}\n \\end{center}\n \\caption{\\label{imdb-stats} Statistics and hyper-parameter settings of the MR, IMDB and SUBJ datasets. 10-CV represents 10-fold cross validation.}\n \\end{table} \n\n \nAs these datasets are quite small and easy to overfit, we applied $l_2$-regularization of 0.0001 to the embedding layer for all datasets. \nAlso, we applied dropout \\cite{srivastava-etal-2014} to the output of the embedding layer and the fully connected layer.\nThe fully connected layer has a dimension of 1024.\nFor MR and SUBJ, the embedding layer is initialized with 200-dimensional GloVe embeddings (trained on 840B tokens) \\cite{pennington-etal-2014} and fine-tuned during the training process.\nFor IMDB, the vocabulary is truncated in descending order of word frequency.\nWe adopt a batched training strategy of 32 samples with the ADAM optimizer \\cite{kingma2014adam}, and clip gradients to 5 \\cite{pascanu-etal-2013}.\nUnless otherwise indicated, the convolutional filter length is set to 3, and ReLU is used as the non-linear function of the CNN in all experiments.\nWe use 10-fold cross-validation (CV) for the datasets that have no train\/valid\/test division. 
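Before turning to the results, the contextual enhancement at the core of the CRU variants evaluated below can be made concrete with a minimal NumPy sketch; the function name, shapes, padding scheme, and random toy inputs are our own illustrative assumptions, not the paper's implementation:

```python
import numpy as np

# Illustrative sketch of deep-enhanced fusion: a length-k CNN summarizes the
# local context of each word, and its ReLU output is concatenated with the
# original embedding before the recurrent unit. All names/shapes are assumed.
def contextual_enhance(E, W, b):
    k, d, d_c = W.shape            # filter length, embedding dim, context dim
    T = E.shape[0]
    pad = k // 2
    Ep = np.vstack([np.zeros((pad, d)), E, np.zeros((pad, d))])  # keep length T
    C = np.empty((T, d_c))
    for t in range(T):
        window = Ep[t:t + k]       # the k-word local context around word t
        C[t] = np.maximum(0.0, np.einsum('kd,kdc->c', window, W) + b)  # ReLU
    return np.concatenate([E, C], axis=1)  # fuse context with raw embedding

# toy usage: 5 words, 8-dim embeddings, filter length 3, 6 contextual channels
rng = np.random.default_rng(0)
E = rng.standard_normal((5, 8))
W = rng.standard_normal((3, 8, 6))
b = np.zeros(6)
H = contextual_enhance(E, W, b)
print(H.shape)  # (5, 14): original embedding plus a 6-dim local-context summary
```

A sequence of such fused vectors would then be fed to the bidirectional recurrent unit described earlier; concatenating the raw embedding with the convolution output mirrors the deep-enhanced fusion idea compared in the results below.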
\n\n\\subsection{Results}\\label{result-sc}\n\nThe experimental results are shown in Table \\ref{exp-class}.\nAs we mentioned before, all RNNs in these models are {\\bf bi-directional}, because we want to examine whether our bi-CRU can still give substantial improvements over a bi-GRU that captures both history and future information.\nAs we can see, all variants of our CRU model give substantial improvements over the traditional GRU model, with maximum gains of 2.7\\%, 1.0\\%, and 1.9\\% observed on the three datasets, respectively.\nWe also found that, though we adopt a straightforward classification model, our CRU model outperforms the state-of-the-art systems by 0.6\\%, 0.7\\%, and 0.8\\% respectively, which demonstrates its effectiveness. \nBy employing more sophisticated architectures or introducing task-specific features, we think there is still much room for further improvements, which is beyond the scope of this paper.\n\n\n \\begin{table}[t]\n \\begin{center}\n \\small\n \\begin{tabular}{lccc}\n \\toprule\n \\bf System & \\bf MR & \\bf IMDB & \\bf SUBJ \\\\\n \\midrule\n \n Multi-channel CNN & 81.1 & - & 93.2 \\\\\n \n HRL & - & 90.9 & - \\\\\n Multi-task arc-II & - & {\\em 91.2} & {\\em 95.0} \\\\\n CNN-GRU-word2vec & 82.3 & - & - \\\\\n DSCNN-Pretrain & 82.2 & 90.7 & 93.9 \\\\\n LR-Bi-LSTM & 82.1 & - & - \\\\\n AC-BLSTM & 83.1 & - & 94.2 \\\\\n G-AC-BLSTM & {\\em 83.7} & - & 94.3 \\\\\n \\midrule\\midrule\n GRU & 81.0 & 90.9 & 93.9 \\\\\n CRU (shallow fusion) & 82.1 & 91.3 & 95.0 \\\\\n CRU (deep fusion) & 82.7 & 91.5 & 95.2 \\\\\n CRU (deep-enhanced, filter=3) & {\\bf 83.7} & {\\bf 91.9} & {\\bf 95.8} \\\\\n CRU (deep-enhanced, filter=5) & 83.2 & 91.7 & 95.2 \\\\\n \\bottomrule\n \\end{tabular}\n \\end{center}\n \\caption{\\label{exp-class} Results on the MR, IMDB and SUBJ sentiment classification tasks. Best previous results are marked in italics, and overall best results are marked in boldface. 
{\\small {\\bf Multi-channel CNN} \\cite{kim-2014}: A CNN architecture with static and non-static word embeddings. {\\bf HRL} \\cite{wang-and-tian-2016}: A hybrid residual LSTM architecture. {\\bf Multi-task arc-II} \\cite{liu-emnlp-2016}: A deep architecture with shared local-global hybrid memory for multi-task learning. {\\bf CNN-GRU-word2vec} \\cite{wang-etal-2016}: An architecture that combines CNN and GRU models with pre-trained word embeddings from {\\em word2vec}. {\\bf DSCNN-Pretrain} \\cite{zhang-etal-2016}: Dependency-sensitive convolutional neural networks with pretrained sequence autoencoders. {\\bf AC-BLSTM} \\cite{liang-etal-2016}: Asymmetric convolutional bidirectional LSTM networks. } }\n \\end{table}\n \nWhen comparing the three variants of the CRU model, as we expected, the CRU with {\\em deep-enhanced fusion} performs best among them. This demonstrates that incorporating contextual representations with the original word embeddings enhances the representation power.\nAlso, we noticed that a larger window size of the convolutional filter, i.e., 5 in this experiment, does not improve the performance. \nWe plot the trends of MR test set accuracy with increasing convolutional filter length, as shown in Figure \\ref{mr-length}.\n\n \\begin{figure}[t]\n \\centering\n \\includegraphics[width=0.45\\textwidth]{mr-length.pdf}\n \\caption{\\label{mr-length} Trends of MR test set accuracy with increasing convolutional filter length. }\n \\end{figure}\n \nAs we can see, using a smaller convolutional filter does not provide much contextual information, thus giving a lower accuracy.\nOn the contrary, the larger filters generally outperform the smaller ones, but not always.\nOne possible reason for this is that when the filter becomes larger, the amortized contextual information is less than with a smaller filter, making it harder for the model to learn the contextual information. 
However, we think the proper size of the convolutional filter may vary from task to task. Some tasks that require long-span contextual information may benefit from a larger filter.\n \nWe also compared our CRU model with related works that combine CNN and RNN \\cite{wang-etal-2016,zhang-etal-2016,liang-etal-2016}. \nFrom the results, we can see that our CRU model significantly outperforms previous works, which demonstrates that employing {\\em deep fusion} and enhancing the contextual representations with the original embeddings can substantially improve the power of word representations.\n \nWe also plot the trends of IMDB test set accuracy during the training process, as depicted in Figure \\ref{imdb-train}.\nAs we can see, after about six epochs of training, all variants of the CRU model show faster convergence and smaller performance fluctuations than the traditional GRU model, which demonstrates that the proposed CRU model has better training stability.\n\n \\begin{figure}[tb]\n \\centering\n \\includegraphics[width=0.45\\textwidth]{imdb-4.pdf}\n \\caption{\\label{imdb-train} Trends of IMDB test set accuracy as training proceeds.}\n \\end{figure}\n\n\\section{Experiments: Reading Comprehension}\\label{experiments-rc} \n \n \\begin{table*}[ht]\n \\begin{center}\n \n \\begin{tabular}{p{9cm} ccccc}\n \\toprule\n & \\multicolumn{2}{p{2cm}}{\\centering \\bf CBT NE} & \\multicolumn{2}{p{2cm}}{\\centering \\bf CBT CN} \\\\\n & \\bf Valid & \\bf Test & \\bf Valid & \\bf Test\\\\\n \\midrule\n Human \\cite{hill-etal-2015} & - & {\\em 81.6} & - & {\\em 81.6} \\\\\n MemNN \\cite{hill-etal-2015} & 70.4 & 66.6 & 64.2 & 63.0 \\\\ \n AS Reader \\cite{kadlec-etal-2016} & 73.8 & 68.6 & 68.8 & 63.4 \\\\\n GA Reader \\cite{dhingra-etal-2016} & 74.9 & 69.0 & 69.0 & 63.9 \\\\\n \n Iterative Attention \\cite{sordoni-etal-2016} & 75.2 & 68.6 & 72.1 & 69.2 \\\\\n AoA Reader \\cite{cui-etal-2017-aoa} & 77.8 & 72.0 & 72.2 & 69.4 \\\\\n NSE Adp. Com. 
\\cite{munkhdalai2016reasoning} & 78.2 & 73.2 & 74.2 & 71.4 \\\\\n GA Reader + Fine-gating \\cite{yang-etal-2016} & 79.1 & {\\em 75.0} & 75.3 & 72.0 \\\\\n AoA Reader + Re-ranking \\cite{cui-etal-2017-aoa} & {\\em 79.6} & 74.0 & {\\em 75.7} & {\\em 73.1} \\\\\n \\midrule\n M-AoA Reader (GRU) & 78.0 & 73.8 & 72.8 & 69.8 \\\\\n M-AoA Reader (CRU) & 79.5 & 75.4 & 74.4 & 71.3 \\\\\n M-AoA Reader (CRU) + Re-ranking & {\\bf 80.6} & {\\bf 76.1} & {\\bf 76.6} & {\\bf 74.5} \\\\\n \\midrule\\midrule\n AS Reader (Ensemble) & 74.5 & 70.6 & 71.1 & 68.9 \\\\\n KnReader (Ensemble) & 78.0 & 73.3 & 72.2 & 70.6 \\\\\n Iterative Attention (Ensemble) & 76.9 & 72.0 & 74.1 & 71.0 \\\\\n AoA Reader (Ensemble) & 78.9 & 74.5 & 74.7 & 70.8 \\\\\n AoA Reader (Ensemble + Re-ranking) & {\\em 80.3} & {\\em 75.7} & {\\em 77.0} & {\\em 74.1} \\\\\n \\midrule\n M-AoA Reader (CRU) (Ensemble) & 80.0 & 77.1 & 77.0 & 73.5 \\\\ \n M-AoA Reader (CRU) (Ensemble + Re-ranking) & {\\bf 81.8} & {\\bf 77.5} & {\\bf 79.0} & {\\bf 76.8} \\\\ \n \\bottomrule\n \\end{tabular}\n \\end{center}\n \\caption{\\label{public-result} Results on the CBT NE and CN cloze-style reading comprehension datasets.\n }\n \\end{table*}\n\n\\subsection{Experimental Setups} \nWe also tested our CRU model on the cloze-style reading comprehension task.\nWe carried out experiments on the public CBT NE\/CN datasets \\cite{hill-etal-2015}.\nThe CRU model used in these experiments is the {\\em deep-enhanced} type with a convolutional filter length of 3.\nIn the re-ranking step, we also utilized three features: Global LM, Local LM, and Word-class LM, as proposed by \\citet{cui-etal-2017-aoa}; all LMs are 8-gram models trained with the SRILM toolkit \\cite{stolcke-2002}.\nFor other settings, such as hyperparameters, initializations, etc., we closely follow the experimental setups of \\citet{cui-etal-2017-aoa} to make the experiments more comparable.\n\n\n\\subsection{Results}\nThe overall experimental results are given in Table \\ref{public-result}. 
\nAs we can see, our proposed models outperform various state-of-the-art systems by a large margin.\n\n\\begin{itemize}[leftmargin=*]\n \\item Overall, our final model (M-AoA Reader + CRU + Re-ranking) gives significant improvements of 2.1\\% and 1.4\\% over the previous state-of-the-art systems on the test sets, while re-ranking and ensembling bring further improvements.\n \\item When comparing the M-AoA Reader to the original AoA Reader, 1.8\\% and 0.4\\% improvements can be observed, suggesting that incorporating additional features into the embeddings enriches the word representations. Incorporating further features into the word embeddings might bring another boost in the results, but we leave this for future work.\n\\item Replacing the GRU with our CRU significantly improves the performance, with 1.6\\% and 1.5\\% gains obtained when compared to the M-AoA Reader (GRU). This demonstrates that incorporating contextual information when modeling the sentence can enrich the representations. Also, when modeling an unknown word, apart from its randomly initialized word embedding, the contextual information can give a possible guess of the unknown word, making the text more readable to the neural networks.\n\\item The re-ranking strategy is an effective approach in this task. We observed that the gains in the common noun category are significantly greater than in the named entity category. One possible reason is that the language model is more beneficial to CN than to NE, because it is much more likely to encounter a new named entity that is not covered in the training data than a new common noun.\n\\end{itemize}\n\n\\section{Qualitative Analysis}\\label{qualitative-analysis} \nIn this section, we give a qualitative analysis of our proposed CRU model in the sentiment classification task.\nWe focus on two categories of movie reviews whose correct sentiment is harder for a model to judge. 
The first consists of movie reviews that contain negation terms, such as ``not''. The second consists of reviews that contain a sentiment transition, such as ``clever but not compelling''. We manually select 50 samples of each category in the MR dataset, forming a total of 100 samples, to see if our CRU model is superior in handling these movie reviews. The results are shown in Table \\ref{quality-result-table}. As we can see, our CRU model is better at both categories of movie review classification, demonstrating its effectiveness.\n\n \\begin{table}[h]\n \\begin{center}\n \n \\begin{tabular}{lcc}\n \\toprule\n & \\bf GRU & \\bf CRU \\\\\n \\midrule\n Negation Term (50) & 37 & 42 \\\\\n Sentiment Transition (50) & 34 & 40 \\\\\n \\midrule\n Total (100) & 71 & 82 \\\\\n \\bottomrule\n \\end{tabular}\n \\end{center}\n \\caption{\\label{quality-result-table} Number of correctly classified samples.}\n \\end{table}\n\nAmong these samples, we select an intuitive example in which the CRU successfully captures the true meaning of the sentence and gives the correct sentiment label. We segment a full movie review into three sentences, as shown in Table \\ref{quality-table}.\n\n \\begin{table}[htbp]\n \\begin{center}\n \\small\n \\begin{tabular}{lcc}\n \\toprule\n \\bf Sentence & \\bf GRU & \\bf CRU \\\\\n \\midrule\n I like that Smith & POS & POS \\\\\n \\midrule\n I like that Smith, \\\\ he's {\\em not making fun of} these people, & POS & POS \\\\\n \\midrule\n I like that Smith, \\\\ he's {\\em not making fun of} these people, \\\\he's {\\em not laughing at} them. & {\\em NEG} & POS \\\\\n \\bottomrule\n \\end{tabular}\n \\end{center}\n \\caption{\\label{quality-table} Predictions of each level of the sentence. }\n \\end{table}\n\nFor the first and second sentences, both models give the correct sentiment prediction. 
When the third sentence is introduced, the GRU baseline model fails to recognize this review as positive because there are many negation terms in the sentence. However, our CRU model captures the local context {\\em during} the recurrent modeling of the sentence, so phrases such as ``not making fun'' and ``not laughing at'' are correctly recognized as positive sentiment, which corrects the sentiment category of the full review. This suggests that our model is superior at modeling local context and captures the meaning more accurately.\n\n\\section{Conclusion}\\label{conclusion}\nIn this paper, we proposed an effective recurrent model for modeling sequences, called Contextual Recurrent Units (CRU). \nWe inject a CNN into the GRU, aiming to better model local context information via the CNN before recurrently modeling the sequence. \nWe have tested our CRU model on the cloze-style reading comprehension task and the sentiment classification task.\nExperimental results show that our model gives substantial improvements over various state-of-the-art systems and sets new records on the respective public datasets. In the future, we plan to investigate convolutional filters that have dynamic lengths to adaptively capture the possible spans of their context.\n\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} diff --git a/data_all_eng_slimpj/shuffled/split2/finalzzexbt b/data_all_eng_slimpj/shuffled/split2/finalzzexbt new file mode 100644 index 0000000000000000000000000000000000000000..a1d700f86905213887836959c7d00050cbad5eef --- /dev/null +++ b/data_all_eng_slimpj/shuffled/split2/finalzzexbt @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}\n\\noindent The identification of time-varying systems plays a key role in different applications, such as adaptive and model predictive control, where good real-time tracking of the system to be controlled is necessary. 
In addition, the detection of changes or drifts in plant parameters is crucial for process monitoring and fault detection. Online System Identification (SysId) and the estimation of time-varying systems are typically closely connected problems: one would like to exploit the new data that become available in order to track possible changes in the system dynamics.\\\\\nThe Recursive Prediction Error Method (RPEM), a variant of the classical PEM \\cite{Ljung:99,RecursiveBook}, represents nowadays a well-established technique, through which the current estimate can be efficiently updated as soon as new data are provided. RPEMs are parametric approaches, relying on Recursive Least-Squares (RLS) routines, which compute the parameter estimate by minimizing a functional of the prediction errors \\cite{Ljung:99}. An extension of these approaches to the identification of time-varying systems involves the adoption of a forgetting factor, through which old data become less relevant in the estimation criterion. Convergence and stability properties of Forgetting Factor RPEM have been well-studied within the SysId community \\cite{bittanti1990convergence,guo1993performance}. \\\\\nAlternative approaches model the coefficient trajectories as stochastic processes \\cite{Chow84}, thus exploiting Kalman filtering \\cite{guo1990estimating} or Bayesian inference \\cite{Sarris73} for parameter estimation. Combinations of basis sequences (e.g. a wavelet basis \\cite{tsatsanis1993time}) have also been considered to model the time evolution of the parameters.\n\\\\The above-mentioned parametric procedures share a critical issue, namely the complexity of model selection: this step is especially crucial when the model complexity has to be modified in response to changes in the true system dynamics.\nIn addition, classical complexity selection rules (e.g. cross-validation or information criteria) may not be applicable in online settings, due to the excessive computational effort they require. 
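As a concrete reference point for the forgetting-factor idea discussed above, the textbook RLS recursion with forgetting can be sketched in a few lines of NumPy; the toy setup below (initial covariance scale, noiseless data, the jump location) is our own illustration, not taken from any of the cited works:

```python
import numpy as np

# Textbook recursive least squares with a forgetting factor lam in (0, 1]:
# old samples are discounted geometrically, so the estimate can track drifts.
def rls_forgetting(phis, ys, lam=0.95, delta=100.0):
    n = phis.shape[1]
    theta = np.zeros(n)
    P = delta * np.eye(n)                      # large initial covariance
    for phi, y in zip(phis, ys):
        k = P @ phi / (lam + phi @ P @ phi)    # gain vector
        theta = theta + k * (y - phi @ theta)  # correct by the prediction error
        P = (P - np.outer(k, phi @ P)) / lam   # discount old information
    return theta

# track a parameter vector that jumps halfway through the data (noiseless toy)
rng = np.random.default_rng(1)
phis = rng.standard_normal((400, 2))
ys = np.empty(400)
ys[:200] = phis[:200] @ np.array([1.0, -2.0])
ys[200:] = phis[200:] @ np.array([1.5, -1.5])
est = rls_forgetting(phis, ys, lam=0.95)
print(np.round(est, 2))  # close to the post-change parameters [1.5, -1.5]
```

With lam=0.95 the effective memory is roughly 1/(1-lam)=20 samples, so by the end of the record the estimate essentially reflects only the post-change regime, which is exactly the tracking behaviour that motivates the forgetting factor.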
\nModel complexity issues have been partially addressed in the SysId community through the recent introduction of non-parametric methods, relying on Gaussian processes and Bayesian inference \\cite{GP-AC-GdN:11,SurveyKBsysid}. In this framework, model complexity is tuned in a continuous manner by estimating the hyper-parameters which describe the prior distribution chosen by the user \\cite{PCAuto2015}. This property makes these new techniques appealing for the online identification of time-varying systems: indeed, model complexity can be continuously adapted whenever new data become available.\\\\\nIn a previous work \\cite{RPPCECC2016} we started exploring this research direction by adapting the newly introduced Bayesian procedures to an online identification setting. The methodologies proposed in \\cite{RPPCECC2016} are extended in this paper to deal with time-varying systems. Two approaches, relying on the use of a forgetting factor, are proposed; in particular, following the approach in \\cite{PA2014}, we investigate the online estimation of the forgetting factor by treating it as a hyper-parameter of the Bayesian inference procedure. These techniques are experimentally compared with their classical parametric counterparts: the results appear favourable and promising for the methods we propose.\n\\\\\nThe paper is organized as follows. Sec.~\\ref{sec:problem_formulation} presents the online identification framework and the challenges we will try to address. Sec.~\\ref{sec:pem} provides a brief review of parametric real-time identification techniques, while Sec.~\\ref{sec:bayes} illustrates the Bayesian approach to linear SysId, both in the batch and online scenarios. In particular, Sec.~\\ref{sec:time_var} focuses on the estimation of time-varying systems. 
Experimental results are reported in Sec.~\\ref{sec:experiment}, while conclusions are drawn in Sec.~\\ref{sec:conclusion}.\n\n\n\\section{Problem Formulation}\\label{sec:problem_formulation}\n\\noindent Consider a dynamical system described through an output-error model, i.e.:\n\\begin{equation} \\label{equ:sys}\ny(t) = \\left[h \\ast u\\right](t) + e(t), \\quad y(t), \\ u(t) \\in\\mathbb{R}\n\\end{equation}\nwhere $h(t)$ denotes the model impulse response and $e(t)$ is assumed to be a zero-mean Gaussian noise with variance $\\sigma^2$.\\\\\nSysId techniques aim at estimating the impulse response $h$ of the system, once a set $\\mathcal{D}:=\\left\\{y(t),u(t)\\right\\}_{t=1}^N$ of measurements of its input and output signals is provided.\\\\\nIn this work we consider an online setting, in which a new set of input-output measurements becomes available every $T$ time steps.\nSpecifically, let us define the variable $i:=k\/T$ by assuming w.l.o.g. that $k$ is a multiple of $T$, and the $i^{th}$ dataset as $\\mathcal{D}_i =\\left\\{u(t),y(t)\\right\\}_{t=(i-1)T +1}^{iT}$.\n\\\\We suppose that at time $k$ an impulse response estimate $\\hat{h}^{(i)}$ has been computed using the data coming from a collection of previous datasets $\\bigcup_{l=1}^{i} \\mathcal{D}_l = \\left\\{u(t),y(t)\\right\\}_{t=1}^{i T}$; at time $k+T$ new data $\\mathcal{D}_{i+1}$ become available and we would like to update the previous estimate $\\hat{h}^{(i)}$ by exploiting them. In addition, we assume that the underlying system undergoes variations that we would like to track: this situation often arises in practice, due to, e.g., variations of the internal temperature or of the masses (e.g. 
after grasping an object).\\\\\nFurthermore, online applications typically require that the new estimate is available before the new dataset $\\mathcal{D}_{i+2}$ is provided, thus limiting the computational complexity and the memory storage of the adopted estimation methods.\\\\\nIn this paper, the recently proposed Bayesian approach to SysId \\cite{SurveyKBsysid} is adapted in order to cope with the outlined online setting. Its performance is compared with that achieved using classical parametric approaches.\n\n\\begin{rem}\nWe stress that in the remainder of the paper we will use the indices $k$ and $iT$ interchangeably.\n\\end{rem}\n\n\\section{Parametric Approach}\\label{sec:pem}\n\\noindent Standard parametric approaches to SysId rely on the a-priori choice of a model class $\\mathcal{M}$ (e.g. ARX, ARMAX, OE, etc.), which is completely characterized by a parameter $\\theta \\in \\mathbb{R}^m$.\n\n\\subsection{Batch Approach}\n\\noindent In the batch setting, when a dataset $\\mathcal{D}=\\left\\{y(t),u(t)\\right\\}_{t=1}^N$ is provided, the identification procedure reduces to estimating $\\theta$ by minimizing the sum of squared prediction errors:\n\\begin{equation}\n\\hat{\\theta} = \\arg\\min_{\\theta\\in \\mathbb{R}^m} V_N (\\theta,\\mathcal{D}) = \\arg\\min_{\\theta\\in \\mathbb{R}^m} \\frac{1}{2}\\sum_{t=1}^N \\left(y(t)-\\hat{y}(t\\vert \\theta)\\right)^2\n\\end{equation}\nwhere $\\hat{y}(t\\vert \\theta)$ denotes the one-step ahead predictor \\cite{Ljung:99}.\n\n\\subsection{Online Approach}\n\\noindent The extension of these procedures to an online setting relies on RLS (or pseudo LS) methods.\\\\\nFor ease of notation, let us assume $T=1$ in this section. 
Suppose that at time $k+1$ a new input-output data pair $\\mathcal{D}_{i+1}$ is provided; then $\\hat{\\theta}^{(i)}$ is updated as:\n\\begin{equation}\\label{equ:param_update}\n\\hat{\\theta}^{(i+1)} = \\hat{\\theta}^{(i)} + \\mu^{(i+1)} Q^{(i+1)^{-1}} \\nabla_{\\theta} V_{k+1}(\\hat{\\theta}^{(i)}, \\textstyle{\\bigcup_{l=1}^{i+1}}\\mathcal{D}_{l})\n\\end{equation}\nwhere $\\nabla_\\theta V_{k+1}(\\hat{\\theta}^{(i)}, \\bigcup_{l=1}^{i+1}\\ \\mathcal{D}_{l})$ denotes the gradient of the loss function computed in the previous estimate and in the new data; $\\mu^{(i+1)}\\in \\mathbb{R}$ and $Q^{(i+1)}\\in\\mathbb{R}^{m\\times m}$ are appropriate scalings which take different forms according to the specific algorithm adopted (see \\cite{RecursiveBook} and \\cite{Ljung:99}, Ch. 11, for further details). Notice that \\eqref{equ:param_update} is simply a scaled gradient step w.r.t. the loss function $V_{k+1}(\\theta,\\bigcup_{l=1}^{i+1}\\mathcal{D}_{l})$.\n\n\\subsection{Dealing with time-varying systems}\n\\noindent In order to cope with time-varying systems, a possible strategy involves the inclusion of a \\textit{forgetting factor} $\\bar{\\gamma}$ in the loss function $V_{k}(\\theta,\\mathcal{D})$:\n\\begin{equation} \\label{equ:loss_ff}\nV_{k}^{\\bar{\\gamma}} (\\theta,\\mathcal{D}) = \\frac{1}{2} \\sum_{t=1}^k \\bar{\\gamma}^{k-t} \\left(y(t)-\\hat{y}(t\\vert \\theta)\\right)^2, \\qquad \\bar{\\gamma} \\in (0,1]\n\\end{equation}\nIn this way, old measurements become less relevant for the computation of the estimate. A recursive update of the estimate $\\hat{\\theta}^{(i)}$ (like the one in \\eqref{equ:param_update}) can be derived (\\cite{Ljung:99}, Ch. 11).\n\\\\\nAs an alternative, a sliding window approach can be adopted: at each time step only the last $N_w$ data are used for computing the current estimate (with $N_w$ being the window length). 
However, since this approach does not admit an update rule like the one in \\eqref{equ:param_update}, the computational complexity of the new estimate will depend on the window length.\n\\\\\nA crucial role in the application of parametric SysId techniques is played by the model order selection step: once a model class $\\mathcal{M}$ is fixed, its complexity has to be chosen using the available data. This is typically accomplished by estimating models with different complexities and by applying tools such as cross-validation or information criteria to select the most appropriate one. However, the estimation of multiple models may be computationally expensive, making this procedure ill-suited for the online identification of time-varying systems. Indeed, in this framework, it should ideally be applied every time new data become available.\n\\\\\nThe recently proposed approach to SysId, relying on regularization\/Bayesian techniques, overcomes the above-described issue by jointly performing estimation and order selection.\nThe next section illustrates how the batch regularization\/Bayesian method can be tailored to the online identification of time-varying systems.\n\n\n\n\\section{Regularization\/Bayesian Approach} \\label{sec:bayes}\n\n\\subsection{Batch Approach}\n\\label{subsec:batch_approach}\n\\noindent We discuss how the regularization\/Bayesian technique works in the standard batch setting, i.e. when data $\\mathcal{D}=\\left\\{y(t),u(t)\\right\\}_{t=1}^N$ are given. For future use, let us define the vector $Y_N =\\left[y(1)\\ ...\\ y(N)\\right]^\\top\\in\\mathbb{R}^N$.\n\\\\\nAccording to the Bayesian estimation paradigm, the impulse response $h$ is considered as a realization of a stochastic process with a prior distribution $p_\\eta(h)$, depending on some parameters $\\eta\\in \\Omega$. 
The prior $p_\\eta(h)$ is designed in order to account for some desired properties of the estimated impulse response, such as smoothness and stability \\cite{GP-AC-GdN:11,SurveyKBsysid}. In the Bayesian framework, the parameters $\\eta$ are known as hyper-parameters and they need to be estimated from the data, e.g. by optimizing the so-called marginal likelihood (i.e. the likelihood once the latent variable $h$ has been integrated out) \\cite{PCAuto2015}:\n\\begin{equation}\n\\hat{\\eta} =\\arg\\max_{\\eta\\in\\Omega} p_\\eta(Y_N) = \\arg\\max_{\\eta\\in\\Omega} \\int p(Y_N\\vert h)p_\\eta(h)dh\n\\end{equation}\nOnce the hyper-parameters $\\eta$ have been estimated, the minimum variance estimate of $h$ needs to be computed; it coincides with the posterior mean given the observed data:\n\\begin{equation}\\label{equ:post_est}\n\\hat{h} := \\mathbb{E}_{\\hat{\\eta}} \\left[h \\vert Y_N \\right] =\\int h \\frac{p(Y_N\\vert h)p_{\\hat{\\eta}}(h)}{p_{\\hat{\\eta}}(Y_N)} dh\n\\end{equation}\nIn the SysId context, $h$ is typically modelled as a zero-mean Gaussian process (independent of the noise $e(t)$) with covariance $\\mathbb{E}\\left[h(t)h(s)\\right]=\\bar{K}_\\eta(t,s)$ (known as the kernel in the Machine Learning literature) \\cite{GP-AC-GdN:11,ChenOL12}. Thanks to this assumption, the marginal likelihood $p_\\eta(Y_N)$ is Gaussian and the estimate \\eqref{equ:post_est} is available in closed form.\n\\\\\nFurthermore, for simplicity the IIR model in \\eqref{equ:sys} can be accurately approximated by an FIR model of order $n$, whenever $n$ is chosen large enough to capture the relevant components of the system dynamics. 
By collecting in $\\mathbf{h}:=\\left[h(1)\\ \\cdots\\ h(n)\\right]^\\top\\in\\mathbb{R}^n$ the first $n$ impulse response coefficients, the following Gaussian prior can be defined:\n\\begin{align}\np_\\eta(\\mathbf{h})&\\sim \\mathcal{N}(0,K_\\eta), \\qquad \\eta \\in \\Omega \\subset \\mathbb{R}^d,\\ \\ K_\\eta \\in \\mathbb{R}^{n\\times n}\n\\end{align}\nThe hyper-parameters $\\eta$ can then be estimated by solving\n\\begin{align}\n\\hat{\\eta} &= \\arg\\min_{\\eta\\in\\Omega}\\ - \\ln p_\\eta(Y_N)= \\arg\\min_{\\eta\\in\\Omega}\\ f_N(\\eta)\\label{equ:ml_max}\\\\\nf_N(\\eta) &= Y_N^\\top \\Sigma(\\eta)^{-1} Y_N + \\ln \\det \\Sigma(\\eta)\\label{equ:ml}\\\\\n\\Sigma(\\eta)&= \\Phi_N K_\\eta \\Phi_N^\\top + \\sigma^2 I_N\n\\end{align}\nwhere $\\Phi_N\\in\\mathbb{R}^{N\\times n}$:\n\\begin{equation}\\label{equ:phi}\n\\Phi_N := \\begin{bmatrix}\nu(0) & u(-1) & \\cdots & u(-n+1)\\\\\n\\vdots & \\ddots & \\ddots &\\vdots\\\\\nu(N) & u(N-1) & \\cdots & u(N-n+1)\n\\end{bmatrix} \n\\end{equation}\nIn the batch setting we are considering, the quantities $u(-n+1), ...,u(0)$ can be either estimated or set to zero. Here, we follow the latter option. \nOnce $\\hat{\\eta}$ has been computed, the corresponding minimum variance estimate is given by\n\\resizebox{1.02\\linewidth}{!}{\n \\begin{minipage}{\\linewidth}\n\\begin{align}\n\\widehat{\\mathbf{h}} :&=\\mathbb{E}_{\\hat{\\eta}}\\left[\\mathbf{h} \\vert Y_N\\right] = \\arg\\min_{\\mathbf{h}\\in\\mathbb{R}^n} \\left(Y_N-\\Phi_N\\mathbf{h}\\right)^\\top \\left(Y_N-\\Phi_N\\mathbf{h}\\right) + \\sigma^2 \\mathbf{h}^\\top K_{\\hat{\\eta}}^{-1}\\mathbf{h}\\nonumber \\\\\n&= (\\Phi_N^\\top \\Phi_N + \\sigma^2 K_{\\hat{\\eta}}^{-1})^{-1}\\Phi_N^\\top Y_N \\label{equ:h_hat}\n\\end{align}\n\\end{minipage}\n}\n\n\\vspace{1mm}\n\n\\begin{rem}\nThe estimate $\\widehat{\\mathbf{h}}$ in \\eqref{equ:h_hat} can be computed once a noise variance estimate $\\hat{\\sigma}^2$ is available. 
For this purpose, $\\sigma^2$ can be treated as a hyper-parameter and estimated by solving \\eqref{equ:ml_max}, or it can be computed from an LS estimate of $\\mathbf{h}$. In this work the latter option is adopted.\n\\end{rem}\n\n\n\n\\subsection{Online Approach} \\label{subsec:bayes_onine}\n\\noindent We now adapt the batch technique described in Sec.~\\ref{subsec:batch_approach} to the online setting outlined in Sec.~\\ref{sec:problem_formulation}. At time $k+T$, when data $\\mathcal{D}_{i+1}=\\left\\{u(t),y(t)\\right\\}_{t=iT+1}^{(i+1)T}$ are provided, the current impulse response estimate $\\widehat{\\mathbf{h}}^{(i)}$ is updated through formula \\eqref{equ:h_hat}, once the data matrices are enlarged with the new data and a new hyper-parameter estimate $\\hat{\\eta}^{(i+1)}$ is computed. The data matrices are updated through the following recursions\n\\begin{align}\nR^{(i+1)} &:= \\Phi_{(i+1)T}^\\top \\Phi_{(i+1)T} = R^{(i)} + \\left(\\Phi_{iT+1}^{(i+1)T}\\right)^\\top \\Phi_{iT+1}^{(i+1)T}\\label{equ:r}\\\\\n\\widetilde{Y}^{(i+1)} &:= \\Phi_{(i+1)T}^\\top Y_{(i+1)T} =\\widetilde{Y}^{(i)} + \\left(\\Phi_{iT+1}^{(i+1)T}\\right)^\\top Y_{iT+1}^{(i+1)T}\\label{equ:y_tilde}\\\\\n\\xbar{Y}^{(i+1)} &:= Y_{(i+1)T}^\\top Y_{(i+1)T}= \\xbar{Y}^{(i)} + \\left(Y_{iT+1}^{(i+1)T}\\right)^\\top Y_{iT+1}^{(i+1)T}\\label{equ:y_bar}\n\\end{align}\nwhere $Y_{(i+1)T}=\\left[y(1)\\cdots y(iT+T)\\right]^\\top\\in\\mathbb{R}^{(i+1)T}$, $Y_{iT+1}^{(i+1)T} = \\left[y(iT+1) \\cdots y(iT+T)\\right]$; $\\Phi_{(i+1)T}$ is defined as in \\eqref{equ:phi} with $N$ replaced by $(i+1)T$, while $\\Phi_{iT+1}^{(i+1)T}$ has the same structure as the matrix in \\eqref{equ:phi} but contains the data from $iT-n+1$ to $(i+1)T$.\nThe computational cost of \\eqref{equ:r}-\\eqref{equ:y_bar} is $O(n^2T)$, $O(nT)$ and $O(T^2)$, respectively.\\\\\nThe minimization of $f_{(i+1)T}(\\eta)$ in \\eqref{equ:ml}, needed to determine $\\hat{\\eta}^{(i+1)}$, is typically performed through iterative routines, such as 1st or 2nd 
order optimization algorithms \\cite{BonettiniCPSIAM2014} or the Expectation-Maximization (EM) algorithm \\cite{bottegal2016robust,BOTTEGAL2015466}. Since these methods may require a large number of iterations before reaching convergence, they may be unsuited for online applications. We should recall that, when adopted for marginal likelihood optimization, each iteration of these algorithms has a computational complexity of $O(n^3)$, due to the objective function evaluation. Specifically, $f_{(i+1)T}(\\eta)$ can be robustly evaluated as \\cite{chen2013implementation}\n\\begin{align}\nf_{(i+1)T}(\\eta) = &((i+1)T-n)\\ln \\sigma^2 + 2\\ln\\vert S\\vert \\nonumber\\\\\n&+\\sigma^{-2}(\\ \\xbar{Y}^{(i+1)}- \\| S^{-1}L^\\top \\widetilde{Y}^{(i+1)}\\|_2^2\\ ) \\label{equ:eff_ml}\n\\end{align}\nwhere $L$ and $S$ are Cholesky factors: $K_\\eta =: LL^\\top$ and $\\sigma^2 I_n + L^\\top R^{(i+1)} L =: SS^\\top$ (whose computation is $O(n^3)$).\\\\ \nTo tackle the real-time constraints, the approach proposed in \\cite{RPPCECC2016} is adopted: $\\hat{\\eta}^{(i+1)}$ is computed by running just one iteration of a Scaled Gradient Projection (SGP) algorithm (a 1st order optimization method) applied to solve problem \\eqref{equ:ml_max} \\cite{BonettiniCPSIAM2014}. Algorithm \\ref{alg:1step_grad} summarizes its implementation. Notice that it is initialized with the previous estimate $\\hat{\\eta}^{(i)}$ (obtained using the data $\\bigcup_{l=1}^{i} \\mathcal{D}_l$), which is likely to be close to a local optimum of the objective function $f_{iT}(\\eta)\\equiv f_{k}(\\eta)$. If the number of new data $T$ is small compared to $k$ ($T \\ll k$), it is reasonable to suppose that $\\arg\\min_{\\eta\\in\\Omega} f_{iT} (\\eta) \\approx \\arg\\min_{\\eta\\in \\Omega} f_{(i+1)T} (\\eta)$.
\nTherefore, by just performing one SGP iteration, $\\hat{\\eta}^{(i+1)}$ should remain sufficiently close to a local optimum of $f_{(i+1)T}(\\eta)$.\n\n\n\\begin{algorithm}\n\\caption{1-step Scaled Gradient Projection (SGP)}\\label{alg:1step_grad}\n\\begin{algorithmic}[1]\n\\Statex{\\textbf{Inputs:}} previous estimates $\\{ \\hat{\\eta}^{(i)}, \\hat{\\eta}^{(i-1)}\\}$, $\\nabla f_{iT}(\\hat{\\eta}^{(i-1)})$, $R^{(i+1)}$, $\\widetilde{Y}^{(i+1)}$, $\\xbar{Y}^{(i+1)}$, $\\hat{\\sigma}^{(i+1)^2}$\n\\Statex Initialize: $c=10^{-4},\\ \\delta=0.4$\n\\State Compute $\\nabla f_{(i+1)T}(\\hat{\\eta}^{(i)})$\n\\State $r^{(i-1)} \\gets \\hat{\\eta}^{(i)} - \\hat{\\eta}^{(i-1)}$ \\label{alg_step:rf}\n\\State $w^{(i-1)} \\gets \\nabla f_{(i+1)T}(\\hat{\\eta}^{(i)}) - \\nabla f_{iT}(\\hat{\\eta}^{(i-1)})$ \\label{alg_step:w}\n\\State Approximate the inverse Hessian of $f_{(i+1)T}(\\hat{\\eta}^{(i)})$ as $B^{(i)}=\\alpha^{(i)}D^{(i)}$ (using the procedure outlined in \\cite{BonettiniCPSIAM2014})\\label{alg_step:inverse_H}\n\\State Project onto the feasible set:\n\\Statex $z\\gets \\Pi_{\\Omega,D^{(i)}} (\\ \\hat{\\eta}^{(i)} -B^{(i)}\\nabla f_{(i+1)T}(\\hat{\\eta}^{(i)})\\ )$\\label{alg_step:proj}\n\\State $\\Delta\\hat{\\eta}^{(i)} \\gets z - \\hat{\\eta}^{(i)}$\n\\State $\\nu \\gets 1$\n\\While{$f_{(i+1)T}(\\hat{\\eta}^{(i)}+\\nu \\Delta \\hat{\\eta}^{(i)}) > f_{(i+1)T}(\\hat{\\eta}^{(i)})+ c \\nu \\nabla f_{(i+1)T}(\\hat{\\eta}^{(i)})^\\top \\Delta\\hat{\\eta}^{(i)}$}\n\\State $\\nu \\gets \\delta \\nu$ \\Comment{backtracking until the Armijo condition holds}\n\\EndWhile\n\\State $\\hat{\\eta}^{(i+1)} \\gets \\hat{\\eta}^{(i)} + \\nu \\Delta \\hat{\\eta}^{(i)}$\n\\Statex \\textbf{Output:} $\\hat{\\eta}^{(i+1)}$\n\\end{algorithmic}\n\\end{algorithm}\n\n\\noindent The key step in Algorithm \\ref{alg:1step_grad} is \\ref{alg_step:inverse_H}, where the inverse Hessian is approximated as the product between the positive scalar $\\alpha^{(i)}\\in\\mathbb{R}_+$ and the diagonal matrix
$D^{(i)}\\in\\mathbb{R}^{d\\times d}$.\n$\\alpha^{(i)}$ is chosen by alternating the so-called Barzilai-Borwein (BB) rules \\cite{barzilai1988two}:\n\\begin{equation}\n\\alpha_1^{(i)} := \\frac{r^{(i-1)^\\top}r^{(i-1)}}{r^{(i-1)^\\top}w^{(i-1)}}, \\qquad \n\\alpha_2^{(i)} := \\frac{r^{(i-1)^\\top}w^{(i-1)}}{w^{(i-1)^\\top}w^{(i-1)}}\\label{equ:bb}\n\\end{equation}\nwith $r^{(i-1)}$ and $w^{(i-1)}$ specified at steps \\ref{alg_step:rf} and~\\ref{alg_step:w} of Algorithm \\ref{alg:1step_grad}. The definition of $D^{(i)}$ depends on the constraint set and on the objective function. The authors in \\cite{BonettiniCPSIAM2014} exploit the following decomposition of $\\nabla_\\eta f_{(i+1)T}(\\eta)$ (defined in \\eqref{equ:ml}):\n\\begin{equation}\\label{equ:grad_decomp}\n\\nabla_\\eta f_{(i+1)T}(\\eta) = V(\\eta) - U(\\eta), \\quad V(\\eta)>0, \\ U(\\eta)\\geq 0\n\\end{equation}\nto specify $D^{(i)}$.\n\\noindent We refer the interested reader to \\cite{BonettiniCPSIAM2014} for further details.\n\n\\noindent The projection operator adopted at step \\ref{alg_step:proj} of Algorithm \\ref{alg:1step_grad} is\n\\begin{equation}\n\\Pi_{\\Omega,D^{(i)}} (z) = \\textstyle{\\arg\\min_{x\\in\\Omega}} (x-z)^\\top D^{(i)^{-1}}(x-z)\n\\end{equation}\n\n\n\\begin{rem}\nBesides SGP, in \\cite{RPPCECC2016} other inverse Hessian approximations are investigated (e.g., the BFGS formula). In this work we only consider the SGP approximation, since it appears preferable to the others according to the experiments we performed (in both the time-invariant and time-varying settings). \\cite{RPPCECC2016} also considers the EM algorithm as an alternative to 1st order optimization methods to solve problem \\eqref{equ:ml_max}.
Although the results reported for EM in \\cite{RPPCECC2016} are comparable to those achieved through SGP, the latter appears superior to EM in the time-varying setting we are considering.\n\\end{rem}\n\n\\subsection{Dealing with time-varying systems}\\label{sec:time_var}\n\\noindent In this section we deal with the identification of time-varying systems: specifically, estimators have to be equipped with tools through which past data become less relevant for the current estimation. In the following we propose two routines which combine the ``online Bayesian estimation'' sketched above with the ability to ``forget'' past data.\n\n\\subsubsection{Fixed Forgetting Factor}\nFollowing a classical practice in parametric SysId (see Sec.~\\ref{sec:pem}), we introduce a forgetting factor $\\bar{\\gamma} \\in (0,1]$ that down-weights the available data, in order to base the estimation mainly on the most recent ones. Specifically, we assume that the first $k$ data are generated according to the following linear model:\n\\begin{equation}\\label{equ:ff_model}\n\\bar{G}_k Y_k = \\bar{G}_k \\Phi_k \\mathbf{h} + E, \\ E= \\left[e(1)...e(k)\\right]^\\top \\sim \\mathcal{N}(0,\\sigma^2 I_k)\n\\end{equation}\nwhere $\\bar{G}_k \\bar{G}_k =: \\bar{\\Gamma}_k$ and $\\bar{\\Gamma}_k := diag\\left(\\bar{\\gamma}^{k-1}, \\bar{\\gamma}^{k-2}, ..., \\bar{\\gamma}^0\\right)$.
Therefore, when adopting the regularized regression criterion \\eqref{equ:h_hat}, the estimate at time $k$ is computed as:\n\\begin{align}\n\\widehat{\\mathbf{h}}_{\\bar{\\gamma}} &:= \\arg\\min_{\\mathbf{h}\\in\\mathbb{R}^n}\\sum_{t=1}^k \\bar{\\gamma}^{k-t} \\left(y(t) - \\Phi_t^t \\mathbf{h}\\right)^2 + \\sigma^2 \\mathbf{h}^\\top K_{\\hat{\\eta}}^{-1} \\mathbf{h} \\label{equ:regul_probl_ff}\\\\\n&= \\arg\\min_{\\mathbf{h}\\in\\mathbb{R}^n}\\left(Y_k - \\Phi_k \\mathbf{h}\\right)^\\top \\bar{\\Gamma}_k \\left(Y_k - \\Phi_k \\mathbf{h}\\right) + \\sigma^2 \\mathbf{h}^\\top K_{\\hat{\\eta}}^{-1} \\mathbf{h}\\nonumber\\\\\n&= (\\Phi_k^\\top \\bar{\\Gamma}_k \\Phi_k + \\sigma^2 K_{\\hat{\\eta}}^{-1})^{-1} \\Phi_k^\\top \\bar{\\Gamma}_k Y_k \\label{equ:h_hat_ff}\n\\end{align}\nCorrespondingly, the hyper-parameters are estimated by solving:\n\\begin{align}\n\\hat{\\eta} &= \\arg\\min_{\\eta\\in\\Omega}\\ Y_k^\\top \\bar{G}_k \\Sigma_{\\bar{\\gamma}}(\\eta)^{-1} \\bar{G}_k Y_k + \\ln \\det \\Sigma_{\\bar{\\gamma}}(\\eta)\\label{equ:eta_hat_ff}\\\\\n\\Sigma_{\\bar{\\gamma}}(\\eta)&= \\bar{G}_k \\Phi_k K_\\eta \\Phi_k^\\top \\bar{G}_k + \\sigma^2 I_k\n\\end{align}\nAlgorithm \\ref{alg:on_line_ff} illustrates the online implementation of the identification procedure based on equations \\eqref{equ:h_hat_ff} and \\eqref{equ:eta_hat_ff}. In particular, it assumes that at time $k$ the estimates $\\widehat{\\mathbf{h}}^{(i)}$ and $\\hat{\\eta}^{(i)}$ are available, having been computed by solving, respectively, \\eqref{equ:regul_probl_ff} and \\eqref{equ:eta_hat_ff}; these estimates are then updated online once the new data $\\mathcal{D}_{i+1}$ are provided.
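As a concrete illustration, the weighted statistics $\Phi_k^\top \bar{\Gamma}_k \Phi_k$, $\Phi_k^\top \bar{\Gamma}_k Y_k$, $Y_k^\top \bar{\Gamma}_k Y_k$ and the regularized estimate \eqref{equ:h_hat_ff} can be sketched in a few lines of NumPy. This is a schematic implementation, not the authors' code; the function names are ours, and the regularizer $K$ and noise variance $\sigma^2$ are taken as given.

```python
import numpy as np

def update_stats(R, Yt, Yb, Phi_new, Y_new, gamma_bar):
    # Down-weight the old statistics by gamma_bar**T and add the T new
    # samples weighted by Gamma_T = diag(gamma_bar**(T-1), ..., 1).
    T = len(Y_new)
    Gamma_T = np.diag(gamma_bar ** np.arange(T - 1, -1, -1))
    R_new  = gamma_bar**T * R  + Phi_new.T @ Gamma_T @ Phi_new
    Yt_new = gamma_bar**T * Yt + Phi_new.T @ Gamma_T @ Y_new
    Yb_new = gamma_bar**T * Yb + Y_new @ Gamma_T @ Y_new
    return R_new, Yt_new, Yb_new

def regularized_estimate(R, Yt, K, sigma2):
    # h = (Phi' Gamma Phi + sigma^2 K^{-1})^{-1} Phi' Gamma Y
    return np.linalg.solve(R + sigma2 * np.linalg.inv(K), Yt)
```

Applying `update_stats` batch after batch reproduces the batch-weighted statistics exactly, since the old exponential weights are simply shifted by $\bar{\gamma}^T$.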
Once $\\bar{\\gamma}$ is chosen by the user, it is inserted in the data matrices $R_{\\bar{\\gamma}}^{(i+1)}:=\\Phi_{(i+1)T}^\\top \\bar{\\Gamma}_{(i+1)T}\\Phi_{(i+1)T},\\ \\widetilde{Y}_{\\bar{\\gamma}}^{(i+1)}:=\\Phi_{(i+1)T}^\\top \\bar{\\Gamma}_{(i+1)T}Y_{(i+1)T}, \\\n\\xbar{Y}_{\\bar{\\gamma}}^{(i+1)}:=Y_{(i+1)T}^\\top \\bar{\\Gamma}_{(i+1)T}Y_{(i+1)T}$, updated at steps \\ref{alg2_step:r}-\\ref{alg2_step:yb} of the algorithm.\n\n\\begin{algorithm}\n\\caption{Online Bayesian SysId: Fixed Forgetting Factor}\\label{alg:on_line_ff}\n\\begin{algorithmic}[1]\n\\Statex{\\textbf{Inputs:}} forgetting factor $\\bar{\\gamma}$, previous estimates $\\{ \\hat{\\eta}^{(i)}, \\hat{\\eta}^{(i-1)}\\}$, previous data matrices $\\{R_{\\bar{\\gamma}}^{(i)},\\widetilde{Y}_{\\bar{\\gamma}}^{(i)},\\xbar{Y}_{\\bar{\\gamma}}^{(i)}\\}$, new data $\\mathcal{D}_{i+1}=\\left\\{u(t),y(t)\\right\\}_{t=iT+1}^{(i+1)T}$\n\\State $R_{\\bar{\\gamma}}^{(i+1)} \\gets \\bar{\\gamma}^T R_{\\bar{\\gamma}}^{(i)} + \\left(\\Phi_{iT+1}^{(i+1)T}\\right)^\\top \\bar{\\Gamma}_T\\ \\Phi_{iT+1}^{(i+1)T} $ \\label{alg2_step:r}\n\\State $\\widetilde{Y}_{\\bar{\\gamma}}^{(i+1)} \\gets \\bar{\\gamma}^T\\widetilde{Y}_{\\bar{\\gamma}}^{(i)} + \\left(\\Phi_{iT+1}^{(i+1)T}\\right)^\\top \\bar{\\Gamma}_T\\ Y_{iT+1}^{(i+1)T}$ \\label{alg2_step:yt}\n\\State $\\xbar{Y}_{\\bar{\\gamma}}^{(i+1)} \\gets \\bar{\\gamma}^T \\xbar{Y}_{\\bar{\\gamma}}^{(i)} + \\left(Y_{iT+1}^{(i+1)T}\\right)^\\top \\bar{\\Gamma}_T\\ Y_{iT+1}^{(i+1)T}$ \\label{alg2_step:yb}\n\\State $\\widehat{\\mathbf{h}}_{LS}^{(i+1)} \\gets R_{\\bar{\\gamma}}^{(i+1)^{-1}} \\widetilde{Y}_{\\bar{\\gamma}}^{(i+1)}$ \\label{alg2_step:ls}\n\\State {\\footnotesize$\\hat{\\sigma}^{(i+1)^2} \\gets \\frac{1}{ (i+1)T - n} \\left(\\xbar{Y}_{\\bar{\\gamma}}^{(i+1)}-2\\widetilde{Y}_{\\bar{\\gamma}}^{(i+1)^\\top}\\widehat{\\mathbf{h}}_{LS}^{(i+1)} + \\widehat{\\mathbf{h}}_{LS}^{(i+1)^\\top}R_{\\bar{\\gamma}}^{(i+1)}\\widehat{\\mathbf{h}}_{LS}^{(i+1)} \\right)$}\n\\State 
$\\hat{\\eta}^{(i+1)}\\gets \\arg\\min_{\\eta\\in\\Omega}\\ f_{(i+1)T}(\\eta)$ (use Algorithm \\ref{alg:1step_grad})\n\\label{alg2_step:ml}\n\\State $\\widehat{\\mathbf{h}}^{(i+1)} \\gets \\left(R_{\\bar{\\gamma}}^{(i+1)} +\\hat{\\sigma}^{(i+1)^2} K_{\\hat{\\eta}^{(i+1)}}^{-1}\\right)^{-1}\\widetilde{Y}_{\\bar{\\gamma}}^{(i+1)}$\\label{alg2_step:h}\n\\Statex{\\textbf{Output:}} $\\widehat{\\mathbf{h}}^{(i+1)}$, $\\hat{\\eta}^{(i+1)}$\n\\end{algorithmic}\n\\end{algorithm}\n\n\\subsubsection{Treating the Forgetting Factor as a Hyper-parameter}\nThe Bayesian framework provides the user with the possibility of treating the forgetting factor as a hyper-parameter, estimating it by solving:\n\\begin{align}\n\\hat{\\eta}, \\hat{\\gamma} &= {\\textstyle\\arg\\min_{\\eta\\in\\Omega, \\gamma\\in(0,1]}}\\ f_k(\\eta,\\gamma)\\label{equ:eta_hat_ff_hyper} \\\\\nf_k(\\eta, \\gamma)&= Y_k^\\top G_k \\Sigma(\\eta,\\gamma)^{-1} G_k Y_k + \\ln \\det \\Sigma(\\eta,\\gamma)\\label{equ:ml_ff_hyper} \\\\\n\\Sigma(\\eta,\\gamma)&= G_k \\Phi_k K_\\eta \\Phi_k^\\top G_k + \\sigma^2 I_k\n\\end{align}\nwhere $G_k G_k =: \\Gamma_k$ and $\\Gamma_k := diag\\left(\\gamma^{k-1}, \\gamma^{k-2}, ..., \\gamma^0\\right)$.\n\\begin{rem} Notice that the model \\eqref{equ:ff_model} is equivalent to\n$$\nY_k = \\Phi_k \\mathbf{h} + E_{\\bar{\\gamma}}, \\quad E_{\\bar{\\gamma}} = \\left[e_{\\bar{\\gamma}}(1),...,e_{\\bar{\\gamma}}(k)\\right]^\\top \\sim \\mathcal{N}(0,\\sigma^2 \\bar{\\Gamma}_k^{-1})\n$$\nTherefore, treating the forgetting factor as a hyper-parameter is equivalent to modeling the noise with a non-constant variance and to giving the diagonal entries of the noise covariance matrix an exponentially decaying structure.\n\\end{rem}\n\nThe online implementation of this approach is detailed in Algorithm \\ref{alg:on_line_ff_hyper}, where\n\\begin{equation}\nR_{\\hat{\\pmb{\\gamma}}}^{(i)} := \\hat{\\gamma}^{(i)} R_{\\hat{\\pmb{\\gamma}}}^{(i-1)} + 
\\left(\\Phi_{(i-1)T+1}^{iT}\\right)^\\top \\widehat{\\Gamma}_T^{(i)} \\Phi_{(i-1)T+1}^{iT} \n\\end{equation}\nwith $\\widehat{\\Gamma}_T^{(i)} = diag((\\hat{\\gamma}^{(i)})^{T-1}, ..., (\\hat{\\gamma}^{(i)})^{0})$. $\\widetilde{Y}_{\\hat{\\pmb{\\gamma}}}^{(i)}$ and $\\xbar{Y}_{\\hat{\\pmb{\\gamma}}}^{(i)}$ are analogously defined.\\\\\nWe should stress that the objective function in \\eqref{equ:ml_ff_hyper} does not admit the decomposition \\eqref{equ:grad_decomp}; we have\n\\begin{equation*}\n\\frac{\\partial f_k(\\eta,\\gamma)}{\\partial \\gamma} = V(\\eta,\\gamma) + U(\\eta,\\gamma), \\quad V(\\eta,\\gamma) >0, \\ U(\\eta,\\gamma) \\geq 0 \n\\end{equation*}\nThus, when $\\gamma$ is treated as a hyper-parameter, Algorithm \\ref{alg:1step_grad} is run setting $D^{(i)}=I_d$ at step \\ref{alg_step:inverse_H}; $\\alpha^{(i)}$ is still determined by alternating the BB rules \\eqref{equ:bb}. Notice that steps \\ref{alg3_step:r}-\\ref{alg3_step:yb} of Algorithm \\ref{alg:on_line_ff_hyper} are functions of the candidate value $\\gamma$, which enters the objective $f_{(i+1)T}(\\eta,\\gamma)$ optimized at step \\ref{alg3_step:ml}.\n\n\\begin{algorithm}\n\\caption{Online Bayesian SysId: Forgetting Factor as a hyper-parameter}\\label{alg:on_line_ff_hyper}\n\\begin{algorithmic}[1]\n\\Statex{\\textbf{Inputs:}} previous estimates $\\{ \\hat{\\eta}^{(i)}, \\hat{\\eta}^{(i-1)}, \\hat{\\gamma}^{(i)}, \\hat{\\gamma}^{(i-1)}\\}$, previous data matrices $\\{R_{\\hat{\\pmb{\\gamma}}}^{(i)},\\widetilde{Y}_{\\hat{\\pmb{\\gamma}}}^{(i)},\\xbar{Y}_{\\hat{\\pmb{\\gamma}}}^{(i)}\\}$, new data $\\mathcal{D}_{i+1}=\\left\\{u(t),y(t)\\right\\}_{t=iT+1}^{(i+1)T}$\n\\State $R_{\\gamma}^{(i+1)} \\gets \\gamma^T R_{\\hat{\\pmb{\\gamma}}}^{(i)} + \\left(\\Phi_{iT+1}^{(i+1)T}\\right)^\\top \\Gamma_T \\ \\Phi_{iT+1}^{(i+1)T} $ \\label{alg3_step:r}\n\\State $\\widetilde{Y}^{(i+1)}_{\\gamma} \\gets \\gamma^T \\widetilde{Y}_{\\hat{\\pmb{\\gamma}}}^{(i)} + \\left(\\Phi_{iT+1}^{(i+1)T}\\right)^\\top \\Gamma_T \\ Y_{iT+1}^{(i+1)T}$ \\label{alg3_step:yt}\n\\State $\\xbar{Y}^{(i+1)}_\\gamma \\gets \\gamma^T \\xbar{Y}_{\\hat{\\pmb{\\gamma}}}^{(i)} + \\left(Y_{iT+1}^{(i+1)T}\\right)^\\top \\Gamma_T \\ Y_{iT+1}^{(i+1)T}$ 
\\label{alg3_step:yb}\n\\State $\\widehat{\\mathbf{h}}_{LS}^{(i+1)} \\gets (R_{\\hat{\\pmb{\\gamma}}}^{(i)})^{-1} \\widetilde{Y}_{\\hat{\\pmb{\\gamma}}}^{(i)}$ \\label{alg3_step:ls}\n\\State {\\footnotesize$\\hat{\\sigma}^{2^{(i+1)}} \\gets \\frac{1}{ (i+1)T - n} \\left(\\xbar{Y}_{\\hat{\\pmb{\\gamma}}}^{(i)}-\n2(\\widetilde{Y}_{\\hat{\\pmb{\\gamma}}}^{(i)})^\\top\\ \\widehat{\\mathbf{h}}_{LS}^{(i+1)} + (\\widehat{\\mathbf{h}}_{LS}^{(i+1)})^\\top R_{\\hat{\\pmb{\\gamma}}}^{(i)} \\widehat{\\mathbf{h}}_{LS}^{(i+1)} \\right)$}\n\\State $\\hat{\\eta}^{(i+1)}, \\hat{\\gamma}^{(i+1)} \\gets \\arg\\min_{\\eta\\in\\Omega, \\gamma\\in(0,1]}\\ f_{(i+1)T}(\\eta,\\gamma)$ \\Statex (use Algorithm \\ref{alg:1step_grad})\n\\label{alg3_step:ml}\n\\State $\\widehat{\\mathbf{h}}^{(i+1)} \\gets \\left(R_{\\hat{\\pmb{\\gamma}}}^{(i+1)} +\\hat{\\sigma}^{2^{(i+1)}}(\\hat{\\gamma}^{(i+1)})\\ K_{\\hat{\\eta}^{(i+1)}}^{-1}\\right)^{-1}\n\\widetilde{Y}_{\\hat{\\pmb{\\gamma}}}^{(i+1)}$\\label{alg3_step:h}\n\\Statex{\\textbf{Output:}} $\\widehat{\\mathbf{h}}^{(i+1)}$, $\\hat{\\eta}^{(i+1)}$, $\\hat{\\gamma}^{(i+1)}$\n\\end{algorithmic}\n\\end{algorithm}\n\n\\section{Experimental Results}\\label{sec:experiment}\n\\noindent In this section we test the online algorithms for parametric and Bayesian SysId described in Sec.~\\ref{sec:pem} and \\ref{sec:bayes}. Their performance is compared through a Monte-Carlo study over 200 time-varying systems.\n\n\\subsection{Data}\n\\noindent 200 datasets of 3000 input-output measurement pairs each are generated. Each of them is created as follows: the first 1000 data are produced by a system contained in the data-bank D4 (used in \\cite{TC-MA-LL-AC-GP:14}), while the remaining 2000 data are generated by perturbing the D4-system with two additional poles and zeros. These are chosen such that the order of the D4-system changes, thus creating a switch in the data-generating system at time $k=1001$.
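The construction of one such dataset can be sketched as follows. This is only an illustrative NumPy snippet: the impulse responses `h1` and `h2` below are simple placeholders for a 30th-order D4 system and its perturbed version, and the input is plain white noise rather than the band-limited signal used in the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# Placeholder impulse responses: h1 stands in for a D4 system, h2 for its
# perturbation with extra dynamics (the actual systems are 30th-order).
h1 = 0.8 ** np.arange(30)
h2 = np.convolve(h1, [1.0, -0.5])

u = rng.standard_normal(3000)            # stand-in for the band-limited input
y = np.concatenate([
    np.convolve(u, h1)[:1000],           # first system, up to the switch
    np.convolve(u, h2)[1000:3000],       # perturbed system from k=1001 on
])

# Zero-mean white Gaussian noise, scaled so that SNR = var(y)/var(e) = 1
e = rng.standard_normal(3000) * y.std()
y_noisy = y + e
```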
\n\\\\\nThe data-bank D4 consists of 30th-order random SISO discrete-time systems having all the poles inside a circle of radius 0.95. These systems are simulated with a unit-variance band-limited Gaussian signal with normalized band $[0,0.8]$. A zero-mean white Gaussian noise, with variance adjusted so that the Signal-to-Noise Ratio (SNR) is always equal to 1, is then added to the output data.\n\n\n\\subsection{Estimators}\n\\noindent The parametric estimators are computed with the \\verb!roe! Matlab routine, using the BIC criterion for the model complexity selection. In the following this estimator will be denoted as \\textit{PEM BIC}. Furthermore, as a benchmark we introduce the parametric oracle estimator, called \\textit{PEM OR}, which selects the model complexity by choosing the model that gives the best fit to the impulse response of the true system. The order selection is performed every time a new dataset becomes available: multiple models with orders ranging from 1 to 20 are estimated and the order is selected according to the two criteria described above.\n\\\\\nAs regards the methods relying on Bayesian inference, we adopt a zero-mean Gaussian prior with a covariance matrix (kernel) given by the so-called TC-kernel:\n\\begin{equation}\nK_\\eta^{TC}(k,j) = \\lambda \\min(\\beta^{k},\\beta^{j}), \\qquad \\eta = [ \\lambda,\\ \\beta]\n\\end{equation}\nwith $\\Omega = \\{(\\lambda,\\beta): \\lambda \\geq 0, 0 \\leq \\beta \\leq 1\\}$ \\cite{ChenOL12}. The length $n$ of the estimated impulse responses is set to 100. In the following, we will use the acronym \\textit{TC} to denote these methods.
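For reference, the TC-kernel matrix above can be assembled in a few lines; a schematic NumPy sketch (the function name is ours, and the 1-based indexing follows the formula):

```python
import numpy as np

def tc_kernel(n, lam, beta):
    # TC kernel: K(k, j) = lam * min(beta**k, beta**j), for k, j = 1..n
    bk = beta ** np.arange(1, n + 1)
    return lam * np.minimum.outer(bk, bk)

K = tc_kernel(100, lam=1.0, beta=0.9)
```

Since $\min(\beta^{k},\beta^{j})=\beta^{\max(k,j)}$, the resulting matrix is symmetric and, for $0 \leq \beta \leq 1$, positive semidefinite, as required of a prior covariance.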
Furthermore, the notation \\textit{OPT} will refer to the standard Bayesian procedure, in which the SGP algorithm adopted to optimize the marginal likelihood $f_k(\\eta)$ is run until the relative change in $f_k(\\eta)$ is less than $10^{-9}$.\nFrom here on, the online counterpart (illustrated in Sec.~\\ref{sec:bayes}) will be referred to as the \\textit{1-step ML}.\nWe will also use the acronym \\textit{TC FF} when a fixed forgetting factor is adopted, and \\textit{TC est FF} when the forgetting factor is estimated as a hyper-parameter.\n\\\\\nFor each Monte Carlo run, the identification algorithms are initialized using the first batch of data $\\mathcal{D}_{init}=\\left\\{u(t),y(t)\\right\\}_{t=1}^{300}$.\nAfter this initial step, the estimators are updated every $T=10$ time steps, when new data $\\mathcal{D}_{i+1} = \\left\\{u(t),y(t)\\right\\}_{t=iT+1}^{(i+1)T}$ are provided. The forgetting factor in the \\textit{TC FF} and \\textit{PEM} methods is set to 0.998, while its estimation in the \\textit{TC est FF} method is initialized at 0.995.\n\n\\subsection{Performance}\n\\noindent The purpose of the experiments is twofold. First, we will compare the two routines we have proposed in Sec.~\\ref{sec:time_var} to explicitly deal with time-varying systems.
Second, we will compare the parametric and the Bayesian identification approaches while dealing with time-varying systems.\\\\\nAs a first comparison, we evaluate the adherence of the estimated impulse response $\\widehat{\\mathbf{h}}$ to the true one $\\mathbf{h}$, measured~as\n\\begin{equation}\n\\label{eq:fit_imp_resp}\n\\mathcal{F}(\\widehat{\\mathbf{h}})= 100 \\cdot \\Big(1- \\frac{\\Vert \\mathbf{h} - \\widehat{\\mathbf{h}} \\Vert_2}{\\Vert \\mathbf{h} \\Vert_2}\\Big)\n\\end{equation}\nFigure \\ref{fig:fit} reports the values of $\\mathcal{F}(\\widehat{\\mathbf{h}})$ at four time instants.\n\n\\begin{figure}[hbtp]\n\\centering\n\\includegraphics[width=.9\\columnwidth]{fitBoxplot_timeVard4_sgp_1i_TC_Nsys200_NestSysOrig1000_NestPerturb3000_SNR1_Nred300_Nstep10_FF998_FFpem998_FINAL-crop.pdf}\n\\caption{Fit $\\mathcal{F}(\\widehat{\\mathbf{h}})$ achieved at four time instants $k$ (corresponding to the number of data available for the estimation).}\n\\label{fig:fit}\n\\end{figure}\n\n\n\\noindent \nIt is interesting to note that immediately before the change in the data-generating system ($k = 1000$) the \\textit{TC} methods slightly outperform the ideal parametric estimator \\textit{PEM OR}.\nAfter the switch (occurring at $k=1001$), among the regularization\/Bayesian routines \\textit{TC est FF} recovers the fit performance a bit faster than \\textit{TC FF}; even in steady state it outperforms the latter, because it can choose forgetting factor values that retain a larger amount of data.\n\\\\\nWe also observe that the \\textit{1-step ML} procedures and the corresponding \\textit{OPT} routines provide comparable performance at each time step $k$, validating the method we propose for online estimation and confirming the results in \\cite{RPPCECC2016}.\n\\\\\nThe unrealistic \\textit{PEM OR} represents the reference for the achievable performance of the PEM estimators; it outperforms the \\textit{TC} methods in the transient after the switch, while it has comparable
performance in steady state. Instead, the recursive \\textit{PEM BIC} estimator performs very poorly.\n\n\n\n\\begin{table}\n\\centering\n\\begin{tabularx}{\\columnwidth}{|X|X|X|X|X|X|}\n\\hline \n & \\multicolumn{3}{c|}{TC} & \\multicolumn{2}{c|}{PEM} \n\\\\ \n & OPT FF & FF & est FF & OR & BIC \\\\ \n\\hline \nmean & 6.70 & 0.44 & 5.45 & 18.44 & 18.44 \\\\\\hline\nstd & 1.28 & 0.03 & 0.67 & 0.69 & 0.69 \\\\\n\\hline \n\\end{tabularx} \n\\caption{Cumulative computational time after the data $\\mathcal{D} =\\left\\{u(t),y(t)\\right\\}_{t=1}^{3000}$ are processed: mean and std over the 200 datasets.}\\label{tab:cumulative_time_TC_PEM}\n\\vspace{-4mm}\n\\end{table}\n\n\n\n\\noindent As a second comparison, Table \\ref{tab:cumulative_time_TC_PEM} reports the cumulative computational time of the proposed algorithms in terms of mean and standard deviation after the estimators are fed with all the data $\\mathcal{D} =\\left\\{u(t),y(t)\\right\\}_{t=1}^{3000}$. \nThe \\textit{1-step ML} methods are one order of magnitude faster than the corresponding \\textit{OPT} ones. The \\textit{TC est FF} estimator is slower than \\textit{TC FF}: this is likely a consequence of having set $D^{(i)}=I_d$ in Algorithm \\ref{alg:1step_grad}. On the other hand, the RPEM estimators are about three times slower than the \\textit{OPT} ones, and thus appear not particularly appealing for online applications. \n\\section{Conclusion and Future Work}\\label{sec:conclusion}\n\\noindent We have applied recently developed SysId techniques, relying on Gaussian processes and Bayesian inference, to the identification of time-varying systems. Specifically, we have focused on an online setting by assuming that new data become available at predefined time instants.
To tackle the real-time constraints we have modified the standard Bayesian procedure: hyper-parameters are estimated by performing only one gradient step in the corresponding marginal likelihood optimization problem.\nIn order to cope with the time-varying nature of the systems to be identified, we have proposed two approaches based on the use of a forgetting factor. One of them treats the forgetting factor as a constant, while the other estimates it as a hyper-parameter of the Bayesian inference procedure.\\\\\nWe believe that the preliminary investigation performed in this work may pave the way for further research on this topic. A future research direction could consider the recursive update of the Bayesian estimate, resembling the one available for parametric techniques.\n\n\n\n\n\\bibliographystyle{IEEEtran}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}
Assessing whether there are population inhomogeneities in $\\omega$~Cen or gradients\nin metal abundance is a crucial step to progress in our understanding of this fascinating stellar\nsystem.\\\\\nIn Scarpa et al. (2003, 2007) we presented the results of a spectroscopic campaign to\nstudy the stellar radial velocity dispersion profile at $\\sim$ 25 arcmin, halfway to \nthe tidal radius ($\\sim$ 57 arcmin, Harris 1996), in an attempt to find a new way to verify the \nMOND (Modified Newtonian Dynamics, Milgrom 1983) theory of gravitation.\\\\\nIn this paper we make use of a subsample of those spectra (the ones taken for\nRGB stars) and extract estimates of metal abundances for some of the most\ninteresting elements.\nThe aim is to study the chemical trends of the stellar populations in the cluster periphery,\nto try to learn whether the cluster outskirts contain, both qualitatively\nand quantitatively, the same population mix, and to use this information to draw additional\ninferences on the cluster formation and evolution.\\\\\nThe layout of the paper is as follows. In Sect.~2 we describe observations and data reduction,\nwhile Sect.~3 is dedicated to the derivation of metal abundances.\nThe latter are then discussed in detail in Sect.~4. Sect.~5 is devoted to the comparison\nof the metal abundance trends in the inner and outer regions of $\\omega$~Cen, and, finally,\nSect.~6 summarizes the findings of this study.\n\n\\begin{figure}\n\\centering\n\\includegraphics[width=\\columnwidth]{figure1.eps}\n\\caption{The CMD of $\\omega$~Cen in the optical (left panel) and infrared (right panel).\nSolid symbols of different colors indicate stars belonging to the MPP (red),\nIMP (blue) and MRP (green). See text for more details.}\n\\label{f1}\n\\end{figure}\n\n\\begin{figure}\n\\centering\n\\includegraphics[width=\\columnwidth]{figure2.eps}\n\\caption{Distribution of iron abundance for the program stars. A bimodal\nGaussian fit is used to derive the mean iron abundance of the MPP and IMP.
\nMean iron abundances of the three peaks are indicated. See text for more details.}\n\\label{f2}\n\\end{figure}\n\n\\begin{table} [!hbp]\n\\caption{Measured Solar abundances ($\\rm\n{log\\epsilon(X)=log(N_{X}\/N_{H})+12}$).} \n\\label{t1}\n\\centering\n\\begin{tabular}{lc}\n\\hline\n\\hline\nElement & log$\\epsilon$(X)\\\\\n\\hline\nNaI & 6.37 \\\\\nMgI & 7.54 \\\\\nSiI & 7.61 \\\\\nCaI & 6.39 \\\\\nTiI & 4.94 \\\\\nTiII & 4.96 \\\\\nCrI & 5.63 \\\\\nFeI & 7.50 \\\\\nNiI & 6.28 \\\\\nZnI & 4.61 \\\\\nYII & 2.25 \\\\\nBaI & 2.31 \\\\\n\\hline\n\\end{tabular}\n\\end{table}\n\n\n\\section{Observations and Data reduction}\nOur data-set consists of UVES spectra collected in\nAugust 2001, for a project devoted to \nmeasuring radial velocities and establishing membership in the outskirts of the \ncluster. Data were obtained with UVES\/VLT@UT2\n(Pasquini et al.\\ 2002) with a typical seeing of 0.8-1.2 arcsec. \nWe observed isolated stars from the lower red giant branch (RGB) up to the tip\nof the RGB of $\\omega$~Cen, in the magnitude\nrange $11.5<{\\rm V}<16.0$.\n\nWe used the UVES spectrograph in the RED 580 setting. The spectra have a spectral coverage of\n$\\sim$2000 \\AA \\ with the central wavelength at 5800 \\AA. The typical\nsignal-to-noise ratio is ${\\rm S\/N\\sim 20-30}$.\nFor additional details, the reader is referred to Scarpa et al. (2003).\n\n\nData were reduced using the UVES pipeline (Ballester et al.\\ 2000),\nincluding bias subtraction, flat-field correction, wavelength\ncalibration, sky subtraction and spectral rectification. Stars\nwere selected from photographic BV observations (van Leeuwen et al. 2000)\ncoupled with infrared JHK 2MASS photometry (Skrutskie\net al.\\ 2006). Targets are located at a radial distance between 20 and 30
The whole sample of stars contain both RGB and horizontal branch (HB) stars.\nIn this paper we focus our attention only on RGB objects, for the sake of comparison\nwith previous studies.\n\n\n\\subsection{Radial velocities and membership}\nIn the present work, radial velocities were used as the membership criterion\nsince the cluster stars all have similar motions with respect to the observer.\nThe radial velocities of the\nstars were measured using the IRAF FXCOR task, which cross-correlates\nthe object spectrum with a template. As a template, we used a\nsynthetic spectrum obtained through the spectral synthesis code\nSPECTRUM (see {\\sf http:\/\/www.phys.appstate.edu\/spectrum\/spectrum.html} for\nmore details), using a Kurucz model atmosphere with roughly the mean\natmospheric parameters of our stars $\\rm {T_{eff}=4900}$ K, $\\rm {log(g)=2.0}$,\n$\\rm {v_{t}=1.3}$ km\/s, $\\rm {[Fe\/H]=-1.40}$. Each radial velocity was\ncorrected to the heliocentric system. We calculated a first approximation\nmean velocity and the r.m.s ($\\sigma$) of the velocity distribution.\nStars showing $\\rm {v_{r}}$ out of more than\n$3\\sigma$ from the mean value were considered probable field objects and\nrejected, leaving us with 48 UVES spectra of probable members, whose position\nin the CMD are shown in Fig.~\\ref{f1}. Radial velocities\nfor member stars are reported in Tab.~\\ref{t2}\n\n\\section{Abundance analysis}\n\\subsection{Continuum determination}\n\nThe chemical abundances for all elements\nwere obtained from the equivalent widths (EWs) of the spectral lines (see next\nSection for the description of the line-list we used). \nAn accurate measurement of EWs first requires a good determination\nof the continuum level. Our relatively metal-poor stars\nallowed us to proceed in the following way. First, \nfor each line, we selected a region of 20 \\AA \\ centered on the line itself\n(this value is a good compromise between having enough points, i. e. 
good statistics, and \navoiding too large a region where the spectrum might not be flat).\nThen we built the histogram of the distribution of the flux values, whose peak is a\nrough estimate of the continuum. We refined this determination by fitting a\nparabolic curve to the peak and using the vertex as our continuum estimate. \nFinally, we revised the continuum determination by eye and corrected it by hand\nif an obvious discrepancy with the spectrum was found.\nThen, using the continuum value previously obtained, we fit a Gaussian curve\nto each spectral line and obtained the EW from integration.\nWe rejected lines if they were affected by a bad continuum determination, by a non-Gaussian\nshape, if their central wavelength did not agree with that expected from \nour line-list, or if the lines were too broad or too narrow with respect to the\nmean FWHM.\nWe verified that the Gaussian shape is a good approximation for our spectral\nlines, so no Lorentzian correction has been applied.\n\n\\begin{table*}\n\\caption{Stellar parameters. Coordinates are for the J2000.0 equinox.}\n\\label{t2}\n\\centering\n\\begin{tabular}{cccccccccccc}\n\\hline\n\\hline\nID & $\\alpha$ & $\\delta$ & B & V & J$_{\\rm 2MASS}$ & H$_{\\rm 2MASS}$ & K$_{\\rm 2MASS}$ & T$_{\\rm eff}$ & log(g) & v$_{\\rm t}$ & RV$_{\\rm H}$\\\\\n\\hline\n& deg & deg & 
& & & & & & $^{0}K$ & & km\/sec & km\/sec \\\\\n\\hline \n00006 & 201.27504 & -47.15599 & 16.327 & 15.531 & 13.865 & 13.386 & 13.364 & 5277 & 2.75 & 1.23 & 222.99\\\\\n08004 & 201.07113 & -47.22082 & 15.393 & 14.508 & 12.687 & 12.110 & 12.007 & 4900 & 2.17 & 1.38 & 241.70\\\\\n10006 & 201.16314 & -47.23357 & 14.510 & 13.710 & 11.887 & 11.413 & 11.300 & 5080 & 1.93 & 1.44 & 237.18\\\\\n10009 & 201.24457 & -47.23406 & 13.807 & 12.573 & 10.331 & 9.664 & 9.520 & 4432 & 1.14 & 1.64 & 227.57\\\\\n10010 & 201.33458 & -47.23334 & 14.982 & 13.941 & 11.963 & 11.394 & 11.249 & 4758 & 1.88 & 1.45 & 220.18\\\\\n13006 & 201.13373 & -47.25880 & 16.442 & 15.665 & 14.112 & 13.615 & 13.504 & 5251 & 2.79 & 1.22 & 231.78\\\\\n14002 & 201.16243 & -47.26471 & 15.696 & 14.853 & 13.110 & 12.634 & 12.552 & 5151 & 2.42 & 1.31 & 224.00\\\\\n22007 & 201.08521 & -47.32639 & 14.799 & 13.635 & 11.843 & 11.221 & 11.077 & 4750 & 1.75 & 1.49 & 227.25\\\\\n25004 & 201.18696 & -47.34607 & 15.048 & 14.064 & 12.393 & 11.852 & 11.762 & 5034 & 2.06 & 1.41 & 230.68\\\\\n27008 & 201.16507 & -47.36326 & 15.242 & 14.220 & 12.519 & 12.046 & 11.911 & 5095 & 2.15 & 1.38 & 237.50\\\\\n28009 & 201.13729 & -47.36499 & 15.687 & 14.779 & 13.133 & 12.664 & 12.549 & 5186 & 2.41 & 1.32 & 236.00\\\\\n33006 & 201.12822 & -47.40730 & 13.062 & 11.403 & 8.924 & 8.064 & 7.929 & 4051 & 0.39 & 1.83 & 226.34\\\\ \n34008 & 201.19496 & -47.41343 & 13.803 & 12.629 & 10.510 & 9.897 & 9.749 & 4570 & 1.25 & 1.61 & 232.16\\\\\n38006 & 201.11643 & -47.44354 & 16.289 & 15.436 & 13.822 & 13.304 & 13.263 & 5202 & 2.68 & 1.25 & 222.58\\\\\n39013 & 201.16078 & -47.45089 & 13.950 & 12.755 & 10.690 & 10.097 & 9.935 & 4610 & 1.32 & 1.59 & 231.47\\\\\n42012 & 201.17440 & -47.47487 & 14.468 & 13.379 & 11.299 & 10.705 & 10.613 & 4673 & 1.61 & 1.52 & 232.58\\\\\n43002 & 201.14213 & -47.47916 & 15.313 & 14.365 & 12.597 & 12.113 & 11.956 & 5021 & 2.17 & 1.38 & 229.17\\\\\n45011 & 201.10941 & -47.49389 & 16.208 & 15.346 & 13.630 & 13.146 & 13.146 & 
5229 & 2.66 & 1.25 & 224.37\\\\\n45014 & 201.15625 & -47.50013 & 15.894 & 15.066 & 13.316 & 12.803 & 12.720 & 5073 & 2.47 & 1.30 & 249.24\\\\\n46003 & 201.12943 & -47.50252 & 15.640 & 14.788 & 13.073 & 12.578 & 12.455 & 5091 & 2.37 & 1.33 & 242.00\\\\\n48009 & 201.12036 & -47.51844 & 16.504 & 15.616 & 14.125 & 13.602 & 13.537 & 5279 & 2.79 & 1.22 & 222.94\\\\\n49008 & 201.16235 & -47.52717 & 15.657 & 14.687 & 12.799 & 12.256 & 12.210 & 4925 & 2.26 & 1.36 & 238.57\\\\\n51005 & 201.09190 & -47.53945 & 16.140 & 15.292 & 13.551 & 13.005 & 12.913 & 5028 & 2.55 & 1.28 & 221.27\\\\\n57006 & 201.18559 & -47.58523 & 15.906 & 15.046 & 13.320 & 12.797 & 12.757 & 5096 & 2.48 & 1.30 & 234.33\\\\\n61009 & 201.16032 & -47.61620 & 14.488 & 13.496 & 11.533 & 10.947 & 10.890 & 4784 & 1.71 & 1.50 & 239.65\\\\\n76015 & 201.33839 & -47.73435 & 15.602 & 14.604 & 12.839 & 12.355 & 12.231 & 5026 & 2.27 & 1.35 & 241.73\\\\\n77010 & 201.23548 & -47.74124 & 14.992 & 13.641 & 11.886 & 11.269 & 11.133 & 4746 & 1.75 & 1.49 & 238.49\\\\\n78008 & 201.21908 & -47.74676 & 16.088 & 15.001 & 13.484 & 12.990 & 12.909 & 5221 & 2.52 & 1.29 & 223.14\\\\\n80017 & 201.40179 & -47.75878 & 15.250 & 14.294 & 12.481 & 11.989 & 11.896 & 5026 & 2.15 & 1.38 & 231.59\\\\\n82012 & 201.44193 & -47.77921 & 16.094 & 15.298 & 13.558 & 13.059 & 12.947 & 5099 & 2.58 & 1.27 & 232.28\\\\\n85007 & 201.19307 & -47.80062 & - & - & 14.020 & 13.489 & 13.419 & 4983 & 2.20 & 1.37 & 250.48\\\\\n85014 & 201.37723 & -47.80134 & 15.400 & 14.347 & 12.560 & 11.982 & 11.923 & 4899 & 2.11 & 1.39 & 236.78\\\\\n85019 & 201.53965 & -47.80194 & 15.727 & 14.803 & 12.939 & 12.428 & 12.308 & 4938 & 2.31 & 1.34 & 243.81\\\\\n86007 & 201.22490 & -47.80442 & - & - & 13.024 & 12.487 & 12.387 & 4914 & 1.88 & 1.45 & 238.70\\\\\n86010 & 201.31217 & -47.80789 & 15.594 & 14.557 & 12.926 & 12.437 & 12.329 & 5115 & 2.29 & 1.35 & 238.05\\\\\n86017 & 201.56208 & -47.80760 & 16.289 & 15.452 & 13.737 & 13.319 & 13.232 & 5290 & 2.73 & 1.24 & 231.74\\\\\n87009 
& 201.61710 & -47.81630 & 16.081 & 15.199 & 13.392 & 12.885 & 12.850 & 5082 & 2.54 & 1.29 & 247.67\\\\\n88023 & 201.58521 & -47.82029 & 16.415 & 15.542 & 13.774 & 13.268 & 13.154 & 5050 & 2.66 & 1.25 & 232.87\\\\\n89009 & 201.57067 & -47.83291 & 13.776 & 12.650 & 10.497 & 9.890 & 9.753 & 4568 & 1.25 & 1.61 & 242.07\\\\\n89014 & 201.66544 & -47.83110 & 14.611 & 13.607 & 11.639 & 11.055 & 10.967 & 4774 & 1.75 & 1.49 & 231.57\\\\\n90008 & 201.22516 & -47.83980 & - & - & 13.209 & 12.703 & 12.591 & 5010 & 1.95 & 1.43 & 240.42\\\\\n90019 & 201.62529 & -47.83825 & 14.462 & 13.509 & 11.537 & 11.018 & 10.911 & 4860 & 1.75 & 1.48 & 232.73\\\\\n90020 & 201.64363 & -47.83814 & 16.305 & 15.563 & 13.858 & 13.395 & 13.292 & 5219 & 2.74 & 1.23 & 240.39\\\\\n93016 & 201.65058 & -47.86211 & 15.342 & 14.479 & 12.620 & 12.107 & 12.031 & 5015 & 2.22 & 1.37 & 230.82\\\\\n94011 & 201.30980 & -47.86480 & 15.217 & 14.151 & 12.462 & 11.911 & 11.842 & 4989 & 2.07 & 1.40 & 241.74\\\\\n95015 & 201.54907 & -47.87303 & 16.122 & 15.264 & 13.475 & 12.977 & 12.884 & 5076 & 2.56 & 1.28 & 239.10\\\\\n96011 & 201.52316 & -47.88203 & 13.954 & 12.975 & 11.027 & 10.514 & 10.416 & 4894 & 1.56 & 1.53 & 229.55\\\\\n98012 & 201.35549 & -47.89600 & 14.561 & 13.623 & 12.034 & 11.552 & 11.471 & 5210 & 1.96 & 1.43 & 229.93\\\\\n\\hline \n\\end{tabular}\n\\end{table*}\n\n\\begin{table*}\n\\caption{Stellar abundances}\n\\label{t3}\n\\centering\n\\begin{tabular}{lccccccccccccc}\n\\hline\n\\hline\nID & FeI & ${\\rm [FeI\/H]}$ & NaI & MgI & SiI & CaI & TiI & TiII & CrI & NiI & ZnI & YII & BaII\\\\\n\\hline \n00006 & 6.15 & -1.35 & 4.91 & 6.27 & 6.63 & 5.25 & 3.94 & 3.89 & 4.11 & 4.61 & 3.35 & 1.06 & 1.27\\\\\n08004 & 6.23 & -1.27 & 5.74 & 6.31 & 6.97 & 5.53 & 4.12 & 4.30 & 4.38 & 5.01 & 3.61 & 1.75 & 1.93\\\\\n10006 & 5.80 & -1.70 & - & - & - & 5.03 & 3.77 & 3.64 & 3.76 & - & 3.09 & - & 1.57\\\\\n10009 & 6.18 & -1.32 & 5.61 & 6.38 & - & 5.27 & 3.93 & 4.03 & 4.24 & 5.04 & 3.04 & 1.26 & 1.69\\\\\n10010 & 6.45 & -1.05 & 
5.38 & 6.71 & - & 5.65 & 3.91 & 4.02 & 4.65 & 5.07 & 3.42 & 1.86 & 2.08\\\\\n13006 & 5.93 & -1.57 & - & 6.31 & - & 5.06 & 3.88 & 3.61 & 4.04 & - & 2.95 & - & 0.37\\\\\n14002 & 6.02 & -1.48 & - & - & - & 5.25 & 3.90 & 3.72 & 4.00 & 5.19 & 3.28 & 0.61 & 1.15\\\\\n22007 & 6.43 & -1.07 & 5.80 & 6.88 & 6.85 & 5.61 & 4.07 & 4.17 & 4.51 & 5.09 & 3.54 & 1.80 & 2.00\\\\\n25004 & 6.14 & -1.36 & - & - & - & 5.44 & 4.09 & 3.81 & 3.95 & - & - & - & 1.08\\\\\n27008 & 6.52 & -0.98 & - & - & 7.29 & 5.60 & 4.30 & 4.24 & 4.91 & - & - & 1.86 & 2.71\\\\\n28009 & 6.29 & -1.21 & - & - & - & 5.53 & 4.20 & 4.29 & - & 4.97 & - & - & 0.65\\\\\n33006 & 6.07 & -1.43 & - & 6.38 & - & 5.30 & 4.10 & 4.27 & 4.32 & 4.73 & - & - & 1.43\\\\ \n34008 & 6.11 & -1.39 & 5.52 & 6.24 & - & 5.29 & 3.87 & 3.99 & 4.00 & 4.83 & - & 1.27 & 1.54\\\\\n38006 & 5.97 & -1.53 & - & 6.19 & - & 5.15 & 3.79 & 3.88 & 4.16 & 4.99 & 3.19 & 0.65 & 0.61\\\\\n39013 & 6.01 & -1.49 & - & 6.51 & 6.85 & 5.30 & 3.77 & 3.71 & 4.13 & 4.80 & - & 1.34 & 1.17\\\\\n42012 & 6.10 & -1.40 & 5.05 & 6.62 & 6.66 & 5.42 & 3.93 & 4.08 & 4.22 & 4.85 & 3.37 & 1.62 & 1.61\\\\\n43002 & 5.94 & -1.56 & 5.10 & 6.09 & - & 5.24 & 3.96 & 3.81 & 4.29 & - & - & 1.52 & 1.15\\\\\n45011 & 6.16 & -1.34 & 5.30 & 6.63 & 6.66 & 5.46 & 4.24 & 4.14 & 4.28 & 4.94 & - & 0.98 & 1.64\\\\\n45014 & 5.76 & -1.74 & - & 6.25 & - & 5.00 & 3.64 & 3.65 & 3.83 & - & - & - & 0.10\\\\\n46003 & 5.81 & -1.69 & - & 6.04 & - & 5.10 & 3.63 & 3.63 & 4.16 & 4.75 & 3.03 & 0.41 & 0.36\\\\\n48009 & 6.24 & -1.26 & 5.30 & 6.83 & - & 5.61 & - & - & 4.58 & 5.18 & 3.68 & 1.10 & 2.06\\\\\n49008 & 6.09 & -1.41 & - & 6.43 & - & 5.57 & 4.19 & 4.19 & 4.36 & 5.04 & 4.18 & 2.34 & 1.79\\\\\n51005 & 6.08 & -1.42 & - & 6.31 & - & 5.56 & 4.01 & 4.34 & 4.55 & 4.97 & 3.59 & 1.28 & 1.20\\\\\n57006 & 5.80 & -1.70 & 5.61 & - & - & 5.10 & 3.97 & 4.11 & 3.84 & - & 2.96 & 0.68 & 0.47\\\\\n61009 & 5.76 & -1.74 & - & 6.12 & 6.48 & 5.13 & 3.74 & 3.80 & 4.13 & 5.82 & - & 0.38 & 0.50\\\\\n76015 & 5.90 & -1.60 & - & - 
& - & 5.21 & 3.85 & 3.88 & 4.06 & - & 3.40 & 1.12 & 1.70\\\\\n77010 & 6.56 & -0.94 & 5.61 & 7.16 & 7.21 & 5.82 & 4.10 & 4.17 & 4.73 & 5.29 & 3.46 & 1.84 & 1.81\\\\\n78008 & 6.14 & -1.36 & - & - & - & 5.15 & 4.10 & 3.92 & 4.32 & 5.06 & - & 0.72 & 0.96\\\\\n80017 & 6.04 & -1.46 & - & - & - & 5.23 & 3.64 & 3.79 & 3.87 & - & - & - & 0.55\\\\\n82012 & 5.86 & -1.64 & 5.63 & - & - & 5.08 & 3.82 & 3.76 & 4.16 & - & 3.72 & 1.06 & 1.29\\\\\n85007 & 5.52 & -1.98 & - & 6.29 & - & 5.14 & 3.67 & 3.30 & 3.83 & 4.74 & - & 0.61 & 0.89\\\\\n85014 & 6.03 & -1.47 & - & - & - & 5.50 & 4.24 & 4.05 & 4.42 & - & - & 1.86 & 1.62\\\\\n85019 & 5.88 & -1.62 & - & 6.59 & - & 5.29 & 3.96 & 3.99 & 4.19 & - & 3.56 & 1.47 & 1.68\\\\\n86007 & 5.84 & -1.66 & 5.72 & 6.24 & 6.87 & 5.50 & 4.05 & 3.86 & 4.43 & 4.80 & 3.67 & 1.24 & 1.65\\\\\n86010 & 5.87 & -1.63 & - & - & - & 5.17 & - & - & - & - & - & - & 0.71\\\\\n86017 & 6.15 & -1.35 & 5.55 & - & 6.84 & 5.54 & 4.19 & 4.21 & 4.17 & 5.07 & 3.33 & 1.22 & 2.30\\\\\n87009 & 6.12 & -1.38 & - & 6.59 & - & 5.62 & 4.41 & 4.11 & 4.83 & 4.90 & 3.41 & 1.98 & 1.84\\\\\n88023 & 6.10 & -1.40 & 5.08 & - & - & 5.52 & 4.26 & 4.16 & 4.52 & 4.65 & - & 1.80 & 2.20\\\\\n89009 & 5.74 & -1.76 & - & 6.28 & - & 5.05 & 3.55 & 3.76 & 4.09 & 4.55 & - & 0.19 & 0.41\\\\\n89014 & 6.14 & -1.36 & 5.84 & - & 6.66 & 5.53 & 4.08 & 4.16 & 4.45 & 4.92 & - & 1.18 & 1.64\\\\\n90008 & 5.65 & -1.85 & 5.32 & 6.27 & - & 5.35 & 3.82 & 3.29 & 4.20 & - & 3.53 & 0.73 & 1.00\\\\\n90019 & 5.83 & -1.67 & - & - & - & 5.16 & 4.10 & 3.70 & 4.14 & - & - & 0.54 & 0.47\\\\\n90020 & 5.89 & -1.61 & - & - & - & 5.09 & 3.70 & 3.72 & - & - & - & 0.57 & 0.60\\\\\n93016 & 6.20 & -1.30 & - & 6.69 & - & 5.37 & 4.07 & 3.94 & 4.59 & - & - & 1.85 & 1.49\\\\\n94011 & 5.93 & -1.57 & - & - & - & 5.17 & 4.02 & 4.00 & 4.12 & - & - & 0.43 & 0.72\\\\\n95015 & 6.24 & -1.26 & - & 6.72 & - & 5.32 & 4.31 & 4.28 & 4.71 & 5.12 & 3.74 & 1.33 & 1.48\\\\\n96011 & 5.96 & -1.54 & 5.68 & - & - & 5.25 & 4.20 & 4.05 & 4.33 & 4.59 & - & 1.15 
& 1.70\\\\\n98012 & 5.85 & -1.65 & - & - & - & 4.98 & - & - & - & - & - & - & 0.68\\\\\n\\hline \nObs. lines & 30 & & 2 & 1 & 2 & 10 & 10 & 5 & 5 & 5 & 1 & 4 & 2\\\\ \n\\hline \n\\end{tabular}\n\\end{table*}\n\n\\begin{figure*}\n\\centering\n\\includegraphics[width=10cm]{figure3.eps}\n\\caption{Trend of Na and $\\alpha-$element abundance ratios as a function of\n [Fe\/H]. Mean values (continuous lines) are provided for those elements which do not show a \n sizable scatter. See also Table~3.}\n\\label{f3}\n\\end{figure*}\n\n\\begin{figure*}\n\\centering\n\\includegraphics[width=10cm]{figure4.eps}\n\\caption{Trend of abundance ratios for iron-peak elements (Ni and Cr),\nZn, Y, and Ba for the outer-region stars.\nMean values (continuous lines) are provided when there is no sizable scatter.}\n\\label{f4}\n\\end{figure*}\n\n\\subsection{The line-list}\n\nThe line-lists for the chemical analysis were obtained from the VALD\ndatabase (Kupka et al.\\ 1999) and calibrated using the Solar-inverse technique.\nFor this purpose we used the high-resolution, high-S\/N Solar spectrum\nobtained at NOAO ({\\it National Optical Astronomy Observatory}, Kurucz et\nal.\\ 1984). The EWs for the reference Solar spectrum were obtained in the same\nway as for the observed spectra, with the exception of the strongest lines, where\na Voigt profile integration was used. Lines affected by blends were rejected\nfrom the final line-list.\nMetal abundances were determined by the Local Thermodynamic Equilibrium (LTE)\nprogram MOOG (freely distributed by C. 
Sneden, University of Texas at Austin),\ncoupled with a solar model atmosphere interpolated from the Kurucz (1992) grids\nusing the canonical atmospheric parameters for the Sun: $\\rm {T_{eff}=5777}$ K,\n$\\rm {log(g)}=4.44$, $\\rm {v_{t}=0.80}$ km\/s and $\\rm {[Fe\/H]=0.00}$.\nIn the calibration procedure, we adjusted the line-strength value\nlog(gf) of each spectral line so as to bring the abundances obtained from\nall the lines of the same element to their mean value.\nThe chemical abundances obtained for the Sun and used in this paper as\na reference are reported in Tab.~\\ref{t1}.\n\n\\subsection{Atmospheric parameters}\n\nEstimates of the atmospheric parameters were derived from \nBVJHK photometry. Effective temperatures (T$_{\\rm eff}$) for each \nstar were derived from the T$_{\\rm eff}$-color relations\n(Alonso et al.\\ 1999, Di Benedetto 1998, and Ramirez \\& M\\'elendez 2005). \nColors were de-reddened using the reddening given by\nSchlegel et al. (1998); a value of E(B-V) = 0.134 mag was adopted.\n\nSurface gravities log(g) were obtained from the canonical equation:\n\n\\begin{center}\n${\\rm log(g\/g_{\\odot}) = log(M\/M_{\\odot}) + 4\\cdot\n log(T_{eff}\/T_{\\odot}) - log(L\/L_{\\odot}) }$\n\\end{center}\n\nFor M\/M$_{\\odot}$ we adopted 0.8 M$_{\\odot}$, which is the\ntypical mass of RGB stars in globular clusters.\nThe luminosity ${\\rm L\/L_{\\odot}}$ was obtained from the absolute\nmagnitude M$_{\\rm V}$, assuming an absolute distance modulus \nof (m-M)$_{\\rm 0}$=13.75 (Harris 1996), which gives an apparent distance\nmodulus of (m-M)$_{\\rm V}$=14.17 with the adopted reddening. \nThe bolometric correction ($\\rm {BC}$) was derived by adopting the\nBC-T$_{\\rm eff}$ relation from Alonso et al.\\ (1999). \\\\\nFinally, the microturbulence velocity ($\\rm {v_{t}}$) was obtained from the\nrelation (Marino et al. 
2008):\n\n\\begin{center}\n${\\rm v_{t}\\ (km\/s) = -0.254\\cdot log(g)+1.930}$\n\\end{center}\n\nwhich was obtained specifically for old RGB stars, as in our present sample.\nThe adopted atmospheric parameters for each star are reported in Tab.~\\ref{t2},\nin columns 9, 10, and 11.\nIn this Table, column 1 gives the ID of the star, columns 2 and 3 the\ncoordinates, columns 4--8 the B, V, J, H, K magnitudes, and column 12 the\nheliocentric radial velocity.\n\n\n\\subsection{Chemical abundances}\n\nThe LTE program MOOG introduced above was used to\ndetermine the abundances from the EWs, coupled with model atmospheres\ninterpolated from the Kurucz (1992) grids for the parameters obtained as described\nin the previous Section.\nThe wide spectral range of the UVES data allowed us to derive the\nchemical abundances of several elements. The chemical abundances\nwe obtained for individual stars are listed in Tab.~\\ref{t3}.\nThe last line of this table gives the mean number of lines we were able to \nmeasure for each element.\nTi is the only element for which we could measure lines of both the neutral\nand singly ionized species. The difference between the mean abundances obtained\nfrom the two ionization stages is:\n\\begin{center}\n${\\rm \\Delta(TiI-TiII)=0.03\\pm0.01}$\n\\end{center}\nThis difference is small and compatible with zero within 3 $\\sigma$. This confirms\nthat the gravities obtained from the canonical equation are not affected by appreciable\nsystematic errors.\n\n\\subsection{Internal errors associated with the chemical abundances}\n\nThe measured abundances of every element vary from\nstar to star as a consequence of both measurement errors and\nintrinsic star-to-star abundance variations.\nIn this section our goal is to search for evidence of intrinsic\nabundance dispersion in each element by comparing the observed\ndispersion $\\sigma_{\\rm {obs}}$ with that produced by internal errors\n($\\Delta_{\\rm {tot}}$). 
Clearly, this requires an accurate analysis of\nall the internal sources of measurement error.\nWe remark here that we are interested in the star-to-star intrinsic\nabundance variation, i.e. we want to measure the internal intrinsic\nabundance spread of our sample of stars. For this reason, we\nare not interested in external sources of error, which are systematic\nand do not affect relative abundances.\\\\\nIt must be noted that two main sources of error contribute\nto $\\Delta_{\\rm {tot}}$. They are:\n\\begin{itemize}\n\\item the errors $\\sigma_{\\rm {EW}}$ due to the uncertainties in the\n EW measurements;\n\\item the uncertainty $\\sigma_{\\rm {atm}}$ introduced by errors in the \n atmospheric parameters adopted to compute the chemical abundances.\n\\end{itemize}\n\n$\\sigma_{\\rm {EW}}$ is given by MOOG for each element and each star.\nIn the second column of Tab.~\\ref{t4} we report the average $\\sigma_{\\rm {EW}}$\nfor each element. For Mg and Zn we were able to measure only one line.\nFor this reason their $\\sigma_{\\rm {EW}}$ was obtained as the mean of the\n$\\sigma_{\\rm {EW}}$ of Na and Si multiplied by $\\sqrt 2$. \nThe Na and Si lines were selected because their strength is similar to that of the Mg\nand Zn features. This guarantees that the $\\sigma_{\\rm {EW}}$ due to photon\nnoise is the same for each single line.\n\nThe error in temperature is easy to determine because, for each star, it is\nthe r.m.s. of the temperatures obtained from the individual colors. The mean\nerror $\\Delta$T$_{\\rm eff}$ turned out to be 50 K.\nThe uncertainty on gravity was obtained by propagating the errors through the\ncanonical formula. The variables in this formula that are affected\nby random errors are T$_{\\rm eff}$ and the V magnitude. For the temperature we\nused the error obtained above, while for V we assumed an error of 0.1 mag,\nwhich is the typical random error for photographic magnitudes. 
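The gravity-error propagation just described can be sketched numerically. This is a minimal illustration, not the authors' actual calculation: it keeps only the explicit T$_{\rm eff}$ and V dependence of the canonical gravity equation (the smaller BC(T$_{\rm eff}$) term is neglected), and it shows both a linear and a quadrature combination of the two contributions, since the text does not state which was used.

```python
import math

T_eff = 4900.0   # representative mean T_eff (K), from the text
sigma_T = 50.0   # mean r.m.s. temperature error (K)
sigma_V = 0.1    # assumed photographic V-magnitude error (mag)

# log(g) = log(M/Msun) + 4*log10(Teff/Tsun) - log10(L/Lsun)
# partial derivatives: d log(g)/d Teff = 4/(Teff ln 10); d log(g)/d V = 0.4 (via M_bol)
err_T = 4.0 * sigma_T / (T_eff * math.log(10.0))
err_V = 0.4 * sigma_V

print(round(err_T + err_V, 2))             # conservative linear sum -> 0.06
print(round(math.hypot(err_T, err_V), 2))  # quadrature sum -> 0.04
```

The conservative linear sum lands on the 0.06 dex quoted for the mean gravity error.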
Other error\nsources (distance modulus, reddening, bolometric correction) affect\ngravity in a systematic way, so they are not important to our analysis.\nThe mean error in gravity turned out to be 0.06 dex. This implies, in turn, a mean error\nin microturbulence of 0.02 km\/s.\n \nOnce the internal errors associated with the atmospheric parameters were\ncalculated, we re-derived the abundances of two reference stars (\\#00006 and\n\\#42012), which roughly cover the whole temperature range of our sample,\nby using the following combinations of atmospheric parameters:\n\\begin{itemize}\n\\item ($\\rm {T_{eff}} \\pm \\Delta (\\rm {T_{eff}})$, $\\rm {log(g)}$, $\\rm {v_{t}}$)\n\\item ($\\rm {T_{eff}} $, $\\rm {log(g)} \\pm \\Delta (\\rm {log(g)})$, $\\rm {v_{t}}$)\n\\item ($\\rm {T_{eff}} $, $\\rm {log(g)}$, $\\rm {v_{t}} \\pm \\Delta (\\rm {v_{t}})$)\n\\end{itemize}\nwhere $\\rm {T_{eff}}$, $\\rm {log(g)}$, $\\rm {v_{t}}$ are the measures\ndetermined in Section 3.2.\n\nThe differences between the abundances obtained with the original\nparameters and those obtained with the modified ones give\nthe errors in the chemical abundances due to the uncertainty in\neach atmospheric parameter. They are listed in Tab.~\\ref{t4} (columns 3, 4,\nand 5) and are the average values obtained from the two stars. \nBecause the parameters were not obtained independently, we cannot\nestimate the total error associated with the abundance\nmeasurements by simply taking the quadratic sum of all the single errors.\nInstead, we calculated an upper limit for the total error as:\n\\begin{center}\n$\\rm {\\Delta_{tot}=\\sigma_{EW}+\\Sigma(\\sigma_{atm})}$\n\\end{center}\nlisted in column 6 of Tab.~\\ref{t4}.\nColumn 7 of Tab.~\\ref{t4} gives the observed dispersion.\nComparing $\\sigma_{\\rm obs}$ with $\\Delta_{\\rm tot}$ (which is an upper limit), we see\nthat for many elements (Mg, Si, Ca, Ti, Cr, Ni) we do not find any evidence of inhomogeneity\nover the whole Fe range. 
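The pieces of this error budget can be tied together in a short numerical check. This is only a sketch; the input numbers are the values quoted in the text and in the [BaII/FeI] row of Tab.~4.

```python
# Microturbulence error follows from the linear v_t(log g) relation of Sect. 3.2
sigma_logg = 0.06               # mean gravity error (dex)
sigma_vt = 0.254 * sigma_logg
print(round(sigma_vt, 2))       # -> 0.02 (km/s), as quoted

# Upper limit on the total internal error, [BaII/FeI] row of Tab. 4
sigma_EW, d_T, d_logg, d_vt = 0.14, 0.02, 0.03, 0.00
delta_tot = sigma_EW + d_T + d_logg + d_vt   # linear sum, not quadrature
print(round(delta_tot, 2))      # -> 0.19

# Observed dispersion far above the upper limit -> intrinsic Ba spread
sigma_obs = 0.50
print(sigma_obs > delta_tot)    # -> True
```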
Some others (Na, Zn, Y, Ba) instead show\nan intrinsic dispersion. This is also confirmed by Figs. \\ref{f3} and \\ref{f4}\n(see next Section).\nFinally, we mention the problem of differential reddening. \nSome authors (Calamida et al. 2005) claim that it is of the order of 0.03 mag,\nwhile others (McDonald et al. 2009) suggest a value lower than 0.02 mag.\nHowever, all those results concern the inner part of the cluster, while no information is\navailable for the region explored in this paper. \nWe can only say that an error of 0.03 mag on the reddening would alter\nthe temperature by about 90 K.\n\n\\begin{table*}\n\\caption{Internal errors associated with the chemical abundances\ndue to errors in the EW measurement (column 2) and in the atmospheric\nparameters (columns 3, 4, and 5) for the studied elements.\nThe 6$^{th}$ column gives the total internal error, while the last\ncolumn gives the observed dispersion of the abundances. See text for\nmore details.\n}\n\n\\label{t4}\n\\centering\n\\begin{tabular}{lcccccc}\n\\hline\n\\hline\nEl. 
& $\\sigma_{\\rm EW}$ & $\\Delta$T$_{\\rm eff}$ & $\\Delta$log(g) & $\\Delta$v$_{\\rm t}$ & $\\Delta_{\\rm tot}$ & $\\sigma_{\\rm obs}$\\\\\n\\hline \n${\\rm [FeI\/H]}$ & 0.05 & 0.05 & 0.01 & 0.02 & 0.13 & - \\\\\n${\\rm [NaI\/FeI]}$ & 0.12 & 0.02 & 0.01 & 0.02 & 0.17 & 0.34\\\\\n${\\rm [MgI\/FeI]}$ & 0.18 & 0.02 & 0.01 & 0.02 & 0.23 & 0.18\\\\\n${\\rm [SiI\/FeI]}$ & 0.15 & 0.03 & 0.01 & 0.02 & 0.21 & 0.12\\\\\n${\\rm [CaI\/FeI]}$ & 0.09 & 0.01 & 0.00 & 0.01 & 0.11 & 0.11\\\\\n${\\rm [TiI\/FeI]}$ & 0.14 & 0.04 & 0.01 & 0.01 & 0.20 & 0.19\\\\\n${\\rm [TiII\/FeI]}$ & 0.13 & 0.04 & 0.03 & 0.01 & 0.21 & 0.17\\\\\n${\\rm [CrI\/FeI]}$ & 0.12 & 0.03 & 0.01 & 0.01 & 0.17 & 0.17\\\\\n${\\rm [NiI\/FeI]}$ & 0.13 & 0.01 & 0.01 & 0.01 & 0.16 & 0.14\\\\\n${\\rm [ZnI\/FeI]}$ & 0.19 & 0.04 & 0.03 & 0.02 & 0.28 & 0.32\\\\\n${\\rm [YII\/FeI]}$ & 0.13 & 0.03 & 0.03 & 0.01 & 0.20 & 0.42\\\\\n${\\rm [BaII\/FeI]}$ & 0.14 & 0.02 & 0.03 & 0.00 & 0.19 & 0.50\\\\\n\\hline \n\\end{tabular}\n\\end{table*}\n\n\n\n\\section{Results of abundance analysis}\nThe results of the abundance analysis are shown in Fig.~\\ref{f2} for [Fe\/H],\nand in Figs.~\\ref{f3} and \\ref{f4} for all the abundance ratios we could derive.\nA Gaussian fit was used to derive the mean metallicity of the three peaks in\nFig.~\\ref{f2}. We found the following values: -1.64 (metal poor\npopulation, {\\it MPP}), -1.37 (intermediate metallicity population, {\\it IMP}), and -1.02\n(metal rich population, {\\it MRP}). 
Stars belonging to each of the three populations are\nidentified with different colors in Fig.~\\ref{f1}. The population mix is in the proportion \n({\\it MPP:IMP:MRP}) = (21:23:4).\\\\\n\n\\noindent\nThe abundance ratio trends versus [Fe\/H] \nare shown in the various panels of Figs.~\\ref{f3} and \\ref{f4} for all the elements we could measure.\nWhen the abundance ratio scatter is low (lower than 0.2 dex, which, according to\nthe previous Section, implies a homogeneous abundance) we also show the mean value\nof the data as a continuous line, to make the comparison with the literature easier.\nWhat we find in the outer region of $\\omega$~Cen is in basic agreement\nwith previous investigations. Comparing our trends with, e.g., the Norris \\& Da Costa (1995)\nvalues (see next Section for a more general comparison with the literature), \nwe find that all the abundance ratios we could measure are in very good\nagreement with that study, except for [Ti\/Fe], \nwhich is slightly larger in our stars, and [Ca\/Fe], \nwhich is slightly smaller in our study. However, within the measurement errors we do not\nfind any significant deviation.\\\\ \nThe $\\alpha-$elements (Mg, Ti, Si and Ca, see Fig.~\\ref{f3}) \nare systematically overabundant with respect to the Sun, \nwhile iron-peak elements (Ni and Cr, see Fig.~\\ref{f4}) are basically solar.\nY, Ba, and Zn are similarly overabundant, on average, with respect to the Sun (see Fig.~\\ref{f4}). \nThe Y abundance ratio shows some trend with [Fe\/H], but of the same sign and \ncomparable magnitude as in Norris \\& Da Costa (1995).\n\nFinally, we looked for possible correlations between abundance ratios, and compared\nthe outcome for the different populations of our sample. This was possible only for [Y\/Fe] and [Zn\/Fe]\nversus [Ba\/Fe], and is plotted in Fig.~\\ref{f5}. 
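This correlation search can be formalized as a least-squares slope with its standard error, flagging a trend as real when the slope deviates from zero by more than 3$\sigma$ (the criterion used for Fig.~5). The data points below are placeholders with a built-in positive trend, since the individual abundance pairs are not tabulated here:

```python
def slope_and_sigma(x, y):
    """Least-squares slope and its standard error (simple linear regression)."""
    n = len(x)
    xm, ym = sum(x) / n, sum(y) / n
    sxx = sum((xi - xm) ** 2 for xi in x)
    a = sum((xi - xm) * (yi - ym) for xi, yi in zip(x, y)) / sxx
    b = ym - a * xm
    rss = sum((yi - (a * xi + b)) ** 2 for xi, yi in zip(x, y))
    return a, (rss / (n - 2) / sxx) ** 0.5

# Hypothetical [Ba/Fe] (x) vs. [Y/Fe] (y) pairs, for illustration only
x = [-0.2, 0.0, 0.3, 0.5, 0.8, 1.0]
y = [-0.1, 0.05, 0.2, 0.3, 0.5, 0.6]
a, sigma_a = slope_and_sigma(x, y)
print(a / sigma_a > 3.0)   # -> True: such a trend would count as "real"
```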
For {\\it MPP} (filled circles) a trend\nappears both for Zn and Y as a function of Ba (see also value of the slope ($a$) in Fig.~\\ref{f5}), with\nBa-poor stars being also Zn and Y poor. Y-Ba correlation can be easily\nexplained because both are neutron-capture elements.\\\\\nAs for {\\it IMP}, a marginal trend in the Y vs. Ba relation is present,\nwhile no trend appears in the Zn vs. Ba. No trends at all were detected for\nMRP, mostly because our sample of {\\it MRP} stars is too small for any significant\nconclusion. We underline the fact that this different behaviour of {\\it MPP} and {\\it IMP}\nwith respect to their Zn-Y-Ba correlations points to a different chemical\nenrichment history of the two populations.\n\n\\section{Outer versus inner regions}\n\nA promising application of our data is the comparison of the population mix in the cluster outskirts\nwith the one routinely found in more central regions of the cluster (Norris \\& Da Costa 1995; Smith et al. 2000;\nVillanova et al 2007; Johnson et al. 2009; Wylie-de Boer et al. 2009).\\\\\n\nTo this aim, we firstly compute the fraction of stars\nin the various metallicity ([Fe\/H]) populations, and compare it with the inner\nregions trends from Villanova et al. (2007), for the sake of homogeneity,\nto statistically test the significance of their similarity or difference. \nWe are aware that this is not much more than a mere exercise.\nFirstly, while our program stars are mostly in the RGB phase, in Villanova et al (2007)\nsample only SGB stars are present. This implies that we are comparing stars in\nslightly different evolutionary phases.\nSecond, and more important, the statistics is probably too poor. \nIn fact, we report in Table~5 (column 2 and 3) the number of stars\nin the different metallicity bin derived from a Gaussian fit to our and Villanova et al. (2007)\ndata. They have large errors. We see that within these errors the population mix is basically the\nsame in the inner and outer regions. 
Therefore, with so few stars we cannot\neasily detect differences between the inner and outer regions.\nTo check this, we made use of the Kolmogorov-Smirnov statistic,\nand compared the metallicity distributions of the inner and outer samples, to see\nwhether they come from the same parent distribution. We found that the probability\nthat the two distributions derive from the same underlying distribution is 77$\\%$.\nThis is not a decisive value, and simply means that with these samples we can\nneither disprove nor confirm the null hypothesis (namely, that the two populations\nshare the same parent distribution).\nBesides, neither our sample nor that of Villanova et al. (2007) contains stars\nbelonging to the most metal-rich population of Omega Centauri (at\n[Fe\/H]$\\sim$-0.6), on which we therefore cannot comment.\\\\\n\n\\noindent\nThat clarified, we then compare in Fig.~6 and Fig.~7 the trends of the various elements we could\nmeasure (see Table~4) in the cluster outskirts with the trends found in the central regions by other\nstudies. In detail, in all Fig.~6 panels we indicate with filled circles the data\npresented in this study and with open circles the data from Villanova et al. (2007). Crosses indicate\nWylie-de Boer et al. (2009), stars Norris \\& Da Costa (1995), empty squares \nSmith et al. (2000) and, finally, empty pentagons Johnson et al. (2009).\nIn Fig.~6 we separate the elements which do not show significant scatter (see Table~4) from\nthe elements which do show a sizeable scatter (see Fig.~7).\nBa abundances from Villanova et al. 
(2007) were corrected by $\\sim$-0.3 dex,\nto take into account the hyperfine structure that seriously affects the Ba\nline at 4554 \\AA.\n\n\n\n\\begin{table}\n\\centering\n\\begin{tabular}{ccc}\n\\hline\nPopulation & Inner & Outer \\\\\n & $\\%$ & $\\%$ \\\\\n\\hline\nMPP & 46$\\pm$10 & 45$\\pm$10 \\\\\nIMP & 36$\\pm$10 & 47$\\pm$10 \\\\\nMRP & 18$\\pm$10 & 8$\\pm$10 \\\\\n\\hline\n\\end{tabular}\n \\caption{Percentages of the different metallicity populations in the inner and outer regions\nof $\\omega$~Cen.}\n\\label{t5}\n \\end{table}\n\n\n\\begin{figure}\n\\centering\n\\includegraphics[width=\\columnwidth]{figure5.eps}\n\\caption{\nAbundance ratios of [Y\/Fe] and [Zn\/Fe] vs. [Ba\/Fe] for our sample.\nFilled circles, open circles, and crosses are MPP, IMP, and MRP stars,\nrespectively. A straight line has been fitted to the MPP stars, and the value of its slope\n({\\it a}) is given. In both cases {\\it a} deviates by more than 3$\\sigma$ from\nthe null-trend value, implying that the trends for the MPP are real.\n}\n\\label{f5}\n\\end{figure}\n\n\n\n\\noindent\nLooking at Fig.~6, we immediately recognize two important facts.\\\\\nFirst, all the studies we culled from the literature for the Omega Cen central regions\nshow the same trends.\\\\\nSecond, and more important for the purpose of this paper,\nwe do not see any significant difference between the outer (filled circles)\nand inner (all the other symbols) regions of the cluster. Only Ti seems to be\nslightly over-abundant in the outer regions with respect to the more central ones.\n\n\n\\begin{figure}\n\\centering\n\\includegraphics[width=\\columnwidth]{figure6.eps}\n\\caption{Comparison of abundance ratios in the inner and\nouter stars (filled circles). Symbols are as follows: empty circles (Villanova et al. 2007), crosses \n(Wylie-de Boer et al. 2009), stars (Norris \\& Da Costa 1995), empty squares \n(Smith et al. 2000), empty pentagons (Johnson et al. 
2009).}\n\\label{f6}\n\\end{figure}\n\n\n\\begin{figure}\n\\centering\n\\includegraphics[width=\\columnwidth]{figure7.eps}\n\\caption{Comparison of abundance ratios in the inner\nand outer stars (filled circles). Symbols are as in Fig.~6.}\n\\label{f7}\n\\end{figure}\n\n\\begin{figure}\n\\centering\n\\includegraphics[width=\\columnwidth]{figure8.eps}\n\\caption{Comparison of our Y and Ba abundance ratios (our sample, filled\n circles) with the literature (inner sample). Symbols are as in Fig.~6.}\n\\label{f8}\n\\end{figure}\n\n\\noindent\nAs for the more scattered elements (see Fig.~7), we notice that Na shows the opposite trend\nin the outer regions with respect to the inner ones, but this is possibly related\nto a bias induced by the signal-to-noise ratio of our data, which does not allow us to detect\nNa-poor stars in the metal-poor population.\nOn the other hand, Y and Ba do not show any spatial difference. \\\\\n\n\\noindent\nInterestingly enough, at low metallicity Ba shows quite a significantly scattered\ndistribution, especially for stars more metal-poor than -1.2 dex, covering a\nrange of 1.5 dex. This behaviour is shared with Y and Na, although at a lower level.\nFurthermore, looking carefully at Fig. 4, it is possible to see a hint of\nbimodality in the Ba content of the stars having [Fe\/H]$<$-1.5 dex\n(i.e. belonging to the MPP), with the presence of a group of objects having\n[Ba\/Fe]$\\sim$1.0 dex, and another having [Ba\/Fe]$\\sim$-0.2 dex.\nThe same trend is visible in all the literature data.\\\\\nWe remind the reader that such a bimodal distribution is similar to that found by \nJohnson et al. (2009, their Fig.~8) for Al.\\\\\n\n\\noindent\nFinally, we compare our Y vs. Ba trend with the literature in Fig. 8. Also in this\ncase the agreement is very good and no radial trend is found.\\\\\n\n\\noindent\nThe stars studied by Wylie-de Boer et\nal. (2009) deserve special attention. 
They belong to the Kapteyn Group, \nbut their kinematics and chemistry suggest a likely association with $\\omega$ Cen.\nThese stars, in spite of being part of a moving group, exhibit \nquite a large iron abundance spread (see Figs. 6 and 7), fully compatible with that of\n$\\omega$ Cen. Also their Na and Ba abundances (see Fig. 7) are comparable with\nthose of the cluster. We suggest that the comparison with our data reinforces\nthe idea that the Kapteyn Group is likely formed by stars stripped away from $\\omega$ Cen.\n\n\\section{Conclusions}\nIn this study, we analyzed a sample of 48 RGB stars located half-way to the tidal\nradius of $\\omega$ Cen, well beyond any previous study devoted to the detailed chemical composition of the\ndifferent cluster sub-populations.\\\\\nWe compared the abundance trends in the cluster outer regions with literature studies which focus\non the inner regions of $\\omega$ Cen.\\\\\n\n\\noindent\nThe results of this study can be summarized as follows:\n\n\\begin{itemize}\n\\item we could not highlight any difference between the outer and inner regions\nas far as [Fe\/H] is concerned: the same mix of iron abundance populations is present\nin both locations;\n\\item most elements appear in the same proportions both in the inner and in the outer\nzone, irrespective of the particular investigation one takes into account for the comparison;\n\\item {[Ba\/Fe]} shows an indication of a bimodal distribution at low metallicity at any location\nin the cluster, which deserves further investigation;\n\\item no indications emerge of gradients in the radial abundance trends of the elements we could\nmeasure.\n\\end{itemize}\n\n\\noindent\nOur results clearly depend on a small data-set, and more extended studies are encouraged to confirm\nor refute our findings.\n\n\n\\section*{Acknowledgements}\nSandro Villanova acknowledges ESO for financial support during several visits to the 
Vitacura\nScience office. The authors express their gratitude to Kenneth Janes for carefully reading\nthe manuscript.\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nTransition Metal Dichalcogenides (TMD) have the generic formula MX$_2$, consisting of a transition metal M (Nb, Ta, Ti, Mo, W...) and a chalcogen X (S, Se, Te). They are layered materials with strong in-plane bonds and weak van der Waals inter-layer interactions, giving them a pronounced two-dimensional (2D) character. The individual layers consist of a triangular lattice of transition metal atoms surrounded by chalcogens, and come in two forms named 1T and 1H. In 1T layers the transition metal atoms are surrounded by six chalcogens in octahedral (O$_h$) coordination, whereas in 1H layers the six chalcogens are in trigonal prismatic (D$_{3h}$) coordination. \nThese two base layers have a wide range of possible stacking arrangements, called polytypes \\cite{Wilson1975AdvPhysCDWTMDReview} (e.g. see Fig.~\\ref{fig:2H3R}), which differ by the translation, rotation and ordering of the two base layers 1H and 1T. \nTMD polytypes are usually classified using Ramsdell's notation\\cite{ramsdell1947studies}, which specifies the number of layers in the unit cell followed by a letter to indicate the lattice type and, when necessary, an additional alphanumeric character to distinguish between stacking sequences. Thus, a 1T polytype has 1 layer in a trigonal unit cell while a 2H polytype has 2 layers in a primitive hexagonal unit cell.\nThis distinction is especially important for TMD as polytypes of the same TMD compound can have dramatically different electronic properties spanning from semiconducting to metallic or superconducting\\cite{Voiry2015TMD_polytypesynthesis_review}. 
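The Ramsdell convention described above is mechanical enough to be captured in a few lines of code. The following is an illustrative sketch only (not from the paper); the lattice-letter mapping follows the examples given in the text (1T trigonal, 2H hexagonal, 3R rhombohedral), and the lowercase suffix covers mixed stackings such as 4Hb mentioned later.

```python
# Illustrative sketch: parsing Ramsdell polytype symbols as described above.
# Mapping assumed from the text's examples: T = trigonal, H = hexagonal,
# R = rhombohedral; an optional lowercase suffix distinguishes stackings.
import re

LATTICE = {"T": "trigonal", "H": "hexagonal", "R": "rhombohedral"}

def parse_ramsdell(symbol):
    """Split a Ramsdell symbol into (layers per unit cell, lattice type,
    optional stacking-sequence suffix)."""
    m = re.fullmatch(r"(\d+)([THR])([a-z0-9]*)", symbol)
    if m is None:
        raise ValueError(f"not a Ramsdell symbol: {symbol!r}")
    layers, letter, suffix = m.groups()
    return int(layers), LATTICE[letter], suffix or None

print(parse_ramsdell("1T"))   # (1, 'trigonal', None)
print(parse_ramsdell("2H"))   # (2, 'hexagonal', None)
print(parse_ramsdell("4Hb"))  # (4, 'hexagonal', 'b')
```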
\n\nTMD recently attracted renewed interest because their quasi-2D nature is similar to that of graphene and the tunability of their electronic properties is promising for novel electronic devices\\cite{Wang2012TMDreviewnature,Vogel2015MRSBull}. In the case of metallic TMD, the 2D character and strong electron-phonon coupling make them prone to electronic orderings such as Mott insulator phases or charge density waves (CDW), as well as superconductivity\\cite{Wilson1975AdvPhysCDWTMDReview}. This multiplicity of possible ground states holds great technological potential. For instance, a new \\emph{orbitronics} concept has been proposed in TMD such as 1T-TaS$_2$, whereby switching between the orbital configurations and melting the CDW phase using ultrashort laser pulses would yield a complete and reversible semiconductor-to-metal transition\\cite{Ritschel2015Orbitronics}. \n\n\\begin{figure}\n\\includegraphics[width=\\columnwidth]{Figure1_1T_2H_3R90dpi.png}%\n\\caption{\\label{fig:2H3R} \\textbf{Crystallographic structures of the three known polytypes of NbS$_2$}: 1T reported only in thin film\\cite{Carmalt2004_1st_1TNbS2synthesis} or monolayer form\\cite{Chen20151TNbS2monolayerforH2}, and 2H and 3R found in bulk crystal form\\cite{Fisher1980NbS2difficultsynthesis}. In the 1T polytype, the transition metal atom is in octahedral coordination and layers are stacked without rotation or in-plane translation. The 2H and 3R polytypes are composed of 1H single layers with the metal atom in trigonal prismatic coordination, but they differ by their stacking: rotation and no in-plane translation for 2H, in-plane translation and no rotation for 3R. The unit cell is indicated by solid red lines.}\n\\end{figure}\n\nCDW are periodic modulations of the electronic density accompanied by a periodic distortion of the crystal lattice. 
CDW are usually caused either by a nesting vector of the Fermi surface inducing a peak in electronic susceptibility\\cite{gruner}, or by a strong k-dependent electron-phonon coupling\\cite{weber}. TMD are the first 2D compounds where CDW were observed\\cite{Wilson1974PRL}, and TMD appear even more prone to CDW in single-layer than in bulk form. For instance, in 2H-NbSe$_2$ the CDW transition temperature increases from 33\\,K in bulk to 145\\,K in monolayer form\\cite{Xi2015highTcdwNbSe2,CalandraPRB2009monolayerNbSe2}.\n\nHowever, among the metallic TMD, NbS$_2$ stands out as none of its polytypes have been reported to have a CDW. \n{In bulk form, only the 2H and 3R polytypes (trigonal prismatic coordination of Nb atoms) have been grown and CDWs have not been reported in either polytype\\cite{FriendYoffe1987}}.\nThe trigonal prismatic coordination was found to be thermodynamically stable in bulk by DFT calculations\\cite{LIU2014472}. The 1T polytype (octahedral coordination) has also been reported but only in single layer\\cite{Chen20151TNbS2monolayerforH2} or thin film form\\cite{Carmalt2004_1st_1TNbS2synthesis}, both also without CDW.\n\nIn the 2H polytype of NbS$_2$ that we study here, we previously showed that anharmonic effects prevent the formation of a CDW despite strong phonon mode softening\\cite{Leroux2012anharmonicNbS2}. 
Thus, 2H-NbS$_2$ is just on the verge of a CDW and DFT calculations also hint at the proximity of density wave instabilities\\cite{Leroux2012anharmonicNbS2, Guller2015DFTsdwNbS2, HeilPRL2017}.\nThe soft phonon modes do contribute to another electronic ordering: the metal-to-superconductor transition below $T_\\mathrm{c}= 6$\\,K\\cite{Wilson1975AdvPhysCDWTMDReview}, in which they are the dominant contributor to anisotropic two-gap superconductivity\\cite{Guillamon2008STMVortexcore,KACMARCIK2010S719,pribulova2010two,Diener2011TDONbS2,Leroux2012Hc1NbS2,HeilPRL2017}.\nYet, no other electronic phase has ever been found experimentally in 2H-NbS$_2$ using very pure crystals ($RRR=105$)\\cite{Naito1982ResistivityNbS2}, low temperature (100\\,mK)\\cite{Guillamon2008STMVortexcore}, or high pressure\\cite{JonesMorosinPRB1972pressureNbS2}. This is in contrast with the isoelectronic TMD 2H-NbSe$_2$ and 2H-TaS\/Se$_2$ which all have CDW\\cite{Wilson1975AdvPhysCDWTMDReview,Naito1982ResistivityNbS2}.\n \nHere we find that there are faint traces of CDW in 2H-NbS$_2$ using {diffuse x-ray scattering}. These CDW wavevectors are the same as those of the commensurate CDW in 1T-TaS$_2$ and 1T-TaSe$_2$. Such a 1T-like CDW has not been reported before for the NbS$_2$ compound. We suggest two mechanisms to explain both the symmetry and very small amplitude of the CDW we observe. Rotational stacking faults between 2H domains could locally resemble a 1T layer, or a very dilute amount of Nb in the van der Waals interlayer space could also present an octahedral coordination.\n\n\n\n\\section{Materials and methods}\n\nSingle crystals of 2H-NbS$_2$ were synthesized from an appropriate mixture of the elements that was sealed in an evacuated quartz tube. A large excess of sulfur was added to the mixture (20 \\%) to act as a transport agent and favor the formation of the 2H polytype. 
The tube was heated to 950$^\\circ$C for 240\\,h, slowly cooled down to 750$^\\circ$C and subsequently quenched to room temperature. This synthesis yielded a powder containing single crystals with lateral sizes exceeding 200\\,$\\mu$m as shown in Ref.\\cite{Diener2011TDONbS2}. Powder x-ray diffraction on several batches showed a predominance by volume of 99\\% of 2H polytype (P6$_3\/mmc$) versus 1\\% of 3R (R$3m$), and the polytype of each single crystal used was checked individually using x-ray diffraction. We find lattice parameters $a = b = 3.33\\,$\\AA~and $c = 11.95\\,$\\AA. Superconducting properties and phonon spectrum of samples from this batch were published elsewhere\\cite{Guillamon2008STMVortexcore,Kacmarcik2010CpNbS2,Diener2011TDONbS2,Leroux2012Hc1NbS2,Leroux2012anharmonicNbS2} and are in agreement with the literature\\cite{Onabe1978ResistivityNbS2,Naito1982ResistivityNbS2}. Typical superconducting transition temperature is $T_\\mathrm{c} = 6.05 \\pm 0.4$\\,K, as determined by AC specific heat\\cite{Kacmarcik2010CpNbS2}. \n\n{Diffuse x-ray scattering} imaging was performed at beamline ID29 at the ESRF at a wavelength of 0.6966 \\AA\\ (17.798 keV) and using a PILATUS 6M detector 200\\,mm away from the sample. 3600 pictures were acquired in three dimensions with 0.1$^\\circ$ oscillations and 0.25\\,s of exposure. Reconstruction of the $(h,k,0)$ plane was performed using CrysAlis software. Final reconstructions were made with locally developed software and Laue symmetry applied to remove the gaps between the detector elements. Inelastic x-ray scattering was performed at beamline ID28 at the ESRF using the Si (9,9,9) monochromator reflection giving an energy resolution of 2.6\\,meV and a photon energy of 17.794 keV. 
Measurements were performed at 300 and 77\\,K using a nitrogen cryostream cooler.\n\n\\section{Results}\n\n\\begin{figure}\n\\includegraphics[width=\\columnwidth]{Figure2_RT_77K.jpg}%\n\\caption{\\label{fig:RT_77K}\\textbf{{Diffuse x-ray scattering} of 2H-NbS$_2$.} $(h,k,0)$ plane at 300\\,K \\textbf{(left)} and 77\\,K \\textbf{(right)} showing the hexagonal Brillouin zone, {the reciprocal space base vectors $\\vec{a}^*$ and $\\vec{b}^*$, the high symmetry points $\\Gamma$, M and K, and the (1,0,0) and (2,0,0) Bragg peaks.} Elongated diffuse scattering is visible between Bragg peaks at 300\\,K, and increases in intensity at 77\\,K. It is caused by soft phonon modes and is not visible between each pair of Bragg peaks because of the longitudinal polarization of the soft phonon modes\\cite{Leroux2012anharmonicNbS2}.\nAt 77\\,K, three types of satellite peaks appear: a ring of twelve sharp peaks around each Bragg peak, a peak at the M point, and four peaks around the M point. One example from each of these three sets of satellite peaks is indicated by white circles in the right panel.}\n\\end{figure}\n\n\\begin{figure}\n\\includegraphics[width=\\columnwidth]{Graph1.pdf}%\n\\caption{\\label{fig:IXS}\\textbf{Typical IXS spectra to determine the elastic or inelastic nature of the satellite peaks.} IXS energy spectra at 77\\,K around the M point ($\\vec{q} = (0.5,0,0)$) show that the elastic peak at zero energy transfer (static order) is the dominant contribution to the peak observed at the M point in diffuse scattering (energy integrated intensity). But the amplitude of this elastic peak is still comparable to that of a soft phonon. An elastic peak corresponds to a static diffracting object in real space.}\n\\end{figure}\n\nFig.~\\ref{fig:RT_77K} shows the diffuse scattering in the $(h,k,0)$ plane reconstructed from diffuse scattering data, at 300 and 77\\,K. The Bragg peak amplitudes are saturated in these images. 
The rocking curve of the (1,1,0) spot of a crystal from the same batch has a Full-Width at Half Maximum (FWHM) of 0.12$^\\circ$ at room temperature, implying a Bragg peak FWHM of at most 0.0068\\AA$^{-1}$ or 0.0036 $a^*$, i.e. an in-plane coherence length of at least 276 unit cells. \n\nAt 300\\,K, some diffuse scattering can be seen spanning the length between the different $\\Gamma$M directions around each Bragg peak. This elongated diffuse scattering becomes salient at 77\\,K.\nIt is caused by the broad softening of phonon modes around 1\/3 of $a^*$ (2\/3 of $\\Gamma$M)\\cite{Leroux2012anharmonicNbS2}. \n\nAt 77\\,K, Fig.~\\ref{fig:RT_77K} also shows three types of satellite peaks: a peak at the M point, four peaks around the M point, and a ring of twelve peaks around each Bragg peak.\nInelastic x-ray scattering results, presented in Fig.~\\ref{fig:IXS}, show that these peaks are all of an elastic nature, i.e. reflections of a static order, but with an amplitude similar to that of the soft phonon modes. {Comparison to the (1,1,0) Bragg peak shows that these peaks are five orders of magnitude less intense}. Such a low intensity indicates that these peaks correspond either to very small atomic displacements or to displacements taking place in a very small fraction of the crystal.\n\n\\subsection{Satellite peak at the M point}\n\n\\textit{Ab initio} calculations\\cite{Leroux2012anharmonicNbS2} find that the satellite peak at the M point corresponds to a maximum in electronic susceptibility. Considering its phonon-like amplitude, the peak at the M point could correspond to Friedel oscillations around impurities. However, the peak FWHM is 0.036\\,$a^*$, as shown in the lower panel of Fig.~\\ref{fig:satpeakwidth}, which corresponds to a coherence length of about 30 unit cells. It therefore seems equally likely that the peak at the M point could correspond to an extremely faint CDW with a periodicity of $2\\,a$ induced by the maximum in electronic susceptibility. 
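The width-to-coherence-length conversions used above (a peak of FWHM $w$, in units of $a^*$, corresponds to a real-space coherence length of roughly $1/w$ unit cells) can be checked with a one-line numerical sketch. This is an illustrative estimate only, not the authors' analysis code; the input FWHM values are those quoted in the text.

```python
# Sketch of the width-to-coherence-length estimates quoted in the text:
# a peak of FWHM w (in units of a*) extends over roughly 1/w unit cells
# in real space.
def coherence_length_cells(fwhm_astar):
    return 1.0 / fwhm_astar

# Bragg peak: FWHM <= 0.0036 a*  ->  at least ~276 unit cells
print(round(coherence_length_cells(0.0036)))  # 278
# M-point satellite: FWHM = 0.036 a*  ->  about 30 unit cells
print(round(coherence_length_cells(0.036)))   # 28
# Ring satellite (discussed below): FWHM = 0.012 a*  ->  ~83 unit cells
print(round(coherence_length_cells(0.012)))   # 83
```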
Such faint $2\\,a$ super-lattice spots have also been reported in 2H-NbSe$_2$\\cite{CHEN1984}.\n\n\\begin{figure}\n\\includegraphics[width=\\columnwidth]{Figure3_1st2nd3rdorder_reduced.pdf}%\n\\caption{\\label{fig:qvector}\\textbf{Two interwoven commensurate superlattices.} Diffuse scattering of 2H-NbS$_2$ in the $(h,k,0)$ plane at 77\\,K.\nThe ring of twelve satellite peaks around each Bragg peak, and the four peaks around the M point can be indexed with two wavevectors. \n\\textbf{(Upper left panel)} Subset of peaks indexed by $\\vec{q_1} = \\frac{3}{13}\\,\\vec{a}^* + \\frac{1}{13}\\,\\vec{b}^*$, with $1^{\\mathrm{st}}$, $2^{\\mathrm{nd}}$ and $3^{\\mathrm{rd}}$ order reflections.\n\\textbf{(Upper right panel)} Both subsets of peaks indexed by $\\vec{q_1}$ in shades of red, and its mirror image $\\vec{q_2} = \\frac{4}{13}\\,\\vec{a}^* - \\frac{1}{13}\\,\\vec{b}^*$ in shades of blue.\n\\textbf{(Lower panel)} $\\vec{q_1}$ and $\\vec{q_2}$ are commensurate with the crystal lattice via $13\\,\\vec{q_1} = 3\\,\\vec{a}^* + \\vec{b}^*$. The wavevector length is $||\\vec{q_{1,2}}|| =\\frac{1}{\\sqrt{13}}||\\vec{a}^*||$ so that each defines a $\\sqrt{13}\\,a\\times\\sqrt{13}\\,a$ superlattice in real space. This is also geometrically equivalent to $3\\vec{q_1}-\\vec{q_1}'=\\vec{a}^*$, where $\\vec{q_1}'$ is $\\vec{q_1}$ rotated by $+120^\\circ$, which clearly appears in the upper panels.\n{Note that the upper panels show the same region as in Fig.~\\ref{fig:RT_77K}, and that the lower panel is an extended view centered on this same region.}\n}\n\\end{figure}\n\n\\subsection{Other satellite peaks}\n\nThe ring of twelve satellite peaks around each Bragg peak, and the four peaks around the M point can all be indexed with only two wavevectors. 
The left panel of Fig.~\\ref{fig:qvector} shows the wavevector $\\vec{q_1} = \\frac{3}{13}\\,\\vec{a}^* + \\frac{1}{13}\\,\\vec{b}^* \\approx 0.231\\,\\vec{a}^* + 0.077\\,\\vec{b}^* $ and its $1^{\\mathrm{st}}$, $2^{\\mathrm{nd}}$ and $3^{\\mathrm{rd}}$ order reflections. This wavevector corresponds to a deviation angle of $\\arctan\\left(\\frac{\\sqrt{3}}{7}\\right)\\approx13.9^{\\circ}$ from $\\vec{a}^*$. The right panel of Fig.~\\ref{fig:qvector} shows both $\\vec{q_1}$ in shades of red, and $\\vec{q_2} = \\frac{4}{13}\\,\\vec{a}^* - \\frac{1}{13}\\,\\vec{b}^* \\approx 0.308\\,\\vec{a}^* - 0.077\\,\\vec{b}^* $ in shades of blue.\n\nThese two wavevectors are mirror images of each other, with length $||\\vec{q_{1,2}}|| =\\frac{1}{\\sqrt{13}}||\\vec{a}^*||$. They thus correspond to two commensurate $\\sqrt{13}\\,a\\times\\sqrt{13}\\,a$ superlattices in real space.\nNote that the commensurate relation $13\\,\\vec{q_1} = 3\\,\\vec{a}^* + \\vec{b}^*$ is geometrically equivalent to $3\\vec{q_1}-\\vec{q_1}'=\\vec{a}^*$, where $\\vec{q_1}'$ is $\\vec{q_1}$ rotated by $+120^\\circ$. This clearly appears in the upper panels of Fig.~\\ref{fig:qvector} where the $3^{\\mathrm{rd}}$ order reflections from one Bragg peak coincide with the $1^{\\mathrm{st}}$ order reflections from another. \nThe presence of high-order reflections evidences either the long-range coherence associated with these peaks or the non-sinusoidal character of the atomic displacements. \nThe long-range coherence is also evidenced by the small width of the peaks as shown in the upper panel of Fig.~\\ref{fig:satpeakwidth}. The FWHM along $a^*$ of the satellite peak at (1.231, 0.077, 0) is 0.012 $a^*$ corresponding to a coherence length of $\\approx 83$ unit cells. {These sharp peaks along a* correspond to rods of scattering along c* with FWHM of $\\approx 0.5$\\,c* i.e. 
2 unit cells along the c-axis.}\n\n\\begin{figure}\n\\includegraphics[width=\\columnwidth]{Satpeakwidth3.pdf}%\n\\caption{\\label{fig:satpeakwidth}\\textbf{Width of the satellite peaks.} \\textbf{(Upper panel)} Cross-sections of the satellite peak at $(1,0,0)+\\vec{q_1}$, which is part of the ring of 12 peaks around each Bragg peak. \\textbf{(Lower panel)} Cross-sections of a satellite peak at the M point. Note that the differences between the cross-sections along and perpendicular to $a^*$ are due to the background of diffuse scattering (caused by soft phonons) which disappears rapidly perpendicular to $a^*$.}\n\\end{figure}\n\nThe ring of twelve satellite peaks also has an intensity that follows the same extinction pattern as the elongated diffuse scattering, suggesting that it corresponds to a static longitudinal modulation. Indeed, the very specific angles at which the diffuse scattering is extinguished show that the underlying soft phonons (which cause the diffuse scattering) are polarized longitudinally with in-plane niobium displacements\\cite{Leroux2012anharmonicNbS2}.\nIn more detail, the scattered intensity depends on the phonon polarization via the dynamical structure factor $G(Q,m)$~\\cite{Burkel_IXS}\n\\begin{equation}\nG(Q,m)=\\left|\\sum_{j}^{\\text{unit cell}} f_j(\\vec{Q}).\\mathrm{e}^{-W_j}\\left[\\vec{Q}.\\vec{\\epsilon_j}(\\vec{Q},m) \\right] \\sqrt{M_j} e^{i \\vec{Q}.\\vec{r_j}}\\right|^2\n\\label{eqn:polarisation_phonon}\n\\end{equation}\nwhere $f_j(\\vec{Q})$ is the atomic form factor of atom $j$ at $\\vec{r_j}$ with mass $M_j$; $\\vec{\\epsilon_j}(\\vec{Q},m)$ is the unit displacement vector of atom $j$ in phonon branch $m$ for a phonon wavevector $\\vec{Q}$; and $\\mathrm{e}^{-W_j}$ is the Debye-Waller factor of atom $j$.\nBecause $\\vec{Q}.\\vec{\\epsilon_j}(\\vec{Q},m)$ is zero for a phonon polarization perpendicular to $\\vec{Q}$ (see Fig.~11 in Ref.~\\citenum{Burkel_IXS}), these extinctions indicate that the soft phonons are 
longitudinally polarized.\n{We cannot distinguish the respective contribution of sulfur and niobium atoms to the longitudinal soft phonon modes in our data. However, the total scattered intensity is dominated by the contribution from niobium atoms as the mass and atomic form factor of niobium are larger than those of sulfur. This suggests that the displacements of the niobium atoms involved in the soft phonons would also be mostly longitudinal, i.e. in the ab plane.}\n\nThe commensurate wavevectors $\\vec{q_1}$ and $\\vec{q_2}$ are the same as those of the low temperature commensurate CDW in 1T-TaS$_2$\\cite{ScrubyPhilMag1975} (semiconducting 1T$_3$ phase) and 1T-TaSe$_2$\\cite{McMillanPRB1975,Wilson1974PRL}.\nIt is worth noting that this CDW is dominated by in-plane longitudinal displacements of Ta atoms in 1T-TaS$_2$\\cite{ScrubyPhilMag1975}. Also, only one set of 6 peaks around each Bragg peak is observed in the Ta-based TMD. {But here we observe that both sets of 6 peaks are equivalently present, evidencing two sets of triple-q CDW, most likely from twinning in the crystal.}\n\nWe therefore conclude that the ring of peaks we observe in 2H-NbS$_2$ is the trace of a faint longitudinal periodic lattice distortion, appearing between 77 and 300\\,K, and corresponding to two commensurate $\\sqrt{13}\\,a\\times\\sqrt{13}\\,a$ CDW identical to those found in the 1T Ta-based TMD. \nIn addition, as the commensurate CDW becomes incommensurate above 473\\,K in 1T-TaSe$_2$ and 190\\,K in 1T-TaS$_2$, this suggests the possibility of an incommensurate CDW in our crystal as well. As we observed no incommensurate peaks at 300\\,K, this incommensurate CDW would have to occur in a temperature range between 77 and 300\\,K.\n\nInterestingly, in 1T-TaSe$_2$, the thrice degenerate wavevector of the high temperature incommensurate CDW becomes commensurate with the lattice by a rotation of 13.9$^{\\circ}$ because it is not close enough to $1\/3\\,a^*$. 
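The commensuration relations above ($13\,\vec{q_1} = 3\,\vec{a}^* + \vec{b}^*$, $||\vec{q_1}|| = ||\vec{a}^*||/\sqrt{13}$, and the deviation angle $\arctan(\sqrt{3}/7)\approx13.9^{\circ}$) can be verified with a short numerical sketch. The sketch assumes, for illustration only, unit-length hexagonal reciprocal basis vectors at 60$^\circ$ to each other:

```python
import math

# Hexagonal reciprocal basis (unit length, 60 degrees apart) -- illustrative.
a_star = (1.0, 0.0)
b_star = (0.5, math.sqrt(3) / 2)

# q1 = (3 a* + b*) / 13, the commensurate CDW wavevector from the text.
q1 = ((3 * a_star[0] + b_star[0]) / 13, (3 * a_star[1] + b_star[1]) / 13)

# |q1| = |a*| / sqrt(13)  ->  a sqrt(13) a x sqrt(13) a superlattice.
norm_q1 = math.hypot(q1[0], q1[1])
print(abs(norm_q1 - 1 / math.sqrt(13)) < 1e-12)  # True

# Deviation from a*: arctan(sqrt(3)/7) ~ 13.9 degrees.
angle = math.degrees(math.atan2(q1[1], q1[0]))
print(round(angle, 1))  # 13.9

# Equivalent relation 3 q1 - q1' = a*, with q1' = q1 rotated by +120 degrees.
c, s = math.cos(2 * math.pi / 3), math.sin(2 * math.pi / 3)
q1p = (c * q1[0] - s * q1[1], s * q1[0] + c * q1[1])
print(abs(3 * q1[0] - q1p[0] - 1.0) < 1e-12)  # True
print(abs(3 * q1[1] - q1p[1]) < 1e-12)        # True
```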
{Indeed, according to the Landau theory of CDW in TMD\\cite{McMillanPRB1975}, this commensuration by rotation is a feature of the 1T polytype, whereas in the 2H polytype the CDW locks in with 1\/3 of $a^*$\\cite{McMillanPRB1975} (2\/3 of $\\Gamma$M). While we cannot preclude that the CDW we observe is a bulk phenomenon native to the 2H polytype, this would be the first $\\sqrt{13}\\,a\\times\\sqrt{13}\\,a$ CDW in a 2H TMD to our knowledge. In addition, the very short coherence length of the CDW peaks along c* supports the picture of a CDW occurring almost independently in each layer of the crystal. Therefore, we also consider the possibility that this CDW originates from a local 1T-like environment in a 2H crystal and we now discuss the possible origins of such an environment.}\n\n\\section{Discussion}\nIn TMD, the dominance of trigonal prismatic (1H) or octahedral (1T) coordination can be classified by the transition metal atom. There are three typical cases. In the first case, as with titanium (Ti), the coordination is generally octahedral so that the dominant polytype is 1T. In the second case, as with niobium (Nb), the coordination is usually trigonal prismatic so that 2H or 3R polytypes are favored. In the third case, as with tantalum (Ta), both coordinations have similar energies, in which case various polytypes can be synthesized: 1T, 2H and mixed stackings of 1T and 1H layers such as 4Hb.\\cite{FriendYoffe1987}\n\nIn NbS$_2$, the 3R polytype is the thermodynamically stable phase at room temperature\\cite{Fisher1980NbS2difficultsynthesis}. The 2H polytype can also be synthesized at room temperature by quenching from $\\approx1000$\\,K. 
As for the 1T polytype of NbS$_2$, it has never been synthesized in bulk crystal form, but it can be stabilized by strain in thin film\\cite{Carmalt2004_1st_1TNbS2synthesis} or monolayer\\cite{Chen20151TNbS2monolayerforH2} forms.\n\nLooking at the 2H structure in Fig.~\\ref{fig:2H3R}, we emphasize that it has two possible rotational positions for each 1H layer, separated by a 60$^\\circ$ rotation around the c-axis. This rotational position alternates between successive 1H layers, so that, once an origin is given, the rotational positions of all 1H layers are fixed in an ideal crystal. In real crystals of the 2H structure, especially if synthesized by quenching, this opens up the possibility of rotational domains, where each domain has a different origin of the rotational positions. \n\nMost interestingly, at the junction of two rotational domains, there should be two 1H layers in the same rotational position stacked one onto the other (i.e. a locally 1H polytype), where the sulfur atoms are facing each other. This locally 1H polytype seems a priori unstable because of the geometrical repulsion between sulfur atoms in adjacent layers. In fact, such stacking of sulfur atoms does not occur in any of the three known polytypes 1T, 2H, or 3R and there is no known purely 1H polytype of NbS$_2$. Energetically, it seems much more likely that one of the sulfur layers will move such that the sulfur atoms of one layer face the center of a triangle of sulfur atoms in the other layer. This reduces the geometrical repulsion, which brings the layers closer together and increases the orbital overlaps and van der Waals interactions. 
There are, however, several ways to displace the sulfur layer.\n\nOne way, which does not involve changing the coordination of the Nb atoms, is for one of the 1H layers to slide by $(\\frac{1}{3}, \\frac{2}{3}, 0)$ or $(\\frac{2}{3}, \\frac{1}{3}, 0)$, yielding a locally 3R structure (which is non-centrosymmetric, hence the two possible sliding vectors). Such 3R-like stacking faults have actually been studied before in 2H-NbS$_2$. That study\\cite{Katzke2002} concluded that 15\\% of 3R-like stacking faults are present in powder samples of 2H-NbS$_2$ {(i.e. any two adjacent layers have a 15\\% chance of having a faulty stacking)}.\nWe performed a similar analysis in our sample and found the presence of 18\\% of 3R-like stacking faults.\\cite{lerouxHALThesis}\n\nA second way the sulfur layer can move to reduce geometrical repulsion at the junction between domains is a rotation by 60$^\\circ$ around the c-axis. This changes the coordination of the Nb atom from trigonal prismatic to octahedral, and yields a single purely 1T layer. To some extent, this is similar to thin films and monolayers of 1T-NbS$_2$, where 1T layers are stabilized by strain at interfaces. This single 1T layer is only three-fold symmetric and can occur in two types which are mirror images of each other (or, equivalently, rotated by 60$^\\circ$). The junctions between domains would yield both types equiprobably. This would naturally explain the presence of both wavevectors $\\vec{q_1}$ and $\\vec{q_2}$ yielding two $\\sqrt{13}\\,a\\times\\sqrt{13}\\,a$ superlattices, instead of only one in pure 1T-TaS$_2$ and 1T-TaSe$_2$.\n\nTo our knowledge, this type of 1T-like stacking fault has not been studied before. In fact, considering that it involves a change of coordination of the Nb atom, a 1T-like stacking fault seems energetically costlier than the 3R-like stacking fault considered above. We can therefore expect that the 1T-like stacking fault occurs less frequently than the 3R-like ones. 
Yet, if the 1T-like CDW we observed in x-rays occurs only at such rare 1T-like stacking faults, it would explain why the CDW x-ray peaks are so faint.\n\nFinally, another explanation for the presence of a local 1T-like environment could be based on the presence of small clusters of extra Nb atoms intercalated in the van der Waals gap between layers. Indeed, Meerschaut and Deudon\\cite{Meershault01} have reported that the 3R-NbS$_2$ phase is favored by an overdoping of Nb. This extra Nb is placed in the van der Waals gap between two layers of Nb in trigonal prismatic coordination. Locally, the extra Nb atom is surrounded by 6 chalcogen atoms in octahedral coordination. Because of the Nb-Nb repulsion, this extra Nb atom is slightly shifted from the center of the octahedron \\cite{Meershault01}. Thus, in our NbS$_2$ crystal, a local 1T-like environment could be associated with a small amount of extra Nb with a local octahedral coordination lying in the 3R-like stacking fault.\n\n\\section{Conclusions}\nUsing {diffuse x-ray scattering} in 2H-NbS$_2$, we observed very weak superlattice peaks corresponding to two longitudinal commensurate $\\sqrt{13}\\,a\\times\\sqrt{13}\\,a$ periodic lattice distortions, identical to those associated with the CDW of 1T-TaSe$_2$ and 1T-TaS$_2$. Around each Bragg peak in the $(h,k,0)$ plane, we found a series of 12 satellite peaks at $\\pm$ 13.9$^{\\circ}$ from $\\vec{a}^*$ and $\\vec{b}^*$, commensurate with the lattice through $3\\vec{q_1}-\\vec{q_1}'=\\vec{a}^*$ or, equivalently, $13\\,\\vec{q_1} = 3\\,\\vec{a}^* + \\vec{b}^*$. 
Inelastic x-ray scattering (IXS) measurements confirmed the predominantly elastic nature of these satellite peaks, but the amplitude of these peaks is almost as faint as that of soft phonons.\nTo our knowledge, no CDW has been reported in any polytypes of NbS$_2$.\nWe suggest that rotational disorder in the stacking of 1H layers, induces 3R-like stacking fault and, less frequently, single 1T layers at the interface between 2H rotational domains. Such rare and dilute 1T layers might be the support of this faint 1T-like CDW. A very dilute amount of Nb in the van der Waals interlayer space of 3R-like stacking fault could also present a 1T-like octahedral coordination.\n\\section{Acknowledgments}\nM.L. acknowledges S. Eley for fruitful discussions.\nThis work was supported by the Neel Institute CNRS - University of Grenoble. We acknowledge the European Synchrotron Radiation Facility for provision of synchrotron radiation facilities. The experiments were performed on beamline ID28 and ID29.\n\n\\section*{References}\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introducci\u00f3n}\nEn rob\u00f3tica, se conoce como exploraci\u00f3n al proceso mediante el cual se busca aumentar la informaci\u00f3n respecto del entorno del robot para construir un modelo del ambiente que lo rodea.\nSe aplican los algoritmos de exploraci\u00f3n cuando se necesita conocer el estado de una locaci\u00f3n a la que no pueden acceder personas o hay un riesgo latente o manifiesto en su ingreso. 
\nFor example, in collapsed structures, or structures at risk of collapse, during the search for survivors.\nIn these circumstances the most widely used robots are of reduced dimensions, owing to their ability to pass through small openings.\nDue to these geometric constraints, the computing capacity of these robots is also limited.\n\nThe models generated by these robots are useful for navigating such environments, where navigation is understood as the task of taking the robot from one point to another safely while avoiding obstacles.\nThe models can be metric, which attempt to represent or reproduce the geometry of the environment, or topological, which describe the spatial relationship between different environments.\n\n\nRegarding metric models, for years one of the most popular ways of representing the environment was the occupancy grid \cite{moravec_1985}.\nThis method represents a two-dimensional environment with a plane divided into cells forming a grid.\nThe grid cells are of regular size and each one holds a value representing the probability of being occupied, based on probabilistic models of the sensors involved.\nThree-dimensional derivations of this work using cubes called \textit{voxels}, presented in works such as \cite{moravec1996} and \cite{dryanovski2010}, proved to be memory-inefficient for storing the map due to its large size.\n\n\begin{figure}[tp!]\n\centering\n \includegraphics[width=0.45\textwidth]{images\/jetson_nano.png}\n \caption{Setup used for the tests in a real scenario}\n \label{fig:pruebas_real}\n\end{figure}\n\n\IEEEpubidadjcol\n\nMore efficiently, the Octomap \textit{framework} \cite{hornung2013} uses a tree-shaped structure with eight nodes, where each node starts with a voxel that is divided 
successively into another eight until the desired resolution is reached.\nThis structure is called an \textit{octree} and was first proposed in \cite{doctor1981}.\nThe value held in a voxel can be binary, a probability value that may be based on different criteria, or a variant with a bound applied to a probability density.\nAlthough Octomap makes better use of memory, accessing each element has a higher computational cost than in the occupancy grid.\n\nTo improve access to each element, \cite{museth2013vdb} proposes a variant of B+ trees, generally used in file systems, called VDB for \textit{Volume Dynamic B+tree}.\nThis topology can model a \textit{virtually infinite} three-dimensional index space that allows fast access to sparse information.\nAdditionally, the VDB implementation imposes no topology restrictions on the input data and supports fast random access, insertion, and deletion patterns, on average $\mathcal{O}(1)$.\nThe implementation of this data structure is known as OpenVDB (OVDB) and is presented in \cite{museth2019}.\n\nExploiting the features of OVDB, \cite{stvl2020steve} presents a library called STVL, for \textit{Spatio-Temporal Voxel Layer}, which implements a series of \textit{buffers} that store point clouds coming from depth cameras or other sources capable of generating this type of data.\nThese point clouds are encoded using OpenVDB, efficiently forming three-dimensional maps.\nDue to the resolution and number of sensors, the clouds are usually composed of millions of points, so it is necessary to \textit{compress} them by decimating the cloud according to some criterion.\nIn \cite{stvl2020steve} this is done with a \textit{voxel 
filter}, available in the \textit{Point Cloud Library} \cite{Rusu_ICRA2011_PCL}, which runs on the CPU.\n\nThis work presents an implementation of STVL capable of running on the GPU of NVIDIA's Jetson Nano development platform, using the setup of Fig.~\ref{fig:pruebas_real}.\nThis platform was chosen for its small size and high energy efficiency, which enables its use in small robots, whether aerial or ground-based.\nAdditionally, this platform has the CPU and the GPU on the same chip, so they physically share the memory.\nThis avoids redundant copies of memory blocks, for example an image, between the CPU and the GPU.\nThis approach is known by NVIDIA as \textit{Zero-Copy}.\nSpecifically, a variant of \cite{bkedkowski2013general} implemented in CUDA is presented to perform the compression of the points.\nThis compression, or filtering, takes advantage of the \textit{Zero-Copy} access available on the Jetson Nano platform.\nTo the best of the authors' knowledge, there is no other implementation that can run on this platform.\n\nThe paper is organized as follows: Section~II describes the \textit{software} tools and \textit{hardware} platforms used in this work, as well as the proposed modification to the algorithm so that it can run on the Jetson Nano platform. The scenarios where the implementation was tested are also described. Section~III shows the results obtained on the different platforms and scenarios, with three voxel sizes. 
The conclusions of this work and future lines of research are listed in Section~IV.\n\n\section{Materials and Methods}\n\nTo evaluate the proposed algorithm, tests were carried out in a simulated environment and in a real scenario.\nBoth scenarios have similar characteristics: static and structured.\nIn addition, the computation time of the original library and the proposed one was compared on different platforms.\n\n\subsection{Software tools used}\nThe implementation was built on the ROS \textit{framework} (Robot Operating System) \cite{quigley2009ROS}, using the version codenamed Melodic Morenia.\nThis allows quick integration of the method under evaluation with existing algorithms widely used in robotics.\nThe proposed improvement is a variant of the PCL point filtering method~\cite{pcl} used by the STVL library available at \cite{stvl}.\nThe simulation stage was carried out with the Gazebo environment \cite{koenig2004design}, version 9, which can be downloaded from \cite{gazebo} and integrated with ROS.\n\n\subsection{Hardware platforms used}\nThe platform chosen for the tests was NVIDIA's Jetson Nano.\nIt is a board of small dimensions, only 70mm long and 45mm wide, which makes it possible to use it in small robots.\nIt features a Maxwell-architecture GPU with 128 CUDA cores that can run up to 2048 threads, and its main processor is a quad-core ARM Cortex-A57.\nThe board has 4GB of 64-bit LPDDR4 memory with a theoretical maximum bandwidth of 25.6GB\/s.\nWith all this \textit{hardware} it can reach 472GFLOPS of FP16 compute performance, with 5-10W of power consumption.\nAs mentioned before, the CPU and the GPU are in the same 
package and physically share the same system memory, supporting a limited form of \textit{unified memory}.\nThis architecture enables CPU-GPU communication that is not possible on discrete GPUs.\nIn this way, copying each point is avoided and it is only necessary to pass its memory address to the CUDA functions, which translates into a reduction in time and space used.\nIt is worth noting that this implementation is in itself an improvement over the original algorithm presented in \cite{bkedkowski2013general}, since it was originally designed for NVIDIA Fermi and Kepler GPU architectures, which are earlier generations and do not have unified memory.\n\nThe algorithm was also evaluated on a desktop PC with a Ryzen 7 1700 processor and an NVIDIA GTX 1660 Super GPU.\nThe latter has 6GB of GDDR6 video memory, with a theoretical maximum bandwidth indicated by the manufacturer of up to 336GB\/s.\nThis card belongs to the Turing microarchitecture family, which has unified memory technology.\nOn the downside, the memory is separate from the CPU, so the information has to be copied through the PCI Express bus.\n\nTo generate the point cloud in the real scenario, the Intel RealSense Depth Camera D435 RGB-D camera was used.\nThis camera consists of a stereo pair capable of determining the distance to the sensor of the points within its field of view.\nIt also has an infrared projector to improve the 3D image on walls lacking salient features.\n\n\subsection{Proposed algorithm}\nIn the algorithm presented in \cite{stvl2020steve}, the point clouds coming from the RGB-D cameras are fed into observation \textit{buffers}, which store the relative position of the measurement 
with respect to a global coordinate system.\nThese measurements can then be used for two purposes: marking operations, in which a cell is designated as occupied, or clearing operations, where a cell is marked as free.\n\nDue to the parallel implementation of the program, there are extra \textit{buffers} dedicated to each of these operations.\nThe received point clouds, being in the camera's reference frame, have to be transformed to the global coordinate system.\nThis transformation from one reference frame to another is performed with the \texttt{tf2} library \cite{foote2013}, which carries out transformations between the multiple reference frames present on the robot over time.\n\nBy analyzing the time spent in each of the program operations mentioned above, the parts of the algorithm with the largest delays were identified.\nAs can be seen in Fig.~\ref{fig:tiempo_algoritmos}, the most computationally expensive operations are the filtering of the point cloud, called \texttt{filter}, and the marking\/clearing operation, called \texttt{ClearFrustums}.\nThe latter involves access to the cells of the global map, using the methods provided by the OpenVDB library.\n\n\n\nThese point clouds can be very dense, on the order of millions of points, and since they are used to mark cells in the global map, it is necessary to compress them.\nIn \cite{stvl2020steve} this is done with a \textit{voxel} filter, available in the PCL library.\nThis filter takes a point cloud as input, which has to be in the global reference frame so that the height of the point cloud can be limited.\nThis limit on the $\mathbf{z}$ axis allows the 
ROS \texttt{navigation\_stack} \cite{tbd} to project the point cloud onto the horizontal plane and generate an occupancy map used in robot navigation.\nThe filter available in the PCL library is computed on the CPU and, to the best of the authors' knowledge, there is no other implementation that can run on this platform.\n\nOn the other hand, CUDA algorithms for point cloud filtering are normally implemented on desktop GPUs with large memory capacity.\nThis makes them unattractive for embedded platforms due to their limited memory capacity, generally shared with the CPU.\n\n\begin{figure}[bt]\n\centerline{\includegraphics[width=0.5\textwidth]{images\/tiempos_funciones_pcl.png}}\n\caption{Processing times of the most computationally expensive parts of the algorithm.}\n\label{fig:tiempo_algoritmos}\n\end{figure}\n\n\nA modified version of \textit{Cubic Subspaces - Neighboring Buckets} \cite{bkedkowski2013general} was implemented in CUDA, using the \textit{Zero-Copy} mechanism available on platforms such as the Jetson Nano.\nThe main idea is to use the GPU to decompose the 3D space into a regular grid of $2^n \times 2^n \times 2^n$ cubes $(n = 4,5,6,7,8,9)$. 
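A minimal Python sketch of this bucketing scheme (an illustrative serial version, not the CUDA implementation; all function names are ours): points with coordinates normalized to $[-1,1]$ are hashed into a $2^n \times 2^n \times 2^n$ grid, and neighbor searches only inspect the buckets adjacent to a point's own bucket.

```python
# Illustrative sketch of "Cubic Subspaces - Neighboring Buckets":
# hash normalized points into a 2^n x 2^n x 2^n grid, then search
# only the 27 surrounding buckets for neighbors.
from collections import defaultdict
from math import floor


def bucket_index(p, n):
    """Map a point with coords in [-1, 1] to integer grid coords in [0, 2^n)."""
    size = 2 ** n
    return tuple(min(size - 1, int(floor((c + 1.0) / 2.0 * size))) for c in p)


def build_buckets(points, n):
    """Group point indices by the grid cell (bucket) they fall into."""
    buckets = defaultdict(list)
    for i, p in enumerate(points):
        buckets[bucket_index(p, n)].append(i)
    return buckets


def dist(p, q):
    """Euclidean distance between two 3D points."""
    return sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5


def count_neighbors(points, buckets, i, n, radius):
    """Count points within `radius` of point i, scanning only 27 buckets."""
    bx, by, bz = bucket_index(points[i], n)
    size = 2 ** n
    count = 0
    for dx in (-1, 0, 1):
        for dy in (-1, 0, 1):
            for dz in (-1, 0, 1):
                cell = (bx + dx, by + dy, bz + dz)
                if all(0 <= c < size for c in cell):
                    for j in buckets.get(cell, []):
                        if j != i and dist(points[i], points[j]) <= radius:
                            count += 1
    return count
```

In the CUDA version described in the text, the per-point work inside `count_neighbors` is what each thread performs in parallel, with the bucket table kept in GPU global memory.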
Therefore, for each point, only 27 cubes $(3^3)$ are considered to find the nearest neighbors.\nTo compute the distance between two points $p_1=(x_1,y_1,z_1)$ and $p_2=(x_2,y_2,z_2)$, the Euclidean distance is used, defined as: %\n{\n\setlength{\abovedisplayskip}{1pt}\n\setlength{\belowdisplayskip}{15pt}\n\begin{equation}\nd(p_1,p_2) = \Big[ (x_2-x_1)^2+(y_2-y_1)^2+(z_2-z_1)^2\Big]^{\frac{1}{2}}\n\end{equation}%\n}\nwith $x$, $y$, $z \in \mathbb{R}$ and $d(p_1,p_2) \in \mathbb{R}_{\geq 0}$.\nEach point of the three-dimensional space $XYZ$ is normalized such that $\left\{x,y,z\in \mathbb{R}: -1 \leq x,y,z \leq 1\right\}$.\nThe points are then classified with a decision tree to determine which subdivision of the $2^n \times 2^n \times 2^n$ space they belong to.\n\nThe number of points belonging to each subdivision is determined with a table (\texttt{tabla\_subdiv}) of ``key-value" pairs sorted with the well-known ``Radix Sort" algorithm.\nIt is important to note that this table is stored in the GPU's global memory, so all CUDA threads can access the data.\nThe ``key-value" pairs, together with the information on the number of points in each subdivision, are accessible through the GPU memory and are used to search for the nearest neighbor in the algorithm.\n\n\begin{figure*}[ht]\n \includegraphics[width=\textwidth,height=4cm]{images\/superposition.png}\n \caption{Sequence of images of the environment used for the tests. Left: point cloud generated by the RGB-D camera. Center: 3D occupancy grid generated by the STVL algorithm. Right: point cloud and map superimposed.\n }\n \label{fig:grilla_3d_generada}\n\end{figure*}\n\n\begin{algorithm}[ht]\n\caption{3D point filtering}\n\label{alg:filtrado}\n\begin{algorithmic}[1]\n\n\REQUIRE Pointer to the camera point cloud\n\ENSURE Pointer to the filtered point cloud\n\n\STATE copy the pointer from host to device\n\STATE call the CUDA function\n\FOR {all points $m^i_{xyz}$ in parallel}\n\STATE find $subdiv_{m^i}$\n\STATE update \texttt{tabla\_subdiv}\n\ENDFOR\n\STATE sort \texttt{tabla\_subdiv} in parallel $\{$radix sort$\}$\n\n\n\WHILE {the number of marked points $>$ 1000 $\{$one CUDA kernel per point $m_{xyz}$ $\}$}\n\FOR {all points $m^i_{xyz}$ in parallel}\n\STATE find $subdiv$\n\FOR {all neighboring $subdiv$}\n\STATE count the number of neighbors $\{$taking into account the points marked for deletion$\}$\n\ENDFOR $\{$one CUDA kernel per point$\}$\n\STATE mark $m^i_{xyz}$ for deletion if \texttt{cont} $>$ \texttt{umbral}\n\ENDFOR $\{$one CUDA kernel for all points$\}$\n\IF {number of marked points $>$ 1000 }\n\STATE randomly choose 1000 marked points and delete them permanently\n\ENDIF\n\ENDWHILE\n\n\n\STATE synchronize the kernel calls\n\STATE delete the marked points\n\STATE copy the pointer from device to host\n\end{algorithmic}\n\end{algorithm}\n\n\n\nThe goal of the filtering is to remove points in order to reduce the density of the point cloud and, at the same time, remove measurement noise coming from the 
depth camera.\nOnce the nearest neighbors have been computed by the method described above, the subdivisions of the space are iteratively reduced until the desired point density is obtained.\nThis process is described in Algorithm \ref{alg:filtrado}. \n\nThe point cloud obtained from the filter is a compressed version of the input cloud, and its density depends on the block size chosen for the filter.\nThis dimension can be configured at the start of the algorithm and is also used by the OpenVDB structure to store the points in the global map.\n \nDue to the constraints imposed by obstacle avoidance, it is a priority to guarantee complete computations of the point cloud in the shortest possible time.\n\nTo ensure full utilization of the GPU, the program determines the maximum number of available CUDA processors in order to distribute each \texttt{subdiv} of the space.\nThis can be seen in line 11 of Algorithm \ref{alg:filtrado}.\n\n\nExperiments were carried out in simulated and real environments to validate the filtering method.\nA brief description of the system used is presented below.\n\n\n\begin{figure}[b]\n\centering\n \includegraphics[width=0.45\textwidth]{images\/gazebo_stvl.png}\n \caption{Robot in the simulation environment. Left: 3D map obtained by the algorithm. 
Right: office corridor with obstacles on its sides}\n \label{fig:prueba_simulada}\n\end{figure}\n\n\subsection{Simulated scenario}\nTo evaluate the proposed algorithm, the Gazebo simulator \cite{koenig2004design}, version 9, was used in a first stage.\nThis simulator was chosen because it is compatible with the message protocol of the ROS Melodic version used in this work, and it supports the sensors required by the algorithm.\n\nThe robot is of the differential-drive type, with an RGB-D camera mounted on its top.\nIt has odometry, so the generated map can be extended beyond the range of the depth camera.\nFig.~\ref{fig:prueba_simulada} shows the map generated by the algorithm.\nThe scenario attempts to recreate an office environment, with furniture arranged on both sides.\nIt is worth noting that, being a simulated sensor, it does not exhibit the artifacts normally found in RGB-D cameras.\n\n\subsection{Real scenario}\nFor a qualitative analysis, tests were carried out in a real scenario where images of a corridor with various obstacles and doors were captured.\nThe configuration shown in Fig.~\ref{fig:pruebas_real} was used.\nIt consists of the RGB-D camera with its optical axis aligned with the direction of travel along the corridor.\nThe camera was mounted on an aluminum profile.\nBelow it, the Jetson Nano development platform was placed.\n\n\section{Results}\n\nAs explained above, most of the algorithm's computation is concentrated in two main parts: the filtering and the access to the global point cloud through the functions provided by OVDB.\nAfter implementing the algorithm, Fig.~\ref{fig:pcl_gpu} shows the analysis of each 
function, comparing the original version with the GPU version running on the Jetson Nano.\n\n\begin{figure}[b]\n\centerline{\includegraphics[width=0.5\textwidth]{images\/tiempos_funciones_pcl_gpu.png}}\n\caption{Comparison of the most computationally expensive functions of the algorithm, with the filtering in CUDA and in PCL. It can be seen that the functions \texttt{clear\_frustum} and \texttt{mark}, which are implemented on the CPU, now run in less time, since the CPU has more resources available.}\n\label{fig:pcl_gpu}\n\end{figure}\n\nFig.~\ref{fig:tiempo_filtrado} shows the times taken by the filtering of the point cloud.\nAs can be seen, the CUDA version is faster than the original version that uses the PCL library for filtering.\nThis was to be expected, since the filtering can be decomposed in a way that exposes the parallelism of the algorithm.\nSuch massively parallel tasks let GPUs gain large advantages over the serialized version.\n\n\begin{figure}[ht!]\n\centerline{\includegraphics[width=0.5\textwidth]{images\/tiempos_filtrado.png}}\n\caption{Comparison between the original PCL filtering algorithm and the proposed CUDA version, for different resolutions of the global map}\n\label{fig:tiempo_filtrado}\n\end{figure}\n\nOn the other hand, moving the point cloud processing to the GPU means that the CPU now has more resources, leaving room to run other algorithms simultaneously.\nThis can be seen in Fig.~\ref{fig:tiempo_operador_ovdb} as a reduction in the point cloud processing time of the OVDB algorithm, which runs on the CPU.\nFor the same chosen grid size, the algorithm runs in less time in the version with the filter parallelized in 
CUDA.\n\nThis can also be seen in Fig.~\ref{fig:pcl_gpu}, where the functions \texttt{clear\_frustum} and \texttt{mark}, which are implemented on the CPU, now run in less time even though no improvements were made to those functions.\nIt is important to note that the improvement targeted the filtering of the point cloud; the algorithm for inserting and deleting points in the global map through the OVDB functions was not modified.\n\n\n\begin{figure}[b]\n\centerline{\includegraphics[width=0.5\textwidth]{images\/tiempos_operador_ovdb.png}}\n\caption{Comparison of the time for the CPU operations of the OpenVDB library on the global map, for different grid sizes.}\n\label{fig:tiempo_operador_ovdb}\n\end{figure}\n\nFig.~\ref{fig:grilla_3d_generada} shows the 3D occupancy map generated by the algorithm.\nNote also, at the bottom, the absence of voxels (points in the 3D map), since this is a condition required by the \texttt{navigation\_stack} to project onto the plane and generate the occupancy map.\nThis generated map can be used by planning algorithms to move through the environment while avoiding obstacles and, in parallel, to generate a map with the odometry information.\nThis scenario would be the case of a robot navigating a corridor in which obstacles obstruct its path.\nIn this case the chosen resolution was 5cm, which not only made it possible to represent the obstacles effectively (cardboard boxes in the image) but also captured details of the environment.\nFor example, in the upper right corner, we can see how the fire extinguisher is captured by the 3D map.\nIn the case of the door on the left of the scene, the 
discontinuity can be distinguished by a change in the color of the 3D grid.\nIt is important to point out that, due to noise in the depth camera, in the third image of Fig.~\ref{fig:grilla_3d_generada} some voxels of the 3D map are occluded by the point cloud.\n\nThe filtering algorithm implemented in CUDA does not have a large arithmetic load, since most operations correspond to memory movements.\nAs mentioned above, the platform used has LPDDR4 memory with four 16-bit channels, capable of reaching a theoretical maximum bandwidth of 25.6GB\/s; however, in the tests performed the speed does not exceed 16GB\/s for copies within the GPU itself and 10GB\/s for CPU-GPU copies.\nThe tests were carried out with the examples suggested by the manufacturer \cite{cudaSpeed} to evaluate the bandwidth.\nIt is worth noting that the slower CPU-GPU transfers are due to not using pinned memory (\texttt{pinned\_memory}), which prevents the operating system from moving it or swapping it to disk.\nPinned memory provides a higher transfer speed for the GPU and allows asynchronous copying.\n\n\nAs can be seen in Fig.~\ref{fig:mejora_velocidad}, a notable improvement in the point cloud filtering time is obtained.\nThe filtering time is reduced by an order of magnitude when the algorithm runs on the desktop computer, as indicated in Table \ref{tab:mejora_velocidad}.\nThis is also due to the change in microarchitecture, since in Turing the processors were redesigned to unify shared memory, texture caching, and memory load caching into a single unit.\nThis translates into 2x more bandwidth and more than 2x more capacity available for the L1 cache \cite{cudaTuring}, in 
comparison to the pre-Turing architecture.\nMoreover, the video card has 22 SMs (Streaming Multiprocessors) with 64 CUDA cores each, which improves the performance of the algorithm, since each subdivision of the space (\texttt{subdiv}) is processed independently of the rest. \n\nIt should be clarified that in this last case the algorithm uses unified memory, but the \textit{Zero-copy} concept available on the Jetson platform cannot be used.\n\n\begin{table}[htbp]\n\caption{Statistics obtained for the filtering function on the Jetson Nano platform and on a discrete GPU, for a voxel size of 0.02m}\n\label{tab:mejora_velocidad}\n\centering\n\begin{tabular}{l|c|c}\nPlatform & Mean [ms]& Standard deviation\\\n\hline\npcl-jetson& $46.88$ & $1.60\cdot 10^{-3}$\\\ncuda-jetson& $23.2$ & $1.38\cdot 10^{-3}$\\\ncuda-desktop& $3.9$ & $1.76\cdot 10^{-3}$\\\n\end{tabular}\n\end{table}\n\n\n\begin{figure}[htbp]\n\centerline{\includegraphics[width=0.5\textwidth]{images\/compara_jetson_desktop.png}}\n\caption{Comparison of filtering time for three configurations with a resolution of 0.02m: left, the original filtering algorithm running on the Jetson CPU; center, the proposed algorithm running on the Jetson GPU; right, the proposed algorithm running on the desktop GPU}\n\label{fig:mejora_velocidad}\n\end{figure}\n\n\n\n\balance\n\n\section{Conclusions}\nThis work presented an improvement to the point cloud filtering in the STVL program.\nTests were carried out both in simulation and in real life, and the advantages of processing the point cloud on the embedded GPU of platforms such as the NVIDIA Jetson were analyzed.\nA speed comparison was also performed against a discrete video card in a desktop PC, which confirmed that 
the increase in memory speed decreases the processing time of the algorithm.\nDuring the development of the algorithm, it became clear that it is very important to develop on the same GPU architecture as that of the target platform.\nA mismatch could lead to conceptual and algorithmic errors, since some functions may not be available.\nIn the case of the Jetson Nano, it has a limited version of the unified memory functions.\nFuture work includes implementing improvements to the presented algorithm on the Jetson TX2 platform, which has the Pascal microarchitecture and better unified memory support, with which a further performance improvement is expected.\n\nWork is also underway to replace the use of the OpenVDB functions (executed on the CPU) with those of the GVDB library provided by NVIDIA \cite{gvdb}.\nDue to the use of functions implemented from the Pascal microarchitecture onward, the Jetson TX2 platform is planned to be used.\n\n\n\n\bibliographystyle{IEEEtran}\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\label{intro}\n\nIn recent years, reinforcement learning~\cite{BertsekasTsitsiklis1996,KaelblingLittmanMoore1996,sutton1998reinforcement,Szepesvari2009} has emerged as a leading framework to learn how to act optimally in unknown environments. Policy gradient methods~\cite{Sutton2000,Kakade2002,KondaTsitsiklis2003,\nSchulman2015,Schulman2017,SilverSchrittwieserSimonyanEtAl2017} have played a prominent role in the success of reinforcement learning. Such methods have two critical components: policy evaluation and policy improvement. 
In the policy evaluation step, the performance of a parameterized policy is evaluated, while in the policy improvement step the policy parameters are updated using stochastic gradient ascent.\n\nPolicy gradient methods may be broadly classified as Monte Carlo \nmethods and temporal difference methods. In Monte Carlo methods, the performance of a\npolicy is estimated using the discounted return of a single sample path; in\ntemporal difference methods, the value(-action) function is guessed and this guess is\niteratively improved using temporal differences. Monte Carlo methods are attractive\nbecause they have zero bias, are simple and easy to implement, and work for\nboth discounted and average reward setups as well as for models with\ncontinuous state and action spaces. However, they suffer from various drawbacks.\nFirst, they have a high variance because a single sample path is used to\nestimate performance. Second, they are not asymptotically optimal for\ninfinite horizon models because it is effectively\nassumed that the model is episodic; in infinite horizon models, the\ntrajectory is arbitrarily truncated to treat the model as an episodic model.\nThird, the policy improvement step cannot be carried out in tandem with\npolicy evaluation. One must wait until the end of the episode to estimate the\nperformance and only then can the policy parameters be updated. It is for\nthese reasons that Monte Carlo methods are largely ignored in the literature\non policy gradient methods, which\nalmost exclusively focuses on temporal difference methods such as actor-critic with eligibility traces~\cite{sutton1998reinforcement}. \n\nIn this paper, we propose a Monte Carlo method---which we call \emph{Renewal\nMonte Carlo} (RMC)---for infinite horizon Markov decision processes with\ndesignated start state. 
Like Monte Carlo, RMC has low bias, is simple and easy\nto implement, and works for models with continuous state and action spaces.\nAt the same time, it does not suffer from the drawbacks of typical Monte Carlo methods.\nRMC is a low-variance online algorithm that works for infinite horizon\ndiscounted and average reward setups. One does not have to wait until the end\nof the episode to carry out the policy improvement step; it can be carried out\nwhenever the system visits the start state (or a neighborhood of\nit).\n\nAlthough renewal theory is commonly used to estimate the performance of stochastic\nsystems in the simulation optimization community~\cite{Glynn1986,Glynn1990},\nthose methods assume that the probability law of the primitive random\nvariables and its weak derivative are known, which is not the case in\nreinforcement learning. Renewal theory is also commonly used in\nthe engineering literature on queuing theory and systems and control for\nMarkov decision processes (MDPs) with average reward criteria and a known\nsystem model. There is some prior work on using renewal theory for\nreinforcement learning~\cite{MarbachTsitsiklis2001,MarbachTsitsiklis2003},\nwhere renewal theory based estimators for the average return and differential\nvalue function for average reward MDPs are developed. In RMC, renewal theory is\nused in a different manner for discounted reward MDPs (and the results\ngeneralize to average cost MDPs).\n\n\section{RMC Algorithm} \label{sec:rl}\n\nConsider a Markov decision process (MDP) with state $\State_t \in \mathcal{\State}$\nand action $\Action_t \in \ACTION$. 
The system starts in an initial state\n$\\state_0 \\in \\mathcal{\\State}$ and at time $t$:\n\\begin{enumerate}\n \\item there is a controlled transition from $S_t$ to $S_{t+1}$ according to\n a transition kernel $P(\\Action_t)$;\n \\item a per-step reward $R_t = r(\\State_t, \\Action_t, \\State_{t+1})$ is\n received.\n\\end{enumerate}\nFuture is discounted at a rate $\\discount \\in (0,1)$. \n\nA (time-homogeneous and Markov) policy $\\policy$ maps the current state to a\ndistribution on actions, i.e., $\\Action_t \\sim \\policy(\\State_t)$. We use\n$\\policy(\\action | \\state)$ to denote \n$\\PR(\\Action_t = \\action | \\State_t = \\state)$.\nThe performance of a policy $\\policy$ is given by\n\\begin{equation}\n J_\\pi = \n \\EXPA\\biggl[\\sum_{t=0}^{\\infty}\\discount^{t}\\Reward_t\\biggm|\\State_0 =\n \\state_0\\biggr]. \\label{eq:Vp-defn}\n\\end{equation}\n\nWe are interested in identifying an optimal policy, i.e., a policy that maximizes the performance. When $\\mathcal{\\State}$ and $\\ACTION$ are Borel spaces, we assume that the model satisfies the standard conditions under which time-homogeneous Markov policies are optimal~\\cite{Hernandez-Lerma1996}. In the\nsequel, we present a sample path based online learning algorithm, which\nwe call Renewal Monte Carlo (RMC), which identifies a locally optimal policy\nwithin the class of parameterized policies.\n\nSuppose policies are parameterized by a closed and convex subset\n$\\polParSpace$ of the Euclidean space. For example, $\\polParSpace$\n could be the weight vector in a Gibbs soft-max policy, or the weights\n of a deep neural network, or the thresholds in a control limit policy, and\nso on. Given $\\polPars \\in \\polParSpace$, we use $\\policy_\\polPars$ to denote\nthe policy parameterized by $\\polPars$ and $J_\\polPars$ to denote\n$J_{\\policy_{\\polPars}}$. 
We assume that for all policies $\\policy_\\polPars$,\n$\\polPars \\in \\polParSpace$, the designated start state $s_0$ is positive\nrecurrent.\n\nThe typical approach for policy gradient based reinforcement learning is to start with an\ninitial guess $\\polPars_0 \\in \\polParSpace$ and iteratively update it using\nstochastic gradient ascent. In particular, let $\\widehat \\GRAD J_{\\polPars_m}$ be an\nunbiased estimator of $\\GRAD_\\polPars J_\\polPars \\big|_{\\polPars =\n\\polPars_m}$, then update \n\\begin{equation} \\label{eq:J-update}\n \\polPars_{m+1}\n= \\big[ \\polPars_m + \\alpha_m \\widehat \\GRAD J_{\\polPars_m} \\big]_{\\polParSpace}\n\\end{equation} \nwhere $[\\polPars]_{\\polParSpace}$ denotes the projection of\n$\\polPars$ onto $\\polParSpace$ and $\\{\\alpha_m\\}_{m \\ge 1}$ is the sequence of\nlearning rates that satisfies the standard assumptions of \n\\begin{equation}\\label{eq:lr}\n \\sum_{m=1}^\\infty \\alpha_m = \\infty \n \\quad\\text{and}\\quad \n \\sum_{m=1}^\\infty \\alpha_m^2 < \\infty.\n\\end{equation}\nUnder mild\ntechnical conditions~\\cite{Borkar:book}, the above iteration converges to a $\\polPars^*$ that is\nlocally optimal, i.e., $\\GRAD_\\polPars J_\\polPars \\big|_{\\polPars =\n\\polPars^*} = 0$. In RMC, we approximate $\\GRAD_\\polPars J_\\polPars$ by a\nRenewal theory based estimator as explained below.\n\nLet $\\tau^{(n)}$ denote the stopping time when the system returns to the start\nstate $\\state_0$ for the $n$-th time. In particular, let $\\tau^{(0)}=0$ and\nfor $n \\ge 1$ define \n\\[ \\tau^{(n)} = \\inf\\{t > \\tau^{(n-1)}:\\state_t = \\state_0\\}. \\]\nWe call the sequence of $(\\State_t, \\Action_t,\n\\Reward_t)$ from $\\tau^{(n-1)}$ to ${\\tau^{(n)} - 1}$ as the\n\\emph{$n$-th regenerative cycle}. 
Let $\\mathsf{R}^{(n)}$ and $\\mathsf{T}^{(n)}$ denote the total\ndiscounted reward and total discounted time of the $n$-th regenerative cycle,\ni.e., \n\\begin{align}\\label{eq:Rn_and_Tn}\n \\mathsf{R}^{(n)} = \\Gamma^{(n)} \n \\smashoperator[r]{\\sum_{t = \\tau^{(n-1)}}^{\\tau^{(n)} - 1}}\n \\discount^{t} R_t\n \\quad\\text{and}\\quad\n \\mathsf{T}^{(n)} = \\Gamma^{(n)}\n \\smashoperator[r]{\\sum_{t = \\tau^{(n-1)}}^{\\tau^{(n)}-1}}\n \\discount^{t},\n\\end{align}\nwhere $\\Gamma^{(n)}=\\discount^{-\\tau^{(n-1)}}$.\nBy the strong Markov property, $\\{\\mathsf{R}^{(n)}\\}_{n \\ge 1}$ and $\\{\\mathsf{T}^{(n)}\\}_{n \\ge 1}$\nare i.i.d.\\@ sequences. Let $\\mathsf{R}_\\polPars$ and $\\mathsf{T}_\\polPars$ denote $\\EXP[\\mathsf{R}^{(n)}]$\nand $\\EXP[\\mathsf{T}^{(n)}]$, respectively. Define\n\\begin{equation}\n \\widehat \\mathsf{R} = \\frac 1N \\sum_{n=1}^N \\mathsf{R}^{(n)} \n \\quad \\hbox{and}\\quad\n \\widehat \\mathsf{T} = \\frac 1N \\sum_{n=1}^N \\mathsf{T}^{(n)} ,\n \\label{eq:est}\n\\end{equation}\nwhere $N$ is a large number. \nThen, $\\widehat \\mathsf{R}$ and $\\widehat \\mathsf{T}$ are unbiased and asymptotically\nconsistent estimators of $\\mathsf{R}_\\polPars$ and $\\mathsf{T}_\\polPars$.\n\nFrom ideas similar to standard\nRenewal theory \\cite{Feller1966}, we have the following.\n\\begin{proposition}[Renewal Relationship]\\label{prop:renewal-basic1} The performance of policy $\\policy_\\polPars$ is given by:\n\\begin{equation}\\label{eq:renewal-basic1}\n J_\\polPars = \\frac { \\mathsf{R}_\\polPars } { (1 - \\discount) \\mathsf{T}_\\polPars }.\n\\end{equation}\n\\end{proposition}\n\\begin{proof}\n For ease of notation, define\n \\[\n \\overline \\mathsf{T}_\\polPars = \\EXPB\\big[ \\discount^{\\tau^{(n)} - \\tau^{(n-1)}}\n \\big]\n \\]\n Using the formula for geometric series, we get that $\\mathsf{T}_\\polPars = ( 1 -\n \\overline \\mathsf{T}_\\polPars)\/(1 - \\discount)$. 
Hence,\n \\begin{equation} \\label{eq:Tbar1}\n \\overline \\mathsf{T}_\\polPars = 1 - (1 - \\discount) \\mathsf{T}_\\polPars.\n \\end{equation}\n\n Now, consider the performance:\n \\begin{align}\n J_\\polPars &= \\EXPB\\bigg[\n \\sum_{t=0}^{\\tau^{(1)}-1} \\discount^{t} R_t \n \\notag \\\\\n & \\hspace{7em}\n + \\discount^{\\tau^{(1)}}\n \\smashoperator{\\sum_{t = \\tau^{(1)}}^\\infty} \\discount^{t-\\tau^{(1)}} R_t \n \\biggm| \\State_{0} = \\state_0 \\bigg]\n \\displaybreak[0]\n \\notag \\\\\n &\\stackrel{(a)}= \\mathsf{R}_\\polPars + \n \\EXPB[ \\discount^{\\tau^{(1)}} ]\\, J_\\polPars\n \\displaybreak[0]\n \\notag \\\\\n &= \\mathsf{R}_\\polPars + \\overline \\mathsf{T}_\\polPars J_\\polPars,\n \\label{eq:R-11}\n \\end{align}\n where the second term in $(a)$ uses the independence of the random\n variables up to time $\\tau^{(1)}-1$ from those from $\\tau^{(1)}$ onwards, \n which follows from the strong Markov property. Substituting~\\eqref{eq:Tbar1}\n in~\\eqref{eq:R-11} and rearranging terms, we get the result of the\n proposition.\n\\end{proof}\nDifferentiating both sides of Equation~\\eqref{eq:renewal-basic1} with respect to $\\polPars$, we get that\n\\begin{equation} \\label{eq:H}\n \\GRAD_\\polPars J_\\polPars = \\frac{H_\\polPars}{\\mathsf{T}_\\polPars^2(1 - \\discount)},\n \\enskip\\text{where }\n H_\\polPars = \\mathsf{T}_\\polPars \\GRAD_\\polPars \\mathsf{R}_\\polPars\n - \\mathsf{R}_\\polPars \\GRAD_\\polPars \\mathsf{T}_\\polPars. \n\\end{equation}\n\nTherefore, instead of using stochastic gradient ascent to find the maximum\nof $J_\\polPars$, we can use stochastic approximation to find\nthe root of $H_\\polPars$. In particular, let $\\widehat H_m$ be an unbiased\nestimator of $H_{\\polPars_m}$. 
We then use the update\n\\begin{equation} \\label{eq:H-update}\n \\polPars_{m+1} = \\big[ \\polPars_m + \\alpha_m \\widehat H_m \\big]_{\\polParSpace}\n\\end{equation}\nwhere $\\{\\alpha_m\\}_{m \\ge 1}$ satisfies the standard conditions on learning\nrates~\\eqref{eq:lr}. The above iteration converges to a locally optimal policy.\nSpecifically, we have the following.\n\n\\begin{theorem}\\label{thm:convergence}\n Let $\\widehat \\mathsf{R}_m$, $\\widehat \\mathsf{T}_m$, $\\widehat \\GRAD \\mathsf{R}_m$ and $\\widehat\n \\GRAD \\mathsf{T}_m$ be unbiased estimators of $\\mathsf{R}_{\\polPars_m}$, $\\mathsf{T}_{\\polPars_m}$,\n $\\GRAD_\\polPars \\mathsf{R}_{\\polPars_m}$, and $\\GRAD_\\polPars \\mathsf{T}_{\\polPars_m}$,\n respectively, such that $\\widehat \\mathsf{T}_m \\perp \\widehat \\GRAD \\mathsf{R}_m$ and\n $\\widehat \\mathsf{R}_m \\perp \\widehat \\GRAD \\mathsf{T}_m$.\\footnote{The notation $X \\perp Y$\n means that the random variables $X$ and $Y$ are independent.} Then,\n \\begin{equation}\n \\widehat H_m = \\widehat \\mathsf{T}_m \\widehat \\GRAD \\mathsf{R}_m - \\widehat \\mathsf{R}_m \n \\widehat \\GRAD \\mathsf{T}_m\n \\label{eq:H-est}\n \\end{equation}\n is an unbiased estimator of $H_{\\polPars_m}$ and \n the sequence $\\{\\polPars_m\\}_{m \\ge 1}$ generated\n by~\\eqref{eq:H-update} converges almost surely and \n \\[\n \\lim_{m \\to \\infty} \\GRAD_\\polPars J_\\polPars \\big|_{\\polPars_m} = 0.\n \\]\n\\end{theorem}\n\\begin{proof}\n The unbiasedness of $\\widehat H_m$ follows immediately from the\n independence assumption. The convergence of\n $\\{\\polPars_m\\}_{m \\ge 1}$ follows from~\\cite[Theorem 2.2]{Borkar:book}\n and the fact that the model satisfies conditions (A1)--(A4)\n of~\\cite[pg~10--11]{Borkar:book}.\n\\end{proof}\n\nIn the remainder of this section, we present two methods for estimating the\ngradients of $\\mathsf{R}_\\polPars$ and $\\mathsf{T}_\\polPars$. 
\nThe first is a likelihood ratio based\ngradient estimator, which applies when the policy is differentiable with respect\nto the policy parameters. The second is a simultaneous perturbation based\ngradient estimator that uses finite differences, which is useful when the\npolicy is not differentiable with respect to the policy parameters.\n\n\\subsection{Likelihood ratio based gradient estimator}\\label{sec:likelihood}\n\nOne approach to estimate the performance gradient is to use likelihood ratio\nbased estimates~\\cite{Rubinstein1989,Glynn1990,Williams1992}.\nSuppose the policy $\\policy_\\polPars(\\action | \\state)$ is differentiable with respect to\n$\\polPars$. For any time~$t$, define the score function\n\\begin{equation}\\label{eq:score}\n \\Score_t = \n \\GRAD_\\polPars \\log[ \\policy_\\polPars(\\Action_t \\mid \\State_t) ],\n\\end{equation}\nand for $\\sigma \\in \\{ \\tau^{(n-1)}, \\dots, \\tau^{(n)} - 1 \\}$, define\n\\begin{equation}\n \\mathsf{R}^{(n)}_\\sigma = \\Gamma^{(n)}\\sum_{t=\\sigma}^{\\tau^{(n)}-1}\\discount^tR_t,\\enskip\n \\mathsf{T}^{(n)}_\\sigma = \\Gamma^{(n)}\\sum_{t=\\sigma}^{\\tau^{(n)}-1}\\discount^t.\\label{eq:R_T_sigma}\n\\end{equation}\nIn this notation, $\\mathsf{R}^{(n)} = \\mathsf{R}^{(n)}_{\\tau^{(n-1)}}$ and $\\mathsf{T}^{(n)} = \\mathsf{T}^{(n)}_{\\tau^{(n-1)}}$.\nThen, define the following estimators for $\\GRAD_\\polPars \\mathsf{R}_\\polPars$ and\n$\\GRAD_\\polPars \\mathsf{T}_\\polPars$:\n\\begin{align}\n\\widehat \\GRAD \\mathsf{R} &= \\frac 1N \\sum_{n=1}^N \\sum_{\\sigma=\\tau^{(n-1)}}^{\\tau^{(n)}-1}\\mathsf{R}^{(n)}_\\sigma \\Score_{\\sigma},\\label{eq:grad_R_new}\\\\\n\\widehat \\GRAD \\mathsf{T} &= \\frac 1N \\sum_{n=1}^N \\sum_{\\sigma=\\tau^{(n-1)}}^{\\tau^{(n)}-1}\\mathsf{T}^{(n)}_\\sigma \\Score_{\\sigma},\\label{eq:grad_T_new}\n\\end{align}\nwhere $N$ is a large number. 
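A minimal sketch of these estimators is given below; it assumes scalar policy parameters and that the caller supplies, for every step of every cycle, the per-step reward together with the score $\GRAD_\polPars \log \policy_\polPars(\Action_t \mid \State_t)$.

```python
import numpy as np

def lr_gradient_estimates(cycles, gamma):
    """Likelihood ratio estimates of grad R_theta and grad T_theta.

    `cycles` is a list of regenerative cycles, each a list of
    (reward, score) pairs.  Because Gamma^{(n)} cancels the discount
    accumulated before the cycle starts, discounting restarts within each
    cycle: step k of a cycle carries weight gamma**k.
    """
    grad_R = grad_T = 0.0
    for cycle in cycles:
        rewards = np.array([r for r, _ in cycle], dtype=float)
        scores = np.array([g for _, g in cycle], dtype=float)
        disc = gamma ** np.arange(len(cycle))
        # Discounted tail sums: R_sigma and T_sigma for every sigma in the cycle.
        R_tails = np.cumsum((disc * rewards)[::-1])[::-1]
        T_tails = np.cumsum(disc[::-1])[::-1]
        grad_R += R_tails @ scores
        grad_T += T_tails @ scores
    return grad_R / len(cycles), grad_T / len(cycles)
```

Combining these with averages $\widehat{\mathsf{R}}$ and $\widehat{\mathsf{T}}$ computed from an independent run then yields the estimate $\widehat H$ used in the parameter update.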
\n\n\\begin{proposition} \\label{prop:estimator}\n $\\widehat \\GRAD \\mathsf{R}$ and $\\widehat \\GRAD \\mathsf{T}$ defined above are unbiased\n and asymptotically consistent estimators\n of $\\GRAD_\\polPars \\mathsf{R}_\\polPars$ and $\\GRAD_\\polPars \\mathsf{T}_\\polPars$. \n\\end{proposition}\n\\begin{proof}\nLet $P_\\polPars$ denote the probability induced on the sample paths when the\nsystem is following policy $\\policy_\\polPars$. For $t \\in \\{ \\tau^{(n-1)},\n\\dots, \\tau^{(n)} - 1\\}$, let $D^{(n)}_{t}$ denote the sample path $(\\State_s,\n\\Action_s, \\State_{s+1})_{s=\\tau^{(n-1)}}^{t}$ for\nthe $n$-th regenerative cycle until time $t$. Then,\n\\[\n \\let\\smashoperator\\relax\n P_\\polPars(D^{(n)}_t) = \\smashoperator{\\prod_{s= \\tau^{(n-1)}}^{t} }\n \\policy_\\polPars(A_s | S_s)\n \\PR(\\State_{s+1} | S_{s}, A_{s})\n\\]\nTherefore,\n\\begin{equation} \\label{rel:11}\n \\GRAD_\\polPars \\log P_\\polPars(D^{(n)}_t) =\n \\smashoperator{\\sum_{s=\\tau^{(n-1)}}^{t}} \\GRAD_\\polPars \\log\n \\policy_\\polPars(\\Action_s | \\State_s) = \\smashoperator{\\sum_{s=\\tau^{(n-1)}}^{t}} \\Score_s.\n\\end{equation}\n\nNote that $\\mathsf{R}_\\polPars$ can be written as:\n\\[\n \\mathsf{R}_\\polPars = \\Gamma^{(n)}\\smashoperator[r]{\\sum_{t = \\tau^{(n-1)}}^{\\tau^{(n)} - 1}}\\discount^{t}\\EXPB[ R_t].\n\\]\nUsing the log derivative trick,\\footnote{Log-derivative trick: For any distribution $p(x|\\theta)$ and any function $f$,\n\\[\n \\GRAD_\\theta \\EXP_{X \\sim p(X|\\theta)} [ f(X) ]\n = \n \\EXP_{X \\sim p(X|\\theta)}[ f(X) \\GRAD_\\theta \\log p(X | \\theta)].\n\\]\n} we get\n\\begin{align} \n \\GRAD_\\polPars \\mathsf{R}_\\polPars &= \n \\Gamma^{(n)} \n \\smashoperator[r]{\\sum_{t = \\tau^{(n-1)}}^{\\tau^{(n)} - 1}}\n \\discount^{t}\\,\n \\EXPB[ R_t \\GRAD_\\polPars \\log P_\\polPars(D^{(n)}_t) ] \n \\notag \\\\\n &\\stackrel{(a)}= \n \\Gamma^{(n)} \n \\EXPB\\bigg[\n \\sum_{t = \\tau^{(n-1)}}^{\\tau^{(n)} - 1}\n \\bigg[\n\\discount^{t} 
R_t{\\sum_{\\sigma=\\tau^{(n-1)}}^t}\\Score_\\sigma\\bigg] \\bigg]\n \\notag \\\\\n &\\stackrel{(b)}= \\EXPB\\bigg[ \n \\sum_{\\sigma = \\tau^{(n-1)}}^{\\tau^{(n)} - 1}\\Score_\\sigma\\bigg[\n \\Gamma^{(n)}\\sum_{t=\\sigma}^{\\tau^{(n)} - 1}\\discount^{t} R_t \\bigg] \n \\bigg] \\notag \\\\\n &\\stackrel{(c)}= \\EXPB\\bigg[ \n \\sum_{\\sigma = \\tau^{(n-1)}}^{\\tau^{(n)} - 1}\n \\mathsf{R}^{(n)}_\\sigma \\Score_\\sigma \\bigg]\n \\label{rel:12}\n\\end{align}\nwhere $(a)$ follows from~\\eqref{rel:11}, $(b)$ follows from changing the order\nof summations, and $(c)$ follows from the definition of\n$\\mathsf{R}^{(n)}_\\sigma$ in~\\eqref{eq:R_T_sigma}. $\\widehat \\GRAD\n\\mathsf{R}$ is an unbiased and asymptotically consistent estimator of the right-hand\nside of~\\eqref{rel:12}. The result for $\\widehat \\GRAD\n\\mathsf{T}$ follows from a similar argument.\n\\end{proof}\n\n\\begin{algorithm2e}[!tb]\n\\def\\1#1{\\quitvmode\\hbox to 1em{\\hfill$\\mathsurround0pt #1$}}\n\\SetKwInOut{Input}{input}\n\\SetKwInOut{Output}{output}\n\\SetKwInOut{Init}{initialize}\n\\SetKwProg{Fn}{function}{}{}\n\\SetKwFor{ForAll}{forall}{do}{}\n\\SetKwRepeat{Do}{do}{while}\n\\DontPrintSemicolon\n\\Input{Initial policy $\\polPars_0$, discount factor $\\discount$, \n initial state~$\\state_0$, number of regenerative cycles $N$}\n\n\\For{iteration $m = 0, 1, \\dots$}{\n \\For{regenerative cycle $n_1=1$ to $N$}{\n Generate $n_1$-th regenerative cycle using\n \\rlap{policy~$\\policy_{\\polPars_m}$.}\n\n Compute $\\mathsf{R}^{(n_1)}$ and $\\mathsf{T}^{(n_1)}$ using~\\eqref{eq:Rn_and_Tn}.\n }\n Set $\\widehat \\mathsf{R}_{m} = \\texttt{average}(\\mathsf{R}^{(n_1)}: n_1 \\in \\{1,\\dots,N\\})$.\n\n Set $\\widehat \\mathsf{T}_{m} = \\texttt{average}(\\mathsf{T}^{(n_1)}: n_1 \\in \\{1,\\dots,N\\})$.\n\n \\For{regenerative cycle $n_2=1$ to $N$}{\n Generate $n_2$-th regenerative cycle using\n \\rlap{policy~$\\policy_{\\polPars_m}$.}\n\n Compute $\\mathsf{R}_\\sigma^{(n_2)}$, 
$\\mathsf{T}_\\sigma^{(n_2)}$ and $\\Score_\\sigma$\n for all $\\sigma$.\n }\n Compute $\\widehat \\GRAD \\mathsf{R}_{m}$ and $\\widehat \\GRAD \\mathsf{T}_m$ \n using \\eqref{eq:grad_R_new} and~\\eqref{eq:grad_T_new}.\n\n \\vskip 2pt\n Set $\\widehat H_m = \\widehat \\mathsf{T}_m \\widehat \\GRAD \\mathsf{R}_m - \\widehat \\mathsf{R}_m \\widehat \\GRAD \\mathsf{T}_m$.\n\n \\vskip 4pt\n Update $\\polPars_{m+1} = \\big[ \\polPars_m + \\alpha_m \\widehat H_m\n \\big]_{\\polParSpace}$.\n}\n\\caption{RMC Algorithm with likelihood ratio based gradient estimates.}\n\\label{alg:likelihood}\n\\end{algorithm2e}\n\nTo satisfy the independence condition of Theorem~\\ref{thm:convergence}, we use two independent sample paths: one to estimate $\\widehat \\mathsf{R}$ and\n$\\widehat \\mathsf{T}$ and the other to estimate $\\widehat \\GRAD \\mathsf{R}$ and $\\widehat\n\\GRAD \\mathsf{T}$. The complete algorithm is shown in Algorithm~\\ref{alg:likelihood}.\nAn immediate consequence of Theorem~\\ref{thm:convergence} is the following. \n\\begin{corollary}\\label{cor:pol_grad_conv}\n The sequence $\\{\\polPars_m\\}_{m \\ge 1}$ generated by\n Algorithm~\\ref{alg:likelihood} converges to a locally optimal policy.\n\\end{corollary}\n\n\\begin{remark}\\label{rem:1}\nAlgorithm~\\ref{alg:likelihood} is presented in its simplest form. It is possible to reduce the variance using standard techniques such as\nsubtracting a baseline~\\cite{Williams1992,Greensmith2004,Peters2006}. \n\\end{remark}\n\n\\begin{remark}\\label{rem:2} \\label{rem:single_run}\nIn Algorithm~\\ref{alg:likelihood}, we use two separate runs to compute $(\\widehat \\mathsf{R}_m, \\widehat \\mathsf{T}_m)$ and $(\\GRAD \\widehat \\mathsf{R}_m, \\GRAD \\widehat \\mathsf{T}_m)$ to ensure that the independence conditions of Theorem~\\ref{thm:convergence} are satisfied. 
In practice, we found that using a single run to compute both $(\\widehat \\mathsf{R}_m, \\widehat \\mathsf{T}_m)$ and $(\\GRAD \\widehat \\mathsf{R}_m, \\GRAD \\widehat \\mathsf{T}_m)$ has a negligible effect on the accuracy of convergence (but speeds up convergence by a factor of two).\n\\end{remark}\n\n\\begin{remark}\\label{rem:3}\nIt has been reported in the literature~\\cite{Thomas2014} that using a biased estimate of the gradient given by:\n\\begin{equation}\n \\mathsf{R}^{(n)}_\\sigma = \\Gamma^{(n)} \n \\sum_{t=\\sigma}^{\\tau^{(n)}-1}\\discount^{t-\\sigma} R_t,\n \\label{eq:R_sigma_biased} \n\\end{equation}\n(and a similar expression for $\\mathsf{T}^{(n)}_\\sigma$) leads to faster convergence. We call this variant \\textit{RMC with biased gradients} and, in our experiments, found that it does converge faster than RMC.\n\\end{remark}\n\n\n\\subsection{Simultaneous perturbation based gradient estimator}\nAnother approach to estimate the performance gradient is to use simultaneous\nperturbation based estimates~\\cite{Spall1992,Maryak2008,Katkovnik1972,Bhatnagar:2013}. The general one-sided form of such estimates is\n\\[\n \\widehat \\GRAD \\mathsf{R}_\\polPars = \\delta (\n \\widehat \\mathsf{R}_{\\polPars + c \\delta} - \\widehat \\mathsf{R}_{\\polPars} )\/c\n\\]\nwhere $\\delta$ is a random variable with the same dimension as\n$\\polPars$ and $c$ is a small constant. The expression for $\\widehat \\GRAD\n\\mathsf{T}_\\polPars$ is similar. 
When $\\delta_i \\sim \n\\text{Rademacher}(\\pm 1)$, the above method corresponds\nto simultaneous perturbation stochastic approximation (SPSA)~\\cite{Spall1992,\nMaryak2008}; when $\\delta \\sim \\text{Normal}(0, I)$, the above method\ncorresponds to smoothed function stochastic approximation\n(SFSA)~\\cite{Katkovnik1972,Bhatnagar:2013}.\n\n\\begin{algorithm2e}[!tb]\n\\def\\1#1{\\quitvmode\\hbox to 1em{\\hfill$\\mathsurround0pt #1$}}\n\\SetKwInOut{Input}{input}\n\\SetKwInOut{Output}{output}\n\\SetKwInOut{Init}{initialize}\n\\SetKwProg{Fn}{function}{}{}\n\\SetKwFor{ForAll}{forall}{do}{}\n\\SetKwRepeat{Do}{do}{while}\n\\DontPrintSemicolon\n\\Input{Initial policy $\\polPars_0$, discount factor $\\discount$, initial state~$\\state_0$, number of regenerative cycles $N$, constant $c$, perturbation distribution\n$\\Delta$}\n\n\\For{iteration $m = 0, 1, \\dots$}{\n \\For{regenerative cycle $n_1=1$ to $N$}{\n Generate $n_1$-th regenerative cycle using\n \\rlap{policy~$\\policy_{\\polPars_m}$.}\n\n Compute $\\mathsf{R}^{(n_1)}$ and $\\mathsf{T}^{(n_1)}$ using~\\eqref{eq:Rn_and_Tn}.\n }\n Set $\\widehat \\mathsf{R}_{m} = \\texttt{average}(\\mathsf{R}^{(n_1)}: n_1 \\in \\{1,\\dots,N\\})$.\n\n Set $\\widehat \\mathsf{T}_{m} = \\texttt{average}(\\mathsf{T}^{(n_1)}: n_1 \\in \\{1,\\dots,N\\})$.\n\n Sample $\\delta \\sim \\Delta$.\n\n Set $\\polPars_m' = \\polPars_m + c \\delta$.\n\n \\For{regenerative cycle $n_2=1$ to $N$}{\n Generate $n_2$-th regenerative cycle using\n \\rlap{policy~$\\policy_{\\polPars_m'}$.}\n\n Compute $\\mathsf{R}^{(n_2)}$ and $\\mathsf{T}^{(n_2)}$ using~\\eqref{eq:Rn_and_Tn}.\n }\n Set $\\widehat \\mathsf{R}'_{m} = \\texttt{average}(\\mathsf{R}^{(n_2)}: n_2 \\in \\{1,\\dots,N\\})$.\n\n Set $\\widehat \\mathsf{T}'_{m} = \\texttt{average}(\\mathsf{T}^{(n_2)}: n_2 \\in \\{1,\\dots,N\\})$.\n\n \\vskip 2pt\n Set $\\widehat H_m = \\delta(\\widehat \\mathsf{T}_m \\widehat \\mathsf{R}'_m \n - \\widehat \\mathsf{R}_m \\widehat \\mathsf{T}'_m)\/c$.\n\n \\vskip 4pt\n 
Update $\\polPars_{m+1} = \\big[ \\polPars_m + \\alpha_m \\widehat H_m\n \\big]_{\\polParSpace}$.\n}\n\\caption{RMC Algorithm with simultaneous perturbation based gradient estimates.}\n\\label{alg:SPSA}\n\\end{algorithm2e}\n\n\nSubstituting the above estimates in~\\eqref{eq:H-est} and simplifying, we get\n\\[\n \\widehat H_\\polPars = \\delta ( \\widehat \\mathsf{T}_\\polPars \\widehat \\mathsf{R}_{\\polPars + c\\delta} \n - \n \\widehat \\mathsf{R}_\\polPars \\widehat \\mathsf{T}_{\\polPars + c \\delta} )\/c.\n\\]\nThe complete algorithm is shown in Algorithm~\\ref{alg:SPSA}. Since $(\\widehat\n\\mathsf{R}_\\polPars, \\widehat \\mathsf{T}_\\polPars)$ and $(\\widehat \\mathsf{R}_{\\polPars + c \\delta},\n\\widehat \\mathsf{T}_{\\polPars + c \\delta})$ are estimated from separate sample paths,\n$\\widehat H_\\polPars$ defined above is an unbiased estimator of $H_\\polPars$.\nThen, an immediate consequence of Theorem~\\ref{thm:convergence} is the\nfollowing.\n\\begin{corollary}\n The sequence $\\{\\polPars_m\\}_{m \\ge 1}$ generated by\n Algorithm~\\ref{alg:SPSA} converges to a locally optimal policy.\n\\end{corollary}\n\n\\section{RMC for Post-Decision State Model} \\label{sec:post_model}\nIn many models, the state dynamics can be split into two parts: a controlled evolution followed by an uncontrolled evolution. For example, many continuous state models have dynamics of the form\n\\[\nS_{t+1} = f(S_t, A_t) + N_t,\n\\]\nwhere $\\{N_t\\}_{t \\ge 0}$ is an independent noise process. For other examples, see the inventory control and event-triggered communication models in Sec~\\ref{sec:num_exp}. 
Such models can be written in terms of a post-decision state model described below.\n\nConsider a post-decision state MDP with pre-decision state $\\Prestate_t \\in\n\\PRESTATE$, post-decision state $\\Poststate_t \\in \\POSTSTATE$, and action\n$\\Action_t \\in \\ACTION$.\nThe system starts at an initial state $\\poststate_0 \\in \\POSTSTATE$ and at \ntime~$t$: \n\\begin{enumerate}\n \\item there is a controlled transition from $\\Prestate_t$ to $\\Poststate_t$\n according to a transition kernel $\\PRE P(\\Action_t)$; \n \\item there is an uncontrolled transition from $\\Poststate_t$ to\n $\\Prestate_{t+1}$ according to a transition kernel $\\POST P$;\n \\item a per-step reward $R_t = r(\\Prestate_t, \\Action_t,\n \\Poststate_t)$ is received.\n\\end{enumerate}\nThe future is discounted at a rate $\\discount \\in (0,1)$. \n\\begin{remark}\n When $\\POSTSTATE = \\PRESTATE$ and $\\POST P$ is the identity, the above model\n reduces to the standard MDP model considered in Sec~\\ref{sec:rl}. When\n $\\PRE P$ is a deterministic transition, the model reduces to a standard MDP\n model with post-decision\n states~\\cite{VanRoyBertsekasLeeEtAl1997,powell2011approximate}. \n\\end{remark}\n\nAs in Sec~\\ref{sec:rl}, we choose a (time-homogeneous and Markov) policy $\\policy$ that maps the current pre-decision state $\\Prestate_t$ to a\ndistribution on actions, i.e., $\\Action_t \\sim \\policy(\\Prestate_t)$. We use\n$\\policy(\\action | \\prestate)$ to denote \n$\\PR(\\Action_t = \\action | \\Prestate_t = \\prestate)$. \n\nThe performance when the system starts in post-decision state $\\poststate_0 \\in\n\\POSTSTATE$ and follows policy $\\policy$ is given by\n\\begin{equation}\n J_\\policy = \n \\EXPA\\biggl[\\sum_{t=0}^{\\infty}\\discount^{t}\\Reward_t\\biggm|\\Poststate_0 =\n \\poststate_0\\biggr].\n\\end{equation}\nAs before, we are interested in identifying an optimal policy, i.e., a policy that maximizes the performance. 
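One step of the post-decision state model can be sketched as follows; the specific kernels and reward in the example are illustrative assumptions, chosen to match the additive-noise dynamics $S_{t+1} = f(S_t, A_t) + N_t$ mentioned earlier.

```python
import random

def post_decision_step(pre_state, policy, pre_kernel, post_kernel, reward_fn):
    """One time step of the post-decision state model:
    controlled transition pre -> post, per-step reward, then
    uncontrolled transition post -> next pre."""
    action = policy(pre_state)
    post_state = pre_kernel(pre_state, action)      # controlled transition
    reward = reward_fn(pre_state, action, post_state)
    next_pre_state = post_kernel(post_state)        # uncontrolled transition
    return post_state, reward, next_pre_state

# Example: S_{t+1} = f(S_t, A_t) + N_t, split into a deterministic
# controlled part f and additive uncontrolled noise N_t.
rng = random.Random(0)
post, rew, nxt = post_decision_step(
    pre_state=1.0,
    policy=lambda y: 0.5,
    pre_kernel=lambda y, a: y + a,                  # f(S_t, A_t)
    post_kernel=lambda z: z + rng.gauss(0.0, 1.0),  # + N_t
    reward_fn=lambda y, a, z: -z * z,
)
print(post, rew)  # -> 1.5 -2.25
```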
When $\\mathcal{\\State}$ and $\\ACTION$ are Borel spaces, we assume that the model satisfies the standard conditions under which time-homogeneous Markov policies are optimal~\\cite{Hernandez-Lerma1996}.\nLet $\\tau^{(n)}$ denote the stopping times such that $\\tau^{(0)} = 0$ and for\n$n \\ge 1$,\n\\[\n \\tau^{(n)} = \\inf \\{ t > \\tau^{(n-1)} : \\poststate_{t-1} = \\poststate_0 \\}.\n\\]\nThe slightly unusual definition (using $\\poststate_{t-1} = \\poststate_0$\nrather than the more natural $\\poststate_t = \\poststate_0$) is to ensure that\nthe formulas for $\\mathsf{R}^{(n)}$ and $\\mathsf{T}^{(n)}$ used in Sec.~\\ref{sec:rl} remain\nvalid for the post-decision state model as well. Thus, using arguments similar to Sec.~\\ref{sec:rl}, we can show that both variants of RMC\npresented in Sec.~\\ref{sec:rl} converge to a locally optimal parameter\n$\\polPars$ for the post-decision state model as well.\n\n\\section{Approximate RMC}\\label{sec:approx_rl}\n\nIn this section, we present an approximate version of RMC (for the basic\nmodel of Sec.~\\ref{sec:rl}). Suppose that the state and action spaces\n$\\mathcal{\\State}$ and $\\ACTION$ are separable metric spaces (with metrics $d_S$ and\n$d_A$).\n\nGiven an approximation constant $\\rho \\in \\reals_{> 0}$, let $B^\\rho = \\{s \\in \\mathcal{\\State}: d_S(s,s_0) \\le \\rho\\}$ denote\nthe ball of radius $\\rho$ centered around $s_0$. Given a policy $\\policy$, let\n$\\tau^{(n)}$ denote the stopping times for successive visits to $B^\\rho$,\ni.e., $\\tau^{(0)} = 0$ and for $n \\ge 1$, \n\\[\n \\tau^{(n)} = \\inf \\{ t > \\tau^{(n-1)} : \\state_t \\in B^\\rho \\}.\n\\]\nDefine $\\mathsf{R}^{(n)}$ and $\\mathsf{T}^{(n)}$ as in~\\eqref{eq:Rn_and_Tn} and let\n$\\mathsf{R}^\\rho_\\polPars$ and $\\mathsf{T}^\\rho_\\polPars$ denote the expected values of\n$\\mathsf{R}^{(n)}$ and $\\mathsf{T}^{(n)}$, respectively. 
Define\n\\[\n J^\\rho_\\polPars = \\frac{\\mathsf{R}^\\rho_\\polPars}{ (1-\\discount) \\mathsf{T}^\\rho_\\polPars}.\n\\]\n\n\\begin{theorem}\\label{thm:approx_RMC}\n Given a policy $\\policy_\\polPars$, let $V_\\polPars$ denote the value\n function and $\\overline \\mathsf{T}^\\rho_\\polPars = \\EXPB[ \\discount^{\\tau^{(1)}} |\n \\State_0 = \\state_0 ]$ (which is at most $\\discount$). Suppose the\n following condition is satisfied:\n \\begin{enumerate}\n \\item[\\textup{(C)}] The value function $V_\\polPars$ is locally Lipschitz\n in $B^\\rho$, i.e., there exists an $L_\\polPars$ such that for any $s, s'\n \\in B^\\rho$, \n \\[\n | V_\\polPars(s) - V_\\polPars(s') | \\le L_\\polPars d_S(s,s').\n \\]\n \\end{enumerate}\n Then\n \\begin{equation}\\label{eq:approxJ_bound}\n \\big| J_\\polPars - J^\\rho_\\polPars \\big| \\le \n \\frac{ L_\\polPars \\overline \\mathsf{T}^\\rho_\\polPars } { (1-\\discount)\n \\mathsf{T}^\\rho_\\polPars} \\rho \\le \\frac{\\discount}{(1-\\discount)} L_\\polPars \\rho.\n \\end{equation}\n\\end{theorem}\n\\begin{proof}\n We follow an argument similar to Proposition~\\ref{prop:renewal-basic1}.\n \\begin{align}\n J_\\polPars &= V_\\polPars(s_0) = \n \\EXPB\\bigg[\n \\sum_{t=0}^{\\tau^{(1)}-1} \\discount^{t} R_t \n \\notag \\\\\n & \\hskip 6em \n + \\discount^{\\tau^{(1)}}\n \\sum_{t = \\tau^{(1)}}^\\infty \\discount^{t-\\tau^{(1)}} R_t \n \\biggm| \\State_{0} = \\state_0 \\bigg]\n \\notag \\\\\n &\\stackrel{(a)}= \\mathsf{R}^\\rho_\\polPars + \n\\EXPB[ \\discount^{\\tau^{(1)}} | \\State_0 = \\state_0]\\, V_\\polPars(\\state_{\\tau^{(1)}})\n\\label{eq:approx1}\n \\end{align}\n where $(a)$ uses the strong Markov property.\n Since $V_\\polPars$ is locally Lipschitz with constant $L_\\polPars$ and\n $s_{\\tau^{(1)}} \\in B^\\rho$, we have that\n \\[\n |J_\\polPars - V_\\polPars(s_{\\tau^{(1)}}) | = |V_\\polPars(s_0) -\n V_\\polPars(s_{\\tau^{(1)}}) | \\le L_\\polPars \\rho. 
\n \\]\n Substituting the above in~\\eqref{eq:approx1} gives\n \\[\n J_\\polPars \\le \\mathsf{R}^\\rho_\\polPars + \\overline \\mathsf{T}^\\rho_\\polPars (J_\\polPars +\n L_\\polPars \\rho).\n \\]\n Substituting $\\mathsf{T}^\\rho_\\polPars = (1 - \\overline \\mathsf{T}^\\rho_\\polPars)\/(1 -\n \\discount)$ and rearranging the terms, we get \n \\[\n J_\\polPars \\le J^\\rho_\\polPars + \\frac{L_\\polPars \\overline\n \\mathsf{T}^\\rho_\\polPars}{(1-\\discount) \\mathsf{T}^\\rho_\\polPars } \\rho.\n \\]\n The other direction can also be proved using a similar argument. The second\n inequality in~\\eqref{eq:approxJ_bound} follows from $\\overline \\mathsf{T}^\\rho_\\polPars \\le \\discount$ and ${\\mathsf{T}^\\rho_\\polPars \\ge 1}$.\n\\end{proof}\n\nTheorem~\\ref{thm:approx_RMC} implies that we can find an approximately optimal policy by identifying policy parameters $\\polPars$ that maximize $J^\\rho_\\polPars$. To do so, we can appropriately modify both variants of\nRMC defined in Sec.~\\ref{sec:rl} to declare a renewal whenever the state lies\nin $B^\\rho$. \n\nFor specific models, it may be possible to verify that the value function is\nlocally Lipschitz (see Sec.~\\ref{sec:inv_ctrl} for an example). However, we\nare not aware of general conditions that guarantee local Lipschitz\ncontinuity of value functions. It is possible to identify sufficient conditions that guarantee global Lipschitz continuity of value functions (see~\\cite[Theorem 4.1]{Hinderer2005},\n\\cite[Lemma 1, Theorem 1]{Rachelson2010}, \\cite[Lemma 1]{Pirotta2015}). We state these conditions below.\n\\begin{proposition}\\label{prop:Lispschitz}\n Let $V_\\polPars$ denote the value function for any policy\n $\\policy_{\\polPars}$. 
Suppose the model satisfies the following conditions:\n \\begin{enumerate}\n \\item The transition kernel $P$ is Lipschitz, i.e., there\n exists a constant $L_P$ such that for all\n $s,s' \\in \\mathcal{\\State}$ and $a,a' \\in \\ACTION$, \n \\[\n \\mathcal K(P(\\cdot | s,a), P(\\cdot | s',a')) \\le \n L_P\\big[ d_S(s,s') + d_A(a,a') \\big],\n \\]\n where $\\mathcal K$ is the Kantorovich metric (also called\n Kantorovich-Monge-Rubinstein metric or Wasserstein distance) between\n probability measures.\n\n \\item The per-step reward $r$ is Lipschitz, i.e., there exists a constant\n $L_r$ such that for all $s,s',s_+ \\in \\mathcal{\\State}$ and $a,a' \\in \\ACTION$,\n \\[\n | r(s,a,s_+) - r(s',a',s_+) | \\le \n L_r\\big[ d_S(s,s') + d_A(a,a') \\big].\n \\]\n \\end{enumerate}\n In addition, suppose the policy satisfies the following:\n \\begin{enumerate}\n \\setcounter{enumi}{2}\n \\item The policy $\\policy_\\polPars$ is Lipschitz, i.e., there exists a\n constant $L_{\\policy_\\polPars}$ such that for any $s,s' \\in \\mathcal{\\State}$,\n \\[\n \\mathcal K( \\policy_\\polPars(\\cdot | s), \\policy_\\polPars(\\cdot | s')) \n \\le\n L_{\\policy_\\polPars}\\, d_S(s,s').\n \\]\n \\item $\\discount L_P(1 + L_{\\policy_\\polPars}) < 1$.\n \\item The value function $V_\\polPars$ exists and is finite. \n \\end{enumerate}\n Then, $V_\\polPars$ is Lipschitz. In particular, for any $s, s' \\in\n \\mathcal{\\State}$,\n \\[\n | V_\\polPars(s) - V_\\polPars(s') | \\le L_\\polPars d_S(s,s'),\n \\]\n where\n \\[\n L_\\polPars = \\frac{L_r (1 + L_{\\policy_\\polPars})}\n {1 - \\discount L_P(1 + L_{\\policy_\\polPars}) }.\n \\]\n\\end{proposition}\n\n\\section{Numerical Experiments}\\label{sec:num_exp}\n\nWe conduct three experiments to evaluate the performance of RMC: a randomly\ngenerated MDP, event-triggered communication, and inventory\nmanagement. 
\n\n\\subsection{Randomized MDP (GARNET)} \\label{sec:GARNET}\n\nIn this experiment, we study a randomly generated $\\text{GARNET}(100,10,50)$ \nmodel~\\cite{Bhatnagar2009}, which is an MDP with $100$ states, $10$ actions,\nand a branching factor of $50$ (which means that each row of all transition\n matrices has $50$ non-zero elements, chosen $\\text{Unif}[0,1]$ and\nnormalized to add to~$1$). For each state-action pair, with probability\n$p=0.05$, the reward is chosen $\\text{Unif}[10,100]$, and with probability\n$1-p$, the reward is~$0$. The future is discounted by a factor of\n$\\discount=0.9$. The first state is chosen as the start state. The policy is a Gibbs soft-max distribution\nparameterized by $100 \\times 10$ (states $\\times$ actions) parameters, where\neach parameter belongs to the interval $[-30, 30]$. The temperature of the\nGibbs distribution is kept constant and equal to~$1$.\n\nWe compare the performance of RMC, RMC with biased gradients (denoted by\nRMC-B, see Remark~\\ref{rem:3}), and actor critic with eligibility\ntraces for the critic~\\cite{sutton1998reinforcement} (which we refer to as\nSARSA-$\\lambda$ and abbreviate as S-$\\lambda$ in the plots),\nwith $\\lambda \\in \\{0, 0.25, 0.5, 0.75, 1\\}$.\n For both the RMC algorithms, we use the same runs to estimate the gradients\n (see Remark~\\ref{rem:single_run} in Sec.~\\ref{sec:rl}).\n\\def\\PARAMS{For\nall algorithms, the learning rate is chosen using ADAM~\\cite{ADAM} with\ndefault hyper-parameters and the $\\alpha$ parameter of ADAM equal to $0.05$\nfor RMC, RMC-B, and the actor in SARSA-$\\lambda$, and the learning rate is equal to $0.1$ for the\ncritic in SARSA-$\\lambda$. For RMC and RMC-B, the policy parameters are\nupdated after $N=5$ renewals.}\nEach algorithm\\footnote{\\PARAMS} is run $100$ times and the mean and standard deviation of the\nperformance (as estimated by the algorithms themselves) is shown in\nFig.~\\ref{fig:GARNET-RL-train}. 
The performance of the corresponding policy\nevaluated by Monte-Carlo evaluation over a horizon of $250$~steps and averaged\nover $100$~runs is shown in Fig.~\\ref{fig:GARNET-RL-eval}. The optimal\nperformance computed using value iteration is also shown.\n\nThe results show that SARSA-$\\lambda$ learns faster (this is expected because\nthe critic is keeping track of the entire value function) but has higher\nvariance and gets stuck in a local minimum. On the other hand, RMC and RMC-B\nlearn slower but have a low bias and do not get stuck in a local minimum. The\nsame qualitative behavior was observed for other randomly generated models. Policy gradient algorithms only guarantee convergence to a local optimum. We are not sure why RMC and SARSA differ in which local optima they\nconverge to. Also, it was observed that RMC-B (which is RMC with a biased evaluation of the gradient)\nlearns faster than RMC.\n\n\\begin{figure}[!t!b]\n \\centering\n \\begin{subfigure}{1.0\\linewidth}\n \\centering\n \\includegraphics[width=\\linewidth]{light_garnet_rl_train.pdf}\n \\caption{}\n \\label{fig:GARNET-RL-train}\n \\end{subfigure}\n \\begin{subfigure}{1.0\\linewidth}\n \\centering\n \\includegraphics[width=\\linewidth]{garnet_rl_eval.pdf}\n \\caption{}\n \\label{fig:GARNET-RL-eval}\n \\end{subfigure}\n \\caption{Performance of different learning algorithms on\n $\\text{GARNET}(100,10,50)$ with $p=0.05$ and $\\discount=0.9$. \n (a)~The performance estimated by the algorithms online. (b)~The\n performance estimated by averaging over $100$ Monte Carlo evaluations\n for a rollout horizon of $250$. 
The solid lines show the mean value and the shaded region shows the $\\pm$ one standard deviation region.}\n\\end{figure}\n\n\\subsection{Event-Triggered Communication} \\label{sec:rem_est}\n\n\\begin{figure}[!t!b]\n \\centering\n \\renewcommand\\unitlength{cm}\n \\includegraphics[width=1.0\\linewidth]{rl_re.pdf}\n \\caption{Policy parameters versus number of samples (sample values averaged over 100 runs) for event-triggered\n communication using RMC for different values of $p_d$. The solid lines\n show the mean value and the shaded area shows the $\\pm$ one standard\n deviation region.}\n \\label{fig:RE}\n\\end{figure}\n\nIn this experiment, we study an event-triggered communication problem that arises in networked control systems~\\cite{LipsaMartins:2011,CSM:thresholds}. A transmitter observes a first-order autoregressive process $\\{X_t\\}_{t \\ge 1}$, i.e., $X_{t+1} =\n\\alpha X_t + W_t$, where $\\alpha, X_t, W_t \\in \\reals$, and $\\{W_t\\}_{t \\ge 1}$ is\nan i.i.d.\\ process. At each time, the transmitter uses an event-triggered\npolicy (explained below) to determine whether or not to transmit (denoted by\n$A_t = 1$ and $A_t = 0$, respectively). Transmission takes place over an\ni.i.d.\\ erasure channel with erasure probability $p_d$. Let $\\Prestate_t$ and\n$\\Poststate_t$ denote the ``error'' between the source realization and its\nreconstruction at the receiver. It can be shown that $\\Prestate_t$ and\n$\\Poststate_t$ evolve as follows~\\cite{LipsaMartins:2011,CSM:thresholds}: when $A_t = 0$,\n$\\Poststate_t = \\Prestate_t$; when $A_t = 1$, $\\Poststate_t = 0$ if the\ntransmission is successful (w.p. $(1-p_d)$) and $\\Poststate_t = \\Prestate_t$ if\nthe transmission is not successful (w.p. $p_d$); \nand $\\Prestate_{t+1} = \\alpha \\Poststate_t + W_t$. 
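The error dynamics above can be simulated directly. The following sketch pairs them with an event-triggered threshold policy; the threshold and noise parameters in the example are illustrative assumptions.

```python
import random

def simulate_error_process(threshold, alpha, p_d, horizon, seed=0):
    """Simulate the pre/post-decision error for event-triggered communication.

    Transmit (A_t = 1) whenever |E_t| >= threshold.  The post-decision error
    resets to 0 on a successful transmission (probability 1 - p_d) and is
    unchanged otherwise; then E_{t+1} = alpha * Z_t + W_t.
    Returns the sequence of post-decision errors Z_t.
    """
    rng = random.Random(seed)
    e_pre, post_errors = 0.0, []
    for _ in range(horizon):
        transmit = abs(e_pre) >= threshold
        if transmit and rng.random() < 1.0 - p_d:
            e_post = 0.0            # successful transmission: error resets
        else:
            e_post = e_pre          # no transmission, or channel erasure
        post_errors.append(e_post)
        e_pre = alpha * e_post + rng.gauss(0.0, 1.0)
    return post_errors

# With threshold 0 and a noiseless channel, every step transmits successfully,
# so the post-decision error is always zero.
print(simulate_error_process(0.0, 1.0, 0.0, 5))  # -> [0.0, 0.0, 0.0, 0.0, 0.0]
```

Each successful transmission resets the post-decision error to zero, which is exactly the renewal event exploited by RMC in this model.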
Note that this is a post-decision state model, where the post-decision\nstate resets to zero after every successful transmission.\\footnote{Had we used\nthe standard MDP model instead of the post-decision state model,\nthis restart would not have always resulted in a renewal.}%\n\nThe per-step cost has two components: a communication cost of\n$\\lambda A_t$, where $\\lambda \\in \\reals_{> 0}$, and an estimation error \n$(\\Poststate_t)^2$. The objective is to minimize the expected discounted cost. \n\nAn event-triggered policy is a threshold policy that chooses $A_t = 1$\nwhenever $|\\Prestate_t| \\ge \\polPars$, where $\\polPars$ is a design choice.\nUnder certain conditions, such an event-triggered policy is known to be\noptimal~\\cite{LipsaMartins:2011,CSM:thresholds}. When the system model is known, algorithms to compute the optimal $\\polPars$ are presented\nin~\\cite{XuHes2004,CM:remote-estimation}. In this section, we use RMC to\nidentify the optimal policy when the model parameters are not known. \n\nIn our experiment, we consider an event-triggered model with $\\alpha = 1$,\n$\\lambda = 500$, $p_d \\in \\{0, 0.1, 0.2\\}$, $W_t \\sim {\\cal N}(0, 1)$,\n$\\discount = 0.9$, and use the simultaneous perturbation variant of RMC\\footnote{An\nevent-triggered policy is a parametric policy but $\\policy_\\polPars(\\action |\n\\prestate)$ is not differentiable in $\\polPars$. Therefore, the likelihood\nratio method cannot be used to estimate the performance gradient.} to identify\n$\\polPars$. We run the algorithm 100 times and the results for different\nchoices of $p_d$ are shown in Fig.~\\ref{fig:RE}.\\footnote{We choose the\nlearning rate using ADAM with default hyper-parameters and the $\\alpha$\nparameter of ADAM equal to 0.01. 
We choose $c = 0.3$, $N=100$ and $\\Delta =\n\\mathcal{N}(0,1)$ in Algorithm~\\ref{alg:SPSA}.} For $p_d = 0$, the optimal\nthreshold computed using~\\cite{CM:remote-estimation} is also shown.\nThe results show that RMC converges relatively quickly and has low bias across multiple runs.\n\n\\subsection{Inventory Control} \\label{sec:inv_ctrl}\n\nIn this experiment, we study an inventory management problem that arises in operations\nresearch~\\cite{Arrow1951,Bellman1955}. Let $S_t \\in \\reals$ denote\nthe volume of goods stored in a warehouse, $A_t \\in \\reals_{\\ge 0}$ denote the\namount of goods ordered, and $D_t$ denote the demand. The state evolves\naccording to $S_{t+1} = S_t + A_t - D_{t+1}$. \n\nWe work with the normalized cost function:\n\\[C(s) = a_p s (1-\\discount)\/\\discount + a_h s \\mathds{1}_{\\{ s \\ge 0\\}} - a_b s \\mathds{1}_{\\{s < 0\\}}, \\] \nwhere $a_p$ is the procurement cost, $a_h$ is the holding cost, and $a_b$ is the backlog cost (see~\\cite[Chapter 13]{Whittle1982}\nfor details).\n\nIt is known that there exists a threshold $\\theta$ such that the optimal\npolicy is a base stock policy with threshold $\\theta$ (i.e., whenever the\ncurrent stock level falls below $\\theta$, one orders up to $\\theta$).\nFurthermore, for $s \\le \\theta$, we have that~\\cite[Sec~13.2]{Whittle1982}\n\\begin{equation}\\label{eq:opt-IC}\n V_\\polPars(s) = C(s) + \\frac{\\discount}{(1-\\discount)} \\EXP[C(\\polPars - D) ].\n\\end{equation}\nSo, for $B^\\rho \\subset (0, \\theta)$, the value function is locally Lipschitz, with\n\\[\n L_\\polPars = \\left( a_h + \\frac{1 - \\discount} {\\discount} a_p \\right).\n\\]\nHence, we can use\napproximate RMC to learn the optimal policy.\n\nIn our experiments, we consider an inventory management model with $a_h = 1$, $a_b\n= 1$, $a_p = 1.5$, $D_t \\sim \\text{Exp}(\\lambda)$ with $\\lambda = 0.025$, start\nstate $s_0 = 1$, discount factor $\\discount = 0.9$, and use the\nsimultaneous perturbation variant of 
approximate RMC to identify $\\theta$. \nWe\nrun the algorithm $100$ times and the result is shown in\nFig.~\\ref{fig:inv_ctl-RL}.\\footnote{We choose the learning rate using ADAM\nwith default hyper-parameters and the $\\alpha$ parameter of ADAM equal to\n0.25. We choose $c = 3.0$, $N=100$, and $\\Delta = \\mathcal{N}(0,1)$ in\nAlgorithm~\\ref{alg:SPSA} and choose $\\rho = 0.5$ for approximate RMC\\@. We bound the states within $[-100.0, 100.0]$.} The\noptimal threshold and performance computed using~\\cite[Sec 13.2]{Whittle1982}%\n\\footnote{For $\\text{Exp}(\\lambda)$ demand,\n the optimal threshold is (see~\\cite[Sec 13.2]{Whittle1982})\n \\[\\polPars^* = \\frac 1\\lambda \n \\log\\left( \\frac{a_h + a_b}{a_h + a_p(1-\\discount)\/\\discount} \\right).\\]\n}\nare also shown. \nThe results show that RMC converges to an approximately optimal parameter value with total cost within the bound predicted in Theorem~\\ref{thm:approx_RMC}.\n\n\\begin{figure}[!t!b]\n \\centering\n \\begin{subfigure}{1.0\\linewidth}\n \\centering\n \\includegraphics[width=\\linewidth]{AN_1_inv_ctl_threshold_paper.pdf}\n \\caption{}\n \\label{fig:inv_ctl-RL_threshold}\n \\end{subfigure}\n \\begin{subfigure}{1.0\\linewidth}\n \\centering\n \\includegraphics[width=\\linewidth]{AN_1_inv_ctl_perf_paper.pdf}\n \\caption{}\n \\label{fig:inv_ctl-RL_perf}\n \\end{subfigure}\n \\caption{(a) Policy parameters and (b) Performance (total cost) versus\n number of samples (sample values averaged over 100 runs) for inventory\n control using RMC\\@. The solid lines show the mean value and the shaded area shows the $\\pm$ one standard\ndeviation region. In (b), the performance is computed using~\\eqref{eq:opt-IC} for the policy parameters given in (a). The red rectangular region shows the total cost bound given by Theorem~\\ref{thm:approx_RMC}.}\n\\label{fig:inv_ctl-RL}\n\\end{figure}\n\n\\section{Conclusions}\n\nWe present a renewal-theory-based reinforcement learning algorithm called\nRenewal Monte Carlo. 
RMC retains the key advantages of Monte Carlo methods: it\nhas low bias, is simple and easy to implement, and works for models with\ncontinuous state and action spaces. In addition, due to the averaging over\nmultiple renewals, RMC has low variance. We generalized the\nRMC algorithm to post-decision state models and also presented a variant that converges faster to an approximately optimal policy, where the renewal state is replaced by a renewal set. The error in using such an approximation is bounded by the size of the renewal set.\n\nIn certain models, one is interested in the performance at a reference state\nthat is not the start state. In such models, we can start with an arbitrary\npolicy, ignore the trajectory until the reference state is visited for\nthe first time, and use RMC from that time onwards (treating the reference\nstate as the new start state).\n\nThe results presented in this paper also apply to average reward models where\nthe objective is to maximize\n\\begin{equation}\n J_\\pi = \n \\lim_{t_h \\to\n \\infty}\\frac{1}{t_h}\\EXPA\\biggl[\\sum_{t=0}^{t_h-1}\\Reward_t\\biggm|\\State_0 =\n \\state_0\\biggr]. \\label{eq:avg_Vp-defn}\n\\end{equation}\nLet the stopping times $\\tau^{(n)}$ be defined as before. Define the total reward\n$\\mathsf{R}^{(n)}$ and duration $\\mathsf{T}^{(n)}$ of the $n$-th regenerative cycle as\n\\[\n \\mathsf{R}^{(n)} = \n \\smashoperator[r]{\\sum_{t = \\tau^{(n-1)}}^{\\tau^{(n)} - 1}} R_t\n \\quad\\text{and}\\quad\n \\mathsf{T}^{(n)} = \\tau^{(n)} - \\tau^{(n-1)}.\n\\]\nLet $\\mathsf{R}_\\polPars$ and $\\mathsf{T}_\\polPars$ denote the expected values of $\\mathsf{R}^{(n)}$\nand $\\mathsf{T}^{(n)}$ under policy $\\policy_{\\polPars}$. Then, from standard renewal\ntheory, the performance $J_\\polPars$ is equal to \n$\\mathsf{R}_\\polPars\/ \\mathsf{T}_\\polPars$ and, therefore, $\\GRAD_\\polPars J_\\polPars =\nH_\\polPars\/\\mathsf{T}^2_\\polPars$, where $H_\\polPars$ is defined as in~\\eqref{eq:H}. 
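The renewal-reward identity $J_\polPars = \mathsf{R}_\polPars \/ \mathsf{T}_\polPars$ can be sketched in a few lines. The following is our own minimal illustration (names are illustrative, not the authors' implementation): it estimates the average reward from a single trajectory by splitting it at the renewal times:

```python
def renewal_estimate(rewards, renewal_times):
    """Estimate the average reward J = E[R]/E[T] from one trajectory.

    rewards       : list of per-step rewards R_0, R_1, ...
    renewal_times : increasing indices tau^(0)=0 < tau^(1) < ... marking
                    successive visits to the renewal state.
    """
    R, T = [], []
    for start, stop in zip(renewal_times[:-1], renewal_times[1:]):
        R.append(sum(rewards[start:stop]))  # total reward R^(n) of cycle n
        T.append(stop - start)              # duration T^(n) of cycle n
    # Renewal-reward theorem: J = E[R^(n)] / E[T^(n)]
    return (sum(R) / len(R)) / (sum(T) / len(T))
```

For instance, a trajectory with a constant per-step reward $r$ yields the estimate $r$ regardless of how the renewal times are spaced, since each cycle's reward is proportional to its duration.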
We\ncan use both variants of RMC presented in Sec.~\\ref{sec:rl} to obtain\nestimates of $H_\\polPars$ and use these to update the policy parameters\nusing~\\eqref{eq:H-update}.\n\n\n\\section*{Acknowledgment}\n\nThe authors are grateful to Joelle Pineau for useful feedback and for suggesting the idea of approximate RMC.\n\n\\bibliographystyle{IEEEtran}\n\n\\section{CHIRAL SYMMETRY}\n\\label{s1}\n\nThe outer components of nuclear forces are dominated by pion-exchanges\nand involve just a few basic subamplitudes, describing pion interactions\nwith either nucleons or other pions.\nThe simplest process $N \\!\\rar\\! \\pi N$, corresponding to the emission or \nabsorption of a single pion by a nucleon, is rather well understood and \ngives rise to the one-pion exchange $N\\!N$ potential ($OPEP$).\nThe scattering reaction $\\pi N \\!\\rar\\! 
\\pi N$ comes next and determines both \nthe very important two-pion exchange term in the $N\\!N$ force \nand the leading three-body interaction, as shown in Fig.\\ref{F1}.\n\n\\vspace{-4mm}\n\n\\begin{figure}[h]\n\\begin{center}\n\\includegraphics[width=0.7\\columnwidth,angle=0]{Robilotta_Osaka-01.eps}\n\\vspace{-3mm}\n\\caption{Free $\\pi N$ amplitude (a)\nand two-pion exchange two-body (b) and three-body (c) potentials.} \n\\label{F1}\n\\end{center}\n\\end{figure}\n\n\\vspace{-5mm}\n\nThe theoretical understanding of the $\\pi N$ amplitude proved\nto be very challenging and a suitable description was only produced \nby means of chiral symmetry.\nThis framework provides a natural explanation for the observed \nsmallness of $\\pi N$ scattering lengths and plays a fundamental\nrole in Nuclear Physics.\nNowadays, the use of chiral symmetry in low-energy pion-interactions\nis justified by $QCD$.\n\nThe small masses of the quarks $u$ and $d$, treated as perturbations \nin a chiral symmetric lagrangian, give rise to a well defined chiral \nperturbation theory (ChPT).\nHadronic amplitudes are then expanded in terms of a typical \nscale $q$, set by either pion four-momenta or nucleon three-momenta, \nsuch that $q\\ll 1$ GeV.\nThis procedure is rigorous and many results have the status of {\\em theorems}.\nIn general, these theorems are written as power series \nin the scale $q$ and involve both {\\em leading order terms} and \n{\\em chiral corrections}.\nThe former can usually be derived from tree diagrams, whereas the\nlatter require the inclusion of pion loops and are the main object \nof ChPT. 
\nAt each order, predictions for a given process must be unique and the \ninclusion of corrections cannot change already existing leading terms.\n\nThe relationship between chiral expansions of the $\\pi N$ amplitude and of \ntwo-pion exchange $(TPE)$ nuclear forces is discussed in the sequence.\nFor the $\\pi N$ amplitude, tree diagrams yield ${\\cal{O}}(q, q^2)$ terms \nand corrections up to ${\\cal{O}}(q^4)$ have already been evaluated, by means of both \ncovariant\\cite{BL}(CF) and heavy baryon\\cite{FM}(HBF) formalisms. \nIn the case of the $N\\!N$ potential, the leading term is ${\\cal{O}}(q^0)$ and\ngiven by the $OPEP$.\nThe tree-level $\\pi N$ amplitude yields $TPE$ contributions at ${\\cal{O}}(q^2,q^3)$\nand corrections at ${\\cal{O}}(q^4)$ are available, based on both \nHBF\\cite{HB} and CF\\cite{HR,HRR}.\nTree-level $\\pi N$ results also determine the leading ${\\cal{O}}(q^3)$ three-body\nforce, and partial corrections at ${\\cal{O}}(q^4)$ are beginning to be derived \n\\cite{IR07,E3NP}.\nAs this discussion suggests, ${\\cal{O}}(q^4)$ corrections to both\ntwo- and three-nucleon forces require just the ${\\cal{O}}(q^3)$\n$\\pi N$ amplitude.\n\nThe full empirical content of the $\\pi N$ amplitude cannot be predicted\nby chiral symmetry alone. \nExperimental information at low energies is usually encoded into the \nsubthreshold coefficients introduced by H\\\"ohler and collaborators\\cite{H83},\nwhich can, if needed, be translated into the low-energy constants (LECs)\nof chiral lagrangians. \nTherefore, in order to construct a ${\\cal{O}}(q^3)$ $\\pi N$ amplitude,\none uses chiral symmetry supplemented by \nsubthreshold information, as indicated in Fig. 
\\ref{F2}.\nThe first two diagrams correspond to the nucleon pole, \nwhereas the other ones represent a smooth background.\nThe third graph reproduces the Weinberg-Tomozawa contact interaction,\nthe fourth one summarizes LEC contributions and the last two describe \nmedium range pion-cloud effects.\n\n\\vspace{-5mm}\n\n\\begin{figure}[h]\n\\begin{center}\n\\includegraphics[width=0.8\\columnwidth,angle=0]{Robilotta_Osaka-02.eps}\n\\vspace{-4mm}\n\\caption{Representation of the $\\pi N$ amplitude at ${\\cal{O}}(q^3)$.} \n\\label{F2}\n\\end{center}\n\\end{figure}\n\n\n\n\\section{TWO-BODY POTENTIAL}\n\\label{s2}\n\nWith the purpose of discussing the problem of predicted \nversus observed chiral hierarchies, in this section we briefly review \nresults obtained by our group\\cite{HR,HRR} for the \n$TPE$-$N\\!N$ potential at ${\\cal{O}}(q^4)$.\nThis component is determined by the three families of diagrams\nshown in Fig. \\ref{F3}. \nFamily $I$ begins at ${\\cal{O}}(q^2)$ and implements the minimal realization of \nchiral symmetry\\cite{RR94},\nwhereas family $I\\!I$ depends on $\\pi\\p$ correlations and is ${\\cal{O}}(q^4)$.\nThey involve only the constants $g_A$ and $ f_\\pi$\nand all dependence on the LECs is concentrated in family $I\\!I\\!I$,\nwhich begins at ${\\cal{O}}(q^3)$.\n\n\\vspace{-2mm}\n\\begin{figure}[h]\n\\begin{center}\n\\includegraphics[width=0.7\\columnwidth,angle=0]{Robilotta_Osaka-03.eps}\n\\caption{Dynamical structure of the two-pion exchange potential.} \n\\label{F3}\n\\end{center} \n\\end{figure}\n\n\\vspace{-5mm}\n\nAs far as chiral orders of magnitude are concerned, one finds that \nthe various components of the force begin as follows\\cite{HRR}:\n${\\cal{O}}(q^2)\\rightarrow V_{SS}^+, V_T^+, V_C^-$ and \n${\\cal{O}}(q^3)\\rightarrow V_C^+, V_{LS}^+, V_{LS}^-, V_{SS}^-, V_T^-$,\nwhere the superscripts $(+)$ and $(-)$ refer to terms proportional \nto either the identity or $\\mbox{\\boldmath $\\tau$}^{(1)}\\!\\cdot\\! 
\\mbox{\\boldmath $\\tau$}^{(2)}$ in isospin space.\nAn interesting feature of these results is that the role played by \nfamily $I\\!I$ is completely irrelevant.\nOn the other hand, family $I$ dominates almost completely the components\n$V_{LS}^+$, $V_T^+$, $V_{SS}^+$ and $V_C^-$, \nwhereas family $I\\!I\\!I$ does the same for $V_C^+$, $V_T^-$and $V_{SS}^-$.\n\n\\vspace{-4mm}\n\\begin{figure}[h]\n\\begin{center}\n\\includegraphics[width=0.49\\columnwidth,angle=0]{Robilotta_Osaka-04a.eps}\n\\includegraphics[width=0.49\\columnwidth,angle=0]{Robilotta_Osaka-04b.eps}\n\\vspace{-3mm}\n\\caption{$OPEP$ and $TPEP$ contributions to \nspin-spin (left) and tensor (right) NUisovector components.} \n\\label{F4}\n\\end{center} \n\\end{figure}\n\n\\vspace{-5mm}\n\nThe relationship between the $OPEP[={\\cal{O}}(q^0)]$ and $TPEP[={\\cal{O}}(q^3)]$ \ncontributions to the $V_{SS}^-$ and $V_T^-$ profile functions is \nshown in Fig. \\ref{F4}, where it is possible to see that the chiral\nhierarchy is respected.\n\n\nIn Fig. 
\\ref{F5}, the two central components $V_C^-[={\\cal{O}}(q^2)]$ and \n$V_C^+[={\\cal{O}}(q^3)]$ are displayed side by side and two features \nare to be noted.\nThe first one concerns the favorable comparison with the empirical\nArgonne\\cite{Arg} potentials in both cases.\nThe second one is that $|V_C^+| \\sim 10 \\, |V_C^-|$ in regions of physical\ninterest, strongly defying the predicted chiral hierarchy.\nThis problem will be further discussed in the sequence.\n\n\\vspace{-3mm}\n\n\\begin{figure}[h]\n\\begin{center}\n\\hspace*{-4mm}\n\\includegraphics[width=0.50\\columnwidth,angle=0]{Robilotta_Osaka-05a.eps}\n\\includegraphics[width=0.50\\columnwidth,angle=0]{Robilotta_Osaka-05b.eps}\n\\vspace{-2mm}\n\\caption{Isospin odd (left) and even (right) central components of the \ntwo-pion exchange potential.} \n\\label{F5}\n\\end{center} \n\\end{figure}\n\n\\vspace{-4mm}\n\nViolations of the chiral hierarchy are also present in the \n{\\em drift potential}\\cite{Rdrift}, which corresponds to kinematical\ncorrections due to the fact that the two-body center of mass is allowed to \ndrift inside a larger system. \nIn terms of Jacobi coordinates, it is represented by the operator\n\\begin{eqnarray}\nV(r)^\\pm = \\left. V(r)^\\pm \\right] _{cm} + V_D^\\pm \\, \\Omega_{D}\n\\;\\;\\;\\;\\;\\;\\;\\; \\leftrightarrow \\;\\;\\;\\;\\;\\;\\;\\;\n\\Omega_D = \\frac{1}{4\\sqrt{3}}\\, (\\mbox{\\boldmath $\\sigma$}^{(1)}\\!-\\! \\mbox{\\boldmath $\\sigma$}^{(2)}) \\!\\cdot\\! \\mbox{\\boldmath $r$} \\!\\times\\!\n(-i \\mbox{\\boldmath $\\nabla$}^{^{\\!\\!\\!\\!\\!\\!\\!\\!^\\leftrightarrow}}_\\rho)\\;.\n\\nonumber\n\\end{eqnarray}\n\n\nThe profile function $V_D^+$, together with $V_{LS}^+$, is \ndisplayed in Fig. 
\\ref{F6}.\nDrift corrections begin at ${\\cal{O}}(q^4)$ and, in principle, should be smaller \nthan the spin-orbit terms, which begin at ${\\cal{O}}(q^3)$.\nHowever, in this channel, the hierarchy is again not respected.\n\n\\vspace{20mm}\n\\begin{figure}[h]\n\\begin{center}\n\\includegraphics[width=0.7\\columnwidth,angle=0]{Robilotta_Osaka-06.EPS}\n\\vspace{-35mm}\n\\caption{Isospin even drift (full and dotted lines)\nand spin-orbit (dashed line) potentials.} \n\\label{F6}\n\\end{center}\n\\end{figure}\n\n\n\n\n\\section{THREE-BODY POTENTIAL}\n\\label{s3}\n\nThe leading term in the three-nucleon potential, known as the $TPE$-$3NP$,\nhas long range and corresponds to the process shown in Fig.~\\ref{F1}c,\nin which a pion is emitted by one of the nucleons, scattered by a second one, \nand absorbed by the last nucleon.\nIn this case, the intermediate $\\pi N$ amplitude, which is ${\\cal{O}}(q)$ for \nfree pions, becomes ${\\cal{O}}(q^2)$ and the three-body force begins at \n${\\cal{O}}(q^3)$.\nThe first modern version of this component of the force was produced by \nFujita and Miyazawa\\cite{F-M}, its chiral structure has been much debated\nsince the seventies\\cite{3NP} and, nowadays, a sort of consensus has been \nreached about its form\\cite{Rtokyo}. \nThe leading $TPE$-$3NP$ has a generic structure given by\n\\begin{eqnarray}\n&& V_L(123) = -\\,\\frac{\\mu}{(4\\pi)^2} \n\\left\\{ \\delta_{ab} \\left[ a \\, \\mu - b \\, \\mu^3 \\, \\mbox{\\boldmath $\\nabla$}_{12} \\!\\cdot\\! \\mbox{\\boldmath $\\nabla$}_{23} \\right] \n+ d\\, \\mu^3 \\; i\\,\\epsilon_{bac} \\tau_c^{(2)}\\;\ni \\, \\mbox{\\boldmath $\\sigma$}^{(2)} \\!\\cdot\\! \\mbox{\\boldmath $\\nabla$}_{12} \\times \\mbox{\\boldmath $\\nabla$}_{23}\\right\\} \n\\nonumber\\\\\n&& \\;\\;\\;\\;\\; \\times \n\\left[ (g_A \\, \\mu\/2 \\,f_\\pi)\\;\\tau_a^{(1)} \\;\\mbox{\\boldmath $\\sigma$}^{(1)} \\!\\cdot\\! 
\\mbox{\\boldmath $\\nabla$}_{12} \\right] \\;\n\\left[ (g_A \\, \\mu\/2 \\,f_\\pi)\\;\\tau_b^{(3)} \\;\\mbox{\\boldmath $\\sigma$}^{(3)} \\!\\cdot\\! \\mbox{\\boldmath $\\nabla$}_{23} \\right] \\;\nY(x_{12}) \\; Y(x_{23}) \\;,\n\\nonumber\n\\end{eqnarray}\n\n\\noindent\nwhere $\\mu$ is the pion mass and $a$, $b$ and $d$ are strength parameters,\ndetermined by either LECs or subhtreshold coefficients. \n\nThe evaluation of ${\\cal{O}}(q^4)$ corrections requires the inclusion of single\nloop effects and is associated with a large number of \ndiagrams, which are being calculated by Epelbaum and \ncollaborators\\cite{E3NP}.\nIn order to produce a feeling for the structure of these corrections,\nwe discuss a particular set of processes belonging to the \n$TPE$-$3NP$ class, considered recently\\cite{IR07}.\nFull results involve expressions which are too long and cumbersome\nto be displayed here.\nHowever, their main qualitative features can be summarized in the\nstructure \n$V(123)=V_L(123)+ [V_{\\delta L}(123) + \\delta V(123)]$, \nwhere $V_L$ is the leading term shown above and the factors within \nsquare brackets are ChPT corrections.\nThe function $V_{\\delta L}$ can be obtained directly from $V_L$, by replacing \n$(a,b,c) \\rightarrow (\\delta a,\\delta b, \\delta c)$, where \nthe $\\delta s$ indicate changes smaller than $10\\%$.\nThis part of the ChPT correction corresponds just to shifts in the \nparameters of the leading component.\nThe term $\\delta V(123)$, on the other hand, represents effects associated \nwith new mathematical functions involving both non-local operators \nand complicated propagators containg loop integrals,\nin place of the Yukawa functions.\nThe strengths of these new functions are determined by a new set of \nparameters $e_i$, which are also typically about $10\\%$ of the \nleading ones.\n\nIn summary, ChPT gives rise both to small changes in already existing \ncoefficients and to the appearance of many new mathematical structures.\nThe 
latter are the most interesting ones, since they may be instrumental\nin explaining effects such as the $A_y$ puzzle.\n\n\n\n\\section{THE CHIRAL PICTURE}\n\\label{s4}\n\nChiral symmetry has already been applied to about 20 components \nof nuclear forces, allowing a comprehensive picture to be assessed.\nAccording to ChPT, the various effects begin to appear at different\norders and the predicted hierarchy is displayed in the table below.\n\n\n\\begin{table}[h]\n\\begin{center}\n\\begin{tabular} {|c|ccc|}\n\\hline\nbeginning\t \t& TWO-BODY & TWO-BODY\t& THREE-BODY \t \\\\ \n\t\t\t\t& $OPEP$ & $TPEP$\t\t& $TPEP$\t\t \\\\ \\hline \n${\\cal{O}}(q^0)$\t\t& $V_T^-, V_{SS}^-$\t&& \t \t \\\\[1mm] \\hline\n${\\cal{O}}(q^2)$\t\t& $V_D^-$ & $V_C^-; V_T^+, V_{SS}^+$ &\t \\\\[1mm]\\hline\n${\\cal{O}}(q^3)$\t\t&& $V_{LS}^-, V_T^-, V_{SS}^-; V_C^+, V_{LS}^+$ \n& $d; a, b$ \t\\\\[1mm] \\hline\n${\\cal{O}}(q^4)$\t\t&& $V_D^-; V_Q^+, V_D^+$ & $ e_i $ \\\\[1mm] \\hline\n\\end{tabular}\n\\end{center}\n\\end{table}\n\n\nIn Ref.\\refcite{HRR}, the relative importance of ${\\cal{O}}(q^2)$, ${\\cal{O}}(q^3)$ \nand ${\\cal{O}}(q^4)$ terms in each component of the $TPE$-$N\\!N$ potential has been studied.\nIn general, convergence at distances of physical interest is \nsatisfactory, except for $V_C^+$, where the ratio between \n${\\cal{O}}(q^4)$ and ${\\cal{O}}(q^3)$ contributions \nis larger than $0.5$ for distances smaller than $2.5$ fm.\n\nAs far as the relative sizes of the various dynamical effects are\nconcerned, one finds strong violations of the predicted hierarchy\nwhen one compares $V_C^+$ with $V_C^-$ and $V_D^+$ with $V_{LS}^+$,\nas discussed above. \nIt is interesting to note that, in both cases,\nthe unexpected enhancements occur in the isoscalar sector. 
\nThe numerical explanation for this behavior is that some of the LECs \nused in the calculation are large, being\ngenerated dynamically by delta intermediate states.\nHowever, it is also possible that perturbation theory may not apply \nto isoscalar interactions at intermediate distances.\nThis aspect of the problem is explored in the next section.\n\n\n\n\n\n\\section{SCALAR FORM FACTOR}\n\\label{s5}\n\n\n\nThe structure of $V_C^+$ was scrutinized in Ref.\\refcite{HRR}\nand found to be heavily dominated by a term of the form\n\\begin{eqnarray}\nV_C^+(r) \\sim -\\, (4\/f_\\pi^2)\\;\\left[ (c_3 - 2c_1) - c_3 \\; \\mbox{\\boldmath $\\nabla$}^2\/2 \\right] \\;\n\\tilde{\\sigma}_{N_N}(r)\\;,\n\\nonumber\n\\end{eqnarray}\nwhere the $c_i$ are LECs and $\\tilde{\\sigma}_{N_N}$ is the leading contribution \nfrom the pion cloud to the nucleon scalar form factor.\nThis close relationship between $\\tilde{\\sigma}_{N_N}$ and $V_C^+$ indicates \nthat the study of the former can shed light on the properties of the \nlatter.\n\nThe nucleon scalar form factor is defined as \n\\begin{eqnarray}\n\\langle N(p') | \\!-\\! {\\cal{L}}_{sb}\\, | N(p) \\rangle = \\sigma_N(t) \\; \\bar u(p')\\; u(p) \\;,\n\\nonumber\n\\end{eqnarray}\n\n\\noindent\nwhere ${\\cal{L}}_{sb}$ is the symmetry breaking lagrangian. 
\nIt has already been expanded\\cite{BL} up to ${\\cal{O}}(q^4)$ and receives its leading \n${\\cal{O}}(q^2)$ contribution from a tree diagram associated with the LEC $c_1$.\nCorrections at ${\\cal{O}}(q^3)$ and ${\\cal{O}}(q^4)$ are produced by two triangle diagrams, \ninvolving nucleon and delta intermediate states.\nIn configuration space\\cite{sigma}, the scalar form factor is denoted\nby $\\tilde{\\sigma}$ and one writes \n\\begin{eqnarray}\n\\tilde{\\sigma}_N(\\mbox{\\boldmath $r$}) = - 4\\, c_1\\, \\mu^2\\, \\delta^3(\\mbox{\\boldmath $r$}) + \n\\tilde{\\sigma}_{N_N}(r) + \\tilde{\\sigma}_{N_\\Delta}(r) \\;,\n\\nonumber\n\\end{eqnarray}\n\n\\noindent\nwhere $\\tilde{\\sigma}_{N_N}$ and $\\tilde{\\sigma}_{N_\\Delta}$ are the finite-range \ntriangle contributions. \n\n\\begin{figure}[t]\n\\begin{center}\n\\hspace*{-4mm}\n\\includegraphics[width=0.35\\columnwidth,angle=-90]{Robilotta_Osaka-07a.ps}\n\\includegraphics[width=0.35\\columnwidth,angle=-90]{Robilotta_Osaka-07b.ps}\n\\caption{Ratios $\\tilde{\\sigma}_N(r)\/(\\mu^2 f_\\pi^2)=(1-\\cos\\theta)$ (left)\nand $\\tilde{\\sigma}_{N_\\Delta}(r)\/\\tilde{\\sigma}_{N_N}(r)$ (right) as functions of \nthe distance $r$.}\n\\label{F7}\n\\vspace{-3mm}\n\\end{center} \n\\end{figure}\n\nThe symmetry breaking lagrangian can be expressed in terms of the chiral angle \n$\\theta$ as ${\\cal{L}}_{sb} = f_\\pi^2 \\, \\mu^2 \\,(\\cos\\theta-1)$.\nThe ratio $\\tilde{\\sigma}_N(r)\/(\\mu^2 f_\\pi^2)=(1-\\cos\\theta)$ describes the \ndensity of the $q\\bar{q}$ condensate around the nucleon and is \ndisplayed in Fig.~\\ref{F7}.\nOne notes that it vanishes at large distances and increases monotonically \nas one approaches the center.\nThis means that the function $\\tilde{\\sigma}_N(r)$ becomes meaningless inside \na critical radius $R$, corresponding to $\\theta = \\pi\/2$,\nsince the physical interpretation of the quark condensate \nrequires the condition $q\\bar{q}>0$.\nIn Ref. 
\\refcite{sigma}, the condensate was assumed to no longer exist in \nthe region $r