diff --git a/data_all_eng_slimpj/shuffled/split2/finalzeni b/data_all_eng_slimpj/shuffled/split2/finalzeni new file mode 100644 index 0000000000000000000000000000000000000000..92263187c5e0811b2a17c623429a4e03cad4b615 --- /dev/null +++ b/data_all_eng_slimpj/shuffled/split2/finalzeni @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction and Motivation} \nIn the background of any investigation into learning algorithms are no-free-lunch phenomena: roughly, the observation that assumption-free statistical learning is infeasible in general (see, e.g., \\cite[Ch. 5]{MLbook} for a formal statement). Common wisdom is that learning algorithms and architectures must adequately reflect non-trivial features of the data-generating distribution to gain inductive purchase. \n\nFor many purposes we need to move beyond passive observation, focusing instead on what \\emph{would} happen were we to \\emph{act} upon a given system. Even further, we sometimes desire to \\emph{explain} the behavior of a system, raising questions about what \\emph{would have} occurred had some aspects of a situation been different. Such questions depend not just on the data distribution; they depend on deeper features of underlying data-generating \\emph{processes} or \\emph{mechanisms}. It is thus generally acknowledged that stronger assumptions are required if we want to draw \\emph{causal} conclusions from data \\citep{Spirtes2001,Pearl2009,Imbens2015,Peters,9363924}. \n\n\nWhether implicit or explicit, any approach to causal inference involves a space of candidate causal models, viz. data-generating processes. Indeed, a blunt way of incorporating inductive bias is simply to \\emph{omit} some class of possible causal hypotheses from consideration. Many (im)possibility results in the literature can accordingly be understood as pertaining to all models within a class. \nFor instance, if we can restrict attention to \\emph{Markovian} models that satisfy \\emph{faithfulness}, then we can \\emph{always} identify the structure of a model from experimental data (e.g., \\cite{Eberhardt2005,Spirtes2001}). If we can restrict attention to Markovian (continuous) models with linear functions and \\emph{non}-Gaussian noise, then \\emph{every} model can be learned even from purely observational data \\citep{JMLR:v7:shimizu06a}. As a negative example, in the larger class of (not necessarily Markovian) models, \\emph{no} model can \\emph{ever} be determined from observational data alone \\citep{Spirtes2001,BCII2020}.\n\nAt the same time, in many settings it is sensible to aim for results with ``nearly universal'' force. It is natural to ask, e.g., within the class of all Markovian models, how ``typical'' are those in which the faithfulness condition is violated? This might tell us, for instance, how typically we could expect failure of a method that depended on these assumptions. A well-known result shows that, fixing any particular causal dependence graph, such violations have \\emph{measure zero} for any smooth (e.g., Lebesgue) measure on the parameter space of distributions consistent with that graph \\citep{Meek1995}. In fact, the standard notion of statistical consistency itself, which underlies many possibility results in causal inference, requires omission of some purportedly ``negligible'' set of possible data streams \\citep{DiaconisFreedman,Spirtes2001}.\n\nThere are two standard mathematical approaches to making concepts like ``typical'' and ``negligible'' rigorous: \\emph{measure-theoretic} and \\emph{topological}. 
While the two approaches often agree, they capture slightly different intuitions \\citep{Oxtoby}. One virtue of the measure-theoretic approach is its natural probabilistic interpretation: intuitively, we are exceedingly \\emph{unlikely} to hit upon a set with measure zero. At the same time, the measure-theoretic approach is sometimes criticized in statistical settings for its alleged dependence on a measure, and this has been argued to favor topological approaches (see, e.g., \\cite{BELOT2020159} on no-free-lunch theorems). The latter of course in turn demands an appropriate topology.\n\nIn the present work we show how to define a sequence of meaningful topologies on the space of causal models, each corresponding to a progressively coarser level of the so-called \\emph{causal hierarchy} (\\cite{pearl2018book,BCII2020}; see Fig. \\ref{fig:hierarchy} for an abbreviated pictorial summary).\nWe aim to demonstrate that topologizing causal models in this way helps clarify the scope and limits of causal inference under different assumptions, as well as the potential empirical status of those very assumptions, in a highly general setting. \n\nOur starting point is a canonical topology on the space of Borel probability distributions called the \\emph{weak topology}. The weak topology is grounded in the fundamental notion of \\emph{weak convergence} of probability distributions \\citep{Billingsley} and is thereby closely related to problems of statistical inference (see, e.g., \\cite{DemboPeres}). Recent work has sharpened this correspondence, showing that open sets in the weak topology correspond exactly to the statistical hypotheses that can be naturally deemed \\emph{verifiable} \\citep{Genin2018,GeninKelly}.\nWe extend the correspondence to higher levels of the causal hierarchy, including the most refined and expansive ``top'' level consisting of all (well-founded) causal models. Lower levels and natural subspaces (e.g., corresponding to prominent causal assumption classes) emerge as coarsenings and projections of this largest space. As an illustration of the general approach, we prove a topological version of the causal hierarchy theorem from \\cite{BCII2020}. Rather than showing that collapse happens only in a measure zero set as in \\cite{BCII2020}, our Theorem \\ref{thm:hierarchyFormal} shows that collapse is \\emph{topologically meager}. Conceptually, this highlights a different (but complementary) intuition:\nnot only is collapse exceedingly unlikely in the sense of measure, but meagerness also implies that collapse could never be \\emph{statistically verified}. Correlatively, this implies that any causal assumption that would generally allow us to infer counterfactual probabilities from experimental (or ``interventional'') probabilities must itself be statistically unverifiable (Corollary \\ref{cor:hierarchy}). \n\nTo derive such a result we actually show something slightly stronger (see Lem.~\\ref{lem:separation}): even with respect to the subspace of models consistent with a fixed \\emph{temporal order} on variables, the causal hierarchy theorem holds. Merely knowing the temporal order of the variables is not enough to render collapse of the hierarchy a statistically verifiable proposition.\nFurthermore, we show that the \\emph{witness to collapse} can be taken as any of the well-known counterfactual ``probabilities of causation'' (see, e.g., \\cite{Pearl1999}): probabilities of necessity, sufficiency, necessity \\emph{and} sufficiency, enablement, or disablement. 
That is, none of these important quantities are fully determined by experimental data except in a meager set. \n\n\n\nIn \\S\\ref{scms} we give background on causal models, and in \\S\\ref{sec:causalhierarchy} we present a model-theoretic characterization of the causal hierarchy as a sequence of spaces. Topology is introduced in \\S\\ref{sec:weaktopology}, and the main results about collapse appear in \\S\\ref{sec:main}.\nFor the technical results, we include proof sketches in the main text to provide the core intuitions, relegating some of the details to an exhaustive technical appendix, which also includes additional supplementary material.\n\n\n\\section{Structural Causal Models} \\label{scms}\nA fundamental building block in the theory of causality is the \\emph{structural causal model} \\citep{Pearl1995,Spirtes2001,Pearl2009} or SCM, which formalizes the notion of a data-generating process. In addition to specifying data-generating distributions, these models also specify the generative mechanisms that produce them. For the purpose of causal inference and learning, SCMs provide a broad, fine-grained hypothesis space.\n\nThe notions in this section have their usual definitions, following, e.g., \\cite{Pearl2009}, but we have\nrecast them in the standard language of Borel probability spaces so as to handle the case of infinitely many variables rigorously.\nWe start with notation, basic assumptions, and some probability theory.\n\n\n\\begin{notation*}\nThe signature (or range) of a variable $V$ is denoted $\\chi_V$. \nWhere $\\*S$ is a set of variables, let $\\chi_{\\*S} = \\bigtimes_{S \\in \\*S} \\chi_S$.\nGiven an indexed family of sets $\\{S_\\beta\\}_{\\beta \\in B}$ and elements $s_\\beta \\in S_\\beta$, let $(s_\\beta)_\\beta$ denote the tuple whose element at index $\\beta$ is $s_\\beta$, for all $\\beta$.\nFor $B' \\subset B$ write $\\pi_{B'} : \\bigtimes_{\\beta \\in B} S_\\beta \\to \\bigtimes_{\\beta \\in B'} S_{\\beta}$ for the \\emph{projection map} sending each $(s_\\beta)_{\\beta \\in B} \\mapsto (s_{\\beta'})_{\\beta' \\in B'}$; abbreviate $\\pi_{\\beta'} = \\pi_{\\{\\beta'\\}}$, where $\\beta' \\in B$. \n\\end{notation*}\n\nThe reader is referred to standard texts \\cite{johnkelley1975,Bogachev2007} for elaboration on the concepts used below.\n\n\\begin{definition}[Topology]\\label{def:basictopology}\nFor discrete spaces (like $\\chi_S$, for a single categorical variable $S$) we use the discrete topology, and for product spaces (like $\\chi_{\\*S}$ for a set of variables $\\*S$) we use the product topology.\nNote that the so-called \\emph{cylinder sets} of the form $\\pi^{-1}_{\\*Y}(\\{\\*y\\})$ for finite subsets $\\*Y \\subset \\*S$ and $\\*y \\in \\chi_{\\*Y}$\nform a basis for the product topology on $\\chi_{\\*S}$. Such a cylinder set is a subset of $\\chi_{\\*S}$, and contains exactly those valuations agreeing with the value $\\pi_Y(\\*y)$ specified in $\\*y$ for $Y$, for every $Y \\in \\*Y$. 
Following standard statistical notation, this cylinder is abbreviated as simply $\\*y$.\n\\end{definition}\n\\begin{definition}[Probability]\n\\label{def:probability}\nWhere $\\vartheta$ is a topological space, write $\\mathcal{B}(\\vartheta)$ for its Borel $\\sigma$-algebra of measurable subsets.\nLet $\\mathfrak{P}(\\vartheta)$ be the set of probability measures on $\\mathcal{B}(\\vartheta)$.\nSpecifically, elements of $\\mathfrak{P}(\\vartheta)$ are functions $\\mu: \\mathcal{B}(\\vartheta) \\to [0, 1]$ assigning a probability to each measurable set such that $\\mu(\\vartheta) = 1$ and $\\mu\\big(\\bigcup_{i=1}^{\\infty}S_i\\big) = \\sum_{i=1}^{\\infty} \\mu(S_i)$ for each sequence $S_1, S_2, \\dots$ of pairwise disjoint sets from $\\mathcal{B}(\\vartheta)$.\nA map $f: \\vartheta_1 \\to \\vartheta_2$ is said to be \\emph{measurable} if $f^{-1}(S_2) \\in \\mathcal{B}(\\vartheta_1)$ for every $S_2 \\in \\mathcal{B}(\\vartheta_2)$.\n\\end{definition}\n\n\\begin{fact}[Lemma~1.9.4 \\cite{Bogachev2007}]\nA Borel probability measure is determined by its values on a basis.\n\\end{fact}\n\n\\subsection{SCMs, Observational Distributions}\\label{ss:scml1}\n\nLet $\\*V$ be a set of \\emph{endogenous variables}.\nWe assume for simplicity every variable $V \\in \\*V$ is dichotomous\nwith $\\chi_V = \\{0, 1\\}$, although the results here generalize to any larger countable range.\nInfluences among endogenous variables are the main phenomena our formalism aims to capture.\nA well-founded\\footnote{See Appendix~\\ref{app:causalmodels} for additional background on orders and relations.} \\emph{direct influence} relation $\\rightarrow$ on $\\*V$ encapsulates the notion of one endogenous variable possibly influencing another. For each $V \\in \\*V$, we call $\\{V' \\in \\*V : V' \\rightarrow V\\} = \\mathbf{Pa}(V)$ the \\emph{parents} of $V$.\nWe assume every set $\\mathbf{Pa}(V)$ is finite; this condition is called \\emph{local finiteness}. 
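For instance, a discrete-time process over $\\*V = \\{V_n\\}_{n \\geq 0}$ with $\\mathbf{Pa}(V_0) = \\varnothing$ and $\\mathbf{Pa}(V_{n+1}) = \\{V_n\\}$ has an influence relation that is both well-founded and locally finite, whereas Appendix~\\ref{app:causalmodels} gives a well-founded relation that fails local finiteness. 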
These two assumptions (well-foundedness and local finiteness) generalize the common \\emph{recursiveness} assumption to the infinitary setting, and have an alternative characterization in terms of ``temporal'' orderings:\n\\begin{fact}\\label{fact:omegalike}\nSay that a total order $\\prec$ on $\\*V$ is \\emph{$\\omega$-like} if every node has finitely many predecessors: for each $V \\in \\*V$, the set $\\{V' : V' \\prec V\\}$ is finite.\nThen the influence relation $\\rightarrow$ is extendible to an $\\omega$-like order iff $\\rightarrow$ is well-founded and locally finite.\n\\end{fact}\nIn addition to endogenous variables, causal models have \\emph{exogenous variables} $\\*U$.\nEach endogenous $V$ depends on a subset $\\*U(V)\\subset \\*U$ of ``exogenous parents'' and\nuncertainty enters via exogenous noise, that is, a distribution from $\\mathfrak{P}(\\chi_{\\*U})$.\nA \\emph{structural function} (or \\emph{mechanism}) for $V \\in \\*V$ is a measurable $f_{V} : \\chi_{\\mathbf{Pa}(V)} \\times \\chi_{\\*U(V)} \\to \\chi_{V}$\nmapping parental endogenous and exogenous valuations to values.\n\\begin{definition}\\label{def:scm:lit}\nA \\emph{structural causal model} is a tuple $\\mathcal{M} = \\langle \\*U, \\*V, \\{f_V\\}_{V \\in \\*V}, P \\rangle$ where $\\*U$ is a collection of exogenous variables, $\\*V$ is a collection of endogenous variables, $f_V$ is a structural function for each $V \\in \\*V$, and $P \\in \\mathfrak{P}(\\chi_{\\*U})$ is a probability measure on (the Borel $\\sigma$-algebra of) $\\chi_{\\*U}$.\n\\end{definition}\n\nAs is well known, recursiveness implies that each $\\*u \\in \\chi_{\\*U}$ induces a unique $\\*v \\in \\chi_{\\*V}$ that solves the simultaneous system of structural equations $\\{V = f_V\\}_V$:\n\\begin{proposition}\nAny SCM ${\\mathcal{M}}$ with well-founded, locally finite parent relation $\\rightarrow$ induces a unique measurable $m^{{\\mathcal{M}}} : \\chi_{\\*U} \\to \\chi_{\\*V}$ such that $f_V\\big( \\pi_{\\mathbf{Pa}(V)}(m^{{\\mathcal{M}}}(\\*u)), \\pi_{\\*U(V)}(\\*u) \\big) = \\pi_{V}\\big(m^{{\\mathcal{M}}}(\\*u)\\big)$ for all $\\*u \\in \\chi_{\\*U}$ and $V \\in \\*V$.\n\\end{proposition}\nMeasurability then entails that the exogenous noise $P$ induces a distribution on joint valuations of $\\*V$, called the \\emph{observational distribution},\nwhich characterizes passive observations of the system.\n\\begin{definition}\nThe observational distribution $p^{{\\mathcal{M}}} \\in \\mathfrak{P}(\\chi_{\\*V})$ is defined on open sets by $p^{{\\mathcal{M}}}(\\*y) = P\\big((m^{{\\mathcal{M}}})^{-1}(\\*y)\\big)$. Here recall that $\\*y$ represents a cylinder subset (Definition~\\ref{def:basictopology}) of $\\chi_{\\*V}$.\n\\end{definition}\n\n\n\n\n\n\n\\subsection{Interventions}\\label{ss:scml2}\nWhat makes SCMs distinctively causal is the way they accommodate statements about possible manipulations of a causal setup capturing, e.g., observations resulting from a controlled experimental trial. This is formalized in the following definition.\n\\begin{definition}\nAn \\emph{intervention} is a choice of a finite subset of variables $\\*W \\subset \\*V$ and $\\*w \\in \\chi_{\\*W}$. 
This intervention is written $\\*W \\coloneqq \\*w$, and we let $A$ be the set of all interventions.\nUnder this intervention, each $W \\in \\*W$ is held fixed to its value $\\pi_{W}(\\*w) \\in \\chi_W$ in $\\*w$ while the mechanism for any $V \\in \\*V \\setminus \\*W$ is left unchanged.\nSpecifically, where ${\\mathcal{M}}$ is as in Definition \\ref{def:scm:lit}, the manipulated model for $\\*W \\coloneqq \\*w$ is the model ${\\mathcal{M}}_{\\*W \\coloneqq \\*w} = \\langle \\*U, \\*V, \\{f^{\\*W \\coloneqq \\*w}_V\\}_{V \\in \\*V}, P\\rangle$ where\n\\begin{align*}\nf^{\\*W \\coloneqq \\*w}_V = \\begin{cases}\nf_V, & V \\notin \\*W\\\\\n\\text{constant func. mapping to } \\pi_V(\\*w), & V \\in \\*W.\n\\end{cases}\n\\end{align*}\n\\end{definition}\nThe \\emph{interventional} or \\emph{experimental distribution} $p^{\\mathcal{M}_{\\*W \\coloneqq \\*w}} \\in \\mathfrak{P}(\\chi_{\\*V})$ is just the observational distribution for the manipulated model ${\\mathcal{M}}_{\\*W \\coloneqq \\*w}$, and it encodes the probabilities for an experiment in which the variables $\\*W$ are fixed to the values $\\*w$.\n\\begin{remark}\nEmpty interventions $\\varnothing \\coloneqq ()$ are just passive observations, i.e., $p^{{\\mathcal{M}}_{\\varnothing \\coloneqq ()}} = p^{{\\mathcal{M}}}$.\n\\end{remark}\n\n\n\\subsection{Counterfactuals}\\label{ss:scml3}\nBy permitting multiple manipulated settings to share exogenous noise, we can consider not only the distribution arising from a single manipulation but also joint distributions over multiple manipulations. These are often called \\emph{counterfactuals}. The set $\\mathfrak{P}(\\chi_{A \\times \\*V})$ encompasses the combined joint distributions over $\\*V$ for any combination of interventions from $A$. A basis for the space $\\chi_{A \\times \\*V}$ is given by the cylinder sets of the following form, for some sequence $(\\*X \\coloneqq \\*x, \\*Y), \\dots, (\\*W \\coloneqq \\*w, \\*Z)$ of pairs, where $ \\*Y, \\dots, \\*Z \\subset \\*V$ are finite, and $\\*X \\coloneqq \\*x, \\dots, \\*W \\coloneqq \\*w \\in A$ are interventions:\n\\begin{align*}\n \\pi^{-1}_{\\{\\*X \\coloneqq \\*x\\}\\times \\*Y}(\\{\\*y\\}) \\cap \\dots \\cap \\pi^{-1}_{\\{\\*W \\coloneqq \\*w\\}\\times \\*Z}(\\{\\*z\\}).\n\\end{align*}\nWe will abbreviate this open set as ${\\*y}_{\\*x}, \\dots, {\\*z}_{\\*w}$, writing, e.g., simply $\\*x$ for the intervention $\\*X \\coloneqq \\*x$.\n\\begin{definition}\nGiven ${\\mathcal{M}}$, define a counterfactual distribution $p_{\\text{cf}}^{{\\mathcal{M}}} \\in \\mathfrak{P}(\\chi_{A \\times \\*V})$ on a basis as follows:\n\\begin{equation*}\n p_{\\text{cf}}^{{\\mathcal{M}}}( \\*y_{\\*x}, \\dots, {\\*z}_{\\*w} ) = P\\big((m^{{\\mathcal{M}}_{\\*X \\coloneqq \\*x}})^{-1}(\\*y) \\cap \\dots \\cap (m^{{\\mathcal{M}}_{\\*W \\coloneqq \\*w}})^{-1}(\\*z)\\big).\n\\end{equation*}\nHere, the letters $\\*y, \\dots, \\*z$ on the right-hand side abbreviate the respective cylinder sets (Definition~\\ref{def:basictopology}) $\\pi^{-1}_{\\*Y}(\\{\\*y\\}), \\dots, \\pi^{-1}_{\\*Z}(\\{\\*z\\})$.\n\\end{definition}\n\\begin{remark}\nMarginalizing $p^{{\\mathcal{M}}}_{\\text{cf}}$ to any single intervention $\\*W \\coloneqq \\*w$ yields $p^{{\\mathcal{M}}_{\\*W \\coloneqq \\*w}}$. 
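To see this, note that for any cylinder $\\*y$ the definitions give $p^{{\\mathcal{M}}}_{\\text{cf}}(\\*y_{\\*w}) = P\\big((m^{{\\mathcal{M}}_{\\*W \\coloneqq \\*w}})^{-1}(\\*y)\\big) = p^{{\\mathcal{M}}_{\\*W \\coloneqq \\*w}}(\\*y)$, and a Borel probability measure is determined by its values on a basis. 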
If $\\chi_{\\*U}$ is finite, we obtain a familiar \\cite{Galles1998} sum formula\n$p_{\\text{cf}}^{{\\mathcal{M}}}(\\*y_{\\*x}, \\dots, {\\*z}_{\\*w}) = \\sum_{\\{\\*u \\mid m^{{\\mathcal{M}}_{\\*X \\coloneqq \\*x}}(\\*u) \\in \\*y, \\dots, m^{{\\mathcal{M}}_{\\*W \\coloneqq \\*w}}(\\*u) \\in \\*z\\}} P(\\*u)$.\n\\end{remark}\n\n\\begin{example} As a very simple example (drawn from \\cite{Pearl2009,BCII2020}), just to illustrate the previous definitions and notation, consider a scenario with two binary exogenous variables $\\mathbf{U} = \\{U_1,U_2\\}$ and two binary endogenous variables $\\mathbf{V} = \\{X,Y\\}$. Let $U_1,U_2$ both be uniformly distributed, and define $f_X:\\chi_{U_1} \\rightarrow \\chi_X$ to be the identity, and $f_Y:\\chi_X \\times \\chi_{U_2} \\rightarrow \\chi_Y$ by $f_Y(x,u) = ux + (1-u)(1-x)$. This fully defines an SCM ${\\mathcal{M}}$ with influence $X \\rightarrow Y$, and produces an observational distribution $p^{\\mathcal{M}}$ such that $p^{\\mathcal{M}}(x,y) = \\nicefrac{1}{4}$ for all four settings $X=x,Y=y$. \n\nThe space $A$ of interventions in this example includes the empty intervention and all combinations of $X:=x$ and $Y:=y$, with $x,y \\in \\{0,1\\}$. Notably, all interventional distributions here collapse to observational distributions, e.g., $p^{{\\mathcal{M}}_{X:= x}}(Y) = p^{\\mathcal{M}}(Y)$, for both values of $x$. Thus, ``experimental'' manipulations of this system reveal little interesting causal structure. The counterfactual distribution $p^{\\mathcal{M}}_{\\mathrm{cf}}$, however, does not trivialize. For instance, $p^{\\mathcal{M}}_{\\mathrm{cf}}((X:=1, Y=1),(X:=0,Y=0)) = \\nicefrac{1}{2}$. This term is known as the \\emph{probability of necessity and sufficiency} \\cite{Pearl1999}, which we can abbreviate by $p^{\\mathcal{M}}_{\\mathrm{cf}}(y_x,y'_{x'})$. Note that $p^{\\mathcal{M}}_{\\mathrm{cf}}(y_x,y'_{x'}) \\neq p^{\\mathcal{M}}_{\\mathrm{cf}}(y_x)p^{\\mathcal{M}}_{\\mathrm{cf}}(y'_{x'}) = \\nicefrac{1}{4}$. Similarly, $p^{\\mathcal{M}}_{\\mathrm{cf}}(y'_x,y_{x'}) = \\nicefrac{1}{2}$.\n\n\\end{example}\n\n\n\n\n\\subsection{SCM classes}\nWe now define several subclasses of SCMs that we will use throughout the paper.\nNotably, we do not require the endogenous variable set $\\*V$ to be finite. It is infinite in many applications, for instance, in time series models, or generative models defined by probabilistic programs (see, e.g., \\citep{II2019,Tavares}). Because the proofs call for slightly different methods, we deal with the infinite and finite cases separately. We make one additional assumption in the infinite case.\n\\begin{definition}\n$\\mu \\in \\mathfrak{P}(\\vartheta)$ is \\emph{atomless} if $\\mu(\\{t\\}) = 0$ for each $t \\in \\vartheta$;\n ${\\mathcal{M}}$ is atomless if $p^{{\\mathcal{M}}}_{\\text{cf}}$ is atomless.\n\\end{definition}\nIntuitively, an atomless distribution is one in which weight is always ``smeared'' out continuously and there are no point masses; infinitely many fair coin flips, for example, generate an atomless distribution as the probability of obtaining any given infinite sequence is zero.\n\\begin{definition}\nFor the remainder of the paper, fix a countable endogenous variable set $\\*V$. 
Define the following classes of SCMs:\n\\begin{align*}\n \\mathfrak{M}_\\prec &= \\text{SCMs over } \\*V \\text{ whose influence relation is extendible to the } \\omega\\text{-like order } \\prec;\\\\\n \\mathfrak{M}_X &= \\text{SCMs over } \\*V \\text{ in which the variable } X \\text{ has no parents: } \\mathbf{Pa}(X) = \\varnothing;\\\\\n \\mathfrak{M} &= \\text{all SCMs over } \\*V = \\bigcup_{\\prec} \\mathfrak{M}_\\prec = \\bigcup_X \\mathfrak{M}_X.\n\\end{align*}\nIf $\\*V$ is infinite then all SCMs in the classes above are assumed to be atomless.\n\\end{definition}\n\n\\section{The Causal Hierarchy} \\label{sec:causalhierarchy}\n\nImplicit in \\S{}\\ref{scms}, and indeed in much of the literature on causal inference, is a hierarchy of causal expressivity. Following the metaphor offered in \\cite{pearl2018book}, it is natural to characterize three levels of the hierarchy as the \\emph{observational}, \\emph{interventional} (experimental), and \\emph{counterfactual} (explanatory). Drawing on recent work \\citep{BCII2020,ibelingicard2020} we make this characterization explicit. The levels will be defined in descending order of causal expressivity (the reverse of \\S\\ref{scms}). Fig.~\\ref{fig:hierarchy}(a) summarizes our definitions.\n\nHigher levels determine lower levels---counterfactuals determine interventionals, and the observational is just an (empty) interventional.\nThus movement ``downward'' in the causal hierarchy corresponds to a kind of projection.\nFor indexed $\\{S_\\beta\\}_{\\beta \\in B}$\nand $B' \\subset B$\nlet $\\varsigma_{B'} : \\mathfrak{P}(\\bigtimes_{\\beta \\in B} S_\\beta) \\to \\mathfrak{P}(\\bigtimes_{\\beta \\in B'} S_\\beta)$\nbe the \\emph{marginalization} map taking a joint distribution to its marginal on $B'$.\n\\begin{definition}\nDefine three composable \\emph{causal projections} $\\{\\varpi_i\\}_{1 \\le i \\le 3}$\nwith signatures and definitions\n\\begin{gather*}\n\\varpi_3 : \\mathfrak{M} \\to \\mathfrak{P}(\\chi_{A \\times \\*V}), \\quad \\varpi_2: \\mathfrak{P}(\\chi_{A \\times \\*V}) \\to \\bigtimes_{\\alpha \\in A} \\mathfrak{P}(\\chi_{\\*V}), \\quad \\varpi_1: \\bigtimes_{\\alpha \\in A} \\mathfrak{P}(\\chi_{\\*V}) \\to \\mathfrak{P}(\\chi_{\\*V});\\\\\n \\varpi_3: {\\mathcal{M}} \\mapsto p^{{\\mathcal{M}}}_{\\mathrm{cf}}, \\quad \\varpi_2: \\mu_3 \\mapsto \\big(\\varsigma_{\\{\\alpha\\} \\times \\*V}(\\mu_3)\\big)_{\\alpha \\in A}, \\quad \\varpi_1: (\\mu_\\alpha)_{\\alpha \\in A} \\mapsto \\mu_{\\varnothing \\coloneqq ()} = \\pi_{\\varnothing \\coloneqq ()}\\big((\\mu_\\alpha)_\\alpha\\big).\n\\end{gather*}\nThe \\emph{causal hierarchy} consists of three sets $\\{\\mathfrak{S}_i\\}_{1 \\le i \\le 3}$ defined as images or projections of $\\mathfrak{M}$:\n\\begin{equation*}\n \\mathfrak{S}_3 = \\varpi_3(\\mathfrak{M}), \\quad \\mathfrak{S}_2 = \\varpi_2(\\mathfrak{S}_3), \\quad \\mathfrak{S}_1 = \\varpi_1(\\mathfrak{S}_2).\n\\end{equation*}\n\\end{definition}\nThese are the three \\emph{Levels} of the hierarchy. 
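In the example of \\S{}\\ref{ss:scml3}, for instance, $\\varpi_3$ sends ${\\mathcal{M}}$ to its counterfactual distribution $p^{{\\mathcal{M}}}_{\\mathrm{cf}}$, $\\varpi_2$ recovers from this the family of all interventional distributions by marginalization, and $\\varpi_1$ in turn returns the observational distribution assigning $\\nicefrac{1}{4}$ to each of the four settings of $X, Y$. 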
The definitions cohere with those of \\S{}\\ref{scms} (and, e.g., \\cite{Pearl2009,BCII2020}):\n\\begin{fact}\nLet ${\\mathcal{M}} \\in \\mathfrak{M}$.\nThen\n$\\mu_3 = \\varpi_3({\\mathcal{M}}) \\in \\mathfrak{S}_3$ trivially coincides with its counterfactual distribution as defined in \\S{}\\ref{ss:scml3}, while\n$(\\mu_\\alpha)_\\alpha = \\varpi_2(\\mu_3) \\in \\mathfrak{S}_2$ coincides with the indexed family of all its interventional distributions (\\S{}\\ref{ss:scml2}), i.e., $\\pi_{\\*W \\coloneqq \\*w} \\big((\\mu_\\alpha)_\\alpha\\big) = p^{{\\mathcal{M}}_{\\*W \\coloneqq \\*w}}$ for each $\\*W \\coloneqq \\*w \\in A$.\nFinally $\\mu = \\varpi_1\\big((\\mu_\\alpha)_\\alpha\\big) \\in \\mathfrak{S}_1$ coincides with its observational distribution (\\S{}\\ref{ss:scml1}).\n\\end{fact}\nThus, e.g., $\\mathfrak{S}_3$ is the set of counterfactual distributions that are consistent with at least some SCM from $\\mathfrak{M}$. It is a fact that $\\mathfrak{S}_3 \\subsetneq \\mathfrak{P}(\\chi_{A \\times \\*V})$ and similarly not every interventional family belongs to $\\mathfrak{S}_2$; see\nAppendix \\ref{app:causalhierarchy} for explicit characterizations.\nAt the observational level, this is simple:\n\\begin{fact}\n$\\mathfrak{S}_1 = \\mathfrak{P}(\\chi_{\\*V})$ in the finite case. In the infinite case, $\\mathfrak{S}_1 = \\{ \\mu \\in \\mathfrak{P}(\\chi_{\\*V}): \\mu \\text{ is atomless}\\}$. \\label{prop:alternative}\n\\end{fact}\nWe will also use the subsets $\\{\\mathfrak{S}^\\prec_i\\}_{i}$ and $\\{\\mathfrak{S}^X_i\\}_{i}$, which are defined analogously but via projection from $\\mathfrak{M}_\\prec$ and $\\mathfrak{M}_X$ respectively.\n\n\n\n\n\n\\subsection{Problems of Causal Inference} \\label{subsection:probs}\nAs elucidated in \\cite{pearl2018book,BCII2020}, the causal hierarchy helps characterize many standard problems of causal inference, in as far as these problems typically involve ascending levels of the hierarchy. Some examples include: \n\\begin{enumerate}\n \\item Classical identifiability: given observational data about some variables in $\\*V$, estimate a \\emph{causal effect} of setting variables $\\*X$ to values $\\*x$ \\citep{Pearl1995,Spirtes2001}. In the notation here, given information about $p^{\\mathcal{M}}(\\*V)$, can we determine $p^{\\mathcal{M}_{\\*X \\coloneqq \\*x}}(\\*Y)$?\n \\item General identifiability: given a mix of observational data and limited experimental data---that is, information about $p^{\\mathcal{M}}(\\*V)$ as well as some experimental distributions of the form $p^{\\mathcal{M}_{\\*W \\coloneqq \\*w}}(\\*V)$---determine $p^{\\mathcal{M}_{\\*X \\coloneqq \\*x}}(\\*Y)$ \\citep{tian-2002,lee2019general}.\n \\item Structure learning: given observational data, and perhaps experimental data, infer properties of the underlying causal influence relation $\\rightarrow$ \\citep{Spirtes2001,Peters}.\n \\item Counterfactual estimation: given a combination of observational and experimental data, infer a counterfactual quantity, such as probability of necessity \\citep{robins-greenland}, or probability of necessity and sufficiency \\citep{Pearl1999,Tian2000} (see also \\S\\ref{section:pns} below). \n \\item Global identifiability: given observational data drawn from $p^{\\mathcal{M}}(\\*V)$ infer the full counterfactual distribution $p_{\\text{cf}}^{{\\mathcal{M}}}(A \\times \\*V)$ \\citep{JMLR:v7:shimizu06a,drton2011}.\n\\end{enumerate} This is not an exhaustive list, and these problems are not all independent of one another. 
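(For example, a solution to problem 5 would immediately yield solutions to problems 1 and 4, since causal effects and the probabilities of causation are all determined by the full counterfactual distribution $p_{\\text{cf}}^{{\\mathcal{M}}}$.) 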
They are also all unsolvable in general. Problems 1, 2, and 3 involve ascending to Level 2 given information at Level 1 (and perhaps partial information at Level 2); problems 4 and 5 ask us to ascend to Level 3 given only Level 1 (and perhaps also Level 2) information. The upshot of the causal hierarchy theorem from \\cite{BCII2020} is that these steps are impossible without assumptions, formalizing the common wisdom, ``no causes in, no causes out'' \\cite{Cartwright}. \nTo understand the statement of the causal hierarchy theorem---and our topological version of it---we explain what it means for the hierarchy to collapse. \n\n\\subsection{Collapse of the Hierarchy}\\label{ss:collapse}\n\nIn the present setting a \\emph{collapse} of the hierarchy can be understood in terms of injectivity of the functions $\\varpi_i$.\n\nFor $i = 1, 2$ let $\\mathfrak{C}_i \\subset \\mathfrak{S}_i$ be the injective fibers of $\\varpi_i$, i.e.,\n$\\mathfrak{C}_i = \\{\\mu_i \\in \\mathfrak{S}_i : \\mu_{i+1} = \\mu_{i+1}' \\text{ whenever } \\varpi_i(\\mu_{i+1}) = \\varpi_i(\\mu'_{i+1}) = \\mu_i \\}$.\nEvery element $\\mu \\in \\mathfrak{C}_i$ is a witness to (global) collapse of the hierarchy: knowing $\\mu$ would be sufficient to determine the Level $i+1$ facts completely.\n\n\n\\begin{figure} \\centering \n\\subfigure [Causal Hierarchy] {\n \\begin{tikzpicture}[framed]\n \\node (m) at (0,0) {$\\mathfrak{M}$}; \n \\node (ss3) at (1.5,0) {$\\mathfrak{S}_3$};\n \\node (ss2) at (3,0) {$\\mathfrak{S}_2$};\n \\node (ss1) at (4.5,0) {$\\mathfrak{S}_1$};\n \n \\path (m) edge[->] (ss3);\n \\node (l1) at (.7,.15) {\\small{}$\\varpi_3$};\n \\path (ss3) edge[->] (ss2);\n \\node (l2) at (2.25,.15) {\\small{}$\\varpi_2$};\n \\path (ss2) edge[->] (ss1);\n \\node (l3) at (3.75,.15) {\\small{}$\\varpi_1$};\n \n \\node (b1) at (-.5,0) {};\n \\node (b2) at (5.5,0) {}; \n \n \\node (ns1) at (1.4,-1.25) {$\\mathfrak{S}^{X \\to Y}_{2}$};\n \\node (b1) at (2.8,-1.25) {$\\dots$};\n \\node (ns2) at (4.4,-1.25) {$\\mathfrak{S}^{X' \\to Y'}_{2}$};\n \n \\path (ss2) edge[->] (ns1);\n \\node (l3) at (1.6,-.55) {\\small{}$\\varpi^{X \\to Y}_{2}$};\n \\path (ss2) edge[->] (ns2);\n \\node (l4) at (4.5,-.55) {\\small{}$\\varpi^{X' \\to Y'}_{2}$};\n \\end{tikzpicture}\n }\n\\subfigure [Collapse Set $\\mathfrak{C}_2$] {\n \\begin{tikzpicture}\n\\draw (.7,1.2) ellipse (2.5cm and .5cm);\n\\draw [gray!60,fill=gray!20] (.15,0) ellipse (1.1cm and .25cm);\n\\draw (.7,0) ellipse (2cm and .4cm);\n\\node (s0) at (1.5,0) {$\\textcolor{blue}{\\bullet}$}; \n\\node (t0) at (1.2,1.2) {$\\textcolor{blue}{\\bullet}$};\n\\node (t1) at (1.55,1.2) {$\\textcolor{blue}{\\bullet}$};\n\\node (t2) at (1.9,1.2) {$\\textcolor{blue}{\\bullet}$};\n\\path (t0) edge[->] (s0);\n\\path (t1) edge[->] (s0);\n\\path (t2) edge[->] (s0);\n\n\\node (s5) at (2.15,0) {$\\textcolor{violet}{\\bullet}$}; \n\\node (t6) at (2.3,1.2) {$\\textcolor{violet}{\\bullet}$};\n\\node (t7) at (2.75,1.2) {$\\textcolor{violet}{\\bullet}$};\n\\path (t6) edge[->] (s5);\n\\path (t7) edge[->] (s5);\n\n\n\\node (s1) at (.15,0) {$\\textcolor{orange}{\\bullet}$};\n\\node (s2) at (-.3,0) {$\\textcolor{green}{\\bullet}$};\n\\node (t3) at (.15,1.2) {$\\textcolor{orange}{\\bullet}$};\n\\node (t4) at (-.3,1.2) {$\\textcolor{green}{\\bullet}$};\n\\node (t5) at (-1,1.2) {$\\textcolor{red}{\\bullet}$};\n\\path (t3) edge[->] (s1);\n\\path (t4) edge[->] (s2);\n\\path (t5) edge[->,color=gray] (s2);\n\\node (x) at (-.675,.6) {$\\textcolor{red}{\\mathsf{X}}$};\n\\node (C2) at (.65,0) {$\\mathfrak{C}_2$};\n\n\\node 
(ss3) at (3.6,1.2) {$\\mathfrak{S}_3$};\n\\node (ss2) at (3.6,0) {$\\mathfrak{S}_2$};\n\n\\path (ss3) edge[->] (ss2);\n\n\\node (f) at (3.3,.6) {$\\varpi_2$};\n\n\\end{tikzpicture} \n}\n \\caption{(a) $\\mathfrak{S}_3$ can be seen as a coarsening of $\\mathfrak{M}$, abstracting from irrelevant ``intensional'' details. $\\mathfrak{S}_2$ is obtained from $\\mathfrak{S}_3$ by marginalization (also a coarsening), while $\\mathfrak{S}_1$ is a projection of $\\mathfrak{S}_2$ via the ``empty'' intervention. Each map $\\varpi_i$, $i=1,2$, is continuous in the respective weak topology (Prop. \\ref{prop:causalprojectioncontinuous}). The projections $\\varpi^{X \\to Y}_{2}$ from $\\mathfrak{S}_2$ to the 2VE-spaces are likewise continuous and also open (Prop. \\ref{prop:causalprojectioncontinuous}). \\\\\n (b) The shaded region, $\\mathfrak{C}_2 \\subset \\mathfrak{S}_2$, is the collapse set in which Level 2 facts determine all Level 3 facts: those points in $\\mathfrak{S}_2$ whose $\\varpi_2$-preimage in $\\mathfrak{S}_3$ is a singleton set. The main result of this paper is that $\\mathfrak{C}_2$ is \\emph{meager} in weak topology on $\\mathfrak{S}_2$ (Thm. \\ref{thm:hierarchyFormal}). This means $\\mathfrak{C}_2$ contains no open subset, which by Thm. \\ref{thm:l2learning} implies no part of $\\mathfrak{C}_2$ is statistically verifiable, even with infinitely many ideal experiments.}\\label{fig:hierarchy}\n\\end{figure}\n\n\\begin{comment} \n\\begin{figure}\\begin{center}\n \\begin{tikzpicture}\n\\draw (.7,1.2) ellipse (2.5cm and .5cm);\n\\draw [gray!60,fill=gray!20] (.15,0) ellipse (1.1cm and .25cm);\n\\draw (.7,0) ellipse (2cm and .4cm);\n\\node (s0) at (1.5,0) {$\\textcolor{blue}{\\bullet}$}; \n\\node (t0) at (1.2,1.2) {$\\textcolor{blue}{\\bullet}$};\n\\node (t1) at (1.55,1.2) {$\\textcolor{blue}{\\bullet}$};\n\\node (t2) at (1.9,1.2) {$\\textcolor{blue}{\\bullet}$};\n\\path (t0) edge[->] (s0);\n\\path (t1) edge[->] (s0);\n\\path (t2) edge[->] (s0);\n\n\\node (s5) at (2.15,0) {$\\textcolor{violet}{\\bullet}$}; \n\\node (t6) at (2.3,1.2) {$\\textcolor{violet}{\\bullet}$};\n\\node (t7) at (2.75,1.2) {$\\textcolor{violet}{\\bullet}$};\n\\path (t6) edge[->] (s5);\n\\path (t7) edge[->] (s5);\n\n\n\\node (s1) at (.15,0) {$\\textcolor{orange}{\\bullet}$};\n\\node (s2) at (-.3,0) {$\\textcolor{green}{\\bullet}$};\n\\node (t3) at (.15,1.2) {$\\textcolor{orange}{\\bullet}$};\n\\node (t4) at (-.3,1.2) {$\\textcolor{green}{\\bullet}$};\n\\node (t5) at (-1,1.2) {$\\textcolor{red}{\\bullet}$};\n\\path (t3) edge[->] (s1);\n\\path (t4) edge[->] (s2);\n\\path (t5) edge[->,color=gray] (s2);\n\\node (x) at (-.65,.6) {$\\textcolor{red}{\\times}$};\n\\node (C2) at (.65,0) {$\\mathfrak{C}_2$};\n\n\\node (ss3) at (3.5,1.2) {$\\mathfrak{S}_3$};\n\\node (ss2) at (3.5,0) {$\\mathfrak{S}_2$};\n\n\\path (ss3) edge[->] (ss2);\n\n\\node (f) at (3.2,.6) {$\\varpi_2$};\n\n\\end{tikzpicture} \\end{center}\n \\caption{Collapse of the hierarchy}\n \\label{fig:my_label}\n\\end{figure}\n\\end{comment}\n\n\nA first observation is that $\\varpi_1$ is \\emph{never} injective. In other words, the distribution $p^{\\mathcal{M}}(\\*V)$ never determines all the interventional distributions $p^{\\mathcal{M}_{\\*X \\coloneqq \\*x}}(\\*Y)$. This is essentially a way of stating that correlation never implies causation absent assumptions. (See also \\cite[Thm. 1]{BCII2020}.) \n\\begin{proposition}\n$\\mathfrak{C}_1 = \\varnothing$. That is, Level 2 \\emph{never} collapses to Level 1 without assumptions. 
\\label{prop:collapse1}\n\\end{proposition}\n\nTo overcome this formidable inferential barrier, researchers often assume we are not working in the ``full'' space $\\mathfrak{M}$ of all causal models, but rather some proper subset embodying a range of causal assumptions. This may effectively eliminate counterexamples to collapse (cf. Fig. \\ref{fig:hierarchy}(b)). For problems of type 1 or 2 (from the list above in \\S\\ref{subsection:probs}) it is common to assume we are only dealing with models whose graph (direct influence relation) $\\rightarrow$ satisfies a fixed set of properties. For problems of type 3 it is common to assume that $p^\\mathcal{M}$ and $\\rightarrow$ relate in some way (for instance, through an assumption like \\emph{faithfulness} or \\emph{minimality} \\cite{Spirtes2001}). All of these problems become solvable with sufficiently strong assumptions about the form of the functions $\\{f_V\\}_V$ or the probability measure $P$.\n\nIn some cases, the relevant causal assumptions are justified by appeal to background or expert knowledge. In other cases, however, an assumption will be justified by the fact that it rules out only a ``small'' or ``negligible'' or ``measure zero'' part of the full set $\\mathfrak{M}$ of possibilities. As emphasized by a number of authors \\cite{Freedman1997,Uhler,pmlr-v117-lin20a}, not all ``small'' subsets are the same, and it seems reasonable to demand further justification for eliminating one over another. We believe that the framework presented here can contribute to this positive project, but our immediate interest is in solidifying and clarifying limitative results about what cannot be done.\n\n\n\nThe issue of collapse becomes especially delicate when we turn to $\\mathfrak{C}_2$. When do interventional distributions fully determine counterfactual distributions? In contrast to Prop. \\ref{prop:collapse1} we have:\n\\begin{proposition} $\\mathfrak{C}_2 \\neq \\varnothing$. That is, there exists an SCM in which Level 3 collapses to Level 2. \n\\end{proposition}\n\\begin{proof}[Proof sketch] As a very simple example in the finite case, any fully deterministic SCM will result in collapse. This is because, if $(\\mu_{\\alpha})_{\\alpha \\in A}$ are all $\\{0,1\\}$-valued, then the measure $\\mu_3 \\in \\mathfrak{P}\\big(\\bigtimes_{\\alpha \\in A} \\chi_{\\*V}\\big)$ that produces the marginals $\\mu_{\\alpha}$ is completely determined: each $\\mu_{\\alpha}$ specifies an element of $\\chi_{\\*V}$, so $\\mu_3$ must assign unit probability to the tuple that matches each $\\mu_{\\alpha}$ at the $\\alpha$ projection. \nIn the infinite case, any example must be non-deterministic by atomlessness, but collapse is still possible; see Example \\ref{example:collapse} in Appendix \\ref{app:causalhierarchy}.\n\\end{proof}\n\n\\subsection{Probabilities of Causation} \\label{section:pns}\n\nA handful of counterfactual quantities over two given variables, collected below, have been particularly prominent in the literature (e.g., \\cite{Pearl1999}). Our main result will show that \\emph{any} of these six quantities (for any two fixed variables) is robust against collapse.\nBelow, fix two distinct variables $Y \\neq X \\in \\*V$ and distinct values $x \\neq x' \\in \\chi_X$, $y \\neq y' \\in \\chi_Y$.\n\\begin{definition}\\label{def:probcaus}\nThe \\emph{probabilities of causation} are the following quantities:\n\\begin{align*}\n P(y_x, y'_{x'}): & \\text{ probability of necessity and sufficiency}\\\\\n P(y'_x, y_{x'}): & \\text{ converse prob. 
of necessity and sufficiency}\\\\\n P(y'_{x'} \\mid x, y): & \\text{ prob. of necessity} \\qquad P(y_x \\mid x', y'): \\text{ prob. of sufficiency}\\\\\n P(y'_{x'} \\mid y): & \\text{ prob. of disablement} \\qquad P(y_x \\mid y'): \\text{ prob. of enablement}\n\\end{align*}\n\\end{definition}\nConsider, for example, the probability of necessity and sufficiency (PNS), which is the joint probability that $Y$ would take on value $y$ if $X$ is set by intervention to $x$, and $y'$ if $X$ is set to $x'$.\nPNS has been thoroughly studied \\citep{Pearl1999,Tian2000,avin:etal05}, in part due to its widespread relevance: from medical treatment to online advertising, we would like to assess which interventions are likely to be both \\emph{necessary} and \\emph{sufficient} for a given outcome.\nUsing the notation from \\S{}\\ref{ss:scml3}, PNS concerns the measure of sets\n$y_{x}, y'_{x'} = \\pi^{-1}_{(X \\coloneqq x, Y)}(\\{y\\}) \\cap \\pi^{-1}_{(X \\coloneqq x', Y)}(\\{y'\\})$.\n\n\nThe probabilities of causation are paradigmatically Level 3, and we will be interested in their manifestations at Level 2. In that direction we introduce a small part of $\\mathfrak{S}_2$, just enough to witness the behavior of $Y$ (and $X$) under the empty intervention and the two possible interventions on $X$:\n\\begin{definition}\nLet $A_X =\\{\\varnothing \\coloneqq (), X \\coloneqq 0, X \\coloneqq 1\\}$.\nDefine a small subspace $\\mathfrak{S}^{X \\to Y}_{2}\\subset \\bigtimes_{\\alpha \\in A_X} \\mathfrak{P}(\\chi_{\\{X,Y\\}})$ as the image of the map $\\varpi^{X\\to Y}_{2} = \\big(\\varsigma_{\\{X, Y\\}} \\times \\varsigma_{\\{X, Y\\}} \\times \\varsigma_{\\{X, Y\\}}\\big) \\circ \\pi_{A_X}$ (see Fig. \\ref{fig:hierarchy}(a)). Call $\\mathfrak{S}^{X \\to Y}_{2}$ a \\emph{two-variable effect} (2VE) \\emph{space}; fixing $X$, we have a 2VE-space for each $Y$.\n\\end{definition}\n\nIt is known in the literature that the probabilities of causation are not identifiable from the data $p(X,Y)$, $p(Y_{x})$, and $p(Y_{x'})$ (see, e.g., \\cite{avin:etal05} for PNS). As part of our proof of Theorem \\ref{thm:hierarchyFormal} below, we will strengthen this considerably to show them all to be \\emph{generically} unidentifiable, in a topological sense to be made precise.\n\n\n\n\n\n\n\n\\section{The Weak Topology} \\label{sec:weaktopology}\n\nWe now demonstrate how $\\mathfrak{S}_1,\\mathfrak{S}_2$ and $\\mathfrak{S}_3$ can be topologized. \nIn general, given a space $\\vartheta$ and the set $\\mathfrak{S} = \\mathfrak{P}(\\vartheta)$ of Borel probability measures on $\\vartheta$, a natural topology on $\\mathfrak{S}$ can be defined as follows:\n\\begin{definition}\nFor a sequence $(\\mu_n)_n$ of measures in $\\mathfrak{S}$,\nwrite $(\\mu_n)_n \\Rightarrow \\mu$ and say it \\emph{converges weakly} \\citep[p.~7]{Billingsley} to $\\mu$ if $\\int_{\\vartheta} f \\, \\mathrm{d}\\mu_n \\to \\int_{\\vartheta} f \\, \\mathrm{d}\\mu$ for all bounded, continuous $f : \\vartheta \\to \\mathbb{R}$.\nThen the \\emph{weak topology} $\\tau^{\\mathrm{w}}$ on $\\mathfrak{S}$ is that with the following closed sets: $E \\subset \\mathfrak{S}$ is closed in $\\tau^{\\mathrm{w}}$ iff for any weakly convergent sequence $(\\mu_n)_n \\Rightarrow \\mu$ in which every $\\mu_n \\in E$, the limit point $\\mu$ is in $E$.\n\\end{definition}\nThere are several alternative characterizations of $\\tau^{\\mathrm{w}}$, which hold under very general conditions. 
For instance, it coincides with the topology induced by the so called L\\'{e}vy-Prohorov metric \\citep{Billingsley}. \nThe most useful characterization for our purposes is that it \ncan be generated by subbasic open sets of the form \\begin{equation} \\{\\mu: \\mu(X)>r\\}\n\\label{subbasis-weak}\n\\end{equation} with $X$ ranging over basic clopens in $\\vartheta$ and $r$ over rationals (see, e.g., \\cite[Lemma A.5]{GeninKelly}).\n\nConceptually, the explication of $\\tau^{\\mathrm{w}}$ in terms of weak convergence strongly suggests a connection with statistical learning. We now make this connection precise, building on existing work \\cite{DemboPeres,Genin2018,GeninKelly}. \n\n\\subsection{Connection to Learning Theory}\nRoughly speaking, we will say a hypothesis $H \\subseteq \\mathfrak{S}$ is \\emph{statistically verifiable} if there is some error bound $\\epsilon$ and a sequence of statistical tests that converge on $H$ with error at most $\\epsilon$, when data are generated from $H$. More formally, a \\emph{test} is a function $\\lambda: \\vartheta^n \\rightarrow \\{\\mathsf{accept},\\mathsf{reject}\\}$, where $\\vartheta^n$ is the $n$-fold product of $\\vartheta$, viz. finite data streams from $\\vartheta$. The interest is in whether a ``null'' hypothesis can be rejected given data observed thus far. The \\emph{boundary} of a set $A\\subseteq \\vartheta$, written $\\mathsf{bd}(A)$, is the difference of its closure and its interior. Intuitively, a learner will not be able to decide whether to accept or reject on the boundary. Consequently it is assumed that $\\lambda$ is \\emph{feasible} in the sense that the boundary of its acceptance zone (in the product topology on $\\vartheta^n$) always has measure 0, i.e., $\\mu^n[\\mathsf{bd}(\\lambda^{-1}(\\mathsf{accept}))] = 0$ for every $\\mu \\in \\mathfrak{S}$, where $\\mu^n$ is the $n$-fold product measure of $\\mu$.\n\nSay a hypothesis $H \\subseteq \\mathfrak{S}$ is \\emph{verifiable} \\cite{Genin2018} if there is $\\epsilon>0$ and a sequence $(\\lambda_n)_{n\\in\\mathbb{N}}$ of feasible tests (of the complement of $H$ in $\\mathfrak{S}$, i.e., the ``null hypothesis'') such that \\begin{enumerate}\n \\item $\\mu^n[\\lambda_n^{-1}(\\mathsf{reject})] \\leq \\epsilon$ for all $n$, whenever $\\mu \\notin H$;\n \\item $\\underset{n\\rightarrow\\infty}{\\mbox{lim}}\\;\\mu^n[\\lambda_n^{-1}(\\mathsf{reject})] = 1$, whenever $\\mu \\in H$.\n\\end{enumerate} That is, to be verifiable we only require a sequence of tests that converges in probability to the true hypothesis in the limit of infinite data (requirement 2), while incurring (type 1) error only up to a given bound at finite stages (requirement 1). As an illustrative example, \\emph{conditional dependence} is verifiable \\cite{Genin2018}. This is a relatively lax notion of verifiability.\nFor instance, the hypothesis need not also be \\emph{refutable} (and thus ``decidable''). For our purposes this generality is a virtue: we want to show that certain hypotheses are not statistically verifiable by any method, even in this wide sense. 
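To illustrate the definition, the following minimal sketch (ours, purely for illustration; the function names are not drawn from any of the cited works) realizes a sequence of feasible tests witnessing verifiability of a subbasic open hypothesis $H = \\{\\mu : \\mu(\\*y) > r\\}$ of the form \\eqref{subbasis-weak}: the test at sample size $n$ rejects the null $\\mu(\\*y) \\leq r$ exactly when the empirical frequency of the cylinder event $\\*y$ exceeds $r$ by a Hoeffding margin calibrated to the error bound $\\epsilon$.\n\\begin{verbatim}\nimport numpy as np\n\ndef make_test(r, eps):\n    # Test of the null hypothesis mu(y) <= r. Rejecting only when the\n    # empirical frequency exceeds r by a Hoeffding margin keeps the\n    # type 1 error below eps at every sample size (requirement 1);\n    # if mu(y) > r, the margin eventually falls below mu(y) - r and\n    # the rejection probability tends to 1 (requirement 2).\n    def test(indicators):  # indicators: n i.i.d. draws of the event y\n        n = len(indicators)\n        margin = np.sqrt(np.log(1.0 / eps) / (2.0 * n))\n        return 'reject' if np.mean(indicators) > r + margin else 'accept'\n    return test\n\\end{verbatim}\nHoeffding's inequality gives requirement 1, the law of large numbers gives requirement 2, and each acceptance region is clopen (it depends on only finitely many coordinates of each sample point), so the tests are feasible.\n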
The fundamental link between verifiability and the weak topology is the following, due to \\cite{Genin2018,GeninKelly}:\n\\begin{theorem} \\label{thm:genin} A set $H \\subseteq \\mathfrak{S}$ is verifiable if and only if it is open in the weak topology.\n\\end{theorem}\n\n\\subsection{Topologizing Causal Models}\nWe now reinterpret $\\tau^{\\mathrm{w}}$ at each level of the causal hierarchy: \n\\begin{definition}\nThe \\emph{weak causal topology} $\\tau^{\\mathrm{w}}_i$, $1 \\le i \\le 3$, is the subspace topology on $\\mathfrak{S}_i$, induced by\n\\begin{align*}\n\\text{if } i=3: \\tau^{\\mathrm{w}} \\text{ on } \\mathfrak{P}(\\chi_{A \\times \\*V}); \\quad\n\\text{if } i=2: \\text{product of }\\tau^{\\mathrm{w}} \\text{ on } \\bigtimes_{\\alpha \\in A} \\mathfrak{P}(\\chi_{\\*V}); \\quad\n\\text{if } i=1: \\tau^{\\mathrm{w}} \\text{ on } \\mathfrak{P}(\\chi_{\\*V}).\n\\end{align*}\n\\end{definition}\n\\begin{proposition}\\label{prop:causalprojectioncontinuous}\nIn the weak causal topologies,\n$\\{\\varpi_i\\}_{i = 1, 2}$ are continuous and all projections\n$\\varpi^{X \\to Y}_{2}$ are continuous and open.\n\\end{proposition}\nA significant observation is that the learning theoretic interpretation, originally intended for $\\tau^{\\mathrm{w}}_1$, naturally extends to $\\tau^{\\mathrm{w}}_2$. While data streams at Level 1 amount to passive observations of $\\*V$, data streams at Level 2 can be seen as sequences of experimental results, i.e., observations of ``potential outcomes'' $\\*Y_{\\*x}$. To make verifiability as easy as possible we assume a learner can observe a sample from all conceivable experiments at each step. A learner is thus a function $\\lambda:\\mathcal{E}^n \\rightarrow \\{\\mathsf{accept},\\mathsf{reject}\\}$, where $\\mathcal{E}^n = ((\\chi_{\\*V})^n)_\\alpha$ is the set of potential experimental observations over $n$ trials (with $\\alpha$ indexing the experiments). Construing $\\mathcal{E}^n$ as a product space we can again speak of \\emph{feasibility} of $\\lambda$. \n\nRecall that elements of $\\mathfrak{S}_2$ are tuples $(\\mu_\\alpha)_{\\alpha \\in A}$ of measures. Say a hypothesis $H\\subseteq \\mathfrak{S}_2$ is \\emph{experimentally verifiable} if there is $\\epsilon>0$ and a sequence $(\\lambda_n)_{n \\in \\mathbb{N}}$ of feasible tests such that 1 and 2 above hold, replacing $\\mu^n[\\lambda_n^{-1}(\\mathsf{reject})]$ with $\\prod_{\\alpha} \\mu_\\alpha^n[(\\lambda_n^{-1}(\\mathsf{reject}))_\\alpha]$. That is, when experimental data are drawn from the interventional distributions $(\\mu_\\alpha)_{\\alpha \\in A} \\in H$, we require that the learner eventually converge on $H$ with bounded error at finite stages. We can then show (see Appendix \\ref{app:empirical}): \n\\begin{theorem} A set $H \\subseteq \\mathfrak{S}_2$ is experimentally verifiable if and only if it is open in $\\tau^{\\mathrm{w}}_2$. \\label{thm:l2learning}\n\\end{theorem}\nA similar result can be given for $(\\mathfrak{S}_3,\\tau^{\\mathrm{w}}_3)$, although it is less clear what the empirical content of this result would be.\nNote also that $\\tau^{\\mathrm{w}}_1,\\tau^{\\mathrm{w}}_2,\\tau^{\\mathrm{w}}_3$ give a sequence of increasingly fine topologies on the set of actual SCMs $\\mathfrak{M}$ by simply pulling back the projections. 
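(Explicitly, the open sets of the pullback of $\\tau^{\\mathrm{w}}_2$ to $\\mathfrak{M}$ are the preimages $(\\varpi_2 \\circ \\varpi_3)^{-1}(O)$ of sets $O$ open in $\\tau^{\\mathrm{w}}_2$, and analogously for the other two levels.) 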
The point is that $\\tau^{\\mathrm{w}}_2$ is the finest that has clear empirical significance, while $\\tau^{\\mathrm{w}}_3$ is the finest in terms of relevance to the causal hierarchy.\n\n\n\n\\section{Collapse is Meager} \\label{sec:main}\nRecall that a set $X \\subseteq \\vartheta$ is \\emph{nowhere dense} if every nonempty open set contains a nonempty open $Y$ with $X \\cap Y = \\varnothing$. A countable union of nowhere dense sets is said to be \\emph{meager} (or \\emph{of first category}). The complement of a meager set is \\emph{comeager}. Intuitively, a meager set is one that can be ``approximated'' by sets ``perforated with holes'' \\citep{Oxtoby}. Meagerness is notably preserved when taking the preimage under a map that is both continuous and open.\n\nAs discussed above, one intuition highlighted by the weak topology $\\tau^{\\mathrm{w}}$ is that open sets are the kinds of probabilistic propositions that could, in the limit of infinite data, be verified (Thms. \\ref{thm:genin}, \\ref{thm:l2learning}). Correlatively, meager sets in $\\tau^{\\mathrm{w}}$ are so negligible as to be unverifiable: as a meager set contains no non-empty open subsets (by the Baire Category Theorem \\cite{Oxtoby}), it is statistically unverifiable.\nWe will now show that the injective collapse set $\\mathfrak{C}_2$ from \\S{}\\ref{ss:collapse} is topologically meager.\n\nThe crux is to identify a ``good'' comeager 2VE-subspace where collapse \\emph{never} occurs (with separation witnessed by probabilities of causation).\nIn this subspace, the constraints circumscribing Level 3 have sufficient slack to make a tweak without thereby disturbing Level 2 (cf. Figure \\ref{fig:example:separation}).\nWe define the good set as the locus of a set of strict inequalities:\n\\begin{definition}\nA family $(\\mu_\\alpha)_{\\alpha \\in A_X} \\in \\mathfrak{S}^{X\\to Y}_{2}$ is \\emph{$Y$-good}\nif we have the following, abbreviating the members of $A_X$ as $(), x, x'$:\n\\begin{gather}\n0 < \\mu_x(y') - \\mu_{()}(x, y')\n < \\mu_{()}(x'),\\label{cns:ineq:1}\\\\\n0 < \\mu_{()}(x', y') < \\mu_{()}(x')\\label{cns:ineq:2}.\n\\end{gather}\n\\end{definition}\n\\begin{lemma}\n\\label{lem:goodcomeager}\nThe subspace of $Y$-good families is comeager in $\\mathfrak{S}^{X \\to Y}_{2}$.\n\\end{lemma}\n\\begin{proof}[Proof sketch]\nThe non-strict versions of \\eqref{cns:ineq:1}, \\eqref{cns:ineq:2} hold universally, so the complement of the good set is defined by equalities. This is closed and contains no nonempty open set, by the weak subbasis \\eqref{subbasis-weak}. 
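Since a closed set containing no nonempty open set is nowhere dense, its complement, the set of $Y$-good families, is comeager. 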
\\end{proof}\nFigure~\\ref{fig:example:separation} presents the construction in a small, two-variable case, and Lemma~\\ref{lem:separation} below is proven by generalizing it to arbitrary $\\*V$.\nGuaranteeing agreement on every interventional distribution in the general case is subtle (Appendix~\\ref{app:main}):\nit has been observed that enlarging $\\*V$ can enable additional inferences (e.g., \\cite{9363924}), though the next result reflects a dependence on further assumptions.\n\n\\begin{lemma}\\label{lem:separation}\nSuppose $\\prec$ is an order in which $X$ comes first and $(\\mu_\\alpha)_{\\alpha \\in A} \\in \\mathfrak{S}^{\\prec}_2$ is such that $\\varpi^{X\\to Y}_{2}\\big((\\mu_\\alpha)_{\\alpha}\\big)$ is $Y$-good, and\nlet $\\varphi$ be PNS, the converse PNS, the probability of sufficiency, or the probability of enablement (Definition~\\ref{def:probcaus}).\nThen for any $\\mu_3 \\in \\mathfrak{S}^{\\prec}_3$ such that $\\varpi_2(\\mu_3) = (\\mu_\\alpha)_{\\alpha}$, there exists a $\\mu'_3 \\in \\mathfrak{S}^{\\prec}_3$ such that $\\mu_3$ and $\\mu'_3$ disagree on $\\varphi$.\n\\end{lemma}\n\\begin{figure} \\centering \n\\subfigure [$Y$-good Model] {\n\\begin{tabular}{ lllll } \n& & ${\\mathcal{M}}$ & & \\\\\n\\toprule\n$u$ & $P(u)$ & $X_u$ & $Y_{x, u}$ & $Y_{x', u}$\\\\\n\\midrule\n$u_0$ & $\\nicefrac{1}{2}$ & $x'$ & $y$ & $y$\\\\\n$u_1$ & $\\nicefrac{1}{2}$ & $x'$ & $y'$ & $y'$\\\\\n\\bottomrule\n\\end{tabular}\n}\n\\subfigure [Example Separating Levels 2 and 3] {\n\\begin{tabular}{ lllll } \n& & ${\\mathcal{M}}'$ & & \\\\\n\\toprule\n$u$ & $P(u)$ & $X_u$ & $Y_{x, u}$ & $Y_{x', u}$\\\\\n\\midrule\n$u_0$ & $\\nicefrac{1}{2}- \\varepsilon$ & $x'$ & $y$ & $y$\\\\\n$u_1$ & $\\varepsilon$ & $x'$ & $y$ & $y'$\\\\\n$u_2$ & $\\nicefrac{1}{2}- \\varepsilon$ & $x'$ & $y'$ & $y'$\\\\\n$u_3$ & $\\varepsilon$ & $x'$ & $y'$ & $y$\\\\\n\\bottomrule\n\\end{tabular}\n}\n\\caption{(a): the structural functions and exogenous noise for a model ${\\mathcal{M}}$ with direct influence $X \\rightarrow Y$. This ${\\mathcal{M}}$ meets \\eqref{cns:ineq:1} and \\eqref{cns:ineq:2}, so we may apply Lemma~\\ref{lem:separation}, constructing the model $\\mathcal{M}'$ in (b), where $0 < \\varepsilon < \\nicefrac{1}{2}$. Note that $p^{{\\mathcal{M}}}_{\\text{cf}}(y_x, y'_{x'}) = 0$ while $p^{{\\mathcal{M}}'}_{\\text{cf}}(y_x, y'_{x'}) = \\varepsilon$, so that the two models disagree on a Level 3 PNS quantity; on the other hand, it is easy to check agreement on all of Level 2. Similarly, ${\\mathcal{M}}$ and ${\\mathcal{M}}'$ disagree on the converse PNS, probability of sufficiency, and probability of enablement (Definition~\\ref{def:probcaus}).}\n \\label{fig:example:separation}\n\\end{figure}\nNote that by reversing the roles of $x$ and $x'$, we may obtain the same for the probability of necessity and probability of disablement.\nThe main theorem and its important learning-theoretic corollary are now straightforward.\n\\begin{theorem}[Topological Hierarchy] The set $\\mathfrak{C}_2$ of points where all Level 3 facts are identifiable from Level 2 is meager in $(\\mathfrak{S}_2,\\tau^{\\mathrm{w}}_2)$. 
\\label{thm:hierarchyFormal}\n\\end{theorem}\n\\begin{proof}\nLet $\\mathfrak{D}^{X, Y}_2 \\subset \\mathfrak{S}_2^X$ be the preimage under $\\varpi^{X \\to Y}_{2}$ of the set of $Y$-good tuples in $\\mathfrak{S}^{X \\to Y}_{2}$.\nLemma~\\ref{lem:separation} implies that $\\mathfrak{C}_2 \\cap \\mathfrak{S}_2^{X}$ is contained in $\\mathfrak{S}_2^{X} \\setminus \\mathfrak{D}^{X, Y}_2$, for \\emph{any} $Y \\neq X$.\nMeanwhile, since $\\varpi^{X \\to Y}_{2}$ is continuous and open, Lemma~\\ref{lem:goodcomeager} implies that $\\mathfrak{S}_2^{X} \\setminus \\mathfrak{D}^{X, Y}_2$ is meager in $\\mathfrak{S}_2^{X}$, and thereby also in $\\mathfrak{S}_2$.\nThus $\\mathfrak{C}_2 = \\bigcup_{X \\in \\*V} \\mathfrak{C}_2 \\cap \\mathfrak{S}_2^{X}$ is a countable union of meager sets, and hence meager.\n\\end{proof}\n\\begin{corollary} \\label{cor:hierarchy} No causal hypothesis licensing arbitrary counterfactual inferences (and specifically those of the probabilities of causation) from observational and experimental data is itself statistically (even experimentally) verifiable.\n\\end{corollary}\n\n\\section{Conclusion}\\label{section:conclusion}\n\nWe introduced a general framework for topologizing spaces of causal models, including the space of all (discrete, well-founded) causal models. As an illustration of the framework we characterized levels of the causal hierarchy topologically, and proved a topological version of the causal hierarchy theorem from \\cite{BCII2020}. While the latter shows that collapse of the hierarchy (specifically of Level 3 to Level 2) is \\emph{exceedingly unlikely} in the sense of (Lebesgue) measure, we offer a complementary result: any condition guaranteeing that we could infer arbitrary Level 3 information from purely Level 2 information must be \\emph{statistically unverifiable}, even by experimental means. Both results capture an important sense in which collapse is ``negligible'' in the space of all possible models. As an added benefit, the topological approach extends seamlessly to the setting of infinitely many variables.\n\nThere are many natural extensions of these results. For instance, we have begun work on a version for continuous endogenous variables. Also of interest are subspaces embodying familiar causal assumptions or other well-studied coarsenings of SCMs (see, e.g., \\cite{pmlr-v117-lin20a} on Bayesian networks, or \\cite{GeninMayoWilson,Genin2021} on linear non-Gaussian models), which often render important inference problems solvable, though sometimes only ``generically'' so.\nIn the opposite direction, we expect analogous hierarchy theorems to hold for extensions of the SCM concept, e.g., those dropping the well-foundedness or recursiveness requirements \\cite{bongers2021foundations}.\nAs emphasized by \\cite{BCII2020}, a causal hierarchy theorem should not be construed as a purely limitative result, but rather as further motivation for understanding the whole range of causal-inductive assumptions, how they relate, and what they afford. We submit that the topological constructions presented here can help clarify and systematize this broader landscape.\n\n\n\\subsubsection*{Acknowledgments} \nThis material is based upon work supported by the National Science Foundation Graduate Research Fellowship Program under Grant No. DGE-16565. We are very grateful to the five anonymous NeurIPS reviewers for insightful and detailed comments and questions that led to significant improvements in the paper. 
We would also like to thank Jimmy Koppel, Krzysztof Mierzewski, Francesca Zaffora Blando, and especially Kasey Genin for helpful feedback on earlier versions. Finally, we are indebted to Saminul Haque for identifying a gap in the published version of the paper, which has been corrected in the present arXiv version (see in particular the augmented statement of Prop.~\\ref{prop:causalprojectioncontinuous}). \n\n\n\\medskip\n\n{\n\\small\n\n\\bibliographystyle{abbrvnat}\n\n\\section{Structural Causal Models (\\S\\ref{scms})} \\label{app:causalmodels}\n\\subsection{Background on Relations and Orders}\n\n\\begin{definition}\nLet $C$ be a set.\nThen a subset $R \\subset C \\times C$ is called a \\emph{binary relation} on $C$. We write $cRc'$ if $(c, c') \\in R$.\nThe binary relation $R$ is \\emph{well-founded} if every nonempty subset $D \\subset C$ has a minimal element with respect to $R$, i.e., if for every nonempty $D \\subset C$, there is some $d \\in D$, such that there is no $d' \\in D$ such that $d' R d$.\nThe binary relation $\\left.\\prec\\right. \\subset C \\times C$ is a (strict) \\emph{total order} if it is irreflexive, transitive, and \\emph{connected}: either $c \\prec c'$ or $c' \\prec c$ for all $c \\neq c' \\in C$.\n\\end{definition}\n\\begin{example}\nThe edges of a dag form a well-founded binary relation on its nodes. If $\\*V = \\{V_n\\}_{n \\ge 0}$, then the binary relation $\\rightarrow$ defined by $V_m \\rightarrow V_n$ iff either $0 < m < n$ or $n = 0 < m$ is well-founded but not extendible to an $\\omega$-like total order (see Fact~\\ref{fact:omegalike}) and not locally finite: $V_0$ has infinitely many predecessors $V_1, V_2, \\dots$\n\\end{example}\n\n\\subsection{Proofs}\n\n\\begin{proof}[Proof of Proposition~1.]\nWe assume without loss that $\\*U(V) = \\*U$ for every $V \\in \\*V$.\nFor each $\\*u \\in \\chi_{\\*U}$,\nwell-founded induction along $\\rightarrow$ shows unique existence of a $m^{{\\mathcal{M}}}(\\*u) \\in \\chi_{\\*V}$\nsolving $f_{V}\\big(\\pi_{\\mathbf{Pa}(V)}(m^{{\\mathcal{M}}}(\\*u)), \\*u\\big) = \\pi_V(m^{{\\mathcal{M}}}(\\*u))$ for each $V$.\nWe claim the resulting function $m^{{\\mathcal{M}}}$ is measurable.\nOne has a clopen basis of cylinders, so it suffices to show each preimage $(m^{{\\mathcal{M}}})^{-1}(v)$ is measurable. Recall that here $v$ denotes the cylinder set $\\pi^{-1}_V(\\{v\\}) \\in \\mathcal{B}(\\chi_{\\*V})$, for $v \\in \\chi_V$.\nOnce again this can be established inductively. Note that\n\\begin{align*}\n(m^{{\\mathcal{M}}})^{-1}(v) =\n\\bigcup_{\\mathbf{p} \\in \\chi_{\\mathbf{Pa}(V)}}\\Big[ (m^{{\\mathcal{M}}})^{-1}(\\*p) \\cap \\pi_{\\*U}\\big(f_V^{-1}(\\{v\\}) \\cap (\\{\\*p\\} \\times \\chi_{\\*U})\\big) \\Big].\n\\end{align*}\nwhich is a finite union (by local finiteness) of measurable sets (by the inductive hypothesis) and therefore measurable.\nThus for any ${\\mathcal{M}}$\nthe pushforward $p^{{\\mathcal{M}}} = m^{{\\mathcal{M}}}_*(P)$ is a measure on $\\mathcal{B}(\\chi_{\\*V})$ and gives the observational distribution (Definition~4).\n\\end{proof}\n\n\\begin{proof}[Remark on Definition~6]\nTo see that $p^{{\\mathcal{M}}}_{\\mathrm{cf}}$ thus defined is a measure,\nnote that $p^{{\\mathcal{M}}}_{\\mathrm{cf}} = p^{{\\mathcal{M}}_A}$ and apply Proposition~1, where the model ${\\mathcal{M}}_A$ is defined in Definition~\\ref{def:counterfactualmodel}. 
This is similar in spirit to the construction of ``twinned networks'' \\citep{BalkePearl} or ``single-world intervention graphs'' \\citep{Richardson}.\n\\end{proof}\n\n\\begin{definition}\\label{def:counterfactualmodel}\nGiven ${\\mathcal{M}}$ as in Def.~\\ref{def:scm:lit}\nand a collection of interventions $A$\nform the following \\emph{counterfactual model} ${\\mathcal{M}}_A = \\langle \\*U, A \\times \\*V, \\{f_{(\\alpha, V)}\\}_{(\\alpha, V)}, P\\rangle$, over endogenous variables $A \\times \\*V$. The counterfactual model has the influence relation $\\rightarrow'$, defined as follows.\nWhere $\\alpha', \\alpha \\in A$ let $(\\alpha', V') \\rightarrow' (\\alpha, V)$ iff $\\alpha' = \\alpha$ and $V' \\rightarrow V$.\nThe exogenous space $\\*U$ and noise distribution $P$ of $\\mathcal{M}_A$ are the same as those of $\\mathcal{M}$,\nthe exogenous parents sets $\\{\\*U(V)\\}_V$ are also identical,\nand the functions are $\\{f_{(\\alpha, V)}\\}_{(\\alpha, V)}$ defined as follows.\nFor any $\\*W \\coloneqq \\*w \\in A$, $V \\in \\*V$, $\\mathbf{p} \\in \\chi_{\\mathbf{Pa}(V)}$, and $\\*u \\in \\chi_{\\*U(V)}$ let\n\\begin{align*}\nf_{(\\*W \\coloneqq \\*w, V)}\\big((\\*W \\coloneqq \\*w, \\mathbf{p}), \\*u\\big) = \\begin{cases}\n\\pi_V(\\*w), & V \\in \\*W\\\\\nf_V(\\mathbf{p}, \\*u), & V \\notin \\*W\n\\end{cases}.\n\\end{align*}\n\\end{definition}\n\n\n\\section{Proofs from \\S\\ref{sec:causalhierarchy}} \\label{app:causalhierarchy}\n\\begin{proof}[Remark on exact characterizations of $\\mathfrak{S}_3$, $\\mathfrak{S}_2$]\nRich probabilistic languages interpreted over $\\mathfrak{S}_3$ and $\\mathfrak{S}_2$ were axiomatized in \\cite{ibelingicard2020}.\nThis axiomatization, along with the atomless restriction, gives an exact characterization for the hierarchy sets.\nStandard form, defined below, gives an alternative characterization exhibiting each $\\mathfrak{S}_3^\\prec$ as a particular atomless probability space (Corollary~\\ref{cor:standardform}).\nFor $\\mathfrak{S}^{X \\to Y}_{2}$ (or $\\mathfrak{S}_2$ in the two-variable case) we need the characterization for the proof of the hierarchy separation result, so it is given explicitly as Lemma~\\ref{prop:p2:characterization:binary} in the section below on 2VE-spaces.\n\\end{proof}\n\\subsection{Standard Form}\\label{ss:app:standardform}\nFix $\\prec$. Note that the map $\\varpi_3$ restricted to $\\mathfrak{M}_\\prec$ does \\emph{not} inject into $\\mathfrak{S}_3^\\prec$, as any trivial reparametrizations of exogenous noise are distinguished in $\\mathfrak{M}_\\prec$.\nIt is therefore useful to identify a ``standard'' subclass $\\mathfrak{M}_\\prec^{\\mathrm{std}}$ on which $\\varpi_3$ is injective with image $\\mathfrak{S}_3^\\prec$, and in which we lose no expressivity.\n\\begin{notation*}\nLet $\\mathbf{Pred}(V) = \\{V' : V' \\prec V\\}$ and denote a \\emph{deterministic} mechanism for $V$ mapping a valuation of its predecessors to a value as $\\texttt{f}_V \\in \\chi_{\\mathbf{Pred}(V)} \\to \\chi_V$. Write an entire collection of such mechanisms, one for each variable, as $\\texttt{\\textbf{f}} = \\{\\texttt{f}_V\\}_{V}$.\nA set $\\*B \\subset \\*V$ is \\emph{ancestrally closed} if $\\mathbf{B} = \\bigcup_{V \\in \\*B} \\mathbf{Pred}(V)$.\nFor any ancestrally closed $\\*B$\nlet $\\xi(\\*B) = \\big\\{(V, \\*p): V \\in \\*B, \\*p \\in \\chi_{\\mathbf{Pred}(V)}\\big\\}$. 
Note that $\\texttt{\\textbf{F}}(\\*B) = \\bigtimes_{(V, \\mathbf{p}) \\in \\xi(\\*B)} \\chi_V$ encodes the set of all possible such collections of deterministic mechanisms, and we write, e.g., $\\texttt{f} \\in \\texttt{F}(\\*B)$.\nAbbreviate $\\xi(\\*V)$, $\\texttt{F}(\\*V)$ for the entire endogenous variable set $\\*V$ as $\\xi$, $\\texttt{F}$ respectively. We also use $\\texttt{f}$ to abbreviate the set\n\\begin{align}\n\\label{eq:probabilityofmechanisms}\n\\bigcap_{\\substack{V \\in \\*B\\\\ \\*p \\in \\chi_{\\mathbf{Pred}(V)}}} \\pi^{-1}_{(\\mathbf{Pred}(V) \\coloneqq \\*p, V)}(\\{\\texttt{f}(\\*p)\\}) \\in \\mathcal{B}(\\chi_{A \\times \\*V})\n\\end{align}\nso we can write, e.g., $p^{{\\mathcal{M}}}_{\\mathrm{cf}}(\\texttt{f})$ for the probability in ${\\mathcal{M}}$ that the effective mechanisms $\\texttt{f}$ have been selected (by exogenous factors) for the variables $\\*B$.\n\\end{notation*}\n\\begin{definition}\\label{def:standardform}\n\nThe SCM ${\\mathcal{M}} = \\langle \\*U, \\*V, \\{f_V\\}_V, P\\rangle$ of Def.~\\ref{def:scm:lit} is in \\emph{standard form} over $\\prec$, and we write ${\\mathcal{M}} \\in \\mathfrak{M}_\\prec^{\\mathrm{std}}$, if we have that $\\left.\\rightarrow\\right. = \\left.\\prec\\right.$ for its influence relation, $\\*U = \\{U\\}$ for a single exogenous variable $U$ with $\\chi_U = \\texttt{{F}}$, $P \\in \\mathfrak{P}(\\texttt{F})$ for its exogenous noise space, and for every $V$, we have that $\\*U(V) = \\*U = \\{U\\}$ and the mechanism $f_V$ takes $\\mathbf{p}, (\\{ \\texttt{f}_V\\}_V) \\mapsto \\texttt{f}_V(\\mathbf{p})$ for each $\\*p \\in \\chi_{\\mathbf{Pred}(V)}$ and joint collection of deterministic functions $\\{ \\texttt{f}_V\\}_V \\in \\texttt{F} = \\chi_{U}$.\n\\end{definition}\nEach unit $\\*u$ in a standard form model amounts to a collection $\\{ \\texttt{f}_V\\}_V$ of deterministic mechanisms, and each variable is determined by a mechanism specified by the ``selector'' endogenous variable $U$.\n\\begin{lemma}\nLet ${\\mathcal{M}} \\in \\mathfrak{M}_\\prec$. Then there exists ${\\mathcal{M}}^{\\mathrm{std}} \\in \\mathfrak{M}_\\prec^{\\mathrm{std}}$ such that $\\varpi_3({\\mathcal{M}}) = \\varpi_3({\\mathcal{M}}^{\\mathrm{std}})$.\n\\end{lemma}\n\\begin{proof}\nTo give ${\\mathcal{M}}^{\\mathrm{std}}$ define a measure $P \\in \\mathfrak{P}(\\texttt{F})$ as in Def.~\\ref{def:standardform} on a basis of cylinder sets by the counterfactual in ${\\mathcal{M}}$\n\\begin{multline}\nP\\big(\\pi^{-1}_{(V_1, \\*p_1)}(\\{v_1\\}) \\cap \\dots \\cap \\pi^{-1}_{(V_n, \\*p_n)}(\\{v_n\\})\\big)\\\\\n= p_{\\mathrm{cf}}^{{\\mathcal{M}}}\\big( \\pi^{-1}_{(\\mathbf{Pred}(V_1) \\coloneqq \\*p_1, V_1)}(\\{v_1\\}) \\cap \\dots \\cap \\pi^{-1}_{(\\mathbf{Pred}(V_n) \\coloneqq \\*p_n, V_n)}(\\{v_n\\}) \\big). 
\\label{eq:standardformdefinition}\n\\end{multline}\nTo show that $\\varpi_3({\\mathcal{M}}) = \\varpi_3({\\mathcal{M}}^{\\mathrm{std}})$ it suffices to show that any two models agreeing on all counterfactuals of the form \\eqref{eq:standardformdefinition} must agree on all counterfactuals in $A$.\nSuppose $\\alpha_i \\in A$, $V_i \\in \\*V$, $v_i \\in \\chi_{V_i}$ for $i = 1, \\dots, n$.\nLet $\\*{B} = \\bigcup_{i} \\mathbf{Pred}(V_i)$ and given $\\texttt{f} = \\{\\texttt{f}_V\\}_V$,\ndefine $\\texttt{f}^{\\*W \\coloneqq \\*w}_V$ to be a constant function mapping to $\\pi_V(\\*w)$ if $V \\in \\*W$ and $\\texttt{f}^{\\*W \\coloneqq \\*w}_V = \\texttt{f}_V$ otherwise.\nWrite $\\texttt{f} \\models V = v$ if $\\pi_V(\\*v) = v$ for that $\\*v \\in \\chi_{\\*V}$ such that $\\texttt{f}_{V}\\big(\\pi_{\\mathbf{Pred}(V)}(\\*v)\\big) = \\pi_{V}(\\*v)$ for all $V$.\nFinally, \nnote that\n\\begin{align*}\n \\bigcap_{i=1}^n \\pi^{-1}_{(\\alpha_i, V_i)}(\\{v_i\\}) = \\bigsqcup_{\\substack{\\{\\texttt{\\textbf{f}}_{V}\\}_{V \\in \\*B} \\in \\texttt{\\textbf{F}}(\\*B) \\\\\\{\\texttt{\\textbf{f}}^{\\alpha_i}_{V}\\}_{V \\in \\*B} \\models V_i = v_i \\\\ \\text{for each } i }} \\{\\texttt{f}_V\\}_{V \\in \\*B}\n\\end{align*}\nwhere each set in the finite disjoint union is of the form \\eqref{eq:probabilityofmechanisms}.\nThus the measure of the left-hand side can be written as a sum of measures of such sets, which use only counterfactuals of the form \\eqref{eq:standardformdefinition},\nshowing agreement of the measures (by Fact~1).\n\\end{proof}\n\\begin{corollary}\\label{cor:standardform}\n$\\mathfrak{S}_3^\\prec$ bijects with the set of atomless measures in $\\mathfrak{P}(\\texttt{F})$, which we denote $\\mathfrak{S}^\\prec_{\\mathrm{std}}$.\nWe write the map as $\\varpi^\\prec_{\\mathrm{std}} : \\mathfrak{S}_3^\\prec \\to \\mathfrak{S}^\\prec_{\\mathrm{std}}$.\n\\qed\n\\end{corollary}\nWhere the order $\\prec$ is clear, the above result permits us to abuse notation, using e.g. $\\mu$ to denote either an element of $\\mathfrak{S}_3^\\prec$ or its associated point $\\varpi^\\prec_{\\mathrm{std}}(\\mu)$ in $\\mathfrak{S}^\\prec_{\\mathrm{std}}$.\nWe will henceforth indulge in such abuse.\n\n\\begin{proof}[Proof of Fact~\\ref{prop:alternative}]\nThe follows easily from Lem.~\\ref{lem:acausals1canonical} below, adapted from \\citet[Thm.~1]{suppes:zan81}.\nThis shows that every atomless distribution is generated by some SCM; furthermore, it can chosen so as to exhibit no causal effects whatsoever.\n\\end{proof}\n\\begin{definition}\nSay that $\\nu \\in \\mathfrak{P}\\big(\\texttt{\\textbf{F}}(\\*V)\\big)$ is \\emph{acausal} if $\\nu(\\pi^{-1}_{(V, \\*p)}(\\{v_1\\}) \\cap \\pi^{-1}_{(V, {\\*p}')}(\\{v_2\\})\\big) = 0$\nfor every $(V, \\*p), (V, \\*p') \\in \\xi$ and $v_1 \\neq v_2 \\in \\chi_V$.\n\n\\end{definition}\n\\begin{lemma}\\label{lem:acausals1canonical}\nLet $\\mu \\in \\mathfrak{P}(\\chi_{\\*V})$ be atomless. Then there is a ${\\mathcal{M}} \\in \\mathfrak{M}^{\\mathrm{std}}_\\prec$ (see Def.~\\ref{def:standardform}) with an acausal noise distribution such that $\\mu = (\\varpi_1 \\circ \\varpi_2 \\circ \\varpi_3)({\\mathcal{M}})$. 
\n\\end{lemma}\n\\begin{proof}\nConsider $\\nu \\in \\mathfrak{P}\\big(\\texttt{\\textbf{F}}(\\*V)\\big) = \\mathfrak{P}\\big(\\bigtimes_{(V, \\*p)} \\chi_V \\big)$ determined on a basis as follows:\n$\\nu\\big( \\pi^{-1}_{(V_1, \\mathbf{p}_1)}(\\{v_1\\})\\cap \\dots \\cap \\pi^{-1}_{(V_n, \\*p_n)}(\\{v_n\\}) \\big) = \\mu\\big( \\pi^{-1}_{V_1}(\\{v_1\\}) \\cap \\dots \\cap \\pi^{-1}_{V_n}(\\{v_n\\}) \\big)$.\nThis is clearly acausal and atomless.\n\\end{proof}\n\n\\subsection{Proofs from \\S{}{3.2}}\n\n\\begin{proof}[Proof of Prop. \\ref{prop:collapse1} (Collapse set $\\mathfrak{C}_1$ is empty)]\nLet $\\mu \\in \\mathfrak{S}_1$ and $\\nu \\in \\mathfrak{S}^\\prec_{\\mathrm{std}}$ with $(\\varpi_1 \\circ \\varpi_2 \\circ \\varpi_{\\mathrm{std}}^{-1})(\\nu) = \\mu$.\nBy Lemma \\ref{lem:acausals1canonical} we may assume $\\nu$ is acausal.\nLet $X$ be the first, and $Y$ the second variable with respect to $\\prec$.\nNote there are $x^*$, $y^*$ such that $\\mu(\\pi^{-1}_X(\\{x^*\\}) \\cap \\pi^{-1}_Y(\\{y^*\\})) > 0$;\nlet $x^\\dagger \\neq x^*$, $y^\\dagger \\neq y^*$.\nConsider $\\nu'$ defined as follows where $\\digamma_3$ stands for any set of the form\n$\\pi^{-1}_{(V_1, {\\*p}_1)}(\\{v_1\\}) \\cap \\dots \\cap \\pi^{-1}_{(V_n, {\\*p}_n)}(\\{v_n\\}) \\subset \\texttt{\\textbf{F}}(\\*V)$, for $V_i \\in \\*V$, $\\*p_i \\in \\chi_{\\mathbf{P}(V_i)}$, $v_i \\in \\chi_{V_i}$,\nand $\\digamma_1$ is the corresponding $\\pi^{-1}_{V_1}(\\{v_1\\}) \\cap \\dots \\cap \\pi^{-1}_{V_n}(\\{v_n\\}) \\subset \\chi_{\\*V}$.\n\\begin{multline*}\n \\nu'\\big( \\pi^{-1}_{(X, ())}(\\{x\\}) \\cap \\pi^{-1}_{(Y, (x^*))}(\\{y_*\\}) \\cap \\pi^{-1}_{(Y, (x^{\\dagger}))}(\\{y_\\dagger\\}) \\cap \\digamma_3 \\big) =\\\\\n \\begin{cases}\n \\mu\\big(\\pi^{-1}_X(\\{x^*\\}) \\cap \\pi^{-1}_Y(\\{y^*\\}) \\cap \\digamma_1 \\big), & x = x^*, y_* = y^* \\neq y_\\dagger\\\\% = y^{\\dagger}\\\\\n 0, & x = x^*, y_* = y^{\\dagger} \\neq y_\\dagger\\\\% = y^*\\\\\n 0, & x = x^*, y_* = y_{\\dagger} = y^*\\\\\n \\mu\\big(\\pi^{-1}_X(\\{x^*\\}) \\cap \\pi^{-1}_Y(\\{y^\\dagger\\}) \\cap \\digamma_1 \\big), & x = x^*, y_* = y_{\\dagger} = y^\\dagger\\\\\n \\mu\\big(\\pi^{-1}_X(\\{x^\\dagger\\}) \\cap \\pi^{-1}_Y(\\{y\\}) \\cap \\digamma_1 \\big), & x = x^\\dagger\n \\end{cases}\n\n\n\\end{multline*}\nWe claim that $\\mu = \\mu'$ where $\\mu' = (\\varpi_1 \\circ \\varpi_2 )(\\nu')$; it suffices to show agreement on sets of the form $\\pi^{-1}_X(\\{x\\}) \\cap \\pi^{-1}_Y(\\{y\\}) \\cap \\digamma_1$. If $x = x^\\dagger$ then the last case above occurs; if $x = x^*$ and $y = y^\\dagger$ then we are in the fourth case; if $x = x^*$ and $y = y^*$ then exclusively the first case applies. 
In all cases the measures agree.\nLet $(\\nu_\\alpha)_\\alpha = \\varpi_2(\\nu)$ and $(\\nu'_\\alpha)_\\alpha = \\varpi_2(\\nu')$\nbe the Level 2 projections of $\\nu$, $\\nu'$ respectively.\nNote that $\\nu_{X \\coloneqq x^\\dagger}(y^\\dagger) < \\nu'_{X \\coloneqq x^\\dagger}(y^\\dagger)$.\nThis shows that the standard-form measures $\\nu$, $\\nu'$ project down to different points in $\\mathfrak{S}_2$ (in particular differing on the $Y$-marginal at the index corresponding to the intervention $X \\coloneqq x^\\dagger$) while projecting to the same point in $\\mathfrak{S}_1$.\nThus $\\mu \\notin \\mathfrak{C}_1$ and since $\\mu$ was arbitrary, $\\mathfrak{C}_1 = \\varnothing$.\n\\end{proof}\n\n\n\n\\begin{example}[Collapse set $\\mathfrak{C}_2$ is nonempty]\nWe present a $\\mu \\in \\mathfrak{S}_{\\mathrm{std}}^{\\prec}$ for which $\\varpi_2(\\mu) \\in \\mathfrak{C}_2$.\nLet ${\\*S}_n \\subset \\*V$ be the ancestrally closed (\\S\\ref{ss:app:standardform}) set of the $n$ least variables with respect to $\\prec$ and $X$ be the first variable with respect to $\\prec$; thus, e.g., ${\\*S}_1 = \\{X\\}$.\nWhere $\\texttt{\\textbf{f}} = \\{\\texttt{f}_V\\}_{V \\in {\\*S}_n} \\in \\texttt{F}(\\textbf{S}_n)$, define $\\mu(\\texttt{\\textbf{f}}) = 0$ if there is any $V \\in {\\*S}_n \\setminus \\{X\\}$, $\\*p \\neq (0, \\dots, 0) \\in \\chi_{\\mathbf{Pred}(V)}$ such that $\\texttt{f}_V(\\*p) = 0$, and otherwise define $\\mu(\\texttt{\\textbf{f}}) = 1\/2^n$.\nNote that this example is \\emph{monotonic} in the sense of \\cite{Angrist,Pearl1999}.\n\nWe claim $\\mu' = \\mu$ for any $\\mu' \\in \\mathfrak{S}_{\\mathrm{std}}^\\prec$ projecting to the same Level 2, i.e., such that $\\varpi_2(\\mu') = \\varpi_2(\\mu)$; note that it suffices to consider only candidate counterexamples with order $\\prec$ since $\\varpi_2(\\mu) \\notin \\mathfrak{S}_2^{\\prec'}$ for any $\\left.\\prec'\\right. 
\\neq \\left.\\prec\\right.$.\nIt suffices to show that $\\mu(\\texttt{f}) = \\mu'(\\texttt{f})$ for any $n$ and $\\texttt{\\textbf{f}} = \\{\\texttt{f}_V\\}_{V \\in {\\*S}_n}$; recall that in the measures, $\\texttt{f}$ denotes a set of the form \\eqref{eq:probabilityofmechanisms}.\nLet $(\\mu_\\alpha)_\\alpha = \\varpi_2(\\mu) \\in \\mathfrak{S}_2^\\prec$ and $(\\mu'_\\alpha)_\\alpha = \\varpi_2(\\mu')$, with $(\\mu_\\alpha)_\\alpha = (\\mu'_\\alpha)_\\alpha$.\nSince $\\mu'_{\\mathbf{Pred}(V) \\coloneqq \\*p}(\\pi^{-1}_V(\\{1\\})) = 1$ for any $V \\in {\\*S}_n \\setminus \\{X\\}$, $\\*p \\neq (0, \\dots, 0)$, probability bounds show $\\mu'(\\texttt{f})$ vanishes unless $\\texttt{f}_V(\\*p) = 1$ for each such $\\*p$, in which case\n\\begin{equation}\\label{eq:toreducetol2}\n\\mu'(\\texttt{f})=\n\\mu'\\Big(\n\\bigcap_{i=1}^n \\pi^{-1}_{(V_i, \\{V_1, \\dots, V_{i-1}\\} \\coloneqq (0, \\dots, 0))}(\\{v_i\\})\\Big)\n\\end{equation}\nfor some $v_i \\in \\chi_{V_i}$,\nwhere we have labeled the elements of ${\\*S}_n$ as $V_1, \\dots, V_n$,\nwith $V_1 \\prec \\dots \\prec V_n$.\nWe claim this is reducible---again using probabilistic reasoning alone---to a linear combination of quantities fixed by $(\\mu'_\\alpha)_\\alpha$, the Level 2 projection of $\\mu'$, which is the same as the projection $(\\mu_\\alpha)_\\alpha$ of $\\mu$.\nThis can be seen by an induction on the number $m = \\left|M\\right|$ where $M = \\{i : v_i = 1 \\}$: note\n\\eqref{eq:toreducetol2} becomes\n\\begin{multline*\n\\mu'\\Big(\n\\bigcap_{\\substack{i \\notin M}} \\pi^{-1}_{(V_i, \\{V_1, \\dots, V_{i-1}\\} \\coloneqq (0, \\dots, 0))}(\\{0\\})\\Big)\\\\\n- \\sum_{M' \\subsetneq M}\\mu'\\Big(\n\\bigcap_{\\substack{i \\notin M'}} \\pi^{-1}_{(V_i, \\{V_1, \\dots, V_{i-1}\\} \\coloneqq (0, \\dots, 0))}(\\{0\\})\n\\cap\n\\bigcap_{\\substack{i \\in M'}} \\pi^{-1}_{(V_i, \\{V_1, \\dots, V_{i-1}\\} \\coloneqq (0, \\dots, 0))}(\\{1\\})\n\\Big)\n\\end{multline*}\nand the inductive hypothesis implies each summand can be written in the sought form\nwhile the first term becomes\n$\\mu'\\big(\\bigcap_{i\\notin M} \\pi^{-1}_{(V_i, ())}(\\{0\\})\\big) = \\mu'_{()}\\big(\\bigcap_{i \\notin M}\\pi^{-1}_{V_1}(\\{0\\})\\big) = \\mu_{()}\\big(\\bigcap_{i \\notin M}\\pi^{-1}_{V_1}(\\{0\\})\\big)$.\nHere $()$ abbreviates the empty intervention $\\varnothing \\coloneqq ()$.\nThus any Level 3 quantity reduces to Level 2, on which the two measures agree by hypothesis.\n\\label{example:collapse}\n\\end{example}\n\n\\subsection{Remarks on \\S{}{3.3}}\\label{ss:app:3.3}\n\n\\begin{lemma}\n\\label{prop:p2:characterization:binary}\n Let $(\\mu_{\\alpha})_{\\alpha} \\in \\bigtimes_{\\alpha \\in A^{X \\to Y}_{2}} \\mathfrak{P}(\\chi_{X, Y})$.\n Then $(\\mu_\\alpha)_{\\alpha} \\in \\mathfrak{S}^{X \\to Y}_{2}$ iff\n \\begin{align}\\label{eq:snsr}\n \\mu_{X \\coloneqq x}( x ) = 1\n \\end{align}\n for every $x \\in \\chi_X$\n and \\begin{align}\\label{eq:snscm}\n \\mu_{X \\coloneqq x}(y)\n \\ge \\mu_{()}(x, y)\n \\end{align}\n for every $x \\in \\chi_{X}$, $y \\in \\chi_Y$. 
Here $x, y$ abbreviates the basic set $\\pi_X^{-1}(\\{x\\}) \\cap \\pi_Y^{-1}(\\{y\\})$.\n\\end{lemma}\n\\begin{proof\nIt is easy to see that \\eqref{eq:snsr}, \\eqref{eq:snscm} hold for any $(\\mu_\\alpha)_\\alpha$.\nFor the converse,\nconsider the two-variable model over endogenous $\\*Z = \\{X, Y\\}$ with $X \\prec Y$; note that $|\\texttt{\\textbf{F}}(\\*Z)| = 8$.\nA result of\n\\citet{tian:etal06} gives that this model is characterized exactly by \\eqref{eq:snsr}, \\eqref{eq:snscm} so for any such $(\\mu_\\alpha)_\\alpha$ there is a distribution on $\\texttt{\\textbf{F}}(\\*Z)$ such that this model induces $(\\mu_\\alpha)_\\alpha$.\nIt is straightforward to extend this distribution to an atomless measure on $\\texttt{\\textbf{F}}(\\*V)$.\n\\end{proof}\n\n\n\\section{Proofs from \\S\\ref{sec:weaktopology}} \\label{app:empirical}\n\n\\begin{proof}[Proof of Prop.~\\ref{prop:causalprojectioncontinuous}]\nThe continuity of any of the maps amounts to the continuity of projections in product spaces and marginalizations in weak convergence spaces. The latter follows easily from results in \\S{}3.1.3 of \\cite{Genin2018} or \\cite{Billingsley}.\n\nAs for the openness of any $\\varpi_2^{X \\to Y}$, note we can write any $S \\subset \\mathfrak{S}_2$ as $S = \\bigcup_{\\prec} S \\cap \\mathfrak{S}_2^\\prec$ where $\\prec$ in the union ranges over all total orders of $\\*V$. It thus suffices to show that for any $\\prec$ the image of any open $S \\subset \\mathfrak{S}_2^\\prec$ is open.\nDefine the map $\\varpi_2^{\\prec}: \\mathfrak{S}_{\\mathrm{std}}^\\prec \\to \\mathfrak{S}_2^\\prec$ and the map $\\varpi_2^{X \\to Y, \\prec}: \\mathfrak{S}_2^\\prec \\to \\mathfrak{S}_2^{X \\to Y}$ as restrictions of $\\varpi_2$ and $\\varpi_2^{X \\to Y}$ respectively. Evidently, $\\varpi_2^{\\prec}$ is continuous so it suffices to show that $\\varpi_2^{X \\to Y, \\prec} \\circ \\varpi_2^{\\prec}$ is open.\n\nFor any $n \\ge 1$ let $\\*S_{n, \\prec}$ be the initial segment of the first $n$ variables in $\\*V$ when ordered according to $\\prec$, as in Ex.~\\ref{example:collapse}, and define sets $\\mathfrak{S}_{\\mathrm{std}}^{n, \\prec}$, $\\mathfrak{S}_2^{n, \\prec}$ analogously to $\\mathfrak{S}_{\\mathrm{std}}^\\prec$, $\\mathfrak{S}_2^{\\prec}$ but over the set of variables $\\*{S}_{n, \\prec}$.\nDefine maps $\\varpi_{2}^{n, \\prec} : \\mathfrak{S}_{\\mathrm{std}}^{n, \\prec} \\to \\mathfrak{S}_2^{n, \\prec}$ and $\\varpi_2^{X \\to Y, n, \\prec} :\\mathfrak{S}_2^{n, \\prec} \\to \\mathfrak{S}_2^{X \\to Y}$, where $X, Y \\in \\*{S}_{n, \\prec}$, in a similar fashion.\nDefine also a map $\\varpi_{\\mathrm{std}}^{n, \\prec}: \\mathfrak{S}_{\\mathrm{std}}^{\\prec} \\to \\mathfrak{S}_{\\mathrm{std}}^{n, \\prec}$ as a marginalization taking a distribution over mechanisms (recall \\S{}\\ref{ss:app:standardform}) determining all of $\\*V$ to a distribution over deterministic mechanisms for $\\*S_{n, \\prec}$.\nLet $S \\subset \\mathfrak{S}_{\\mathrm{std}}^\\prec$ be an arbitrary basic open set and let $n$ be least such that $\\*{S}_{n, \\prec}$ contains $X$, $Y$, and every variable whose structural equation appears as a cylinder in the finite intersection defining $S$.\nThen note that $\\big(\\varpi_2^{X \\to Y, \\prec} \\circ \\varpi_2^{\\prec}\\big)(S) = \\big(\\varpi_2^{X \\to Y, n, \\prec} \\circ \\varpi_2^{n, \\prec} \\circ \\varpi_{\\mathrm{std}}^{n, \\prec}\\big)(S)$ and $\\varpi_{\\mathrm{std}}^{n, \\prec}(S)$ is certainly open, so it suffices to show that $\\varpi_2^{X \\to Y, n, \\prec} \\circ 
\\varpi_2^{n, \\prec} : \\mathfrak{S}_{\\mathrm{std}}^{n, \\prec} \\to \\mathfrak{S}_2^{X \\to Y}$ is open for any $n$.\n\nTo see this, note that $\\mathfrak{S}_2^{X \\to Y}$ and $\\mathfrak{S}_{\\mathrm{std}}^{n, \\prec}$ in the weak topology are homeomorphic to (subsets of products of) probability simplices in appropriate Euclidean spaces $\\mathbb{R}^m$ with the standard topology, as they are distributions over a discrete space.\nThe latter in fact is exactly a probability simplex while $\\mathfrak{S}_2^{X \\to Y}$ is polyhedral by Lemma~\\ref{prop:p2:characterization:binary}, and $\\varpi_2^{X \\to Y, n, \\prec} \\circ \\varpi_2^{n, \\prec}$ is the restriction of a surjective linear mapping under the aforementioned homeomorphism.\nThis must be open by \\citet[Corollary~8]{MIDOLO20091186}.\n\\end{proof}\n\n\\begin{proof}[Proof of Thm. \\ref{thm:l2learning}]\nWe show how Theorem~3.2.1 of \\cite{Genin2018} can be applied to derive the result. Specifically, let $\\Omega = \\bigtimes_\\alpha \\chi_{\\*V}$. Let $\\mathcal{I}$ be the usual clopen basis, and let $W$ be the set of Borel measures $\\mu \\in \\mathfrak{P}(\\Omega)$ that factor as a product $\\mu = \\times_\\alpha \\mu_\\alpha$ where each $\\mu_\\alpha \\in \\mathfrak{S}_1$ and $(\\mu_\\alpha)_\\alpha \\in \\mathfrak{S}_2$. This choice of $W$ corresponds exactly to our notion of experimental verifiability.\n\nIt remains to check that a set is open in $W$ iff the associated set is open in $\\mathfrak{S}_2$ (homeomorphism).\nIt suffices to show their convergence notions agree. Suppose $(\\nu_n)_n$ is a sequence, each $\\nu_n \\in W$, converging to $\\nu = \\times_\\alpha \\mu_\\alpha \\in W$. We have for each $n$ that $\\nu_n = \\times_\\alpha \\mu_{n, \\alpha}$ such that $(\\mu_{n,\\alpha})_\\alpha \\in \\mathfrak{S}_2$. By Theorem~3.1.4 in \\cite{Genin2018}, which is straightforwardly generalized to the infinite product, for each fixed $\\alpha$ we have $(\\mu_{n, \\alpha})_n \\Rightarrow \\mu_\\alpha$. 
This is exactly pointwise convergence in the product space $\\mathfrak{S}_2$, and the same argument in reverse works for the converse.\n\\end{proof}\n\n\\section{Proofs from \\S\\ref{sec:main}} \\label{app:main}\n\nWe will use the following result to categorize sets in the weak topology.\n\\begin{lemma}\\label{prop:probiscontinuous}\n If $X \\subset \\vartheta$ is a basic clopen,\n the map $p_X : (\\mathfrak{S}, \\tau^{\\mathrm{w}}) \\to ([0, 1], \\tau)$ sending $\\mu \\mapsto \\mu(X)$ is continuous and open (in its image), where $\\tau$ is as usual on $[0, 1] \\subset \\mathbb{R}$.\n\\end{lemma}\n\\begin{proof}\nContinuous: the preimage of the basic open $(r_1, r_2) \\cap p_X(\\mathfrak{S})$ where $r_1, r_2 \\in \\mathbb{Q}$ is $\\{ \\mu: \\mu(X) > r_1 \\} \\cap \\{ \\mu : \\mu(X) < r_2 \\} = \\{ \\mu: \\mu(X) > r_1 \\} \\cap \\{ \\mu : \\mu(\\vartheta \\setminus X) > 1- r_2 \\}$, a finite intersection of the subbasic sets \\eqref{subbasis-weak} from \\S{}\\ref{sec:weaktopology}.\nSee also \\citet[Corollary~17.21]{Kechris1995}.\n\nOpen: if $X = \\varnothing$ or $\\vartheta$, then $p_X(\\mathfrak{S}) = \\{0\\}$ or $\\{1\\}$ resp., both open in themselves.\nElse $p_X(\\mathfrak{S}) = [0, 1]$;\nwe show any $Z = p_X\\big(\\bigcap_{i = 1}^n \\{\\mu: \\mu(X_i)>r_i\\} \\big)$ is open.\nConsider a mutually disjoint, covering $\\mathcal{D} = \\big\\{ \\bigcap_{i=0}^n Y_i : Y_0 \\in \\{X, \\vartheta \\setminus X\\},\\text{ each } Y_i \\in \\{X_i, \\vartheta \\setminus X_i\\}\\big\\}$ \nand space $\\Delta = \\{(\\mu(D))_{D \\in \\mathcal{D}} : \\mu \\in \\mathfrak{S}\\} \\subset \\mathbb{R}^{2^{n+1}}$.\nJust as in the Lemma, we have $\\textsf{p}_{S} : \\Delta \\to [0, 1]$, for each $S \\subset \\mathcal{D}$ taking $(\\mu(D))_D \\mapsto \\sum_{D \\in S} \\mu(D)$.\nNote $Z = \\textsf{p}_{\\{D: D\\cap X \\neq \\varnothing\\}}\\big( \\bigcap_{i=1}^n\\textsf{p}^{-1}_{\\{D: D\\cap X_i \\neq \\varnothing\\}}((r_i, 1]) \\big)$ so it suffices to show $\\textsf{p}_S$ is continuous and open; this is straightforward (see the end of the proof of Prop.~\\ref{prop:causalprojectioncontinuous}).\n\\end{proof}\n\n\\begin{proof}[Full proof of Lem.~\\ref{lem:goodcomeager}]\nWe show a stronger result, namely that the complement of the good set is nowhere dense.\nBy rearrangement and laws of probability we find that the second inequality in \\eqref{cns:ineq:1} is equivalent to\n\\begin{align*}\n\\mu_x(y') \n &< \\mu_{()}(x') + \\mu_{()}(x, y')\\\\\n 1- \\mu_x(y) &< \\underbrace{\\mu_{()}(x') + \\mu_{()}(x)}_1 - \\mu_{()}(x, y)\\\\\n \\mu_x(y) &> \\mu_{()}(x, y).\n\\end{align*}\nLemma~\\ref{prop:p2:characterization:binary} then entails the non-strict analogues of all four inequalities in \\eqref{cns:ineq:1}, \\eqref{cns:ineq:2} are met for any $(\\mu_\\alpha)_\\alpha \\in \\mathfrak{S}^{X \\to Y}_{2}$, so we show that converting each to an equality yields a nowhere dense set, whose finite union is also nowhere dense.\nNote that we have a continuous, open (again, refer to the end of the proof of Prop.~\\ref{prop:causalprojectioncontinuous}), and surjective observational projection $\\pi_{()}: \\mathfrak{S}^{X \\to Y}_{2} \\to \\mathfrak{P}\\big(\\chi_{\\{X, Y\\}}\\big)$,\nand the first inequality in \\eqref{cns:ineq:2} is met iff $(\\mu_\\alpha)_\\alpha \\in \\big(p_{x', y'} \\circ \\pi_{()}\\big)^{-1}(\\{0\\})$ where $p_{x', y'}$ is the map from Lemma~\\ref{prop:probiscontinuous} and $x', y'$ denotes the set $\\pi_X^{-1}(\\{x'\\}) \\cap \\pi_Y^{-1}(\\{y'\\}) \\subset \\chi_{\\{X, Y\\}}$.\nThis is nowhere dense as it is 
the preimage of the nowhere dense set $\\{0\\}\\subset [0, 1]$ under a map which is continuous and open by Lemma~\\ref{prop:probiscontinuous}. The second inequality of \\eqref{cns:ineq:2} is wholly analogous after rearrangement.\n\nAs for \\eqref{cns:ineq:1},\ndefine a function $d: \\mathfrak{S}^{X \\to Y}_2 \\to [0, 1]$ taking $(\\mu_\\alpha)_\\alpha \\mapsto \\mu_{X \\coloneqq x}(y') - \\mu_{()}(x, y')$; this function $d$ is continuous by Lemma~\\ref{prop:probiscontinuous} and the continuity of addition and projection, and is once again open. Note\nthat the first inequality of \\eqref{cns:ineq:1} holds iff $d((\\mu_\\alpha)_\\alpha) = 0$.\nFor any $\\mu \\in \\mathfrak{S}_3^{X}$ such that $(\\varpi^{X \\to Y}_{2}\\circ \\varpi_2)(\\mu) = (\\mu_\\alpha)_\\alpha$, note that $d((\\mu_\\alpha)_\\alpha) = \\mu\\big(x', y'_x\\big)$ where $x', y'_x$ abbreviates the basic set $\\pi_{((), X)}^{-1}(\\{x'\\}) \\cap \\pi_{(X \\coloneqq x, Y)}^{-1}(\\{y'\\}) \\in \\mathcal{B}(\\chi_{A \\times \\*V})$.\nThus $d$ is surjective, so that $d^{-1}(\\{0\\})$ is nowhere dense since $\\{0\\} \\subset [0, 1]$ is nowhere dense.\nThe second inequality in \\eqref{cns:ineq:1} is again totally analogous.\n\\end{proof}\n\n\\begin{proof}[Proof of Lem.~\\ref{lem:separation}]\nAbbreviate $\\mu_3$ as $\\mu$, and without loss take $\\mu \\in \\mathfrak{S}^\\prec_{\\mathrm{std}}$.\nNote that \\eqref{cns:ineq:1}, \\eqref{cns:ineq:2} entail\n\\begin{equation*}\n0 < \\mu(x', y'_x) < \\mu(x'), \\quad 0 < \\mu(x', y'_{x'}) < \\mu(x').\n\\end{equation*}\nand therefore\n\\begin{align*}\n0 < \\mu\\big(\\pi^{-1}_{((), X)}(\\{x'\\}) \\cap \\pi^{-1}_{(x^*, Y)}(\\{1\\})\\big) < \\mu\\big(\\pi^{-1}_{((), X)}(\\{x'\\})\\big)\n\\end{align*}\nfor each $x^* \\in \\chi_X = \\{0, 1\\}$.\nIn turn this \nentails that there are some values $y_{0}, y_{1} \\in \\{ 0, 1 \\}$ such that\n$\\mu(\\Omega_1) > 0$, $\\mu(\\Omega_2) > 0$\nwhere the disjoint sets $\\{\\Omega_i\\}_i$ are defined as\n\\begin{align*}\n\\Omega_1 &= \\pi^{-1}_{((), X)}(\\{x'\\}) \\cap \\pi^{-1}_{(X \\coloneqq 0, Y)} (\\{y_0\\}) \\cap \\pi^{-1}_{(X \\coloneqq 1, Y)} (\\{y_1\\})\\\\\n\\Omega_2 &= \\pi^{-1}_{((), X)}(\\{x'\\}) \\cap \\pi^{-1}_{(X \\coloneqq 0, Y)} (\\{y^\\dagger_0\\}) \\cap \\pi^{-1}_{(X \\coloneqq 1, Y)} (\\{y^\\dagger_1\\})\n\\end{align*}\nwhere in the second line, $y_0^\\dagger = 1-y_0$ and $y_1^\\dagger = 1 - y_1$.\nNote that for $i = 1, 2$ we have conditional measures $\\mu_i(S_i) = \\frac{\\mu(S_i)}{\\mu(\\Omega_i)}$ for $S_i \\in \\mathcal{B}(\\Omega_i)$; further, $\\Omega_i$ is Polish, since each is clopen.\nThis implies $\\Omega_i$ is a standard atomless (since $\\mu$ is) probability space under $\\mu_i$.\nBy \\citet[Thm.~17.41]{Kechris1995}, there are Borel isomorphisms $f_i : \\Omega_i \\hookdoubleheadrightarrow [0, 1]$ pushing $\\mu_i$ forward to Lebesgue measure $\\lambda$, i.e., $\\mu_i(f_i^{-1}(B)) = \\lambda(B)$ for $B \\in \\mathcal{B}([0, 1])$.\nThus $g = f_2^{-1} \\circ f_1 : \\Omega_1 \\hookdoubleheadrightarrow \\Omega_2$ is $\\mu_i$-preserving: for $X_1 \\in \\mathcal{B}(\\Omega_1)$,\n\\begin{align}\n\\mu(g(X_1)) = \\frac{\\mu(\\Omega_2)}{\\mu(\\Omega_1)} \\mu(X_1).\\label{eqn:NOCBKNantKPlTpiViI0=}\n\\end{align}\n\nConsider $\\mu' = \\varpi_3(\\mathcal{M}')$ for a new $\\mathcal{M}' \\in \\mathfrak{M}_\\prec$, given as follows. 
Its exogenous valuation space is $\\chi_{\\*U} = \\Omega'$ where we define the sample space $\\Omega' = \\texttt{\\textbf{F}}(\\*V) \\times \\{ \\mathrm{T}, \\mathrm{H} \\}$; that is, a new exogenous variable representing a coin flip is added to some representation of the choice of deterministic standard form mechanisms.\nFix constants $\\varepsilon_1, \\varepsilon_2 \\in (0, 1)$ with $\\varepsilon_1 \\cdot \\mu(\\Omega_1) = \\varepsilon_2 \\cdot \\mu(\\Omega_2)$ and define its exogenous noise distribution $P$ by\n\\begin{equation}\\label{eq:modprobz}\nP(X \\times \\{\\mathrm{S}\\}) = \n\\begin{cases}\n(1-\\varepsilon_1)\\cdot \\mu(X), & X \\subset \\Omega_1, \\mathrm{S} = \\mathrm{T}\\\\\n\\varepsilon_1 \\cdot \\mu(X), & X \\subset \\Omega_1, \\mathrm{S} = \\mathrm{H}\\\\\n(1-\\varepsilon_2)\\cdot\\mu(X), & X \\subset \\Omega_2, \\mathrm{S} = \\mathrm{T}\\\\\n\\varepsilon_2 \\cdot \\mu(X), & X \\subset \\Omega_2, \\mathrm{S} = \\mathrm{H}\\\\\n\\mu(X), & X \\subset \\texttt{\\textbf{F}}(\\*V) \\setminus (\\Omega_1 \\cup \\Omega_2), \\mathrm{S} = \\mathrm{T}\\\\\n0, & X \\subset \\texttt{\\textbf{F}}(\\*V) \\setminus (\\Omega_1 \\cup \\Omega_2), \\mathrm{S} = \\mathrm{H}\n\\end{cases}.\n\\end{equation}\nWhere $\\texttt{\\textbf{f}} \\in \\texttt{\\textbf{F}}(\\*V)$ and $V \\in \\*V$ write $\\texttt{f}_V$ for the deterministic mechanism (of signature $\\chi_{\\mathbf{Pred}(V)} \\to \\chi_V$) for $V$ in $\\texttt{f}$. (Note that each $\\texttt{f}$ is just an indexed collection of such mechanisms $\\texttt{f}_V$.)\nThe function $f'_V$ in $\\mathcal{M}'$ is defined at the initial variable $X$ as\n$f'_X(\\texttt{\\textbf{f}}, \\mathrm{S}) = \\texttt{\\textbf{f}}_X$ for both values of $\\mathrm{S}$,\nand for $V \\neq X$ is defined as follows, where $\\*p \\in \\mathbf{Pred}(V)$:\n\\begin{equation} \\label{eqn:modseqz}\n{f}'_V\\big(\\*p, (\\texttt{\\textbf{f}}, \\mathrm{S})\\big) =\n \\begin{cases}\n (g(\\texttt{\\textbf{f}}))_V(\\*p), & \\texttt{\\textbf{f}} \\in\\Omega_1, \\mathrm{S} = \\mathrm{H},\\, \\pi_X(\\*p) = x\\\\\n (g^{-1}(\\texttt{\\textbf{f}}))_V(\\*p), & \\texttt{\\textbf{f}} \\in\\Omega_2, \\mathrm{S} = \\mathrm{H},\\, \\pi_X(\\*p) = x\\\\\n \\texttt{\\textbf{f}}_V(\\*p), &\\textnormal{otherwise}\n \\end{cases}.\n \\end{equation}\n\nWe claim that $\\varpi_2(\\mu') = \\varpi_2(\\mu)$.\nIt suffices to show for any $\\*Z \\coloneqq \\*z \\in A$ and ${\\mathbf{w}} \\in \\chi_{{\\*W}}$, $\\*W$ finite,\nwe have\n\\begin{equation}\n \\mu(\\theta) = \\mu'(\\theta), \\text{ where }\\theta = \\bigcap_{W \\in {\\*W}}\\pi^{-1}_{(\\*Z \\coloneqq \\*z, W)}(\\{\\pi_W(\\*w)\\}).\\label{eqn:IOXCVOIXCJVSOIJDklfsjdlfksdj}\n\\end{equation}\nAssume $\\pi_Z(\\*w) = \\pi_Z(\\*z)$ for every $Z \\in \\*Z \\cap {\\*W}$, since both sides of \\eqref{eqn:IOXCVOIXCJVSOIJDklfsjdlfksdj} trivially vanish otherwise.\nWhere $\\texttt{\\textbf{f}} \\in \\texttt{\\textbf{F}}(\\*V)$\nwrite, e.g., $\\texttt{\\textbf{f}} \\models \\theta$ if $m^{{\\mathcal{M}}_A}(\\texttt{\\textbf{f}}) \\in \\theta$, where ${\\mathcal{M}}$ is a standard form model (Def.~\\ref{def:standardform}); for\n$\\omega' \\in \\Omega'$ write $\\omega' \\models' \\theta$\nif $m^{{\\mathcal{M}}'_A}(\\omega') \\in \\theta$. 
By the last two cases of \\eqref{eqn:modseqz} we have\n\\begin{align}\n\\mu'(\\theta)\n&= \\sum_{\\mathrm{S} = \\mathrm{T}, \\mathrm{H}}P\\big( \\{ (\\texttt{\\textbf{f}}, \\mathrm{S}) \\in \\Omega' : (\\texttt{\\textbf{f}}, \\mathrm{S}) \\models' \\theta \\} \\big)\\nonumber\\\\\n&= \\mu\\big( \\{ \\texttt{\\textbf{f}} \\in \\texttt{\\textbf{F}}(\\*V) \\setminus (\\Omega_1 \\cup \\Omega_2) : \\texttt{\\textbf{f}} \\models \\theta \\}\\big) +\n\\sum_{\\substack{\\mathrm{S} = \\mathrm{T}, \\mathrm{H}\\\\ i = 1, 2}} P\\big( \\{ (\\texttt{\\textbf{f}}, \\mathrm{S}) \\in \\Omega' : \\texttt{\\textbf{f}} \\in \\Omega_i, (\\texttt{\\textbf{f}}, \\mathrm{S}) \\models' \\theta \\} \\big).\\label{eqn:CXJSDFJKSDKFJKSDVC}\n\\end{align}\nApplying the first four cases of \\eqref{eq:modprobz} and the third case of \\eqref{eqn:modseqz}, the second term of \\eqref{eqn:CXJSDFJKSDKFJKSDVC} becomes\n\\begin{equation}\n\\sum_i \\Big[\\varepsilon_i \\cdot \\mu\\big( \\{ \\texttt{\\textbf{f}} \\in \\Omega_i : (\\texttt{\\textbf{f}}, \\mathrm{H}) \\models' \\theta \\} \\big) +\n \\left(1- \\varepsilon_i\\right) \\cdot\\mu\\big( \\{ \\texttt{\\textbf{f}} \\in \\Omega_i : \\texttt{\\textbf{f}} \\models \\theta \\} \\big)\\Big] .\\label{eqn:ckxljvXIOCJVOX}\n\\end{equation}\nEither $X \\in \\*Z$ and $\\pi_X(\\*z) = x$, or not.\nIn the former case: defining $X_i = \\{\\texttt{\\textbf{f}} \\in \\Omega_i : \\texttt{\\textbf{f}} \\models \\theta\\}$ for each $i = 1, 2$,\nthe first two cases of \\eqref{eqn:modseqz} yield that\n\\begin{align}\n\\{ \\texttt{\\textbf{f}} \\in \\Omega_1 : (\\texttt{\\textbf{f}}, \\mathrm{H}) \\models' \\theta \\} &= \\{\\texttt{\\textbf{f}} \\in \\Omega_1 : g(\\texttt{\\textbf{f}}) \\models \\theta\\}\n= g^{-1}(X_2 )\\nonumber\\\\\n\\{ \\texttt{\\textbf{f}} \\in \\Omega_2 : (\\texttt{\\textbf{f}}, \\mathrm{H}) \\models' \\theta \\} &= \\{\\texttt{\\textbf{f}} \\in \\Omega_2 : g^{-1}(\\texttt{\\textbf{f}}) \\models \\theta\\}\n= g(X_1).\\label{eqn:yDuRPR\/5r83+sSdJ2Iw=}\n\\end{align}\nApplying \\eqref{eqn:yDuRPR\/5r83+sSdJ2Iw=} and \\eqref{eqn:NOCBKNantKPlTpiViI0=}, \\eqref{eqn:ckxljvXIOCJVOX} becomes\n\\begin{gather}\n\\varepsilon_1 \\cdot \\frac{\\mu(\\Omega_1)}{\\mu(\\Omega_2)} \\cdot \\mu\\big( X_2 \\big)\n + \\left(1- \\varepsilon_1\\right) \\cdot\\mu\\big( X_1 \\big)\n + \\varepsilon_2 \\cdot\\frac{\\mu(\\Omega_2)}{\\mu(\\Omega_1)} \\cdot \\mu\\big( X_1\\big)\n + \\left(1- \\varepsilon_2\\right) \\cdot\\mu\\big( X_2 \\big)\\nonumber\\\\\n = \\mu(X_1) + \\mu(X_2),\\label{eqn:CxoFLyrcYfY8on7xY\/A=}\n\\end{gather}\nthe final cancellation by choice of $\\varepsilon_1, \\varepsilon_2$.\nIn the latter case: since $m^{{\\mathcal{M}}}(\\texttt{\\textbf{f}}) \\in \\pi^{-1}_X(\\{x'\\})$ for any $\\texttt{\\textbf{f}} \\in \\Omega_1 \\cup \\Omega_2$, the third case of \\eqref{eqn:modseqz} gives $\\{ \\texttt{\\textbf{f}} \\in \\Omega_i : (\\texttt{\\textbf{f}}, \\mathrm{H}) \\models'\\theta\\} = X_i$. 
Thus \\eqref{eqn:ckxljvXIOCJVOX} becomes \\eqref{eqn:CxoFLyrcYfY8on7xY\/A=} in either case.\nPutting in \\eqref{eqn:CxoFLyrcYfY8on7xY\/A=} as the second term in \\eqref{eqn:CXJSDFJKSDKFJKSDVC}, we find $\\mu(\\theta) = \\mu'(\\theta)$.\n\nNow we claim $\\mu(\\zeta) \\neq \\mu'(\\zeta)$\nfor $\\zeta = \\zeta_0 \\cap \\zeta_1$ where\n$\\zeta_1 = \\pi^{-1}_{(X \\coloneqq 1, Y)}(\\{y_1\\})$ and $\\zeta_0 = \\pi^{-1}_{(X \\coloneqq 0, Y)}(\\{y_0\\})$.\nWe have\n\\begin{align}\n\\mu'(\\zeta) = &\\mu\\big( \\{ \\texttt{\\textbf{f}} \\in \\Omega \\setminus (\\Omega_1 \\cup \\Omega_2) : \\texttt{\\textbf{f}} \\models \\zeta \\} \\big)\\nonumber\\\\\n&+ \\sum_{i=1,2} \\Big[\\varepsilon_i \\cdot \\mu\\big( \\{ \\texttt{\\textbf{f}} \\in \\Omega_i : (\\texttt{\\textbf{f}}, \\mathrm{H}) \\models' \\zeta \\} \\big) +\n\\left(1- \\varepsilon_i\\right) \\cdot\\mu\\big( \\{ \\texttt{\\textbf{f}} \\in \\Omega_i : \\texttt{\\textbf{f}} \\models \\zeta \\} \\big)\\Big].\\label{eqn:XT26+J4OsmYVQhi6+dY=}\n\\end{align}\nFirst suppose that $x = 0$.\nIf $\\texttt{\\textbf{f}} \\in \\Omega_1$, then\nnote that $(\\texttt{\\textbf{f}}, \\mathrm{H}) \\models' \\zeta_0$ iff $g(\\texttt{\\textbf{f}}) \\models \\zeta_0$, but this is never so, since $g(\\texttt{\\textbf{f}}) \\in \\Omega_2$.\nIf $\\texttt{\\textbf{f}} \\in \\Omega_2$,\nthen\n$(\\texttt{\\textbf{f}}, \\mathrm{H}) \\models' \\zeta_1$ iff $\\texttt{\\textbf{f}} \\models \\zeta_1$, which is never so again by choice of $\\Omega_2$.\nIf $x = 1$ then we find that $(\\texttt{\\textbf{f}}, \\mathrm{H}) \\not\\models \\zeta_1$ (if $\\texttt{\\textbf{f}} \\in \\Omega_1$) and $(\\texttt{\\textbf{f}}, \\mathrm{H}) \\not\\models \\zeta_0$ (if $\\texttt{\\textbf{f}} \\in \\Omega_2$).\nThus $(\\texttt{\\textbf{f}}, \\mathrm{H}) \\not\\vDash ' \\zeta$ for any $\\texttt{\\textbf{f}} \\in \\Omega_1 \\cup \\Omega_2$ and \\eqref{eqn:XT26+J4OsmYVQhi6+dY=} becomes\n\\begin{equation*}\n \\mu\\big( \\{ \\texttt{\\textbf{f}} \\in \\Omega : \\texttt{\\textbf{f}} \\models \\zeta \\} \\big)\n - \\sum_{i=1,2} \\varepsilon_i \\cdot \\mu\\big( \\{ \\texttt{\\textbf{f}} \\in \\Omega_i : \\texttt{\\textbf{f}} \\models \\zeta \\} \\big)\n = \\mu\\big( \\{ \\texttt{\\textbf{f}} \\in \\Omega : \\texttt{\\textbf{f}} \\models \\zeta \\} \\big) - \\varepsilon_1 \\cdot\\mu(\\Omega_1) < \\mu(\\zeta).\n\\end{equation*}\n\nIt is straightforward to check (via casework on the values $y_0$, $y_1$) that $\\mu$ and $\\mu'$ disagree also on the PNS: $\\mu(y_x, y'_{x'}) \\neq \\mu'(y_x, y'_{x'})$ as well as its converse.\nAs for the probability of sufficiency (Definition \\ref{def:probcaus}), note that\n\\begin{align*}\n P(y_x \\mid x', y') = \\frac{P(y_x, x', y'_{x'}) + \\overbrace{P(y_x, y'_x, x', x)}^0}{P(x', y')}\n\\end{align*}\nand it is again easily seen (given the definition of the $\\Omega_i$) that $\\mu(y_x, x', y'_{x'}) \\neq \\mu'(y_x, x', y'_{x'})$ while the two measures agree on the denominator; similar reasoning shows disagreement on the probability of enablement, since\n\\begin{equation*}\n P(y_x \\mid y') = \\frac{P(y_x, y'_{x'}, x') + \\overbrace{P(y_x, y'_{x}, x)}^0}{P(y')}. \\qedhere\n\\end{equation*}\n \n \n\\end{proof}\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\\label{sec:intro}\nNon-equilibrium quantum many-body systems are nowadays routinely probed in experiments with ultracold atoms with unprecedented control over their parameters such as particle number, interaction strength, and external potentials. 
A plethora of non-equilibrium systems has been realized to address a wide variety of physics questions, including Anderson localization of Bose-Einstein condensates (BECs) \\cite{clement_suppression_2005, fort_effect_2005, schulte_routes_2005, billy_direct_2008, roati_anderson_2008}, many-body localization in disordered lattices \\cite{schreiber_observation_2015, lukin_probing_2019}, pre-thermalization of one-dimensional (1D) BECs \\cite{gring_relaxation_2012}, and quench-dynamics of spin-model systems \\cite{bernien_probing_2017}, to name just a few.\\\\ \nAt the same time, the theoretical understanding of non-equilibrium quantum many-body systems still lags behind that obtained for stationary or equilibrium systems. In particular, the role of ``classical'' chaos on non-equilibrium quantum many-body systems is currently subject of intense scrutiny, see e.g.~\\cite{ho_periodic_2019, hallam_lyapunov_2019, lewis-swan_unifying_2019, xu_does_2020}. For a bosonic quantum many-body system, the mean-field limit can be viewed as the classical limit in which the particle creation and annihilation operators lose their quantum properties and start to act as classical fields. The mean-field limit, typically involves non-linear (partial) differential equations, and can exhibit chaos with exponential separation in Hilbert space characterized by a positive Lyapunov exponent $\\lambda$ \\cite{brezinova_wave_2011, cassidy_threshold_2009}. One of the open questions is, whether wave chaos, more specifically the positive $\\lambda$, imprints itself onto the dynamics of a quantum many-body system even at finite particle numbers and how such an imprint could be measured. Recently, out-of-time-order correlators have been suggested as suitable probes for a positive $\\lambda$ (see e.g.~\\cite{Kitaev2014,sekino_fast_2008,shenker_black_2014,maldacena_bound_2016,lewis-swan_unifying_2019, xu_does_2020}). \\\\\nIn this paper, we find an imprint of chaos on a different observable within a paradigmatic bosonic system: A quasi 1D BEC initially trapped harmonically and then released to expand in a shallow disordered or periodic potential. We show that the fraction of non-condensed particles increases exponentially over time and that the associated rate is given by the Lyapunov exponent $\\lambda$ obtained from mean-field chaos. The depletion occurs on time scales during which most of the initial interaction energy is converted into kinetic energy, and comes to a halt at times close to the so-called scrambling time (or Ehrenfest time) \\cite{rammensee_many-body_2018, tomsovic_post-ehrenfest_2018, maldacena_bound_2016}. We observe chaos-induced depletion both in shallow disordered as well as periodic potentials showing that the effect is quite general and does not rely on the intrinsic randomness of a disordered landscape.\\\\\nFinally, we demonstrate that the condensate depletion and thus the Lyapunov exponent $\\lambda$ is accessible experimentally through the analysis of fluctuations of the total particle density in momentum space. While the condensed part is coherent and leads to interference fringes in the total density, the non-condensed part is incoherent and piles up over time as a non-fluctuating background. Analyzing the interference fringes after time of flight would thus allow to extract experimentally the fraction of non-condensed particles as a function of time and compare to the theoretically obtained $\\lambda$. 
Condensate depletion thus offers itself as an experimentally accessible probe to investigate the role of (mean-field or classical) chaos in non-equilibrium quantum matter.\\\\\nWhile our findings are generally applicable to bosonic systems that exhibit mean-field chaos, we pick one specific system already realized experimentally \\cite{billy_direct_2008} to obtain numerical results. The initially harmonically trapped quasi 1D BEC of $N$ $^{87}$Rb atoms is released at $t=0$ to expand in a shallow potential. As units we use $\\hbar=m=\\omega_0=1$, with $\\omega_0$ being the frequency of the initial longitudinal harmonic trap which amounts to a time unit of $t_0\\approx30$ms and a space unit of $x_0\\approx4.6\\mu$m. For the number of atoms, we take $N=1.2\\times 10^4$ following \\cite{billy_direct_2008}, as well as larger values, i.e.~$N=1.2\\times10^5$ and $N=1.2\\times10^6$ to investigate the effect of varying $N$.\\\\\nDescribing the quasi-1D system on a mean-field level the Gross-Pitaevskii equation (GPE) takes the form\n\\begin{equation}\ni\\frac{\\partial \\psi(x,t)}{\\partial t} = \n\\left(-\\frac{1}{2}\\frac{\\partial^2}{\\partial x^2}\n+V(x)+g|\\psi(x,t)|^2\\right)\\psi(x,t),\n\\label{eq:gpe}\n\\end{equation}\nwhere the nonlinearity is $g\\approx400$ with the above parameters and the normalization $\\int dx|\\psi(x,t)|^2=1$. The potential $V(x)$ corresponds to the harmonic potential at $t=0$, and to the periodic or disordered speckle potential at $t>0$ with amplitude much smaller than the mean energy per particle $e$. This system exhibits chaos on the mean-field level \\cite{brezinova_wave_2011}: Two wave functions, $\\psi_a(x,0)$ and $\\psi_{b}(x,0)$, respectively, initially very close to each other in Hilbert space as measured by a distance norm, separate exponentially in time until quasi-orthogonality is reached, see Fig.~\\ref{fig:N6}.\n\\begin{figure}[t]\n\t\\includegraphics[width=\\columnwidth]{N6_grid_bog_comp_log.pdf}\n\t\\caption{$d^{(2)}$ (Eq.~\\ref{eq:d2}) for two initial conditions obtained by linear distortion of the mean-field ground state (black dashed-dotted line), distance function $\\bar d^{(2)}$ (Eq.~\\ref{eq:bar_d2}) averaged over the stochastic ensemble (triangles), and fraction of incoherent particles $n_\\text{incoh}$ (squares). Data shown at integer values of $t$ (except for $d^{(2)}$), the lines serve as guides for the eye. The red dashed lines mark an exponential increase with $\\lambda=1.44t_0^{-1}$. Number of particles is $N=1.2\\times 10^6$, periodic potential $V(x)=V_P\\cos{(k_Px)}$ used with $V_P=0.3e$, $k_P=\\pi\/3\\xi$ and $\\xi$ the healing length.}\n\t\\label{fig:N6}\n\\end{figure}\nThe rate of the exponential growth is given by the Lyapunov exponent $\\lambda$. For the distance norm we take\n\\begin{align}\nd^{(2)}_{a,b}(t) &= \\frac{1}{2}\\int dx |\\psi_a(x,t)-\\psi_{b}(x,t)|^2 .\n\\label{eq:d2}\n\\end{align}\nThe Lyapunov exponent $\\lambda$ shows systematic trends as a function of the parameters of the system: It vanishes in absence of inter-particle interactions for arbitrary potentials, as well as in presence of inter-particle interactions in free space (i.e.~without any potential). 
At fixed period of the periodic potential $k_P$, or fixed correlation length $\\sigma$ of the speckle potential, it increases both with nonlinearity and the potential amplitude \\cite{brezinova_wave_2011}, see the supplemental material (SM).\\\\\nTo find imprints of chaos on measurable observables of the quantum many-body system with finite $N$ a theory beyond mean-field has to be applied. The multi-configurational time-dependent Hartree method for bosons (see, e.g.~\\cite{alon_multiconfigurational_2008,lode_colloquium_2020}), while being in principle exact for a sufficient number of orbitals, suffers from the exponentially growing configuration space. For the particle numbers considered, only two orbitals can be afforded numerically \\cite{brezinova_wave_2012}. As more than two orbitals are populated during the propagation, the MCTHB method entails a large and not easy to quantify error. We, therefore, resort to the truncated Wigner approximation (TWA), see e.g.~\\cite{Steel1998,Sinatra2002,blakie_dynamics_2008,dujardin_breakdown_2016} which employs the Wigner representation $W$ for (in general) a many-body density matrix $\\hat \\rho$\n\\begin{align}\nW(&\\psi_1,\\ldots,\\psi_M,\\psi_1^* \\ldots,\\psi_M^*) = \\frac{1}{\\pi^{2M}}\\nonumber \\\\\n&\\times \\int dz^{2M}\\Tr\\left[\\hat \\rho e^{i\\sum_j\\left( z_j^*\\hat\\psi_j^\\dagger\n+iz_j\\hat{\\psi_j}\\right)}\n\\right]\ne^{-i\\sum_j\\left(z_j^*\\psi_j^*-iz_j\\psi_j\\right)}.\n\\end{align}\n$W$ can be viewed as a phase-space representation of the quantum many-body state. $M$ is the total number of modes in which particles can be created or annihilated and $\\hat \\psi_j^\\dagger$ and $\\hat \\psi_j$ are the corresponding creation and annihilation operators, respectively. In general, particles can be created or annihilated in an arbitrary single particle mode denoted by $j$. We choose $j$ to represent a specific point in space assuming for simplicity an equidistant spatial discretization. We have made sure, however, that the spatial grid is fine enough, i.e.~the distance between grid points $dx<1\/k_\\text{max}$ with $k_\\text{max}$ being the largest relevant momentum in the system, such that we are still in the continuum limit.\\\\ \nHaving $W$ as a function of time at disposal would allow to evaluate all expectation values of symmetrized products of creation and annihilation operators. The exact equation of motion for $W$ can be obtained using von Neumann's equation of motion for $\\hat\\rho$. However, it proves to be intractable, such that approximations have to be invoked. Within the TWA \\cite{Steel1998, Sinatra2002, blakie_dynamics_2008}, the \ntime evolution of $W$ is sampled stochastically with an ensemble of trajectories obeying the GPE, Eq.~\\ref{eq:gpe}. (The only modification comes from the fact that we have to discretize space such that the second derivative in Eq.~\\ref{eq:gpe} has to be replaced by its second-order finite difference approximation.) It has been shown \\cite{schlagheck_enhancement_2019, tomsovic_post-ehrenfest_2018, dujardin_describing_2015} that this approximation amounts to neglecting non-classical trajectories as well as interferences between distinct trajectories in many-body Hilbert space. The question then arises at which times do these neglected effects start to play a role and become non-negligible. For single- or few-particle systems, sampling the time evolution with classical trajectories is accurate up to the point where an initially maximally localized state has spread over the whole system. 
This time is called the Ehrenfest time $\\tau_E$ \\cite{ehrenfest_bemerkung_1927} which is, in presence of classical chaos, inversely proportional to $\\lambda$ and grows logarithmically with $1\/\\hbar$. This concept can be extended into the many-body regime for bosonic systems with $\\hbar$ being replaced by the effective Planck constant $\\hbar_\\text{eff}\\simeq 1\/N$. Following the lines of \\cite{rammensee_many-body_2018, tomsovic_post-ehrenfest_2018} we thus assume that our results are accurate up to the time $\\tau_E=\\frac{1}{\\lambda}\\log{N}$.\\\\\nThe initial conditions within the stochastic ensemble of trajectories are constructed such as to correctly sample the phase-space distribution of the underlying initial quantum state, which in our case is a BEC at zero temperature. The stochasticity of the ensemble comes solely from the sampling of this initial state since Eq.~\\ref{eq:gpe} is completely deterministic. We follow \\cite{Sinatra2002, blakie_dynamics_2008, Steel1998} and construct the initial wave functions by adding to the mean-field ground state in the harmonic trap vacuum fluctuations in form of Gaussian noise (see the SM).\\\\\nThe most relevant observables in our case will be the coherent part of the particle density given by $\\rho_\\text{coh}(x_j,t) = |\\langle \\hat \\psi_j(t) \\rangle|^2$, as well as the one-particle reduced density matrix (1RDM) $D_{ij}(t)=\\langle \\hat\\psi_i^\\dagger(t)\\hat\\psi_j(t)\\rangle$. The term ``coherent\" in defining $\\rho_\\text{coh}(x_j,t)$ points to the fact that only a macroscopically occupied state with a spatially non-random phase will survive the averaging. $\\rho_\\text{coh}(x_j,t)$ can therefore be associated with the density of condensed particles. Alternatively \\cite{penrose_bose-einstein_1956,leggett_bose-einstein_2001}, the condensate state is defined through a macroscopic occupation of one eigenstate of the 1RDM. We show in the SM that these two definitions of the condensate give practically identical results for the depletion over time such that we use throughout the remainder of the paper the term coherent synonymously to condensed.\\\\\nWithin the stochastic ensemble of trajectories, expectation values can be calculated as\n$\\langle \\hat\\psi_j(t)\\rangle = \\frac{1}{N_s}\\sum_{s=1}^{N_s}\\psi_s(x_j,t)\n$,\nwith $N_s$ ($\\gg1$) being the number of Gross-Pitaevskii trajectories $\\psi_{s}(x_j,t)$ within the ensemble. To calculate the 1RDM, one has to rewrite $\\langle \\hat\\psi_i^\\dagger\\hat\\psi_j + \\hat \\psi_j\\hat\\psi_i^\\dagger\\rangle = \n2\\langle \\hat\\psi_i^\\dagger\\hat\\psi_j \\rangle + \\frac{1}{Ndx}\\delta_{ij}$ using the commutator relation $[\\hat\\psi_j,\\hat\\psi_i^\\dagger] = \\frac{1}{Ndx}\\delta_{ij}$.\nThe term $\\delta_{ij}\/dx$ is the discrete version of the $\\delta$-function for a continuous system, and \nthe factor $1\/N$ comes from our normalization of the wave functions of Eq.~\\ref{eq:gpe} to one, or equivalently, the creation and annihilation operators to $1\/N$.\nThe 1RDM is then given by\n$D_{ij}(t) = \\frac{1}{N_s}\\sum_{s=1}^{N_s}\\psi^{*}_s(x_i,t)\\psi_s(x_j,t) - \\frac{1}{2Ndx}\\delta_{ij}\n$,\nand the total particle density is $\\rho_\\text{total}(x_j,t) = D_{jj}(t)$. The fraction of coherent particles is determined by \n$n_\\text{coh}(t)=\\sum_j dx\\rho_\\text{coh}(x_j,t)$. Accordingly, the fraction of incoherent particles is $n_\\text{incoh}(t) = 1-n_\\text{coh}(t)$. 
The crucial observation now is that\n\\begin{align}\nn_\\text{incoh}(t) = \\frac{1}{N_s^2}\\sum_{s,r}d^{(2)}_{s,r}(t)\n-\\frac{L}{2Ndx} = \\bar d^{(2)}(t),\n\\label{eq:bar_d2}\n\\end{align}\nwhich we obtain using Eq.~\\ref{eq:d2}, taking into account that the norm of the wave functions within the ensemble is $1\/N_s\\sum_s\\sum_j dx |\\psi_s(x_j)|^2 = 1 + L\/2Ndx$ with $L$ the length of the system. (Note that the term $L\/dx$ counts the number of single-particle modes to which vacuum fluctuations have been added.) For a detailed derivation, see the SM. The right-hand side of Eq.~\\ref{eq:bar_d2} is (apart from a constant term) the arithmetic mean over the distance function between all pairs of mean-field trajectories, and we denote it with $\\bar d^{(2)}(t)$.\\\\\nWe have explicitly verified the equality of Eq.~\\ref{eq:bar_d2} numerically by independently calculating the arithmetic mean of the distance function, $\\bar d^{(2)}(t)$, and comparing it to $n_\\text{incoh}(t)$, see Fig.~\\ref{fig:N6}.\n\\begin{figure}[t]\n\t\\includegraphics[width=\\columnwidth]{N_number_comp_combined.pdf}\n\t\\caption{Left column (a) and (b) for the periodic potential with $V_P=0.3e$ and $k_P=\\pi\/3\\xi$: (a) Fraction of incoherent particles $n_\\text{incoh}$ for different particle numbers $N$ ($N$ being $1.2$ times the number near each curve), and $d^{(2)}$ for the linearly distorted initial conditions. Red dashed lines correspond to an exponential increase with $\\lambda=1.44 t_0^{-1}$. (b) Linear plot of (a) including $n^\\text{total}_\\text{low-env}$ extracted from the total density only, see Fig.~\\ref{fig:incoh_den}. The Ehrenfest time $\\tau_E=1\/\\lambda \\ln{N}$ is marked for each curve. Right column (c) and (d) same as left column but for one realization of speckle disorder with $V_D=0.3e$ and correlation length $\\sigma =0.57\\xi$. In (c) the Lyapunov exponent is $\\lambda =1.43 t_0^{-1}$. Data is shown at integer values of $t$ (except for $d^{(2)}$), the lines serve as guides for the eye.}\n\t\\label{fig:N_numb}\n\\end{figure}\n\\begin{figure}[t]\n\t\\includegraphics[width=\\columnwidth]{incoherent_density_k_2.pdf}\n\t\\caption{Total density $\\tilde\\rho_\\text{total}(k,t)$ (solid), as well as coherent $\\tilde\\rho_\\text{coh}(k,t)$ (filled), and incoherent part $\\tilde\\rho_\\text{incoh}(k,t)$ (dashed) for $N=1.2\\times10^6$ at (a) $t=5t_0$ and (b) $t=9t_0$. The orange dots mark the lower envelope of the strong fluctuations of $\\tilde\\rho_\\text{total}(k,t)$, and can be used as an accurate estimate for $\\tilde\\rho_\\text{incoh}(k,t)$. Same potential as in Fig.~\\ref{fig:N6}.}\n\t\\label{fig:incoh_den}\n\\end{figure}\nThe observed rate of exponential growth does not depend on the specific choice of the two close initial conditions such that we can clearly associate it with a Lyapunov exponent $\\lambda$. The equality between $\\bar d^{(2)}(t)$ and $n_\\text{incoh}(t)$ proves that, if the mean-field limit is chaotic, the fraction of incoherent particles will grow exponentially with a rate given exactly by the mean-field $\\lambda$. The exponentially fast depletion is thus chaos-induced, or seen from another perspective, measures mean-field chaos. 
Importantly, the exponential increase happens on shorter time scales than $\\tau_E$, i.e.~before effects neglected within the TWA start to play a role.\\\\\nWhile Fig.~\\ref{fig:N6} depicts the exponential increase for $N=1.2\\times10^6$ particles, we see the same exponential increase, i.e.~the same $\\lambda$, also for smaller particle numbers, see Fig.~\\ref{fig:N_numb}. We have varied $N$ while keeping the nonlinearity $g\\propto a_sN$ constant, which amounts to increasing the scattering length $a_s$ by the same factor $N$ is decreased, which preserves the classical phase space. Indeed, with decreasing $N$ and increasing $a_s$, the BEC naturally shows larger initial depletion, but upon expansion in the periodic potential, the same $\\lambda$ emerges.\\\\ \nWe now turn to the question of how the present chaos-induced depletion could be observed in an experiment. We analyze the total particle density in momentum space $\\tilde\\rho_\\text{total}(k,t)$ which is accessible in experiments through time-of-flight measurements, see e.g.~\\cite{gericke_high-resolution_2008, erne_universal_2018}. During the expansion of the BEC, matter waves start to scatter at the potential landscape preserving initially their phase coherence. This scattering creates fluctuations in momentum space with increasingly higher frequencies as waves originating from points increasingly farther apart in real space coherently interfere. Ultimately, the density exhibits strong fluctuations reaching down to almost zero density, provided that inelastic scattering has been negligible up until this point in time, Fig.~\\ref{fig:incoh_den} (a). During inelastic scattering particles lose energy, phase information, and with it, the ability to create interference fringes. These particles constitute the incoherent part of the density which piles up in form of an almost non-fluctuating background. Using a simple algorithm that determines the lower envelope of the fluctuations in the total density, we obtain a functional form very close to $\\tilde\\rho_\\text{incoh}(k,t)$, see Fig.~\\ref{fig:incoh_den} (b). Interpolating between the points of the lower envelope and integrating, we obtain $n^\\text{total}_\\text{low-env}$, which follows $n_\\text{incoh}$ closely, see Fig.~\\ref{fig:N_numb} (b). We emphasize that $n^\\text{total}_\\text{low-env}$ is extracted from the total density only. From Fig.~\\ref{fig:N_numb} (b) it is obvious that the extraction mechanism will work best for high particle numbers with a small scattering length (e.g., for $N=1.2\\times 10^6$ two orders of magnitude of exponential growth can be resolved). For smaller $N$ and correspondingly larger scattering lengths $a_s$ the incoherent density starts to pile up before coherent scattering produces sufficiently strong fluctuations in the coherent part of the density. Therefore, the close association of a non-fluctuating density with $\\tilde\\rho_\\text{incoh}(k,t)$ is broken initially. It becomes, however, more and more accurate over time such that, in the experiment, one could observe the behavior of $\\tilde\\rho_\\text{incoh}(k,t)$ also beyond $\\tau_E$, where interferences of many-body trajectories not included within the TWA become relevant.\\\\\nIn order to measure the incoherent fraction of the total density in an experiment it is pivotal to resolve the deep minima of the fluctuations. Half of the distance between two minima is $\\Delta k\\gtrsim 0.04 x_0^{-1}$. 
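The lower-envelope extraction used above to obtain $n^\\text{total}_\\text{low-env}$ can be summarized in a short sketch. The version below (local minima of $\\tilde\\rho_\\text{total}(k,t)$, linear interpolation, integration over $k$) is a minimal variant and not necessarily identical to the algorithm used for Figs.~\\ref{fig:incoh_den} and \\ref{fig:N_numb}; names are illustrative.
\\begin{verbatim}
import numpy as np
from scipy.signal import argrelmin
from scipy.interpolate import interp1d

def lower_envelope_fraction(k, rho_k, dk):
    # indices of the local minima of the fluctuating total density
    idx = argrelmin(rho_k, order=1)[0]
    # interpolate between the minima to get the envelope on the full grid
    envelope = interp1d(k[idx], rho_k[idx], kind='linear',
                        bounds_error=False, fill_value=0.0)(k)
    # integrate the envelope; this estimates the incoherent fraction
    return dk * envelope.sum()
\\end{verbatim}
With this in mind we return to the experimental resolution requirements.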
Assuming a linear pixel size of a CCD camera of $2\\mu$m the fluctuations could be resolved after about $300$ms time of flight. The peak amplitude of the fluctuations is $\\tilde\\rho_\\text{total}(k)\\gtrsim0.03 x_0$ leading to $\\tilde\\rho_\\text{total}(k)\\Delta k = 1.2\\times10^{-3}$ such that the number of particles within each hump is greater than $100$ for $N=1.2\\times10^5$ and $N=1.2\\times10^6$. Despite the large time of flight necessary, we believe that the here proposed extraction could be realized in state-of-the-art BEC experiments.\\\\\nFor the disorder potential, we mostly see the same behavior as for the periodic potential, see Fig.~\\ref{fig:N_numb} (c) and (d): \n\\begin{figure}[t]\n\t\\includegraphics[width=\\columnwidth]{incoherent_density_k_disorder.pdf}\n\t\\caption{Particle density in momentum space at $t=8t_0$ for the speckle disorder with $V_D=0.3e$ and $\\sigma =0.57\\xi$ averaged over $10$ realizations (additional smoothing of the curves has been applied). The vertical lines mark the Landau velocity approximated by $k_L=\\sqrt{\\mu(t)}$ with $ \\mu(t)$ the chemical potential at time $t$.}\n\t\\label{fig:incoh_den_dis}\n\\end{figure}\n$n_\\text{incoh}$ grows exponentially with $\\lambda$ independent of the particle number $N$. Note that we did not perform any averages over disorder realizations here. As to the extraction of the incoherent part of the density from $\\tilde \\rho_\\text{total}(k,t)$ there is one point worth mentioning. Due to the broad spectrum of frequencies the speckle disorder offers, within few time steps, slow particles start to be scattered coherently and intertwine with particles that have lost their coherence through inelastic (i.e.~incoherent) scattering near $k=0$. The result is a local maximum in $\\tilde \\rho_\\text{total}(k,t)$ near $k=0$, and local minima near the Landau velocity $\\pm k_L$ due to inelastic scattering out of this momentum, see Fig.~\\ref{fig:incoh_den_dis}. Since, however, slow particles scatter from positions in space close to each other, this scattering produces fluctuations with low frequencies as compared to the fluctuations observed for larger $k$. It is, therefore, impossible to identify $\\tilde \\rho_\\text{incoh}(k,t)$ near $k=0$ based on the fluctuations of the total density, initially. At later times the local maximum near $k=0$ consists of incoherent particles only such that $n^\\text{total}_\\text{low-env}$ again accurately predicts the value of $n_\\text{incoh}$, see Fig.~\\ref{fig:N_numb}. For $N=1.2\\times10^6$ the agreement between $n^\\text{total}_\\text{low-env}$ and $n_\\text{incoh}$ is accurate only after $t\\gtrsim12t_0$ such that we refrained from plotting it.\\\\\nIn conclusion, we have shown that a BEC expanding in a shallow periodic or disordered potential is subject to an exponentially growing depletion, and that the depletion is characterized by the ``classical\" (mean-field) Lyapunov exponent $\\lambda$. We have thus found a new observable that allows to identify the finger-print of classical chaos on the non-equilibrium many-body dynamics of a quantum system with a finite number of particles. In addition, we have shown how our results could be measured in an experiment by analyzing the visibility of the fluctuations of the particle density after time of flight. 
This opens up the possibility to verify our predictions experimentally for a real many-body system.\\\\\n\nWe thank Joachim Burgd\u00f6rfer, David Gu\u00e9ry-Odelin, Dana Orsolits, Thorsten Schumm, and Juan-Diego Urbina for helpful discussions. This work has been supported by the WWTF grant MA14-002. S. D. acknowledges support by the International Max Plank Research School of Advanced Photon Science (IMPRS-APS). Calculations\nwere performed on the Vienna Scientific Cluster (VSC3). \n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\\label{Introduction}\n\nMost of efficient cetacean localisation systems are based on the Time Delay Of Arrival (TDOA) estimation from detected\\footnote{As click\/whistles detector, matching filter is often prefered} animal's click\/whistles signals \\cite{nosalGT,fredericbenard_herveglotin_2009}. Long-base hydrophones'array is involving several fixed, efficient but expensive hydrophones \\cite{3Dtracking} while short-base version is requiring a precise array's self-localization to deliver accurate results. Recently (see \\cite{IFA_DCL}), based on Leroy's attenuation model versus frequencies \\cite{Leroy}, a range estimator have been proposed. This approach is working on the detected most powerful pulse inside the click signal and is delivering a rough range' estimate robust to head orientation variation of the animal. Our purpose is to use i) these hydrophone' array measurements recorded in diversified sea conditions and ii) the associated ground-truth trajectories of spermwhale (obtained by precise TDAO and\/or Dtag systems) to regress both position and azimuth of the animal from a third-party hydrophone\\footnote{We assume that the velocity vector is colinear with the head's angle.} (typically onboard, standalone and cheap model).\n\nWe claim, as in computer-vision field, that BoF approach can be successfully applied to extract a global and invariant representation of click's signals. Basically, the pipeline of BoF approach is composed of three parts: i) a local features extractor, ii) a local feature encoder (given a dictionary pre-trained on data) and iii) a pooler aggregating local representations into a more robust global one. Several choice for encoding local patches have been developed in recent years: from hard-assignment to the closest dictionary basis (trained for example by $K$means algorithm) to a sparse local patch reconstruction (involving for example Orthognal Maching Pursuit (OMP) or LASSO algorithms).\n\n\\section{Global feature extraction by spare coding}\n\n\n\\subsection{Local patch extraction}\n\nLet's denote by $\\g{C}\\triangleq\\{\\g{C}^j\\}$, $j=1,\\ldots,H$ the collection of detected clicks associated with the $j^{th}$ hydrophone of the array composed by $H$ hydrophones. Each matrix $\\g{C}^j$ is defined by $\\g{C}^j\\triangleq\\{\\g{c}_i^j\\}$, $i=1,\\ldots,N^j$ where $\\g{c}_i^j\\in\\mathds{R}^n$ is the $i^{th}$ click of the $j^{th}$ hydrophone. For our \\textit{Bahamas2} dataset \\cite{3Dtracking}, we choose typically $n=2000$ samples surrounding the detected click. The total number of available clicks is equal to $N=\\sum\\limits_{i=1}^{H}N^j$.\n\nAs local features, we extract simply some local signal patches of $p\\leq n$ samples (typically $p=128$) and denoted by $\\g{z}_{i,l}^j\\in\\mathbb{R}^p$. Furthermore all $\\g{z}_{i,l}^j$ are $\\ell_2$ normalized. 
For each $\\g{c}_{i}^j$, a total of $L$ local patches $\\g{Z}_{i}^j\\triangleq\\{\\g{z}_{i,l}^j\\}$, $l=1,\\ldots,L$ equally spaced of $\\lceil\\frac{n}{L}\\rceil$ samples are retrieved (see Fig.~\\ref{patch_extraction}). All local patches associated with the $j^{th}$ hydrophone is denoted by $\\g{Z}^{j}\\triangleq\\{\\g{Z}_i^{j}\\}$, $i=1,\\ldots,N^j$ while $\\g{Z}\\triangleq\\{\\g{Z}^j\\}$ is denoting all the local patches matrix for all hydrophones. A final post-processing consists in uncorrelate local features by PCA training and projection with $p'\\leq p$ dimensions.\n\n\\begin{figure*}[!ht]\n\\begin{center}\n\\begin{tabular}{cc}\n\\includegraphics[height=5cm,width=7.5cm]{ex_click.pdf} &\n\\includegraphics[height=5cm,width=7.5cm]{local_features.pdf}\n\\end{tabular}\n\\caption{Left: Example of detected click with $n=2000$. Right: extracted local features with $p=128$, $L=1000$ (one local feature per column).}\n\\label{patch_extraction}\n\\end{center}\n\\end{figure*}\n\n\\subsection{Local feature encoding by sparse coding}\n\nIn order to obtain a global robust representation of $\\g{c}\\subset\\g{C}$, each associated local patch $\\g{z}\\subset\\g{Z}$ are first linearly encoded \\textit{via} the vector $\\g{\\alpha}\\in\\mathbb{R}^k$ such as $\\g{z}\\approx\\g{D}\\g{\\alpha}$ where $\\g{D}\\triangleq [\\g{d}_1,\\ldots,\\g{d}_k]\\in\\mathbb{R}^{p\\times k}$ is a pre-trained dictionary matrix whose column vectors respect the constraint $\\g{d}_j^T\\g{d}_j=1$. In a first attempt to solve this linear problem, $\\g{\\alpha}$ can be the solution of the Ordinary Least Square (OLS) problem:\n\\begin{equation}\\label{1}\nl_{OLS}(\\g{\\alpha}|\\g{z};\\g{D}) \\triangleq \\min_{\\g{\\alpha} \\in \\mathbb{R}^k}\\left\\lbrace\\frac{1}{2}\\Vert \\g{z} - \\g{D}\\g{\\alpha}\\Vert_2^2 \\right\\rbrace.\n\\end{equation}\nOLS formulation can be extended to include regularization term avoiding overfitting. We obtain the ridge regression (RID) formulation:\n\\begin{equation}\nl_{RID}(\\g{\\alpha}|\\mathbf{z};\\mathbf{D}) \\triangleq \\min_{\\g{\\alpha} \\in \\mathbb{R}^k}\\left\\lbrace\\frac{1}{2}\\Vert \\g{z} - \\g{D}\\g{\\alpha}\\Vert_2^2 + \\beta \\Vert \\g{\\alpha} \\Vert_2^2 \\right\\rbrace.\n\\end{equation}\nThis problem have an analytic solution $\\g{\\alpha} = (\\g{D}^T\\g{D} + \\beta\\g{I}_k)^{-1}\\g{D}^T\\g{z}$. Thanks to semi-positivity of $\\g{D}^T\\g{D} + \\beta\\g{I}_k$, we can use a cholesky factor on this matrix to solve efficiently this linear system. In order to decrease reconstruction error and to have a sparse solution, this problem can be reformuled as a constrained Quadratic Problem (QP):\n\\begin{equation}\nl_{SC}(\\g{\\alpha}|\\g{z};\\g{D}) \\triangleq \\min_{\\g{\\alpha} \\in \\mathbb{R}^k} \\frac{1}{2}\\Vert \\g{z} - \\g{D} \\g{\\alpha} \\Vert_2^2 \\ s.t. \\ \\ \\Vert\\boldsymbol{\\alpha}\\Vert_1 = 1.\n\\end{equation}\nTo solve this problem, we can use a QP solver involving high combinatorial computation to find the solution. Under RIP assumptions \\cite{Tibshirani94regressionshrinkage}, a greedy approach can be used efficiently to solve and eq. 3 and this latter can be rewritten as:\n\\begin{equation}\\label{2}\nl_{SC}(\\g{\\alpha}|\\g{z};\\g{D}) \\triangleq \\min_{\\g{\\alpha} \\in \\mathbb{R}^k} \\frac{1}{2}\\Vert \\g{z} - \\g{D} \\g{\\alpha} \\Vert_2^2 + \\lambda\\Vert\\g{\\alpha}\\Vert_1,\n\\end{equation}\nwhere $\\lambda$ is a regularization parameter which controls the level of sparsity. 
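A minimal numerical sketch of the encoding step of eq.~\\ref{2} is given below. It uses the coordinate-descent Lasso solver of scikit-learn as a stand-in (a LARS-based solver, mentioned next, is an equivalent alternative); the parameter values are illustrative and this is not necessarily the implementation used for the experiments reported below.
\\begin{verbatim}
import numpy as np
from sklearn.linear_model import Lasso

def encode_patch(z, D, lam=0.2):
    # Sparse code alpha of one l2-normalized patch z (length p) on the
    # dictionary D (p x k):  min_a 0.5*||z - D a||_2^2 + lam*||a||_1
    p = z.shape[0]
    # sklearn minimizes (1/(2p))*||z - D a||^2 + alpha*||a||_1,
    # hence the rescaling of the regularization strength
    model = Lasso(alpha=lam / p, fit_intercept=False, max_iter=10000)
    model.fit(D, z)
    return model.coef_
\\end{verbatim}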
This problem is also known as basis pursuit \\cite{Chen98atomicdecomposition} or the Lasso \\cite{Tibshirani94regressionshrinkage}. To solve this problem, we can use the popular Least angle regression (LARS) algorithm.\n\n\\subsection{Pooling local codes}\n\nThe objective of pooling \\cite{Boureau10atheoretical,Feng11} is to transform the joint feature representation into a new, more usable one that preserves important information while discarding irrelevant detail.\nFor each click signal, we usually compute $L$ codes denoted $\\g{V} \\triangleq \\left\\lbrace\\g{\\alpha}_i\\right\\rbrace$, $i = 1,\\ldots,L$.\nLet define $\\g{v}^{j}\\in\\mathbb{R}^L$, $j=1,\\ldots,k$ as the $j^{th}$ row vector of $\\g{V}$. It is essential to use feature pooling to map the response vector $\\g{v}^{j}$ into a statistic value $f(\\g{v}^{j})$ from some spatial pooling operation $f$. We use $\\g{v}^{j}$, the response vector, to summarize the joint distribution of the $j^{th}$ compounds of local features over the region of interest (ROI). We will consider the $\\ell_{\\mu}$-norm pooling and defined by:\n\\begin{equation}\nf_n(\\g{v};\\mu) = \\left(\\sum_{m=1}^L |v_m|^{\\mu}\\right)^{\\frac{1}{\\mu}} \\ \\ s.t. \\ \\mu \\neq 0.\n\\end{equation}\nThe parameter $\\mu$ determines the selection policy for locations. When $\\mu = 1$, $\\ell_{\\mu}$-norm pooling is equivalent to sum-pooling and aggregates the responses over the entire region uniformly. When $\\mu$ increases, $\\ell_{\\mu}$-norm pooling approaches max-pooling. We can note the value of $\\mu$ tunes the pooling operation to transit from sum-pooling to max-pooling.\n\n\n\\subsection{Pooling codes over a temporal pyramid}\nIn computer vision, Spatial Pyramid Matching (SPM) is a technic (introduced by \\cite{Lazebnik2006}) which improves classification accuracy by performing a more robust local analysis. We will adopt the same strategy in order to pool sparse codes over a temporal pyramid (TP) dividing each click signal into ROI of different sizes and locations. Our TP is defined by the matrix $\\g{\\Lambda}$ of size $(P \\times 3)$ \\cite{sebastienparis_xanaduhalkias_herveglotin_2013}:\n\\begin{equation}\n\\g{\\Lambda} = [\\g{a}, \\g{b}, \\g{\\Omega}],\n\\end{equation}\nwhere $\\g{a}$, $\\g{b}$, $\\g{\\Omega}$ are 3 $(P \\times 1)$ vectors representing subdivision ratio, overlapping ratio and weights respectively. $P$ designs the number of layers in the pyramid. Each row of $\\g{\\Lambda}$ represents a temporal layer of the pyramid, \\textit{i.e.} indicates how do divide the entire signal into sub-regions possibly overlapping. For the $i^{th}$ layer, the click signal is divided into $D_i=\\lfloor\\frac{1-a_i}{b_i}+1\\rfloor$ ROIs where $a_i$, $b_i$ are the $i^{th}$ elements of vector $\\g{a}$, $\\g{b}$ respectively. For the entiere TP, we obtain a total of $D=\\sum\\limits_{i=1}^{P}D_i$ ROIs. Each click signal $\\g{c}$ $(n \\times 1)$ is divided into temporal ROI $\\g{R}_{i,j}$, $i=1,\\ldots,P$, $j=1,\\ldots,D_i$ of size $(\\lfloor a_i.n\\rfloor \\times 1)$. All ROIs of the $i^{th}$ layer have the same weight $\\Omega_i$. For the $i^{th}$ layer, ROIs are shifted by $\\lfloor b_i.n\\rfloor$ samples. 
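A compact sketch of the $\\ell_{\\mu}$-norm pooling over such temporal-pyramid ROIs is given below (a concrete example of $\\g{\\Lambda}$ follows). For simplicity the ROI boundaries are applied directly to the patch index $l=1,\\ldots,L$ rather than to the $n$ samples of the click, and the names are illustrative.
\\begin{verbatim}
import numpy as np

def lmu_pool(V, mu=3.0):
    # l_mu-norm pooling of the codes V (k x L_roi) along the patch axis
    return np.sum(np.abs(V) ** mu, axis=1) ** (1.0 / mu)

def pyramid_pool(V, Lambda, mu=3.0):
    # V: sparse codes of one click, shape (k, L); Lambda rows: (a_i, b_i, Omega_i)
    k, L = V.shape
    feats = []
    for a_i, b_i, Omega_i in Lambda:
        width = max(int(np.floor(a_i * L)), 1)
        shift = max(int(np.floor(b_i * L)), 1)
        n_roi = int(np.floor((1.0 - a_i) / b_i + 1.0))   # D_i ROIs in layer i
        for j in range(n_roi):
            start = j * shift
            feats.append(Omega_i * lmu_pool(V[:, start:start + width], mu))
    return np.concatenate(feats)      # global feature of dimension D*k
\\end{verbatim}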
A TP with $\\g{\\Lambda} = \\left[\\begin{array}{ccc}\n1 & 1 & 1 \\\\ \\frac{1}{2} & \\frac{1}{4} & 1 \\end{array}\\right]$ designs a 2-layer pyramid with $D=1+4$ ROIs: the entire signal for the first layer, and $4$ half-windows of $\\frac{n}{2}$ samples with $25\\%$ overlap for the second layer.\nAt the end of the pooling stage over $\\g{\\Lambda}$, the global feature $\\g{x}\\in\\mathbb{R}^d$, $d=D.k$, is defined by the weighted concatenation (by the factors $\\Omega_i$) of the $D$ pooled code vectors associated with $\\g{c}$.\n\n\n\n\\subsection{Dictionary learning}\n\nTo encode the local features by sparse coding (see eq.~\\ref{2}), a dictionary $\\g{D}$ is trained offline on a large collection of $M\\leq N.L$ local features. One minimizes the regularized empirical risk $\\mathcal{R}_M$:\n\\begin{equation}\n\\begin{array}{c}\n\\mathcal{R}_M(\\g{V},\\g{D}) \\triangleq \\displaystyle\\frac{1}{M}\\sum\\limits_{i=1}^M \\frac{1}{2}\\Vert \\g{z}_i - \\g{D} \\g{\\alpha}_i \\Vert_2^2 + \\lambda\\Vert\\g{\\alpha}_i\\Vert_1\n\\\\\n\\\\\n\\ s.t. \\ \\g{d}_j^T\\g{d}_j=1.\n\\end{array}\n\\end{equation}\nUnfortunately, this problem is not jointly convex, but it can be optimized by an alternating method:\n\\begin{equation}\n\\mathcal{R}_M(\\g{V}|\\g{\\hat{D}}) \\triangleq \\frac{1}{M}\\sum\\limits_{i=1}^M \\frac{1}{2}\\Vert \\g{z}_i - \\g{\\hat{D}} \\g{\\alpha}_i \\Vert_2^2 + \\lambda\\Vert\\g{\\alpha}_i\\Vert_1,\n\\end{equation}\nwhich can be solved in parallel by LASSO\/LARS, and then:\n\\begin{equation}\n\\mathcal{R}_M(\\g{D}|\\g{\\hat{V}}) \\triangleq \\frac{1}{M}\\sum\\limits_{i=1}^M \\frac{1}{2}\\Vert \\g{z}_i - \\g{D} \\g{\\hat{\\alpha}}_i \\Vert_2^2 \\ \\ s.t. \\ \\g{d}_j^T\\g{d}_j=1.\\label{eq_dico}\n\\end{equation}\nEq.~\\ref{eq_dico} has an analytic solution involving the inversion of a large $(k\\times k)$ matrix and a large memory footprint for storing the $(k\\times M)$ matrix $\\g{V}$. Since $M$ is potentially very large (up to 1 million), an online dictionary-learning update is preferred \\cite{Mairal_2009}. Figure \\ref{baha_click_range_dico} depicts 3 dictionary basis vectors learned \\textit{via} sparse coding. As depicted, some elements represent impulsive responses while others represent more harmonic responses.\n\n\\begin{figure}[!ht]\n\\begin{center}\n\\includegraphics[height=5.5cm,width=7.5cm]{baha_click_range_dico.pdf}\n\\caption{Example of trained dictionary basis with sparse coding.}\n\\label{baha_click_range_dico}\n\\end{center}\n\\end{figure}\n\n\n\\section{Range and azimuth logistic regression from global features}\n\nAfter the pooling stage, we have extracted, in an unsupervised way, $N$ global features $\\g{X}\\triangleq\\{x_i\\}\\in\\mathbb{R}^{d\\times N}$. We propose to regress \\textit{via} logistic regression both the range $r$ and the azimuth $az$ (in the $x$-$y$ plane, when the animal reaches the surface to breathe) from the ground-truth animal trajectory denoted $\\g{y}$. For the current train\/test split of the data, such that $\\g{X}=\\g{X}_{train}\\bigcup\\g{X}_{test}$, $\\g{y}=\\g{y}_{train}\\bigcup\\g{y}_{test}$ and $N=N_{train}+N_{test}$, $\\forall$ $\\{\\g{x}_i,y_i\\}\\in\\g{X}_{train}\\times \\g{y}_{train}$, we minimize:\n\\begin{equation}\n\\widehat{\\g{w}}_{\\theta}=\\arg\\min\\limits_{\\g{w}_{\\theta}}\\left\\{\\frac{1}{2}\\g{w}_{\\theta}^T\\g{w}_{\\theta} + C\\sum\\limits_{i=1}^{N_{train}}\\log(1+e^{-y_i\\g{w}_{\\theta}^T\\g{x}_i})\\right\\},\\label{logistic_regression}\n\\end{equation}\nwhere $y_i$ denotes $r_i$ or $az_i$ for $\\theta=r$ and $\\theta=az$, respectively.
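For illustration, a hedged sketch that minimizes eq.~\\ref{logistic_regression} directly with a generic quasi-Newton routine is given below; dedicated solvers are discussed next, and the function names here are illustrative rather than those of our actual implementation.
\\begin{verbatim}
import numpy as np
from scipy.optimize import minimize

def fit_w(X, y, C=1.0):
    # X: (N_train, d) global features, y: (N_train,) range or azimuth targets
    # objective: 0.5*w.w + C*sum_i log(1 + exp(-y_i * w.x_i))
    def objective(w):
        m = y * (X @ w)
        return 0.5 * w @ w + C * np.logaddexp(0.0, -m).sum()
    def gradient(w):
        m = y * (X @ w)
        s = -y / (1.0 + np.exp(m))
        return w + C * (X.T @ s)
    res = minimize(objective, np.zeros(X.shape[1]), jac=gradient,
                   method='L-BFGS-B')
    return res.x

# at test time: y_hat = X_test @ w  (linear reconstruction)
\\end{verbatim}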
Eq.~\\ref{logistic_regression} can be efficiently solved for example with Liblinear software \\cite{liblinear2008}. In the test part, range and azimuth for any $\\g{x}_i\\in\\g{X}_{test}$ are recontructed linearly by $\\widehat{r}_i=\\widehat{\\g{w}}_r^T\\g{x}_i$ and by $\\widehat{az}_i=\\widehat{\\g{w}}_{az}^T\\g{x}_i$ respectively.\n\n\\section{Experimental results}\n\n\\subsection{bahamas2 dataset}\n\nThis dataset \\cite{3Dtracking} contains a total of $N=6134$ detected clicks for $H=5$ different hydrophones (named $H^7$, $H^8$, $H^9$, $H^{10}$ and $H^{11}$ and with $N^7=1205$, $N^8=1238$, $N^9=1241$, $N^{10}=1261$ and $N^{11}=1189$ respectively).\n\n\\begin{figure}[!ht]\n\\begin{center}\n\\includegraphics[height=5.5cm,width=7.5cm]{baha_traj_and_hydros.pdf}\n\\caption{The 2D trajectory (in $x\u2212y$ plan) of the single sperm whale observed during $25$ min and corresponding hydrophone's positions.}\n\\label{true_trajectory}\n\\end{center}\n\\end{figure}\nTo extract local features, we chose $n=2000$, $p=128$ and $L=1000$ (tuned by model selection). For both the dictionary learning and the local features encoding, we chose $\\lambda=0.2$ and fixed $15$ iterations to train dictionary on a subset of $M=400.000$ local features drawn uniformaly. We performed\n$K=10$ cross-validation where training sets reprensented $70\\%$ of the total of extracted global features, the rest for the testing sets. Logistic regression parameter $C$ is tuned by model selection. We compute the average root mean square error (ARMSE) of range\/azimuth estimates per hydrophone: $ARMSE(l)=\\frac{1}{K}\\sum\\limits_{i=1}^{K}\\sqrt{\\sum\\limits_{j=1}^{N_{test}^l}(y_{i,j}^l-\\widehat{y}_{i,j}^l)^2}$ where $y_{i,j}^l$, $\\widehat{y}_{i,j}^l$\n and $N_{test}^l$ represent the ground truth, the estimate and the number of test samples for the $l^{th}$ hydrophone respectively. The global ARMSE is then calculated by $\\overline{ARMSE}=\\frac{1}{H}\\sum\\limits_{l=1}^{H}ARMSE(l)$.\n\n \\subsection{$\\ell_{\\mu}$-norm pooling case study}\n\n For prilimary results, we investigate the influence of the $\\mu$ parameter during the pooling stage. We fix the number of dictionary basis to $k=128$ and the temporal pyramid equal to $\\g{\\Lambda}_1=\\left[1,1,1\\right]$, \\textit{i.e.} we pool sparse codes on whole the temporal click signal.\n\\begin{figure}[!ht]\n\\begin{center}\n\\includegraphics[height=5.5cm,width=7.5cm]{baha_click_range_nu_pooling.pdf}\n\\caption{$\\overline{ARMSE}$ vs. $\\mu$ for range estimation.}\n\\label{baha_click_range_nu_pooling}\n\\end{center}\n\\end{figure}\nA value of $\\mu=\\{3,4\\}$ seems to be a good choice for this pooling procedure. For $\\mu\\geq20$, results are similar to those obtained by max-pooling. For azimuth, we observe also the same range of $\\mu$ values.\n\n \\subsection{Range and azimuth regression results}\n\n Here, we fixed the value of $\\mu=3$ and we varied the number of dictionary basis $k$ from $128$ to $4096$ elements. We also investigated the influence of the temporal pyramid and we give results for two particulary choices: $\\g{\\Lambda}_1=\\left[1,1,1\\right]$ and $\\g{\\Lambda}_2=\\small\\left[\\begin{array}{ccc}1 & 1 &1\\\\ \\frac{1}{3} & \\frac{1}{3}&1\\end{array}\\right]$. 
For $\\g{\\Lambda}_2$, the sparse codes are first pooled over the whole signal and then over $3$ non-overlapping windows, for a total of $1+3=4$ ROIs.\n In order to provide a baseline for our method, we also give results for a hand-crafted feature \\cite{IFA_DCL} specialized for sperm whales and based on the spectrum of the most energetic pulse detected inside the click. This specialized feature, denoted \\textit{Spectrum feature}, is a 128-point vector.\n\n\n\\begin{figure}[!ht]\n\\begin{center}\n\\includegraphics[height=5.5cm,width=7.5cm]{baha_click_range.pdf}\n\\caption{$\\overline{ARMSE}$ vs. $k$ for range estimation with $\\mu = 3$.}\n\\label{baha_click_range}\n\\end{center}\n\\end{figure}\n\n\\begin{figure}[!ht]\n\\begin{center}\n\\includegraphics[height=5.5cm,width=7.5cm]{baha_click_azimuth.pdf}\n\\caption{$\\overline{ARMSE}$ vs. $k$ for azimuth estimation with $\\mu = 3$.}\n\\label{baha_click_azimuth}\n\\end{center}\n\\end{figure}\n\nFor both the range and the azimuth estimates, from $k=2048$ onwards our method outperforms the \\textit{Spectrum feature}, particularly for the azimuth estimate. Using a temporal pyramid for pooling also slightly improves the results.\n\n\\section{Conclusions and perspectives}\n\nWe have introduced in this paper, for sperm whale localization, a BoF approach \\textit{via} sparse coding delivering rough estimates of the range and azimuth of the animal, specifically targeted at the mono-hydrophone configuration. Our proposed method works directly on the click signal without any prior pulse detection\/analysis, while being robust to the signal transformations induced by propagation. Coupled with non-linear filtering such as particle filtering \\cite{Arulampalam02}, accurate animal position estimation could be performed even in the mono-hydrophone configuration. Applications to anti-collision systems and whale watching are targeted with this work.\n\nAs a perspective, we plan to investigate other local features such as spectral features, MFCC \\cite{Davis80,Rabiner93}, and scattering transform features \\cite{AndenM11}. The latter can be considered as a hand-crafted first layer of a deep learning architecture with 2 layers.\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction} \n\n\n\nThis note is a development based on a recent paper \\cite{Kallosh:2021ors} where we have performed a covariant (Lagrangian) quantization of gravity in a black hole background in the Regge-Wheeler set up \\cite{Regge:1957td,Zerilli:1971wd,Martel:2005ir}. The gauge-fixing condition in \\cite{Kallosh:2021ors} includes the Regge-Wheeler gauge for ${\\ell }\\geq 2$ modes, and a certain background covariant gauge for ${\\ell }<2$ modes, where the Regge-Wheeler gauge is not valid. We will refer to the gauge in \\cite{Kallosh:2021ors} covering all ${\\ell }$ modes as a `generalized Regge-Wheeler gauge'.\n\n\n\n\n\n\nThe Feynman path integral for gravity, viewed as quantum field theory (QFT), is defined by De Witt-Faddeev-Popov \\cite{DeWitt:1967ub,Faddeev:1967fc} and takes the form, in the absence of sources,\n\\begin{equation}\n \\int D h\n \\,J_\\chi ( g,h) \\delta \\big (\\chi_\\alpha ( g,h)\\big ) e^{\\mathrm{i} S ( g+h) } \\, .\n\\label{pi}\\end{equation}\nHere we integrate over the perturbations $h$ in the background metric $g$. The gauge-fixing conditions are $\\chi_\\alpha(g, h)=0$.
The Jacobian designed to make this path integral independent on the choice of the gauge-fixing conditions can be presented with the help of the Faddeev-Popov (FP) ghosts \\cite{Faddeev:1967fc} \\begin{equation}\nJ_\\chi = \\int D\\bar C^\\alpha DC_\\beta \\, e^{\\mathrm{i} \\int \\mathrm{d}^4 x \\, \\bar C^\\alpha(x) Q_{\\alpha} {}^{ \\beta} (g, h)C_\\beta(x) }\\, .\n\\end{equation}\nThe differential operator in the ghost action is defined by the gauge variation of the gauge-fixing finctions\n$\n\\delta \\chi_\\alpha = Q_{\\alpha}{}^{\\beta} \\xi_\\beta\n$. For the choice of the gauge-fixing functions made in \\cite{Kallosh:2021ors}, which in addition to Regge-Wheeler gauge for $l\\geq 2$ modes includes gauges for $l<2$ modes, a generalized Regge-Wheeler gauge, we have found that the ghost actions do not have time derivatives in Schwarzschild coordinates. We therefore predicted that in the a generalized Regge-Wheeler gauge \\cite{Kallosh:2021ors} the canonical Hamiltonian according to the rules for gauge theories \n\\cite{Faddeev:1969su,Fradkin:1970pn,Faddeev:1973zb} is expected to be unitary.\n\nIn this note we will present the quadratic in gravitational perturbations $h$ part of the gravity Hamiltonian in the spherical harmonic basis. Before doing this we will perform the standard counting of physical degrees of freedom in this case. The structure of the Hamiltonian will confirm this counting.\n\nThe standard counting of physical degrees of freedom in gauge theories in the QFT context of the Feynman path integral is the same in either Lagrangian or Hamiltonian quantization, and it is also gauge independent, if performed correctly. \nThe general counting formula is formulated for the number of gauge field components equal to $n+k$ in case of $k$ gauge symmetries. The total number of physical degrees of freedom is\n\\begin{equation} n-k\n\\label{number}\\end{equation}\nThis final counting formula in QFT is valid for any choice of gauge-fixing, but the procedure is different for unitary and pseudo-unitary gauges. \nFor example in 4D the metric has $n+k=10$ components and there are $k=4$ gauge symmetries, the counting is $n-k=(10- 4)-4=2$.\n\n\n\nIn QFT\nin the class of unitary gauges Hamiltonians have manifestly ghost-free underlying Hilbert spaces. There are $(p^*, q^*)$ variables in Faddeev's theorem \\cite{Faddeev:1969su} as described in \\cite{Kallosh:2021ors}.\nThis means that all $n-k$ physical states have positive definite metric. The S-matrix is unitary.\n\\begin{equation}\n\\# \\, {\\rm degrees \\, of\\, freedom}_{\\rm unitary \\, H} \\, = n-k\n\\label{numberU}\\end{equation}\nMeanwhile, \nin other gauges, for example, 4D Lorentz covariant gauges in gravity, the Hamiltonians are ``pseudo-unitary'' with underlying state spaces with negative-norm ghost degrees of freedom \\cite{Fradkin:1977hw,Batalin:1977pb}. In such case the counting goes as follows: there are $n+k$ states with positive norm and $2k$ states with negative norm presented by FP anti-commuting ghosts, so the total counting, with account of negative norm states, is the same as in unitary gauges\n\\begin{equation}\n\\# \\, {\\rm degrees \\, of\\, freedom}_{\\rm pseudo-unitary \\, H} = n+k -2k \\Rightarrow n-k\n\\end{equation}\n The S-matrix is pseudo-unitary in a space of states with the indefinite metric.\n\n\n\n\n\n \nWe will see that the quadratic in $h$ part of the gravity Hamiltonian in spherical harmonic basis does support this counting. 
In the class of gauges used in \\cite{Kallosh:2021ors} the canonical Hamiltonian is unitary, as predicted there.\n\nThe complete form of the Hamiltonian to all orders of $h$ is beyond the scope of this paper. However, in \\cite{Kallosh:2021ors} we have argued that the non-linear couplings of ghosts to all orders in $h$ are free of time derivatives on ghosts. Therefore one would expect that the non-linear in $h$ terms in $H$ will be consistent with the unitarity of the Hamiltonian which will be deduced in this note at the level quadratic in $h$.\n\n The corresponding part of the action $S( g+h)$, quadratic in $h$, is of the form\n\\begin{equation}\nS= {1\\over 2} \\int h_{\\mu\\nu} S^{ \\mu\\nu \\lambda \\delta} (g) h_{\\lambda \\delta}\n\\label{quad} \\end{equation}\nHere $S^{ \\mu\\nu \\lambda \\delta} (g)$ is a differential operator depending on the background metric $g$. The left hand side of equations of motion ${\\delta S\\over \\delta h_{\\mu\\nu} }=0$ linear in $h$ takes the form\n\\begin{equation}\nQ^{\\mu\\nu} \\equiv {\\delta S\\over \\delta h_{\\mu\\nu} }= S^{ \\mu\\nu \\lambda \\delta} (g) h_{\\lambda \\delta}\n\\label{EOM}\\end{equation}\nOne can restore the action in eq. \\rf{quad} from the information available in eq. \\rf{EOM}.\n\nIn the spherical harmonic basis the 4D spacetime is split into $\\mathcal{M} = \\mathcal{M}_2\\times\\mathbb{S}^2$ with coordinates $(x^{a},\\theta^A)$, $a=1,2$. The 4D perturbations $h_{\\mu\\nu} $ are represented by 2D fields for each $({\\ell }, m)$ \\cite{Regge:1957td,Zerilli:1971wd,Martel:2005ir}. \nThe corresponding equations are known and we will use them as derived in \\cite{Martel:2005ir} in Schwarzschild coordinates. Once the quadratic Lagrangian is known, it is possible to derive the relevant quadratic in $h$ Hamiltonian.\nFor ${\\ell }\\geq 2$ modes the corresponding quadratic Hamiltonian was constructed by Moncrief in \\cite{Moncrief:1974am} where also the relevant Regge-Wheeler \\cite{Regge:1957td} and Zerilli \\cite{Zerilli:1971wd} equations were re-derived in the form of Hamiltonian equations of motion. The Hamiltonian was derived in \\cite{Moncrief:1974am} in absence of source terms.\nFor ${\\ell }<2$ the Hamiltonian was not studied, to the best of our knowledge. In \\cite{Moncrief:1974am} it was explained that the attention was restricted to modes with $l\\geq 2$ since the modes with $l<2$ are nonradiative and require a special treatment.\n\n\nHere we will use the known field equations \\rf{EOM} in the form given in \\cite{Martel:2005ir} in Schwarzschild coordinates, which allow to derive the Lagrangian in \\rf{quad}. From the quadratic Lagrangian we derive a canonical quadratic part of the Hamiltonian, with account of the algebraic constraints in our gauges. We will conclude that there are no physical degrees of freedom suitable for quantization at ${\\ell }<2$. Our definition of quantized degrees of freedom involves the QFT quantization conditions in 2D space of the form\n\\begin{equation}\n[q(r, t) , p(r', t) ]= i \\delta (r-r')\n\\label{quant}\\end{equation}\nThe classical field equations for low multipoles in presence of sources are known to have non-trivial solutions. For example for the monopoles ${\\ell }=m=0$ there are solutions like \n$\nh^{00}_{tt} \\sim {\\delta M\\over r}\n$, \nthey are known to affect the the black hole mass. 
However, there are no solutions of the constraint equations compatible with the quantization condition \\rf{quant} for ${\\ell }<2$.\n\nAll our results are valid for any mass $M$ of the Schwarzschild black hole, and the limit to $M = 0$ is continuous. This means that they apply not only to the quantization in the black hole background, but also to the unitary quantization of the gravitational field in the Minkowski space background in spherical coordinates.\n\n\n\n\n\\section{ Counting Gravity Physical Degrees of Freedom in the Spherical Harmonic Basis}\nThe ansatz of Regge-Wheeler for the metric perturbations $h_{\\mu\\nu}$ with spherical harmonics of definite parity is given in \\cite{Regge:1957td,Zerilli:1971wd,Martel:2005ir}. In our \n recent paper \\cite{Kallosh:2021ors} it was adapted for the purpose of quantization following the formalism and notations in \\cite{Martel:2005ir}. In particular, we have presented the gauge symmetry transformations to all orders in $h$. The background metric in Schwarzschild coordinates is\n \\begin{equation}\ng_{\\mu\\nu}dx^\\mu dx^\\nu=\t-f(r)\\,\\mathrm{d}t^2 + \\frac{\\mathrm{d}r^2}{f(r)} + r^2(x)\\,\\mathrm{d}\\Omega^2_2\\, , \\qquad f(r) = 1-\\frac{2GM}{r}\n\t\\label{SchldBackgroundGauge}\n\\end{equation} \n\n\\\n\n The 2D fields representing all components of $h_{\\mu\\nu}$ in 4D include the following \n\\begin{eqnarray} \\label{RWA}\n&& h_{ab}^{\\ell m(+)}, \\quad j_{a}^{\\ell m(+)}, \\quad K^{\\ell m(+)}, \\quad G^{\\ell m(+)}\\, \\hskip 3 cm {\\ell } >1 , \\quad {\\rm even}\\\\\n&& h_{a}^{\\ell m(-)}, \\quad h_2^{\\ell m(-)}\\, \\hskip 6.5 cm {\\ell } > 1 , \\quad {\\rm odd} \\\\\n&&h_{ab}^{1 m(+)}, \\quad j_{a}^{1 m(+)}, \\quad K^{1m(+)} \\qquad \\hskip 4 cm {\\ell }=1, \\quad {\\rm even}\n\\label{ansatzDe} \\\\\n&&h_{a}^{1 m(-)} \\hskip 8.2 cm {\\ell }=1, \\quad {\\rm odd}\\\\\n&&h_{ab}^{0 0(+)}, \\quad K^{00(+)} \\hskip 6.6 cm {\\ell }=0, \\quad {\\rm even}\n\\label{ansatzM} \n \\end{eqnarray}\nThe gauge symmetries are also expanded in spherical harmonics. 
In the form given in our recent paper \\cite{Kallosh:2021ors} these are\n \\begin{eqnarray}\n\\xi^{{\\ell } > 1 } \\quad {\\rm even} \\qquad &&\\Rightarrow \\qquad \\{ \\xi_{a}^{\\ell m(+)}, \\xi^{\\ell m(+)} \\}\n\\label{sym}\\\\\n\\xi^{{\\ell } > 1} \\quad {\\rm odd} \\qquad &&\\Rightarrow \\qquad \\{ \\xi^{\\ell m(-)} \\}\n\\label{sym}\\\\\n\\xi^{{\\ell }=1} \\quad {\\rm even} \\qquad &&\\Rightarrow \\qquad \\{ \\xi_{a}^{1 m(+)}, \\xi^{1 m(+)} \\} \n\\label{symDe} \\\\\n\\xi^{{\\ell }=1} \\quad {\\rm odd} \\qquad && \\Rightarrow \\qquad \\{ \\xi^{1 m(-)} \\} \\\\\n\\xi^{{\\ell }=0} \\quad {\\rm even} \\qquad &&\\Rightarrow \\qquad \\{ \\xi_{a}^{0 0(+)} \\}\n\\label{symM} \n \\end{eqnarray}\nThe gauge symmetry parameters $\\xi_{a}^{\\ell m (+)}$, $\\xi^{\\ell m (+)}$, $\\xi^{\\ell m (-)}$ \ncan be regarded as scalar and vector fields on $\\mathcal{M}_2$.\n\nThe counting of physical degrees of freedom in these 5 sectors is\n\\begin{enumerate}\n \\item ${\\ell } >1 \\quad {\\rm even}: \\quad \\, n+k = 7, \\quad k= 3\\, \\quad \\Rightarrow \\quad n+k -2k = 7- 2\\cdot 3=1$\n \\item ${\\ell } >1 \\, \\, \\quad {\\rm odd}: \\quad \\, n+k = 3, \\quad k= 1\\, \\quad \\Rightarrow \\quad n+k -2k= 3- 2\\cdot 1=1$\n \\item ${\\ell } =1 \\quad {\\rm even}: \\quad \\, n+k = 6, \\quad k= 3\\, \\quad \\Rightarrow \\quad n+k -2k= 6- 2\\cdot 3=0$\n \\item ${\\ell } =1 \\, \\, \\quad {\\rm odd}: \\quad \\, n+k = 2, \\quad k= 1\\, \\quad \\Rightarrow \\quad n+k -2k= 2- 2\\cdot 1=0$ \n \\item ${\\ell } =0 \\quad {\\rm even}: \\quad \\, n+k = 4, \\quad k= 2\\, \\quad \\Rightarrow \\quad n+k -2k= 4- 2\\cdot 2=0$\n\\end{enumerate}\nThus we find that in $l\\geq 2$ sector there is one even and one odd physical degree of freedom for each $({\\ell },m)$. There are no degrees of freedom for any of ${\\ell }<2$.\n\n\\section{Quadratic Lagrangian\/Hamiltonian for ${\\ell }\\geq 2 $ Modes}\n\\subsection{${\\ell }\\geq 2 $ even} \n There are 7 fields here, $h_{ab}^{\\ell m(+)}, \\quad j_{a}^{\\ell m(+)}, \\quad K^{\\ell m(+)}, \\quad G^{\\ell m(+)}$.\n There are 7 equations of motion for these fields. Now we can add the 3 Regge-Wheeler gauge-fixing conditions\n \\begin{equation}\n G=j_a=0\n \\end{equation} \nThe remaining 4 fields are $h_{ab}^{\\ell m(+)}, K^{\\ell m(+)}$. We expect to identify 3 constraints which will leave us with just one canonical degree of freedom. 
These equations are according to \\cite{Martel:2005ir}\n \\begin{eqnarray} \\label{MarP}\nQ^{tt} &=& -\\frac{\\partial^2}{\\partial r^2} {K} \n- \\frac{3r-5M}{r^2 f} \\frac{\\partial}{\\partial r} {K} \n+ \\frac{f}{r} \\frac{\\partial}{\\partial r} {h}_{rr}\n+ \\frac{(\\lambda+2)r + 4M}{2r^3} {h}_{rr} \n+ \\frac{\\mu}{2r^2 f} {K}, \\\\ \nQ^{tr} &=& \\frac{\\partial^2}{\\partial t \\partial r} {K} \n+ \\frac{r-3M}{r^2 f} \\frac{\\partial}{\\partial t} {K} \n- \\frac{f}{r} \\frac{\\partial}{\\partial t} {h}_{rr} \n- \\frac{\\lambda}{2r^2} {h}_{tr}, \\cr\nQ^{rr} &=& -\\frac{\\partial^2}{\\partial t^2} {K} \n+ \\frac{(r-M)f}{r^2} \\frac{\\partial}{\\partial r} {K} \n+ \\frac{2f}{r} \\frac{\\partial}{\\partial t} {h}_{tr} \n- \\frac{f}{r} \\frac{\\partial}{\\partial r} {h}_{tt} \n+ \\frac{\\lambda r + 4M}{2r^3} {h}_{tt} \n- \\frac{f^2}{r^2} {h}_{rr} \n- \\frac{\\mu f}{2r^2} {K}, \\cr \nQ^\\flat &=& -\\frac{\\partial^2}{\\partial t^2} {h}_{rr} \n+ 2 \\frac{\\partial^2}{\\partial t \\partial r} {h}_{tr} \n- \\frac{\\partial^2}{\\partial r^2} \\tilde{h}_{tt} \n- \\frac{1}{f} \\frac{\\partial^2}{\\partial t^2} {K} \n+ f \\frac{\\partial^2}{\\partial r^2} \\tilde{K} \n+ \\frac{2(r-M)}{r^2 f} \\frac{\\partial}{\\partial t} {h}_{tr} \n- \\frac{r-3M}{r^2 f} \\frac{\\partial}{\\partial r} {h}_{tt} \n\\cr & & \\mbox{} \n- \\frac{(r-M)f}{r^2} \\frac{\\partial}{\\partial r} {h}_{rr} \n+ \\frac{2(r-M)}{r^2} \\frac{\\partial}{\\partial r} {K} \n+ \\frac{\\lambda r^2-2(2+\\lambda)Mr+4M^2}{2r^4 f^2}{h}_{tt} \n- \\frac{\\lambda r^2-2\\mu Mr-4M^2}{2r^4} {h}_{rr}\\nonumber \n\\end{eqnarray} \nHere \n\\begin{equation}\n\\lambda = {\\ell }({\\ell }+1) \\qquad \\mu = ({\\ell }-1)({\\ell }+2)\n\\end{equation}\nThe quadratic in $h$ Lagrangian can be restored from these equations as explained in eqs. \\rf{quad}, \\rf{EOM}. One can proceed by defining for each of the 4 fields their canonical momenta. For example, there is no time derivative on $h_{tt}$ in the action, therefore $p_{tt}=0$, the other 3 coordinates in the form of ${\\cal L} (q, \\dot q)$ do have time derivatives, however, two more combinations of $q$'s and $p$'s are constrained. Only one independent canonical degree of freedom out of 4 is left.\n\nThe Hamiltonian of the related system starting with the Arnowitt, Deser, Misner construction was derived in \\cite{Moncrief:1974am}. We skip the details of the derivation here starting with the field equations \n\\rf{MarP} since the answer for the corresponding Lagrangian can be also reconstructed from the \n Zerilli-Moncrief function \\cite{Zerilli:1971wd,Moncrief:1974am} which in Regge-Wheeler gauge is\n\\[\n\\Psi_{\\rm even} = \\frac{2r}{{\\ell }({\\ell }+1)} \\biggl[ {K} \n+ \\frac{2f}{\\Lambda} \\biggl( f {h}_{rr} \n- r {K}_{,r} \\biggr) \\biggr], \\qquad l\\geq 2\n\\]\nwhere $\\Lambda = ({\\ell }-1)({\\ell }+2) + 6M\/r$. The equation of motion in the form of the Zerilli-Moncrief function $\\Psi^{lm}_{even}$ as given in \\cite{Martel:2005ir} is\n\\begin{equation}\n(\\Box - V_{\\rm even}) \\Psi_{\\rm even}=S_{\\rm even}\n\\label{ZM}\\end{equation}\nwhere $\\Box= g^{ab} {\\cal D}_a {\\cal D}_b$ is the Laplacian operator on ${\\cal M}_2$, $V_{\\rm even}$ depends on $r$ as well as on $M$ and on $l$, and $S_{\\rm even}$ is the contribution from sources. 
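For orientation we record the explicit form of this operator (a standard expression, written here only for convenience): on ${\\cal M}_2$ with the background metric \\rf{SchldBackgroundGauge} one has
\\begin{equation}
\\Box \\Psi = -{1\\over f}\\, {\\partial^2 \\Psi \\over \\partial t^2} + {\\partial \\over \\partial r}\\Big ( f\\, {\\partial \\Psi \\over \\partial r}\\Big )\\, ,
\\end{equation}
so that eq. \\rf{ZM} is a $1+1$-dimensional wave equation with the potential $V_{\\rm even}$; the combination $f\\, \\partial_r = \\partial_{r_*}$ makes contact with the familiar tortoise-coordinate form.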
We refer to details given in \\cite{Martel:2005ir}, where also the relation between Zerilli-Moncrief function and the original Regge-Wheeler function is explained.\nEquation \\rf{ZM} can be derived from the Lagrangian of the form \\rf{quad}\n\\begin{equation}\n{\\cal L} = \\sum_{{\\ell } \\geq 2 , m} \\Big [ {1\\over 2} \\Psi_{\\rm even}( \\Box - V_{\\rm even})\\Psi_{\\rm even} - \n \\Psi_{\\rm even} S_{\\rm even}\\Big ]\\end{equation}\n This can be rewritten in the form producing a quadratic part of the Hamiltonian. With $\\Psi_{\\rm even}\\equiv Q_{\\rm even}$ and its canonically conjugate $P_{\\rm even}$ and, in absence of sources\n\\begin{equation}\n H_{{\\ell } \\geq 2, \\rm even}= {1\\over 2} \\sum_{{\\ell }\\geq 2, m} \n \\int \\Big [dr f( P^{{\\ell }, m } )^{2 }_{\\rm even}+ f (Q_{, r}^{{\\ell }, m })_{\\rm even}^2 + V_{\\rm even}\n (Q^{{\\ell }, m })_{\\rm even}^2\\Big ]\n\\label{Heven} \\end{equation}\nwhere\n\\begin{equation} \nV_{\\rm even} = \\frac{1}{\\Lambda^2} \\biggl[ \\mu^2 \\biggl(\n \\frac{\\mu+2}{r^2} + \\frac{6M}{r^3} \\biggr) \n+ \\frac{36M^2}{r^4} \\biggl(\\mu + \\frac{2M}{r} \\biggr) \\biggr] \n\\label{4.26}\n\\end{equation}\nThis is an example of the Faddeev's theorem \\cite{Faddeev:1969su}, which we described in \\cite{Kallosh:2021ors}, where starting from the original constrained variables $(p_i, q^i)$ with constraints $\\phi^\\alpha(p,q)$ one can perform a canonical transformation with $p'_\\alpha =\\chi_\\alpha (p,q) =0$ and $ q^{'\\alpha} = q^{'\\alpha} (p^*, q^*)$ \nso that the independent set of canonical variables is $(p^*, q^*)$. In this particular case we find just one set of $(p^*, q^*)$, which are the \n Zerilli-Moncrief function $\\Psi$ of the original variables, and its canonical conjugate.\n\n \n\\subsection{${\\ell }\\geq 2 $ odd} \n\nThere are 3 fields in this sector: $h_{a}^{\\ell m(-)}, \\quad h_2^{\\ell m(-)}$. In the RW gauge\n\\begin{equation}\nh_2^{\\ell m(-)}=0\\, .\n\\end{equation}\nEquations of motion for the remaining two fields are\n\\begin{eqnarray*} \nP^t &=& - \\frac{\\partial^2}{\\partial t \\partial r} {h}_r \n+ \\frac{\\partial^2}{\\partial r^2} {h}_t \n- \\frac{2}{r} \\frac{\\partial}{\\partial t} {h}_r \\\n- \\frac{\\lambda r - 4M}{r^3 f} {h}_t, \\\\ \nP^r &=& \\frac{\\partial^2}{\\partial t^2} {h}_r \n- \\frac{\\partial^2}{\\partial t \\partial r} {h}_t \n+ \\frac{2}{r} \\frac{\\partial}{\\partial t} {h}_t \n+ \\frac{\\mu f}{r^2} {h}_r,\n\\end{eqnarray*} \nRestoring the quadratic Lagrangian and using partial integration one can identify one field which enters into Lagrangian without a time derivative, this is $h_t$. \n\\begin{equation} \n{\\cal L} = h_t \\Big (- \\frac{\\partial^2}{\\partial t \\partial r} {h}_r \n+ {1\\over 2} \\frac{\\partial^2}{\\partial r^2} {h}_t \n- \\frac{2}{r} \\frac{\\partial}{\\partial t} {h}_r \\\n- {1\\over 2} \\frac{\\lambda r - 4M}{r^3 f} {h}_t \\Big ) + {1\\over 2} h_r \\Big (\\frac{\\partial^2}{\\partial t^2} {h}_r \n+ \\frac{\\mu f}{r^2} {h}_r\\Big ) \n\\end{equation}\nThus we find\n\\begin{eqnarray}\n&&p_t=0\\\\\n&&p_r= h_{t, r} - h_{r, t} -{2\\over r} h_t\n\\end{eqnarray}\nand there is a constraint for $h_t$ algebraically related to $p_r$ \n\\begin{equation}\n \\Big ( \\partial_r + \\frac{2}{r} \\Big ) \\Big (p_r + h_{t, r} -{2\\over r} h_t\n\\Big ) \n- {h}_{t,rr} \n+ \\frac{\\lambda r - 4M}{r^3 f} {h}_t =0\n\\end{equation}\nTherefore there is one independent degree of freedom $(h_r, p_r)$. These are Faddeev's $(p^*, q^*)$ variables, exactly one set in agreement with the counting give above. 
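As a simple consistency check of eq. \\rf{Heven} (the same check, with obvious replacements, applies to the odd-parity Hamiltonian written below), note that with the canonical normalization $\\{Q(r),P(r')\\}=\\delta(r-r')$ the Hamilton equations following from \\rf{Heven} are
\\begin{equation}
\\dot Q_{\\rm even} = {\\delta H \\over \\delta P_{\\rm even}} = f\\, P_{\\rm even}\\, , \\qquad \\dot P_{\\rm even} = -{\\delta H \\over \\delta Q_{\\rm even}} = \\partial_r \\big ( f\\, \\partial_r Q_{\\rm even} \\big ) - V_{\\rm even}\\, Q_{\\rm even}\\, ,
\\end{equation}
and eliminating $P_{\\rm even}$ gives $-f^{-1} \\ddot Q_{\\rm even} + \\partial_r ( f\\, \\partial_r Q_{\\rm even}) - V_{\\rm even} Q_{\\rm even}=0$, i.e. the source-free form of eq. \\rf{ZM}.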
One can write the corresponding Hamiltonian $H(h_r, p_r)$ and the field equations.\n\nOn the other hand, the Hamiltonian for this system was already derived in \\cite{Moncrief:1974am} in the framework of the Arnowitt, Deser, Misner construction. The field equations were derived in \\cite{Cunningham:1978zfa}, where the corresponding Cunningham-Price-Moncrief function was introduced. In notation of \\cite{Martel:2005ir}, this function is\n\\begin{equation}\n\\Psi_{\\rm odd}^{lm} = \\frac{2r}{({\\ell }-1) ({\\ell }+2) } \\biggl( \n {h}_{t , r} ^{{\\ell }m} - \n {h}_{r,t}^{{\\ell }m} - \\frac{2}{r} {h}_t^{{\\ell }m} \n\\biggr). \n\\end{equation}\nThis function in terms of canonical variables above depends on $(p_r, h_r)$.\nAs in the even case discussed above we are lead to a single field equation for the Cunningham-Price-Moncrief function\n\\begin{equation}\n(\\Box - V_{\\rm odd}) \\Psi_{\\rm odd}=S_{\\rm odd}\n\\label{odd}\\end{equation}\nHere the expressions for $V_{\\rm odd}$ and $ S_{\\rm odd}$ are given in \\cite{Martel:2005ir}, where also \nthe relation between Cunningham-Price-Moncrief function and the original Regge-Wheeler function is explained.\nWith $\\Psi_{\\rm odd}\\equiv Q_{\\rm odd}$ the Hamiltonian is\n\n \\begin{equation}\n H_{{\\ell } \\geq 2, \\rm odd}= {1\\over 2} \\sum_{{\\ell }\\geq 2, m} \n \\int \\Big [dr f( P^{{\\ell }, m } )^{2 }_{\\rm odd}+ f (Q_{, r}^{{\\ell }, m })_{\\rm odd}^2 + \\Big ( \\frac{{\\ell }({\\ell }+1) }{r^2} - {6M \\over r^3}\\Big ) \n (Q^{{\\ell }, m })_{\\rm odd}^2\\Big ]\n \\label{Hodd}\\end{equation}\n \n\\section{Quadratic Lagrangian\/Hamiltonian for ${\\ell }<2$ Modes}\n\\subsection{${\\ell }=1$ even} \nOur 6 fields are $h_{ab}^{1 m(+)}, j_{a}^{1 m(+)}, K^{1m(+)}$. We take a gauge-fixing condition \\cite{Kallosh:2021ors}\n\\begin{equation}\nj_{a}^{1 m(+)}= K^{1m(+)}=0\n\\end{equation}\nThe remaining filelds $h_{ab}^{1 m(+)}$ in this gauge satisfy the field equations\n\\begin{eqnarray*} \nQ^{tt} &=& \n \\frac{f}{r} \\frac{\\partial}{\\partial r}{h}_{rr}\n+ \\frac{2(r + M)}{r^3} {h}_{rr}, \\\\ \nQ^{tr} &=& \n- \\frac{f}{r} \\frac{\\partial}{\\partial t} {h}_{rr} \n- \\frac{1}{r^2} {h}_{tr}, \\\\\nQ^{rr} &=& \n \\frac{2f}{r} \\frac{\\partial}{\\partial t} {h}_{tr} \n- \\frac{f}{r} \\frac{\\partial}{\\partial r}{h}_{tt} \n+ \\frac{ r + 2M}{r^3} {h}_{tt} \n- \\frac{f^2}{r^2}{h}_{rr} \n\\end{eqnarray*} \nWe can therefore reconstruct the Lagrangian of the form \\rf{quad} which will produce these equations.\n\\begin{equation}\n{\\cal L} = h_{tt} Q^{tt} + \\Big (\\frac{\\partial}{\\partial t} h_{tr}\\Big) \\frac{ 2f}{r} {h}_{rr} \n- h_{tr} \\frac{1}{2r^2} {h}_{tr} -h_{rr} \\frac{f^2}{ 2r^2}{h}_{rr} \n\\end{equation}\nWe now define $q\\equiv h_{tr}, \\, p \\equiv \\frac{ 2f}{r} {h}_{rr}$ and $h_{tt} \\equiv \\lambda$\n\\begin{equation}\n{\\cal L} = \\dot q p + \\lambda Q^{tt} (p, \\partial_r p) \n- q^2 \\frac{1}{2r^2} -{1\\over 8} p^2\n\\end{equation}\nWe integrate out the Lagrange multiplier and find\n\\begin{equation}\n{\\cal L} = \\dot q p \n- q^2 \\frac{1}{ 2r^2} -{1\\over 8} p^2\n\\end{equation}\nwhere \n\\begin{equation}\n \\frac{f}{r} \\frac{\\partial}{\\partial r}{rp\\over 2f}\n+ \\frac{2(r + M)}{r^3} {rp\\over 2f}=0 \\qquad \\Rightarrow \\qquad p_{,r} + F(r) p=0\n\\label{Cp}\\end{equation}\nThe algebraic constraint which $p$ has to satisfy contradicts the commutation relation which have to be imposed for quantization, as shown in eq. 
\\rf{quant}.\nThere is no solution of the algebraic constraint \\rf{Cp} for the canonical momentum $p(t,r)$ which would be consistent with the quantization condition, only $p=0$ is a consistent one. We conclude there that there are no physical degrees of freedom left in this sector,\n\\begin{equation}\nH_{{\\ell }=1, \\rm even} =0\n\\end{equation}\n This is in agreement with the counting we presented above.\n\n\n\\subsection{${\\ell }=1$ odd} \n\nThere are 2 fields: $h_{a}^{1 m(-)}$. We take a gauge-fixing condition $h_{r}^{1 m(-)}=0$ \\cite{Kallosh:2021ors}. In this gauge the remaining field equation is\n\\begin{equation}\nP^t = \\frac{\\partial^2}{\\partial r^2} {h}_t \n- \\frac{2}{r^2 } {h}_t\n\\end{equation}\nThe Lagrangian which will generate this equation is\n\\begin{equation}\n{\\cal L}={1\\over 2} h_t \n\\Big (\\frac{\\partial^2}{\\partial r^2} {h}_t \n- \\frac{2}{r^2} {h}_t \\Big )\n\\end{equation}\nThere is one field here where the Lagrangian $ {\\cal L}(q)$ does not have time derivative of this field, therefore $p={\\delta {\\cal L}\\over \\dot h_t}=0 $. There are no canonical variables here and the Hamiltonian vanishes \n\\begin{equation}\nH_{{\\ell }=1, \\rm odd} =0\n\\end{equation}\nThis is in agreement with the counting we presented above.\n\n\n\\subsection{${\\ell }=0$ even} \n\nThere are 4 fields here: $h_{ab}^{0 0(+)}, \\quad K^{00(+)}$. We take a gauge-fixing conditions $K=h_{tr}=0$ \\cite{Kallosh:2021ors}.\nThe remaining field equations are\n\\begin{eqnarray*} \nQ^{tt} &=& \\frac{f}{r} \\frac{\\partial}{\\partial r} {h}_{rr}\n+ \\frac{r + 2M}{r^3} {h}_{rr} \n, \\\\ \nQ^{rr} &=& \n- \\frac{f}{r} \\frac{\\partial}{\\partial r} {h}_{tt} \n+ \\frac{ 2M}{r^3} {h}_{tt} \n- \\frac{f^2}{r^2} {h}_{rr}\n\\end{eqnarray*} \nThe Lagrangian which will generate these equations is\n\\begin{equation}\n{\\cal L}=h_{tt} \\Big (\\frac{f}{r} \\frac{\\partial}{\\partial r} {h}_{rr}\n+ \\frac{(r + 2M)}{r^3} {h}_{rr} \\Big ) -{f^2\\over 2r^2} h_{rr}^2\n\\end{equation}\nThere are 2 fields, $q^1, q^2$, but there are no time derivatives in the Lagrangian, $p_1=p_2=0$, no canonical variables and the Hamiltonian vanishes\n\\begin{equation}\nH_{{\\ell }=0} =0\n\\end{equation}\n This is again in agreement with the counting we presented above.\n\n\n\n\\section{A special role of ${\\ell }=0,1$ in quantization of gravity}\n\nIs there any relation between the well known fact about the absence of radiation from monopoles and dipoles in gravity and the fact we observed here, that there are no quantum physical degrees of freedom in monopoles and dipoles when gravity is quantized in spherical harmonics basis? The answer is yes, and it has to do with the tensor nature of gravity, so that radiation starts with quadrupoles ${\\ell }\\geq 2$. \n\nRegge-Wheeler ansatz for ${\\ell }\\geq 2$ has 10 functions depending on coordinates of ${\\cal M}_2$ listed in eqs. \\rf{RWA}-\\rf{ansatzM}. 
Here we show them in the matrix form contracted with spherical functions.\n\\begin{equation}\nh_{\\mu\\nu}^{{\\ell }>1} =\\begin{pmatrix}\n h^{{\\ell }m}_{ab} Y^{{\\ell }m} & & & & {\\color {blue}j^{{\\ell }m}_a Y_B^{{\\ell }m}} \\\\\n\\cr \n {\\color {blue} j^{{\\ell }m}_a Y_B^{{\\ell }m} } & & & & r^2 K^{{\\ell }m} \\Omega_{AB} Y^{{\\ell }m} + {\\color {red} G^{{\\ell }m} Y_{AB}^{{\\ell }m}}\\end{pmatrix}^{(+)} + \\begin{pmatrix}\n0 & & & & {\\color {blue} h^{{\\ell }m}_a X_B^{{\\ell }m}} \\\\\n\\cr \n {\\color {blue} h^{{\\ell }m}_a X_B^{{\\ell }m}} & & & & {\\color {red} h^{{\\ell }m}_2 X_{AB}^{{\\ell }m}} \\end{pmatrix}^{(-)} \\, .\n\\end{equation}\nThe number gauge symmetries in all cases with ${\\ell } >0$ is the same since $\\xi_\\mu$ is a vector\n\\begin{equation}\n\\xi_{\\mu}^{{\\ell }>0} =\\begin{pmatrix}\n\\xi^{{\\ell }m}_{a} Y^{{\\ell }m} \\\\\n\\cr \n {\\color {blue}\\xi^{{\\ell }m} Y_A^{{\\ell }m}} \\end{pmatrix}^{(+)} \\, + \\begin{pmatrix}\n0 \\\\\n\\cr \n {\\color {blue}\\xi^{{\\ell }m}_a X_B^{{\\ell }m} } \\end{pmatrix}^{(-)} \\, .\n\\end{equation}\nTherefore we find that instead of 10 fields (even and odd) as for ${\\ell }\\geq 2$ we have 8 fields (even and odd) for ${\\ell }=1$, no fields in red\n\\begin{equation}\nh_{\\mu\\nu}^{{\\ell }=1} =\\begin{pmatrix}\n h^{{\\ell }m}_{ab} Y^{{\\ell }m} & & & & j^{{\\ell }m}_a Y_B^{{\\ell }m} \\\\\n\\cr \n j^{{\\ell }m}_a Y_B^{{\\ell }m} & & & & r^2 K^{{\\ell }m} \\Omega_{AB} Y^{{\\ell }m} \\end{pmatrix}^{(+)} + \\begin{pmatrix}\n0 & & & &h^{{\\ell }m}_a X_B^{{\\ell }m} \\\\\n\\cr \nh^{{\\ell }m}_a X_B^{{\\ell }m} & & & & 0 \\end{pmatrix}^{(-)} \\, .\n\\end{equation}\nTherefore from 10-2 =8 states we subtract a double set of 4 symmetries, and find no degrees of freedom for ${\\ell }=1$ since 8-8=0.\n\nAt ${\\ell}=0$ $Y_{AB}^{00}= X_{AB}^{00}=0$, the terms in red are absent, but also $Y_{A}^{00}= X_{A}^{00}=0$, all blue terms are absent. \n\\begin{equation}\nh_{\\mu\\nu}^{{\\ell }=0} =\\begin{pmatrix}\n h^{{\\ell }m}_{ab} Y^{{\\ell }m} & & & & 0 \\\\\n\\cr \n0 & & & & r^2 K^{{\\ell }m} \\Omega_{AB} Y^{{\\ell }m} \\end{pmatrix}^{(+)} + \\begin{pmatrix}\n0 & & & &0 \\\\\n\\cr \n0 & & & & 0 \\end{pmatrix}^{(-)} \\, .\n\\end{equation}\n\\begin{equation}\n\\xi_{\\mu}^{{\\ell }=0} =\\begin{pmatrix}\n\\xi^{{\\ell }m}_{a} Y^{{\\ell }m} \\\\\n\\cr \n0 \\end{pmatrix}^{(+)} \\, + \\begin{pmatrix}\n0 \\\\\n\\cr \n0 \\end{pmatrix}^{(-)} \\, .\n\\end{equation}\nWe are left with 4 fields and 2 gauge symmetries, there are no degrees of freedom for ${\\ell }=0$: 4-4=0.\n\n\n\\section{Quantization of Gravity in Spherical Harmonics Basis in the Flat Background}\nThe procedure of Lagrangian quantization performed in \\cite{Kallosh:2021ors} as well as the values of the unitary quadratic Hamiltonians presented in this paper, have a smooth limit from the Schwarzschild background to a flat one. 
In Schwarzschild coordinates this means that the limit $M \\rightarrow 0$ is regular.\n\n In particular, Zerilli-Moncrief function for ${\\ell }\\geq 2$ in Regge-Wheeler gauge in the limit $M \\rightarrow 0$ is \n\\begin{equation}\n\\Psi_{\\rm even}^{{\\ell }m} = \\frac{2r}{{\\ell }({\\ell }+1)} \\biggl[ {K} \n+ \\frac{2}{({\\ell }-1)({\\ell }+2)} \\biggl( {h}_{rr} \n- r {K}_{,r} \\biggr) \\biggr], \\qquad {\\ell }\\geq 2\n\\end{equation}\nThe Cunningham-Price-Moncrief function is\n\\begin{equation}\n\\Psi_{\\rm odd}^{{\\ell } m} = \\frac{2r}{({\\ell }-1) ({\\ell }+2) } \\biggl( \n {h}_{t , r} ^{{\\ell }m} - \n {h}_{r,t}^{{\\ell }m} - \\frac{2}{r} {h}_t^{{\\ell }m} \n\\biggr \n), \n \\qquad {\\ell }\\geq 2 \\end{equation}\n The quadrartic part of the Hamiltonian in both cases is\n \\begin{equation}\n H_{\\rm even\/odd}= {1\\over 2} \\sum_{{\\ell }\\geq 2, m} \n \\int \\Big [dr ( P^{{\\ell }, m } )^{2 }_{\\rm even\/odd}+ (Q_{, r}^{{\\ell }, m })_{\\rm even\/odd}^2 + \\frac{{\\ell }({\\ell }+1) }{r^2} \n (Q^{{\\ell }, m })_{\\rm even\/odd}^2\\Big ]\n\\label{HamFlat} \\end{equation}\nHere $Q_{\\rm even\/odd} = \\Psi _{\\rm even\/odd}$ and $P_{\\rm even\/odd}$ is the corresponding canonical conjugate. At the quadratic level these are the only 2 physical states which appear in the unitary Hamiltonian.\n\nThe higher order terms in the each of the quantized actions, at the black hole background and in the flat background still have to be constructed.\n\n\\section{A comment on Regge-Wheeler and Teukolsky formalism and gravity waves}\n\n\nThe Cunningham-Price-Moncrief (CPM) master function and the Zerilli-Moncrief (ZM) master function, which were identified here as canonical variables in the gravity, appear to play some role also in a more interesting case of the Kerr black holes. Namely, as pointed out in a review \\cite{Pound:2021qin}, there is a relation via Chandrasekhar transformation between these functions and Teukolsky radial function. Note that \nTeukolsky equations for the Weyl tensor components use the expansion in terms of the spin-weighted spheroidal harmonics. Such and expansion for the metric starts with ${\\ell }=2$.\n\nThere is also an interesting relation between the metric perturbation far from the source and our canonical variables in the generalized Regge-Wheeler gauge. Namely, according to \\cite{Pound:2021qin} the gravitational wave strain can be determined directly from CPM and ZM functions of the metric. Using the \n Chandrasekhar transformation between these functions and Teukolsky radial function, and some properties of $\\psi_4= C_{n\\bar m n\\bar m}$ the gravitational strain was given as\n\n\n\\begin{equation}\nr(h_+ -i h_x) =\\sum_{{\\ell }\\geq 2} \\sum_{ |m| \\leq {\\ell }} {D\\over 2} \\Big (\\Psi_{\\rm even}^{{\\ell }m}- i \\Psi_{\\rm even}^{{\\ell }m}\\Big ) \\, \\hskip 1 mm {}_{- 2} Y_{{\\ell }, m } (\\theta, \\phi)\n\\end{equation}\nwhere ${}_{- 2} Y_{{\\ell }, m } (\\theta, \\phi)$ is the the spin-weighted spheroidal harmonic.\nThat equality holds in the limit $r \\rightarrow \\infty $ (at fixed $u=t-r_*$).\nHere the constant\n\\begin{equation}\nD= \\sqrt {({\\ell }-1) ({\\ell }+1) ({\\ell }+1)}\n\\end{equation}\nis the Schwarzschild limit of the constant that appears in the Teukolsky-Starobinsky identities. Clearly, the cases ${\\ell }=0,1$ drop from the formula for the gravitational waves. 
This is in agreement with the fact established in this paper that these modes have no physical degrees of freedom.\n\n\n\n\\section{Summary} \n\n\n\nIn this note we have counted the number of physical quantized degrees of freedom of Einstein gravity in spherical harmonic basis using the standard formula: this number is given by $n-k$, where $n+k$ is the number of components of gauge fields and the gauge theory has $k$ gauge symmetries. For example, in 4D the graviton has $n+k= 6+4= 10$ components and there are $k=4$ gauge symmetries. The number of physical degrees of freedom is $n-k= (n+k) - 2k= 10-8=2$.\n\n\nIn spherical harmonic basis we have found that for each ${\\ell }, m$ in ${\\ell } \\geq 2$ sector there is one degree of freedom for even parity states and one degree of freedom for odd parity states. In ${\\ell }<2$ sector of gravity we have found that there are no physical degrees of freedom.\n\nTo construct the Hamiltonian we start with the Regge-Wheeler formulation \\cite{Regge:1957td,Zerilli:1971wd,Martel:2005ir} of Einstein gravity in spherical harmonic basis in the background of a Schwarzschild black hole. The part of the action $S(g+h)$ quadratic in perturbations $h_{\\mu\\nu}$ in eq. \\rf{quad} can be presented in spherical harmonic basis using the explicit form of equations of motion linear in perturbations, as shown in eq. \\rf{EOM}. We take these explicit expressions $Q^{\\mu\\nu} = {\\delta S (g, h) \\over \\delta h_{\\mu\\nu}} $, which are linear in $h_{\\mu\\nu}$, \nfrom \\cite{Martel:2005ir}, and reconstruct the part of the action $S(g+h)$ quadratic in perturbations $h_{\\mu\\nu}$. We impose the generalized Regge-Wheeler gauge\n \\cite{Kallosh:2021ors}. The action quadratic in fields we take in Schwarzschild coordinates and proceed with canonical quantization, defining canonical momenta and constraints. \n\nFor ${\\ell } \\geq 2$ fields the procedure leads to one independent degree of freedom for even and one for odd modes in each case with ${\\ell }, m$, in agreement with the counting of physical degrees of freedom. We conclude that up to a canonical transformation such a Hamiltonian is equivalent to the one presented in \\cite{Moncrief:1974am} where the corresponding canonical variables are Zerilli-Moncrief function \\cite{Zerilli:1971wd,Moncrief:1974am} for even modes and a Cunningham-Price-Moncrief function \\cite{Cunningham:1978zfa} for odd modes. In \\cite{Moncrief:1974am} the modes with ${\\ell }<2$ were not studied.\n\nWe apply our method also for ${\\ell }<2$ modes. In each sector for ${\\ell }=1$, even and odd case and for \n${\\ell }=0$ we first reproduce the action from the explicit expressions $Q^{\\mu\\nu} = {\\delta S (g, h) \\over \\delta h_{\\mu\\nu}} $ linear in $h_{\\mu\\nu}$. We use the gauge-fixing condition for low multipoles in \\cite{Kallosh:2021ors} and identify the canonical variables and constraints. In each case the conclusion is that there are no independent unconstrained canonical variables suitable for the quantized Hamiltonian. This is again in agreement with the counting of degrees of freedom performed earlier. \n\n\nThe original goal of this investigation was to develop a consistent method of quantization of gravitational field in the background of a Schwarzschild black hole \\cite{Kallosh:2021ors}. However, we found that in Schwarzschild coordinates the limit $M \\rightarrow 0$ is regular, and therefore the quantization procedure is valid in the Minkowski background as well. 
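As a simple cross-check of the mode counting used in this summary, the arithmetic (number of field components minus twice the number of gauge parameters in each $\ell$ sector) can be tabulated explicitly. The short Python sketch below only reproduces the numbers quoted in the text; it is an illustration, not part of the quantization procedure.
\begin{verbatim}
# Physical degrees of freedom per (l, m) sector:
#   dof = (field components) - 2 * (gauge parameters),
# with the components and gauge parameters quoted in the text.
sectors = {
    "l >= 2": (10, 4),  # full even + odd perturbation, vector gauge parameter
    "l = 1":  (8, 4),   # the two 'red' fields G and h_2 are absent
    "l = 0":  (4, 2),   # only h_ab and K survive; only xi_a remains as gauge parameter
}

for name, (components, gauge) in sectors.items():
    print(f"{name}: {components} - 2*{gauge} = {components - 2*gauge}")
# Output: 2 physical degrees of freedom for l >= 2 (one even, one odd),
# and none for l = 1 or l = 0, in agreement with the counting above.
\end{verbatim}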
In this paper we found the Hamiltonian describing unitary evolution of gravitational perturbations in spherical coordinates, which equally well applies to quantization of gravity in Minkowski background as well as in the Schwarzschild black hole background. The choice of the generalized Regge-Wheeler gauge in \\cite{Kallosh:2021ors} where the gravity Hamiltonian is unitary requires to use the spherical harmonic basis for the metric perturbations. This {\\it unitary gauge} is a Regge-Wheeler gauge $G^{\\ell m(+)}=j_a^{\\ell m(+)}=h_2^{\\ell m(-)}=0$ for ${\\ell } \\geq 2$. For ${\\ell } =1$ it is $j_{a}^{1 m(+)}= K^{1m(+)}=h_{r}^{1 m(-)}=0$ and for ${\\ell } =0$ it is $K^{00}=h_{tr}^{00}=0$.\n\nIn this generalized Regge-Wheeler gauge, the quadratic part of the Hamiltonian for ${\\ell }<2$ modes is vanishing, whereas for ${\\ell }\\geq 2$ it is given in eqs. \\rf{Heven}, \\rf{Hodd} in the black hole background and in eq. \\rf{HamFlat} in Minkowski background. \n\n\\section*{Acknowledgement}\nI am grateful to A. Barvinsky, E. Coleman, A. Linde, E. Poisson, A. Rahman, P. Stamp, A. Starobinsky, A. Vainshtein, A. Van Proeyen and I. Volovich for stimulating and helpful discussions. \nI am supported by the SITP, by the US National Science Foundation Grant PHY-2014215 and by the Simons Foundation Origins of the Universe program (Modern Inflationary Cosmology collaboration). \n \n\n\n\\\n\n\\\n\n\n\n\n\\bibliographystyle{JHEP}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nThe \\LaTeXe{} document class {\\tt appolb.cls} should be used \nby starting the file with\\\\\n{\\tt\\verb|\\documentclass{appolb}|}\n\nOur main goal is to let the authors see how the text and equations\nfit to our page layout --- the text column size is 126 mm\n$\\times$ 190 mm. The style is very similar to the original Latex\n{\\tt article}, \\ie most of the commands are used in the same way although\nsome of them result in a different text formatting.\nThere are also some new commands, which are described below. 
\n\n\\section{Options}\nOptional parameters to the {\\tt appolb} class can be given, as usually, in square\nbrackets, \\eg\\\\\n{\\tt\\verb|\\documentclass[letterpaper,draft]{appolb}|}\\\\\nDefault options are: {\\tt a4paper,final}.\n\n{\\parindent=0pt\\obeylines\nAvailable options:\n{\\tt draft} or {\\tt final} --- show or hide the overfull rule\n{\\tt letterpaper} or {\\tt a4paper} --- select paper size\n}\n\n\\section{Commands}\n\\parindent=0pt\n{\\tt\\verb|\\eqsec|}\n\nCall this macro before the first {\\tt\\verb|\\section|}\ncommand if you want equations numbered as \n(SectionNumber.EqNumber).\nYou can uncomment line \\the\\eLiNe\\ of this file \n({\\tt\\jobname.tex}) to see the effect.\n\n\\subsection{Shortcuts}\n{\\obeylines\n{\\tt \\verb|\\ie|} gives: \\ie\n{\\tt \\verb|\\eg|} gives: \\eg\n{\\tt \\verb|\\cf|} gives: \\cf\n}\nThe macros provide appropriate spacing\nwithout the need for any curly braces \\{\\}.\n\n\\subsection{Math mode operators}\n{\\tt \\verb|\\Tr|} gives: $\\Tr$\n\n{\\tt \\verb|\\e|} gives: $\\e$ --- straight `e' in math mode.\n\n\\subsection{{\\tt eqletters} environment}\n\nEnumarate equations with\na number and a lower-case letter, \\eg\n\\begin{eqletters}\n\\label{myeq}\n\\begin{eqnarray}\nA_1 &=& F(1)\\,,\n\\label{me1}\n\\\\\nA_2 &=& F(2)\\,.\n\\label{me2}\n\\end{eqnarray}\nAs long as the {\\tt eqletters} environment is active all equations are\nnumbered with letters, \\eg\n\\begin{equation}\nL = \\Half a = \\half A\n\\end{equation}\n\\end{eqletters}\n\nEquations (\\ref{me1}) and (\\ref{me2}) can be referenced as Eqs. (\\ref{myeq}).\nThe {\\tt \\verb|\\label|} statement used to generate the latter reference\nmust be placed outside any {\\tt eqnarray} or {\\tt equation} environment.\n\n\\end{document}\n\n\n\\section{Introduction}\n\nThese lectures concern the properties of strongly interacting matter at high energy density. Such matter occurs in a number of contexts. The high density partonic matter that controls the early stages of hadronic collisions at very high energies is largely made of very coherent gluonic fields. In a single hadron, such matter forms the small x part of a wavefunction, a Color Glass Condensate. After a collision of two hadrons, this matter almost instantaneously is transformed into longitudinal color electric and color magnetic fields. The ensemble of these fields in their early time evolution is called the Glasma. The decay products of these fields thermalize and form a high temperature gas of quarks and gluons, the Quark Gluon Plasma. In collisions at lower energy, and perhaps in naturally occurring objects such as neutron stars, there is high baryon density matter at low temperature. This is Quarkyonic matter.\n\nThere is a very well developed literature concerning these various forms of matter. It is not the purpose of these lectures to provide a comprehensive review. I will concentrate on motivating and describing such matter from simple intuitive physical pictures and from simple structural aspects of QCD. I will attempt at various places to relate what is conjectured or understood about such matter to experimental results from accelerator experiments.\n\n\n\n\\section{Lecture I: The Color Glass Condensate and the Glasma}\n\nThe parton distributions of gluons, valence quarks and sea quarks can be measured for some momentum scale less than a resolution scale $Q$ as a function of their fractional momentum $x$ of a high energy hadron. 
The lowest value of $x$ accessible for a fixed hadron energy $E$ is typically\n$x_{min} \\sim \\Lambda_{QCD}\/E_{hadron}$. The small x limit is therefore the high energy limit.\n\nIt is remarkable that as $x$ is decreased, as we go to the high energy limit, that the gluon density dominates the constituents of a hadron for $x \\le 10^{-1}$. The various distributions are shown as a function of $x$ in Fig. \\ref{gluondominance}. The gluon density rises like a power of $x$ like $x^{-\\delta}$, $\\delta \\sim .2-.3$\nat accessible energies\n\\begin{figure}[!htb]\n\\begin{center}\n \\mbox{{\\epsfig{figure=gluondominance.pdf, width=0.70\\textwidth}}}\n \\end{center}\n\\caption[*]{ \\small The parton distribution as a function of $x$.}\n \\label{gluondominance}\n\\end{figure}\nThe area of a hadron grows slowly with energies. Cross sections grow roughly as $ln^2(1\/x)$ for small x.\nThis means that the rapidly growing gluon distribution results in a high density system of gluons. At high density, the gluons have small separation and by asymptotic freedom, the intrinsic strength of their interaction must be weak.\n\nA small intrinsic interaction strength does not mean that interactions are weak. Consider gravity: The interactions between single protons is very weak, but the force of gravity is long range, and the protons in the earth act coherently, that is always with the same sign. This results in a large force of gravity. This can also happen for the gluons inside a hadron, if their interactions are coherent.\n\nTo understand how this might happen, suppose we consider gluons of a fixed size $r_0 \\sim 1\/p_T$ where\n$p_T$ is its transverse momentum. We assume that at high energy, the gluons have been Lorentz contracted into a thin sheet, so we need only consider the distribution of gluons in the transverse plane. If\nwe start with a low density of gluons at some energy, and then evolve to higher energy, the density of gluons increases. When the density is of order one gluon per size of the gluon, the interaction remains weak because of asymptotic freedom. When the density is of order $1\/\\alpha_S$, the coherent interactions are strong, and adding another gluon to the system is resisted by a force of order $1$. The gluons act as hard spheres. One can add no more gluons to the system of this size. It is however possible to add in smaller gluons, in the space between the closely packed gluons of size $r_0$. This is shown in Fig. \\ref{saturation}\n\\begin{figure}[!htb]\n\\begin{center}\n \\mbox{{\\epsfig{figure=saturation.pdf, width=0.60\\textwidth}}}\n \\end{center}\n\\caption[*]{ \\small Increasing the gluon density in a saturated hadron when going to higher energy.}\n \\label{saturation}\n\\end{figure}\n\nThe physical picture we derive means that below a certain momentum scale, the saturation scale $Q_{sat}$,\nthe gluon density is saturated and above this scale it is diffuse. The saturation momentum scale grows with energy and need not itself saturate\\cite{Gribov:1984tu}-\\cite{McLerran:1993ka}.\n\nThe high phase space density of gluons, $dN\/dyd^2p_Td^2r_T \\sim 1\/\\alpha_S$ suggests that one can describe the gluons as a classical field. A phase space density has a quantum mechanical interpretation\nas density of occupation of quantum mechanical states. When the occupation number is large, one is in the classical limit.\n\nOne can imagine this high density gluon field generated from higher momentum partons. 
We introduce the idea of sources corresponding to high $x$ partons and fields as low $x$ partons. Because the high $x$ parton sources are fast moving, their evolution in time is Lorentz time dilated. The gluon field produced by these sources is therefore static and evolves slowly compared to its natural time scale of evolution. This ultimately means that the different configurations of sources are summed over incoherently, as in a spin glass. \n\nWe call this high energy density configuration of colored fields a Color Glass Condensate. The word color is because the gluons that make it are colored. The word condensate is used because the phase space density of gluons is large, and because this density is generated spontaneously. The word glass is used because the typical time scale of evolution of the classical fields is short compared to the Lorentz time dilated scales associated with the sources of color.\n\nThere is an elaborate literature on the Color Glass Condensate and an excellent review is by Iancu and Venugopalan\\cite{Iancu:2003xm}. Evolution of the CGC to small values of x is understood, as well as many relationships between deep inelastic scattering, deep inelastic diffraction and high energy nucleus-nucleus,\nproton-nucleus and proton-proton scattering. The CGC is a universal form of matter in the high energy limit. The theoretical ideas underlying the CGC are largely unchallenged as a description of the high energy limit of QCD, but the issue of when the approximation appropriate for the high energy limit are valid remains contentious. \n\nIn the description of high energy hadron hadron collisions, we consider the collision of two sheets of CGC as shown in Fig. \\ref{sheets}. The color electric and color magnetic fields of the CGC are visualized as sheets of Lenard-Wiechart potentials. These are classical gluon fields whose polarization and color are random, with an intensity distribution determined by the underlying theory of the CGC.\n\\begin{figure}[!htb]\n\\begin{center}\n \\mbox{{\\epsfig{figure=sheets.pdf, width=0.70\\textwidth}}}\n \\end{center}\n\\caption[*]{ \\small The collision of two sheets of CGC.}\n \\label{sheets}\n\\end{figure}\n\nUpon collision of these sheets, the sheets become charged with color magnetic and color electric charge distributions of equal magnitude but opposite sign locally in the transverse plane of the sheets\\cite{Kovner:1995ja}-\\cite{Lappi:2006fp}. In the high\nenergy limit sources of color electric and color magnetic field must be treated on an equal footing because of the self duality of QCD. This induced charge density produces longitudinal color electric and color magnetic fields between the two sheets. These fields are longitudinally boost invariant and therefore have the correct structure to account for Bjorken's initial conditions in heavy ion collisions\\cite{Bjorken:1982qr}. The typical transverse length scale over which the flux tubes vary is $1\/Q_{sat}$. 
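To attach a rough number to this transverse scale, one can convert the saturation momentum into a length using $\hbar c \simeq 0.197$ GeV fm. The values of $Q_{sat}$ in the short Python sketch below are purely illustrative assumptions, not values extracted from data.
\begin{verbatim}
# Transverse size 1/Q_sat of a Glasma flux tube for assumed saturation momenta.
HBARC = 0.197  # GeV * fm

for q_sat in (1.0, 1.5, 2.0):  # GeV, illustrative values
    print(f"Q_sat = {q_sat:.1f} GeV  ->  1/Q_sat ~ {HBARC / q_sat:.2f} fm")
# ~0.1-0.2 fm, i.e. much smaller than the transverse extent of a heavy nucleus.
\end{verbatim}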
The initial density of produced gluons is on dimensional grounds\n\\begin{equation}\n {1 \\over {\\pi R^2}} {{dN} \\over {dy}} \\sim {{Q_{sat}^2} \\over {\\alpha_S}}\n\\end{equation}\nBecause there are both color electric and color magnetic fields, there is a topological charge density of maximal strength induced $FF^D \\sim Q_{sat}^2\/\\alpha_S$ \n\\begin{figure}[!htb]\n\\begin{center}\n \\mbox{{\\epsfig{figure=glasma.pdf, width=0.70\\textwidth}}}\n \\end{center}\n\\caption[*]{ \\small Glasma flux tube produces after the collision.}\n \\label{glasma}\n\\end{figure}\n\nThe decay of products of the Glasma is what presumably makes a thermalized Quark Gluon Plasma. It is not clear how this thermalization takes place. It is quite likely that in the decay of these fields, a turbulent fluid arises, and perhaps this fluid can generate an expansion dynamics similar to that of a thermalized QGP for at least some time\\cite{Dusling:2010rm}.\n\n\\subsection{The CGC and Electron-Hadron Scattering}\n\nIf the only momentum scale that controls high energy scattering is the saturation momentum,\nthen there will be scaling\\cite{Stasto:2000er}. In particular, the cross section for deep inelastic scattering will be a function\n\\begin{equation}\n\\sigma_{\\gamma^*p} \\sim F(Q^2\/Q_{sat}^2)\n\\end{equation}\nrather than a function of $Q^2$ and $x$ independently. The x dependence of the saturation momentum may be determined empirically as $Q_{sat}^2 \\sim Q_0^2\/x^\\delta$ where $\\delta = 0.2-0.3$, which is consistent with analysis of evolution equations\\cite{Balitsky:1995ub}-\\cite{Mueller:2002zm}.\nThe scaling relationship can be derived from the classical theory for $Q^2 \\le Q_{sat}^2$. It can further be shown to extend over a much larger range of $Q^2$\\cite{Iancu:2002tr}. For large values of $Q^2$ this scaling is a consequence of the linear evolution equations, but the global structure is determined by the physics of saturation. Such a simple scaling relationship describes deep inelastic scattering data for $ x \\le 10^{-2}$.\n\\begin{figure}[!htb]\n\\begin{center}\n \\mbox{{\\epsfig{figure=geometric.pdf, width=0.70\\textwidth}}}\n \\end{center}\n\\caption[*]{ \\small The geometric scaling in deep inelastic scattering.}\n \\label{geometric}\n\\end{figure}\n\nUsing evolution equations for the CGC including the effects of running coupling constant\\cite{Albacete:2007yr}-\\cite{Balitsky:2008zza}, one can compute\ndeep inelastic scattering structure functions at small x\\cite{Albacete:2009fh}. This involves very few parameters, and provides comprehensive description of deep inelastic scattering data at $x \\le 10^{-2}$.\nThe description of $F_2$ in deep inelastic scattering is shown in Fig. \\ref{f2}. It should be noted that in\nthe CGC description of deep inelastic scattering, the gluon distribution function is the Fock space distribution of gluons inside a hadron. It can never become negative. In the description of the $F_2$ data,\nthe gluon distribution function is not becoming small at small $Q^2$ as is the case in some linear evolution fits. 
This is intuitively reasonable since we have no reason to expect that the Fock space distribtuion of gluons in a hadron should become small at small $Q^2$.\n\\begin{figure}[!htb]\n\\begin{center}\n \\mbox{{\\epsfig{figure=f2.pdf, width=0.90\\textwidth}}}\n \\end{center}\n\\caption[*]{ \\small The CGC description of F2 data in deep inelastic scattering.}\n \\label{f2}\n\\end{figure}\n\nThe Color Glass Condensate description may also be applied to diffractive deep inelastic scattering,\nand with the same parameters that describe deep inelastic scattering does an excellent job of describing\nthe data. In addition, there are measurements of the longitudinal structure function, a quantity directly proportional to the gluon density. Conventional descriptions that use linear DGLAP evolution equations are somewhat challenged by this data, but the CGC description naturally fits the data. \n\nTo summarize, the CGC description of deep inelastic scattering at small x naturally describes $F_2$, $F_L$\nand diffractive data. It is a successful phenomenology Why is the CGC therefore not accepted as the standard description? The problem is that the linear evolution DGLAP descriptions describe $F_2$ adequately, except in the region where the perturbative computations most probably breaks down. They do not do a very good job on the low $Q^2$ $F_L$ data, but this is where there is a fair uncertainty in the data.\nThe diffractive data is naturally described in the CGC framework, but there are other successful models.\nUltimately, there is no consensus within the deep inelastic scattering community that the CGC is needed in order to describe the data.\n\n\\subsection{The CGC and Heavy Ion Collisions}\n\n\\subsubsection{Multiplcities in RHIC Nulcear Collsions}\n\nOne of the early successes of the CGC was the description of multiplicity distributions in deep inelastic scattering\\cite{Kharzeev:2000ph}-\\cite{Kharzeev:2001yq}. Recall that the phase space distribution of gluons up to the saturation momentum is of order $Q_{sat}^2\/\\alpha_S(Q_{sat})$. We will assume that the distribution of\ninitially produced gluons is proportional to this distribution of gluons in the hadron wavefunctions of the colliding nuclei and further,that the multiplicity of produced gluon is proportional to the final\nstate distribution of pions. We get\n\\begin{equation}\n{1 \\over \\sigma}~ {{dN} \\over {dy}} \\sim {1 \\over{ \\alpha_S(Q_{sat})}}Q_{sat}^2 \\sim A^{1\/3} x^{-\\delta}\n\\end{equation} \nHere $\\sigma$ is the area of overlap of the two nuclei in the collision and A the number of nucleons that participate in the collision. $\\sigma ~Q_{sat}^2 \\sim A$\nat low energies assumes no shadowing of nucleon parton distributions and is consistent with\ninformation concerning deep inelastic scattering on nuclear targets. In the collisions of nuclei one can directly measure the number of nucleonic participants in the collisions, a number that varies with the centrality of the collision. One can then compare the central region multiplicity with the number of participants so determined. Such a comparison is shown in\\cite{Adcox:2004mh}-\\cite{Back:2004je} Fig. \\ref{aa} .\nThe saturation description of Kharzeev and Nardi provides a good description of the centrality dependence of the collisions\\cite{Kharzeev:2000ph}. It also does well with the energy dependence. 
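A rough numerical sketch of this scaling can be written down directly from the relations above, $(1/\sigma)\,dN/dy \sim Q_{sat}^2/\alpha_S(Q_{sat})$ together with $\sigma Q_{sat}^2 \sim A$. The normalization, the value of $\delta$, and the one-loop running coupling in the Python sketch below are illustrative assumptions, not parameters of the Kharzeev--Nardi fit.
\begin{verbatim}
import math

# Saturation-model estimate of the multiplicity per participant pair:
#   dN/dy ~ sigma * Q_sat^2 / alpha_s(Q_sat) ~ N_part / alpha_s(Q_sat),
# with Q_sat^2 growing slowly with centrality and energy.
LAMBDA_QCD = 0.2   # GeV, illustrative
DELTA = 0.25       # exponent of the x dependence of Q_sat^2, illustrative
Q0_SQ = 1.0        # GeV^2, reference saturation scale, illustrative

def alpha_s(q_sq, n_f=3):
    """One-loop running coupling evaluated at the scale q_sq."""
    return 12.0 * math.pi / ((33.0 - 2.0 * n_f) * math.log(q_sq / LAMBDA_QCD**2))

def dndy_per_pair(n_part, x=0.01, norm=0.9):
    q_sat_sq = Q0_SQ * (n_part / 2.0) ** (1.0 / 3.0) * x ** (-DELTA)
    return norm / alpha_s(q_sat_sq)

for n_part in (50, 100, 200, 350):
    print(f"N_part = {n_part:3d}:  dN/dy per pair ~ {dndy_per_pair(n_part):.1f}")
# The multiplicity per participant pair rises slowly with centrality, driven only
# by the running of alpha_s at the saturation scale.
\end{verbatim}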
Refinements of this description can provide a good description of the rapidity distribution of produced particles\\cite{Kharzeev:2001yq}.\n\\begin{figure}[!htb]\n\\begin{center}\n \\mbox{{\\epsfig{figure=aa.pdf, width=0.90\\textwidth}}}\n \\end{center}\n\\caption[*]{ \\small The multiplicity as a function of the number of nucleon participants in heavy ion collisions.}\n \\label{aa}\n\\end{figure}\n\n\\subsubsection{Limiting Fragmentation}\n\nA general feature of high energy hadronic scattering is limiting fragmentation. If one measures the distribution of particles as a function of rapidity up to some fixed rapidity from the rapidity of one of the colliding particles, then the distribution is independent of collision energy. The region over which this scaling occurs\nincreases as the energy of the colliding particles increases. Such scaling is shown in Fig. \\ref{limfrag}.\n\\begin{figure}[!htb]\n\\begin{center}\n \\mbox{{\\epsfig{figure=limfrag.pdf, width=0.90\\textwidth}}}\n \\end{center}\n\\caption[*]{ \\small Limiting fragmentation in RHIC nuclear collisions.}\n \\label{limfrag}\n\\end{figure}\nSuch limiting fragmentation is natural in the CGC approach. For example in Fig. \\ref{limfrag},\nwe see that the region of limiting fragmentation increases as beam energy increases\\cite{Back:2004je}. If we think of the region where there is limiting fragmentation as sources for fields at small more central rapidities, then we see that going to higher energies corresponds to treating a larger region as sources. In a renormalization group language, this simply means that one is integrating out fluctuations at less central rapidities, to generate an effective theory for the particles at more central rapidity. A quantitative description of limiting fragmentation within the theory of the CGC is found in Ref. \\cite{Gelis:2006tb}.\n\n\\subsubsection{Single Particle Distributions in dAu Collisions}\n\nSome of the early predictions of the CGC were generic features of the single particle inclusive distributions\nseen in hadron-nucleus collisions. There are two competing effects. The first is multiple scattering of a hadron as it traverses a nucleus. This effect is included n the CGC gluon distributions as an enhancment\nof the gluon distribution for $p_T$ at transverse momentum of the order of the saturation momentum,\nwith a corresponding depletion at smaller momentum. There is little effect at high $p_T$. The other effect is that in the evolution of the gluon distribution to small $x$, the saturation momentum acts as a cutoff in the\nbremstrahlung like integrals that generate such small x gluons. Nuclei have a larger saturation momentum\nthan do hadrons, so the small x gluon distribution for nuclei will be suppressed relative to that for hadrons. Put another way, this effect will generate a suppression for more central collisions. The sum of these effects is shown in Fig. \\ref{dA}\\cite{Baier:2003hr}-\\cite{Iancu:2004bx}. \nThe different curves correspond to different rapidities of the produced particle,\nbeginning with the top curve being near the fragmentation region of the nucleus. 
As one evolve further in rapidity, the enhancement at intermediate transverse momentum disappears and is replaced by a smooth curve with an overall suppression of produced particles.\n\\begin{figure}[!htb]\n\\begin{center}\n \\mbox{{\\epsfig{figure=dA.pdf, width=0.90\\textwidth}}}\n \\end{center}\n\\caption[*]{ \\small The ratio of particles emitted in dA and AA collisions to that in proton due to CGC effects.}\n \\label{dA}\n\\end{figure}\n\nThe pattern of suppression suggested by the Color Glass Condensate was first seen in dAu collisions\nin the Brahms collaboration\\cite{Arsene:2004fa}, and later confirmed by the other experiments \\cite{Adcox:2004mh},\\cite{Back:2004je},\\cite{Adams:2005dq}. The Brahms experiments demonstrated\nthat in the nuclear target fragmentation region that at intermediate $p_T$ there was en enhancement in $R_{dA}$\nas a function of centrality, but in the deuteron fragmentation region, there was a depletion as a function of centrality. The CGC provided the only model that predicted such an effect, and it remains the only \ntheory that can quantitatively explain the suppression seen in the deuteron fragmentation region.\n\n\\subsubsection{Heavy Quark and $J\\Psi$ Production}\n\nIf the saturation momentum is small compared to a quark mass, it can be treated as very heavy. It should have perturbative incoherent production cross sections. If the saturation momentum is large compared to a quark\nmass, the quarks should be thought of as light mass. Cross sections for production should be coherent,\nand for example in $pA$ collisions, scale as $A^{2\/3}$. In the deuteron fragmentation region of dAu collisions we would expect suppression of heavy quark and charmonium cross sections relative to the nuclear fragmentation region. \n\\begin{figure}[!htb]\n\\begin{center}\n \\mbox{{\\epsfig{figure=jpsi.pdf, width=0.90\\textwidth}}}\n \\end{center}\n\\caption[*]{ \\small The $J\/\\Psi$ production cross section as a function of centrality and rapidity.}\n \\label{jpsi}\n\\end{figure}\n In Fig. \\ref{jpsi}, the ratio of central to peripheral cross sections for $J\/\\Psi$ production is shown as a function of centrality and rapidity. Note the strong suppression in the forward region for central collisions, as expected from the CGC. Precise computations are difficult for the charm quark since its mass is close to the saturation momentum. Such computations are in agreement with the data at forward rapidity\\cite{Kharzeev:2008nw}-\\cite{Kopeliovich:2010nw}.\n \n \\subsubsection{Two Particle Correlations}\n \n The Glasma flux tubes induced by the collision of two hadrons will generate long range correlations in rapidity. In heavy ion collisions, this may be seen in forward backward correlations, as measured in STAR. The correlation increases in strength with higher energy collisions or more central collisions. This is expected in the CGC-Glasma description because for more central collisions the saturation momentum\n is bigger, so that the system is more correlated. (The coupling becoming weaker means the system is more\n classical, and therefore the leading order contribution associated with Glasma flux tubes becomes\n relatively more important.) Such forward-backward correlations are shown in Fig. \\ref{fb} as a function of rapidity and centrality\\cite{:2009dqa}-\\cite{Lappi:2009vb}. 
The value of the correlation coefficient b\n can be shown to be bounded $b \\le 1\/2$.\n \\begin{figure}[!htb]\n\\begin{center}\n \\mbox{{\\epsfig{figure=fb.pdf, width=0.90\\textwidth}}}\n \\end{center}\n\\caption[*]{ \\small The strength of forward backward correlations as a function of rapidity and centrality.}\n \\label{fb}\n\\end{figure}\n\nSuch two particle correlations in the Glasma can generate ridge like structures seen in two particle correlation experiments in azimuthal angle and rapidity\\cite{Shuryak:2007fu}-\\cite{Dumitru:2008wn}. The long range rapidity correlation is intrinsic to the Glasma. The angular correlation might be generated by flow effects at later times in the collision, by opacity and trigger bias effects, or by\nintrinsic angular correlations associated with the decay of Glasma flux tubes\\cite{Gavin:2008ev}-\\cite{Dumitru:2010iy}.\n\n\\subsubsection{The Negative Binomial Distribution}\n\nThe decay of a single Glasma flux tube generates a negative binomial distribution of produced particles\\cite{Gelis:2009wh}.\nA sum of negative binomial distributions is again a negative binomial distribution. Such oa form of the distribution describes the RHIC data well. It is difficult with the heavy ion data to isolate those effects due to an intrinsic negative binomial distribution and those due to impact parameter. It is possible to isolate the effects of impact parameter, but it demands a high statistics study. \n\n\\subsubsection{Two Particle Azimuthal Angular Correlations in dA Collisions}\n\nThe CGC will de-correlate forward-backward angular correlations when the the transverse momentum of produced particles is of order the saturation momentum\\cite{Kharzeev:2004bw}-\\cite{Albacete:2010rh}.\nThis is because near the produced particles get momentum from the CGC and\ntherefore are not back-to-back correlated. In dAu collisions such an effect will be largest at forward rapidities near the fragmentation region of the deuteron, since this corresponds to the smallest values of x for the nuclear target. This kinematic region is least affected by multiple scattering on the nucleus. This effect has been seen by the STAR and PHENIX collaborations\\cite{Braidot:2010ig}-\\cite{Meredith:2009fp}. There is a good quantitative description by Tuchun and by Albacete and Marquet,\\cite{Tuchin:2009nf}-\\cite{Albacete:2010rh} as shown in the figure \\ref{dafb}\n \\begin{figure}[!htb]\n\\begin{center}\n \\mbox{{\\epsfig{figure=dafb.pdf, width=0.90\\textwidth}}}\n \\end{center}\n\\caption[*]{ \\small Forward rapidity, forward backward angular correlations in dAu collisions\nas a function of centrality.}\n \\label{dafb}\n\\end{figure}\n\n\\subsection{Concluding Comments on the CGC and the Glasma}\n\nThere is now a wide variety or experimental data largely consistent with the CGC and Glasma based description. There is a well developed theoretical framework that provides a robust phenomenology\nof both electro-hadron scattering and hadron scattering, There are new areas that are developing that I have not had time to discuss. One is the possibility to see effects of topological charge change in heavy ion collisions, the Chiral Magnetic Effect\\cite{Kharzeev:2007jp}. Another area is pp collisions at the LHC, where some work concerning recent experimental data was developed at this school\\cite{McLerran:2010ex}. 
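Returning for a moment to the multiplicity distribution of a single Glasma flux tube mentioned above, the short Python check below illustrates the statement that a sum of negative binomial distributions is again negative binomial. This holds when the distributions share a common probability parameter (equivalently a common $\bar{n}/k$), which is assumed here; the parameter values are arbitrary and purely illustrative.
\begin{verbatim}
import math

def nbd(n, nbar, k):
    """Negative binomial P(n) with mean nbar and (integer) shape parameter k."""
    p = nbar / (nbar + k)
    return math.comb(n + k - 1, n) * p**n * (1.0 - p)**k

K, NBAR, NMAX = 2, 3.0, 60
single = [nbd(n, NBAR, K) for n in range(NMAX)]

# Multiplicity of two independent, identical flux tubes: convolve the two NBDs.
summed = [sum(single[m] * single[n - m] for m in range(n + 1)) for n in range(NMAX)]
expected = [nbd(n, 2 * NBAR, 2 * K) for n in range(NMAX)]

print(max(abs(a - b) for a, b in zip(summed, expected)))
# ~1e-16: the convolution is again an NBD, with k -> 2k and nbar -> 2*nbar.
\end{verbatim}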
\n\n\n\\section{Lecture II: Matter at High Temperature: The Quark Gluon Plasma}\n\n\\section{Matter at Finite Temperature}\n\n\\subsection{Introdcution}\n\nIn this lecture I will describe the properties of matter at high temperature. The discussion here will be theoretical. There is a wide literature on the phenomenology of the Quark Gluon Plasma and its possible\ndescription of heavy ion collisions at RHIC energies. The interested reader is referred to that literature.\nI will here develop the ideas of decofinement, chiral symmetry restoration based in part on a simple description using the large number of colors limit of QCD.\n\n\\subsection{Confinement}\n\nThe partition function is\n\\begin{equation}\n Z = Tr~e^{-\\beta H + \\beta \\mu_B N_B}\n\\end{equation}\nwhere the temperature is $T = 1\/\\beta$ and $N_B$ is the baryon number and $\\mu_B$ is the baryon number chemical potential. Operator expectation values are\n\\begin{equation}\n = {{Tr~ O ~e^{-\\beta H + \\beta \\mu_B N_B}} \\over Z}\n\\end{equation}\n Under the substitution $e^{-\\beta H} \\rightarrow e^{-itH}$, the partition function becomes the time\n evolution operator of QCD. Therefore, if we change $t \\rightarrow it$,and redefine zeroth\n components of fields by appropriate factors of i, and introduce Euclidean gamma matrices with anti-commutation relations\n \\begin{equation}\n \\{ \\gamma^\\mu, \\gamma^\\nu \\} = -2 \\delta^{\\mu \\nu}\n \\end{equation}\n then for QCD, the partition function has the path integral representation \n \\begin{equation}\n Z = \\int~[dA] [d\\overline \\psi ] [d\\psi] exp\\left\\{ -\\int_0^\\beta~ d^4x~\\left( {1 \\over 4 }F^2 \n +\\overline \\psi \\left[ {1 \\over i} \\gamma \\cdot D + m+ i \\mu_Q \\gamma^0 \\psi \\right]\\right) \\right\\}\n \\end{equation}\nHere the fermion field is a quark field so that the baryon number chemical potential is\n\\begin{equation}\n \\mu_Q = {1 \\over N_c} \\mu_B\n\\end{equation}\nThis path integral is in Euclidean space and is computable using Monte Carlo methods when\nthe quark chemical potential vanishes. If the quark chemical potential is non-zero, various contributions appear with different sign, and the Monte Carlo integrations are poorly convergent. Boundary conditions\non the fields must be specified on account of the finite length of the integration in time. They are periodic for Bosons and anti-periodic for Fermions, and follow from the trace in the definition of the partition function.\n\nA straightforward way to probe the confining properties of the QCD matter is to introduce a heavy\ntest quark. If the free energy of the heavy test quark is infinite, then there is confinement,\nand if it is finite there is deconfinement. We shall see below that the free energy of an quark added to the system is\n\\begin{equation}\n e^{-\\beta F_q} = \n \\label{L}\n\\end{equation}\nwhere\n \\begin{equation}\n L(\\vec{x}) = {1 \\over N_c} Tr~ P~ e^{i \\int ~dt~ A^0(\\vec{x},t)}\n \\end{equation}\nSo confinement means $ = 0$ and deconfinement means that $$ is finite. The path ordered phase integration which defines the line operator $L$ is shown in Fig. \\ref{line}. Such a path ordered phase is called a Polyakov loop or Wilson line.\n\\begin{figure}[ht]\n \\begin{center}\n \\includegraphics[width=0.50\\textwidth]{line.jpg}\n \\end{center}\n \\caption{The contour in the t plane which defines the Polyakov loop. 
The space is closed in time because of the periodic boundary conditions imposed by the definition of the partition function.}\n\\label{line}\n\\end{figure}\n\nIt is possible to prove that the free energy of a heavy static quark added to the system is given by Eqn. \\ref{L}\nusing the effective action for a very heavy quark:\n\\begin{equation}\n S_{HQ} = \\int ~dt ~\\overline \\psi (\\vec{x},t) ~{1 \\over i}\\gamma^0 D^0~ \n\\psi (\\vec{x},t).\n\\end{equation}\nThe Yang-Mills action is invariant under gauge transformations that are periodic up to an element of the center of the gauge group. The center of the gauge group is a set of diagonal matrices $Z_p = e^{2\\pi i p\/N} \\overline I$, where $\\overline I$ is the identity matrix.\nThe quark contribution to the action is not invariant, and $L \\rightarrow Z_p L$ under this transformation. In a theory with only dynamical gluons, the energy of a system whose number of quarks minus antiquarks is $n$ is invariant under the center symmetry transformation only if $n$ is an integer multiple of $N$. Therefore, when the center symmetry is realized, the only states of finite free energy\nare baryons plus color singlet mesons.\n\nThe realization of the center symmetry, $L \\rightarrow Z_p L$,\nis equivalent to confinement. This symmetry is like the global rotational symmetry of a spin system, and it may be either realized or broken. At large separations, the correlation of a line and its adjoint, corresponding to a quark-antiquark pair, is\n\\begin{equation}\n \\lim_{r \\rightarrow \\infty} \\langle L(\\vec{r}) L^\\dagger(0) \\rangle = Ce^{-\\kappa r} + |\\langle L \\rangle|^2 \n\\end{equation} \nsince upon subtracting a mean field term, correlation functions should vanish exponentially. Since\n\\begin{equation}\n e^{-\\beta F_{q \\overline q} (r)} = \\langle L(\\vec{r}) L^\\dagger(0) \\rangle \n\\end{equation}\nwe see that in the confined phase, where $\\langle L \\rangle = 0$, the potential is linear, but in the unconfined phase,\nwhere $\\langle L \\rangle$ is non-zero, the potential goes to a constant at large separations.\n\nThe analogy with a spin system is useful. For the spin system corresponding to QCD\nwithout dynamical quarks,\nthe partition function can be written as\n\\begin{equation}\n Z = \\int~ [dA]~e^{- {1\\over g^2} S[A]}\n\\end{equation}\nThe effective temperature of the spin system associated with the gluon fields is $T_{eff} \\sim g^2$.\nBy asymptotic freedom of the strong interactions, as real temperature gets larger, the effective temperature gets smaller. So at large real temperature (small effective temperature) we expect an ordered system, where the $Z_N$ symmetry is broken, and there is deconfinement. For small real temperature corresponding to large effective temperature, there is disorder or confinement.\n\nThe presence of dynamical fermions breaks the $Z_N$ symmetry. This is analogous to placing a spin system in an external magnetic field. There is no longer any symmetry associated with confinement, and the phase transition can disappear. This is what is believed to happen in QCD for physical masses of quarks. What was a first order phase transition for the theory in the absence of quarks becomes a continuous change in the properties of the matter for the theory with quarks.\n\nAnother way to think about the confinement-deconfinement transition is a change in the number of degrees of freedom. At low temperatures, there are light meson degrees of freedom. Since these\nare confined, the number of degrees of freedom is of order one in the number of colors. 
In the unconfined world, there are $2(N_c^2-1)$ gluons, and $4N_cN_f$ fermions where $N_f$ is the number of light mass fermion families. The energy density scaled by $T^4$ is a dimensionless number and directly proportional to the number of degrees of freedom. We expect it to have the property shown in Fig. \\ref{et4} for pure QCD in the absence of quarks. The discontinuity at the deconfinement temperature, $T_d$ is the latent heat of the phase transition.\n\\begin{figure}[ht]\n \\begin{center}\n \\includegraphics[width=0.60\\textwidth]{et4.jpg}\n \\end{center}\n \\caption{The energy density scaled by $T^4$ for QCD in the absence of dynamical quarks.}\n\\label{et4}\n\\end{figure}\n\nThe energy density can be computed using lattice Monte Carlo methods. The result of such computation is shown in Fig. \\ref{et4lat}. The discontinuity present for the theory with no quarks becomes a rapid cross over when dynamical quarks are present.\n\nThe large $N_c$ limit gives some insight into the properties of high temperature \nmatter\\cite{'tHooft:1973jz}-\\cite{Thorn:1980iv}. As $N_c \\rightarrow \\infty$, the energy density itself is an order parameter for the decofinement phase transition. Viewed from the hadronic world, there is an amount of energy density $\\sim N_c^2$ which must be inserted \nto surpass the transition temperature. At infinite $N_c$ this cannot happen, as this involves an infinite amount of energy. There is a Hagedorn limiting temperature, which for finite $N_c$ would have been the deconfinement temperature.\n\\begin{figure}[ht]\n \\begin{center}\n \\includegraphics[width=0.60\\textwidth]{et4lat.jpg}\n \\end{center}\n \\caption{The energy density scaled by $T^4$ measured in QCD from lattice Monte Carlo simulation. Here there are quarks with realistic masses.}\n\\label{et4lat}\n\\end{figure}\n\nThe Hagedorn limiting temperature can be understood from the viewpoint of the hadronic world as arising from an exponentially growing density of states. In a few paragraphs, we will argue that mesons and glueballs are very weakly interacting in the limit of large $N_c$. Therefore, the partition function is\n\\begin{equation}\n Z = \\int~ dm~\\rho (m) e^{-m\/T}\n \\end{equation}\n Taking $\\rho(m) \\sim m^\\alpha e^{\\kappa m}$, so that \n \\begin{equation}\n \\sim {1 \\over {1\/T-\\kappa}}\n \\end{equation} \ndiverges when $T \\rightarrow 1\/\\kappa$\n\n\\subsection{A Brief Review of the Large $N_c$ Limit}\n\nThe large $N_c$ limit for an interacting theory takes $N_c \\rightarrow \\infty $ with the 't Hooft coupling\n$g^2_{'t Hooft} = g^2 N_c$ finite. This approximation has the advantage that the interactions among quarks and gluons simplify. For example, at finite temperature, the disappearance of confinement\nis associated with Debye screening by gluon loops, as shown in Fig. \\ref{loop}a. This diagram generates a screening mass of order $M^2_{screening} \\sim g^2_{'t Hooft} T^2$. On the other hand the quark loop contribution is smaller by a power of $N_c$ and vanishes in the large $N_c$ limit.\n\\begin{figure}[htbp]\n\\begin{center}\n\\begin{tabular} {l l l}\n\\includegraphics[width=0.50\\textwidth] {glueloop.jpg} & &\n\\includegraphics[width=0.46\\textwidth] {quarkloop.jpg} \\\\\n& & \\\\\na & & b \\\\\n\\end{tabular}\n\\end{center}\n\\caption{a: The gluon loop contribution to the heavy quark potential. b: The quark loop contribution to the potential}\n\\label{loop}\n\\end{figure}\n\nTo understand interactions, consider Fig. \\ref{int}a. 
This corresponds to a mesonic current-current interaction through quarks. In powers of $N_c$, it is of order $N_c$. Gluon interactions will not change this overall factor. The three current interaction is also of order $N_c$ as shown in Fig. \\ref{int}b. The three meson vertex, $G$ which remains after amputating the external lines, is therefore of order $1\/\\sqrt{N_c}$. A similar argument shows that the four meson interaction is of order $1\/N_c$.\nUsing the same arguments, one can show that the 3 glueball vertex is of order $1\/N_c$ and the four glueball interaction of order\n$1\/N_c^2$.\n\nThese arguments show that QCD at large $N_c$ becomes a theory of non-interacting mesons and glueballs. There are an infinite number of such states because excitations can never decay. In fact, the spectrum of mesons seen in nature does look to a fair approximation like non-interacting particles. Widths of resonances are typically of order $200~ MeV$, for resonances with masses up to several $GeV$.\n\\begin{figure}[htbp]\n\\begin{center}\n\na \\includegraphics[width=0.75\\textwidth ] {loop1.jpg} \\\\\n ~ \\\\\n~~b ~~~ \\includegraphics[width=0.75\\textwidth ] {loop2.jpg} \\\\\n\n\\end{center}\n\\caption{a: The quark loop corresponding to a current-current interaction. b: A quark loop corresponding to a three current interaction.}\n\\label{int}\n\\end{figure}\n\\subsection{Mass Generation and Chiral Symmetry Breaking}\n\nQCD in the limit of zero quark masses has a $U(1) \\times SU_L(2) \\times SU_R(2)$ symmetry. (The $U_5(1)$ symmetry is explicitly broken due to the axial anomaly.) Since the pion field, $\\overline \\psi \\tau^a \\gamma_5 \\psi$ is generated by an $SU_{L-R}(2)$ transformation of the sigma field, $\\overline \\psi \\psi$, the energy (or potential) in the space of the pion-sigma field is degenerate under this transformation.\nIn nature, pions have anomalously low masses. This is believed to be a consequence of chiral symmetry breaking, where the $\\sigma $ field acquires an expectation value, and the pion fields are Goldstone bosons associated with the degeneracy of the potential under the chiral rotations.\n\nSuch symmetry breaking can occur if the energy of a particle-antiparticle pair is less than zero, as shown in Fig. \\ref{hole}. On the left of this figure is the naive vacuum where the negative energy states associated with quark are filled. The right hand side of the figure corresponds to a particle hole excitation, corresponding to a sigma meson. Remember that a hole in the negative energy sea corresponds to an antiparticle with the opposite momentum and energy. If the $\\sigma$ meson excitation has negative energy, the system is unstable with respect to forming a condensate of these mesons.\n\\begin{figure}[ht]\n \\begin{center}\n \\includegraphics[width=0.40\\textwidth]{hole.jpg}\n \\end{center}\n \\caption{The energy levels of the Dirac equation. Unfilled states are open circles and filled states are solid circles. For the free Dirac equation, negative energy states are filled and positive energy states are unoccupied, as shown on the left hand side. A mesonic excitation corresponding to a particle hole pair is shown on the right hand side.}\n\\label{hole}\n\\end{figure}\n\nAt sufficiently high temperature, the chiral condensate might melt. Indeed this occurs\\cite{Karsch:2001cy} .For QCD,\nthe chiral and deconfinement phase transition occur at the same temperature. At a temperature of about $170 - 200~ MeV$, both the linear potential disappears and chiral symmetry is restored. 
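To make the change in the number of degrees of freedom across this transition concrete, one can compare the effective number of degrees of freedom entering $\epsilon/T^4$ in the two phases, counting bosons with weight one and fermions with the usual factor $7/8$. The Python sketch below uses the numbers quoted in these lectures; treating the pions as massless and choosing $N_f = 2$ are simplifying assumptions.
\begin{verbatim}
# Effective degrees of freedom in epsilon/T^4 = (pi^2/30) * g_eff for an ideal gas.
N_C, N_F = 3, 2                   # colors and light quark flavors (N_f = 2 assumed)

g_hadronic = 3                    # three pions, treated as massless
g_qgp = 2 * (N_C**2 - 1) + (7.0 / 8.0) * (4 * N_C * N_F)   # gluons + quarks/antiquarks

print("pion gas     g_eff =", g_hadronic)   # 3
print("quark-gluon  g_eff =", g_qgp)        # 16 + 21 = 37
# The order-of-magnitude jump in g_eff is the jump in epsilon/T^4 seen in the lattice results.
\end{verbatim}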
It is difficult to make a precise statement about the indentification of the chiral and deconfinement phase transitions,\nsince as argued above, for QCD with quarks, there is not a real phase transition associated with deconfinement\\cite{Bazavov:2009zn}-\\cite{Borsanyi:2010cj}.\nAlso, when quarks have finite masses, as they do in nature, chiral symmetry is not an exact symmetry, and there need be no strict phase transition associated with its restoration. Nevertheless, the cross over is quite rapid, and there are rapid changes in the both the potential and the sigma condensate $<\\overline \\psi \\psi >$ at temperatures which are in a narrow range.\n\n\\section{Lecture III: Matter at High Baryon Number Density: Quarkyonic Matter}\n\nI now turn to a discussion of the phase diagram of QCD at finite baryon number density.\n\nIn the large $N_c$ limit of QCD, the nucleon mass is of order $N_c$\\cite{'tHooft:1973jz}-\\cite{Witten:1979kh}. This means that in the confined phase of hadronic matter, for baryon chemical potential $\\mu_B \\le M_N$, the baryon number density is\nessentially zero:\n\\begin{equation}\n \\sim e^{(\\mu_B-M_N)\/T} \\sim e^{-N_c}\n\\end{equation} \nFor temperatures above the de-confinement phase transition the baryon number is non-zero since there the baryon number density is controlled by $e^{-M_q\/T} \\sim 1$, and quark masses are independent of\n$N_c$. For sufficiently large chemical potential the baryon number density can be nonzero also. The Hadronic Matter phase of QCD is characterized in large $N_c$ by zero baryon number density, but at higher density there is a new phase.\n \\begin{figure}\n\\centering\n\\includegraphics[scale=0.30]{phasediagram}\t \n\\caption[]{The revised phase diagram of QCD}\n\\label{phasediagram}\n\\end{figure}\n\nIn the large $N_c$ limit, fermion loops are suppressed by a factor of $1\/N_c$. Therefore the contribution to Debye screening from quarks cannot affect the quark potential until\n\\begin{equation}\n M_{Debye}^2 \\sim \\alpha_{t'Hooft}~ \\mu_{quark}^2\/N_c \\sim \\Lambda^2_{QCD}\n\\end{equation}\nHere the quark chemical potential is $\\mu_B = N_c \\mu_{quark}$. The relationship involving the Debye mass means there is a region parametrically large chemical potential $M_N \\le \\mu_B \\le \\sqrt{N_c}M_N$ where matter is confined, and has finite baryon number. This matter is different than either the Hadronic Matter or the De-Confined Phases. It is called Quarkyonic because it exists at densities parametrically large compared to the QCD scale, where quark degrees of freedom are important,\nbut it is also confined so the degrees of freedom may be thought of also as those of confined \nbaryons\\cite{McLerran:2007qj}-\\cite{Hidaka:2008yy}.\n\nThe width of the transition region between the Hadronic phase and the Quarkyonic phase is estimated\nby requiring that the baryon number density become of order $N_B\/V \\sim k_{Fermi}^3 \\sim \\Lambda_{QCD}^3$. Recall that the baryon chemical potential is $\\mu_B \\sim M_N + k_f^2\/2M_N$ for small $k_F$, so that the width of the transition in $\\mu_B$ is very narrow, of order $1\/N_c$. 
This is $\\delta \\mu_{qaurk} \\sim 1\/N_c^2$ when expressed in terms of $\\mu_{quark}$ which is the finite variable in the large $N_c$ limit.\n \\begin{figure}\n\\centering\n\\includegraphics[scale=0.60]{line_pbm}\t \n\\caption[]{Chemical potentials and temperatures at decoupling.}\n\\label{line}\n\\end{figure}\n\n\nThe transition from Hadronic Matter to that of the Quark Gluon Plasma may be thought of as a change in the number of degrees of freedom of matter. Hadronic Matter at low temperatures has 3 pion degrees of freedom. The quark gluon plasma has of order $2(N_c^2-1)$ degrees of freedom corresponding to gluons and $4 N_c$ degrees of freedom for each light mass quark. The change in degrees of freedom is of order $N_c^2$ in the large $N_c$ limit. At very high baryon number densities, the quarks in the Fermi sea interact at short distances, and although strictly speaking are confined, behave like free quarks. The number of degrees of freedom is therefore of order $N_c$. Each phase has different numbers of degrees of freedom, and is presumably separated from the other by a rapid crossover.\n \\begin{figure}\n\\centering\n\\includegraphics[scale=0.60]{horn}\t \n\\caption[]{Ratios of abundances of various particles .}\n\\label{horn}\n\\end{figure}\nQuarkyonic matter is confined and therefore thermal excitations such as mesons, glueballs, and Fermi surface excitations must be thought of as confined. The quarks in the Fermi sea are effectively weakly interacting since their interactions take place at short distances. So in some sense, the matter is ``de-confined\" quarks in the Fermi sea with confined glueball, mesons and Fermi surface excitations\\cite{Castorina:2010gy}.\n\nIn Hadronic Matter, chiral symmetry is broken and in Deconfined Matter it is broken. In Quarkyonic Matter chiral symmetry is broken by the formation of charge density waves from binding of quark and quark hole excitations near the Fermi surface\\cite{Deryagin:1992rw}. In order that the quark hole have small relative momentum to the quark, the quark hole must have momentum opposite to that of the quark. This means the quark-quark hole excitation has total net momentum, and therefore the finite wavelength of the corresponding bound state leads to a breaking of translational invariance. The chiral condensate turns out to be a chiral spiral where the chiral condensate rotates between different Goldstone boson\nas one moves through the condensate\\cite{Kojo:2009ha}. Such condensation may lead to novel crystalline structures\\cite{tsvelik}.\n\nA figure of the hypothetical phase diagram of QCD is shown in Fig. \\ref{phasediagram} for $N_c = 3$. Also shown is the weak liquid-gas phase transition, and the phase associated with color superconductivity. Although the color superconducting phase cannot coexist with quarkyonic matter in infinite $N_c$, for finite $N_c$ there is such possibility. The lines on this phase diagram might correspond to true phase transitions or rapid cross overs. The confinement-deconfinement transition is known to be a cross over. In the FPP-NJL model\\cite{Fukushima:2003fw}-\\cite{Pisarski:2000eq}, the Hadronic-Quarkyonic transition is first order\\cite{McLerran:2008ua}, but nothing is known from lattice computations. If as we conjecture, there is region where chiral symmetry is broken by translationally non-invariant modes, then this region must be surrounded by a line of phase transitions. 
I call this region Happy Island becuase it is an island of matter in the $\\mu_B-T$ plane.\n\nA remarkable feature of this plot is the triple point where the Hadronic Matter, Deconfined Matter and\nQuarkyonic Matter all meet\\cite{Andronic:2009gj}. This triple point is reminiscent of the triple point for the liquid, gas and vapor phases of water. \n\nSince we expect a rapid change in the number of degrees of freedom across the transitions between\nthese forms of matter, an expanding system crossing such a transition will undergo much dilution would undergo much dilution at a fixed value of temperature or baryon chemical potential\\cite{BraunMunzinger:1994xr}-\\cite{Heinz:1999kb}. One might expect in heavy ions to see decoupling of particle number changing processes at this transition, and the abundances of produced particles will be characteristic of the transition. In Fig. \\ref{line}\n\nIn the Fig. \\ref{line}, the expectations for the confinement-deconfinement transition are shown with the dotted red line. It is roughly constant with the baryon chemical potential, and the constant value of temperature is taken from lattice estimates. The dark dashed curve represents $\\mu_B -T = cons \\times M_N$, corresponding to a simple model for the Quarrkyonic transition. Such a very simple description does remarkably well. \n\nA triple point is suggested at a baryon chemical potential near 400 MeV, and temperature near 160 MeV. This corresponds to a center of mass energy for Pb-Pb collisions of 9-10 GeV. This is near where there are anomalies in the abundances of rations of particles\\cite{Gazdzicki:1998vd}, as shown in Fig. \\ref{horn}.\nShown are fits using statistical models of abundances of particles using chemical potentials and temperature extracted from experimental data. The sharp peak reflects the change in behavior as one proceeds along the dashed line of Fig. \\ref{line} corresponding to the Quarkyonic transition and joins to the dotted red line \nof the deconfinement transition\n\nIt is remarkable that the value of beam energy where this occurs corresponds to the hypothetical triple point of Fig. \\ref{line}, and that this is the density where the energy density stored in baryons becomes equal to that stored in mesons, Fig. \\ref{baryons},\n\\begin{figure}\n\\centering\n\\includegraphics[scale=0.60]{baryons}\t \n\\caption[]{Energy density stored in baryons compared to that stored in mesons.}\n\\label{baryons}\n\\end{figure}\n\n\n\n\n\n\n\n\n\\section*{Acknowledgments}\n\nI gratefully acknowledge the organizers of the 50'th Crakow School of Theoretical Physics, in particular,\nMichal Praszalowicz, for making this wonderful and extraordinary meeting.\nThe research of L. McLerran is supported under DOE Contract No. DE-AC02-98CH10886.\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} diff --git a/data_all_eng_slimpj/shuffled/split2/finalzzahhz b/data_all_eng_slimpj/shuffled/split2/finalzzahhz new file mode 100644 index 0000000000000000000000000000000000000000..6524389ada1b47f9299c55d4b6cc937041b54fac --- /dev/null +++ b/data_all_eng_slimpj/shuffled/split2/finalzzahhz @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}\n\\label{sec:introduction}\n\nHigh-energy neutrino astronomy has finally begun. 
\nThe detection of PeV-energy neutrinos~\\cite{icecubePeV2013} and the follow-up\nanalyses~\\cite{icecubeHESE2013} by the IceCube Collaboration revealed the existence\nof astrophysical ``on-source'' neutrinos at energies ranging from TeV to PeV.\nThese neutrinos are expected to be produced by the interactions\nof ultrahigh-energy cosmic-ray (UHECR) protons via $pp$ collisions\nor $\\gamma p$ collisions. The bulk intensity of these neutrinos,\n$E_\\nu^2 \\phi_{\\nu_e+\\nu_\\mu+\\nu_\\tau}\\simeq \n3.6\\times 10^{-8} {\\rm GeV} {\\rm cm^{-2}} \\sec^{-1} {\\rm sr^{-1}}$,\nprovides an important clue to understanding the general characteristics\nof UHECR sources through the connection between the observed cosmic-ray\nand neutrino intensities. \n\nIn the even higher-energy region from EeV to 100 EeV (EeV $=10^9 {\\rm GeV}$),\nthe highest-energy cosmic-ray (HECR) protons generate EeV-energy neutrinos\nvia interactions with cosmic microwave background (CMB) photons~\\cite{GZK} and extragalactic background light (EBL)\nduring their propagation in intergalactic space. The intensity of these\n``GZK cosmogenic'' neutrinos~\\cite{BZ} averaged over the sky is a consequence\nof the integral of the HECR emission over cosmic time, as neutrinos are\nstrongly penetrating particles that can travel cosmological\ndistances. It is, therefore, an observational probe to trace\nthe HECR source evolution. In particular, the cosmogenic neutrino intensity\nfrom 100 PeV to 10 EeV is highly sensitive to the evolution of the HECR emission rate\nand less dependent on other uncertain factors such as the highest energy\nof accelerated cosmic rays at their sources. As this energy range coincides with\nthe central region covered by the IceCube ultrahigh-energy neutrino searches,\nthe flux sensitivity achieved by IceCube has started to constrain a sizable\nparameter space of HECR source evolution, revealing the general\ncharacteristics of UHECR and HECR sources independent of the cosmic-ray acceleration model.\n\nIn this article, we review the new knowledge of UHECR\/HECR sources\nprovided by neutrino observations by IceCube. \nThe standard $\\Lambda$CDM cosmology with $H_0 = 73.5$ km s$^{-1}$ Mpc$^{-1}$, $\\Omega_{\\rm M} = 0.3$, \nand $\\Omega_{\\Lambda}=0.7$ is assumed throughout this article.\n\n\\section{The cosmic neutrino spectrum: Overview}\n\\label{sec:overview}\n\n\\begin{figure}\n\\includegraphics[width=0.45\\textwidth]{.\/NeutrinoFluxOverviewWzCaption.pdf}\n\\caption{Illustrative display of the differential fluxes of diffuse neutrinos having various origins. \nSupernova relic neutrinos are expected to appear in the MeV sky. Atmospheric\nneutrinos originating in extensive cosmic-ray airshowers dominate the background\nfor high-energy cosmic neutrino detection. Astrophysical neutrinos produced\n{\\it in situ} at cosmic-ray sources emerge at TeV and PeV energies, which have now been\ndetected by IceCube. GZK cosmogenic neutrinos, whose flux has yet to be measured, fill the highest-energy universe. Shown here is the possible range of fluxes\nfrom various source evolution models. The proton-dominated HECR composition is assumed.}\n\\label{fig:neut_fluxes}\n\\end{figure}\n\nFigure~\\ref{fig:neut_fluxes} displays the spectrum of neutrinos coming from\nall over the sky, {\\it i.e.}, diffuse neutrino fluxes. The massive background\nof atmospheric neutrinos ranging in energy over many orders of magnitude\nhad masked the astrophysical neutrinos until the IceCube observatory finally\nrevealed their existence. 
The spectrum shown here, taken from a model~\\cite{YoshidaTakami2014},\nhas a cutoff feature at $\\sim{\\rm PeV}$, but it is not clear yet whether the IceCube neutrino\nfluxes have a spectral cutoff. It is not even obvious that the spectrum can be well\ndescribed by a single power law formula. An analysis with enhanced sensitivity\nin the 10 TeV region seems to exhibit a trend toward a softer spectrum~\\cite{MESE} than\nthe up-going diffuse muon neutrino analysis, which is sensitive\nat energies above $\\sim 100\\ {\\rm TeV}$~\\cite{diffuse_nu}.\nOn-going efforts in the IceCube Collaboration will ultimately resolve these \nissues. Nevertheless, the intensity, \n$E_\\nu^2 \\phi_{\\nu_e+\\nu_\\mu+\\nu_\\tau}\\simeq \n3.6\\times 10^{-8} {\\rm GeV} {\\rm cm^{-2}} \\sec^{-1} {\\rm sr^{-1}}$, \nhas been well determined within a factor of two, and the implications\nfor the origin of UHECRs based on the intensity are not affected by these\ndetails of the spectral structure. If the neutrino emitters are also\nsources of the cosmic rays we are observing (which is very likely but not an undeniable assumption),\nwe can associate the neutrino flux with their parent cosmic-ray proton flux, \nand its comparison to the {\\it observed} cosmic-ray spectrum places\nsome constraints on the source characteristics.\n\nThe GZK cosmogenic neutrinos are expected to emerge in the 100 PeV--EeV sky.\nTheir intensity at the highest-energy end ($\\sim50-100\\ {\\rm EeV}$) depends mainly on\nthe maximal accelerated energy of cosmic rays at their sources\nand is not relevant to the ultrahigh-energy neutrino search by IceCube,\nas it is most sensitive at energies below 10 EeV.\nThe intensity at the lowest-energy tail ($\\sim10-100\\ {\\rm PeV}$)\nis determined by the EBL density\nand its evolution, which is EBL-model-dependent and could vary\nthe flux by a factor of $\\sim5$ at $\\sim10\\ {\\rm PeV}$.\nThe EeV-energy intensity is decided primarily by the HECR source evolution\nin redshift space.\nThe integral intensity of cosmogenic neutrinos above 100 PeV ranges from $10^{-17}$ to \n$\\sim3\\times 10^{-16} {\\rm cm^{-2}} \\sec^{-1} {\\rm sr^{-1}}$\ndepending on these factors. The IceCube detection exposure\nfor UHE neutrinos has now reached $\\sim3\\times 10^{16} {\\rm cm^{2}} \\sec {\\rm sr}$,\nand one can see that the IceCube sensitivity enables access to\na significant parameter space of the cosmogenic neutrino production models.\n\n\\section{The constraints on PeV- and EeV-energy UHECR sources}\n\\label{sec:PeV-EeV}\n\nThe flux of astrophysical neutrinos produced by UHECR protons at\ntheir sources is related to their parent cosmic-ray intensity\nvia the proton-to-neutrino conversion efficiency. 
The efficiency is usually\nparameterized in the form of the optical depth of the proton interactions.\nFor neutrinos produced through photomeson production ($\\gamma p$),\ntheir diffuse flux integrating emitted neutrinos over all the sources\nof UHECR protons with a spectrum in the source\nframe, $\\sim \\kappa_{\\rm CR}(E_{\\rm CR}\/E_0)^{-\\alpha}$ (where $E_0$ is the reference energy,\nwhich is conveniently set to $\\sim10$ PeV), is described as~\\cite{YoshidaTakami2014}\n\\begin{widetext}\n\\begin{eqnarray}\n\\phi_{\\nu_e + \\nu_\\mu + \\nu_\\tau}(E_\\nu) &\\simeq& \n\\frac{2 n_0 \\kappa_{\\rm CR}}{\\alpha^2} \\frac{c}{H_0} \n\\frac{s_{\\rm R}}{\\sqrt{(s_{\\rm R} + m_{\\pi}^2 - m_p^2)^2 - 4 s_{\\rm R} m_{\\pi}^2}} \n\\frac{3}{1 - r_{\\pi}} \n\\frac{(1 - e^{-\\tau_0})}{2 (m - \\alpha) - 1} \\Omega_{\\rm M}^{- \\frac{m - \\alpha + 1}{3}} \\nonumber \\\\\n&& ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ \\times \n\\left[ \\left\\{ \\Omega_{\\rm M} (1 + z_{\\rm max})^3 + \\Omega_{\\Lambda} \\right\\}^{\\frac{m - \\alpha}{3} - \\frac{1}{6}} - 1 \\right] \n\\left( \\frac{E_{\\nu}}{E_0 x_{\\rm R}^+ (1 - r_{\\pi})} \\right)^{-\\alpha}. \n\\label{eq:onsource_simple}\n\\end{eqnarray}\n\\end{widetext}\nHere $n_0$ is the source number density at the present epoch,\nand the source evolution is parameterized as $\\psi(z)=(1+z)^m$\nextending to the maximal redshift $z_{\\rm max}$\nsuch that the parameter $m$ represents the scale of the cosmological\nevolution often used in the literature. Further, $s_R\\ (\\simeq 1.5 {\\rm GeV^2})$ \nis the squared collision energy at the $\\Delta$ resonance of photopion production,\n$r_{\\pi} \\equiv m_{\\mu}^2 \/ m_{\\pi}^2 \\simeq 0.57$ is the muon-to-pion mass-squared ratio,\n$m_p$ is the proton mass, and $x_{\\rm R}^+\\ (\\simeq 0.36)$ \nis the kinematically maximal bound \nof the relative energy of emitted pions normalized by the parent cosmic-ray energy.\n$\\tau_0$, the optical depth of $\\gamma p$ interactions at the reference energy $E_0$,\nlinks the neutrino flux to the parent cosmic-ray intensity determined by $\\kappa_{\\rm CR}$.\n\nThe cosmic-ray flux integrating a UHECR spectrum over all\nthe sources in the redshift space, which corresponds to the UHECR spectrum\nwe {\\it observe}, is given by\n\\begin{eqnarray}\n\\phi_{\\rm CR}(E_{\\rm CR}) &=& \nn_0 c \\kappa_{\\rm CR} \\int_0^{z_{\\rm max}} dz \\nonumber \\\\\n&& (1 + z)^{1 - \\alpha} \\psi(z) \\left| \\frac{dt}{dz} \\right| \ne^{- \\tau_0} \n\\left( \\frac{E_{\\rm CR}}{E_0} \\right)^{-\\alpha},\\nonumber\\\\\n&&\n\\end{eqnarray}\nneglecting intergalactic magnetic fields and the energy loss\nin the CMB field during UHECR propagation.\nIntroducing some analytical approximations leads to the following simple formula:\n\\begin{eqnarray}\n\\phi_{\\rm CR}(E_{\\rm CR}) &\\simeq& \n2n_0 \\kappa_{\\rm CR} \\frac{H_0}{c} e^{-\\tau_0}\\left( \\frac{E_{\\rm CR}}{E_0} \\right)^{-\\alpha}\\nonumber\\\\\n&& \\frac{1}{2(m-\\alpha)-1} \\Omega_{\\rm M}^{- \\frac{m-\\alpha+1}{3}}\\nonumber\\\\\n&&\\left[ \\left\\{ \\Omega_{\\rm M} (1 + z_{\\rm max})^3 + \\Omega_{\\Lambda} \\right\\}^{\\frac{m-\\alpha}{3} - \\frac{1}{6}} - 1 \\right].\\nonumber\\\\\n&& \n\\label{eq:UHECR_flux_approx}\n\\end{eqnarray}\n\nComparing this formula to Equation~(\\ref{eq:onsource_simple}),\none can find that the source evolution effect represented by the evolution\nparameter $m$ is canceled in the ratio of the neutrino flux\nto the parent UHECR flux. 
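To make this cancellation explicit, the $m$-dependent factor shared by Equation~(\\ref{eq:onsource_simple}) and Equation~(\\ref{eq:UHECR_flux_approx}) can be checked numerically. The short Python sketch below uses the cosmological parameters quoted in Sect.~\\ref{sec:introduction}; the values of $\\alpha$, $z_{\\rm max}$, and $\\tau_0$ are purely illustrative and are not fitted quantities.
\\begin{verbatim}
import math

# Illustrative parameters (not fits): spectral index, maximal source
# redshift, and gamma-p optical depth at the reference energy E_0.
OMEGA_M, OMEGA_L = 0.3, 0.7
ALPHA, Z_MAX, TAU0 = 2.5, 4.0, 0.1

def evolution_factor(m):
    # m-dependent factor common to the on-source neutrino flux and the
    # approximated UHECR flux formulas given above.
    p = (m - ALPHA) / 3.0 - 1.0 / 6.0
    bracket = (OMEGA_M * (1.0 + Z_MAX)**3 + OMEGA_L)**p - 1.0
    return OMEGA_M**(-(m - ALPHA + 1.0) / 3.0) * bracket / (2.0 * (m - ALPHA) - 1.0)

for m in (3.5, 4.0, 4.5):
    f = evolution_factor(m)
    nu_part = (1.0 - math.exp(-TAU0)) * f   # m- and tau_0-dependent part of phi_nu
    cr_part = math.exp(-TAU0) * f            # m- and tau_0-dependent part of phi_CR
    print(m, nu_part / cr_part)              # the same value for every m
\\end{verbatim}
The ratio printed in the last line depends only on $\\tau_0$, through $(1-e^{-\\tau_0})/e^{-\\tau_0}$, and on $m$-independent constants, which is the point made above.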
The cancellation occurs because both the secondary neutrinos\nand the emitted UHECRs originate in the same sources with the same evolution history.\nConsequently, the optical depth $\\tau_0$ is a deciding factor\nin the relation between these two fluxes. The TeV--PeV neutrino observation\nby IceCube that determines the neutrino flux $\\phi_\\nu$ thus\nassociates the UHECR source optical depth with the UHECR flux.\n\n\\begin{figure}[tb]\n \\includegraphics[width=0.4\\textwidth]{.\/CRFluxWzDumpOpticalDepthContourHESE_SFR.pdf}\n \\caption{Constraints on the optical depth of UHECR sources \nfor PeV-energy neutrino production and the energy flux of \nextragalactic UHECRs, $E_{\\rm CR}^2\\phi_{\\rm CR}$, at an energy of 10 PeV~\\cite{YoshidaTakami2014}. \nThe regions between the two blue solid curves ($\\alpha=2.5$), \ngreen dashed curves ($\\alpha=2.7$), and light blue dot-dashed curves ($\\alpha=2.3$) \nare allowed by the present IceCube observations~\\cite{icecubePeV2013,icecubeHESE2013}.\nThe unshaded region highlights the allowed region for $\\alpha=2.5$ \ntaking into account the observed intensity of UHECRs measured by the IceTop experiment~\\cite{icetop2013}.}\n\\label{fig:constraints_10PeV} \n\\end{figure}\n\n\\begin{figure}\n \\includegraphics[width=0.4\\textwidth]{.\/CRFluxWzDumpOpticalDepthContourHESE1EeV_SFR.pdf}\n \\caption{Same as Figure~\\ref{fig:constraints_10PeV}, but for constraints on the cosmic-ray flux\n at an energy of 1 EeV, i.e., for the case in which the UHECR spectrum from the PeV neutrino\n sources extends to higher energies.}\n\\label{fig:constraints_1EeV} \n\\end{figure}\n\nFigure~\\ref{fig:constraints_10PeV}\ndisplays the relations between the optical depth and the UHECR flux for several values of $\\alpha$,\nall of which are consistent with the IceCube observation\nat the present statistics~\\cite{icecubeHESE2013}. Star-formation-like source evolution\nis assumed, but other assumptions regarding the evolution \nwould not change the main results, as explained above.\nA smaller optical depth, implying a lower neutrino production efficiency, would require\nmore UHECR protons to be compatible with the neutrino intensity measured by IceCube.\nThe optical depth of the UHECR sources must be larger than 0.01,\nas the parent UHECR flux would exceed the observed cosmic-ray flux otherwise.\nThe proton flux from the neutrino sources therefore contributes at least a few percent\nof all the UHECRs in the 10-PeV energy range. The magnetic horizon effect\nwould not change these constraints unless the sources are very rare, for example, \nif their number density is much smaller than $\\sim10^{-6} {\\rm Mpc}^{-3}$~\\cite{YoshidaTakami2014}.\nNote that the lower bound of the source density set by the small-scale UHECR anisotropy study\nconducted by the Auger observatory~\\cite{auger_density} is $6\\times 10^{-6} {\\rm Mpc}^{-3}$.\n\nGamma-ray bursters (GRBs) are strong candidates for UHECR acceleration sites\nand therefore high-energy neutrino production sites. \nInternal shocks are the most popular sites to produce\nhigh-energy neutrinos. 
An optical depth of $0.1-10^{-2}$ can be achieved,\ndepending on the dissipation radius, which satisfies the optical depth condition\nshown in Figure~\\ref{fig:constraints_10PeV}.\nHowever, their energetics may be problematic.\nThe typical gamma-ray energy output of a\nregular GRB is $10^{52}$ erg in gamma rays, and the local\noccurrence rate of long-duration GRBs is $\\sim1 {\\rm Gpc^{-3}}\\ {\\rm yr^{-1}}$.\nThese data indicate that the luminosity of local cosmic rays generated from GRBs is\n$10^{44}(\\eta_p\/10) {\\rm erg}\\ {\\rm Mpc^{-3}}\\ {\\rm yr^{-1}}$,\nwhere $\\eta_p$ is the ratio of\nthe UHECR output and $\\gamma$-ray output, which is known as the\nbaryon loading factor. This luminosity is 2 orders\nof magnitude smaller than that of UHECRs at 10 PeV and\nthus is too low to meet the requirement shown in Figure~\\ref{fig:constraints_10PeV}\nthat the UHECR flux from the sources must account for $\\geq0.1$ of the total UHECR flux.\nWe conclude that \nGRBs are unlikely to be major sources of both PeV-energy UHECRs and neutrinos.\n\nAmong known astronomical objects,\nonly flat-spectrum radio quasars (FSRQs)\ncan realize a large $\\gamma p$ optical depth ($\\geq0.01$)\nand large energetics. The typical $\\gamma$-ray luminosity density\nof FSRQs is $\\sim10^{46} {\\rm erg}\\ {\\rm Mpc^{-3}}\\ {\\rm yr^{-1}}$\nin our local universe. This is comparable to the local density\nof UHECRs at $\\sim$10 PeV.\n\nFigure~\\ref{fig:constraints_1EeV} shows the constraints on the $\\gamma p$ optical depth\nand UHECR flux when the energy spectrum of UHECR protons emitted from the neutrino sources\nextends to much higher energies than the observed neutrino energies.\nThe allowed regions in the parameter space become much smaller than those\nfor the constraints for $E_0 = 10$ PeV because the spectrum of the observed cosmic rays\nis steeper than the observed neutrino spectrum.\nNote that the optical depth constrained here is not at the EeV level, but at the PeV level,\nbecause PeV-energy protons are responsible for the neutrinos detected by IceCube.\nThe constraints suggest that the optical depth of protons for PeV-energy\nneutrinos is rather high, $\\tau_0\\geq 0.2$, and also that a major\nfraction of UHECRs in the EeV region is extragalactic\nprotons. This supports the ``dip'' transition model~\\cite{dip_model} of UHECR protons, \nwhere the ankle structure of the cosmic-ray spectrum, which\nappears at 3 to 10 EeV, is caused by the energy loss of\nextragalactic UHECR protons by Bethe--Heitler pair production with\nCMB photons. This model predicts a high GZK cosmogenic neutrino flux \nat 10--100 PeV.\nHowever, as we see in the next section,\nthe null detection of cosmogenic neutrino candidates in the IceCube seven year dataset\nexcludes the dip transition scenario if HECRs are proton-dominated.\nThis suggests that the source of neutrinos\nseen by IceCube is not the main source of cosmic rays at EeV energies or higher.\nNo known class of astronomical objects can \nmeet the stringent requirement of the $\\gamma p$ optical depth, $\\tau_0\\geq 0.1$,\nand the UHECR energetics. Only bright FSRQs such as those with \n$L_\\gamma\\sim 10^{50} {\\rm erg}\\ \\sec^{-1}$ {\\it could} realize\n$\\tau_0\\sim 0.1$, but such FSRQs are too rare to energetically reproduce\nthe observed UHECR flux at 1 EeV.\n\nAccording to these considerations, none of the known extragalactic objects\ncan be found to function as an origin of both PeV-energy\nneutrinos and HECRs. 
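For reference, the GRB energy-budget figure quoted above follows from a one-line estimate. Writing $E_\\gamma \\sim 10^{52}$ erg for the gamma-ray output per burst, $\\rho_{\\rm GRB} \\sim 1\\ {\\rm Gpc^{-3}\\ yr^{-1}} = 10^{-9}\\ {\\rm Mpc^{-3}\\ yr^{-1}}$ for the local long-GRB rate, and $\\eta_p$ for the baryon loading factor (the symbols $E_\\gamma$ and $\\rho_{\\rm GRB}$ are introduced here only for this estimate),
\\begin{equation}
\\dot{\\epsilon}_{\\rm CR} \\sim \\eta_p\\, E_\\gamma\\, \\rho_{\\rm GRB} \\sim 10^{43}\\, \\eta_p\\ {\\rm erg}\\ {\\rm Mpc^{-3}}\\ {\\rm yr^{-1}} = 10^{44} \\left(\\eta_p/10\\right) {\\rm erg}\\ {\\rm Mpc^{-3}}\\ {\\rm yr^{-1}},
\\end{equation}
which is the cosmic-ray luminosity density compared above with that of UHECRs at 10 PeV.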
\n\n\\section{The constraints on the highest-energy cosmic-ray sources}\n\\label{sec:EeV}\n\n\\begin{figure*}\n \\includegraphics[width=0.4\\textwidth]{.\/GZK2DHistogram.pdf}\n \\includegraphics[width=0.4\\textwidth]{.\/AstrophysE2_58_2DHistogram.pdf}\n \\caption{Expected event distributions on the plane of the energy proxy\nand cosine of the reconstructed zenith angle seen by IceCube. \nSimulations of the GZK cosmogenic model~\\cite{Ahlers2010}\n(left panel) and of an astrophysical neutrino model with a spectrum following $E_\\nu^{-2.5}$\nand the intensity measured by IceCube~\\cite{IceCubeHESE2014} (right panel) are shown. The $z$ axis displays the number of events seen by the IceCube extremely high-energy analysis\nbased on the seven-year data.}\n\\label{fig:energy_cosZenith}\n\\end{figure*}\n\nThe analysis of seven years of IceCube data obtained in the search for ultrahigh-energy\nneutrinos (energies larger than 10 PeV) has been reported~\\cite{EHE2016}.\nThe analysis was optimized in particular for neutrinos with energies above 100 PeV.\nThe exposure has reached $\\sim10^{17}{\\rm cm^2} \\sec\\ {\\rm sr}$ at 1 EeV,\nwhich makes it possible to probe an important region of the parameter space \nin the GZK cosmogenic neutrino models. \nTwo events with estimated deposited energies of\n2.6 and 0.77 PeV, respectively, were identified in this analysis, but no events were found\nin the higher-energy region. This observation presents a serious challenge\nto the standard baseline candidates of HECR sources discussed in the literature.\n\nAn IceCube simulation was used\nto predict the number of events IceCube would detect on the plane of the reconstructed\nenergy and zenith angle for each model of ultrahigh-energy neutrinos (including GZK cosmogenic neutrinos). \nFigure~\\ref{fig:energy_cosZenith} shows examples\nfrom the models. The resolution of the reconstructed deposited energies of energetic\nevents is limited owing to the stochastic nature of the muon energy loss profile\nat PeV energies. Furthermore, IceCube's ability to associate the estimated energy deposit\nof an event with its parent neutrino energy is rather limited because only a small fraction\nof the neutrino energy is converted to the visible form ({\\it i.e.}, the deposited muon energy)\nby the IceCube detectors. Nevertheless, Figure~\\ref{fig:energy_cosZenith} exhibits\nclear differences between the two models. The GZK model yields an event distribution\nwith an energy peak higher than the softer astrophysical neutrino models.\nThe events from the GZK model are also distributed more sharply around the horizontal\ndirection, {\\it i.e.}, $\\cos({\\rm zenith})=0$. This is because neutrinos with\nenergies at the EeV level experience strong absorption effects \nas they propagate through the Earth. These features, as well as\nthe total event rate (which is equivalent to the normalization of the event distributions\nshown in Figure~\\ref{fig:energy_cosZenith}), make it possible to determine which models\nare compatible with the observation. A binned Poisson log-likelihood ratio\ntest was performed. The simulated event distributions \non the energy--zenith angle plane, such as those shown in Figure~\\ref{fig:energy_cosZenith},\ngive the expected number of events in each bin of the energy proxy and\ncosine of the zenith, which were used to construct the binned Poisson likelihood. 
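A minimal schematic of such a binned Poisson log-likelihood-ratio construction is sketched below; the expected-count maps used here are hypothetical placeholders rather than the actual IceCube simulation output.
\\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
n_bins = 20                                # flattened (energy proxy, cos zenith) bins
mu_gzk = rng.uniform(0.01, 0.4, n_bins)    # toy expectation under the GZK model
mu_astro = rng.uniform(0.01, 0.4, n_bins)  # toy expectation under a softer E^-2.5 flux

def log_poisson_like(n_obs, mu):
    # Binned Poisson log-likelihood; constant ln(n!) terms are dropped.
    return np.sum(n_obs * np.log(mu) - mu)

def test_statistic(n_obs):
    # Log-likelihood ratio between the two hypotheses.
    return 2.0 * (log_poisson_like(n_obs, mu_astro) - log_poisson_like(n_obs, mu_gzk))

n_obs = rng.poisson(mu_astro)              # a toy "observation"
ts_obs = test_statistic(n_obs)
# Pseudo-experiments generated under the GZK hypothesis give the distribution
# of the test statistic, from which a p-value follows.
ts_pseudo = np.array([test_statistic(rng.poisson(mu_gzk)) for _ in range(10000)])
p_value = np.mean(ts_pseudo >= ts_obs)
print(p_value)
\\end{verbatim}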
A log-likelihood ratio was\nused as a test statistic, and an ensemble of pseudo-experiments to derive\nthe test statistical distribution was used to calculate the p-values.\n\n\\begin{figure}\n \\includegraphics[width=0.4\\textwidth]{.\/GZKFluxConstraints.pdf}\n \\caption{Energy spectra of various cosmogenic neutrino\n models~\\cite{Ahlers2010, Kotera2010,Aloisio2015, Yoshida1993}. The spectra shown\nby red (brown) curves are rejected (disfavored), with p-values less than 10\\% (32\\%).\nAll these models assume a proton-dominated HECR composition.}\n\\label{fig:gzk_flux}\n\\end{figure}\n\nThe hypothesis that the two observed {\\it PeV-ish} events are of GZK cosmogenic origin\nis rejected, with a p-value of 0.3\\% (which implies that it is incompatible \nwith the event distribution shown in the left panel of Figure~\\ref{fig:energy_cosZenith})\nbut is consistent with a generic astrophysical power-law flux such\nas $E_\\nu^{-2}$ or $E_\\nu^{-2.5}$~\\cite{EHE2016}. \nThis result makes it possible to set an upper limit\non the ultrahigh-energy neutrino flux extending above 10 PeV and thus to also test\nthe GZK cosmogenic neutrino models using the binned Poisson log-likelihood ratio method.\n\nThe various cosmogenic neutrino energy spectra are displayed in\nFigure~\\ref{fig:gzk_flux}. Many of them are rejected or disfavored by the IceCube\nobservation. Regardless of where the HECR sources are\nand how they accelerate cosmic rays, the emitted HECR protons {\\it must} produce\nsecondary neutrinos by the GZK mechanism as they travel through space.\nIn this sense, any consequences of these bounds on GZK neutrinos\nare considered as robust and model-independent arguments.\n\nWe summarize the findings below.\n\n\\begin{itemize}\n\\item Cosmogenic models with the maximal flux allowed\n by the Fermi-LAT measurement~\\cite{FermiDiffuse} of \n the diffuse extragalactic $\\gamma-$ray background are rejected.\n This finding implies that the present limits imposed by the neutrino observation are\n at least as stringent as those imposed by the $\\gamma-$ray observation.\n\n\\item HECR source evolution comparable to the star formation rate (SFR) is beginning to\nbe constrained. Sources evolving more strongly than the SFR, such as FSRQs and GRBs, are unlikely\nto be HECR sources; otherwise, IceCube would have detected cosmogenic-neutrino-induced events already.\n\n\\item Any GZK cosmogenic type of energy spectrum must have an intensity below\n$E_\\nu^2\\phi_{\\nu_e + \\nu_\\mu + \\nu_\\tau}(E_\\nu) = 3\\times 10^{-9} \n{\\rm GeV}\\ {\\rm cm}^{-2}\\ \\sec^{-1}\\ {\\rm sr}^{-1}$ at 100 PeV. This limit rejects the dip transition\nmodel of UHECRs.\n\n\\end{itemize}\n\n\\begin{figure}\n \\includegraphics[width=0.45\\textwidth]{.\/Plot_2DConstraint_m_zmax_0909.pdf}\n \\caption{Constraints on HECR source evolution parameters. The emission rate per co-moving volume\n is parameterized as $\\psi(z)=(1+z)^m$ with redshifts up to $Z_{\\rm max}$. GZK cosmogenic neutrino\n fluxes for various $m$ and $Z_{\\rm max}$ values are calculated by the approximated analytical\n formulation~\\cite{YoshidaIshihara2012} and used for the likelihood calculation to derive\n the confidence levels. 
The boxes indicate approximate parameter regions for the SFR~\\cite{Beacom} and\n FR-II-A~\\cite{FR2-A} and -B~\\cite{FR2-B} radio galaxies.}\n\\label{fig:evolution}\n\\end{figure}\n\nMore generic constraints obtained by the IceCube Collaboration~\\cite{EHE2016} by scanning the parameter space for the\nsource evolution function, $\\psi(z)=(1+z)^m$,\nextending to the maximal redshift $Z_{\\rm max}$ \nare shown in Figure~\\ref{fig:evolution}.\nThe parameterized analytical formula for the cosmogenic fluxes~\\cite{YoshidaIshihara2012} is used here.\nBecause only the CMB is assumed as the target photon field in the parameterization,\nthe limits are systematically weaker than those on the models that include EBLs.\nApproximate regions for the SFR and the evolution\nof Fanaroff--Riley type II (FR-II) galaxies are also shown for comparison.\nNote that neutrinos yielded at redshifts larger than 2\nrepresent only a minor portion of the total cosmogenic fluxes owing to redshift\ndilution~\\cite{Kotera2010, YoshidaIshihara2012}.\nThis is especially true for the cosmogenic neutrino component created\nby interactions with the CMB (not the EBL). Considering this fact,\ntogether with the estimation that\nthe luminosity function of FR-II-type AGNs or FSRQs falls off rapidly at redshifts beyond\n$z\\simeq 2$, and that the evolution of the SFR becomes more or less constant or falls off\nat redshifts beyond $z\\sim2.5$, the boxes representing SFR and FR-II evolution\nin Figure~\\ref{fig:evolution} approximate well their representation by\nthe generic evolution function $\\psi(z)=(1+z)^m$ used in the plot.\nOne can find that HECR source evolution stronger than the SFR is unlikely.\n\n\\begin{figure}\n \\includegraphics[width=0.45\\textwidth]{.\/AstroMuraseAGNConstraints.pdf}\n \\caption{Constraints on the fluxes of astrophysical neutrinos produced\nin the inner jets of radio-loud AGNs~\\cite{murase2014}. Two bounds for\nthe UHECR spectral index, $\\alpha=2.0$ (thin) and $2.3$ (thick),\nare shown.}\n\\label{fig:agn_nu}\n\\end{figure}\n\nAll the constraints on the HECR origins described so far \nrely on one critical assumption, that HECRs are proton-dominated.\nIf HECRs are of mixed- or heavy-nuclei composition, the resultant GZK cosmogenic\nflux is lower than the proton UHECR case by more than an order of magnitude,\nand the present IceCube detection sensitivity cannot reach this low intensity.\nIt is expected, however, that neutrinos with energies from the PeV level to the EeV level and beyond\nmay be produced {\\it in situ} at the HECR acceleration site. \nThe AGN neutrino models are good examples. 
A recent theoretical study\nof ultrahigh-energy neutrino generation in the inner jets \nof radio-loud AGNs~\\cite{murase2014} found that, \ntaking into account the blazar sequence,\nFSRQs can emit PeV--EeV neutrinos, and BL Lac objects\ncan be HECR (heavy) {\\it nuclei} sources~\\cite{murase2012}.\nThe predicted PeV--EeV neutrino intensity is proportional\nto the baryon loading factor, that is, the ratio of the UHECR luminosity\nto the electromagnetic radiation luminosity $L_{\\rm CR}\/L_\\gamma$.\nThe null detection of 100 PeV--EeV neutrinos by IceCube thus\nbounds this factor.\n\nFigure~\\ref{fig:agn_nu} shows the present bound\non the fluxes of neutrinos from radio-loud AGNs\nby IceCube~\\cite{EHE2016}.\nThe observed HECR generation rate, $\\sim$10 EeV \n($10^{44}\\ {\\rm erg}\\ {\\rm Mpc}^{-3}\\ {\\rm yr}^{-1}$), requires loading factors\nof around 3 and 100 for UHECR spectral indices of $\\alpha = 2$ and 2.3, respectively.\nThe present constraints are comparable to or slightly below the values\nrequired for radio-loud AGN inner jets to be responsible for\nthe majority of UHECRs\/HECRs. \nThe neutrino observation has started to exclude a sizable parameter\nspace in the models of AGNs as an origin of HECRs even if\nHECRs are composed of heavy nuclei, although this is a model-dependent\nargument.\n\nFast-spinning newborn pulsars are also proposed as candidate sources\nof HECRs~\\cite{olinto1997}. This proposal predicts a heavy-nuclei-dominated composition\nat the highest energies and thus would yield GZK neutrinos too rare\nto be detected. However, in this model, the accelerated particles traveling through\nthe expanding supernova ejecta surrounding the star\nproduce neutrinos with energies of 100 PeV to $\\sim$EeV. The predicted\ndiffuse neutrino flux from fast pulsars is \n$E_\\nu^2 \\phi_{\\nu_e+\\nu_\\mu+\\nu_\\tau}\\simeq \n1.1\\times 10^{-8} {\\rm GeV} {\\rm cm^{-2}} \\sec^{-1} {\\rm sr^{-1}}$,\ndepending on the source emission evolution, and\nis accessible at the IceCube detection sensitivity~\\cite{fang2014}.\n\nA binned Poisson log-likelihood test of this model was performed by\nthe IceCube Collaboration~\\cite{EHE2016}. The model is rejected\nif the evolution of the source emission history traces the standard SFR, although\nit is not ruled out if the emission rate evolves more slowly than the SFR.\n\n\n\\section{Conclusion}\n\\label{sec:conclusion}\n\nThe detection of TeV--PeV neutrinos and the null detection\nof EeV neutrinos by IceCube have yielded many insights on the origin of UHECRs, which cannot be probed by other cosmic messengers.\nThe most popular candidates for UHECR\/HECR sources, GRBs and radio-loud AGNs,\nhave now faced serious challenges from recent results in neutrino astronomy.\nGRBs and AGNs can still contribute to the observed bulk\nof the highest-energy cosmic rays but are unlikely to be their {\\it dominant} sources.\nAstronomical objects tracing the standard SFR or evolving much more slowly\nare needed to explain the observation without fine-tuning.\nIf the highest-energy cosmic rays are not proton-dominated,\nthese constraints are certainly relaxed. The model-dependent tests\ndescribed here, however, have already placed limits on some of\nthe parameter space of the AGN\/pulsar scenarios.\n\nThere is a loophole: a hypothesis that any high-energy neutrino emission\ndoes not involve cosmic-ray emission. A good example is the GRB choked jet\nmodel~\\cite{chokedGRB}. 
In dense environments, the optical depth $\\tau_0\\gg 1$,\nwhich implies that all the proton energy is\nconverted into neutrinos; {\\it i.e.}, the observed UHECRs and\nneutrinos are not directly connected.\n\nHow can we identify UHECR\/HECR sources that evolve\nat the usual SFR, or even more slowly?\nReal-time multi-messenger observation\ntriggered by high-energy neutrinos is a possible answer. IceCube has launched\nthe Gamma-ray Coordinates Network-based alert delivery system~\\cite{icecubeGCN}.\nThe search algorithms for Extremely High-Energy neutrinos~\\cite{icecubePeV2013, EHE2016}\nand High-Energy Starting Events~\\cite{icecubeHESE2013, IceCubeHESE2014},\nthe analysis channels that discovered the high-energy cosmic neutrinos, are now running\nin real time at IceCube's South Pole data servers.\nOnce a high-energy-neutrino-induced event is detected, an alert is sent immediately\nto trigger follow-up observations by other astronomical instruments.\nIf UHECR\/HECR sources are transient neutrino sources,\nwe may be able to identify them by follow-up detection with optical\/X-ray\/$\\gamma$-ray\/radio\ntelescopes. This is probably a promising way to approach identification of\nthe yet-unknown origins of high-energy cosmic rays.\n\n\\section*{Acknowledgments}\nI am grateful to the CRIS 2016 organizers for\ntheir warm hospitality.\nI acknowledge my colleagues in the IceCube Collaboration\nfor useful discussions and suggestions.\nI also appreciate the input of Kunihito Ioka, Kumiko Kotera, Kohta Murase, \nand Hajime Takami on the theoretical arguments.\nSpecial thanks go to Aya Ishihara, who has worked with me\non the analyses of extremely high-energy neutrinos \nfor many years. This work is supported by JSPS\nGrants-in-Aid for Scientific Research (Project \\#25105005 and \\#25220706).\n\n\\nocite{*}\n\\bibliographystyle{elsarticle-num}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nThe Sun is the best-known star to astronomers, and is commonly used as a template in the study of other similar objects. Yet some of its aspects are still not well understood, and these are crucial for a better understanding of how stars, and consequently how planetary systems and life, evolve: how do the more complex physical parameters of a Sun-like star, such as rotation and magnetic activity, change with time? Is the Sun unique or typical (i.e., an average Sun-like star)? If the Sun is common, it would mean that life does not require a special star to flourish, eliminating the need to invoke anthropic reasoning to explain it.\n\nIn an effort to assess how typical the Sun is, \\citet{2008ApJ...684..691R} compared 11 of its physical parameters with nearby stars, and concluded that the Sun is, in general, typical. Although they found it to be a slow rotator compared to 276 nearby F8 -- K2 stars (within $\\pm 0.1$ M$_\\odot$), this result may be rendered inconclusive owing to unaccounted-for noise caused by different masses and ages in their sample. Other studies have suggested that the Sun rotates either unusually slowly \\citep{1979PASP...91..737S, 2015A&A...582A..85L} or normally for its age \\citep{1983ApJS...53....1S, 1985AJ.....90.2103S, 1984ApJ...281..719G, 1998SSRv...85..419G, 2003ApJ...586..464B}, but none of them comprised stars that are very similar to the Sun, therefore preventing a reliable comparison. 
In fact, with \\textit{Kepler} and \\textit{CoRoT}, it is now possible to obtain precise measurements of rotation periods, masses and ages of stars in a very homogeneous way \\citep[e.g.,][]{2015EPJWC.10106016C, 2012A&A...548L...1D, 2014ApJS..210....1C}, but they generally lack high precision stellar parameters, which are accessible through spectroscopy. The challenging nature of these observations limited ground-based efforts to smaller, but key stellar samples \\citep[e.g.,][]{2003A&A...397..147P, 2012AN....333..663S}.\n\nThe rotational evolution of a star plays a crucial role in stellar interior physics and habitability. Previous studies proposed that rotation can produce extra mixing that is responsible for depleting the light elements Li and Be in their atmospheres \\citep{1989ApJ...338..424P, 1994A&A...283..155C, 2015A&A...576L..10T}, which could explain the disconnection between meteoritic and solar abundances of Li \\citep{2010A&A...519A..87B}. Moreover, rotation is highly correlated with magnetic activity \\citep[e.g.,][]{1984ApJ...279..763N, 1993ApJS...85..315S, 1995ApJ...438..269B, 2008ApJ...687.1264M}, and this trend is key to understand how planetary systems and life evolve in face of varying magnetic activity and energy outputs by solar-like stars during the main sequence \\citep{2009IAUS..258..395G, 2005ApJ...622..680R, 2016ApJ...820L..15D}.\n\nA theoretical treatment of rotational evolution from first principles is missing, so we often rely on empirical studies to infer about it. One of the pioneer efforts in this endeavor produced the well known Skumanich relation $v \\propto t^{-1\/2}$, where $v$ is the rotational velocity and $t$ is the stellar age \\citep{1972ApJ...171..565S}, which describes the rotational evolution of solar-type stars in the main sequence, and can be derived from the loss of angular momentum due to magnetized stellar winds \\citep[e.g.,][]{1988ApJ...333..236K, 1992ASPC...26..416C, 2003ApJ...586..464B, 2013A&A...556A..36G}. This relation sparked the development of gyrochronology, which consists in estimating stellar ages based on their rotation, and it was shown to provide a stellar clock as good as chromospheric ages \\citep{2007ApJ...669.1167B}. However, in Skumanich-like relations, the Sun generally falls on the curve (or plane, if we consider dependence on mass) defined by the rotational braking law by design. Thus it is of utmost importance to assess how common the Sun is in order to correctly calibrate it.\n\nSubsequent studies have proposed modifications to this paradigm of rotation and chromospheric activity evolution \\citep[e.g.,][]{1991ApJ...375..722S, 2004A&A...426.1021P}, exploring rotational braking laws of the form $v \\propto t^{-b}$. The formalism by \\citet{1988ApJ...333..236K} shows that this index $b$ can be related to the geometry of the stellar magnetic field, and that Skumanich's index ($b = 1\/2$) corresponds to a geometry that is slightly more complex than a simple radial field. It also dictates the dependence of the angular momentum on the rotation rate, and in practice, it determines how early the effects of braking are felt by a model. Such prescriptions for rotational evolution have a general agreement for young ages up to the solar age \\citep[see][and references therein]{2016arXiv160507125S, 2016A&A...587A.105A}, but the evolution for older ages still poses an open question. 
In particular, \\citet{2016Natur.529..181V} suggested that stars undergo a weakened magnetic braking after they reach a critical value of the Rossby number, thus explaining the stagnation trend observed in the rotational periods of older Kepler stars.\n\nIn order to assess how typical the Sun is in its rotation, our study aims to verify whether it follows the rotational evolution of stars that are very similar to it, an objective that is achieved by precisely measuring their rotational velocities and ages. We take advantage of an unprecedentedly large sample of solar twins \\citep{2014A&A...572A..48R} using high signal-to-noise ($S\/N > 500$) and high resolution ($R > 10^5$) spectra, which provide us with precise stellar parameters and are essential for the analysis that we perform (see Fig. \\ref{widths} for an illustration of the subtle effects of rotation in the spectra of Sun-like stars).\n\n\n\\section{Working sample}\n\nOur sample consists of bright solar twins in the Southern Hemisphere, which were mostly observed in our HARPS Large Program (ID: 188.C-0265) that aimed to search for planetary systems around stars very similar to the Sun \\citep[][Papers I, II and III, respectively, of the series The Solar Twin Planet Search]{2014A&A...572A..48R, 2015A&A...581A..34B, 2016A&A...590A..32T}. These stars are loosely defined as those that have T$\\mathrm{_{eff}}$, $\\log{g}$ and [Fe\/H] inside the intervals $\\pm 100$ K, $\\pm 0.1$ [cgs] and $\\pm 0.1$ dex, respectively, around the solar values. It has been shown that these limits guarantee $\\sim$0.01 dex precision in the relative abundances derived using standard model atmosphere methods and that the systematic uncertainties of that analysis are negligible within those ranges \\citep{2014ApJ...795...23B, 2015A&A...583A.135B, 2015A&A...582A..17S, 2016A&A...589A..17Y}. In total, we obtained high precision spectra for 73 stars and used data from 9 more targets observed in other programs, all of them overlapping the sample of 88 stars from Paper I. We used the spectrum of the Sun (reflected light from the Vesta asteroid) from the ESO program 088.C-0323, which was obtained with the same instrument and configuration as the solar twins.\n\nThe ages of the solar twin sample span $0-10$ Gyr and are presented in the online material (Table \\ref{params}). They were obtained by \\citet{2016A&A...590A..32T} using Yonsei-Yale isochrones \\citep{2001ApJS..136..417Y} and probability distribution functions as described in \\citet{2013ApJ...764...78R,2014A&A...572A..48R}. Uncertainties are assumed to be symmetric. These ages are in excellent agreement with the ones obtained in Paper I, with a mean difference of $-0.1 \\pm 0.2$ Gyr (see footnote 5 in Paper III). We adopted 4.56 Gyr for the solar age \\citep{1995RvMP...67..781B}. The other stellar parameters ($T\\rm_{eff}$, $\\log{g}$, [Fe\/H] and microturbulence velocities $v\\rm_t$) were obtained by \\citet{2014A&A...572A..48R}. The stellar parameters of HIP 68468 and HIP 108158 were updated by \\citet{2016A&A...590A..32T}.\n\nOur targets were observed with the HARPS spectrograph\\footnote{\\footnotesize{The initial plan was to use the observations from the MIKE spectrograph, as described in Paper I. However, we decided to use the HARPS spectra because of the higher spectral resolving power.}} \\citep{2003Msngr.114...20M}, which is fed by ESO's 3.6 m telescope at La Silla Observatory. 
When publicly available, we also included all observations from other programs in our analysis in order to increase the signal-to-noise ratio ($S\/N$) of our spectra. However, we did not use observations of 18 Sco (HIP 79672) from May 2009\\footnote{\\footnotesize{These observations have instrumental artifacts.}} and we did not include observations taken after the HARPS upgrade (June 2015) when combining the spectra\\footnote{\\footnotesize{The spectra had a different shape on the red side, and since there were few such observations, we chose not to use them, to avoid potential problems with combination and normalization.}}.\n\nThe wavelength coverage for the observations ranged from 3780 to 6910 \\AA, with a spectral resolving power of $R = \\lambda\/\\Delta\\lambda = 115000$. Data reduction was performed automatically with the HARPS Data Reduction Software (DRS). Each spectrum was divided into two halves, corresponding to the mosaic of two detectors (one optimized for the blue and the other for the red wavelengths). In this study we only worked with the red part (from 5330 to 6910 \\AA) due to its higher $S\/N$ and the presence of cleaner lines. The correction for radial velocities was performed with the task \\texttt{dopcor} from IRAF\\footnote{\\footnotesize{IRAF is distributed by the National Optical Astronomy Observatories, which are operated by the Association of Universities for Research in Astronomy, Inc., under cooperative agreement with the National Science Foundation.}}, using the values obtained from the pipeline's cross-correlation function (CCF) data. The different observations were combined with IRAF's \\texttt{scombine}. The resulting sample-averaged signal-to-noise ratio was 500 around 6070 \\AA. The red regions of the spectra were normalized with $\\sim$30th order polynomial fits to the upper envelopes of the entire red range, using the task \\texttt{continuum} in IRAF. We made sure that the continua of the stars were consistent with the Sun's. Additionally, we verified that errors in the continuum determination introduce uncertainties in $v \\sin{i}$ lower than $0.1$ km s$^{-1}$.\n\n \\begin{figure}[!t]\n \\centering\n \\includegraphics[width=\\hsize]{line_comp.pdf}\n \\caption{Comparison of the spectral line broadening between two solar twins with different projected rotational velocities. The wider line corresponds to HIP 19911, with $v \\sin i \\approx 4.1$ km s$^{-1}$, and the narrower one comes from HIP 8507, with $v \\sin i \\approx 0.8$ km s$^{-1}$.}\n \\label{widths}\n \\end{figure}\n\n\n\\section{Methods}\\label{methods}\n\nWe analyze five spectral lines, four due to Fe I and one to Ni I (see Table \\ref{lines}; equivalent widths were measured using the task \\texttt{splot} in IRAF), which were selected for having a low level of contamination by blending lines. The rotational velocity of a star can be measured by estimating the spectral line broadening that is due to rotation. The rotation axes of the stars are randomly oriented; thus, the spectroscopic measurements of rotational velocity are a function of the inclination angle ($v \\sin{i}$).\n\n\\begin{table}[h]\n\\begin{center}\n\\caption{Line list used in the projected stellar rotation measurements.}\n\\begin{tabular}{cccccc}\n\\hline \\hline\\\\[-2ex]\nWavelength & Z & Exc. pot. 
& $\\log{(gf)}$ & $v_{\\mathrm{macro}}^\\odot$ & EW$^\\odot$\\\\\n(\\AA) & & (eV) & & (km s$^{-1}$) & (\\AA)\\\\ \\hline\n6027.050 & 26 & 4.076 & -1.09 & 3.0 & 0.064 \\\\\n6151.618 & 26 & 2.176 & -3.30 & 3.2 & 0.051 \\\\\n6165.360 & 26 & 4.143 & -1.46 & 3.1 & 0.045 \\\\\n6705.102 & 26 & 4.607 & -0.98 & 3.6 & 0.047 \\\\\n6767.772 & 28 & 1.826 & -2.17 & 2.9 & 0.079 \\\\\n\\hline\n\\end{tabular}\n\\tablefoot{EW are the equivalent widths and $v_{\\mathrm{macro}}$ are the macroturbulence velocities measured as in Sect. \\ref{vmacro_det}.}\n\\label{lines}\n\\end{center}\n\\end{table}\n\nWe estimate $v \\sin{i}$ for our sample of solar twins using the 2014 version of MOOG Synth \\citep{1973PhDT.......180S}, adopting stellar atmosphere models by \\citet{2004astro.ph..5087C}, with interpolations between models performed automatically by the Python package qoyllur-quipu\\footnote{\\footnotesize{Available at \\url{https:\/\/github.com\/astroChasqui\/q2}}} \\citep[see][]{2014A&A...572A..48R}. The instrumental broadening is taken into account by the spectral synthesis. We used the stellar parameters from \\citet{2016A&A...590A..32T} and microturbulence velocities from \\citet{2014A&A...572A..48R}. Macroturbulence velocities ($v_{\\mathrm{macro}}$) were calculated by scaling the solar values, line by line (see Sect. \\ref{vmacro_det}). Estimation of the rotational velocities was performed with our own algorithm\\footnote{\\footnotesize{Available at \\url{https:\/\/github.com\/RogueAstro\/PoWeRS}}} that makes automatic measurements for all spectral lines for each star. We applied fine-tuning corrections by eye for the unsatisfactory automatic line profile fits, and quote $v \\sin{i}$ as the mean of the values measured for the five lines. See Sects. \\ref{vmacro_det} and \\ref{vsini_det} for a detailed description of the rotational velocity estimation as well as its uncertainties. Fig. \\ref{sun_fit} shows an example of spectral line fitting for one feature in the Sun.\n\n\\begin{figure}[!t]\n\\centering\n\\includegraphics[width=\\hsize]{sun_line.pdf}\n\\caption{Example of line profile fitting for the Fe I feature at 6151.62 \\AA\\ in the spectrum of the Sun. The continuous curve is the synthetic spectrum, and the open circles are the observed data.}\n\\label{sun_fit}\n\\end{figure}\n\n\\subsection{Macroturbulence velocities}\\label{vmacro_det}\n\nWe tested the possibility of measuring $v_{\\mathrm{macro}}$ (radial-tangential profile) simultaneously with $v \\sin{i}$, but even when using the extremely high-resolution spectra of HARPS, it is difficult to disentangle these two spectral line broadening processes, which is probably due to the low values of these velocities. Macroturbulence has a stronger effect on the wings of the spectral lines, but our selection of clean lines still has some contamination that requires this high-precision work to be done by eye. Some stars show more contamination than others, complicating the disentanglement. Fortunately, the variation of macroturbulence with effective temperature and luminosity is smooth \\citep{2005oasp.book.....G}, so that precise values of $v_{\\mathrm{macro}}$ could be obtained by a calibration. 
Thus we adopted a relation that fixes macroturbulence velocities in order to measure $v \\sin{i}$ with high precision using an automatic code, which provides the additional benefits of reproducibility and lower subjectivity.\n\nThe macroturbulence velocity is known to vary for different spectral lines \\citep{2005oasp.book.....G}, so for our high-precision analysis, we do not adopt a single value for each star. Instead, we measure the $v_{\\mathrm{macro}}$ for the Sun in each of the spectral lines from Table \\ref{lines}, and use these values to scale the $v_{\\mathrm{macro}}$ for all stars in our sample using the following equation\\footnote{\\footnotesize{In the future, it should be possible to calibrate macroturbulence velocities using 3D hydrodynamical stellar atmosphere models \\citep[e.g.,][]{2013A&A...557A..26M} by using predicted 3D line profiles (without rotational broadening) as observations and determine which value of $v_{\\rm{macro}}$ is needed to reproduce them with 1D model atmospheres.}}:\n\n\\begin{eqnarray}\nv_{\\mathrm{macro},\\lambda}^{*} = v_{\\mathrm{macro},\\lambda}^{\\odot} - 0.00707\\ T_{\\mathrm{eff}} + 9.2422 \\times 10^{-7}\\ T_{\\mathrm{eff}}^2 \\nonumber \\\\ + 10.0 + k_1 \\left(\\log{g} - 4.44\\right) + k_2 \\\\\n\\equiv f(T_{\\mathrm{eff}}) + k_1\\left(\\log{g} - 4.44\\right) + k_2 \\nonumber\n\\label{vmacro_eq}\n\\end{eqnarray}where $v\\rm_{macro,\\lambda}^{\\odot}$ is the macroturbulence velocity of the Sun for a given spectral line, $T_{\\mathrm{eff}}$ and $\\log{g}$ are, respectively, the effective temperature and gravity of a given star, $k_1$ is a proportionality factor for $\\log{g}$ and $k_2$ is a small correction constant.\n\nThis formula is partly based on the relation derived by \\citet{2012A&A...543A..29M} (Eq. E.1 in their paper) from the trend of macroturbulence with effective temperature in solar-type stars described by \\citet{2005oasp.book.....G}. The $\\log{g}$-dependent term (a proxy for luminosity) comes from the empirical relation derived by \\citet{2014MNRAS.444.3592D} (Eq. 8 in their paper), and is based on spectroscopic measurements of $v_{\\mathrm{macro}}$ of \\textit{Kepler} stars, which were disentangled from $v \\sin{i}$ using asteroseismic estimates of the projected rotational velocities. Doyle et al. obtained a value for the proportionality factor $k_1$ of -$2.0$. However their uncertainties on $v_{\\mathrm{macro}}$ were of the order of 1.0 km s$^{-1}$. Thus, we decided to derive our own values of $k_1$ and $k_2$ by simultaneously measuring $v_{\\mathrm{macro}}$ and $v \\sin{i}$ of a sub-sample of solar twins.\n\nThis sub-sample was chosen to contain only single stars or visual binaries mostly in the extremes of $\\log{g}$ ($4.25$ -- $4.52$) in our entire sample. We assume these values to have a linear relationship with $v_{\\mathrm{macro}}$ inside this short interval of $\\log{g}$. We used as a first guess the values of $v \\sin{i}$ and $v_{\\mathrm{macro}}$ from a previous, cruder estimation we made, and performed line profile fits by eye using MOOG Synth. The velocities in Table \\ref{vmacro_stars} are the median of the values measured for each line and their standard error. Note that these $v \\sin{i}$ are not consistently measured in the same way that the final results are. The rotational velocity broadening was calculated by our own code (see Sect. \\ref{vsini_det} for details). By performing a linear fit in the $v_{\\mathrm{macro}} - f(T_{\\mathrm{eff}})$ vs. 
$\\log{g}-4.44$ relation ($f$ comprises all the $T_{\\mathrm{eff}}$-dependent terms, the macroturbulence velocity of the Sun and the known constant on Eq. \\ref{vmacro_eq}), we obtain that $k_1 = -1.81 \\pm 0.26$ and $k_2 = -0.05 \\pm 0.03$ (see Fig. \\ref{vmacro_logg}). For the stars farthest from the Sun in $\\log{g}$ from our sample, these values of $k_1$ and $k_2$ would amount to differences of up to $\\pm 0.4$ km s$^{-1}$ in their macroturbulence velocities, therefore it is essential to consider the luminosity effect on $v\\rm_{macro}$ for accurate $v \\sin{i}$ determinations.\n\n\\begin{table}[h]\n\\begin{center}\n\\caption{Simultaneous measurements of rotational and macroturbulence velocities of stars in the extremes of $\\log{g}$ from our sample of solar twins.}\n\\begin{tabular}{lcccc}\n\\hline \\hline\\\\[-2ex]\nStar & $v \\sin{i}$ & $v_{\\mathrm{macro}}$ & $T\\rm_{eff}$ & $\\log{g}$ \\\\\n & (km s$^{-1}$) & (km s$^{-1}$) & & \\\\ \\hline\nHIP 115577 & $0.95 \\pm 0.05$ & $3.35 \\pm 0.09$ & 5699 & 4.25 \\\\\nHIP 65708 & $1.20 \\pm 0.09$ & $3.55 \\pm 0.08$ & 5755 & 4.25 \\\\\nHIP 74432 & $1.40 \\pm 0.03$ & $3.35 \\pm 0.08$ & 5684 & 4.25 \\\\\nHIP 118115 & $1.40 \\pm 0.10$ & $3.43 \\pm 0.09$ & 5808 & 4.28 \\\\\nHIP 68468 & $1.75 \\pm 0.07$ & $3.70 \\pm 0.08$ & 5857 & 4.32 \\\\\nHIP 41317 & $1.55 \\pm 0.03$ & $3.10 \\pm 0.06$ & 5700 & 4.38 \\\\\nSun & $1.75 \\pm 0.07$ & $3.30 \\pm 0.06$ & 5777 & 4.44 \\\\\nHIP 105184 & $2.50 \\pm 0.03$ & $3.21 \\pm 0.08$ & 5833 & 4.50 \\\\\nHIP 10175 & $1.55 \\pm 0.06$ & $3.05 \\pm 0.08$ & 5738 & 4.51 \\\\\nHIP 114615 & $2.20 \\pm 0.03$ & $3.25 \\pm 0.08$ & 5816 & 4.52 \\\\\nHIP 3203 & $3.90 \\pm 0.03$ & $3.40 \\pm 0.10$ & 5850 & 4.52 \\\\\n\\hline\n\\end{tabular}\n\\label{vmacro_stars}\n\\end{center}\n\\end{table}\n\n \\begin{figure}[!t]\n \\centering\n \\includegraphics[width=\\hsize]{vmacro_logg.pdf}\n \\caption{Linear relation between $v_{\\mathrm{macro}}$ and $\\log{g}$ (a proxy for luminosity) for the stars on Table \\ref{vmacro_stars}. See the definition of $f(T_{\\mathrm{eff}})$ in Sect. \\ref{vmacro_det}. The orange continuous line represents our determination of a proportionality coefficient of -1.81 and a vertical shift of -0.05 km s$^{-1}$. The black dashed line is the coefficient found by \\citet{2014MNRAS.444.3592D}. The light grey region is a composition of 200 curves with parameters drawn from a multivariate gaussian distribution. The Sun is located at the origin.}\n \\label{vmacro_logg}\n \\end{figure}\n\nTo obtain the macroturbulence velocities for the Sun to use in Eq. \\ref{vmacro_eq}, we forced the rotational velocity of the Sun to 1.9 km s$^{-1}$ \\citep{1970SoPh...12...23H}, and then estimated values of $v\\rm_{macro, \\lambda}^{\\odot}$ by fitting each line profile using MOOG Synth, and the results are shown in Table \\ref{lines}. We estimate the error in determining $v\\rm_{macro,\\lambda}^{\\odot}$ to be $\\pm 0.1$ km s$^{-1}$. Since Eq. \\ref{vmacro_eq} is an additive scaling, the error for $v\\rm_{macro}$ of all stars is the same as in the Sun\\footnote{\\footnotesize{The uncertainties in stellar parameters have contributions that are negligible compared to the ones introduced by the error in $v_{\\mathrm{macro}}$.}}.\n\n\\subsection{Rotational velocities}\\label{vsini_det}\n\nOur code takes as input the list of stars and their parameters (effective temperature, surface gravity, metallicity and microturbulence velocities obtained on Paper I), their spectra and the spectral line list in MOOG-readable format. 
For each line in a given star, the code automatically corrects the spectral line shift and the continuum. The first is done by fitting a second order polynomial to the kernel of a line and estimating what distance the observed line center is from the laboratory value. Usually, the spectral line shift corrections were of the order of $10^{-2}$ \\AA, corresponding to 0.5 km s$^{-1}$ in the wavelength range we worked on. This is a reasonable shift that likely arises from a combination of granulation and gravitational redshift effects, which are of similar magnitude. The continuum correction for each line is defined as the value of a multiplicative factor that sets the highest flux inside a radius of 2.5 \\AA\\ around the line center to 1.0. The multiplicative factor usually has a value inside the range $1.000 \\pm 0.002$.\n\nThe code starts with a range of $v \\sin{i}$ and abundances and optimizes these two parameters through a series of iterations that measure the least squares difference between the observed line and the synthetic line (generated with MOOG synth). Convergence is achieved when the difference between the best solution and the previous one, for both $v \\sin{i}$ and abundance, is less than 1\\%. Additionally, the code also forces at least 10 iterations in order to avoid falling into local minima.\n\nOne of the main limitations of MOOG Synth for our analysis is that it has a \"quantized\" behavior for $v \\sin{i}$: the changes in the synthetic spectra occur most strongly in steps of 0.5 km s$^{-1}$. This behavior is not observed in varying the macroturbulence velocities. Therefore, we had to incorporate a rotational broadening routine in our code that was separated from MOOG. We used the Eq. 18.14 from \\citet{2005oasp.book.....G}, in velocity space, to compute the rotational profile\\footnote{\\footnotesize{This is the same recipe adopted by the radiative transfer code MOOG.}}:\n\n\\begin{equation}\nG(v) = \\frac{2(1-\\epsilon)\\left[ 1-(v\/v\\rm_L)^2 \\right]^{1\/2} + \\frac{1}{2} \\pi \\epsilon \\left[ 1-(v\/v\\rm_L)^2 \\right]}{\\pi v\\rm_L (1-\\epsilon\/3)}\\mathrm{,}\n\\end{equation}where $v\\rm_L$ is the projected rotational velocity and $\\epsilon$ is the limb darkening coefficient (for which we adopt the value 0.6). The rotational profile $G(v)$ is then convolved with MOOG's synthetic profiles (which were generated with $v \\sin{i}$ = 0).\n\nThe total uncertainties in rotational velocities are obtained from the quadratic sum of the standard error of the five measurements and an uncertainty of 0.1 km s$^{-1}$ introduced by the error in macroturbulence velocities. Systematic errors in the calculation of $v\\rm_{macro,\\lambda}$ for the stars do not significantly contribute to the $v \\sin{i}$ uncertainties.\n\nSome of the stars in the sample show very low rotational velocities, most probably due to the effect of projection (see left panel of Fig. \\ref{sku}). The achieved precision is validated by comparison with the values of the full-width at half maximum (FWHM) measured by the cross-correlation function (CCF) from the data reduction pipeline, with the effects of macroturbulence subtracted (see Fig. \\ref{ccf}). 
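To illustrate the broadening step just described, a short sketch of Gray's rotational profile and its convolution with a line profile is given below; the Gaussian ``synthetic'' line and the velocity grid are placeholders and are not the MOOG output used in the actual analysis.
\\begin{verbatim}
import numpy as np

def rotational_profile(v, vsini, epsilon=0.6):
    # Gray (2005), Eq. 18.14, in velocity space; zero outside |v| > vsini.
    x2 = np.clip(1.0 - (v / vsini)**2, 0.0, None)
    g = (2.0 * (1.0 - epsilon) * np.sqrt(x2) + 0.5 * np.pi * epsilon * x2) \
        / (np.pi * vsini * (1.0 - epsilon / 3.0))
    return np.where(np.abs(v) <= vsini, g, 0.0)

v = np.arange(-30.0, 30.0, 0.1)                       # velocity grid in km/s
synthetic = 1.0 - 0.6 * np.exp(-0.5 * (v / 3.0)**2)   # placeholder v sin i = 0 profile

kernel = rotational_profile(v, vsini=2.0)
kernel /= kernel.sum()                                # normalize on the discrete grid
broadened = 1.0 + np.convolve(synthetic - 1.0, kernel, mode="same")
\\end{verbatim}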
In this comparison, the spectroscopic binary star HIP 103983 shows an unusually high $v \\sin{i}$ relative to the CCF FWHM, and an inspection of its spectral line profiles reveals distortions, most probably caused by contamination of the combined spectrum by a companion (the observations range from October 2011 to August 2012), that lead to a mis-measurement of its rotational velocity. We obtained a curve fit for the $v \\sin{i}$ vs. CCF FWHM (km s$^{-1}$) using a relation similar to those used by \\citet{2001A&A...375..851M, 2004A&A...426.1021P, 2007A&A...475.1003H}, which resulted in the following calibration: $v \\sin{i} = \\sqrt{(0.73 \\pm 0.02) \\left[\\mathrm{FWHM}^2 - v_{\\mathrm{macro}}^2 - (5.97 \\pm 0.01)^2\\right]}$ km s$^{-1}$ \\citep[estimation performed with the MCMC code \\texttt{emcee}\\footnote{\\footnotesize{Available at \\url{http:\/\/dan.iel.fm\/emcee\/current\/}}}][]{2013PASP..125..306F}. The scatter between the measured $v \\sin{i}$ and the ones estimated from the CCF is $\\sigma = 0.20$ km s$^{-1}$ (excluding the outlier HIP 103983). The typical uncertainty in the rotational velocities we obtain with our method -- line profile fitting with extremely high-resolution spectra -- is 0.12 km s$^{-1}$, which implies that the average error of the CCF FWHM $v \\sin{i}$ scaling is 0.16 km s$^{-1}$; this could be significantly higher if the broadening by $v_\\mathrm{macro}$ were not accounted for.\n\n\\begin{figure}[!t]\n\\centering\n\\includegraphics[width=\\hsize]{ccf_vs_moog.pdf}\n\\caption{Comparison between our estimated values of $v \\sin{i}$ (y-axis) and the ones inferred from the cross-correlation function FWHM (x-axis). The spread around the 1:1 relation (black line) is $\\sigma = 0.20$ km s$^{-1}$.}\n\\label{ccf}\n\\end{figure}\n\n\n\\section{Binary stars}\n\nWe identified 16 spectroscopic binaries (SB) in our sample of 82 solar twins by analyzing their radial velocities; some of these stars are reported as binaries by \\citet{2014AJ....147...86T, 2014AJ....147...87T, 2001AJ....121.3224M, 2015ApJ...802...37B}. We did not find previous reports of multiplicity for the stars HIP 30037, HIP 62039 and HIP 64673 in the literature. Our analysis of variation in the HARPS radial velocities suggests that the first two are probable SBs, while the latter is a candidate. No binary shows a double-lined spectrum, but HIP 103983 has distortions that could be from contamination by a companion. The star HIP 64150 is a Sirius-like system with a directly observed white dwarf companion \\citep{2013ApJ...774....1C,2014ApJ...783L..25M}. The sample from Paper I contains another SB, HIP 109110, for which we could not reliably determine the $v \\sin{i}$ due to strong contamination in the spectra, possibly caused by a relatively bright companion. Thus, we did not include this star in our sample.\n\n\\begin{figure*}[!ht]\n\\centering\n\\begin{tabular}{cc}\n\\includegraphics[width=0.48\\textwidth]{all_stars.pdf} & \\includegraphics[width=0.48\\textwidth]{sku_final.pdf} \\\\\n\\end{tabular}\n\\caption{Projected rotational velocity of solar twins as a function of their age. The Sun is represented by the symbol $\\odot$. Left panel: all stars of our sample; the orange triangles are spectroscopic binaries, blue circles are the \\textit{selected sample} and the blue dots are the remaining non-spectroscopic binaries. 
Right panel: the rotational braking law; the purple continuous curve is our relation inferred from fitting the \\textit{selected sample} (blue circles) of solar twins with the form $v \\sin{i} = v_\\mathrm{f} + m\\ t^{-b}$, where $t$ is the stellar age, and the fit parameters are $v_\\mathrm{f} = 1.224 \\pm 0.447$, $m = 1.932 \\pm 0.431$ and $b = 0.622 \\pm 0.354$, with $v_\\mathrm{f}$ and $b$ highly and positively correlated. The light grey region is composed of 300 curves that are created with parameters drawn from a multivariate Gaussian distribution defined by the mean values of the fit parameters and their covariance matrix. Skumanich's law (red $\\times$ symbols, calibrated for $v^\\odot_{\\mathrm{rot}} = 1.9$ km s$^{-1}$) and the rotational braking curves proposed by \\citet[][black dashed curve, smoothed]{2014ApJ...790L..23D} and \\citet[][black dot-dashed curve]{2004A&A...426.1021P} are plotted for comparison.}\n\\label{sku}\n\\end{figure*}\n\nOf these 16 spectroscopic binaries, at least four (HIP 19911, 43297, 67620 and 73241) show unusually high $v \\sin{i}$ (see the left panel of Fig. \\ref{sku}). These stars also present other anomalies, such as their [Y\/Mg] abundances \\citep{2016A&A...590A..32T} and magnetic activity \\citep{2014A&A...572A..48R, F16inprep}. The solar twin blue straggler HIP 10725 \\citep{2015A&A...584A.116S}, which is not included in our sample, also shows a high $v \\sin{i}$ for its age. We find that five of the binaries have rotational velocities below those expected for Sun-like stars, but this is most likely an effect of projection of the stars' rotational axes. For the remaining binaries, which follow the rotational braking law, it is again difficult to disentangle this behavior from the $\\sin{i}$, and a statistical analysis is precluded by the low numbers involved. Tidal interactions between companions that could potentially enhance rotation depend on binary separation, which is unknown for most of these stars. They should be regular rotators, since they do not show anomalies in chromospheric activity \\citep{F16inprep} or [Y\/Mg] abundances \\citep{2016A&A...590A..32T}.\n\nBased on the fact that at least 25\\% of the spectroscopic binaries in our sample show higher rotational velocities than expected for single stars, we conclude that stellar multiplicity is an important enhancer of rotation in Sun-like stars. Blue stragglers are expected to have a strong enhancement in rotation due to injection of angular momentum from the donor companion.\n\n\n\\section{The rotational braking law}\\label{br_law}\n\nIn order to correctly constrain the rotational braking, we removed from this analysis all the spectroscopic binaries. The non-SB HIP 29525 displays a $v \\sin{i}$ much higher than expected ($3.85 \\pm 0.13$ km s$^{-1}$), but it is likely that this is due to an overestimated isochronal age ($2.83 \\pm 1.06$ Gyr). Because it is a clear outlier in our results, we decided not to include HIP 29525 in the rotational braking determination. \\citet{2010A&A...521A..12M} found X-ray and chromospheric ages of 0.55 and 0.17 Gyr, respectively, for HIP 29525. We then divided the remaining 65 stars and the Sun into bins of 2 Gyr, and removed from this sample all the stars that were below the 70th percentile of $v \\sin{i}$ in each bin\\footnote{\\footnotesize{By doing a simple simulation with angles $i$ drawn from a flat distribution between 0 and $\\pi\/2$, we verify that 30\\% of the stars should have $\\sin{i}$ above 0.9.}}. 
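As an illustration of the selection just described, the sketch below draws inclination angles from a flat distribution (as in the footnote) and applies the per-bin percentile cut; the ages and velocities used here are placeholders, not the measured values of our sample.
\\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)

# Flat distribution of inclinations between 0 and pi/2: roughly 30% of the
# stars end up with sin(i) above 0.9, as stated in the footnote.
incl = rng.uniform(0.0, np.pi / 2.0, 100000)
print(np.mean(np.sin(incl) > 0.9))          # ~0.29

# 70th-percentile cut on v sin i in 2-Gyr age bins (placeholder data).
age = rng.uniform(0.0, 10.0, 66)
vsini = rng.uniform(0.5, 2.5, 66)
selected = []
for lo in np.arange(0.0, 10.0, 2.0):
    in_bin = (age >= lo) & (age < lo + 2.0)
    if in_bin.any():
        cut = np.percentile(vsini[in_bin], 70.0)
        selected.extend(np.flatnonzero(in_bin & (vsini >= cut)))
\\end{verbatim}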
This cut allowed us to select the stars with the highest chance of having $\\sin{i}$ above $0.9$. In total, 21 solar twins and the Sun compose what we hereafter refer to as the \\textit{selected sample}. Although this sub-sample is smaller, it has the advantage of largely removing the uncertainties on the inclination angle of the stellar rotation axes\\footnote{\\footnotesize{This procedure can also allow rare, unusually fast-rotating stars with $\\sin {i}$ below 0.9 to leak into our sample.}}. We stress that the only reason we can select the stars that are most likely rotating edge-on ($i = \\pi\/2$) is that we have a large sample of solar twins in the first place.\n\nWe then proceeded to fit a general curve to the \\textit{selected sample} (see Fig. \\ref{sku}) using the method of orthogonal distance regression \\citep[ODR,][]{boggs1990orthogonal}, which takes into account the uncertainties on both $v \\sin{i}$ and ages. This curve is a power law plus constant of the form $v = v_\\mathrm{f} + m\\ t^{-b}$ \\citep[the same chromospheric activity and $v \\sin{i}$ vs. age relation used by][]{2004A&A...426.1021P, 2009IAUS..258..395G}, with $v$ (rotational velocity) and $v\\rm_f$ (asymptotic velocity) in km s$^{-1}$ and $t$ (age) in Gyr.\n\nWe find that the best fit parameters are $v_\\mathrm{f} = 1.224 \\pm 0.447$, $m = 1.932 \\pm 0.431$ and $b = 0.622 \\pm 0.354$ (see right panel of Fig. \\ref{sku}). These large uncertainties are likely due to: i) the strong correlation between $v_\\mathrm{f}$ and $b$; and ii) the relatively limited number of datapoints between 1 and 4 Gyr, where the fit parameters are most effective in changing the values of $v$. This limitation is also present in past studies \\citep[e.g.,][]{2016Natur.529..181V, 2003ApJ...586..464B, 2004A&A...426.1021P, 2008ApJ...687.1264M, 2014A&A...572A..34G, 2016A&A...587A.105A}. On the other hand, our sample is the largest comprising solar twins, and therefore should produce more reliable results. With more datapoints, we would be able to use 1 Gyr bins instead of 2 Gyr in order to select the fastest rotating stars, which would result in a better sub-sample for constraining the rotational evolution of young stars.\n\nThe relation we obtain is in contrast with some previous studies on modelling the rotational braking \\citep{2001ApJ...561.1095B, 2003ApJ...586..464B, 2015A&A...584A..30L} which either found or assumed that Skumanich's law explains the rotational braking of Sun-like stars well. The conclusions by \\citet{2016Natur.529..181V} limit its range of validity to ages up to approximately the solar age (4 Gyr) for stars with solar mass. When we enforce the Skumanich power law index $b = 1\/2$, we obtain a worse fit between the ages 2 and 4 Gyr (and, not surprisingly, also after the solar age).\n\nOur data and the rotational braking law that results from them show that the Sun is a normal star regarding its rotational velocity when compared to solar twins. However, they do not agree with a regular Skumanich law \\citep[][red $\\times$ symbols in Fig. \\ref{sku}]{2007ApJ...669.1167B}. We find a better agreement with the model proposed by \\citet[][black dashed curve in Fig. \\ref{sku}]{2014ApJ...790L..23D}, especially for stars older than 2 Gyr. This model is thoroughly described in Appendix A of \\citet{2012A&A...548L...1D}. 
In summary, it uses an updated treatment of the instabilities relevant to the transport of angular momentum according to \\citet{1992A&A...265..115Z} and \\citet{1997A&A...317..749T}, with an initial angular momentum for the Sun $J_0 = 1.63 \\times 10^{50}$ g cm$^2$ s$^{-1}$. Its corresponding rotational braking curve is computed using the output radii of the model, which vary from $\\sim$1 R$_\\odot$ at the current solar age to 1.57 R$_\\odot$ at the age of 11 Gyr, and it changes significantly if we use a constant radius $R = 1$ R$_\\odot$, resulting in a more Skumanich-like rotational braking.\n\nOur result agrees with the chromospheric activity vs. age behavior for solar twins obtained by \\citet{2014A&A...572A..48R}, in which a steep decay of the $R'\\rm_{HK}$ index during the first 4 Gyr was deduced (see Fig. 11 in their paper). The study by \\citet{2004A&A...426.1021P} also suggests a steeper power-law index ($b = 1.47$) than Skumanich's ($b_\\mathrm{S} = 1\/2$) in the rotational braking law derived from young open clusters, the Sun and M 67. However, as seen in Fig. \\ref{sku}, their relation significantly overestimates the rotational velocities of stars, especially for those older than 2 Gyr. This is most probably caused by other line broadening processes, mainly the macroturbulence, which were not considered in that study. As we saw in Sect. \\ref{vmacro_det}, these introduce important effects that are sometimes larger than the rotational broadening. Moreover, a CCF-only analysis tends to produce more spread in the $v \\sin{i}$ than the more detailed analysis we used.\n\nThe rotational braking law we obtain produces a similar outcome to that achieved by \\citet{2016Natur.529..181V} for stars older than the Sun (a weaker rotational braking law after the solar age than previously suggested). Our data also require a different power law index than Skumanich's index for stars younger than the Sun, one that accounts for an earlier decay of rotational velocities up to 2 Gyr.\n\nThe main sequence spin-down model by \\citet{1988ApJ...333..236K} states that, for constant moment of inertia and radius during the main sequence, we would have\n\n\\begin{equation}\n v_{\\mathrm{eq}} \\propto t^{-3\/(4an)} \\mathrm{,}\n \\label{kawaler}\n\\end{equation}where $v_{\\mathrm{eq}}$ is the rotational velocity at the equator and $a$ and $n$ are parameters that measure the dependence on rotation rate and radius, respectively (see Eqs. 7, 8 and 12 in their paper). If we assume a dipole geometry for the stellar magnetic field ($B_\\mathrm{r} \\propto B_0 r^{-3}$), then $n = 3\/7$. Furthermore, assuming that $a = 1$, Eq. \\ref{kawaler} results in $v_{\\mathrm{eq}} \\propto t^{-7\/4} = t^{-1.75}$. Skumanich's law ($v_{\\mathrm{eq}} \\propto t^{-0.5}$) is recovered for $n = 3\/2$, which is close to the case of a purely radial field ($n = 2$, $v_{\\mathrm{eq}} \\propto t^{-0.38}$). A more extensive exploration of the configuration and evolution of magnetic fields of solar twins is outside the scope of this paper, but our results suggest that the rotational braking we observe in this sample of solar twins stems from a magnetic field with an intermediate geometry between dipole and purely radial.\n\n\n\\section{Conclusions}\n\nWe analyzed the rotational velocities of 82 bright solar twins in the Southern Hemisphere and the Sun using extremely high resolution spectra. 
Radial velocities revealed that our sample contained 16 spectroscopic binaries, three of which (HIP 30037, 62039, 64673) were not listed as such in the literature. At least five of these stars show an enhancement in their measured $v \\sin{i}$, which is probably caused by interaction with their close-by companions. They also present other anomalies in chemical abundances and chromospheric activity. We did not clearly identify non-spectroscopic binary stars with unusually high rotational velocities for their age.\n\nIn order to better constrain the rotational evolution of the solar twins, we selected a subsample of stars with higher chances of having their rotational axis inclination close to $\\pi\/2$ (almost edge-on). We opted to use carefully measured isochronal ages for these stars because it is the most reliable method available for this sample. We finally conclude that the Sun seems to be a common rotator, within our uncertainties, when compared to solar twins; therefore, it can be used to calibrate stellar models.\n\nMoreover, we have found that Skumanich's law does not describe well the rotational evolution of the solar twins observed in our data, a discrepancy that is stronger after the solar age. Therefore, we propose a new rotational braking law that supports the weakened braking after the age of the Sun, and comes with an earlier decay in rotational velocities up to 2 Gyr than the classical Skumanich law. Interestingly, it also reveals an evolution that is more similar to the magnetic activity evolution observed in Sun-like stars, which sees a steep decay in the first 3 Gyr and flattens near the solar age. Additionally, we suggest that more high-precision spectroscopic observations of solar twins younger and much older than the Sun could help us better constrain the rotational evolution of solar-like stars.\n\\begin{acknowledgements}\n LdS thanks CAPES and FAPESP, grants no. 2014\/26908-1 and 2016\/01684-9, for support. JM thanks FAPESP (2012\/24392-2) for support. LS acknowledges support from FAPESP (2014\/15706-9). We would also like to thank the anonymous referee for the valuable comments that significantly improved this manuscript.\n\\end{acknowledgements}\n\n\\bibliographystyle{aa}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\\renewcommand{\\thefootnote}{\\arabic{footnote}}\n\\setcounter{footnote}{0}\n\n\nMore than a decade ago, matrix models were expected to realize the non-perturbative definition of string field theories through the double scaling limit\\cite{DS}.\nIn the matrix models, the interaction of the spin cluster domain wall (I-K type interaction) plays an important role in obtaining the non-critical string field theory\\cite{IK}.\nHermitian matrix models formulate the Dynamical Triangulation of the discrete 2D surface, in which orientable strings propagate or interact\\cite{JR}\\cite{AJ}\\cite{Mog}. 
\nThe Loop gas model is a description of the non-critical string field theory, in which every string is located in 1-dimensional discrete target-space point $x$ and interacts with another one in the same point or neighboring points in each time evolution\\cite{KK}.\nIt is formulated by the matrix possessing the additional index $x$\\cite{Kos}\\cite{EKN}.\nOn the other hand, it is well known that the stochastic quantization of the matrix models is effective to deduce the string field theories, and the stochastic time plays the role of the geodesic distance on the 2D random surfaces\\cite{JR}\\cite{Nak}.\nHowever, one of the problems in the Dynamical Triangulation is that the probability of splitting interaction becomes too large to construct the realistic space-time.\nThis problem becomes more severe in higher dimension.\nEven in 2D model, the whole surface of the world-sheet is covered with many projections of infinitesimal baby universe.\n\nThe Causal Dynamical Triangulation (CDT) model is proposed to improve the above problem\\cite{AL}.\nIn this model, the triangulation is severely restricted because of the time-foliation structure.\nThe most characteristic feature of the CDT model is that the causality forbid the splitting and merging interaction.\nThere appears no baby universe and a string propagator becomes a torus with smooth surface.\nThe CDT model is generalized to include only the splitting interaction but not the merging interaction.\nIt is the Generalized Causal Dynamical Triangulation (GCDT) model and baby universes make the world surface not be smooth\\cite{ALWZ}.\nA string field theory is constructed from the CDT and its Schwinger-Dyson (S-D) equations are investigated\\cite{ALWWZ1}.\nIt is also formulated by a matrix model further\\cite{ALWWZ2}.\nRecently, the GCDT model is extended to include additional I-K type interaction and its matrix model formulation is also proposed\\cite{FSW}.\nThe S-D equation of this model has features of the non-critical string field theory.\n\nIn this note, we construct the CDT model from the matrix model for the loop gas model.\nWe assign the matrix an additional discrete index, which is interpreted as discrete time or geodesic distance.\nIn the original loop gas model, it is interpreted as space.\nThen, we apply the stochastic quantization method to this model in order to realize a string field theory of the GCDT model.\nThe main difference from other matrix model formulation is that the stochastic time does not have relation to the geodesic distance in our model.\n\\begin{figure}[t]\n\\begin{center}\n\\includegraphics [width=100mm, height=20mm]{CDT1.eps}\n\\caption{A configulation of the CDT triangulation of the 1-step propagation}\n\\label{fig:CDT1}\n\\end{center}\n\\end{figure}\n\nAt the beginning, we briefly review some fundamental nature about the CDT in the 2D space-time.\nIn this model, a torus of loop propagation is sliced to many rings with small width $a$, the minimal discrete time.\nEach ring corresponds to the 1-step time propagation of a loop, from an edge to another edge.\nLoops of the edges are composed of the links with length $a$ so that the length of loop is also discretized.\nA ring is constructed with triangles, one of whose three edges has to be a component of the loop on one side and the other two edges are connected to other triangles.\nTherefore the 1-step propagation from a loop with the length $k$ to another one with the length $m$ is composed of $k+m$ triangles, with $k$ upward ones and $m$ downward ones 
(Fig.\\ref{fig:CDT1}).\nSince the number of the configuration corresponds to the amplitude of this 1-step propagation, we define the 1-step \"two-loop function\" as\n\\bea\n\\label{eq:amplitude}\nG^{(0)} (k,m; 1) \\equiv {g^{k+m} \\over k+m}~_{k+m}{\\rm C}_k ,\n\\ena\nwhere $g$ is associated with each triangle and $_{k+m}{\\rm C}_k$ expresses the binomial coefficient.\nIt is rather convenient to define the 1-step marked propagator of the loop with length $k$ as $G^{(1)} (k,m;1) \\equiv k G^{(0)}(k,m;1)$.\nWe can construct $t$-step unmarked (and marked) propagators by piling up the 1-step unmarked (and marked) propagators\n\\bea\n\\label{eq:tstep}\nG^{(0)} (n,m;t) & = & \\sum_{k=1}^{\\infty} G^{(0)}(n,k;t-1) k G^{(0)} (k,m;1), \\nonumber \\\\\nG^{(1)} (n,m;t) & = & \\sum_{k=1}^{\\infty} G^{(1)}(n,k;t-1) G^{(1)} (k,m;1),\n\\ena\nrespectively.\nThe disc amplitude $W(n)$ is the summation of the amplitudes such that the loop with the length $n$ becomes to zero in some future time, and it is expressed as\n\\bea\n\\label{eq:discamp}\nW(n) \\equiv \\sum_{t}^{\\infty} G^{(1)} (n,0;t).\n\\ena\nThen, we expect the superposing relation,\n\\bea\n\\label{eq:discsum}\nW(n) \\equiv \\sum_{k=1}^{\\infty} G^{(1)} (n,k;1) W(k)+G^{(1)}(n,0;1).\n\\ena\n\nOriginally, the CDT model does not contain splitting interaction nor merging interaction, because these processes violate the causality.\nThis means that a saddle point on the world-sheet causes to two distinct light cones.\nHowever, we can include the splitting interaction if we impose the condition such that any separated baby loop shrinks to length zero and the mother loop propagates without interacting with it.\nIt is the GCDT model, in which the \"causality\" in a broad sense is respected.\n \nWe propose a matrix model of the modified version of the loop gas model, with a fundamental matrix $(M_{tt'})_{ij}$, where the indices $i,~j$ run from 1 to $N$.\nThe $N \\times N$ matrix $M_{tt'}$ corresponds to a link variable which connects two sites on the discrete times $t$ and $t'$ with the direction from $t$ to $t'$.\nThen we start with the action of $U(N)$ gauge invariant form, \n\\bea\n\\label{eq:actionM}\nS[M] = -g\\sqrt{N} {\\rm tr} \\sum_t M_{tt'} + {1 \\over 2}{\\rm tr} \\sum_{t,t'} M_{tt'} M_{t't} - {g \\over 3\\sqrt{N}} {\\rm tr} \\sum_{t,t',t''} M_{tt'} M_{t't''} M_{t''t},\n\\ena\nwith the partition function $Z = \\int {\\cal D} M e^{-S[M]}$.\n$M_{tt} \\equiv A_t$ is a hermitian matrix, which corresponds to a link of discrete string element soaked in one time $t$.\n$M_{t,t+1} \\equiv B_t$ and $M_{t+1,t} \\equiv B^{\\dagger}_t$ are associated with a link connecting sites on the nearest neighboring times $t$ and $t+1$.\nOtherwise $M_{tt'} = 0$ (for $t' \\neq t, t \\pm 1$).\nHence we can rewrite the partition function as $Z = \\int {\\cal D} A {\\cal D} B {\\cal D} B^{\\dagger} e^{-S[A,B,B^{\\dagger} ] } $ with the action \n\\bea\n\\label{eq:actionAB}\nS[A, B, B^{\\dagger}] &=& -g\\sqrt{N} {\\rm tr} \\sum_t A_{t} + {1 \\over 2}{\\rm tr} \\sum_t A_t^2 + {\\rm tr} \\sum_t B_t B_t^{\\dagger} \\nonumber \\\\\n & & - {g \\over 3\\sqrt{N}} {\\rm tr} \\sum_t A_t^3 - {g \\over \\sqrt{N}} {\\rm tr} \\sum_t (A_t B_t B_t^{\\dagger} + A_{t+1} B_t^{\\dagger} B_t).\n\\ena\n\n\\begin{figure}[t]\n\\begin{center}\n\\includegraphics [width=100mm, height=35mm]{CDT2.eps}\n\\caption{The triangulation assignment for the cubic terms and the square term}\n\\label{fig:CDT2}\n\\end{center}\n\\end{figure}\nTwo square terms in the first line play the role of gluing the links of two 
triangles, and three cubic terms in the second line express the triangles which are the elements of surface (Fig.\\ref{fig:CDT2}).\nWhile the last two terms composed of $A, B$ and $B^{\\dagger}$ correspond to the triangles on the surfaces of string propagation ring, the cubic term of $A$ produces a triangle soaked in one time-slice, which does not exist in the CDT model.\nThrough integrating out the non-hermitian matrices $B_t$ and $B^{\\dagger}_t$, we obtain the effective theory only with equi-temporal link matrices $A_t$.\nThe partition function becomes $Z \\equiv \\int {\\cal D} A e^{-S_{\\rm eff} [A]}$, where\n\\begin{eqnarray*}\nS_{\\rm eff} [A] = {\\rm tr} \\sum_t \\left[ -g \\sqrt{N} A_t + {1 \\over 2} A_t^2 - {g \\over 3\\sqrt{N}} A_t ^3 + {\\rm log} \\Bigl\\{ {\\bf 1} - {g \\over \\sqrt{N}} \\left( A_t {\\bf 1}_{t+1} + {\\bf 1}_t A_{t+1} \\right) \\Bigr\\} \\right].\n\\end{eqnarray*}\nIf we define a loop variable $\\phi _t (n) \\equiv {1 \\over N}{\\rm tr}({A_t \\over \\sqrt{N}})^n$ as the discrete closed string of length $n$ in time $t$, the effective action of the loop variables is written as\n\\bea\n\\label{eq:actioneff}\nS_{\\rm eff} [\\phi , A] & = & - N^2 \\sum_t \\left[ {g \\over N} {\\rm tr} {A_t \\over \\sqrt{N}} - {1 \\over 2N}{\\rm tr} \\left( {A_t \\over \\sqrt{N}} \\right) ^2 + { g \\over 3N} {\\rm tr} \\left( {A_t \\over \\sqrt{N}} \\right) ^3 \\right. \\nonumber \\\\\n & & \\left. + \\sum_{k=0}^{\\infty} \\sum_{m=0}^{\\infty} G^{(0)}(k,m;1) \\phi _t (k) \\phi _{t+1}(m) \\right],\n\\ena\nwhere $G^{(0)}(k,m;1)$ is the two-loop function of the 1-step time appeared in the CDT model.\nThe last term of eq.(\\ref{eq:actioneff}) is expressed graphically in Fig.\\ref{fig:CDT1}.\nWe may construct the $t$-step propagator as \n\\begin{eqnarray*}\nnmG^{(0)}(n,m;t) = \\langle \\phi_0 (n) \\phi _t (m) \\rangle = {1 \\over Z} \\int {\\cal D} A \\phi _0 (n) \\phi _t (m) e^{-S_{\\rm eff}[A]}, \n\\end{eqnarray*}\nThis expresses the sum over all ways of connecting $\\phi _0 (n)$ with $\\phi _t (m)$ by $t$ times piling of 1-step two-loop functions.\nThus we realize the matrix model formulation of the CDT model.\n\nNow, we extend this model to the GCDT with loop interactions by applying the stochastic quantization method to the above model.\nThe Langevin equation for a matrix variable and white noise correlation\n\\bea\n\\label{eq:langevinA}\n\\v (A_t)_{ij} = - {{\\partial S_{\\rm eff}} \\over {\\partial (A_t)_{ji}}} \\v \\t + \\v (\\xi _t)_{ij}, ~~~~~~\n\\langle \\v (\\xi _t)_{ij} \\v (\\xi _t')_{kl} \\rangle _\\xi = 2\\v \\t \\d _{tt'} \\d _{il} \\d _{jk},\n\\ena\ndescribe the evolution of the matrices on the step of the unit stochastic time $\\v \\t$.\nThey generate the Langevin equation for a loop variable \n\\bea\n\\label{eq:langevin}\n\\v \\phi _t (n) &=& \\v \\t n \\left[ g \\phi _t (n-1) - \\phi _t (n) + g \\phi _t (n+1) \n + \\sum_{k =0}^{n-2} \\phi _t ( k ) \\phi_t (n - k - 2) \\right. \\nonumber \\\\\n & & + \\sum_{k=1}^{\\infty} \\sum_{m=0}^{\\infty} G^{(1)} (k,m;1) \\phi _t (n+k-2) \\phi _{t+1} (m) \\nonumber \\\\\n & & \\left. 
+ \\sum_{k=1}^{\\infty} \\sum_{\\ell =0}^{\\infty} G^{(2)} (\\ell ,k;1) \\phi _t (n+k-2) \\phi _{t-1} (\\ell) \\right] + \\v \\zeta _t (n),\n\\ena\nwhere $G^{(1)} (k,m;1) \\equiv kG^{(0)} (k,m,t=1) $ and $G^{(2)} (k,m;1) \\equiv mG^{(0)} (k,m,t=1) $ are 1-step marked propagators with a mark on the entrance loop and the exit loop, respectively.\n$\\v \\zeta _t (n) \\equiv N^{-1-{n \\over 2}} n {\\rm tr} ( \\v \\xi _t A_t^{n-1} )$ is the constructive noise term which satisfies the correlation\n\\bea\n\\label{eq:noisecorrelation}\n\\langle \\v \\zeta _t (n) \\v \\zeta _{t'} (m) \\rangle _\\xi = 2 \\v \\t \\d _{tt'} {1 \\over N^2} nm \\langle \\phi _t (n+m-2) \\rangle _\\xi .\n\\ena\nAny observable $O(\\phi)$ composed of loop variables is deformed, under the stochastic time 1-step progress $\\v \\t$, following the variation of $\\phi _t (n)$ with the Langevin equation (\\ref{eq:langevin}) and the noise correlation (\\ref{eq:noisecorrelation}).\nThe generator of this $\\v \\t$ evolution corresponds to the Fokker-Planck (F-P) Hamiltonian $H_{\\rm FP}$,\n\\bea\n\\label{eq:FPHdef}\n\\langle \\v O(\\phi ) \\rangle _\\xi &=& \\langle \\sum_m \\v \\phi _t (m) {\\partial \\over \\partial \\phi _t (m) } O(\\phi ) + {1 \\over 2} \\sum_{m,n} \\v \\phi _t (m) \\v \\phi _t (n) {\\partial ^2 \\over \\partial \\phi _t (m) \\partial \\phi _t (n)} O(\\phi ) \\rangle _\\xi \\nonumber \\\\\n & & +{\\rm O}(\\v \\t^{3 \\over 2}) \\nonumber \\\\\n & \\equiv & - \\v \\t \\langle H_{\\rm FP} O(\\phi ) \\rangle _{\\xi}.\n\\ena\nWe interpret $\\phi _t (n)$ and $\\pi _t (n) \\equiv {\\partial \\over \\partial \\phi _t (n) }$ as the operators for creation and annihilation of the loop with length $n$ at time $t$, respectively.\nThey fulfill the following commutation relation:\n\\bea\n\\label{eq:commutation}\n[\\pi _t (n), \\phi _{t'} (m) ] = \\d _{tt'} \\d _{nm}.\n\\ena\nThen the F-P Hamiltonian is expressed as\n\\bea\n\\label{eq:FP1}\nH_{\\rm FP} = -{1 \\over N^2} \\sum_t \\sum_{n=1}^{\\infty} n L_t (n-2) \\pi _t (n),\n\\ena\nwhere $L_t (n)$ is defined by\n\\bea\n\\label{eq:generator}\nL_t (n) &=& -N^2 \\Biggl[ g \\phi _t (n+1) - \\phi _t (n+2) + g \\phi _t (n+3) \\Biggr. \\nonumber \\\\\n& & +\\sum_{k=1}^{\\infty} \\sum_{m=0}^{\\infty} G^{(1)} (k,m;1) \\phi _t (n+k) \\phi _{t+1} (m) \n + \\sum_{k=1}^{\\infty} \\sum_{\\ell =0}^{\\infty} G^{(2)} (\\ell ,k;1) \\phi _t (n+k) \\phi _{t-1} (\\ell) \\nonumber \\\\\n& & \\left. 
+ \\sum_{k=0}^n \\phi _t (k) \\phi _t (n-k) + {1 \\over N^2} \\sum_{k=1}^{\\infty} k \\phi _t (n+k) \\pi _t (k) \\right],\n\\ena\nand it satisfies the Virasoro algebra\n\\bea\n\\label{eq:virasoro}\n[L _t (n), L _{t'} (m) ] = (n-m)\\d _{tt'} L _t (n+m).\n\\ena\n\nIn order to take the continuum limit we introduce the minimum length of this matrix model $a$.\nThe continuum limit is realized by taking $a$ to zero simultaneously with $N$ to infinity.\nIt is called the double scaling limit.\nAccording to the CDT model, we set the continuum loop length as $ L \\equiv an $ and time as $T \\equiv at $\\cite{AL}.\nWe also define the cosmological constant $\\Lambda$ by ${1 \\over 2}e^{-{1 \\over 2}a^2 \\Lambda} = g$.\nHere we introduce two scaling parameters $D$ and $D_N$.\nSince the commutation relation of the continuum field operators satisfy,\n\\bea\n\\label{eq:commutator}\n[\\Pi (L;T), \\Phi (L';T') ] = \\d (T-T') \\d (L-L'),\n\\ena\nthe scaling of the loop field operators can be described as $\\Phi (L;T) \\equiv a^{-{1 \\over 2}D} \\phi _t (n) $ and $\\Pi (L;T) \\equiv a^{{1 \\over 2}D-2} \\pi _t (n)$ by using $D$.\nTo keep the effect of the first term of the last line in eq.(\\ref{eq:generator}), the splitting interaction, we fix the scaling of the infinitesimal stochastic time as\n\\bea\n\\label{eq:stochastic}\nd \\t \\equiv a^{{1 \\over 2}D-2} \\v \\t.\n\\ena\nHence the existence of the continuum stochastic time requires $D>4$.\nThe terms in the second line of eq.(\\ref{eq:generator}) express the characteristic interaction of this model, which corresponds to the I-K type interaction.\nThus we maintain these terms by redefining the 1-step propagator as $\\tilde{G}^{(1)}(L_1, L_2 ; a) \\equiv a^{-1} G^{(1)} (k,m;1)$, which gives the expression using the continuum lengths.\nWe define the string coupling constant as $G_{\\rm st} \\equiv {1 \\over N^2} a^{D_N}$ with $D_N$.\nThen we obtain the continuum limit of the F-P Hamiltonian ${\\cal H}_{\\rm FP}$ by $\\v \\t H_{\\rm FP} \\equiv d \\t {\\cal H}_{\\rm FP}$, and it is written as\n\\bea\n\\label{eq:FPhamiltonian}\n{\\cal H}_{\\rm FP} & = & \\int dT \\int_0^{\\infty} dL L \\left[ a^{-{1 \\over 2}D+3} {1 \\over 2} \\left( {{\\partial ^2} \\over {\\partial L ^2}} - \\Lambda \\right) \\Phi (L;T) \\right. \\nonumber \\\\\n & & + \\int _0^{\\infty} dL_1 \\int _0^{\\infty} dL_2 \\tilde G ^{(1)} (L_1, L_2; a) \\Phi (L+L_1;T) \\Phi (L_2; T+a) \\nonumber \\\\\n & & + \\int _0^{\\infty} dL_1 \\int _0^{\\infty} dL_2 \\tilde G ^{(2)} (L_2, L_1; a) \\Phi (L+L_1;T) \\Phi (L_2; T-a) \\nonumber \\\\\n & & + \\int _0^L dL_1 \\Phi (L_1 ; T) \\Phi (L - L_1 ;T) \\nonumber \\\\\n & & \\left. 
+ a^{-D_N -D+1} G_{\\rm st} \\int _0^{\\infty} dL_1 L_1 \\Phi (L+L_1; T) \\Pi (L_1; T) \\right] \\Pi(L ; T).\n\\ena\nThe first term on the r.h.s is the potential term, which means the propagation of a loop in an equi-temporal slice.\nWe have to remember that any propagation of the loop in one equi-temporal slice is not contained in the GCDT model.\nHence we expect this term to scale out in the continuum limit.\nThis fact requires $D<6$.\nIt should be noted that, thanks to the scaling of the cosmological constant, after summing up the first three terms in eq.(\\ref{eq:generator}) the scaling of the propagation terms are enhanced two order higher compared with that of the original terms.\nThis enhancement makes the above restriction $D<6$ for $D$ consistent with another restriction $D>4$ from eq.(\\ref{eq:stochastic}).\n\nThe next two terms are I-K type interactions which are similar terms appeared in the non-critical string field theory model (Fig.\\ref{fig:CDT3}(b))\\cite{IK}.\nThe second and the third terms cause the annihilation of a loop with the length $L$ and creation of a loop with the length $L+L_1$ at the same time $T$.\nThey also create another loop with the length $L_2$ at the infinitesimal future time $T+a$ and infinitesimal past time $T-a$, respectively.\nThen the lengths $L_1$ and $L_2$ are connected by the infinitesimal 1-step marked propagator.\nThe fourth term is the splitting interaction, which annihilates a loop with length $L$ and create two loops with the sum of their lengths $L$, simultaneously.\nThe last term expresses the merging interaction, which annihilates two loops and create one loop whose length is equal to the sum of the two annihilated loops.\n\\begin{figure}[t]\n\\begin{center}\n\\includegraphics [width=100mm, height=25mm]{CDT3.eps}\n\\caption{(a) Splitting interaction and (b) I-K type interaction. The entrance loop propagates upward to the exit loop as the mother universe, while the baby universe must disappear to the vacuum before long. 
On the surface of the propagating world-sheet, many baby universes of the two types are attached as quantum effects.}\n\\label{fig:CDT3}\n\\end{center}\n\\end{figure}\n\nWhile the splitting interaction is permitted in the GCDT model, the merging interaction should be forbidden because of the \"causality\" in a broad sense.\nFor this purpose we restrict the scaling parameter $D_N$ to satisfy\n\\bea\n\\label{eq:DN}\n D_N < -D+1.\n\\ena\nCombining this condition with $4<D<6$, we find that for $D_N \\le -5$ it is always satisfied, while for $D_N \\ge -3$ it is violated.\nWhen $-5 \\le D_N \\le -3$, the condition of eq.(\\ref{eq:DN}) must be taken into account.\n\nThe interpretation of the stochastic process is the following.\nIf the stochastic process is not included, we have the exact CDT model.\nEach stochastic time step $\\v \\t$ produces one baby universe in either of two ways, that is, the ordinary splitting interaction or the I-K type interaction (Fig.\\ref{fig:CDT3}).\nWhile the infinitesimal time or geodesic distance scales as $dT = a$, the infinitesimal stochastic time scales by eq.(\\ref{eq:stochastic}) with $4<D<6$.\nFor $D>6$, the loop propagator contains too many stochastic processes, dominated by the propagation in the equi-temporal slice.\nThe string interactions are suppressed except for the merging interaction, whose probability depends on the scaling parameter $D_N$.\nHence, we can imagine that the loop propagator has too much length-fluctuation in the time evolution.\nFor $D=6$, each equi-temporal slice contains fertile stochastic processes including propagation, as well as splitting and I-K type interactions.\n\\footnote{In the S-D equation, as we will discuss later, an additional potential term concerning propagation $L ({\\partial ^2 \\over \\partial L^2}-\\Lambda ){W}(L)$, that is characteristic in the CDT model, survives in the scaling limit.}\\\nHowever, it is not desirable from the viewpoint of our model.\n\nFinally, we derive the S-D equation from the continuum version of the Langevin equation.\nThe disc amplitude is the expectation value of a loop variable, which propagates and eventually shrinks to nothing.\nIt is expressed in the continuum as\n\\begin{eqnarray*}\nW(L) \\equiv \\langle \\Phi (L;T) \\rangle \\equiv \\int _0^{\\infty} dT \\tilde{G}^{(1)} (L,0;T).\n\\end{eqnarray*}\nWith the help of eq.(\\ref{eq:discsum}), at the level of the expectation value we can expect the relation \n\\bea\n\\label{eq:disc}\n\\langle \\int _0^{\\infty} dL_2 \\tilde{G}^{(1)}(L_1,L_2;a) \\Phi (L_2; T+a) \\rangle = W(L_1).\n\\ena\nFrom this the S-D equation is derived as follows,\n\\bea\n\\label{eq:SD}\n\\int _0^L dL_1 W(L_1) W(L-L_1) +2 \\int _0^{\\infty} dL_1 W(L+L_1)W(L_1) =0. 
\n\\ena\nIn terms of the Laplace-transformed variable $\\tilde{W}(z) \\equiv \\int _0^{\\infty} dL e^{-Lz} W(L) $, eq.(\\ref{eq:SD}) is transformed to its expression in Laplace space.\nWith the exchange $z \\rightarrow -z$, we obtain another equation for $\\tilde{W}(-z)$.\nThen, using the above two equations, we obtain the S-D equation as\n\\bea\n\\label{eq:LSD}\n\\tilde{W}(z)^2 +2 \\tilde{W}(z) \\tilde{W}(-z) +\\tilde{W}(-z)^2 = \\mu ,\n\\ena\nwhere $\\mu$ is some constant.\nThis S-D equation takes the same form as that of the non-critical closed string field theory, except for the coefficient of the second term on the l.h.s.\nThere it is replaced with a coefficient $2{\\rm cos} \\pi p_0 $, where the background momentum $p_0 ={1 \\over m}$ corresponds to the central charge $c=1-{6 \\over m(m+1)}$.\nThe last term of the effective action eq.(\\ref{eq:actioneff}) is rewritten as $\\sum_{t,t'} C_{tt'} \\sum_{k,m}^{\\infty} G^{(0)}(k,m;1) \\phi _t (k) \\phi _{t'}(m) $ with an adjacency matrix $C_{tt'} \\equiv \\d _{t',t+1} + \\d _{t',t-1} $.\nIf we adopt a twisted adjacency matrix $C^{(p_0)}_{tt'}= e^{i\\pi p_0}\\d _{t',t+1} + e^{-i\\pi p_0} \\d _{t',t-1} $ as in ref.\\cite{Kos}, instead of our choice $C_{tt'} $, we can express the loop bilinear term of the effective action eq.(\\ref{eq:actioneff}) as\n\\begin{eqnarray*}\n 2 {\\rm cos}\\pi p_0 \\sum _{t} \\sum_{k=0}^{\\infty} \\sum_{m=0}^{\\infty} G^{(0)}(k,m;1) \\phi _t (k) \\phi _{t+1}(m).\n\\end{eqnarray*}\nWe then obtain exactly the S-D equation of the non-critical string field theory, with the right coefficient for the second term on the l.h.s. of eq.(\\ref{eq:LSD}).\n\nIn conclusion, we have proposed the matrix model formulation to construct the 2D GCDT model.\nUsing the stochastic quantization approach and taking a continuum limit, we obtain the non-critical string field theory.\nThe scaling parameters $D$ and $D_N$ in the double scaling limit are fixed as eq.(\\ref{eq:DN}) with $4<D<6$.\n\nEven for an $H(z)>0$ (expanding universe) deviating from $H_{\\Lambda \\rm CDM}(z)$, strict observational constraints from CMB still require\n\\begin{equation}\n\\int_0^{z_*}\\frac{\\dd{z}}{H_{\\Lambda \\rm CDM}(z)}\\approx \\int_0^{z_*}\\frac{\\dd{z}}{H(z)}, \\label{eq:approx}\n\\end{equation}\ncf., $D_M(z_*)=13872.83\\pm 25.31\\,\\rm Mpc$ ($\\Lambda$CDM Planck18).\nFor simplicity, we will assume\nthe approximation in Eq.~\\eqref{eq:approx} to be exact, and comment on the approximate case when necessary. Now, we define the deviation of a cosmological model from $\\Lambda$CDM in terms of its Hubble radius, $H(z)^{-1}$, as follows:\n\\begin{equation}\n\\psi(z)\\equiv\\frac{1}{H(z)}-\\frac{1}{H_{\\Lambda \\rm CDM}(z)}.\\label{eqn:devdef}\n\\end{equation}\nThen, we have\n\\begin{equation}\n D_{M}(z_*)=c\\int_0^{z_*}\\dd{z}\\qty[\\frac{1}{H_{\\Lambda \\rm CDM}(z)}+\\psi(z)],\\label{eq:exact}\n\\end{equation}\nand consequently, the exact version of Eq.~\\eqref{eq:approx} implies\n\\begin{equation}\n\\Psi(z_*)\\equiv\\int_0^{z_*}\\psi(z) \\dd{z}=0.\\label{eq:int}\n\\end{equation}\nOur assumption that the pre-recombination universe is accurately described by $H_{\\Lambda \\rm CDM}(z)$, viz., $H(z\\geq z_*)=H_{\\Lambda \\rm CDM}(z\\geq z_*)$, implies another condition on $\\psi(z)$, that is, \n\\begin{equation}\n\\psi(z\\geq z_*)=0. 
\\label{eq:prereccond}\n\\end{equation}\n\nThis mathematical framework allows one to naturally classify a family of $H(z)$ functions which can deviate, even significantly, from $H_{\\Lambda \\rm CDM}(z)$, but still have the same $D_M(z_*)$ the $\\Lambda$CDM model has, ensuring basic consistency with the CMB measurements at the background level (one might want to also consider the constraints on $\\rho_{\\rm m0}$ and $\\rho_{\\rm r0}$ from CMB). This family is described by \n\\begin{equation}\nH(z)=\\frac{H_{\\Lambda \\rm CDM}(z)}{1+\\psi(z)H_{\\Lambda \\rm CDM}(z)},\\label{eq:hz}\n\\end{equation}\nwhere $\\psi(z)$ satisfies the conditions introduced in~\\cref{eq:int,eq:prereccond}. We notice from this equation that introduction of the condition $-H^{-1}_{\\Lambda \\rm CDM}(z)<\\psi(z)<\\infty$ ensures that, in the past ($z>0$), $H(z)$ never diverges (except at the Big Bang) and the universe has always been expanding. Also, on top of all these conditions, let us demand \n\\begin{equation}\n\\psi(z=0)=0\\label{eq:present}\n\\end{equation}\nsince we know the universe at $z\\sim0$ is well described by the standard $\\Lambda$CDM model \\cite{Planck:2018vyg,Alam:2020sor,DES:2021wwk}.\n\nWe notice that \\cref{eq:int,eq:prereccond,eq:present} describe characteristic properties of functions that are known as \\textit{wavelets} where \\cref{eq:int} is true for wavelets that satisfy the \\textit{admissibility condition} \\cite{Chui:1992}. Wavelets are oscillatory (not necessarily periodic) functions that are well-localized, i.e., they have compact support or they vanish approximately outside of a compact set of its parameters. With such boundary conditions that the function should absolutely or approximately vanish outside of certain bounds, \\cref{eq:int} requires that the function oscillates at least once if it does not vanish everywhere; because, say $\\psi(z)<0$ for a certain value of $z$, this integral can vanish only if $\\psi(z)>0$ at another value of $z$, hence the oscillation. Note that, for a continuous $\\psi(z)$, this argument also implies that there exists at least one value of $z$ in the interval $(0,z_*)$ for which $\\psi=0$; this corresponds to the Rolle's theorem, which in our particular case states that the conditions $\\Psi(0)=0$ and $\\Psi(z_*)=0$ imply the existence of a $z_p\\in(0,z_*)$ for which $\\psi(z_p)=0$. \\textit{Thus, the deviations from the standard $\\Lambda$CDM model's Hubble radius, $\\psi(z)$, must be described by admissible wavelets, i.e., must have a wiggly (wave-like) behaviour characterized by the conditions given in \\cref{eq:int,eq:prereccond,eq:present}.}\n\nWe proceed with showing explicitly that the characteristics of $\\psi(z)$ described above, corresponds to a wiggly behaviour for $H(z)$ with respect to $H_{\\Lambda \\rm CDM}(z)$ \\textit{in a particular way}, namely, not necessarily wavelet type but such that $\\psi(z)$ is an admissible wavelet; to see this, we define a unitless parameter $\\delta(z)$, namely, the fractional deviation from $H_{\\Lambda\\rm CDM}(z)$, as follows;\n\\begin{equation}\n\\label{eqn:deltaH}\n\\delta(z)\\equiv \\frac{H(z)-H_{\\Lambda \\rm CDM}(z)}{H_{\\Lambda \\rm CDM}(z)}=-\\frac{\\psi(z)H_{\\Lambda \\rm CDM}(z)}{1+\\psi(z)H_{\\Lambda \\rm CDM}(z)}.\n\\end{equation}\nWe see that if we demand an ever-expanding universe $H(z)>0$, we should set $\\delta(z)>-1$. And, in what follows, unless otherwise is stated, we continue our discussions with the assumption that $\\delta(z)>-1$. 
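Before specializing to small deviations, a minimal numerical sketch may make the above construction concrete. It is purely illustrative: it assumes the Planck 2018 values $H_0=67.32{\\rm \\,km\\, s^{-1}\\, Mpc^{-1}}$ and $\\Omega_{\\rm m0}=0.3158$ quoted later in the text, neglects radiation, and uses a toy antisymmetric (Gaussian-derivative) wavelet with an arbitrarily chosen amplitude; it only checks that such a $\\psi(z)$ leaves $D_M(z_*)$ of \\cref{eq:exact} essentially unchanged while keeping $\\delta(z)>-1$:
\\begin{verbatim}
import numpy as np
from scipy.integrate import quad

c = 299792.458                          # km/s
H0, Om0, zstar = 67.32, 0.3158, 1090.0  # illustrative values; radiation neglected
H_lcdm = lambda z: H0 * np.sqrt(Om0*(1.0 + z)**3 + 1.0 - Om0)

# Toy admissible wavelet: dimensionless amplitude A times 1/H0,
# antisymmetric about z_dagger = 2, so its integral (Eq. int) nearly vanishes.
A, beta, zd = 0.05, 2.0, 2.0
psi = lambda z: (A/H0) * (z - zd) * np.exp(-beta*(z - zd)**2)

H_new = lambda z: H_lcdm(z) / (1.0 + psi(z)*H_lcdm(z))   # Eq. (hz)
delta = lambda z: H_new(z)/H_lcdm(z) - 1.0                # Eq. (deltaH)

def D_M(H):  # comoving distance; split so quad resolves the localized feature
    return c*(quad(lambda z: 1.0/H(z), 0.0, 10.0)[0]
              + quad(lambda z: 1.0/H(z), 10.0, zstar)[0])

print(D_M(H_lcdm), D_M(H_new))   # differ by far less than the 25 Mpc CMB error
zg = np.linspace(0.0, 10.0, 2001)
print(delta(zg).min(), delta(zg).max())  # per-cent-level wiggles, always > -1
\\end{verbatim}
Wavelets with other centres, widths, or (small) amplitudes behave in the same way as long as \\cref{eq:int,eq:prereccond,eq:present} are at least approximately respected.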
For small deviations from $\\Lambda$CDM, i.e., $|\\delta(z)|\\ll1$, we can also write\n\\begin{equation}\n\\label{eqn:sdevH}\n\\delta(z)\\approx-\\psi(z)H_{\\Lambda \\rm CDM}(z).\n\\end{equation}\nThe small deviation region is quite important to study; because, despite its shortcomings, $\\Lambda$CDM is still the simplest model to explain the cosmological observations with remarkable accuracy. Particularly, in the late universe, the small deviation approximation is robustly imposed by many cosmological probes that require $|\\delta(z)|\\ll1$ for ${z\\lesssim2.5}$; even the largest discrepancies between the $H_{\\Lambda \\rm CDM}(z)$ of the Planck 2018 $\\Lambda$CDM~\\cite{Planck:2018vyg} and observed $H(z)$ values, viz., $H_0=73.04 \\pm 1.04$ km s${}^{-1}$ Mpc${}^{-1}$ (the SH0ES $H_0$ measurement~\\cite{Riess:2021jrx}) and $H(2.33)=224\\pm8{\\rm \\,km\\, s^{-1}\\, Mpc^{-1}}$ (the Ly-$\\alpha$-quasar data~\\cite{eBOSS:2020yzd}) correspond to $\\delta_0\\sim0.08$ and $\\delta(z=2.33)\\sim-0.05$, respectively. The form of~\\cref{eqn:sdevH} makes it even easier to see that $H(z)$ will have wiggles; since $H_{\\Lambda \\rm CDM}(z)$ is a monotonically varying function of $z$ and strictly positive, when $\\psi(z)$ changes sign (as it must at least once), this sign change (around which the small deviation condition is clearly satisfied) is directly reflected on $\\delta(z)$, producing a wiggle. Furthermore, respecting the successes of the $\\Lambda$CDM model, one may even wish to impose $|\\delta(z)|\\ll1$ at all times. In this case, since $H_{\\Lambda \\rm CDM}(z)$ monotonically grows with increasing redshift, one should have $\\psi(z)\\to0$ fast enough with $z\\to z_*$, such that the small deviation condition $\\abs{\\psi(z)H_{\\Lambda \\rm CDM}(z)}\\ll1$ is not broken.\n\nHaving said that, note the interesting extra behaviours apparent from the full form of $\\delta(z)$ in~\\cref{eqn:deltaH}: first, as mentioned before, ${\\psi(z)=-H^{-1}_{\\Lambda \\rm CDM}(z)}$ results in a singular $H(z)$ function and is not allowed for finite $z$ values; second, while the previous condition might seem as it requires either one of the confinements ${\\psi(z)>-H^{-1}_{\\Lambda \\rm CDM}(z)}$ or ${\\psi(z)<-H^{-1}_{\\Lambda \\rm CDM}(z)}$ at all times, in principle, $\\psi(z)$ can be discontinuous and is not necessarily confined to one of these regions; third, \\cref{eqn:sdevH} indicates that $\\psi(z)<0$ corresponds to $\\delta(z)>0$, yet, for a region in which ${\\psi(z)<-H^{-1}_{\\Lambda \\rm CDM}(z)}$, we have $\\delta(z)<0$ despite having $\\psi(z)<0$, but, looking at~\\cref{eqn:devdef}, such a region also corresponds to an extreme case with $H(z)<0$ and the universe would have gone through a contracting phase.\n\n\nFinally, it is worth noting that due to the wiggly behaviour of the wavelets, similar to $H(z)$, the other important kinematical parameters in cosmology, the deceleration parameter $q=-1+\\frac{{\\rm d}}{{\\rm d}t}\\left[H^{-1}(z)\\right]$ and the jerk parameter $j=\\frac{{\\rm d}^3a\/{\\rm d}t^3}{aH^3(z)}$ (which is simply $j_{\\Lambda\\rm CDM}=1$ for $\\Lambda$CDM) will also exhibit wiggly behaviors; the deceleration parameter will oscillate around its usual evolution in $\\Lambda$CDM, $q_{\\Lambda\\rm CDM}(z)$, as can be immediately seen from\n\\begin{equation}\n q(z)=q_{\\Lambda\\rm CDM}(z)+\\frac{{\\rm d}\\psi(z)}{{\\rm d}t},\n\\end{equation}\nobtained by using \\cref{eqn:devdef} in the definition of $q(z)$. 
And, the jerk parameter will oscillate around its constant $\\Lambda$CDM value of unity. These behaviours are reminiscent of the non-parametric reconstructions in Refs.~\\cite{Mukherjee:2020vkx,Mukherjee:2020ytg}.\n\n\n\\section{Wiggles in dark energy density descended from the wavelets}\n\nIn the late universe, where dust and DE are the only relevant components, we can treat $H(z)$ as an extension of ${H_{\\Lambda \\rm CDM}}(z)$ with the same matter density $\\rho_{\\rm m}(z)$ but with a minimally interacting dynamical DE that explains the deviation of $\\delta(z)$ from zero; hereby, we can write the DE density as $\\rho_{\\rm DE}(z)\\equiv3H^2(z)-\\rho_{\\rm m}(z)$, viz.,\n\\begin{equation}\n\\begin{aligned}\n\\label{eqn:drho}\n\\rho_{\\rm DE}(z)&=3H^2_{\\Lambda \\rm CDM}(z)[1+\\delta(z)]^2-\\rho_{\\rm m0}\\qty(1+z)^3\\\\\n&=\\rho_{\\rm DE0}+3H^2_{\\Lambda \\rm CDM}(z)\\delta(z)[2+\\delta(z)],\n\\end{aligned}\n\\end{equation}\nfrom which we can write the deviation of the DE density from $\\Lambda$, i.e., ${\\Delta\\rho_{\\rm DE}(z)\\equiv\\rho_{\\rm DE}(z)-\\rho_{\\Lambda}}$ (where we have ${\\rho_\\Lambda=\\rho_{\\rm DE0}}$), as follows:\n\\begin{equation}\n\\Delta\\rho_{\\rm DE}(z)=3H^2_{\\Lambda \\rm CDM}(z)\\delta(z)[2+\\delta(z)].\\label{eq:deltarho}\n\\end{equation}\nFor small deviations from $\\Lambda$CDM, these read\n\\begin{align}\n \\rho_{\\rm DE}(z)&\\approx \\rho_{\\rm DE0}+6\\delta(z)H^2_{\\Lambda \\rm CDM}(z),\\\\\n \\Delta\\rho_{\\rm DE}(z)&\\approx6\\delta(z)H^2_{\\Lambda \\rm CDM}(z),\n\\end{align}\ncorrespondingly. Thus, because $\\delta(z)$ is oscillatory around zero, $\\Delta\\rho_{\\rm DE}(z)$ will also be oscillatory around zero and its small oscillations will be scaled by $6H^2_{\\Lambda \\rm CDM}(z)$; in other words, the wiggles in $H(z)$ are implied by wiggles in $\\rho_{\\rm DE}(z)$ scaled by $6 H^2_{\\Lambda \\rm CDM}(z)$. That is, observational fitting\/non-parametric reconstruction procedures predicting wiggles in $H(z)$ will predict corresponding wiggles in $\\rho_{\\rm DE}(z)$ reconstructions.\n\nEven if our assumption that the pre-recombination universe is not modified with respect to the standard cosmology [implying~\\cref{eq:prereccond}] is taken to be approximate, for $z>z_*$ the fluctuations in the DE density should be much smaller than the matter energy density, ${\\abs{\\Delta\\rho_{\\rm DE}(z)\/\\rho_{\\rm m}(z)}\\ll1}$, in the matter dominated epoch, and much smaller than the radiation energy density, ${\\abs{\\Delta\\rho_{\\rm DE}(z)\/\\rho_{\\rm r}(z)}\\ll1}$, in the radiation dominated epoch. Since for both of these epochs the relevant energy densities can be well-approximated by the critical energy density of $\\Lambda$CDM, $\\rho_{\\rm c}(z)\\equiv3H^2_{\\Lambda\\rm CDM}(z)$, in this approximate case for $z>z_*$, instead of $\\Delta \\rho_{\\rm DE}(z)=0$, we can write the more relaxed condition\n\\begin{equation}\n \\abs{\\frac{\\Delta\\rho_{\\rm DE}(z)}{\\rho_{\\rm c}(z)}}=\\abs{\\delta(z)[2+\\delta(z)]}\\ll1. \\label{eq:pert}\n\\end{equation}\nThis is satisfied for both $\\delta(z)\\sim 0$ (small deviation from $\\Lambda$CDM) and $\\delta(z)\\sim-2$ (corresponding to a contracting universe), but only the former is of interest to us. 
Since~\\cref{eq:pert} requires small $|\\delta(z)|$ to be satisfied, it can be rewritten as\n\\begin{equation}\n\\abs{\\frac{\\Delta\\rho_{\\rm DE}(z)}{\\rho_{\\rm c}(z)}}\\approx2\\abs{\\delta(z)}\\approx2\\abs{-\\psi(z)H_{\\Lambda \\rm CDM}(z)}\\ll1, \\label{eq:rapidity}\n\\end{equation}\nfrom which we immediately see that $\\psi(z)$ should vanish rapid enough with increasing $z$ at large redshifts so that our assumption of almost unmodified pre-recombination physics holds.\n\nWe calculate from Eq.~\\eqref{eqn:drho} that the DE density passes below zero, $\\rho_{\\rm DE}(z)<0$, for\n\\begin{equation}\n\\delta(z)<-1+\\sqrt{1-\\frac{\\rho_{\\rm DE0}}{3H^2_{\\Lambda \\rm CDM}(z)}},\n\\end{equation}\nwhich can also be written as follows:\n\\begin{equation}\n\\delta(z)<-1+\\sqrt{1-\\frac{\\Omega_{\\rm DE0}}{\\Omega_{\\rm DE0}+(1-\\Omega_{\\rm DE0})(1+z)^3}}.\n\\end{equation}\nAccordingly, using Planck 2018 best fit $\\Lambda$CDM values $\\Omega_{\\rm m0}=0.3158$ and $H_0=67.32{\\rm \\,km\\, s^{-1}\\, Mpc^{-1}}$~\\cite{Planck:2018vyg}, it turns out that $\\delta (2.33)<-0.028$, i.e., \n$\\Delta H(2.33)\\equiv{H(2.33)-H_{\\Lambda\\rm CDM}(2.33)<-6.65{\\rm \\,km\\, s^{-1}\\, Mpc^{-1}}}$ (corresponding to $H(2.33)\\lesssim230.536{\\rm \\,km\\, s^{-1}\\, Mpc^{-1}}$), requires the DE density to yield negative values. Note that $H(2.33)=228\\pm7{\\rm \\,km\\, s^{-1}\\, Mpc^{-1}}$ from the Ly-$\\alpha$-Ly$\\alpha$ and $H(2.33)=224\\pm8{\\rm \\,km\\, s^{-1}\\, Mpc^{-1}}$ from the Ly-$\\alpha$-quasar data~\\cite{eBOSS:2020yzd}. \\textit{Thus, considering their mean values, fitting these data perfectly, requires a negative DE density at $z\\sim 2.3$.} In this sense, the function $\\delta(z)$ can also be used as a diagnostic to test for a negative DE density if all modifications to Planck $\\Lambda$CDM are attributed to DE.\n\nLastly, the continuity equation for the DE, viz., $\\dot\\rho_{\\rm DE}(z)+3H(z)[\\rho_{\\rm DE}(z)+p_{\\rm DE}(z)]=0$, implies $\\varrho_{\\rm DE}(z)=\\frac{1+z}{3} \\rho_{\\rm DE}'(z)$ for the inertial mass density, $\\varrho_{\\rm DE}(z)\\equiv\\rho_{\\rm DE}(z)+p_{\\rm DE}(z)$, and $w_{\\rm DE}(z)=-1+\\frac{1+z}{3} \\rho_{\\rm DE}'(z)\/\\rho_{\\rm DE}(z)$ for the corresponding EoS parameter $w_{\\rm DE}(z)\\equiv p_{\\rm DE}(z)\/\\rho_{\\rm DE}(z)$, where $\\rho_{\\rm DE}(z)$ is the DE density as defined in~\\eqref{eqn:drho}, $p_{\\rm DE}(z)$ is its pressure, and $'\\equiv\\dv{z}$. Accordingly, we have\n\\begin{equation}\n\\begin{aligned}\n \\varrho_{\\rm DE}(z)&=2(1+z)H_{\\Lambda\\rm CDM}^2\\qty[\\frac{H'_{\\Lambda\\rm CDM}}{H_{\\Lambda\\rm CDM}}\\delta(\\delta+2)+\\delta'(\\delta+1)]\\\\\n &\\approx2(1+z)H^2_{\\Lambda\\rm CDM}\\qty[2\\frac{H'_{\\Lambda\\rm CDM}}{H_{\\Lambda\\rm CDM}}\\delta+\\delta'],\n\\end{aligned}\n\\end{equation}\nfor the DE inertial mass density, and\n\\begin{equation}\n\\begin{aligned}\\label{eq:eos}\n w_{\\rm DE}(z)&=-1+\\frac{2(1+z)\\qty[\\frac{H'_{\\Lambda\\rm CDM}}{H_{\\Lambda\\rm CDM}}\\delta(\\delta+2)+\\delta'(\\delta+1)]}{3\\qty[\\frac{\\rho_{\\rm DE0}}{\\rho_{\\rm c}}+\\delta(2+\\delta)]}\\\\\n &\\approx-1+\\frac{2(1+z)\\qty[2\\frac{H'_{\\Lambda\\rm CDM}}{H_{\\Lambda\\rm CDM}}\\delta+\\delta']}{3\\qty[\\frac{\\rho_{\\rm DE0}}{\\rho_{\\rm c}}+2\\delta]},\n\\end{aligned}\n\\end{equation}\nfor the corresponding DE EoS parameter; in these two equations, the second lines are for small deviations from $\\Lambda$CDM. 
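Before moving on, the negative-density threshold quoted above can be cross-checked numerically (a sketch only, adopting the Planck 2018 best-fit values given in the text and neglecting radiation); the same value of $\\delta$ marks where the pole of $w_{\\rm DE}$ in \\cref{eq:eos} would sit if the DE density crosses zero at $z=2.33$:
\\begin{verbatim}
import numpy as np

# Threshold on delta(z) for rho_DE(z) < 0 at z = 2.33, cf. Eq. (drho);
# Planck 2018 best-fit values quoted in the text, radiation neglected.
H0, Om0, z = 67.32, 0.3158, 2.33
Ode0 = 1.0 - Om0
H_lcdm = H0 * np.sqrt(Om0*(1.0 + z)**3 + Ode0)
delta_thr = -1.0 + np.sqrt(1.0 - Ode0/(Ode0 + Om0*(1.0 + z)**3))
print(delta_thr)             # ~ -0.028
print(delta_thr * H_lcdm)    # Delta H ~ -6.6 km/s/Mpc
\\end{verbatim}
Both numbers are consistent with the values quoted above.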
Notice that, in the exact form of~\\cref{eq:eos}, $w_{\\rm DE}(z)$ blows up if $\\rho_{\\rm DE0}\/\\rho_{\\rm c}(z)=-\\delta(z)[2+\\delta(z)]$ is satisfied for a redshift $z_{\\rm v}$. Comparing with \\cref{eqn:drho}, we see that this condition is equivalent to $\\rho_{\\rm DE}(z_{\\rm v})=0$; indeed, if the DE obeys the continuity equation as it does in this case, a vanishing energy density necessitates such a singularity~\\cite{Ozulker:2022slu}. Such infinities in the EoS parameter are not problematic from the fundamental physics point of view; instead, they hint that the DE density is perhaps an effective one originating from a modified gravity model.\n\n\\section{Wiggles in Newton's ``constant\" descended from the wavelets}\n\nAlternatively, we can attribute the deviation of $H(z)$ from $H_{\\Lambda \\rm CDM}(z)$ to deviations in the gravitational coupling strength, $G_{\\rm eff}(z)$, from Newton's gravitational constant $G_{\\rm N}$ measured locally. We have, as usual,\n\\begin{equation}\n3H^2_{\\Lambda \\rm CDM}(z)=8\\pi G_{\\rm N}\\qty[\\rho_{\\rm m0}(1+z)^3+\\rho_{\\rm r0}(1+z)^4+\\rho_\\Lambda],\n\\end{equation}\nwhere the constant value $\\rho_\\Lambda$ is either the usual vacuum energy density or $\\rho_\\Lambda=\\frac{\\Lambda}{8\\pi G_{\\rm N}}$. We can write the Hubble parameter of the new model as\n\\begin{equation}\n3H^2(z)=8\\pi G_{\\rm eff}(z)\\qty[\\rho_{\\rm m0}(1+z)^3+\\rho_{\\rm r0}(1+z)^4+\\rho_\\Lambda],\n\\end{equation}\nfrom which, using the definition in~\\cref{eqn:deltaH},\n\\begin{equation}\n\\begin{aligned}\n\\label{eqn:Geff}\nG_{\\rm eff}(z)=\\qty[1+\\delta(z)]^{2}G_{\\rm N}\n\\end{aligned}\n\\end{equation}\ndirectly follows. Note that $G_{\\rm eff}(z)$ is also a wiggly function, driven by the wiggles of $\\delta(z)$; however, $G_{\\rm eff}(z)$ equals $G_{\\rm N}$ when $\\psi(z)=0$, and thereby, $G_{\\rm eff}(z=0)=G_{\\rm eff}(z>z_*)=G_{\\rm N}$ from~\\cref{eq:prereccond,eq:present}. And,\nfor small deviations from $\\Lambda$CDM, \\cref{eqn:Geff} reads\n\\begin{equation}\n\\begin{aligned}\nG_{\\rm eff}(z)\\approx [1+2\\delta(z)]G_{\\rm N}.\n\\end{aligned}\n\\end{equation}\nNote that, if we are to treat $\\rho_\\Lambda$ as the effective energy density of the cosmological ``constant\", i.e. ${\\Tilde{\\Lambda}(z)=8\\pi G_{\\rm eff}(z) \\rho_{\\Lambda}}$, this new cosmological term $\\Tilde{\\Lambda}(z)$ is not a constant anymore.\n\nIt is crucial to note that, while attributing the wiggles to the DE density or to $G_{\\rm eff}(z)$ leads to indistinguishable background dynamics, this is not so for all physical observables. Particularly, a direct effect of the dynamical gravitational coupling strength would be observable, for instance, as this would promote the absolute magnitude $M_B=\\rm const$ of type Ia supernovae (SNIa) to a quantity varying with the redshift, $M_B=M_B(z)$. Such an effect in the very late universe ($z\\lesssim0.1$) was recently suggested and investigated in a series of papers to address the so-called $M_B$ (and $H_0$) tension~\\cite{Alestas:2020zol,Marra:2021fvf,Perivolaropoulos:2021bds,Alestas:2022xxm,Perivolaropoulos:2022vql,Perivolaropoulos:2022txg}. Also, the idea that the supernovae absolute magnitudes are constant with redshift has been questioned by observations, and the question of whether or not this idea is valid has recently gained interest \\cite{Benisty:2022psx,DiValentino:2020evt,Rose:2020shp,Kang:2019azh,Kim:2019npy,Tutusaus:2017ibk,Linden:2009vh,Ferramacho:2008ap}. 
A possible variation of the $M_B(z)$ and equivalently of the SNIa luminosity $L(z)\\propto 10^{-\\frac{2}{5}M_B(z)}$ could be due to a variation of the Newton's ``constant\". Since the SNIa luminosity is proportional to the Chandrasekhar mass, which, in this case, is no longer a constant equal to $1.4\\,M_{\\odot}$, but a quantity that varies with $G_{\\rm eff}(z)$, we have $L(z)\\propto M_{\\rm Chandra}(z)$, so that\n$L(z)\\propto G_{\\rm eff}^{-3\/2}(z)$, which in turn leads, in this approach, to\n\\begin{equation}\n\\begin{aligned}\nM_{B}(z)-M_{B,G_{\\rm N}}=\\frac{15}{4}\\log \\frac{G_{\\rm eff}(z)}{G_{\\rm N}}=\\frac{15}{2}\\log[1+\\delta(z)],\\label{eq:mbwig}\n\\end{aligned}\n\\end{equation}\nwhere $M_{B,G_{\\rm N}}$ denotes the SNIa absolute magnitude when $G_{\\rm eff}(z)=G_{\\rm N}$, which satisfies ${M_{B,G_{\\rm N}}=M_{B,0}}$ due to~\\cref{eq:present}.\nThus, attributing wiggles to $G_{\\rm eff}(z)$ will have consequences not only on the expansion of the universe, but also on the absolute magnitudes of SNIa at different redshifts; and, as \\cref{eq:mbwig} shows, the wiggles of $G_{\\rm eff}(z)$ are directly manifested in the SNIa absolute magnitudes as a wiggly $M_B(z)$ reminiscent of the findings of Ref.~\\cite{Benisty:2022psx}. Investigating how this dual modification to the standard cosmology affects the cosmological parameter estimates from SNIa data and furthermore the so-called $M_B$ tension~\\cite{Efstathiou:2021ocp,Camarena:2021jlr}, is beyond the scope of this paper, and deserves a separate study.\n\n\\begin{figure*}[ht!]\n \\centering\n \\includegraphics[width=0.46\\textwidth]{wavelet.pdf}\n \\includegraphics[width=0.46\\textwidth]{sin.pdf}\n \\includegraphics[width=0.46\\textwidth]{hdot.pdf}\n \\includegraphics[width=0.46\\textwidth]{sindm.pdf}\n \\caption{The deviations from the $\\Lambda$CDM model in terms of some kinematical parameters for some wavelet examples of $\\psi(z)$ given in the top left panel where $\\Bar{\\alpha}$ and $\\alpha$ are in units of ${\\rm \\,km\\, s^{-1}\\, Mpc^{-1}}$, $\\Bar{\\beta}$ and $\\beta$ are unitless, $\\Bar{z}_\\dagger$ and $z_\\dagger$ are redshifts anchoring the wavelets. The dashed line corresponds to no deviation, i.e., $\\Lambda$CDM itself. The blue bars correspond to the TRGB $H_0$ measurement and various BAO measurements. See \\cref{sec:ex} for details.}\n \\label{fig:kinematics}\n\\end{figure*}\n\n\\section{Employing some simplest wavelets}\n\\label{sec:ex}\n\n\n\n\\begin{figure*}[ht!]\n \\centering\n \\includegraphics[width=0.46\\textwidth]{dens.pdf}\n \\includegraphics[width=0.45\\textwidth]{omegade.pdf}\n \\includegraphics[width=0.46\\textwidth]{eoswig.pdf}\n \\includegraphics[width=0.46\\textwidth]{inertial.pdf}\n \\caption{The deviations from the cosmological constant, if the wavelet examples of $\\psi(z)$ are attributed to a dynamical DE, i.e., the wiggles in $H(z)$ are produced solely by a dynamical DE. The plots are matched by color to those in \\cref{fig:kinematics}.}\n \\label{fig:dens}\n\\end{figure*}\n\n\\begin{figure}[t!]\n \\begin{flushright}\n \\includegraphics[width=0.45\\textwidth]{geff.pdf}\n \\includegraphics[width=0.467\\textwidth]{mb.pdf}\n \\end{flushright}\n \\caption{ The deviation from $G_{\\rm N}$ if wiggles are produced solely by a varying Newton's ``constant\"; we also show the variation in the absolute magnitude $M_B$ of supernovae assuming the unmodified value to be the mean value of the measurement in Ref.~\\cite{Camarena:2021jlr}. 
The variation of $G_{\\rm eff}$ is less than $\\sim10\\%$ at all times, and there is practically no variation for $z\\sim0$ and $z\\gg0$.\n The plots are matched by color to those in \\cref{fig:kinematics}.}\n \\label{fig:ex}\n\\end{figure}\n\nWavelets constitute a wide family of functions that may or may not be smooth. They exhibit an oscillatory (not necessarily periodic) behaviour over a compact set of their parameters, and either vanish or quickly decay outside of this set. Even the superposition of arbitrarily many wavelets would describe another one. Here, we will consider some of the simplest examples: one discontinuous, namely, the Haar mother wavelet (\\cref{sec:haar}); and other smooth wavelets, namely, the Hermitian wavelets (\\cref{sec:hermitian}) that are acquired from the derivative\/s of a Gaussian distribution function. These examples have no inherent superiority to other possible wavelets; we provide them only because of their simplicity and to give a taste of how wavelets behave and their cosmological consequences.\n\nThese example wavelets and their corresponding cosmologically relevant functions are plotted in~\\cref{fig:dens,fig:ex,fig:kinematics} for various values of their free parameters; matching colors in different figures indicate the same wavelet with the same choice of parameters. The dashed line corresponds to a vanishing wavelet, i.e., to the reference $\\Lambda$CDM model described with $H_{\\Lambda\\rm CDM}(z)$; for the figures, we neglected radiation.\n\n\\subsection{Haar wavelet}\n\\label{sec:haar}\n\nIn our Haar example, $\\psi(z)$ is non-zero only within a compact redshift interval and takes opposite signs on the intervals $[1,2)$ and $[2,3)$ so that~\\cref{eq:int} is satisfied, with $\\delta(z)>0$ for the interval $z\\in[1,2)$; this region constitutes a bump on $H(z)$. This bump presents itself in other functions such as $\\rho_{\\rm DE}(z)$ and $\\varrho_{\\rm DE}(z)$ [or $G_{\\rm eff}(z)$]; and, it is reminiscent of those that are found in non-parametric DE density reconstructions~\\cite{Escamilla:2021uoj,Bernardo:2021cxi} from observational data. For our particular example, we see in the figures that the bump results in slight disagreement with the eBOSS DR16 Quasar data at $z_{\\rm eff}=1.48$ for both $H(z)$ and $D_{M}(z)$. This can be mitigated by a different choice of parameters or, more interestingly, by adding more wiggles, for example, by superposing multiple Haar wavelets; however, this superposition would increase the number of free parameters. In the next subsection, we will increase the number of wiggles without increasing the number of free parameters. Note that the $\\Dot{H}(z)\/3H^2(z)$ plot of the Haar example appears to never cross the zero line, implying monotonic behaviour for $H(z)$; however, this is not true. The discontinuities of the $H(z)$ function at $z=1,2,3$ result in spikes (Dirac delta distributions) that are not shown in~\\cref{fig:kinematics} for the $\\dot{H}(z)$ function at these redshifts, resulting in two crossings of the zero line at $z=1,2$ and a non-monotonic $H(z)$ that increases instantaneously in time at $z=2$. Similar spikes also exist for the $w_{\\rm DE}(z)$ and $\\varrho_{\\rm DE}(z)$ functions if the wiggles are attributed to the DE, but again are not shown in \\cref{fig:dens}. Additionally, if the deformations of the Hubble function described by $\\delta(z)$ are attributed to the DE density, the $w_{\\rm DE}(z)$ has a discontinuity at $z\\sim 2.2$ (as suggested in~\\cite{Akarsu:2019hmw,Akarsu:2021fol}) in addition to the obvious ones at $z={1,2,3}$. 
This discontinuity (present as a singularity) happens exactly at the redshift that $\\rho_{\\rm DE}(z)$ crosses from negative to positive values, and is characteristic of energy densities that have vanishing values in time and not problematic from the point of view of fundamental physics as discussed below~\\cref{eq:eos}. Of course, the discontinuities at $z={1,2,3}$ are not very compelling physically, but the Haar wavelet is the simplest example and shows what we should expect from the form of $H(z)$ for a minimal wavelet type deviation of the Hubble radius, $H(z)^{-1}$, from that of the standard cosmological model, $H_{\\Lambda \\rm CDM}(z)^{-1}$. A good alternative to the Haar wavelet can be the Beta wavelet~\\cite{betawave} derived from the derivative of the Beta distribution ${P_\\beta(z|\\gamma,\\lambda)\\equiv1\/B(\\gamma,\\lambda)z^{\\gamma-1}(1-z)^{\\lambda-1}}$ where $B(\\gamma,\\lambda)\\equiv\\int_0^1 k^{\\gamma-1}(1-k)^{\\lambda-1}\\dd{k}$ is the Euler beta function, $0\\leq z\\leq1$, and $1\\leq\\gamma,\\,\\lambda\\leq\\infty$. Beta wavelets are in some sense softened Haar wavelets as both have compact support and are unicycle (i.e., they have just one bump and one dip), however, unlike the Haar wavelet, the Beta wavelet is continuous~\\cite{betawave}. Thus, to describe more wiggles, one would need to superpose multiple Beta wavelets just like in case of the Haar wavelet, increasing the number of free parameters. While the Beta wavelets can satisfy \\cref{eq:int,eq:prereccond,eq:present} exactly without compromising continuity, they do not have a closed-form expression and are mathematically less tractable; thus, for simplicity, we will proceed with Hermitian wavelets that are also continuous\\footnote{One may also wish the wavelet satisfying~\\cref{eq:int,eq:prereccond,eq:present} exactly to have the stronger property of being smooth. However, since these conditions require that every derivative of the wavelet vanish outside of the interval $[0,z_*]$, but not inside, such a wavelet cannot be analytic. Non-analytic smooth functions can be constructed piecewise similar to splines but the pieces are not necessarily polynomial. These kind of functions are not compelling for the demonstrative purposes of this paper but may turn out to be useful in observational analyses.} and simpler, and satisfy~\\cref{eq:int,eq:prereccond,eq:present} to high precision.\n\n\n\\subsection{Hermitian wavelets}\n\\label{sec:hermitian}\nThe discontinuous features of the Haar wavelet can be considered as an approximate description of a rapidly varying smooth function which would be physically more relevant. A simple family of smooth wavelets can be acquired from the derivative\/s of a Gaussian distribution (cf., the Hermitian wavelets~\\cite{hermitianwave}). To do so, we consider the Gaussian distribution defined as follows;\n\\begin{equation}\n\\label{eqn:G0}\n{\\psi_{\\rm G0}(z)=-\\frac{\\alpha}{2\\beta}e^{-\\beta(z-z_\\dagger)^2}},\n\\end{equation}\nwhere $\\alpha$, $\\beta>0$, and $z_\\dagger>0$ are the three free parameters that will set, respectively, the amplitude, support, and center of the wiggles. 
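Note, as a short consistency remark on \\cref{eqn:G0} rather than an extra assumption, that the prefactor $-\\alpha\/(2\\beta)$ is what makes the first derivative take the simple form
\\begin{equation}
\\psi_{\\rm G1}(z)=\\dv{\\psi_{\\rm G0}(z)}{z}=\\alpha\\,(z-z_\\dagger)\\,e^{-\\beta(z-z_\\dagger)^2},
\\end{equation}
so that $\\alpha$ directly sets the overall amplitude of the wiggles while $\\beta$ controls how quickly they are damped away from $z_\\dagger$.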
The real part of the $n^{\\rm th}$ Hermitian wavelet can be obtained from the $n^{\\rm th}$ derivative of a Gaussian distribution ${\\psi_{\\text{G}n}(z)\\equiv\\dv[n]{\\psi_{\\rm G0}(z)}{z}}$; accordingly, utilizing~\\cref{eqn:G0} we obtain\n\\begin{equation}\n\\begin{aligned}\n\\psi_{\\rm G1}(z)=&-2\\beta(z-z_{\\dagger}) \\psi_{\\rm G0}(z),\\\\\n\\psi_{\\rm G2}(z)=&4\\beta \\qty[\\beta(z-z_{\\dagger})^2-\\frac{1}{2}]\n\\psi_{\\rm G0}(z), \\\\\n\\psi_{\\rm G3}(z)=&-8\\beta^2 \n\\qty[\\beta(z-z_{\\dagger})^3-\\frac{3}{2}(z-z_\\dagger)]\\psi_{\\rm G0}(z), \\\\\n\\psi_{\\rm G4}(z)=&16\\beta^2\\qty[\\frac{3}{4}+(z-z_{\\dagger})^4\\beta^2-3\\beta(z-z_{\\dagger})^2]\\psi_{\\rm G0}(z),\n\\end{aligned}\n\\end{equation}\netc., where only up to $4^{\\rm th}$ derivative are written explicitly. $\\psi_{\\rm G1}(z)$ and $\\psi_{\\rm G2}(z)$ are well-known wavelets and the latter is also known as the Ricker (Mexican hat) wavelet. $\\psi_{\\text{G}n}(z)$ are quasi-periodic functions, i.e., the redshift difference between consecutive peaks (whose amplitudes may differ) of the wave varies. We note that $\\psi_{\\rm G0}(z)$ itself is responsible for the fast damping of the wavelet function $\\psi_{\\text{G}n}(z)$ as $z$ moves away from $z_\\dagger$ and that $n^{\\rm th}$ derivative of $\\psi_{\\rm G0}(z)$ brings an $n^{\\rm th}$ degree polynomial as a factor to itself, which in turn implies that $n$ stands also for the number of nodes of the $\\psi_{\\text{G}n}(z)$ function, i.e., the number of times the function crosses zero. These $n$ nodes correspond to $n+1$ wiggles [total of $n+1$ dips and bumps of $\\psi(z)$]; the bumps of $\\psi(z)$ manifest themselves as dips, and dips of $\\psi(z)$ manifest themselves as bumps in $\\delta(z)$ and equivalently $H(z)$, cf., Eq.~\\eqref{eqn:sdevH}. These manifestations directly translate to wiggles on either $\\rho_{\\rm DE}(z)$ or $G_{\\rm eff}(z)$ depending on which function we attribute them to. The wiggly structure in these functions resemble the wiggles in their respective functions that are acquired from observational analyses utilizing parametric or non-parametric reconstructions~\\cite{Escamilla:2021uoj,Bernardo:2021cxi}. Wiggles acquired in observational reconstructions are no surprise even if the data set does not contain CMB, because, wiggles are necessary for $H(z)$ to fit the measurements of the Hubble parameter from the BAO data without spoiling the success of $\\Lambda$CDM in fitting the $D_M(z)$ values measured from the same BAO data (see Fig.~\\ref{fig:kinematics}); and the logic we used to show the necessity of bumps still apply when $z_*$ is swapped for the effective redshift of a BAO measurement. The necessity of the wiggles only when low redshift data ($z<3$) is considered, is the subject of an upcoming work. \n\nCoincidentally, the first derivative of the Gaussian distribution \\eqref{eqn:G0}, i.e., $\\psi_{\\rm G1}(z)$, can be used to roughly approximate the Haar wavelet smoothly. For $\\psi_{\\rm G1}(z)$, we pick $\\alpha=0.0005{\\rm \\,km\\, s^{-1}\\, Mpc^{-1}}$, $z_\\dagger=2$, and $\\beta=2$, so that the wavelet approximates our previous Haar example. For rest of the examples, $\\psi_{\\rm G2}(z)$, $\\psi_{\\rm G3}(z)$, and $\\psi_{\\rm G4}(z)$, the values of the parameters are shown on the top left panel of~\\cref{fig:kinematics} and the increased number of wiggles for higher derivatives are clearly seen. The top right, and bottom right panels show how increasing the number of wiggles can provide a better description of the BAO data. 
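The closed forms above are easy to reproduce symbolically. The following minimal sketch (ours, not part of the original analysis; it only assumes the definition in \\cref{eqn:G0}) regenerates $\\psi_{{\\rm G}n}(z)$ by direct differentiation and checks two of the quoted expressions:
\\begin{verbatim}
import sympy as sp

z, zd, alpha, beta = sp.symbols('z z_dagger alpha beta', positive=True)
psi_G0 = -alpha/(2*beta)*sp.exp(-beta*(z - zd)**2)

# psi_Gn(z) = d^n psi_G0 / dz^n
psi = {n: sp.diff(psi_G0, z, n) for n in range(5)}

closed_G2 = 4*beta*(beta*(z - zd)**2 - sp.Rational(1, 2))*psi_G0
closed_G3 = -8*beta**2*(beta*(z - zd)**3
                        - sp.Rational(3, 2)*(z - zd))*psi_G0
print(sp.simplify(psi[2] - closed_G2))   # 0
print(sp.simplify(psi[3] - closed_G3))   # 0
\\end{verbatim}
The polynomial factor multiplying $\\psi_{\\rm G0}(z)$ in $\\psi_{{\\rm G}n}(z)$ has degree $n$, which is the node counting used in the text.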
Unlike the Haar and $\\psi_{\\rm G1}(z)$ examples, $\\psi_{\\rm G2}(z)$ and $\\psi_{\\rm G4}(z)$ examples better describe also the eBOSS DR16 Quasar data at $z_{\\rm eff}=1.48$ while retaining better agreement with the Ly-$\\alpha$ BAO data at $z_{\\rm eff}=2.33$; the $\\psi_{\\rm G4}(z)$ example even complies with the trend of the Galaxy BAO data (at $z_{\\rm eff}=0.38,\\,0.51,\\,0.70$) $H(z)\/(1+z)$ measurements that increase with redshift, which cannot be achieved within $\\Lambda$CDM (even though $\\Lambda$CDM is not in strong tension with any of these data points). Still, we emphasize that these wavelets are just illustrative examples and better wavelets can be looked for. Again, attributing the wiggles to the DE, the DE density also wiggles smoothly; however, for the $\\psi_{\\rm G1}(z)$ and $\\psi_{\\rm G2}(z)$ examples, two safe\/expected singularities are again present in $w_{\\rm DE}(z)$ at the redshifts that the DE density vanishes.\n\n\nNote that \\cref{eq:int,eq:prereccond,eq:present} are satisfied exactly only for admissible wavelets with compact support in the redshift interval $[0,z_*]$; thus, unlike the Haar wavelet, $\\psi_{\\text{G}n}(z)$ does not satisfy~\\cref{eq:int,eq:prereccond,eq:present} exactly, but rather approximately\\footnote{We emphasize that the Haar and Hermitian wavelets are just convenient examples we used to demonstrate various aspects of the wavelet framework. The previously mentioned Beta wavelets can satisfy these conditions exactly without compromising continuity (at the cost of simplicity due to their lack of closed-form expression); and working with wavelets generated by higher order derivatives of the Beta distribution, it should be possible to increase the number of wiggles without increasing the number of free parameters, but to our knowledge, there is no established literature on wavelets derived from their higher order derivatives. Another possibility is constructing wiggles out of splines that are piecewise polynomials which can have compact support, but these are likely to suffer from excessive number of free parameters. Also, a middle ground exists where some of the conditions are satisfied exactly and some approximately. For example, the $n^{\\rm th}$ Poisson wavelet, viz., $\\psi_{{\\rm P}n}(z)\\equiv\\frac{z-n}{n!}z^{n-1}e^{-z}$ for $z\\geq0$ and vanishing everywhere else, satisfies \\cref{eq:present} exactly but the other two equations approximately for $n>1$.} (yet, beyond a level that cannot be resolved by observation). These three conditions were imposed on $\\psi(z)$ through arguments relying on the robustness of certain observations; however, no matter how robust and model independent they are, the uncertainties of the measurements itself require only that \\cref{eq:int,eq:prereccond,eq:present} hold approximately. Reassuringly, for large redshifts, ${\\psi_{\\text{G}n}(z)H_{\\Lambda\\rm CDM}(z)\\propto z^{n+\\frac{3}{2}}e^{-\\beta z^2}}$ for matter dominated and $\\propto z^{n+2}e^{-\\beta z^2}$ for radiation dominated universes; both of these functions rapidly decay by virtue of the exponential term which eventually decays faster than any polynomial growth, ensuring $\\delta(z)\\to0$ at large redshifts, see \\cref{eqn:deltaH}. A similar argument can be made for $\\Delta\\rho_{\\rm DE}(z)\\to0$ through \\cref{eq:deltarho} at large redshifts. 
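For readers who want to reproduce this kind of estimate, a small numerical sketch is given below. It is ours rather than the analysis behind the figures: it takes the three requirements discussed above to be the approximate vanishing of $\\psi(z)$ at $z=0$, its approximate vanishing for $z\\geq z_*$, and the approximate vanishing of $\\Psi(z_*)=\\int_0^{z_*}\\psi(z)\\dd{z}$, and it uses placeholder parameter values (not the ones adopted in the figures):
\\begin{verbatim}
import numpy as np
from scipy.integrate import quad

alpha, beta, zd = 5e-4, 2.0, 2.0   # placeholder values
zstar = 1090.0                     # approximate last-scattering redshift

def psi_G0(z):
    return -alpha/(2*beta)*np.exp(-beta*(z - zd)**2)

def psi_G3(z):
    return -8*beta**2*(beta*(z - zd)**3 - 1.5*(z - zd))*psi_G0(z)

Psi_star, _ = quad(psi_G3, 0.0, zstar, points=[zd], limit=200)
print(psi_G3(0.0), psi_G3(zstar), Psi_star)
\\end{verbatim}
The three printed numbers are to be compared with $H^{-1}_{\\Lambda\\rm CDM}(0)$, $H^{-1}_{\\Lambda\\rm CDM}(z_*)$ and (after multiplying by $c$) the uncertainty of $D_M(z_*)$, respectively, as is done for the $\\psi_{\\rm G3}$ example quoted next.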
Finally, to demonstrate how successfully the $\\psi_{\\text{G}n}(z)$ examples approximate the conditions given in \\cref{eq:int,eq:prereccond,eq:present}, we examine our $\\psi_{\\rm G3}(z)$ example as it is the one that violates these conditions most strongly. The values we pick in our $\\psi_{\\rm G3}(z)$ example correspond to: $\\psi_{\\rm G3}(0)=(41.75\\times10^{-6})\\,{\\rm \\,km^{-1}\\, s\\, Mpc}$, which can be compared with ${H^{-1}_{\\Lambda\\rm CDM}(0)=(14.78\\times10^{-3})\\,{\\rm \\,km^{-1}\\, s\\, Mpc}}$ from Planck 2018 and corresponds to ${\\delta(0)\\sim3\\times10^{-3}}$; $\\psi_{\\rm G3}(z_*)\\sim 10^{-10^6}{\\rm \\,km^{-1}\\, s\\, Mpc}$ which can be compared with ${H^{-1}_{\\Lambda\\rm CDM}(z_*)=7.3\\times10^{-7}{\\rm \\,km^{-1}\\, s\\, Mpc}}$ from Planck 2018 and corresponds to ${\\delta(z_*)\\sim10^{-10^6}}$; and, ${c\\times\\Psi_{\\rm G3}(z_*)=-5.46\\,\\rm Mpc}$ which is extremely well within the $1\\sigma$ uncertainty of $D_M(z_*)=13872.83\\pm 25.31\\,\\rm Mpc$ measured in Planck 2018.\n\n\n\n\\section{Conclusion}\nIt is well-known that the comoving angular diameter distance to last scattering, $D_M(z_*)$, is strictly constrained by observations almost model-independently. Therefore, in a viable cosmological model, this distance should be the same with the one measured by assuming $\\Lambda$CDM, so that consistency with CMB data is ensured at the background level. We have shown that, assuming the pre-recombination and present-day universes are well described by $\\Lambda$CDM, this is satisfied only if the deviation of any model from $\\Lambda$CDM described by the function $\\psi(z)=H(z)^{-1}-H(z)_{\\Lambda\\rm CDM}^{-1}$, which is the deviation from the standard $\\Lambda$CDM model's Hubble radius, should be, or well approximated by, an admissible wavelet. \\textit{In other words, in a viable alternative cosmological model that leaves the pre-recombination and present-day universes as they are in the standard cosmological model, the modifications cannot be arbitrary but should satisfy (exactly or approximately at a precision level that can be absorbed within the precision of the available observational data) a Hubble radius function whose deviation from the one in the standard cosmological model is a member of the set of admissible wavelets.}\n\nThe admissible wavelets describing $\\psi(z)$ can be converted to modifications in various cosmological kinematic functions such as the Hubble and comoving Hubble parameters, $H(z)$ and $H(z)\/(1+z)$ as shown in \\cref{fig:kinematics}, as well as, the deceleration and jerk parameters, $q(z)$ and $j(z)$. The wiggly nature of wavelets describing $\\psi(z)$ leads to wiggles in these functions, but none of them are necessarily wavelets, moreover, even the ones that arise from the simplest wavelets have non-trivial behaviour that is highly unlikely to be constructed\/introduced by hand in the first place. Accordingly, while requiring $\\psi(z)$ to be an admissible wavelet ensures consistency with the CMB at the background level, the wiggly nature of the kinematic functions can be immensely effective in fitting the multitude of BAO data which have no clear common trend compared to $\\Lambda$CDM. 
Also, as the wavelets we used as examples show, the number of wiggles in $\\psi(z)$, hence also in cosmological kinematics, can be varied, and quite featured kinematics that fit the observational data well can then be achieved without further increasing the number of free parameters; e.g., one may introduce any number of wiggles by taking a sufficient number of derivatives of the Gaussian distribution and have only three extra free parameters. These non-trivial modifications we have found in the cosmological kinematics can then be attributed to different physical origins. As the first examples that come to mind, we have attributed them either to a dynamical DE, viz., $\\rho_{\\rm DE}(z)$, or to a dynamical gravitational coupling strength, viz., $G_{\\rm eff}(z)$, and briefly discussed how these different approaches are, in principle, observationally distinguishable, even though they give rise to the same background kinematics, see \\cref{fig:dens,fig:ex}. We demonstrated also that the dynamics of the DE, or the gravitational ``constant'', induced by the simplest wavelets, are even more non-trivial than the kinematics; for instance, the DE density can change sign in the past, accompanied by singularities in its EoS parameter.\n\nA wiggly structure may be described as consecutive bumps and dips on a function. By using the simplest admissible wavelets, we encountered a common pattern: our toy examples that describe the BAO data well present a bump in the Hubble parameter (which can be attributed to a bump in the DE density) at $1.5\\lesssim z\\lesssim2$ just as found in various observational reconstructions \\cite{Wang:2018fng,Escamilla:2021uoj,Bernardo:2021cxi}. The existence of bumps is a natural outcome of our findings, because the dips in $H(z)$ required for a better description of the data, e.g., at $z\\sim2.3$ relevant to the Ly-$\\alpha$ data, should be compensated by bumps elsewhere so that the comoving angular diameter distance to last scattering remains unaltered. This should raise serious concerns that the bumpy features in the non-parametric $H(z)$ and\/or $\\rho_{\\rm DE}(z)$ reconstructions may be fake and caused by overfitting to the BAO data; since various BAO data call for dips in the Hubble parameter, there will be compensatory bumps where there are no data points to oppose them. Although the redshift range devoid of data where these bumps may be present is arbitrary and can extend to very high redshifts (e.g., a plateau with a small amplitude over a large redshift range compensating a tight dip at $z\\lesssim3$), most observational analyses reconstruct the cosmological functions up to $z\\sim3$ where the most suitable redshift range for a fake bump appears to be at $1.5\\lesssim z\\lesssim2$. It is worth noting here that the wiggles in the DE density are not expected to be representative of an Effective Field Theory, more concretely any minimally coupled scalar model \\cite{Colgain:2021pmf}, and thus it is conceivable that the introduction of theoretical priors should smooth out the wiggles in the DE density \\cite{Pogosian:2021mcs,Raveri:2021dbu}. This may imply that the origin of the wiggles in $H(z)$ must be sought in modified gravity theories. 
However, it may also be too hasty to completely ignore the possibility of finding highly wiggly (may be discreet) DE densities; see, for instance, the so-called Everpresent $\\Lambda$ model, which suggests the observed $\\Lambda$ fluctuates between positive and negative values with a magnitude comparable to the cosmological critical energy density about a vanishing mean, $\\braket{\\Lambda}=0$, in any epoch of the Universe, in accordance with a long-standing heuristic prediction of the causal set approach to quantum gravity \\cite{Ahmed:2002mj,Zwane:2017xbg,Surya:2019ndm}.\n\nUp until now we have avoided discussing the $H_0$ tension and assumed that any alternative cosmological model would not deviate from $\\Lambda$CDM at $z\\sim0$, based on the observational argument that $\\Lambda$CDM describes local observational data well and is also supported by non-parametric reconstructions. However, this no deviation condition, cf., \\cref{eq:present}, is stricter than necessary, as observational evidence suggests that it is essentially the functional form of ${3H^2_{\\Lambda\\rm CDM}(z)=\\rho_{\\rm m,0}(1+z)^3+\\rho_{\\Lambda}}$ that is favored by local data. This suggests that, the reference model from which the deviations are defined, can be taken to be any model that is compatible with CMB data while agreeing with the functional form of $\\Lambda$CDM exactly or approximately in the vicinity of the present-time of the universe, instead of the exact $\\Lambda$CDM model itself. Such models can be compatible with both CMB and local $H_0$ measurements at the same time, see e.g., Ref.~\\cite{Akarsu:2019hmw,Akarsu:2021fol}. Even the requirement of this functional form can be relaxed and the well-known CPL parametrization and $w$CDM model can be used for the reference model, in which case $\\psi(z)$ being an admissible wavelet is not a necessary condition but an analytically compelling case. However, it is possible that strict observational constraints from BAO data, prevent these models from occupying the part of their parameter space that allows them to simultaneously fit the CMB and $H_0$ measurements. If these models are taken to be the reference model, the $H_0$ tension may also be resolved within our wavelet framework; more importantly, if the observational success of these models were held back by the BAO data, the use of wavelets may resurrect them by letting them fit the BAO data without compromising their successful description of the CMB and $H_0$ observations.\n\nIn our discussions we basically allowed wavelets to have quite a bit of freedom, apart from requiring them to be admissible and vanish outside of the interval $z=[0,z_*]$, see \\cref{eq:int,eq:prereccond,eq:present}. However, it can also be very useful to focus on various subsets of these wavelets. Namely, using arguments based on the history of the expansion of the universe and\/or fundamental physics (also, these two can be related in a certain way through the putative theory of gravity), we can impose more conditions on them, thereby narrow down the extent of the family of cosmological models satisfying our conditions. 
For example, as we have already discussed to some extent, with regard to the kinematics of the universe, one may demand an ever expanding universe ($H(z)>0$) and\/or a monotonically decreasing Hubble parameter ($\\dot{H}(z)<0$) from beginning to the present, or, with regards to dynamics of the DE (supposing that GR is valid and the deviations are attributed to a dynamical DE fluid), one may demand a non-negative DE density ($\\rho_{\\rm DE}(z)\\geq0$) at all times, or a non-negative DE inertial mass density corresponding to the null energy condition ($\\varrho_{\\rm DE}(z)\\geq0$) at all times, or at least be cautious so that no instability problems are encountered. Indeed, DE fluids that leads to our example admissible wavelets, seem to easily violate the conventional energy conditions; namely, the EoS parameter crosses below minus unity and\/or plus unity and even exhibits a pole\/s in some cases; the DE inertial mass density, and even the DE density itself in some cases, cross below zero. Such violations are generally known to indicate possible instability issues in the DE fluid. One way out in this case, as we mentioned earlier, would be the possibility of deriving such dark energies from modified gravity theories as effective sources without causing some other instability problems. Employing the Parameterized Post-Friedmann (PPF) \\cite{Hu:2008zd, Fang:2008sn} approach may also provide us with another way out, namely, the PPF discussed in \\cite{Hu:2008zd, Fang:2008sn} may be used to placate the violent behaviors of the DE source, particularly to solve the instability issues related to the DE EoS parameter or make them less severe by pulling it towards the safer interval $[-1,1]$. This approach that replaces the condition of DE pressure perturbation with a smooth transition scale will help us understand the momentum density of the DE and other components on the large scale structure. We leave advantages of considering such reconstruction methods in relevance with the family of the DE models introduced in this paper for future consideration.\n\nTo conclude with, the wavelet framework presented in this paper seems to have the potential to be a good guide to find new cosmological models, alternative to the base $\\Lambda$CDM model, that are consistent with the observational data and to analyze existing ones, but further observational and theoretical studies are required to uncover the full scope of the implications and applications of this framework.\n\n\\begin{acknowledgments}\n The authors thank to Bum-Hoon Lee and Kazuya Koyama for useful insights and discussions. \\\"{O}.A. acknowledges the support by the Turkish Academy of Sciences in the scheme of the Outstanding Young Scientist Award (T\\\"{U}BA-GEB\\.{I}P). E.\\'O.C. was supported by the National Research Foundation of Korea grant funded by the Korea government (MSIT) (NRF-2020R1A2C1102899). E.\\\"{O}.~acknowledges the support by The Scientific and Technological Research Council of Turkey (T\\\"{U}B\\.{I}TAK) in scheme of 2211\/A National PhD Scholarship Program. S.T. and L.Y. 
were supported by Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education through the Center for Quantum Spacetime (CQUeST) of Sogang University (NRF-2020R1A6A1A03047877).\n \\end{acknowledgments}\n\n\n ","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\t\nIn recent years, supersymmetric quantum mechanics (SUSY QM) has provided a\ndeeper understanding of the exact solvability of several well known\npotentials in 1-dimensional QM. In particular, using the ideas of shape\ninvariance (SI), it provides a procedure for getting the spectrum, the\neigenfunctions and the S-matrix (i.e. the reflection and transmission\ncoefficients) algebraically [1]. There also exist interesting connections\nbetween SUSY QM and soliton solutions. Despite these (and more)\ninteresting developments in SUSY QM for one particle system in one\ndimension, so far, not many of these results could be extended either for\n$N$-particle systems in one dimension or for one particle systems in more\nthan one dimension.\n\nIn recent times there is a revival of interest in the $N$-body problems\nin one dimension with inverse square interaction which were introduced and\nstudied by Calogero [2] and developed by Sutherland [3] and others [4].\nThese models have several interesting features, like exact solvability,\nclassical and quantum integrability and also have interesting applications\nin several branches of physics [5,6]. Apart from the well known\ntranslational invariant inverse square interaction models, referred to as\n$A_{N - 1}$ Calogero-Sutherland Model (CSM), there also exist\ngeneralizations of this model, but without the translational invariance,\nreferred to as $BC_{N}$, $B_N$, $D_N$ models. These nomenclatures refer to\nthe relationship of these models to the root system of the classical Lie\ngroup. It might be added here that these models also share with $A_{N -\n1}$ CSM, features like exact solvability, and integrability and have also\nfound application in certain physical systems.\n\nThe purpose of this note is to enquire if the ideas of one dimensional\nSUSY QM could be extended to the $N$-particle case. In particular, whether\nthe spectrum of the celebrated Calogero and other models could be obtained\nalgebraically by using the ideas of SI and SUSY QM. The first step in\nthat direction was taken recently by Efthimiou and Spector [7] who showed\nthat the well known Calogero model (also termed as $A_{N-1}$ CSM) exhibits\nSI. However, they were unable to obtain the spectrum algebraically. This\nis because using SUSY they were unable to relate the eigenspectra of the\ntwo SUSY partner potentials. In this paper we demonstrate that using SUSY\nQM, SI and exchange operator formalism [8], the spectrum of the rational\n$A_{N-1}$ CSM, and also of all its generalizations like $B_N$, $D_N$ and\n$BC_N$ can be obtained algebraically. It is worth mentioning that the SI\nin our case is somewhat different from that of Efthimiou and Spector [7].\nSo far as we are aware off, this is the first instance when an\n$N$-particle quantum system has been solved using the techniques of SUSY\nQM and SI.\n\n\n\nThe plan of the paper is the following. We briefly review the ideas of one\ndimensional SUSY QM in Sec. II with the main emphasis on solvability using\nSI.. In Sec. II.A, we apply these ideas to the Calogero model, i.e., the\nrational $A_{N-1}$ model. 
We show that the spectrum of such a model can\nbe derived using the ideas of SUSY QM, SI and the exchange operator\nformalism [8]. We also briefly discuss how to obtain the\ncorresponding eigen-functions. In Sec. II.B, we treat the rational $BC_N$\nmodel, a translationally non-invariant system, in the same spirit. The\nfull spectrum is obtained and the method for obtaining the exact\neigen-functions is explicitly spelled out. It is also shown in this\nsection that the $BC_N$ model possesses SI even if the exchange operator\nformalism is not employed. This is a generalization of Efthimiou et al.'s\nwork [7] on the $A_{N-1}$ model to the $BC_{N}$ case. Finally, in Sec. III,\nwe summarize our results and discuss the possible directions to be\nfollowed in order to have a viable formalism of many-body SUSY QM. We\nalso point out the difficulties involved in extending these results to the\ntrigonometric case. In the Appendix we show that the $BC_N$ trigonometric\nmodel is also shape invariant.\n\n\\section{SUSY, SI and Solvability}\n\nIt may be worthwhile to first mention the key steps involved in obtaining\nthe eigen-spectrum of a one-body problem by using the concepts of SUSY QM\nand SI. One usually defines the SUSY partner potentials $H_1$ and $H_2$\nby\n\\be\\label{1.1} \nH_1= A^{\\dag} A \\, , \\ H_2 = A A^{\\dagger}~~,\n\\ee\n\\noindent where ($\\hbar = 2m =1$)\n\\be\\label{1.2}\nA = {d \\over dx} + W(x)~,~ \nA^{\\dag} = -{d \\over dx} + W(x)~.\n\\ee\nIn the case of unbroken SUSY, the ground state wave function is given in \nterms of the superpotential $W(x)$ by\n\\be\\label{1.3} \n\\psi_0 (x) \\propto e^{-\\int^{x} W(y)dy}~~,\n\\ee\n\\noindent while the \nenergy eigenvalues and the wave functions of $H_1$ and $H_2$\nare related by \n$(n=0,1,2,...)$\n\\be\\label{1.4}\nE_n^{(2)} = E_{n+1}^{(1)}~~, \\hspace{.2in} E_0^{(1)} = 0~~,\n\\ee\n\\be \\label{1.5}\n\\psi_n^{(2)} = [E_{n+1}^{(1)}]^{-1\/2} A \\psi_{n+1}^{(1)}~~, \\ \\ \\ \\\n\\psi_{n+1}^{(1)} = [E_{n}^{(2)}]^{-1\/2} A^{\\dag} \\psi_{n}^{(2)}~~.\n\\ee\n\nLet us now explain precisely what one means by SI.\nIf the pair of SUSY partner Hamiltonians $H_1,H_2$\ndefined above are similar in shape and differ only in the\nparameters that appear in them, then they are said to be SI.\nMore precisely, if the partner Hamiltonians $H_{1,2}(x;a_1)$ satisfy the\ncondition\n\\be \\label{1.6}\nH_2(x;a_1) = H_1(x;a_2) + R(a_1),\n\\ee\nwhere $a_1$ is a set of parameters, $a_2$ is a function of $a_1$ (say\n$a_2=f(a_1)$) and the remainder $R(a_1)$ is independent of $x$, then\n$H_{1}(x;a_1)$ and $H_{2}(x;a_1)$ are said to be SI. \nThe property of SI permits an immediate analytic determination of the\nenergy eigenvalues, eigenfunctions and the scattering matrix [1]. 
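A standard single-particle illustration (textbook material, not specific to the models considered here) is the oscillator superpotential $W(x;\\omega)=\\omega x\/2$: then $V_{1,2}=W^2\\mp W'=\\omega^2x^2\/4\\mp\\omega\/2$, so that $H_2(x;\\omega)=H_1(x;\\omega)+\\omega$, i.e. $a_2=a_1$ and $R(a_1)=\\omega$, and the formulas below immediately give $E^{(1)}_n=n\\omega$.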
In\nparticular the eigenvalues and the eigenfunctions of $H_1$ are given by\n($n = 1,2,...$)\n\\be \\label{1.7}\nE^{(1)}_n (a_1) = \\sum^n_{k=1} R(a_k)~, \\hspace {.2in} E^{(1)}_0 (a_1)=0~~,\n\\ee\n\\be\\label{1.8}\n\\psi^{(1)}_n (x;a_1) \\propto A^{\\dag}(x;a_1)A^{\\dag}(x;a_2)....A^{\\dag}(x;a_n)\n\\psi^{(1)}_0 (x;a_{n+1})~,\n\\ee\n\\be\\label{1.9}\n\\psi^{(1)}_0 (x;a_1) \\propto e^{-\\int^{x} W(y;a_1) dy}~~.\n\\ee\n\t\n\\subsection{Rational $A_{N-1}$ Calogero Model}\n\nWe now apply the exchange operator formulation to the rational $A_{N-1}$ CSM.\nThe Hamiltonian of the rational $A_{N-1}$ CSM is given by \n\\be\\label{9}\nH_{CSM} = {\\sum_{i}{\\frac{1}{2}} {p_{i}^2}} + l(l \\mp 1) \\sum_{i < j} \n{(x_{i} - x_{j})}^{-2} + \\omega {\\sum_{i} {\\frac{1}{2}} {{x_{i}^2}}} \\, .\n\\ee\n\\noindent \nThe sign $\\mp$ in (\\ref{9})\nrefers to the fact that $H$ acts on completely anti-symmetric or symmetric\nfunctions, respectively.\nLet us now define an operator $D_i$\n\\be\\label{10}\nD_{i} = -i \\partial_{i} + il \\sum_{j}^\\prime {(x_i -x_j)}^{-1} M_{ij} ,\n\\ee\n\\noindent known as the Dunkl operator in the literature. Hereafter\n$\\prime$ means $i=j$ is excluded in the summation. The exchange operator\n$M_{ij}$\nhave the following properties [8]\n\\begin{eqnarray}}\\newcommand{\\eea}{\\end{eqnarray}\nM_{ij}^2=1~, \\ \\ \\ M_{ij}^\\dagger=M_{ij}~,\n\\ \\ \\ M_{ij} \\psi^{\\pm} = \\pm \\psi^{\\pm}~,\\nonumber \\\\\nM_{ij} D_{i} = D_{j} M_{ij}~, \\ \\\nM_{ij} D_k = D_k M_{ij}~, \\ \\ k \\neq i,j~~,\\nonumber \\\\\nM_{ijk}=M_{ij} M_{jk}~, \\ \\ M_{ijk}=M_{kij}=M_{jki}~,\n\\eea\n\\noindent where $\\psi^{\\pm}$ is a(an) symmetric(antisymmetric) function.\nNote that the Dunkl operator is hermitian by construction\nand $[D_i, D_j]=0$. If we now define\n\\be\\label{11}\na_i = D_i - i \\omega x_i \\, , \\ \n{a_i}^\\dagger = D_i +i \\omega x_i \\, ,\n\\ee\nthen it is easy to see that\n\\be\\label{12}\n{[ a_i, {a_j}^\\dagger]} = 2 w {\\delta_{ij}}(1 + l{\\sum_{k}^{\\prime}M_{ik}})\n\t - 2 (1-{\\delta_{ij}}) l w M_{ij} \\, .\n\\ee\n\\noindent Let us now consider the SUSY partner potentials \n$H$ and $\\tilde {H}$ defined by\n\\be\\label{13}\nH = \\frac{1}{2}\\sum_{i}{a_i}^\\dagger a_i~,\\ \\ \\ \n\\tilde {H} =\\frac{1}{2} \\sum_{i}a_i {{a_i}^\\dagger} \\, . \n\\ee\n\\noindent Using eqs. (\\ref{10}) and (\\ref{11}) it is easily shown that \n\\be\\label{14}\nH_{CSM} = H + E_0^{CSM}~, \\ \\ \\\nE_0^{CSM} = [\\frac{N}{2} \\mp \\frac{l}{2} N (N-1)] \\omega~..\n\\ee\n\\noindent Thus, by construction, the ground state energy of $H$ is zero.\n\nUsing eqs. (\\ref{10}) and (\\ref{11}) it is easily shown that if \n$\\psi$ is the eigenstate of $H$ with eigenvalue $E (>0)$, then \n$A_1 \\psi$ is the eigenstate of $\\tilde H$ with eigenvalue $E+ \\delta_1$ i.e. \n\\be\\label{16}\n\\tilde{H} (A_1 \\psi)=[E+ \\delta_1](A_1 \\psi),\n\\ee\nwhere,\n\\be\\label{17}\nA_1 ={\\sum_{i}} a_{i}~, \\ \\ \\\n\\delta_1 = [(N-1) \\pm l N(N-1)] \\omega~~.\n\\ee\n\\noindent \nSimilarly, if $\\tilde {\\psi}$ is the eigenfunction of $\\tilde H$ with \neigenvalue $\\tilde E$, then $A_1^{\\dag}\\psi$ is the eigenfunction of $H$ \nwith eigenvalue $\\tilde{E} - \\delta_1$ i.e.\n\\be\\label{19}\nH({{A}_1^\\dagger}\\tilde {\\psi}) =[\\tilde {E}-\\delta_1] ({{A}_1^\\dagger} \n\\tilde{\\psi}).\n\\ee\n\\noindent This proves one to one correspondence between the non-zero\nenergy eigen values of $H$ and $\\tilde {H}$. 
\nThus it follows from here that the energy eigenvalues and eigenfunctions \nof the two partner Hamiltonians $H$ and $\\tilde H$ \nare related by \n\\be\\label{20}\n\\tilde{E}_n = E_{n+1} +\\delta_1 \\, ,~~ E_0 = 0 \\, , \\ \\ n = 0,1,2,...\n\\ee \n\\be\\label{21}\n\\tilde{\\psi}_n = {A_1 \\psi_{n+1} \\over \\sqrt {E_{n+1} +\\delta_1}}~~,\n~\\psi_{n+1} = {A_1^{\\dag} \\tilde{\\psi}_n \\over \\sqrt{E_{n+1}}}~~.\n\\ee\n\\noindent Note that $\\delta_1$ vanishes for $N=1$ and we\nrecover the usual results of SUSY QM with one degree of freedom.\nIt is worth noting that unlike the case of one dimensional QM, \nin this case the (positive) energy levels of $H$ and $\\tilde{H}$ \nare not degenerate. \n\nUsing eqs.. (\\ref{10}) and (\\ref{11}) it is also easily shown that $H$ and \n$\\tilde{H}$ satisfy the shape invariance condition \n\\be\\label{23}\n\\tilde{H} (\\{x_i\\} ,l) = H (\\{x_i\\}, l) +R(l),\n\\ee\n\\noindent where\n\\be\\label{24}\nR(l) = [ N \\pm l N(N-1)] \\omega = \\omega + \\delta_1~~.\n\\ee\n\\noindent \nAs a result, using the formalism of SUSY QM, and the relation between\n$E_{n+1}$ and $\\tilde{E_n}$ as given by eq. (\\ref{20}), \nthe spectrum of H is given by\n\\be \\label{25}\n{E_{n}} =\\sum_{i}R(l_{i}) - n\\delta_1~.\n\\ee\n\\noindent Note that in this particular case all $l_i$ are \nidentical so that using $\\delta_1$ and $R(l)$ as given by eqs. (\\ref{17}) \nand (\\ref{24}), the spectrum turns out to be\n\\be\\label{26}\nE_{n} = n(R-\\delta_1)\n\t= n \\omega ~.\n\\ee\n\\noindent \nUsing eq. (\\ref{14}) we then get the correct spectrum of the Calogero $A_{N-1}$\nmodel.\n\nLet us now discuss as to how to obtain the eigenfunctions of CSM using the\nformalism of SUSY QM.\nWe have seen that $A_1$ and $A_1^\\dagger$ relate the non-zero eigen states\nof the partner Hamiltonians $H$ and $\\tilde{H}$. Once a particular state of\n$H (\\tilde{H})$ with non-zero eigen value is known, the use of eq.\n(\\ref{21}) enables us to find the corresponding state of \n$\\tilde{H}\n(H)$. In particular, using eq. (\\ref{1.8}) and the fact that in this case \nall the $l_i$ \nare identical, it follows that all\nthe eigen-functions can be obtained from the ground state wave\nfunctions $\\psi_0$ as, $\\psi_n=(A_1^\\dagger)^n \\psi_0$. Note that this\nis justified from the operator algebra also, since $A_1$ and $A_1^\\dagger$\ncan be identified as the annhilation and the creation operator respectively.\nIn particular, one can show\nusing eqs. (\\ref{16}), (\\ref{19}), (\\ref{23}) and (\\ref{24}) that\n$[H,A_1]=-A_1$ and $[H,A_1^\\dagger]=A_1$. \n\nThis procedure for obtaining the\neigen-functions is similar to that of Isikov {\\it et al.} [9].\nTo see this, define a set of operators,\n\\be\\label{27}\nA_n =\\sum_{i=1}^N a_i^n, \\ \\ \\ \\ n \\leq N, \\\n\\ee\n\\noindent which are symmetric in the particle indices. These operators\nsatisfy relations which are analogous to those given by eqs. (\\ref{16}) and \n(\\ref{19}) for any $n$\n(see the next paragraph).\nIt is easily checked that $[H,A_n]=-n A_n$ and\n$[H,A_n^\\dagger]=n A_n^\\dagger$. 
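For the simplest case $N=1$, where the exchange terms are absent and $H$ reduces to a shifted harmonic oscillator, the statements above can be verified symbolically. The sketch below is ours and uses only the definitions of $a_i$, $a_i^{\\dagger}$ and $H$ given above; it reproduces $E_n=n\\omega$ for $\\psi_n=(a^{\\dagger})^n\\psi_0$:
\\begin{verbatim}
import sympy as sp

x, w = sp.symbols('x omega', positive=True)

def a(f):      # a = D - i*omega*x with D = -i d/dx (no exchange term for N=1)
    return -sp.I*sp.diff(f, x) - sp.I*w*x*f

def adag(f):   # a^dagger = D + i*omega*x
    return -sp.I*sp.diff(f, x) + sp.I*w*x*f

def H(f):      # H = (1/2) a^dagger a
    return sp.simplify(adag(a(f))/2)

psi = sp.exp(-w*x**2/2)                # ground state: a(psi) = 0
for n in range(4):
    print(n, sp.simplify(H(psi)/psi))  # prints n*omega
    psi = adag(psi)                    # psi_{n+1} = a^dagger psi_n
\\end{verbatim}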
Following [9], \nthe $k$-th eigen-state is given by,\n\\be\\label{28}\n\\psi_{\\{n_i\\}} = \\prod_{i=1}^N \\left ( {A}^\\dagger_i \\right )^{n_i} \\psi_0,\n\\ \\ \\ \\ a_i \\psi_0 =0, \\ \\ \\ \\ k=\\sum_{i=1}^N n_i ~.\n\\ee\n\\noindent Note that $\\psi_{\\{n_i\\}}$ incorporates all the degenerate states\ncorresponding\nto a particular value of $k$ and all the corresponding states of $\\tilde{H}$\ncan be obtained by applying the same $A_1$ on $\\psi_{\\{n_i\\}}$.\n\n\nLet us now ask the question whether or not $A_1$ is the only operator which\nrelates the states with nonzero eigen values of the partner Hamiltonians. The\nanswer obviously is negative and in fact, any operator which is symmetric\nin the particle\nindices can be used to relate the non-zero eigenstates of the partner\nHamiltonians. However, none of these operators are useful\nin deriving the full spectrum of the $A_{N-1}$ CSM model.\nFor example, if $\\psi$ is an\neigen-function of $H$ with\nnon-zero energy eigen-value $E$, then,\n\\be\\label{29}\n\\tilde{H} ( A_n \\psi ) = \\left [ E + \\delta_n \\right ] ( A_n \\psi )~~,\n \\ \\ \\ \\ \\delta_n = [(N-n) \\pm l N (N-1)] \\omega~.\n\\ee\n\\noindent Note that the above equation is valid only if $\\psi$ is at least\nthe $n$-th excited state, since $A_n(A_n^\\dagger)$ anhilates(creates)\n$n$ states.\nSimilarly, one can show that any state $\\tilde{\\psi}$ of $\\tilde{H}$,\nwhich represents at least the $(n-1)$-th excited state with energy\neigen value $\\tilde{E}$, is related to a state\n${A_n^\\dagger} \\tilde{\\psi}$ of $H$ with the eigen value \n$\\left ( \\tilde{E} - \\delta_n \\right )$. This again proves one to one\ncorrespondence between the $n$-th excited state of $H$ and the $(n-1)$-th\nexcited state of $\\tilde{H}$. As a result, the use of SI gives only\nthe spectrum beginning with the $n$'th excited state of $H$ and not the \nfull spectrum. \n\nIt is worth pointing out that for the $B_N$ type models, however, \nthe symmetry arguments force us to replace $A_1$ by $A_2$ in order\nto derive the full spectrum using SUSY QM.\nThis is discussed below in detail.\n\n\\subsection{Rational $BC_N$ Calogero Model}\n\n\nThe Hamiltonian for $BC_{N}$ Calogero model is given by\n\\begin{eqnarray}}\\newcommand{\\eea}{\\end{eqnarray}\\label{2.1}\nH_{BC_{N}} & = & {\\frac{1}{2}}[{\\sum_{i} {p_{i}^2}} + \n l(l \\mp 1) \\sum_{i,j}^{'} \\left [(x_{i} - x_{j}) ^{-2} +\n{(x_{i} +x_{j})} ^{-2} \\right ]\\nonumber \\\\\n & + & (l_{1}-1)l_{1} {\\sum_{i}x_{i}}^{-2}\n + \\frac{(l_2)(l_2-1)}{2} \\sum_i { x_i}^{-2}+\n{\\frac{\\omega}{2}} {\\sum_{i} x_i^2}] ~.\n\\eea\n\\noindent The sign $\\mp$ in front of the second term implies\nthat $H$ is restricted\nto act on the space of anti-symmetric (symmetric) wave-functions only.\nThis model reduces to CSM of $B_N$, $C_N$ and $D_N$\ntype in the limit $l_2=0$, $l_1=0$ and $l_2=l_1=0$, respectively.\nWithout loss of generality, in this section, we therefore only study \nthe $B_N$ type model,\ni.e. $l_2=0$. The other cases are easily obtained from here.\n\nIt is interesting to observe that this system also shares \nthe property of SI\nas found in [7] in the case of the $A_{N-1}$ model. 
\nSuperpotential corresponding to this model is \n\\be\nW_{i} =\\frac{\\partial G(x_{1}...x_{N})}{ \\partial x_{i}}=\n\\frac{\\partial (ln \\psi_0)}{\\partial x_i}~,\n\\ee\n\\noindent where $\\psi_0$ is the ground state wave function of $H_{B_{N}}$\nand $G$ is given by \n\\be\nG = +l_{1} \\sum_{i} \\ln (x_{i}) +l\\sum_{i > j}\\ln (x_{i} - \nx_{j}){(x_{i} +x_{j})} -\\frac{\\omega}{2} \\sum_i x_i^2~. \n\\ee\n\\noindent Thus, the superpotential takes the form \n\\be\nW_i =+l \\sum_j^\\prime \\left [ (x_i - x_j)^{-1} +(x_i +x_j)^{-1} \\right ]+\nl_{1} x_i^{-1} - \\omega x_i~.\n\\ee\n\\noindent Following [7], define \n${\\cal A}_{i} ({\\cal A}_{i}^\\dagger)=\\pm \n\\partial_{i}+W_i $,from which Hamiltonian (\\ref{2.1}) with $l_2=0$\ncan be expressed as \n\\be\nH^{B_N}=\\sum_i {{\\cal A}_{i}}^\\dagger {\\cal A}_i \\ \\\n-\\left [ \\frac{N}{2} -l N (N-1) -l_{1} N \\right] \\omega~, \\ \\\n\\ee\n\\noindent\nShape invariance follows due to the\nidentity,\n\\be\n\\sum_{i}{\\cal{A}}_{i}{\\cal{A}}_{i}^\\dagger(l,l_{1}) =\n\\sum_i {\\cal{A}}_{i}^\\dagger\n{\\cal{A}}_{i}(l+1,l_{1} +1)~.\n\\ee\n\\noindent\nShape invariance as observed in [7] for $A_{N-1}$ CSM, is\npresent not only\nin the rational $B_N, D_N, BC_N$ models, but also in their trigonometric\ncounterparts. For trigonometric $BC_N$ models this is shown \nin the Appendix .\n\nAs in the $A_{N-1}$ case, the SI condition does not help us in obtaining\nthe spectrum of the rational $BC_N$ models\nunless we employ the\nexchange operator formalism. Further, \nthe Hamiltonian in eq. (\\ref{2.1}) can also be cast in a diagonal form using\nexchange\noperator method. This however requires including\na reflection operator $(t_{i})$ where $t_{i}$ commutes with $x_{j}$ and \nanti-commutes\nwith $x_{i}$. The Dunkl derivative operator ( analogous to the $A_{N-1}$ case) \nis given by\n\\be\\label{2.2}\n{\\cal{D}}_{i}\n= -i \\partial_{i} + il \\sum_{j}^\\prime \\left [ {(x_i -x_j)}^{-1} M_{ij} +\n{(x_i +x_j)}^{-1} \\tilde {M_{ij}} \\right ] +i l_1 x_{i} ^{-1},\\ \\ \\\n\\tilde{M_{ij}} = t_{i}t_{j}M_{ij}~.\n\\ee\n\\noindent\nThe reflection operator $t_i$ satisfies the following relations \n\\begin{eqnarray}}\\newcommand{\\eea}{\\end{eqnarray}\n t_i^2=1~, \\ \\ t_i \\psi(x_1, \\dots, x_i, \\dots, x_N)=\n\\psi(x_1, \\dots, -x_i, \\dots, x_N)~,\\nonumber \\\\\nM_{ij} t_i = t_j M_{ij}~,\\ \\ \n\\tilde{M}_{ij}^\\dagger = \\tilde{M}_{ij}~,\\ \\\nt_i {\\cal{D}}_i = - {\\cal{D}}_{i} t_i~,\\ \\\nt_i {\\cal{D}}_j = {\\cal{D}}_j t_i~,\\ \\ j \\neq i,\\nonumber \\\\\n\\tilde {M_{ij}} {\\cal{D}}_i = -{\\cal{D}}_{i} \\tilde{M_{ij}} \\, .\n\\eea\n\\noindent It follows from eq. (\\ref{2.2}) that $[{\\cal{D}}_i,\n{\\cal{D}}_j] = 0$ and \n\\be\n[x_i,{\\cal{D}}_j]=i\\delta_{ij} \\left ( 1 + l \\sum_{k}^\\prime (M_{ik}+\n\\tilde{M_{ik}})+2l_1t_i \\right )\n-i(1-\\delta_{ij})l(M_{ij} -\\tilde{M_{ij}}) ~.\n\\ee\n\\noindent\nDefining, ${\\hat{a}}_i$ and ${\\hat{a}}_i^\\dagger$ \nwith the same defintion as in the previous case (see eq. (\\ref{11}))\nand using the above equations,\none finds\n\\be\n[{\\hat{a}}_i, {\\hat{a}}_j^\\dagger] = 2 \\omega \\delta_{ij} \\left (1+\nl\\sum_{k}^{\\prime} ( M_{ik}+\n\\tilde{M_{ik}}) +2l_{1}t_{i} \\right )\n- 2(1- \\delta_{ij}) l \\omega (M_{ij} - \\tilde{M_{ij}})~. 
\n\\ee\n\n\\noindent As before, the SUSY partner Hamiltonians ${\\cal{H}}$\nand $\\tilde {{\\cal{H}}}$ for\nthe $B_{N}$ case \nare defined as \n\\be\n{\\cal{H}} = \\frac{1}{2}\\sum_{i}{\\hat{a}}_i^\\dagger {\\hat{a}}_i \\, ~, \\ \\ \n\\tilde{{\\cal{H}}} = \\frac{1}{2}\\sum_{i}{\\hat{a}}_i {\\hat{a}}_i^\\dagger \\, ~.\n\\ee\n\\noindent \nIt can be seen that,\n\\be\nH_{B_{N}} ={\\cal{H}} + E_{0}^{B_{N}}~, \\ \\ \\\nE_{0}^{B_{N}} = [\\frac{N}{2} \\mp \\frac{l}{2} N (N-1)+l_1 N] \\omega~.\n\\ee\n\\noindent\nThe operator which brings in a correspondence between the \neigenstates $\\psi$\nand $\\tilde{\\psi}$ are respectively\n\\be\n\\hat{A}_{2} = \\sum_{i} {\\hat{a}}_{i}^2 \\, ~, \\ \\\n\\hat{A}_{2}^\\dagger = \\sum_i (\\hat{a}_i^\\dagger)^2 \\, ~.\n\\ee\n\\noindent One can show that if ${{\\psi}} (\\tilde {{{\\psi}}})$ is the\neigenfunction of \n${\\cal{H}} (\\tilde{{\\cal {H}}})$ with eigenvalue ${\\cal{E}}\n(\\tilde {{\\cal{E}}})$ then \n\\be\n{\\cal{H}}(\\hat{A_{2}}^\\dagger {\\psi}) =\n(\\tilde{{\\cal{E}}}-\\hat{\\delta}_{2})(\\hat{A_{2}}^\\dagger \n\\tilde {\\psi})~,\\ \\\n\\tilde {{\\cal{H}}}(\\hat{A}_{2}\\psi) =\n({\\cal{E}}+\\hat{\\delta}_{2})(\\hat{A}_{2}\\psi)~~.\n\\ee\n\\noindent where,\n\\be\n\\hat{\\delta}_{2} = [N-2 \\pm 2lN(N-1) +2l_{1}N ] \\omega~.\n\\ee\n\\noindent Now the question is why we\nshould take $\\hat{A}_2$ instead of $\\hat{A}_1$ (note that \n$\\hat{A}_n=\\sum_i \\hat{a}_i^n$)?\nThe point is, unlike the $A_{N-1}$ case, the $BC_N$ Hamiltonian has the \nreflection \nsymmetry, $x_{i}{\\rightarrow}-x_{i}$. Such a symmetry on the wave-functions \nis ensured only if one uses\n$\\hat{A}_2$ and not $\\hat{A}_1$. \n\n\\noindent Following the treatment in the $A_{N-1}$ case, it is easy to show\nthat $H$ and $\\tilde {H}$ of the $B_N$ model also satisfy the SI condition i.e.\n\\be\n\\tilde {H} (\\{x_i\\},l,l_1) = H (\\{x_i\\},l,l_1) + R_2 (l,l_1)\n\\ee\nwhere\n\\be\nR_{2}(l,l_1) =[N \\pm 2lN(N-1) +2l_{1}N] \\omega~~.\n\\ee \nSince in this case also all the $l_i$ are identical, hence it is easy to\nsee that the spectrum is given by\n\\be\nE_{n}=n(R_{2}-\\hat{\\delta}_{2})\n =2n\\omega~.\n\\ee \n\\noindent Note that now the spectrum is given by $2n\\omega$, instead\nof $n\\omega$ as\nin the case of $\\hat{A}_{N-1}$. This spectrum was also obtained earlier in \n[10], but by different method.\n\nThus we have shown that for the N-body Calogero models,\nthe spectrum can also be obtained by using the ideas of SQM, SI and exchange \noperator formalism.\n\n\n\n\\section{Summary \\& Discussions }\n\nIn this paper, we have generalized the ideas of SUSY QM with one degree\nof freedom to the rational-CSM,\nwhich is a many-body problem. In particular, we have shown that the exchange\noperator formalism is suitable for relating the non-zero eigen states\nof the partner Hamiltonians of CSM. The shape invariance in this formalism\nbecomes\ntrivial compared to the case discussed in [7]. In fact, the\npotentials\nof the partner Hamiltonians differ by a constant and\nthis is reminiscent of the usual harmonic oscillator case. \nAs a result, the operator method employed in [9] for solving\nthe rational-CSM algebraically and the SUSY method described here are \nnot very different from each other.\n\nOne of the nontrivial check of the applicability of the SUSY QM and the SI\nideas to the many-body problems lies in solving the trigonometric CSM,\nsince unlike the oscillator case, in this case the energy spectrum is not\nlinear in the radial quantum number. 
Unfortunately, the generalized\nmomentum operator $D_i$ for all types of models, rational as well as\ntrigonometric CSM associated with the root structure of $A_n$, $B_n$,\n$D_n$ and $BC_n$, is hermitian by construction. So, we cannot talk of\npartner Hamiltonians in terms of $D_i$ alone. We can define the usual\ncreation and annihilation operators, $a_i^\\dagger$ and $a_i$, in case we\nare dealing with the rational-CSM and construct partner Hamiltonians.\nUnfortunately, this cannot be done for the trigonometric models. On the\nother hand, as described in [7], we can indeed introduce partner\nHamiltonians for the $A_n$ type of trigonometric models provided the exchange\noperator formalism has not been used. The SI is present in this formalism\nalso, but how to relate the eigenspectra of the partner\nHamiltonians is not yet known. We have shown in this paper that the SI\nis also present in the most general $BC_N$ type of trigonometric models.\nHowever, the problem again lies in our inability to relate the\nspectra of the partner Hamiltonians.\n\n\n\\begin{appendix}\n\n\\section{SI in Trigonometric $BC_N$ CSM model}\n\nIn this Appendix, we present the SI conditions for the trigonometric\n$BC_N$ models.\nThe trigonometric $BC_N$ Hamiltonian is given by\n\\begin{eqnarray} \nH_{BC_N} &=& \n- \\sum_i \\partial_i^2 + l (l - 1) \\sum_{i,j}^{\\prime} \\left [\n\\frac{1}{\\sin^2(x_i - x_j)} + \\frac{1}{\\sin^2(x_i + x_j)} \\right ]\\nonumber \\\\\n&& + \\sum_i \\frac{l_1 (l_1\n- 1)} {\\sin^2 x_i}\n+ l_2 (l_2 - 1) \\sum_i \\frac{1}{\\sin^2 2x_i}~.\n\\end{eqnarray}\n\\noindent This model reduces to $B_N$, $C_N$ and $D_N$ for\n(a) $l_2=0$, (b) $l_1=0$,\n(c) $l_1=l_2=0$, respectively. We define a superpotential\nof the form\n\\be\nW_i = l \\sum_j \\left [ \\cot(x_i - x_j) + \\cot(x_i + x_j) \\right] + l_1 \\cot\nx_i + l_2 \\cot 2x_i~.\n\\ee\n\\noindent \nUsing this expression of $W_i$\nin the definition of the creation and annihilation operators as defined\nin (\\ref{1.1}), one can construct partner Hamiltonians which are equivalent \nto $H_{BC_N}$ up to an overall constant. The SI condition for these partner \nHamiltonians is\n\\be \nH_2^{BC_N} (\\{x_i\\}, l , l_1 , l_2) = \nH_1^{BC_N} (\\{x_i\\}, l^\\prime , l_1^\\prime , l_2^\\prime) +\nR^{BC_N}~,\n\\ee\n\\noindent where\n\\begin{eqnarray}\nR^{BC_N} & = & \n2 ( l^\\prime l_1^\\prime - l l_1) N (N - 1) + \\frac{4}{3} N (N - 1) (N - 2)\n(l^{\\prime 2} - l^2) \n+ 4 N (l_2^\\prime l_1^\\prime - l_2 l_1)\\nonumber \\\\\n&& + 4 N (N - 1)\n(l_2^\\prime l^\\prime - l_2 l) + 4 N (l_2^{\\prime 2} - l_2)\n+ N (l_1^{\\prime 2} - l_1) + 2 N (N - 1) (l^{\\prime 2} - l^2)\n\\end{eqnarray}\n\\noindent and\n\\be \nl^\\prime=l-1~, \\ \\ l_1^\\prime = l_1 - 1 ~,\\ \\ l_2^\\prime=l_2 - 1~.\n\\ee\n\\noindent The SI condition for $B_N$, $C_N$ and $D_N$ can be obtained\nfrom the above equations by taking appropriate limits. 
In particular,\nby putting $l_2 = 0$, $l_1 = 0$ or $l_2 = l_1 = 0$, we obtain the corresponding\nresults for the $B_N$, $C_N$ and $D_N$ type models respectively.\\\\\n\n\\end{appendix}\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} diff --git a/data_all_eng_slimpj/shuffled/split2/finalzzbgrf b/data_all_eng_slimpj/shuffled/split2/finalzzbgrf new file mode 100644 index 0000000000000000000000000000000000000000..8d01fc6cfa38db8f539305f427759d6ef8b5a10c --- /dev/null +++ b/data_all_eng_slimpj/shuffled/split2/finalzzbgrf @@ -0,0 +1,5 @@ +{"text":"\n\\section{Exponential lower bound for planar graphs}\\label{sec:main}\nIn this section we present the main result of the paper.\nWe provide a construction that proves that there are planar graphs with $k$ terminals whose mimicking networks are of size $\\Omega(2^k)$. \n\nIn order to present the desired graph, for the sake of simplicity, we describe its dual graph $(\\du{G},\\du{c})$. We let $\\du{\\ensuremath{Q}}=\\{\\du{f_n},\\du{f_s},\\du{f_1},\\du{f_2},\\dots,\\du{f_{k-2}} \\}$ be the set of faces in $\\du{G}$ corresponding to terminals in the primal graph $\\duu{G}$.%\n\\footnote{Since the argument mostly operates on the dual graph, for notational simplicity,\n we use regular symbols for objects in the dual graph, e.g., $G$, $c$, $f_i$,\n while starred symbols refer to the dual of the dual graph, that is, the primal graph.}\nThere are two special terminal faces $\\du{f_n}$ and $\\du{f_s}$, referred to as the north face and the south face. The remaining faces of $\\du{\\ensuremath{Q}}$ are referred to as equator faces.\n\nA set $\\du{\\ensuremath{S}} \\subset \\du{\\ensuremath{Q}}$ is \\emph{important} if $\\du{f_n} \\in \\du{\\ensuremath{S}}$ and $\\du{f_s} \\notin \\du{\\ensuremath{S}}$. Note that there are $2^{k-2}$ important sets; in what follows we care only\nabout minimum cuts in the primal graph for separations between important sets and their complements.\nFor an important set $\\du{\\ensuremath{S}}$, \nwe define its \\emph{signature} as a bit vector $\\sign{\\du{\\ensuremath{S}}} \\in \\bitv{|\\du{\\ensuremath{Q}}|-2}$ whose $i$'th position is defined as $\\sign{\\du{\\ensuremath{S}}}[i]= 1 \\text{ iff } \\du{f_{i}} \\in \\du{\\ensuremath{S}}$. \nGraph $\\du{G}$ will be composed of $2^{k-2}$ cycles referred to as important cycles, each corresponding to an important subset $\\du{\\ensuremath{S}} \\subset \\du{\\ensuremath{Q}}$.\nA cycle corresponding to $\\du{\\ensuremath{S}}$ is referred to as $\\ensuremath{\\mathcal{C}}_{\\sign{\\du{\\ensuremath{S}}}}$ and it separates $\\du{\\ensuremath{S}}$ from $\\overline{\\du{\\ensuremath{S}}}$.\nTopologically, we draw the equator faces on a straight horizontal line that we call the equator. We put the north face $\\du{f_n}$ above the equator and the south face $\\du{f_s}$ below the equator. For any important $\\du{\\ensuremath{S}} \\subset \\du{\\ensuremath{Q}}$, in the plane drawing of $\\du{G}$ the corresponding cycle $\\ensuremath{\\mathcal{C}}_{\\sign{\\du{\\ensuremath{S}}}}$ is a curve that goes to the south of $\\du{f_i}$ if $\\du{f_i} \\in \\du{\\ensuremath{S}}$ and otherwise to the north of $\\du{f_i}$. 
We formally define important cycles later on, see Definition~\\ref{def:impcyc}.\n\nWe now describe in detail the construction of $\\du{G}$.\nWe start with a graph $H$ that is almost a tree, and then embed $H$ in the plane\nwith a number of edge crossings, introducing a new vertex on every edge crossing.\nThe graph $H$ consists of a complete binary tree of height $k-2$ with root $v$ and an extra vertex\n$w$ that is adjacent to the root $v$ and every one of the $2^{k-2}$ leaves of the tree.\nIn what follows, the vertices of $H$ are called \\emph{branching vertices}, contrary\nto \\emph{crossing vertices} that will be introduced at edge crossings\nin the plane embedding of $H$.\n\nTo describe the plane embedding of $H$, we need to introduce some notation of the vertices\nof $H$.\nThe starting point of our construction is the edge $\\du{e}=\\{ \\du{w}, \\du{v} \\}$.\nVertex $\\du{v}$ is the first branching vertex and also the root of $H$.\nIn vertex $\\du{v}$, edge $\\du{e}$ branches into $\\du{e_0}=\\{\\du{v},\\du{v_0}\\}$ and $\\du{e_1}=\\{\\du{v},\\du{v_1} \\}$. Now $\\du{v_0}$ and $\\du{v_1}$ are also branching vertices.\nThe branching vertices are partitioned into layers $L_0,\\ldots,L_{k-2}$. Vertex $\\du{v}$ is in layer $L_0=\\{ \\du{v} \\}$, while $\\du{v_0}$ and $\\du{v_1}$ are in layer $L_1=\\{ \\du{v_0}, \\du{v_1} \\}$. Similarly, we partition edges into layers $\\mathcal{E}^H_0,\\ldots \\mathcal{E}^H_{k-1}$. So far we have $\\mathcal{E}^H_0=\\{ \\du{e} \\}$ and $\\mathcal{E}^H_1=\\{ \\du{e_0}, \\du{e_1} \\}$. \n\nThe construction continues as follows. For any layer $L_i, i \\in \\{1, \\ldots , k-3 \\}$, all the branching vertices of $L_i=\\{ \\du{v_{00 \\ldots 0}} \\ldots \\du{v_{11 \\ldots 1}} \\}$ are of degree $3$. In a vertex $\\du{v_a} \\in L_i$, $a \\in \\bitv{i}$, edge $\\du{e_a} \\in \\mathcal{E}^H_i$ branches into edges $\\du{e_{0a}}=\\{ \\du{v_a}, \\du{v_{0a}} \\},\\du{e_{1a}}=\\{ \\du{v_a}, \\du{v_{1a}} \\} \\in \\mathcal{E}^H_{i+1}$, where $\\du{v_{0a}},\\du{v_{1a}} \\in L_{i+1}$. We emphasize here that the new bit in the index is added \\emph{as the first symbol}. \nEvery next layer is twice the size of the previous one, hence $|L_i|=|\\mathcal{E}^H_i|=2^i$. Finally the vertices of $L_{k-2}$ are all of degree $2$. Each of them is connected to a vertex in $L_{k-3}$ via an edge in $\\mathcal{E}^H_{k-2}$ and to the vertex $w$ via an edge in $\\mathcal{E}^H_{k-1}$.\n\nWe now describe the drawing of $H$, that we later make planar by adding crossing vertices, in order to obtain the graph $G$.\nAs we mentioned before, we want to draw equator faces $\\du{f_1}, \\ldots \\du{f_{k-2}}$ in that order from left to right on a horizontal line (referred to as an equator). Consider equator face $\\du{f_i}$ and vertex layer $L_i$ for some $i>0$. Imagine a vertical line through $\\du{f_i}$ perpendicular to the equator, and let us refer to it as an $i$'th meridian. We align the vertices of $L_i$ along the $i$'th meridian, from the north to the south. We start with the vertex of $L_i$ with the (lexicographically) lowest index, and continue drawing vertices of $L_i$ more and more to the south while the indices increase. Moreover, the first half of $L_i$ is drawn to the north of $\\du{f_i}$, and the second half to the south of $\\du{f_i}$.\nEvery edge of $H$, except for $e$, is drawn as a straight line segment connecting its endpoints.\nThe edge $\\du{e}$ is a curve encapsulating the north face $\\du{f_n}$ and separating it from $\\du{f_s}$-the outer face of $\\du{G}$. 
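For concreteness, the combinatorial part of this construction (before the planar embedding and the crossing vertices are introduced) is easy to generate programmatically. The following short sketch is illustrative only; it follows the description above, labelling branching vertices by the bit vectors used in the text, with new bits prepended:
\\begin{verbatim}
from itertools import product

def build_H(k):
    # Edges of H: a complete binary tree of height k-2 (root v = '') plus
    # an extra vertex w adjacent to the root and to all leaves.
    edges = [('w', '')]                             # the edge e = {w, v}
    for i in range(k - 2):                          # branch at layers L_0..L_{k-3}
        for a in (''.join(p) for p in product('01', repeat=i)):
            edges += [(a, '0' + a), (a, '1' + a)]   # the new bit goes first
    for a in (''.join(p) for p in product('01', repeat=k - 2)):
        edges.append((a, 'w'))                      # leaves back to w
    return edges

k = 6
print(len(build_H(k)), 2 ** (k - 2))   # 3*2^(k-2)-1 edges, 2^(k-2) important cycles
\\end{verbatim}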
\n\n\n\\begin{figure}[t]\n\\begin{center}\n\\input{figures\/dual}\n\\caption{The graph $\\du{G}$.\\label{dual}}\n\\end{center}\n\\end{figure}\n\nThe crossing vertices are added whenever the line segments cross. This way the edges of $H$\nare subdivided and the resulting graph is denoted by $\\du{G}$.\nThis completes the description of the structure and the planar drawing of $\\du{G}$.\nWe refer to Figure~\\ref{dual} for an illustration of the graph $G$.\nThe set $\\mathcal{E}_i$ consists of all edges of $G$ that are parts of the (subdivided) edges of $\\mathcal{E}^H_i$ from $H$, see Figure~\\ref{subdivide}.\nWe are also ready to define important cycles formally.\n\n\\begin{figure}[t]\n\\centering\n\\input{figures\/subdivide}\n\\caption{The layer $\\mathcal{E}_{i+1}$.\n The vertex and edge names are black, their weights are blue.\\label{subdivide}}\n\\end{figure}\n\n\n\\begin{definition}\\label{def:impcyc}\nLet $\\du{\\ensuremath{S}} \\subset \\du{\\ensuremath{Q}}$ be important.\nLet $\\pi$ be a unique path in the binary tree $H-\\{w\\}$ from the root\n$\\du{v}$ to $\\du{v_{\\rev{\\sign{\\du{\\ensuremath{S}}}}}}$, \nwhere $\\rev{\\cdot}$ operator reverses the bit vector.\nLet $\\pi'$ be the path in $G$ corresponding to $\\pi$.\nThe important cycle $\\ensuremath{\\mathcal{C}}_{\\sign{\\du{\\ensuremath{S}}}}$ is composed of $\\du{e}$, $\\pi'$, and an edge in $\\mathcal{E}_{k-1}$ adjacent to $\\du{v_{\\rev{\\sign{\\du{\\ensuremath{S}}}}}}$. \n\\end{definition}\n\nWe now move on to describing how weights are assigned to the edges of $\\du{G}$. \nThe costs of the edges in $\\du{G}$ admit $k-1$ values: $c_1, c_2, \\ldots c_{k-2}$, and $C$. Let $c_{k-2}=1$. For $i \\in \\{1 \\dots k-3 \\}$ let $c_i= \\sum_{j=i+1}^{k-2}|\\mathcal{E}_{j}|c_{j}$.\nLet $C=\\sum_{j=1}^{k-2} |\\mathcal{E}_i|c_i$. Let us consider an arbitrary edge $\\du{e_{ba}}=\\{ \\du{v_{a}}, \\du{v_{ba}} \\}$ for some $a \\in \\bitv{i}, i \\in \\{ 0 \\ldots k-3 \\}, b \\in \\{ 0,1 \\}$ (see Figure~\\ref{subdivide} for an illustration). As we mentioned before, $\\du{e_{ba}}$ is subdivided by crossing vertices into a number of edges. If $b=0$, then edge $\\du{e_{ba}}$ is subdivided by\\footnote{For a bit vector $a$, $\\dec{a}$ denotes the integral value of $a$ read as a number in binary.} $\\dec{a}$ crossing vertices into $\\dec{a}+1$ edges: $\\du{e^1_{ba}}=\\{ \\du{v_a}, \\du{x^1_{ba}} \\}, \\du{e^2_{ba}}=\\{ \\du{x^1_{ba}},\\du{x^2_{ba}} \\} \\ldots \\du{e^{\\dec{a}+1}_{ba}}=\\{ \\du{x^{\\dec{a}}_{ba}}, \\du{v_{ba}} \\}$. Among those edges $\\du{e^{\\dec{a}+1}_{ba}}$ is assigned cost $C$, and the remaining edges subdividing $\\du{e_{ba}}$ are assigned cost $c_i$. Analogically, if $b=1$, then edge $\\du{e_{ba}}$ is subdivided by $2^i-1-\\dec{a}$ crossing vertices into $2^i-\\dec{a}$ edges: $\\du{e^1_{ba}}=\\{ \\du{v_a}, \\du{x^1_{ba}} \\}, \\du{e^2_{ba}}=\\{ \\du{x^1_{ba}},\\du{x^2_{ba}} \\} \\ldots \\du{e^{2^i-\\dec{a}}_{ba}}=\\{ \\du{x^{2^i-1-\\dec{a}}_{ba}}, \\du{v_{ba}} \\}$. Again, we let edge $\\du{e^{2^i-\\dec{a}}_{ba}}$ have cost $C$, and the remaining edges subdividing $\\du{e_{ba}}$ are assigned cost $c_i$.\nFinally, all the edges connecting the vertices of the last layer with $w$ have weight $c_{k-2} = 1$.\nThe cost assignment within an edge layer is presented in Figure~\\ref{subdivide}. \n\nThis finishes the description of the dual graph $G$. 
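To make the magnitudes involved concrete, the recursive weights can be tabulated with a few lines of code. The sketch below is our own bookkeeping rather than part of the construction: it reads the definition of $C$ as $C=\\sum_{j=1}^{k-2}|\\mathcal{E}_j|c_j$, and it uses the fact, implicit in the subdivision counts above, that layer $\\mathcal{E}_{i+1}$ contains $\\sum_{a\\in\\bitv{i}}\\left[(\\dec{a}+1)+(2^i-\\dec{a})\\right]=2^i(2^i+1)$ edges:
\\begin{verbatim}
def layer_sizes(k):
    # edges subdividing e_{ba} with a in {0,1}^i land in layer i+1
    return {i + 1: 2**i * (2**i + 1) for i in range(k - 2)}

def weights(k):
    E = layer_sizes(k)                    # |E_1|, ..., |E_{k-2}|
    c = {k - 2: 1}
    for i in range(k - 3, 0, -1):
        c[i] = sum(E[j] * c[j] for j in range(i + 1, k - 1))
    C = sum(E[j] * c[j] for j in range(1, k - 1))
    return c, C

print(weights(6))   # ({4: 1, 3: 72, 2: 1512, 1: 10584}, 31752)
\\end{verbatim}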
We now consider the primal graph $\\duu{G}$ with the set of terminals $\\duu{\\ensuremath{Q}}$ consisting of the\n$k$ vertices of $\\duu{G}$ corresponding to the faces $\\ensuremath{Q}$ of $G$. In the remainder of this section we show that there is a cost function on the edges of $\\duu{G}$, under which any mimicking network for $\\duu{G}$\ncontains at least $2^{k-2}$ edges. This cost function is in fact a small perturbation of the edge costs implied by the dual graph $G$.\n\nIn order to accomplish this,\nwe use the framework introduced in~\\cite{KrauthgamerR13}. In what follows,\n $\\mincut{G}{c}{S}{S'}$ stands for the minimum cut separating $S$ from $S'$ in a graph $G$ with cost function $c$. Below we provide the definition of the cutset-edge incidence matrix and the Main Technical Lemma from~\\cite{KrauthgamerR13}. \n\n\\begin{definition}[Incidence matrix between cutsets and edges] Let $(G,c)$ be a $k$-terminal network, and fix an enumeration $S_1, \\ldots S_m$ of all $2^{k-1}-1$ distinct and nontrivial bipartitions $Q=S_i \\cup \\overline{S}_i$. The cutset-edge incidence matrix of $(G,c)$ is the matrix $A_{G,c} \\in \\{ 0,1 \\}^{m \\times E(G)}$ given by\n$$\n(A_{G,c})_{i,e}=\n\\begin{cases}\n1 \\text{ if } e \\in \\mincut{G}{c}{S_i}{\\overline{S}_i}\\\\\n0 \\text{ otherwise.}\n\\end{cases}\n$$\n\\end{definition}\n\n\\begin{lemma}[Main Technical Lemma of \\cite{KrauthgamerR13}]\\label{lem:mtl}\nLet $(G,c)$ be a $k$-terminal network. Let $A_{G,c}$ be its cutset-edge incidence matrix, and assume that for all $S \\subset Q$ the minimum $S$-separating cut of $G$ is unique. Then there is for $G$ an edge cost function $\\tilde{c}: E(G) \\mapsto \\mathbb{R}^+$, under which every mimicking network $(G',c')$ satisfies $|E(G')| \\geq \\rank{A_{G,c}}$. \n\\end{lemma}\n\nRecall that $\\duu{G}$ is the dual graph to the graph $\\du{G}$ that we constructed.\nBy slightly abusing the notation, we will use the cost function $c$ defined on the dual edges\nalso on the corresponding primal edges.\nLet $\\duu{\\ensuremath{Q}}=\\{ \\duu{f_n}, \\duu{f_s}, \\duu{f_1}, \\ldots \\duu{f_{k-2}} \\}$ be the set of terminals in $\\duu{G}$ corresponding to $\\du{f_n}, \\du{f_s}, \\du{f_1}, \\ldots \\du{f_{k-2}}$ respectively. We want to apply Lemma~\\ref{lem:mtl} to $\\duu{G}$ and $\\duu{\\ensuremath{Q}}$. For that we need to show that the cuts in $\\duu{G}$ corresponding to important sets are unique and that $\\rank{A_{\\duu{G},c}}$ is high.\n\n\\begin{figure}[tb]\n\\begin{center}\n\\input{figures\/primal}\n\\caption{Primal graph $G^\\ast$.\\label{primal}}\n\\end{center}\n\\end{figure}\n\nAs an intermediate step let us argue that the following holds.\n\\begin{claim}\\label{clm:kc}\nThere are $k$ edge disjoint simple paths in $\\duu{G}$ from $\\duu{f_n}$ to $\\duu{f_s}$: $\\pi_0, \\pi_1, \\ldots, \\pi_{k-2}, \\pi_{k-1}$. Each $\\pi_i$ is composed entirely of edges dual to the edges of $\\mathcal{E}_i$ whose cost equals $C$. For $i \\in \\{ 1 \\ldots k-2 \\}$, $\\pi_i$ contains vertex $\\duu{f_i}$. Let $\\pi_i^n$ be the prefix of $\\pi_i$ from $\\duu{f_n}$ to $\\duu{f_i}$ and $\\pi_i^s$ be the suffix from $\\duu{f_i}$ to $\\duu{f_s}$. The number of edges on $\\pi_i$ is $2^i$, and the number of edges on $\\pi_i^n$ and $\\pi_i^s$ is $2^{i-1}$. \n\\end{claim}\n\\begin{proof}The primal graph $\\duu{G}$ together with paths $\\pi_0, \\pi_1 \\ldots \\pi_{k-2},\\pi_{k-1}$ is pictured in Figure~\\ref{primal}. 
The paths $\pi_{k-2},\pi_{k-1}$ visit the same vertices in the same manner, so for the sake of clarity only one of these paths is shown in the picture. This proof contains a detailed description of these paths and how they emerge from the dual graph $\du{G}$.\n\nConsider a layer $L_i$. Recall that for any $ba \in \bitv{i}$, edge $\du{e_{ba}}$ of the almost tree is subdivided in $\du{G}$, and all the resulting edges are in $\mathcal{E}_i$. If $b=0$, then edge $\du{e_{ba}}$ is subdivided by $\dec{a}$ crossing vertices into $\dec{a}+1$ edges: $\du{e^1_{ba}}=\{ \du{v_a}, \du{x^1_{ba}} \}, \du{e^2_{ba}}=\{ \du{x^1_{ba}},\du{x^2_{ba}} \} \ldots \du{e^{\dec{a}+1}_{ba}}=\{ \du{x^{\dec{a}}_{ba}}, \du{v_{ba}} \}$, where $\du{c}(\du{e^{\dec{a}+1}_{ba}})=C$. Analogously, if $b=1$, then edge $\du{e_{ba}}$ is subdivided by $2^i-1-\dec{a}$ crossing vertices into $2^i-\dec{a}$ edges: $\du{e^1_{ba}}=\{ \du{v_a}, \du{x^1_{ba}} \}, \du{e^2_{ba}}=\{ \du{x^1_{ba}},\du{x^2_{ba}} \} \ldots \du{e^{2^i-\dec{a}}_{ba}}=\{ \du{x^{2^i-1-\dec{a}}_{ba}}, \du{v_{ba}} \}$. Again, $\du{c}(\du{e^{2^i-\dec{a}}_{ba}})=C$. Consider the edges of $\mathcal{E}_i$ incident to vertices in $L_i$. If we order these edges lexicographically by their lower index, then each consecutive pair of edges shares a common face. Moreover, the first edge $\du{e^1_{00\ldots0}}$ is incident to $\du{f_n}$ and the last edge $\du{e^1_{11\ldots1}}$ is incident to $\du{f_s}$. This gives a path $\pi_i$ from $f_n$ to $f_s$ through $f_i$ in the primal graph, where all the edges on $\pi_i$ have cost $C$. Path $\pi_{k-1}$ is given by the edges of $\mathcal{E}_{k-1}$ in a similar fashion, and path $\pi_0$ is composed of a single edge dual to $\du{e}$. 
Recall that the weights are assigned in such a way that $C$ is larger than the total weight of all other edges in the graph.\nThis implies that $\ensuremath{\mathcal{C}}$ contains exactly one edge of cost $C$ in every edge layer $\mathcal{E}_i$.\nIn particular, $\ensuremath{\mathcal{C}}$ contains the edge $e = \{ v,w \}$.\n\nFurthermore, the fact that $\duu{f_i}$ lies on $\pi_i$ implies that\nthe edge of weight $C$ in $\mathcal{E}_i \cap \ensuremath{\mathcal{C}}$ lies on $\pi_i^n$ if $\duu{f_i} \notin \ensuremath{S}$\nand lies on $\pi_i^s$ otherwise.\nConsequently, in $\duu{G}-\ensuremath{\mathcal{C}}$ there is one connected component containing all vertices\nof $\duu{\ensuremath{S}}$ and one connected component containing all vertices of $\overline{\duu{\ensuremath{S}}}$.\nBy the minimality of $\ensuremath{\mathcal{C}}$, we infer that $\duu{G}-\ensuremath{\mathcal{C}}$ contains \nno other connected components apart from the aforementioned two components.\nBy planarity, since any minimum cut in a planar graph corresponds to a collection of cycles\nin its dual, this implies that $\ensuremath{\mathcal{C}}$ is a single cycle in $G$.\n\nLet $e_i$ be the unique edge of $\mathcal{E}_i \cap \ensuremath{\mathcal{C}}$ of weight $C$\nand let $e_i'$ be the unique edge of $\mathcal{E}_i \cap \ensuremath{\mathcal{C}}_{\sign{\du{\ensuremath{S}}}}$ of weight $C$.\nWe inductively prove that $e_i = e_i'$ and\nthat the subpath of $\ensuremath{\mathcal{C}}$ between $e_i$ and $e_{i+1}$ is the same as on\n$\ensuremath{\mathcal{C}}_{\sign{\du{\ensuremath{S}}}}$.\nFor the base of the induction, note that $e_0 = e_0' = e$.\n\nConsider an index $i > 0$ and the face $\du{f_i}$. If $\du{f_i} \in \du{\ensuremath{S}}$, i.e., $\du{f_i}$ belongs to the north side, then $e_i$ lies south of $f_i$, that is, lies on $\pi_i^s$.\nOtherwise, if $f_i \notin \ensuremath{S}$, then $e_i$ lies north of $f_i$, that is, lies on $\pi_i^n$.\n\nLet $v_a$ and $v_{ba}$ be the vertices of $\ensuremath{\mathcal{C}}_{\sign{\ensuremath{S}}}$ that lie \non $L_{i-1}$ and $L_i$, respectively. By the inductive assumption, $v_a$ is an endpoint\nof $e_{i-1}' = e_{i-1}$ that lies on $\ensuremath{\mathcal{C}}$.\nLet $e_i = xv_{bc}$, where $v_{bc} \in L_i$ and let $e_i' = x'v_{ba}$.\nSince $\ensuremath{\mathcal{C}}$ is a cycle in $G$ that contains exactly one edge on each path $\pi_i$,\nwe infer that $\ensuremath{\mathcal{C}}$ contains a path between $v_a$ and $v_{bc}$ that consists of\n$e_i$ and a number of edges of $\mathcal{E}_i$ of weight $c_i$.\nA direct check shows that the subpath from $v_a$ to $v_{ba}$ on $\ensuremath{\mathcal{C}}_{\sign{\ensuremath{S}}}$\nis the unique such path with the minimum number of edges of weight $c_i$.\nSince the weight $c_i$ is larger than the total weight of all edges of smaller weight,\nfrom the minimality of $\ensuremath{\mathcal{C}}$ we infer that $v_{ba} = v_{bc}$ and that $\ensuremath{\mathcal{C}}$\nand $\ensuremath{\mathcal{C}}_{\sign{\ensuremath{S}}}$ coincide on the path from $v_a$ to $v_{ba}$.\n\nConsequently, $\ensuremath{\mathcal{C}}$ and $\ensuremath{\mathcal{C}}_{\sign{\ensuremath{S}}}$ coincide on the path from the edge $e=vw$\nto the vertex $v_{\rev{\sign{\ensuremath{S}}}} \in L_{k-2}$. 
From the minimality of $\ensuremath{\mathcal{C}}$\nwe infer that also the edge $\{w,v_{\rev{\sign{\ensuremath{S}}}} \}$ lies on the cycle $\ensuremath{\mathcal{C}}$ and, hence,\n $\ensuremath{\mathcal{C}} = \ensuremath{\mathcal{C}}_{\sign{\ensuremath{S}}}$. This completes the proof.\n\end{proof}\n\n\n\begin{claim}\label{clm:rank}\n$\rank{A_{G,c}} \geq 2^{k-2}$. \n\end{claim}\n\begin{proof}\nRecall Definition~\ref{def:impcyc} and the fact that $\ensuremath{\mathcal{C}}_{\sign{\du{\ensuremath{S}}}}$ is defined for every important $\ensuremath{S} \subseteq \ensuremath{Q}$.\nThis means that the only edge in $\mathcal{E}_{k-1}$ that belongs to $\ensuremath{\mathcal{C}}_{\sign{\du{\ensuremath{S}}}}$ is the edge adjacent to $\du{v_{\rev{\sign{\du{\ensuremath{S}}}}}}$. Let us consider the part of the cutset-edge incidence matrix $A_{G,c}$ where rows correspond to the cuts corresponding to $\ensuremath{\mathcal{C}}_{\sign{\du{\ensuremath{S}}}}$ for important $\ensuremath{S} \subset \ensuremath{Q}$ and where columns correspond to the edges in $\mathcal{E}_{k-1}$ of weight $C$. Let us order the cuts according to $\rev{\sign{\du{\ensuremath{S}}}}$ and the edges by the index of the adjacent vertex in $L_{k-2}$ (lexicographically). Then this part of $A_{G,c}$ is an identity matrix. Hence, $\rank{A_{G,c}} \geq 2^{k-2}$. \n\end{proof}\n\nLemma \ref{lem:uniquecuts} and Claim~\ref{clm:rank} provide the conditions necessary for Lemma~\ref{lem:mtl} to apply. This proves our main result stated in Theorem~\ref{thm:main}. \n\n\section{Doubly exponential example}\label{sec:side}\n\n\begin{figure}[tb]\n\centering\n\input{figures\/double-exp}\n\caption{Illustration of the construction. The two panels correspond to two cases in the proof, either $u_{S_0} \in Z$ (top panel) or $u_{S_0} \notin Z$ (bottom panel).}\label{fig:double-exp}\n\end{figure}\n\nIn this section we show an example graph for which the compression technique introduced by Hagerup et al.~\cite{HagerupKNR98} does indeed produce a mimicking network on\nroughly $2^{\binom{k-1}{\lfloor (k-1)\/2 \rfloor}}$ vertices.\nOur example relies on doubly exponential edge costs. Note that an example with singly exponential costs can be compressed into a mimicking network of size singly exponential in $k$ using the techniques of~\cite{KratschW12}.\n\nBefore we go on, let us recall the technique of Hagerup et al.~\cite{HagerupKNR98}. Let $G$ be a weighted graph and $Q$ be the set of terminals. Observe that a minimum cut separating $S \subset Q$ from $\overline{S}=Q \setminus S$, when removed from $G$, divides the vertices of $G$ into two sides: the side of $S$ and the side of $\overline{S}$. The side is well defined for each vertex, since every connected component obtained by removing the minimum cut contains a terminal. Now if two vertices $u$ and $v$\nare on the same side of the minimum cut between $S$ and $\overline{S}$ for every $S \subset Q$, then they can be merged without changing the size of any minimum $S$-separating cut. As a result, there are at most $2^{2^k}$ vertices in the graph;\nas observed by~\cite{ChambersE13,KhanR14}, this bound can be improved to roughly $2^{\binom{k-1}{\lfloor (k-1)\/2 \rfloor}}$. 
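\n\nThe following Python sketch (ours; it relies on the networkx library and, for simplicity, assumes that every minimum $S$-separating cut is unique) illustrates this preprocessing step: for every nontrivial bipartition of the terminals it computes a minimum cut via max-flow and records, for each vertex, the side on which that vertex lies; vertices with identical records may be merged. The same enumeration of minimum $S$-separating cuts is also what the cutset-edge incidence matrix of Lemma~\ref{lem:mtl} is built from.\n\begin{verbatim}\n# Sketch only, not an optimized implementation.\nfrom itertools import combinations\nimport networkx as nx\n\ndef side_signatures(G, Q):\n    # G: undirected nx.Graph with numeric edge attribute 'capacity'; Q: terminals.\n    big = 1 + sum(d['capacity'] for _, _, d in G.edges(data=True))\n    D = nx.DiGraph()\n    for u, v, d in G.edges(data=True):        # undirected edge -> two directed arcs\n        D.add_edge(u, v, capacity=d['capacity'])\n        D.add_edge(v, u, capacity=d['capacity'])\n    sig = {v: [] for v in G.nodes}\n    terms = list(Q)\n    for r in range(1, len(terms)):            # terms[0] always on the sink side,\n        for S in combinations(terms[1:], r):  # so each bipartition is used once\n            H = D.copy()\n            for q in S:\n                H.add_edge('cut_source', q, capacity=big)\n            for q in set(terms) - set(S):\n                H.add_edge(q, 'cut_sink', capacity=big)\n            _, (side_S, _) = nx.minimum_cut(H, 'cut_source', 'cut_sink')\n            for v in G.nodes:\n                sig[v].append(v in side_S)\n    return sig   # vertices with equal signatures can be merged\n\end{verbatim}\n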
After this brief introduction we move on to describing our example.\n\nOur construction builds on the example provided in~\cite{KrauthgamerR13} in the proof of Theorem 1.2.\nAs stated in Theorem~\ref{thm:side} of this paper, our construction works for parameters $k$ congruent to $6$ modulo $8$.\nLet $k = 2r+2$, that is, $r$ is equal to $2$ modulo $4$.\nThese congruence assumptions give the following observation via standard calculations.\n\begin{lemma}\label{lem:ell-even}\nThe integer $\ell := \binom{2r+1}{r}$ is even.\n\end{lemma}\n\begin{proof}\nRecall that $r$ equals $2$ modulo $4$.\nSince $\binom{2r+1}{r} = \frac{(2r+1)!}{r!(r+1)!}$, while the largest power of $2$ that divides $a!$ equals $\sum_{i=1}^\infty \lfloor \frac{a}{2^i} \rfloor$, we have that\nthe largest power of $2$ that divides $\binom{2r+1}{r}$ equals:\n\begin{align*}\n& \sum_{i=1}^\infty \left\lfloor \frac{2r+1}{2^i} \right\rfloor - \sum_{i=1}^\infty \left\lfloor \frac{r}{2^i} \right\rfloor - \sum_{i=1}^\infty \left\lfloor \frac{r+1}{2^i} \right\rfloor \n = r + \sum_{i=1}^\infty \left\lfloor \frac{r}{2^i} \right\rfloor - 2 \sum_{i=1}^\infty \left\lfloor \frac{r}{2^i} \right\rfloor \\\n&\quad = r - \sum_{i=1}^\infty \left\lfloor \frac{r}{2^i} \right\rfloor \n = r - \frac{r}{2} - \frac{r-2}{4} - \sum_{i=1}^\infty \left\lfloor \frac{r}{4 \cdot 2^i} \right\rfloor \n \geq \frac{1}{2} + \frac{r}{4} - \sum_{i=1}^\infty \frac{r}{4 \cdot 2^i}\n = \frac{1}{2}.\n\end{align*}\nSince this quantity is an integer, it is at least $1$; in particular, it is positive, and hence $\ell$ is even. This finishes the proof of the lemma.\n\end{proof}\n\nWe start our construction with a complete bipartite graph $G_0 = (Q_0, U, E)$, where one side of the graph consists of $2r+1 = k-1$ terminals $Q_0$, and the other side of the graph consists of $\ell = \binom{2r+1}{r}$ non-terminals\n$U = \{u_S~|~S \in \binom{Q_0}{r}\}$. That is, the vertices $u_S \in U$ are indexed by subsets of $Q_0$ of size $r$.\nThe cost of edges is defined as follows. Let $\alpha$ be a large constant that we define later on.\nEvery non-terminal $u_S$ is connected by edges of cost $\alpha$ to every terminal $q \in Q_0 \setminus S$ and by edges of cost $(1+\frac{1}{r} + \frac{1}{r^2})\alpha$ to every terminal $q \in S$.\nTo construct the whole graph $G$, we extend $G_0$ with one additional terminal $x$ (i.e., the terminal set is $Q = Q_0 \cup \{x\}$)\n and build a third layer of $m = \binom{\ell}{\ell\/2}$ non-terminal vertices $W = \{w_Z~|~Z \in \binom{U}{\ell\/2}\}$. That is, the vertices $w_Z \in W$ are indexed by subsets of $U$ of size $\ell\/2$.\nThere is a complete bipartite graph between $U$ and $W$, and every vertex of $W$ is adjacent to $x$.\nThe cost of edges is defined as follows. An edge $u_S w_Z$ is of cost $1$ if $u_S \in Z$, and of cost $0$ otherwise. Every edge of the form $xw_Z$ is of cost $\ell\/2 - 1$.\nThis finishes the description of the construction. For reference, see the top panel of Figure~\ref{fig:double-exp}.\n\nWe say that a set $S \subseteq Q$ is \emph{important} if $x \in S$ and $|S| = r+1$. Note that there are $\ell = \binom{2r+1}{r} = \binom{k-1}{\lfloor (k-1)\/2 \rfloor}$ important sets.\nWe observe the following.\n\begin{lemma}\label{lem:dblexp}\nLet $S \subset Q$ be important and let $S_0 = S \setminus \{x\} = S \cap Q_0$. For $\alpha > r^2 \ell|W|$,\n the vertex $w_Z$ is on the $S$ side of the minimum cut between $S$ and $Q \setminus S$\n if and only if $u_{S_0} \in Z$. 
\n\end{lemma}\n\n\begin{proof}\nFirst, note that if $\alpha > r^2 \ell |W|$, then the total cost of all the edges incident to vertices of $W$ is less than $\frac{1}{r^2} \alpha$.\nIntuitively, this means that the cost contributed to a cut by the edges of $G_0$ always outweighs the cost contributed by the edges incident to $W$.\n\nConsider an important set $S \subseteq Q$ and let $S_0 = S \setminus \{x\} = S \cap Q_0$.\n\nLet $u_{S'} \in U$. The \emph{balance} of the vertex $u_{S'}$, denoted henceforth $\beta(u_{S'})$, is the difference between the total cost of the edges\nconnecting $u_{S'}$ with $S_0$ and the total cost of those connecting $u_{S'}$ with $Q_0 \setminus S_0$. Note that we have\n$$\beta(u_{S_0}) = r \cdot \left(1+\frac{1}{r}+\frac{1}{r^2}\right) \alpha - \left(r+1\right) \cdot \alpha = \frac{1}{r} \alpha.$$\nOn the other hand, for $S' \neq S_0$, the balance of $u_{S'}$ can be estimated as follows:\n$$\beta(u_{S'}) \leq (r-1) \cdot \left(1+\frac{1}{r}+\frac{1}{r^2}\right) \alpha + \alpha - r \cdot \alpha - \left(1+\frac{1}{r}+\frac{1}{r^2}\right) \alpha \n = -\frac{r+2}{r^2} \alpha < -\frac{1}{r^2} \alpha.$$\nConsequently, as $\frac{1}{r^2} \alpha$ is larger than the total cost of all edges incident to $W$,\nin a minimum cut separating $S$ from $Q \setminus S$, the vertex $u_{S_0}$ picks the $S$ side, while every vertex $u_{S'}$ for $S' \neq S_0$ picks the $Q \setminus S$ side.\n\nConsider now a vertex $w_Z \in W$ and consider two cases: either $u_{S_0} \in Z$ or $u_{S_0} \notin Z$; see also Figure~\ref{fig:double-exp}.\n\n\myparagraph{Case 1: $u_{S_0} \in Z$. } As argued above, all vertices of $U$ choose their side according to what is best in $G_0$, so $u_{S_0}$ is the only vertex in $U$ on the $S$ side.\nTo join the $S$ side, $w_Z$ has to cut $\ell\/2-1$ edges $u_{S'} w_Z$ of cost $1$ each, inflicting a total cost of $\ell\/2-1$;\nnote that it does not need to cut the edge $u_{S_0}w_Z$, which is of cost $1$ as $u_{S_0} \in Z$.\nTo join the $Q \setminus S$ side, $w_Z$ needs to cut $xw_Z$ of cost $\ell\/2-1$\nand $u_{S_0}w_Z$ of cost $1$, inflicting a total cost of $\ell\/2$.\nConsequently, $w_Z$ joins the $S$ side.\n\n\myparagraph{Case 2: $u_{S_0} \notin Z$. 
} Again, all vertices of $U$ choose their side according to what is best in $G_0$, so $u_{S_0}$ is the only vertex in $U$ on the $S$ side.\nTo join the $S$ side, $w_Z$ has to cut $\ell\/2$ edges $u_{S'}w_Z$ of cost $1$ each, inflicting a total\ncost of $\ell\/2$.\nTo join the $Q \setminus S$ side, $w_Z$ has to cut one edge of positive cost, namely the edge $xw_Z$ of cost $\ell\/2-1$.\nConsequently, $w_Z$ joins the $Q \setminus S$ side.\n\nThis finishes the proof of the lemma.\n\end{proof}\nLemma~\ref{lem:dblexp} shows that $G$ cannot be compressed using the technique presented in~\cite{HagerupKNR98}.\nTo see this, let us fix two vertices $w_Z$ and $w_{Z'}$ in $W$,\nand let $u_{S_0} \in Z \setminus Z'$ for some $S_0 \in \binom{Q_0}{r}$; set $S = S_0 \cup \{x\}$.\nThen, Lemma~\ref{lem:dblexp} shows that $w_Z$ and $w_{Z'}$ lie on different\nsides of the minimum cut between $S$ and $Q \setminus S$.\nThus, $w_Z$ and $w_{Z'}$ cannot be merged.\nSimilar but simpler arguments show that no other pair of vertices in $G$ can be merged.\nTo finish the proof of Theorem~\ref{thm:side}, observe that\n$$|W| = \binom{\ell}{\ell\/2} = \Omega\left(2^{\ell}\/\sqrt{\ell}\right) = \Omega\left(2^{\binom{k-1}{\lfloor (k-1)\/2 \rfloor} - k\/2}\right).$$\n\n\n\n\n\n\section{Introduction}\label{sec:intro}\nOne of the most popular paradigms when designing effective algorithms is preprocessing.\nThese days, in many applications, in particular mobile ones, memory usage is the main limitation, even though fast running time is also desired. The preprocessing needed for such applications reduces the size of the input data prior to some resource-demanding computations,\n without (significantly) changing the answer to the problem being solved.\nIn this work we focus on this kind of preprocessing, also known as graph compression, for flows and cuts.\nThe input graph needs to be compressed while preserving its essential flow and cut properties.\n\nCentral to our work is the concept of a \emph{mimicking network}, introduced by \nHagerup, Katajainen, Nishimura, and Ragde~\cite{HagerupKNR98}.\nLet $G$ be an edge-weighted graph with a set $Q \subseteq V(G)$ of $k$ terminals.\nFor a partition $Q = S \uplus \bar{S}$, \na minimum cut between $S$ and $\bar{S}$ is called a \emph{minimum $S$-separating cut}. \nA \emph{mimicking network} is an edge-weighted graph\n$G'$ with $Q \subseteq V(G')$ such that the weights of minimum $S$-separating cuts\nare equal in $G$ and $G'$ for every partition $Q = S \uplus \bar{S}$.\nHagerup et al.~\cite{HagerupKNR98} \nobserved the following simple preprocessing step: if two vertices $u$ and $v$\nare always on the same side of the minimum cut between $S$ and $\bar{S}$ for every choice\nof the partition $Q = S \uplus \bar{S}$, then they can be merged without changing the size\nof any minimum $S$-separating cut. \nThis procedure always\nleads to a mimicking network with at most $2^{2^k}$ vertices. 
\n\nThe above upper bound can be improved to a still doubly exponential bound\nof roughly $2^{\binom{k-1}{\lfloor (k-1)\/2 \rfloor}}$, as observed both by \nKhan and Raghavendra~\cite{KhanR14} and by Chambers and Eppstein~\cite{ChambersE13}.\nIn 2013, Krauthgamer and Rika~\cite{KrauthgamerR13} observed that the aforementioned preprocessing\nstep can be adjusted to yield a mimicking network of size $\mathcal{O}(k^2 2^{2k})$ for planar graphs.\nFurthermore, they introduced a framework for proving lower bounds, and showed that\nthere are (non-planar) graphs for which any mimicking network \nhas $2^{\Omega(k)}$ edges; a slightly stronger lower bound \nof $2^{(k-1)\/2}$ has been shown by Khan and Raghavendra~\cite{KhanR14}.\nOn the other hand, for planar graphs the lower bound of~\cite{KrauthgamerR13} is $\Omega(k^2)$. \nFurthermore, the planar graph lower bound applies even in the special case when all the terminals\nlie on the same face.\n\nVery recently, two improvements upon these results for planar graphs have been announced. \nIn a sequel paper, Krauthgamer and Rika~\cite{KrauthgamerR17} improve the \npolynomial factor in the upper bound for planar graphs to $\mathcal{O}(k 2^{2k})$ and show that the exponential dependence\nis actually only on the \emph{number of faces containing terminals}: if\nthe terminals lie on $\gamma$ faces, one can obtain a mimicking network\nof size $\mathcal{O}(\gamma 2^{2\gamma} k^4)$. \nIn a different work, Goranci, Henzinger, and Peng~\cite{GoranciHP17} showed a tight $\mathcal{O}(k^2)$ upper bound\nfor mimicking networks for planar graphs with all terminals on a single face.\n\n\myparagraph{Our results.}\nWe complement these results by showing an exponential lower bound for mimicking networks in planar graphs.\n\begin{theorem}\label{thm:main}\nFor every integer $k \geq 3$,\nthere exists a planar graph $G$ with a set $Q$ of $k$ terminals\nand an edge cost function under which every mimicking network for $G$ has\nat least $2^{k-2}$ edges.\n\end{theorem}\nThis nearly matches the upper bound of $\mathcal{O}(k2^{2k})$ of Krauthgamer and Rika~\cite{KrauthgamerR17}\nand is in sharp contrast with the polynomial bounds when the terminals lie on a constant\nnumber of faces~\cite{GoranciHP17,KrauthgamerR17}.\nNote that it also nearly matches the improved bound of $\mathcal{O}(\gamma 2^{2\gamma} k^4)$ for terminals on $\gamma$ faces~\cite{KrauthgamerR17},\nas $k$ terminals lie on at most $k$ faces.\n\nAs a side result, we also show a hard instance for mimicking networks in general graphs.\n\begin{theorem}\label{thm:side}\nFor every integer $k \geq 1$ that is congruent to $6$ modulo $8$, there\nexists a graph $G$ with a set $Q$ of $k$ terminals \nand $\Omega(2^{\binom{k-1}{\lfloor (k-1)\/2 \rfloor} - k\/2})$ vertices,\n such that no two vertices can be identified without strictly increasing the size of some minimum $S$-separating cut.\n\end{theorem}\nThe example of Theorem~\ref{thm:side}, obtained by iterating the construction of Krauthgamer and Rika~\cite{KrauthgamerR13},\nshows that the doubly exponential bound\nis natural for the preprocessing step of Hagerup et al.~\cite{HagerupKNR98}, and\none needs different techniques to improve upon it.\nNote that the bound of Theorem~\ref{thm:side} is very close to the upper bound given\nby~\cite{ChambersE13,KhanR14}.\n\n\myparagraph{Related work.}\nApart from the aforementioned work on mimicking\nnetworks~\cite{GoranciHP17,HagerupKNR98,KhanR14,KrauthgamerR13,KrauthgamerR17},\nthere 
has been substantial work on preserving cuts and flows approximately;\nsee, e.g.,~\cite{robi-apx,robi-old,mm-sparsifiers}.\nIf one wants to construct mimicking networks for vertex cuts in\nunweighted graphs with deletable terminals (or with small integral\nweights), the representative sets approach of Kratsch and Wahlstr\"{o}m~\cite{KratschW12}\nprovides a mimicking network with $\mathcal{O}(k^3)$ vertices, improving upon a previous\nquasipolynomial bound of Chuzhoy~\cite{Chuzhoy12}.\n\n\n\medskip\n\nWe prove Theorem~\ref{thm:main} in Section~\ref{sec:main}\nand show the example of Theorem~\ref{thm:side} in Section~\ref{sec:side}.\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\section{introduction}\nAbout $100$ young massive stars are located in the central half-parsec of the Milky Way's nuclear star cluster, with a supermassive black hole at its center. The proper motions of these stars indicate an average sense of clockwise rotation around the black hole \citep{2000MNRAS.317..348G}. More detailed 3-dimensional analyses show that a significant fraction ($\sim 30-60 \%$) of these stars are moving in a thin clockwise disc \citep{2003ApJ...590L..33L, 2003ApJ...594..812G, 2006ApJ...643.1011P, 2006ApJ...648..405B, 2009ApJ...690.1463L, 2009ApJ...697.1741B, 2014ApJ...783..131Y, 2022ApJ...932L...6V}. The other stars clearly do not belong to the disc, with several groups disagreeing about the interpretation of their precise geometry. \cite{2003ApJ...594..812G}, \cite{2006ApJ...643.1011P}, \cite{2009ApJ...697.1741B}, and \cite{2022ApJ...932L...6V} argue that the other stars form another disc-like structure, albeit with somewhat diffuse clustering of the stars' orbital planes, possibly connected by a strong warp to the clockwise disc. \cite{2022ApJ...932L...6V} goes further and claims that several diffuse disc-like structures can be identified. By contrast, \cite{2009ApJ...690.1463L} and \cite{2014ApJ...783..131Y} argue that all these secondary disc-like structures are not statistically significant. These groups also disagree on the precise fraction of the young stars that belong to the clockwise disc.\n\nIt is appealing to consider these kinematic data as the result of a partial disruption of an initially coherent stellar disc. There is strong evidence that the young stars were formed {\it in situ} \citep{2005MNRAS.364L..23N, 2006ApJ...643.1011P, 2009ApJ...690.1463L}, and by far the most natural scenario for this is star formation inside a gravitationally unstable gas disc \citep{2003ApJ...590L..33L, 2008Sci...321.1060B}. \cite{2009A&A...496..695S} suggested that the circumnuclear gas ring located at a distance $\sim 2$pc from the black hole could exert a disruptive torque on the disc, but their analysis did not take the disc's self-gravity into account. Direct $N$-body simulations of this process by \cite{2016ApJ...818...29T} showed that this mechanism is not efficient in disrupting the disc. \cite{2011MNRAS.412..187K, 2015MNRAS.448.3265K, 2022MNRAS.tmp.2848P} explored whether the disc could be disrupted by stochastic torques from vector resonant relaxation (VRR), but found that in order for this to happen in $5\times 10^6$ years, the masses of the background stars in the cluster had to be unrealistically high, over $100M_{\odot}$.\n\nWe think that the key to the puzzle of the disrupted disc lies in its gravitational interaction with the rotating nuclear star cluster inside which it resides. 
More specifically, we show that the effect called ``resonant friction'' \ncan produce a very strong torque on the disc that tends to align the disc's rotation with that of the cluster, and in the process can partially disrupt \nthe disc in less than $5\times 10^6$ years. The Milky Way's nuclear cluster is rotating neither clockwise nor counterclockwise relative to the line of sight; instead its rotational axis is close to that of the Galaxy, and thus the cluster's rotation is misaligned with that of the disc \citep{2014A&A...570A...2F}. In Section 2 we demonstrate using numerical simulations that this configuration naturally leads to disc disruption, and that the expected orbital distribution is qualitatively similar to the one that is observed in the Galactic Center. Typically we see an inner disc, sometimes warped or broken up into rings, co-existing with more diffuse, disoriented or weakly clustered orbits of the outer stars, with the inner disc containing $\sim 50\%$ of the total stars. The reader uninterested in the details of this paper should just look at Figure 1: it contains the paper's most interesting results. In this section we also explain how we choose the range of rotational parameters of the cluster based on existing observations of the cluster's mean radial velocities. In Section 3 we back up the numerical results by obtaining an analytical estimate of the Resonant Friction timescale inside a slowly-rotating cluster. In Section 4 we speculate that resonant friction would reorient gaseous accretion discs that are formed inside a nuclear star cluster from radially infalling, tidally captured gas clouds. We discuss what impact this would have on the spins of supermassive black holes. We present our numerical algorithm in the Appendix.\n\begin{figure*}\n\centering\n \n \includegraphics[width=.450\textwidth]{norotation.jpg}\n \includegraphics[width=.450\textwidth]{rotation0.1octupole.jpg} \n \includegraphics[width=.450\textwidth]{rotation0.2octupole.jpg}\n \includegraphics[width=.450\textwidth]{rotation0.3octupole.jpg}\n \includegraphics[width=.45\textwidth]{rotation0.4octupole.jpg}\n \includegraphics[width=.45\textwidth]{rotation0.5octupole.jpg}\n \n \caption{{\bf Evolution of the stellar disc.} The background cluster is initially rotating along the $z$ axis with the dimensionless rotation rate $\tilde{\gamma}$ defined in the text and specified at the top of each subfigure. The counter-rotating disc is injected at $t=0$. The figure shows some randomly chosen examples of the time evolution of $n_z$, where $\vec{n}$ is the unit vector directed along the star's angular momentum, for all the stars in the disc. The red lines mark the inner half of the disc's stars and the green lines mark the outer half. As the rotation rate increases, we observe the disruption of the outer parts of the disc and a tendency for the inner disc to align with the cluster. At high rotation rates, the disc gets warped or broken into transient rings. 
These effects are very important at $5$Myr, the likely age of the young stellar population in the Galactic Center.\n\n \small\n }\n\end{figure*}\n\newline\newline\n\section{The stellar disc inside a rotating cluster}\n\subsection{The results of numerical experiments}\nWe model only the inner part of the central cluster, which in our simulations consists of $1.2\times 10^5$ solar-mass stars, located inside the central $0.4$pc with the spatial number density of stars given by the Peebles-Young distribution, $n(r)\propto r^{-1.5}$. The cluster, if it were extended out to $1$pc, would contain about half a million solar masses. The numbers are based on \cite{2007A&A...469..125S}, but it is questionable how faithfully they represent the distribution of dynamical scatterers in the Galactic Center, since (a) there seems to be a hole in the stellar distribution in the central $\sim 0.2$pc \citep{2009A&A...499..483B, 2009ApJ...703.1323D, 2010ApJ...708..834B}, and (b) stellar-mass black holes are expected to segregate inside the central $0.1$pc \citep{2000ApJ...545..847M, 2009ApJ...697.1861A}. \n\n\nDespite these uncertainties, we believe that the numbers are correct to within an order of magnitude. Our young massive disc consists of $105$ stars of $80 M_\odot$ each. It is the total mass of the disc that plays the key role, and we get similar results if we increase the number of stars and decrease their masses, while keeping the total mass constant. The disc's surface density is $\Sigma\propto r^{-2}$ \citep{2006ApJ...643.1011P} and its outer edge is at $0.4$pc.\n\nIn our numerical experiments, we sacrifice the precision of individual interactions between stellar orbits in favour of the extremely rapid speed and ease with which we can explore physical effects. The fastest dynamics responsible for the time evolution in the orientation of orbital planes is captured when the precessing orbits are replaced with the annuli that they trace; this type of relaxation is called ``vector resonant relaxation'' \citep{1996NewA....1..149R}. We, however, replace each annulus with a circular ring of comparable radius, and keep only quadrupolar and octupolar terms in the interaction potential between the rings. Furthermore, instead of considering a continuum of the ring radii, we choose a modest number of discrete radius values, i.e., ``the bins'' ($10$ for the computations shown in Figure 1). Our computational vectorization benefits from each bin having an equal number of stars, including the stars that belong to the disc; there is no restriction on the stars' individual masses. The requirement that the overall distribution (approximately) represents $n(r)\propto r^{-1.5}$ determines the value of the radius assigned to each bin. The main advantage of the binning is that it allows us to perform $\sim N_{s}$ instead of $\sim N_s^2$ operations per step. This point, together with other details of our numerical procedure, is explained in the Appendix. The commented Matlab code is available upon request.\n\nEach simulation models $3\times 10^7$ years of evolution and runs on an ordinary desktop computer. It takes several minutes to complete if only quadrupolar terms are taken into account, and several hours if the octupole terms are included.\nWe use the ``octupolar'' simulations for the production runs, and the ``quadrupolar'' ones as a super-fast way to explore the parameter space. 
The only significant difference is that the inner half of the disc tends to remain somewhat more coherent if the octupolar terms are included. This allows us to quickly change the parameters of the system, and explore the convergence and stability of the results. The results of the numerical simulations are illustrated in Figure 1. The disc is injected in counter-rotation with respect to the cluster, so that the $\\vec{n}_i=\\vec{l}_i\/l_i$ are clustered in the southern hemisphere relative to the cluster rotation; here $\\vec{l}_i$ is the angular momentum of the $i$'th star. If the cluster is non-rotating, the disc stays coherent even though its bending modes are excited, consistent with the findings of \\cite{2011MNRAS.412..187K,2015MNRAS.448.3265K} and, most convincingly, with the direct simulations of \\cite{2022MNRAS.tmp.2848P}. For greater rotation rates, we see that within several million years, the orbits of the inner half of the stars (represented by the red lines) are dragged towards corotation with the cluster, while the outer stars get dispersed. For more rapidly rotating clusters we observe the disc being warped or split into distinctly oriented rings, which are seen as coherent groups of the $\\vec{n}$-vectors. This complexity is consistent with the phenomenology seen in the galactic center.\n\nWhat is the rotation rate of the cluster?\nIn the next subsection we mathematically define the dimensionless rotation rate, and show how to estimate it from the radial velocity data. Using existing literature, we obtain a rough estimate of $\\tilde{\\gamma}\\sim 0.3$.\n\n\\subsection{The rotation rate of the relaxed cluster}\n\n Inside the gravitational radius of influence of a supermassive black hole, the orbit-averaged torques between the stars moving on slowly-evolving elliptical orbits, drive a fast stochastic evolution of the stars' inclinations and eccentricities \\citep{1996NewA....1..149R, 2018ApJ...860L..23B}. This is known as Resonant Relaxation. Importantly, the orbit-averaged dynamics leaves the semimajor axes unchanged. As argued in the original discovery paper, the relaxed state should be in a statistical equilibrium, with the probability distribution function proportional to the appropriate Boltzmann weight:\n\\begin{eqnarray}\n P(m, a, l, l_z,\\omega,\\Omega)&=&N(m, a)\\times\\nonumber\\\\\n & &\\exp\\left[-m\\left(\\beta\\epsilon-\\vec{\\gamma}\\cdot\\vec{l}~\\right)\\right].\n \\label{Prob}\n\\end{eqnarray}\nHere $m$ is the mass and $a$ is the semimajor axis. On the left-hand side of the equation, the two Delaunay actions $l, l_z$ are the magnitude and the z-component of the specific angular momentum vector $\\vec{l}$, and the two Delaunay angles $\\omega, \\Omega$ are the argument of periapsis and the longitude of ascending node. On the right-hand side $N(m,a)$ is the normalization factor that will play no role in the following discussion, and $\\epsilon$ is the specific orbit-averaged potential energy of the star's interaction with the other stars in the cluster. The inverse temperature $\\beta$ can be either positive, zero or negative, and $\\vec{\\gamma}$ is the rotational vector of the cluster; $\\vec{\\Omega}_{\\rm cl}\\equiv \\vec{\\gamma}\/\\beta$ is known as a thermodynamic angular velocity of the cluster, and it coincides with the angular velocity of the cluster's precession if the cluster is lopsided (e.g., Gruzinov et al.~2020). 
We can define the dimensionless rotation rate of the cluster as follows:\n\begin{equation}\n \tilde{\gamma}=m_0 l_0 \gamma,\n \label{tildegamma}\n\end{equation}\nwhere $m_0$ is the characteristic mass of a star in the cluster and $l_0$ is the characteristic specific angular momentum of a stellar orbit in the cluster. We choose $m_0=M_\odot$ and $l_0=\sqrt{GMr_0}$, where $M$ is the mass of the supermassive black hole and $r_0=0.1$pc is the characteristic radius of the stellar disc. It is this rotation parameter that labels the plots in Figure 1.\n\nA number of authors have explored thermodynamic equilibria of this form \citep{1996NewA....1..149R,2014JPhA...47C2001T, 2015MNRAS.448.3265K, 2017ApJ...842...90R, 2018PhRvL.121j1101S, 2019PhRvL.123b1103T, 2020MNRAS.493.2632T, 2020ApJ...905...11G, 2022MNRAS.514.3452M, 2022arXiv220207665M}.\nWe now show that for a slowly-rotating edge-on cluster, Eq.~(\ref{Prob}) together with some reasonably natural assumptions leads to a simple relationship\nbetween the mean radial velocity and its dispersion, both measured from, e.g., pixel-integrated spectroscopy. This relationship can be used to estimate the rotational velocity of the relaxed cluster (or infer it, if the data are detailed enough).\n\nConsider a cluster whose axis of rotation $z$ lies in the plane of the sky; thus $\vec{\gamma}=\gamma \hat{z}$. We shall assume that the cluster has rotational symmetry about this axis. The Milky Way nuclear star cluster's rotation axis is perpendicular to the Galactic plane \citep{2008A&A...492..419T,2014A&A...570A...2F}, and the cluster appears to be axially symmetric in its inner parts despite some inferred triaxiality in its outer parts \citep{2017MNRAS.466.4040F}. Let the $x$-axis be directed away from the observer along the line of sight to the cluster, and let the origin of the $x, y, z$ coordinate system coincide with the center of the cluster. Each line of sight is characterized by coordinates $y,z$. The mean line-of-sight radial velocity is given by\n\begin{equation}\n \langle v_r(y,z)\rangle=\int P(\vec{r}, \vec{v})~v_x~dv_x~dv_y~dv_z~dx,\n \label{radvel}\n\end{equation}\nwhere $P(\vec{r}, \vec{v})$ is the probability distribution function for a star moving with velocity $\vec{v}$ to be located at position $\vec{r}$. The latter can be obtained by using Eq.~(\ref{Prob}) and expressing all the Delaunay variables in terms of $\vec{r}, \vec{v}$ (this is true because the Jacobian of a canonical transformation equals $1$). Consider a reflection \n\begin{eqnarray}\n x&\rightarrow&-x\nonumber\\\n v_x&\rightarrow&-v_x\label{symmetry}\n\end{eqnarray}\nabout the plane of the sky. This leaves $a,l,\epsilon$ unchanged, but flips the sign of $\vec{\gamma} \cdot \vec{l}$. Therefore,\n\begin{eqnarray}\nP(-x,y,z,-v_x, v_y, v_z)&=&\exp\left[-2 m\gamma(y v_x-x v_y)\right]\times\nonumber\\\n & &P(x,y,z,v_x,v_y, v_z).\label{flip1}\n\end{eqnarray}\nWe assume that the cluster is rotating slowly, with $m \gamma l\ll 1$.\nWe therefore expand $\exp\left[-2 m\gamma(y v_x-x v_y)\right]\simeq 1-2 m\gamma(y v_x-x v_y)$. 
Multiplying the above equation by $v_x$ and integrating over the velocities and over $x$, we obtain the following relation:\n\\begin{equation}\n \\langle v_r \\rangle -m \\gamma y \\langle v_r \\rangle^2=m\\gamma \\left(y\\sigma_r^2-\\langle x T_{xy}\\rangle\\right).\n \\label{relation1}\n\\end{equation}\nHere $\\sigma$ is the radial velocity dispersion, $T_{ij}=\\overline{v_i v_j}$ is the velocity tensor, and $\\langle \\rangle$ stands for the average along the line of sight. Solving for ${\\gamma}$ and using Eq.~(\\ref{tildegamma}), and assuming that $m=m_0$ is the typical mass of the observed stars, we get\n\\begin{equation}\n \\tilde{\\gamma}={l_0 \\langle v_r\\rangle\\over y(\\sigma_r^2+\\langle v_r\\rangle^2)-\\langle x T_{xy}\\rangle}.\n\\end{equation}\nIn the above equation everything is measurable except $\\langle x T_{xy}\\rangle$, since even if $v_y$ for individual stars could be measured, we would have no information about $x$. Moreover, this term is in fact non-zero for anisotropic velocity ellipsoid and could be similar to $y \\sigma_r^2$ in magnitude. Note however, $T_{xy}=0$ for an isotropic velocity distribution, which is expected to be produced by scalar resonant relaxation near the black hole\\footnote{The prediction from the Resonant Relaxation is that the closer to the black hole, the more isotropic is the velocity ellipsoid.}. We shall assume this; in principle this assumption could be tested for consistency by checking that so-measured $\\tilde{\\gamma}$ is pixel-independent\\footnote{\\cite{2017MNRAS.466.4040F} performed orbit-modelling of the whole nuclear cluster out to $\\sim 8$pc, and found that the anisotropy of the velocity ellipsoid is significant in the intermediate range of radii around $\\sim 1$pc, but smaller at greater or much smaller radii. The inference is clearly not very precise in the regions where the stellar disc is located.}. We expect the result thus obtained to be correct to within a factor of $\\sim 2$.\n\nThe data in \\cite{2014A&A...570A...2F} is quite noisy at distances of interest. From Figure $11$ of that paper, the inner bin at $10$ arcsec (which corresponds to $\\sim 0.4$pc) has $\\langle v_r\\rangle\\sim 25 $km\/sec and $\\sigma\\sim 85$km\/sec. Plugging the numbers in the above equation, we get \n\n\\begin{equation}\\tilde{\\gamma}\\sim 0.3\\end{equation}. \n\nWe emphasize that this number is only an order of magnitude estimate and therefore we explore a range of values, as indicated in Figure $1$.\n\n\n \n\n\n\n\n\n \\section{Analytical estimate of Resonant Friction timescale}\nResonant friction was first discussed as a phenomenon by \\cite{1996NewA....1..149R}, as a dissipative counterpart to the stochastic resonant relaxation. Both are necessary for the thermal equilibrium to be established. The origin of the friction can be understood as follows. Consider the probability distribution in Eq.~(\\ref{Prob}) for high masses, $m\\gg m_0$. A star with such mass will tend to be near the orbit that minimizes the Jacoby constant $\\epsilon-\\vec{l}\\cdot\\vec{\\gamma}\/\\beta$. \\cite{2020ApJ...905...11G} studied such orbits and showed that they are stationary in the frame of reference that is rotating with the angular velocity $\\vec{\\gamma}\/\\beta$, and are typically aligned with the cluster's rotation. This occurs despite stochastic Vector Resonant Relaxation torques that are perturbing the massive orbit, because Resonant Friction {\\it drives} massive objects in a cluster towards these special orbits. 
The existence of Resonant Friction is thus \nclosely connected to the existence of thermodynamic equilibrium.\n\nResonant Friction explains why, in numerical simulations of an Intermediate-Mass Black Hole inspiraling through a nuclear cluster, the former's orbit rapidly aligns itself with the cluster's rotation. \cite{2012ApJ...754...42M} demonstrated explicitly in numerical experiments that this reorientation occurs because of secular torques, but they did not make the connection to thermodynamics (the community has been somewhat reluctant to accept Madigan \& Levin's arguments, instead attributing the reorientation to $2$-body scattering processes).\n\nIt is the balance between the fluctuations and the dissipation that establishes the Boltzmann distribution. Below we use this fact to estimate the dissipation timescale in a rotating cluster.\nFirst, we observe that the $z$-component of the angular momentum of an object inside the cluster experiences (a) a stochastic walk, due\nto random torques from other orbits, and (b) a systematic upward drift, which tends to align the orbit with the\ncluster rotation. The corresponding evolution equation\nfor the $l_z$-distribution $f(m,l_z,t)$ can be written as\n\begin{equation}\n {\partial f\over \partial t}=-{\partial\over \partial l_z}\left[F_{\rm drift}+F_{\rm stochastic}\right].\n \n \label{evolution}\n\end{equation}\n\n\n\nHere $m$ is the mass of a star, and $l_z=L_z\/m$, where $L_z$ is the $z$-component of its angular momentum, and $F_{\rm drift}$ and $F_{\rm stochastic}$ are the fluxes in $l_z$-space due to the resonant friction and stochastic resonant relaxation, respectively.\nThe Fokker-Planck form of the fluxes is given by\n\begin{eqnarray}\n F_{\rm drift}(m,l_z)&=&V(m,l_z)~f(m,l_z)\nonumber\\\n F_{\rm stochastic}(m,l_z)&=&-{\partial\over\partial l_z}\left[ D(l_z) f(m,l_z)\right]\label{fluxes}\n\end{eqnarray}\nThe drift velocity $V(m,l_z)$ is mass-dependent, while the stochastic diffusion coefficient $D(l_z)$ is mass-independent because of the equivalence principle (the torque per mass on the orbit from the other stars depends only on the orbit, and not on the mass of the star).\nMoreover, since flipping the direction of the angular momentum $\vec{l}\rightarrow-\vec{l}$ does not change the period-averaged orbit,\nwe must have $D(l_z)=D(-l_z)$.\n\nFor slowly rotating, nearly spherically symmetric clusters $\epsilon$ depends only weakly on the orbit's orientation. Therefore, the equilibrium distribution is given by\n\begin{equation}\n f(m,l_z)=f_0(m)\exp(m\gamma l_z),\n \label{equilibrium}\n\end{equation}\nwhere $\vec{\gamma}=\gamma~\hat{z}$ is the rotational vector of the cluster. In equilibrium, $F_{\rm drift}+F_{\rm stochastic}=0$. Therefore,\nsubstituting Eq.~(\ref{equilibrium}) into Eq.~(\ref{fluxes}), we obtain the relationship for the drift velocity\n\begin{equation}\n V(m,l_z)=V(0,l_z)+m\gamma D(l_z).\n\end{equation}\nThe second term on the right-hand side is the resonant-friction induced\npart of the drift velocity. The first term\n\begin{equation}\n V(0,l_z)=D^{\prime}(l_z)\n\end{equation}\nis the drift velocity of the zero-mass particles, required to enforce their\nfully isotropic equilibrium distribution. 
The parity symmetry of $D(l_z)$ results in $V(0,0)=0$, and thus\n\begin{equation}\n V(m,0)=m\gamma D(0),\n\end{equation}\nwhich establishes a key relationship between $V$, $\gamma$, and $m$.\nIt allows us to relate the frictional timescale $t_{\rm fr}$ to that of the vector resonant relaxation, $t_{\rm VRR}$. We have\n\begin{eqnarray}\n t_{\rm fr}&\sim & l_0\/V\sim l_0\/(m\gamma D),\nonumber\\\n t_{\rm VRR}&\sim& l_0^2\/D,\label{timescales}\n\end{eqnarray}\nwhere, as in the previous section, $l_0$ is the characteristic specific angular momentum of an orbit in the cluster. Thus we have\n\begin{equation}\n t_{\rm fr}\sim \tilde{\gamma}^{-1} {m_0\over m} t_{\rm VRR},\n \label{tfr1}\n\end{equation}\nwhere $m_0$ is the mass of a typical star in the cluster and, as before, the dimensionless rotation parameter is given by\n\begin{equation}\n \tilde{\gamma}=m_0\gamma l_0.\n\end{equation}\n\nSo what is $t_{\rm VRR}$? Following the arguments of \cite{1996NewA....1..149R}, one can show that\n\begin{equation}\n t_{\rm VRR}\sim {M_{\rm BH}^2\over M_* m_0} {P^2\over t_{\rm coh}},\n\end{equation}\nwhere $M_{\rm BH}$ is the mass of the central black hole, $M_*$ is the mass of the stellar cluster, $P$ is the characteristic orbital period, and $t_{\rm coh}$ is the characteristic coherence timescale for the fluctuating torque. \cite{1996NewA....1..149R} took $t_{\rm coh}\sim t_{\rm VRR}$, arguing that VRR is the main mechanism for the change in orientation of the orbits. Thus they obtain\n\begin{equation}\n t_{\rm VRR}\sim {M_{\rm BH}\over \sqrt{M_* m_0}}P.\n \label{tVRR1}\n\end{equation}\nSubstituting this into Eq.~(\ref{tfr1}), we obtain\n\begin{equation}\n t_{\rm fr}\sim {M_{\rm BH}~P\over \tilde{\gamma}\sqrt{M_* m_0}}.\n \label{tfr2}\n\end{equation}\nHowever, this argument has a serious limitation. If a cluster is rotating, its stellar distribution is flattened, with the ellipticity $\sim \tilde{\gamma}^2$. This ellipticity drives precession of the stellar orbits with characteristic timescale\n\begin{equation}\n t_{\rm prec}\sim {M_{\rm BH}\over M_*}\tilde{\gamma}^{-2}P.\n\end{equation}\nThis timescale becomes comparable to the one given in Eq.~(\ref{tVRR1}) for $\tilde{\gamma}\sim N_s^{-1\/4}\sim 0.1$, where $N_s=M_*\/m_0$ is the number of stars in the cluster. Therefore, for $\tilde{\gamma}\gg N_s^{-1\/4}$, we should consider $t_{\rm coh}\sim t_{\rm prec}$. In that case, we get\n\begin{equation}\nt_{\rm VRR}\sim {M_{\rm BH}\over m_0}\tilde{\gamma}^2 P.\n\end{equation}\nSubstituting this into Eq.~(\ref{tfr1}), we get\n\begin{equation}\nt_{\rm fr}\sim \tilde{\gamma} {M\over m} P.\n\label{tfr3}\n\end{equation}\nNote that this does not depend on the cluster mass $M_*$. To sum up: if $\tilde{\gamma}\ll N_s^{-1\/4}$, one should use Eq.~(\ref{tfr2}); in the opposite case, one should use Eq.~(\ref{tfr3}). Clearly, these expressions are very approximate, and one needs more precise arguments, which are beyond the scope of this paper, to obtain more reliable expressions with more precisely stated domains of validity. \n\nHaving derived the Resonant Friction timescale for a stellar orbit of mass $m$, we note that the effect should be present for any massive object inside the cluster that is gravitationally coherent. Thus it should apply to a disc of stars, as long as the orbital planes of the stellar orbits remain clustered. We thus apply the expression above to the whole stellar disc. 
Taking $m=M_{\\rm disc}=8000 M_\\odot$, $M_{\\rm BH}=4\\times 10^6 M_\\odot$, $\\tilde{\\gamma}=0.3$, and $P=1500$yr (corresponding to a circular orbit at $0.1$pc), we get \n$t_{\\rm fr}\\sim 2.5$Myrs. This is consistent with the results of our numerical experiments described in Section 2.\n\n\n\n\\section{Alignment of accretion discs with the nuclear cluster rotation}\n\\begin{figure*}[t]\n \\centering\n \\epsfxsize=16cm \n \\epsfbox{scenario2.jpeg}\n \n \\caption{\n\n \\small\nPictorial description of the scenario described here. An infalling cloud of gas is tidally disrupted and its material forms an accretion disc around the SMBH. The initial orientation of the disc is random, but the Resonant Dynamical Friction drives the disc's angular momentum $\\vec{L}_{\\rm disc}$ into alignment with that of the cluster, $\\vec{L}_{\\rm cluster}$. }\n \\end{figure*} \n \n\\subsection{General remarks}\nA classic argument by \\cite{1982MNRAS.200..115S}, \nand its more recent elaborations \n\\citep{2002MNRAS.335..965Y, 2012AdAst2012E...7K}, \nstrongly suggest that supermassive black holes \n(SMBH) \nacquire most of their mass by accreting gas from thin discs in galactic nuclei. If the orientation of an accretion disc\nrelative to the black hole could be maintained, this mode of accretion would drive the black hole to a very high spin, with dimensionless spin parameter $\\alpha>0.9$ \n\\citep{1973blho.conf..343N}. \nThe discovery of \n\\cite{1975ApJ...195L..65B} \nthat gravito-magnetic forces drive an accretion\ndisc into alignment with the equatorial plane near the black hole, would suggest that we could expect essentially all supermassive black holes to be rapidly spinning. Measurements of x-ray and radio emission from accreting supermassive black holes in galactic nuclei indicate that they are indeed rotating rapidly \\citep{2020arXiv200808588J,2019ApJ...886...37D}. The spin energy of SMBHs is thought to be responsible for powering relativistic jets emanating from galactic nuclei, and thus SMBH spin is the key agent of feedback during the formation of elliptical galaxies. \n\nHowever, much remains to be understood about the spins of SMBHs. The measurements of the black hole spins rely on rather complex models of accretion discs and jets, and thus may contain systematic uncertainties. On the theoretical side,\n\\cite{2006MNRAS.373L..90K} \nargued that the accretion is likely to be driven by randomly-oriented infall episodes; \nthis is supported observationally by the fact that many radio jets seem to be randomly oriented relative to their host spiral galaxies. In this stochastic-accretion picture, one expects nearly half of the transient accretion discs to be counter-rotating relative to the black hole. \nImportantly, a counter-rotating innermost stable orbit has a greater radius than a co-rotating innermost stable orbit, and therefore the counter-rotating discs have a larger lever arm. \\cite{2008MNRAS.385.1621K} argued that as a result of this, the supermassive black holes are spun down, on average, to low spins of $\\alpha\\sim 0.3$. A more elaborated model of \\cite{2018MNRAS.477.3807F} makes a distinction between lower-mass black holes of $M_{\\rm BH}\\lesssim 10^7M_\\odot$ and the more massive ones. 
These authors argue that despite the stochastic feeding, the spin directions of the lower mass black holes are expected to remain adiabatically aligned with the\nangular momentum directions of their accretion discs, while the higher-mass black holes might experience both spin-ups and spin-downs in equal measure. \n\nLISA will provide exquisitely precise measurements of the supermassive black hole spins, by measuring the gravitational waves from mergers of stellar-mass and intermediate-mass black holes with SMBHs. It is thus imperative to develop reliable theoretical predictions for the SMBH spins.\nBelow we argue that a key piece of physics is missing in the current theoretical analyses, namely the stabilizing effect of rotating black hole clusters on the orientation of the accretion disc. \nIndeed, if the stellar disc in our Galactic Center gets reoriented into alignment with the cluster, why would the same not happen to an accretion disc in a galactic nucleus that is more active than ours? \n\n\n\subsection{Resonant friction on accretion discs}\nThe scenario we consider is depicted in Figure 2. In its first stage, a cloud of gas with a mass of $\sim 10^4M_\odot$ falls in, gets tidally disrupted and forms a gaseous accretion disc. \nIf the cluster is flattened due to its rotation, it will exert a gravitational torque on the\ndisc, causing it to precess. The differential precession could distort the disc. The hydrodynamics of distorted discs could be complicated, but it is reasonable to expect that after much dissipation from shocks etc., the disc would reassemble in the equatorial plane of the cluster. Naively it would seem that the disc could equally likely end up co-rotating or counter-rotating with the cluster.\n\nHowever, resonant friction is expected to break the symmetry between co- and counter-rotation, and drive the disc's and the cluster's angular momenta into alignment. \nResonant friction should act on any massive object inside the cluster; in particular, it should affect the accretion disc, as long as the disc remains coherent and creates a gravitational perturbation that affects the cluster. Whether the disc remains coherent long enough to experience the effect is the key question.\nIn the absence of a full hydrodynamical treatment, our intuition can be guided by simulations of stellar disc evolution inside a rotating cluster, such as the one illustrated in Figure $1$. \nThe initial disc remains coherent enough until its overall orbital angular momentum flips into co-rotation with the cluster. If instead of the stellar disc we introduced a gaseous disc of the same mass, it would\nfirst flip into co-rotation with the cluster, and then settle into the equatorial plane of the cluster due to the hydrodynamic torques that are caused by the disc's differential precession.\n\nIf this picture were correct (and it obviously needs to be tested by hydrodynamic simulations!), then the rotating clusters could serve as\na stabilizing flywheel for the material that accretes onto the SMBH. 
It would imply that\n\\begin{itemize}\n \\item The SMBH spin direction is aligned with that of its host cluster\n \\item The SMBH spin magnitude could reach a very high value, unless there exists an as yet undetermined mechanism for the SMBH spindown (such as its super-radiant coupling to a scalar field or a cosmic string)\n \\item Since the resonant friction from cluster rotation will affect inspiraling Intermediate-Mass \n Black Holes, one expects a non-trivial alignment between the inspiraling orbit and the SMBH spin. Such an alignment will be detectable by LISA.\n\\end{itemize}\n\nIn proposing hydrodynamic numerical experiments, we need to ascertain that the scenario is reasonable by comparing the resonant friction timescale with that of the dissolution of the disc due to a differential precession. \nAs argued in the previous section, the timescale for the friction to align the disc is\n\\begin{equation}\n t_{\\rm fr}\\sim \\tilde{\\gamma} ~{M_{\\rm BH}\\over M_{\\rm disc}}~P,\n\\end{equation}\nthis equation is valid so long as $\\tilde{\\gamma}>N_s^{-1\/4}$, where $N_s$ is the number of stars in the cluster. Note that this timescale is comparable to the period of a bending mode of the disc due to its self-gravity, $t_{\\rm bend}\\sim (M_{\\rm BH}\/M_{\\rm disc}) P$. Therefore the reorientation of the disc due to Resonant Friction is expected to be accompanied by excitation of the bending motion of the disc. This explains the excitation of the\ndisc warps, and the disc's breaking into several rings in some of our simulations.\n\nNote also that the mass of the cluster, assumed to satisfy $M_{\\rm disc}2$ is prime.\nFor a positive integer $m$, $\\mathbb{Z}_p^m$ and $\\mathbb{Z}_{p^2}^m$ are the ring extension.\nThe linear codes over $\\mathbb{Z}_p$ and $\\mathbb{Z}_{p^2}$ are subgroups of $\\mathbb{Z}_p^m$ and $\\mathbb{Z}_{p^2}^m$, respectively.\n\nThe classical Gray map from $\\mathbb{Z}_{p^{k+1}}$ to $\\mathbb{Z}_p^{p^k}$, where $p$ is a prime and $k\\geq 1$, is given in \\cite{LS}.\nHere, let $k=1$, the Gray map $\\phi$ from $\\mathbb{Z}_{p^2}$ to $\\mathbb{Z}_p^{p}$ is as follows.\n\n$$\\begin{array}{ccc}\n\\mathbb{Z}_{p^2} & \\longrightarrow & \\mathbb{Z}_p^p\\\\\n\\theta & \\longmapsto & \\phi(\\theta)\n\\end{array},$$\n$\\phi(\\theta)=\\theta''(1,1,\\ldots,1)+ \\theta' (0,1,\\ldots,p-1)$, where\n$\\theta = \\theta'' p + \\theta',$ and $\\theta',\\theta''\\in\\{0,\\ldots,p-1\\}.$\n\nThe homogeneous weight \\cite{LS} $wt_{hom}$ on $\\mathbb{Z}_{p^2}$ is $$\nwt_{hom}(x)=\\left\\{\n\\begin{array}{ll}\np,& \\text{ if } x\\in p\\mathbb{Z}_{p^2}\\backslash\\{0\\}, \\\\\np-1,& \\text{ if } x\\notin p\\mathbb{Z}_{p^2}, \\\\\n0,& \\text{ if } x=0.\n\\end{array}\n\\right.\n$$\nAnd the homogeneous distance is $d_{hom}=wt_{hom}(\\mathbf{a}-\\mathbf{b})$, where $\\mathbf{a},\\mathbf{b}\\in \\mathbb{Z}_{p^2}^n$.\nThen, $\\phi$ is an isometry from $(\\mathbb{Z}_{p^2}^n, d_{hom})$ to $(\\mathbb{Z}_p^{pn}, d_H)$, where $d_H$ is the Hamming distance on $\\mathbb{Z}_p^{p}$.\n\nIf $\\mathcal{C}$ is a linear code over $\\mathbb{Z}_{p^2}$, then the code $C=\\phi(\\mathcal{C})$ is called a \\emph{$\\mathbb{Z}_{p^2}$-linear} code.\nThe dual of a linear code $\\mathcal{C}$ of length $n$ over $\\mathbb{Z}_{p^2}$, denoted by $\\mathcal{C}^{\\bot}$, is defined as\n$$\\mathcal{C}^\\bot=\\{\\mathbf{x}\\in \\mathbb{Z}_{p^2}^n:\\langle\\mathbf{x},\\mathbf{y}\\rangle=0 ~\\rm {for ~all}~ \\textbf{y}\\in \\mathcal{C}\\},$$\nwhere $\\langle,\\rangle$ denotes the usual Euclidean inner product.\nThe code 
$C_\\bot=\\phi(\\mathcal{C}^{\\bot})$ is called the \\emph{$\\mathbb{Z}_{p^2}$-dual code} of $C=\\phi(\\mathcal{C})$.\n\n\nIn 1973, Delsarte first defined additive codes in terms of association schemes \\cite{DP}; such a code is a subgroup of the underlying abelian group.\nBorges et al. \\cite{BFPR10} studied the standard generator matrix and the duality of $\\mathbb{Z}_2\\mathbb{Z}_4$-additive codes. Since then, a lot of work has been devoted to characterizing $\\mathbb{Z}_2\\mathbb{Z}_4$-additive codes. Dougherty et al. \\cite{DLY16} constructed one-weight $\\mathbb{Z}_2\\mathbb{Z}_4$-additive codes and analyzed their parameters. Benbelkacem et al. \\cite{BBDF20} studied $\\mathbb{Z}_2\\mathbb{Z}_4$-additive complementary dual codes and their Gray images. In fact, these codes can be viewed as a generalization of the linear complementary dual (LCD for short) codes \\cite{M92} over finite fields. Joaquin et al. \\cite{BBCM} introduced a decoding method for $\\mathbb{Z}_2\\mathbb{Z}_4$-linear codes.\nMore structural properties of $\\mathbb{Z}_2\\mathbb{Z}_4$-additive codes can be found in \\cite{BBDF11,JTCR2}. Moreover, additive codes over other mixed alphabets have also been intensely studied, for example $\\mathbb{Z}_2\\mathbb{Z}_2[u]$-additive codes \\cite{BC2}, $\\mathbb{Z}_2\\mathbb{Z}_{2^s}$-additive codes \\cite{AS13}, $\\mathbb{Z}_{p^r}\\mathbb{Z}_{p^s}$-additive codes \\cite{AS15}, $\\mathbb{Z}_2\\mathbb{Z}_2[u,v]$-additive codes \\cite{SW}, $\\mathbb{Z}_p\\mathbb{Z}_{p^k}$-additive codes \\cite{SWD} and $\\mathbb{Z}_p(\\mathbb{Z}_p+u\\mathbb{Z}_p)$-additive codes \\cite{WS}, and so on. It is worth mentioning that $\\mathbb{Z}_2\\mathbb{Z}_4$-additive cyclic codes form an important family of $\\mathbb{Z}_2\\mathbb{Z}_4$-additive codes; many optimal binary codes can be obtained from the images of this family of codes. More details of $\\mathbb{Z}_2\\mathbb{Z}_4$-additive cyclic codes can be found in \\cite{BC,JCR,JTCR,JTCR2,YZ20}.\n\nIn \\cite{CFC}, Fern\\'{a}ndez et al. studied the rank and kernel of $\\mathbb{Z}_2\\mathbb{Z}_4$-linear codes, where $\\mathbb{Z}_2\\mathbb{Z}_4$-linear codes are the images of $\\mathbb{Z}_2\\mathbb{Z}_4$-additive codes under the generalized Gray map $\\Phi$.\nThe authors also studied the rank and kernel of $\\mathbb{Z}_2\\mathbb{Z}_4$-additive cyclic codes in \\cite{JTCR2}.\nA natural problem arises: can we study the rank and kernel of $\\mathbb{Z}_p\\mathbb{Z}_{p^2}$-linear codes in terms of $\\mathbb{Z}_p\\mathbb{Z}_{p^2}$-additive codes?\nIn this paper, we settle this general case for the kernel, and we also determine the rank in the case of $\\mathbb{Z}_3\\mathbb{Z}_9$.\n\nLet $\\mathcal{C}$ be a $\\mathbb{Z}_p\\mathbb{Z}_{p^2}$-additive code of length $\\alpha+\\beta$ defined by a subgroup of $\\mathbb{Z}_p^\\alpha\\times\\mathbb{Z}_{p^2}^\\beta$. Let $n=\\alpha+p\\beta$ and let\n$\\Phi:\\mathbb{Z}_p^\\alpha\\times\\mathbb{Z}_{p^2}^\\beta\\rightarrow \\mathbb{Z}_p^n$ be the extension of the Gray map given by\n$$\\Phi(\\mathbf{x},\\mathbf{y})=(\\mathbf{x},\\phi(y_1),\\ldots,\\phi(y_\\beta)),$$\nfor any $\\mathbf{x}\\in \\mathbb{Z}_p^\\alpha$ and $\\mathbf{y}=(y_1,\\ldots,y_\\beta)\\in \\mathbb{Z}_{p^2}^\\beta$. The code $C=\\Phi(\\mathcal{C})$ is called a $\\mathbb{Z}_p\\mathbb{Z}_{p^2}$\\emph{-linear code}.\nTwo important parameters of this generally nonlinear code are the rank and the dimension of the kernel.\nWe denote by $\\langle C\\rangle$ the linear span of the codewords of $C$.
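\nAs a small computational illustration (our own sketch, not taken from the references above), the following Python code implements the Gray map $\\phi$ and its extension $\\Phi$ exactly as defined, together with the homogeneous weight; the parameters $p=3$, $\\alpha=2$ and $\\beta=3$ are chosen purely for the example. It checks on random inputs that $\\phi$ carries the homogeneous distance on $\\mathbb{Z}_{p^2}$ to the Hamming distance on $\\mathbb{Z}_p^p$, and that $\\Phi$ produces vectors of length $\\alpha+p\\beta$.\n\\begin{verbatim}\nimport random\n\np = 3                      # an odd prime, for illustration\n\ndef phi(theta):\n    # Gray map Z_{p^2} -> Z_p^p, with theta = theta''*p + theta'\n    t2, t1 = divmod(theta, p)\n    return [(t2 + t1 * j) % p for j in range(p)]\n\ndef Phi(x, y):\n    # extension Z_p^alpha x Z_{p^2}^beta -> Z_p^(alpha + p*beta)\n    image = list(x)\n    for yi in y:\n        image += phi(yi)\n    return image\n\ndef wt_hom(a):\n    # homogeneous weight of a single element of Z_{p^2}\n    if a == 0:\n        return 0\n    return p if a % p == 0 else p - 1\n\ndef hamming(u, v):\n    return sum(s != t for s, t in zip(u, v))\n\nalpha, beta = 2, 3\nfor _ in range(2000):\n    a = random.randrange(p * p)\n    b = random.randrange(p * p)\n    # phi is an isometry: d_hom(a, b) = d_H(phi(a), phi(b))\n    assert wt_hom((a - b) % (p * p)) == hamming(phi(a), phi(b))\n\nx = [random.randrange(p) for _ in range(alpha)]\ny = [random.randrange(p * p) for _ in range(beta)]\nassert len(Phi(x, y)) == alpha + p * beta\n\\end{verbatim}\n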
The dimension of $\\langle C\\rangle$ is called the \\emph{rank} of the code $C$, denoted by $rank(C)$.\nThe \\emph{kernel} of the code $C$, denoted by $K(C)$, is defined as\n$$K(C)=\\{\\mathbf{x}\\in\\mathbb{Z}_p^{n}|C+\\mathbf{x}=C\\},$$\nwhere $C+\\mathbf{x}$ denotes the set obtained by adding $\\mathbf{x}$ to every codeword of $C$.\nWe will denote the dimension of the kernel of $C$ by \\emph{$ker(C)$}.\n\nThe rank and dimension of the kernel have been studied for some families of $\\mathbb{Z}_2\\mathbb{Z}_4$-linear codes \\cite{BPR,FPV,PRV}.\nThese two parameters are helpful for the classification of $\\mathbb{Z}_2\\mathbb{Z}_4$-linear codes.\nTherefore, we generalize them to the case of $\\mathbb{Z}_p\\mathbb{Z}_{p^2}$.\n\n\\par\nThe paper is organized as follows.\nIn Section \\ref{sec:2}, we give some properties of $\\mathbb{Z}_p\\mathbb{Z}_{p^2}$-additive codes and $\\mathbb{Z}_p\\mathbb{Z}_{p^2}$-linear codes.\nIn Section \\ref{sec:3}, we find all values of the rank for $\\mathbb{Z}_3\\mathbb{Z}_{9}$-linear codes, and we construct a $\\mathbb{Z}_3\\mathbb{Z}_{9}$-linear code for each value.\nIn Section \\ref{sec:4}, we determine all values of the dimension of the kernel for $\\mathbb{Z}_p\\mathbb{Z}_{p^2}$-linear codes.\nWe also construct $\\mathbb{Z}_p\\mathbb{Z}_{p^2}$-linear codes attaining each of these values.\nIn Section \\ref{sec:5}, pairs of values of the rank and the dimension of the kernel of $\\mathbb{Z}_3\\mathbb{Z}_{9}$-additive codes are studied.\nFor each fixed value of the dimension of the kernel, the range of the rank is given.\nMoreover, a construction of $\\mathbb{Z}_3\\mathbb{Z}_{9}$-linear codes attaining each admissible pair of values of the rank and the dimension of the kernel is provided.\nIn Section \\ref{sec:6}, we conclude the paper.\n\n\\section{Preliminaries} \\label{sec:2}\n\nLet $\\mathcal{C}$ be a $\\mathbb{Z}_p\\mathbb{Z}_{p^2}$-additive code. Then $\\mathcal{C}$ is isomorphic, as an abelian group, to $\\mathbb{Z}_p^\\gamma \\times \\mathbb{Z}_{p^2}^\\delta$, since it is a subgroup of $\\mathbb{Z}_p^\\alpha \\times \\mathbb{Z}_{p^2}^\\beta$.\nThe \\emph{order} of a codeword $\\mathbf{c}$ is the minimal positive integer $a$ such that $a\\cdot\\mathbf{c}=\\mathbf{0}$.\nTherefore, the size of $\\mathcal{C}$ is $p^{\\gamma+2\\delta}$, and the number of codewords of order $p$ in $\\mathcal{C}$ is $p^{\\gamma+\\delta}$.\n\nLet $X$ be the set of $\\mathbb{Z}_p$ coordinate positions, and $Y$ be the set of $\\mathbb{Z}_{p^2}$ coordinate positions, so $|X|=\\alpha$ and $|Y|=\\beta$.\nThroughout, the first $\\alpha$ positions correspond to the set $X$ and the last $\\beta$ positions correspond to the set $Y$.\nWe denote by $\\mathcal{C}_X$ and $\\mathcal{C}_Y$ the punctured codes of $\\mathcal{C}$ obtained by deleting the coordinates outside $X$ and $Y$, respectively.\nLet $\\mathcal{C}_p$ be the subcode consisting of all codewords of order $p$ in $\\mathcal{C}$.
Let $\\kappa$ be the dimension of the linear code $(\\mathcal{C}_p)_X$ over $\\mathbb{Z}_p$.\nIf $\\alpha=0$, then $\\kappa=0$.\nConsidering all these parameters, we will say that $\\mathcal{C}$ or $C=\\Phi(\\mathcal{C})$ is of type $(\\alpha, \\beta; \\gamma, \\delta; \\kappa)$.\n\\par\nFor a $\\mathbb{Z}_p\\mathbb{Z}_{p^2}$-additive code, every codeword $\\mathbf{c}$ can be uniquely expressed in the form\n$$\\mathbf{c}=\\sum_{i=1}^\\gamma \\lambda_i\\mathbf{u}_i+\\sum_{j=1}^\\delta \\nu_j\\mathbf{v}_j,$$\nwhere $\\lambda_i \\in \\mathbb{Z}_p$ for $1\\leq i \\leq \\gamma$, $\\nu_j \\in \\mathbb{Z}_{p^2}$ for $1 \\leq j \\leq \\delta$ and $\\mathbf{u}_i$, $\\mathbf{v}_j$ are vectors in $\\mathbb{Z}_p^\\alpha \\times \\mathbb{Z}_{p^2}^\\beta$ of order $p$ and $p^2$, respectively.\nThe vectors $\\mathbf{u}_i$ and $\\mathbf{v}_j$ then form a generator matrix $\\mathcal{G}$ of the code $\\mathcal{C}$:\n$$\\mathcal{G}=\\left( \\begin{array}{c}\\mathbf{u}_1\\\\ \\vdots \\\\\\mathbf{u}_\\gamma \\\\ \\mathbf{v}_1 \\\\\\vdots \\\\\\mathbf{v}_\\delta \\end{array}\\right)=\\left( \\begin{array}{c|c}B_1 & pB_3 \\\\\n \\hline B_2 & Q\\end{array}\\right),$$\nwhere $B_1$, $B_2$ and $B_3$ are matrices over $\\mathbb{Z}_p$ of size $\\gamma \\times \\alpha, \\delta \\times \\alpha$ and $\\gamma \\times \\beta$, respectively; and $Q$ is a matrix over $\\mathbb{Z}_{p^2}$ of size $\\delta\\times \\beta$.\n\nRecall that in coding theory, two linear codes $C_1$ and $C_2$ of length $n$ are \\emph{permutation equivalent}\nif there exists a coordinate permutation $\\pi$ such that $C_2=\\{\\pi(c)|c\\in C_1\\}$.\nPermutation equivalence of $\\mathbb{Z}_p\\mathbb{Z}_{p^2}$-additive codes is defined similarly.\nThen we have the following theorem.\n\n\\begin{theorem}{\\rm\\cite{AS15}} \\label{th:1}\nLet $\\mathcal{C}$ be a $\\mathbb{Z}_p\\mathbb{Z}_{p^2}$-additive code of type $(\\alpha,\\beta;\\gamma,\\delta;\\kappa)$.
Then, $\\mathcal{C}$ is permutation equivalent to a $\\mathbb{Z}_p\\mathbb{Z}_{p^2}$-additive code $\\mathcal{C}^\\prime$ with generator matrix of the form:\n$$\\mathcal{G}_S=\\left(\\begin{array}{cc|ccc}I_\\kappa & T' & pT_2 & \\mathbf{0} & \\mathbf{0}\\\\\n\\mathbf{0} & \\mathbf{0} & pT_1 & pI_{\\gamma-\\kappa} & \\mathbf{0}\\\\\n\\hline\n\\mathbf{0} & S' & S & R & I_\\delta\n\\end{array}\\right),$$\nwhere $I_\\delta$ is the identity matrix of size $\\delta\\times \\delta$; $T', S', T_1, T_2$ and $R$ are matrices over $\\mathbb{Z}_p$; and $S$ is a matrix over $\\mathbb{Z}_{p^2}$.\n\\end{theorem}\n\\begin{proof}\nStraightforward by setting $r=1,s=2$ of $\\mathbb{Z}_{p^r}\\mathbb{Z}_{p^s}$-linear codes in \\cite{AS15}.\n\\end{proof}\n\nFrom Theorem \\ref{th:1}, there is a $\\mathbb{Z}_p \\mathbb{Z}_{p^2}$-additive code $\\mathcal{C}$ of type $(\\alpha,\\beta;\\gamma,\\delta;\\kappa)$ if and only if\n\\begin{equation}\\label{eq:1}\n\\begin{array}{c}\n\\alpha,\\beta,\\gamma,\\delta,\\kappa\\geq 0, \\alpha+\\beta>0,\\\\\n0<\\delta+\\gamma \\leq \\beta+\\kappa \\text{\\rm{ and }} \\kappa\\leq \\rm{min}(\\alpha,\\gamma).\n\\end{array}\n\\end{equation}\n\nThe definition of the duality for $\\mathbb{Z}_p\\mathbb{Z}_{p^k}$-additive codes is shown in \\cite{SWD}, which includes $\\mathbb{Z}_p\\mathbb{Z}_{p^2}$-additive codes.\nThe \\emph{inner product} between $(\\mathbf{v}_1|\\mathbf{w}_1)$ and $(\\mathbf{v}_2|\\mathbf{w}_2)$ in $\\mathbb{Z}_p^\\alpha \\times \\mathbb{Z}_{p^2}^\\beta$ can be written as follows:\n$$\\langle(\\mathbf{v}_1|\\mathbf{w}_1),(\\mathbf{v}_2|\\mathbf{w}_2)\\rangle=p\\langle \\mathbf{v}_1,\\mathbf{v}_2\\rangle + \\langle \\mathbf{w}_1,\\mathbf{w}_2\\rangle \\in \\mathbb{Z}_{p^2}.$$\nNote that the result of the inner product $\\langle \\mathbf{v}_1,\\mathbf{v}_2\\rangle$ is from $\\mathbb{Z}_p$,\nand multiplication of its value by $p$ should be formally understood as the natural homomorphism from $\\mathbb{Z}_p$ into $\\mathbb{Z}_{p^2}$,\nthat is, $p\\langle \\mathbf{v}_1,\\mathbf{v}_2\\rangle \\in p\\mathbb{Z}_{p^2} \\subseteq \\mathbb{Z}_{p^2}$. 
For more details, see \\cite{SWD}.\n\nThe \\emph{dual code} $\\mathcal{C}^\\bot$ of a $\\mathbb{Z}_p\\mathbb{Z}_{p^2}$-additive code $\\mathcal{C}$ is defined in the standard way by\n\\begin{eqnarray*}\n \\mathcal{C}^\\bot=\\big\\{(\\mathbf{x}|\\mathbf{y})\n {}\\in \\mathbb{Z}_p^\\alpha \\times \\mathbb{Z}_{p^2}^\\beta:\n {\n \\langle(\\mathbf{x}|\\mathbf{y}),(\\mathbf{v}|\\mathbf{w})\\rangle=0 ~\\rm {for ~all}~ (\\mathbf{v}|\\mathbf{w})\\in \\mathcal{C}\\big\\}.}\n\\end{eqnarray*}\nIt is readily seen that the dual code $\\mathcal{C}^\\bot$ is also a $\\mathbb{Z}_p\\mathbb{Z}_{p^2}$-additive code \\cite{AS15}.\nIn particular, it is shown that the type of the code $\\mathcal{C}^\\bot$ is $(\\alpha, \\beta; \\bar{\\gamma}, \\bar{\\delta}; \\bar{\\kappa})$, where\n$$\\begin{array}{l}\\bar{\\gamma}=\\alpha+\\gamma-2\\kappa, \\\\\n\\bar{\\delta}=\\beta-\\gamma-\\delta+\\kappa, \\\\\n\\bar{\\kappa}=\\alpha-\\kappa.\n\\end{array}\n$$\nThe code $\\Phi(\\mathcal{C}^\\bot)$ is denoted by $C_\\bot$ and called the \\emph{$\\mathbb{Z}_p \\mathbb{Z}_{p^2}$-dual code} of $C$.\n\n\\begin{lemma}\\label{l:2}\nFor all $\\mathbf{u},\\mathbf{v}\\in \\mathbb{Z}_p^\\alpha \\times \\mathbb{Z}_{p^2}^\\beta$,\n$\\mathbf{u}=(u_1,\\ldots,u_{\\alpha+\\beta})$,\n$\\mathbf{v}=(v_1,\\ldots,v_{\\alpha+\\beta})$,\nwe have\n$$\n\\Phi(\\mathbf{u}+\\mathbf{v})=\\Phi(\\mathbf{u})+\\Phi(\\mathbf{v})+\\Phi(pP(\\mathbf{u},\\mathbf{v})),\n$$\nwhere $P(\\mathbf{u},\\mathbf{v})=(0,\\ldots,0,P(u_{\\alpha+1},v_{\\alpha+1}),\\ldots,P(u_{\\alpha+\\beta},v_{\\alpha+\\beta}))$ and\n$$\nP(u_i,v_i)=\nP(u'_i,v'_i)=\n\\begin{cases}\n 1 &\n \\mbox{if $u'_i+v'_i \\ge p$}\n \\\\ 0 &\\mbox{otherwise},\n \\end{cases}\n\\qquad u_i=u''_i p + u'_i, \\quad v_i=v''_i p + v'_i.\n$$\n\\end{lemma}\n\\begin{proof}\nIt is sufficient to prove the claim for one\n$\\mathbb{Z}_{p^2}$ coordinate.\nAssume\n $w_i\n= u_i + v_i$ (the addition is modulo $p^2$),\nwhere\n$w_i=w''_i p + w'_i$,\n$u_i=u''_i p + u'_i$, and\n$v_i=v''_i p + v'_i$.\nSince\n$w_i\n= u_i + v_i = (u_i''+v_i'')p + (u_i'+v_i') \\bmod p^2 $,\nwe have $w'_i = u'_i+v'_i$ and $w''_i = u''_i+v''_i \\bmod p$\nif $u'_i+v'_i < p$. If $u'_i+v'_i \\ge p$,\nthe formula is different:\n$w'_i = u'_i+v'_i \\bmod p$ and $w''_i = u''_i+v''_i + 1 \\bmod p$.\nUtilizing the definition of $P$, we get\n\\begin{equation}\\label{eq:'''}\nw'_i = u'_i+v'_i \\bmod\\ p \\qquad\\mbox{and}\\quad\nw''_i = u''_i+v''_i+P(u'_i,v'_i) \\bmod\\ p.\n\\end{equation}\nNow, from the definition of $\\phi$ and \\eqref{eq:'''}, it is straightforward to check that\n$ \\phi( pP(u'_i,v'_i)) = P(u'_i,v'_i) (1,1,\\ldots,1),$\nand we find\n\\begin{multline}\\label{eq:phiu+v}\n \\phi(u_i + v_i) =\n ( u''_i+v''_i+P(u'_i,v'_i) ) (1,1,\\ldots,1)\n+ (u'_i+v'_i) (0,1,\\ldots,p-1)\\\\\n= \\phi(u_i) + \\phi(v_i) + \\phi( pP(u'_i,v'_i)).\n\\end{multline}\n Applying \\eqref{eq:phiu+v} to each coordinate from\n $\\alpha+1$ to $\\alpha+\\beta$ completes the proof.\n\\end{proof}\n\\begin{remark}\n The function $P$ can be treated as\n a $\\{0,1,\\ldots,p-1\\}$-valued function in\n two $\\{0,1,\\ldots,p-1\\}$-valued arguments\n $u'_i$, $v'_i$.\n As with any such function, it can be represented\n as a polynomial of degree at most $p-1$ in each variable. For example, $P(u'_i,v'_i) = u'_i v'_i$\n for $p=2$ and\n $P(u'_i,v'_i) = 2u'_i v'_i(1 + u'_i + v'_i) $\n for $p=3$.
We also note that substituting\n $u_i$ and $v_i$ instead of $u'_i$ and $v'_i$\n in this polynomial does not change the value\n of $p P(u_i,v_i)$; so, for example,\n for $p=3$, it is safe to write\n $P(u_i,v_i) = 2u_i v_i(1 + u_i + v_i) $.\n For rigor, we also introduce the function $P'(\\mathbf{u},\\mathbf{v})$ defined by\n $P(\\mathbf{u},\\mathbf{v})=(p-1)P'(\\mathbf{u},\\mathbf{v})$.\n Note that $\\Phi(pP(\\mathbf{u},\\mathbf{v}))=(p-1)\\Phi(pP'(\\mathbf{u},\\mathbf{v}))$.\n\\end{remark}\n\n\\begin{lemma}\\label{l:4}\nLet $\\mathcal{C}$ be a $\\mathbb{Z}_p \\mathbb{Z}_{p^2}$-additive code. The $\\mathbb{Z}_p \\mathbb{Z}_{p^2}$-linear code $C=\\Phi(\\mathcal{C})$ is linear if and only if\n$pP'(\\mathbf{u},\\mathbf{v})\\in \\mathcal{C}$ for all $\\mathbf{u},\\mathbf{v}\\in \\mathcal{C}$.\n\\end{lemma}\n\n\\begin{proof}\nAssume that $C$ is linear. Because $\\mathcal{C}$ is additive, for any $\\mathbf{u},\\mathbf{v}\\in \\mathcal{C}$, we have $\\mathbf{u}+\\mathbf{v}\\in \\mathcal{C}$.\nThen, $\\Phi(\\mathbf{u}),\\Phi(\\mathbf{v}),\\Phi(\\mathbf{u+v})\\in C$.\nSince $C$ is linear, $\\Phi(pP'(\\mathbf{u},\\mathbf{v}))\\in C$ by Lemma \\ref{l:2}.\nTherefore, $pP'(\\mathbf{u},\\mathbf{v})\\in \\mathcal{C}$. Conversely, assume that $pP'(\\mathbf{u},\\mathbf{v})\\in \\mathcal{C}$ for all $\\mathbf{u},\\mathbf{v}\\in \\mathcal{C}$. Let $\\mathbf{x},\\mathbf{y}\\in C$; then there are $\\mathbf{u'},\\mathbf{v'}\\in \\mathcal{C}$ such that $\\mathbf{x}=\\Phi(\\mathbf{u'}),\\mathbf{y}=\\Phi(\\mathbf{v'})$.\nThus $pP'(\\mathbf{u'},\\mathbf{v'})\\in \\mathcal{C}$.\nSince $\\mathcal{C}$ is additive, $\\mathbf{u'}+\\mathbf{v'}+pP'(\\mathbf{u'},\\mathbf{v'})\\in \\mathcal{C}$.\nTherefore, $\\Phi(\\mathbf{u'}+\\mathbf{v'}+pP'(\\mathbf{u'},\\mathbf{v'}))\\in C$.\nThen $$\\begin{aligned}\n\\Phi(\\mathbf{u'}+\\mathbf{v'}+pP'(\\mathbf{u'},\\mathbf{v'}))\n&=\\Phi(\\mathbf{u'}+\\mathbf{v'})+\\Phi(pP'(\\mathbf{u'},\\mathbf{v'}))\\\\\n&=\\Phi(\\mathbf{u'})+\\Phi(\\mathbf{v'})+\\Phi(pP(\\mathbf{u'},\\mathbf{v'}))+\\Phi(pP'(\\mathbf{u'},\\mathbf{v'}))\\\\\n&=\\Phi(\\mathbf{u'})+\\Phi(\\mathbf{v'})+(p-1)\\Phi(pP'(\\mathbf{u'},\\mathbf{v'}))+\\Phi(pP'(\\mathbf{u'},\\mathbf{v'}))\\\\\n&=\\Phi(\\mathbf{u'})+\\Phi(\\mathbf{v'}).\n\\end{aligned}$$\nHence, $\\Phi(\\mathbf{u'})+\\Phi(\\mathbf{v'})=\\mathbf{x}+\\mathbf{y} \\in C$.\n\\end{proof}\n\n\\section{Rank of $\\mathbb{Z}_3 \\mathbb{Z}_{9}$-additive codes}\\label{sec:3}\n\nLet $\\mathcal{C}$ be a $\\mathbb{Z}_p \\mathbb{Z}_{p^2}$-additive code of type $(\\alpha, \\beta;\\gamma,\\delta;\\kappa)$ and $C=\\Phi(\\mathcal{C})$ of length $\\alpha+p\\beta$.\nRecall the definition of $rank(C)$ given above.
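\nAs a quick numerical sanity check (our own illustration, with arbitrary block sizes), the following Python sketch verifies the Gray map addition formula of Lemma \\ref{l:2} for $p=3$ on random vectors, together with the specialization in terms of componentwise products that is used repeatedly below. On the $\\mathbb{Z}_3$ coordinates the correction term vanishes, so it is only computed on the $\\mathbb{Z}_9$ block.\n\\begin{verbatim}\nimport random\n\np = 3\nalpha, beta = 2, 3\n\ndef phi(theta):\n    t2, t1 = divmod(theta, p)\n    return [(t2 + t1 * j) % p for j in range(p)]\n\ndef Phi(x, y):\n    out = list(x)\n    for yi in y:\n        out += phi(yi)\n    return out\n\ndef add_mod(u, v, m):\n    return [(a + b) % m for a, b in zip(u, v)]\n\nfor _ in range(2000):\n    x1 = [random.randrange(p) for _ in range(alpha)]\n    x2 = [random.randrange(p) for _ in range(alpha)]\n    y1 = [random.randrange(p * p) for _ in range(beta)]\n    y2 = [random.randrange(p * p) for _ in range(beta)]\n\n    lhs = Phi(add_mod(x1, x2, p), add_mod(y1, y2, p * p))\n\n    # carry vector p*P(u,v): nonzero only on the Z_{p^2} coordinates\n    carry = [p * ((a % p) + (b % p) >= p) for a, b in zip(y1, y2)]\n    rhs = [(s + t + c) % p for s, t, c in\n           zip(Phi(x1, y1), Phi(x2, y2), Phi([0] * alpha, carry))]\n    assert lhs == rhs        # Lemma l:2\n\n    # p = 3 specialization, componentwise products taken mod 9\n    poly = [(3 * (a * b + a * a * b + a * b * b)) % 9\n            for a, b in zip(y1, y2)]\n    rhs3 = [(s + t + 2 * c) % 3 for s, t, c in\n            zip(Phi(x1, y1), Phi(x2, y2), Phi([0] * alpha, poly))]\n    assert lhs == rhs3\n\\end{verbatim}\n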
For a general prime $p$, it is difficult to determine all possible values of $r=rank(C)$.\nTherefore, in this section we only consider $p=3$: we give the range of values of $r=rank(C)$ and prove that, for every value $r$ in this range, there is a $\\mathbb{Z}_3 \\mathbb{Z}_{9}$-linear code $C$ of type $(\\alpha, \\beta;\\gamma,\\delta;\\kappa)$ with $rank(C)=r$.\n\nIf $p=3$, for all $\\mathbf{u},\\mathbf{v}\\in \\mathbb{Z}_3^\\alpha \\times \\mathbb{Z}_{9}^\\beta$,\nit is easy to check that $\\Phi(\\mathbf{u}+\\mathbf{v})=\\Phi(\\mathbf{u})+\\Phi(\\mathbf{v})+2\\Phi(3(\\mathbf{u}\\ast \\mathbf{v}+\\mathbf{u}\\ast \\mathbf{u}\\ast \\mathbf{v}+\\mathbf{u}\\ast \\mathbf{v}\\ast \\mathbf{v}))$ by Lemma \\ref{l:2}, where $\\ast$ denotes the componentwise multiplication.\nThen we have the following theorem.\n\n\\begin{theorem}\\label{th:5}\nLet $\\mathcal{C}$ be a $\\mathbb{Z}_3\\mathbb{Z}_{9}$-additive code of type $(\\alpha, \\beta;\\gamma,\\delta;\\kappa)$ which satisfies (\\ref{eq:1}), and let\n$C=\\Phi(\\mathcal{C})$ be the corresponding $\\mathbb{Z}_3\\mathbb{Z}_{9}$-linear code of length $n=\\alpha+3\\beta$.\n\n\\begin{itemize}\n\\item[$(i)$] Let $\\mathcal{G}$ be the generator matrix of $\\mathcal{C}$,\nand let $\\{\\mathbf{u}_i\\}_{i=1}^\\gamma$, $\\{\\mathbf{v}_j\\}_{j=1}^\\delta$ be the rows of order $3$ and $9$ in $\\mathcal{G}$, respectively.\nThen $\\langle C\\rangle$ is generated by $\\{\\Phi(\\mathbf{u}_i)\\}_{i=1}^\\gamma$, $\\{\\Phi(\\mathbf{v}_j)\\}_{j=1}^\\delta$, $\\{\\Phi(3\\mathbf{v}_k*\\mathbf{v}_l)\\}_{1\\leq l\\leq k\\leq \\delta}$\nand $\\{\\Phi(3\\mathbf{v}_x*\\mathbf{v}_y*\\mathbf{v}_z)\\}_{1\\leq x\\leq y\\leq z\\leq\\delta}$.\n\\item[$(ii)$] $rank(C)\\in \\bigg\\{\\gamma+2\\delta,\\ldots,\\rm{min}\\bigg(\\beta+\\delta+\\kappa,\\gamma+\\delta+\\dbinom{\\delta+1}{2}+\\dbinom{\\delta+2}{3}\\bigg)\\bigg\\}$.\nLet $rank(C)=r=\\gamma+2\\delta+\\overline{r}$.
Then $\\overline{r}\\in\\bigg\\{0,1,\\ldots,\\rm{min}\\bigg(\\beta-(\\gamma-\\kappa)-\\delta,\\dbinom{\\delta+1}{2}+\\dbinom{\\delta+2}{3}-\\delta\\bigg)\\bigg\\}$.\n\\item[$(iii)$] The linear code $\\langle C\\rangle$ over $\\mathbb{Z}_3$ is $\\mathbb{Z}_3 \\mathbb{Z}_{9}$-linear.\n\\end{itemize}\n\\end{theorem}\n\\begin{proof}\n$(i)$\nLet $\\mathbf{c}\\in \\mathcal{C}$. Without loss of generality, $\\mathbf{c}$ can be expressed as $\\mathbf{c}=\\Sigma_{j=1}^\\zeta \\mathbf{v}_j+\\omega$,\nwhere $\\zeta\\leq\\delta$ and $\\omega$ is a codeword in $\\mathcal{C}$ of order $3$.\nBy Lemma \\ref{l:2}, we have $\\Phi(\\mathbf{c})=\\Phi(\\Sigma_{j=1}^\\zeta \\mathbf{v}_j)+\\Phi(\\omega)$,\nwhere $\\Phi(\\omega)$ is a linear combination of $\\{\\Phi(\\mathbf{u}_i)\\}_{i=1}^\\gamma$, $\\{\\Phi(3\\mathbf{v}_j)\\}_{j=1}^\\delta$.\nNote that\n$\\Phi(\\Sigma_{j=1}^\\zeta \\mathbf{v}_j)=\\Sigma_{j=1}^\\zeta\\Phi(\\mathbf{v}_j)+\\Sigma_{1\\leq l \\leq k\\leq \\zeta}\\tilde{a}\\Phi(3(\\mathbf{v}_k\\ast \\mathbf{v}_l))+\\Sigma_{1\\leq x \\leq y\\leq z\\leq \\zeta}\\tilde{b}\\Phi(3(\\mathbf{v}_x\\ast \\mathbf{v}_y\\ast \\mathbf{v}_z))$, where the coefficients $\\tilde{a},\\tilde{b}\\in \\mathbb{Z}_3$ depend on the indices.\nSince $\\Phi(3\\mathbf{v}_j)=\\Phi(3\\mathbf{v}_j\\ast\\mathbf{v}_j\\ast\\mathbf{v}_j)$ by Lemma \\ref{l:2},\n$\\Phi(\\mathbf{c})$ lies in the span of $\\{\\Phi(\\mathbf{u}_i)\\}_{i=1}^\\gamma$, $\\{\\Phi(\\mathbf{v}_j)\\}_{j=1}^\\delta$,\n$\\{\\Phi(3\\mathbf{v}_k*\\mathbf{v}_l)\\}_{1\\leq l\\leq k\\leq \\delta}$\nand $\\{\\Phi(3\\mathbf{v}_x*\\mathbf{v}_y*\\mathbf{v}_z)\\}_{1\\leq x\\leq y\\leq z\\leq\\delta}$.\n\n$(ii)$ The bound $\\gamma+\\delta+\\dbinom{\\delta+1}{2}+\\dbinom{\\delta+2}{3}$ is straightforward by $(i)$,\nand the bound $\\beta+\\delta+\\kappa$ is trivial by the form of $\\mathcal{G}_S$ in Theorem \\ref{th:1}.\n\n$(iii)$ Let $\\mathcal{S}_\\mathcal{C}$ be a $\\mathbb{Z}_3\\mathbb{Z}_9$-additive code generated by\n$\\{\\mathbf{u}_i\\}_{i=1}^\\gamma$, $\\{\\mathbf{v}_j\\}_{j=1}^\\delta$, $\\{3\\mathbf{v}_k*\\mathbf{v}_l\\}_{1\\leq l\\leq k\\leq \\delta}$\nand $\\{3\\mathbf{v}_x*\\mathbf{v}_y*\\mathbf{v}_z\\}_{1\\leq x\\leq y\\leq z\\leq\\delta}$;\nthen $\\mathcal{S}_\\mathcal{C}$ is of type $(\\alpha,\\beta;\\gamma+\\overline{r},\\delta;\\kappa)$ and $\\Phi(\\mathcal{S}_\\mathcal{C})=\\langle C\\rangle$.\n\\end{proof}\nFor convenience, we use $\\mathbf{v}_j^2$ to denote $\\mathbf{v}_j*\\mathbf{v}_j$, and write the vectors in Theorem \\ref{th:5} $(i)$ in the following way:\n$\\{\\Phi(\\mathbf{u}_i)\\}_{i=1}^\\gamma$, $\\{\\Phi(\\mathbf{v}_j)\\}_{j=1}^\\delta$, $\\{\\Phi(3\\mathbf{v}_j)\\}_{j=1}^\\delta$, $\\{\\Phi(3\\mathbf{v}_j^2)\\}_{j=1}^\\delta$, $\\{\\Phi(3\\mathbf{v}_k*\\mathbf{v}_l)\\}_{1\\leq l< k\\leq \\delta}$, $\\{\\Phi(3\\mathbf{v}_x^2*\\mathbf{v}_y)\\}_{1\\leq x< y\\leq \\delta}$, $\\{\\Phi(3\\mathbf{v}_x*\\mathbf{v}_y^2)\\}_{1\\leq x< y\\leq \\delta}$, $\\{\\Phi(3\\mathbf{v}_x*\\mathbf{v}_y*\\mathbf{v}_z)\\}_{1\\leq x< y0$, consider the normalized state\n\\begin{equation}\n\\rho_{U\\cup X \\cup W} = \\frac{4}{(1-\\epsilon)^{\\frac 1n} + 3(\\epsilon\\/3)^{\\frac 1n}}\n\\rho_X \\pns{n} \\Big(P^-_{U\\cup X^L} \\otimes P^-_{W\\cup X^R}\\Big),\n\\end{equation}\nwhere\n\\begin{equation}\n\\rho_X = (1-\\epsilon) P^-_{X^L\\cup X^R} + \\frac{\\epsilon}{3} P^+_{X^L\\cup X^R}\n\\end{equation}\nand where $P_{A\\cup B}^\\pm$ denote the projectors onto the symmetric and antisymmetric subspaces of $\\mathcal{H}_A \\otimes \\mathcal{H}_B$.
The conditional states are \n\\begin{align}\n\\rho_{U| X}^{(n)} &= \\frac{2}{\\sqrt{(1-\\epsilon)^{\\frac 1n} + 3(\\epsilon\/3)^{\\frac 1n}}} P^-_{U\\cup X^L} \\otimes I_{X^R} \\ \\mathrm{and} \\\\ \n\\rho_{W| X}^{(n)} &= \\frac{2}{\\sqrt{(1-\\epsilon)^{\\frac 1n} + 3(\\epsilon\/3)^{\\frac 1n}}} I_{X^L} \\otimes P^-_{W\\cup X^R}.\n\\end{align} \nBy construction, condition \\eqref{Cond:CIo4} is easily verified $\\rho_{U\\cup X\\cup W} = \\rho_X \\pns{n} (\\rho_{U| X}^{(n)} \\rho_{W| X}^{(n)})$. In the limit $\\epsilon \\rightarrow 0$, the state $\\rho_{U\\cup X \\cup W} \\rightarrow P^-_{U\\cup W} \\otimes P^-_{X^L\\cup X^R}$, which has $S(U:W|X) = 2$. By continuity, we claim that for all $n<\\infty$, there exists an $\\epsilon > 0$ such that $\\rho_{U\\cup X\\cup W}$ is a density operator that does not saturate strong subadditivity. \n\\end{Exa}\n\nThe preceding example shows that some of the conditions given in eqs. (\\ref{Cond:CIo1}-\\ref{Cond:CIo4}) are not sufficient to imply quantum conditional independence on their own. Therefore, additional constraints need to be imposed in order to obtain converse results. Two alternative approaches are considered here, one based on additional commutation conditions that hold for conditionally independent states and one based on the algebraic structure of such states. The approach based on commutation conditions is perhaps more elegant, but the algebraic conditions are also relevant because they are used in theorem \\ref{Graph:THCC} in \\S\\ref{Graph:HC} to provide a characterization result for quantum Markov Networks on trees. The following sequence of results provides the approach based on commutation conditions.\n\n\\begin{The}\n\\label{Cond:EqThe}\nFor a fixed $n$, if $\\rho_X^{-\\frac{1}{2n}}\\rho_{U\\cup X}^{\\frac{1}{2n}}$ and its adjoint commute with $\\rho_X^{-\\frac{1}{2n}}\\rho_{W \\cup X}^{\\frac{1}{2n}}$, then the conditions given in eqs. (\\ref{Cond:CIo1}-\\ref{Cond:CIo4}) are all equivalent.\n\\end{The}\n\\begin{proof}\nWe start by showing that $\\rho\\ns{n}_{U|W\\cup X} = \\rho\\ns{n}_{U|X}$ is equivalent to $\\rho\\ns{n}_{W|U \\cup X} = \\rho\\ns{n}_{W|X}$. The first of these can be written explicitly in terms of joint and reduced density operators as\n\\begin{equation}\n\\rho_{W \\cup X}^{-\\frac{1}{2n}} \\rho_{U \\cup W \\cup X}^{\\frac{1}{n}} \\rho_{W \\cup X}^{-\\frac{1}{2n}} = \\rho_{X}^{-\\frac{1}{2n}} \\rho_{U \\cup X}^{\\frac{1}{n}} \\rho_{X}^{-\\frac{1}{2n}}.\n\\end{equation}\nLeft and right multiplying by $\\rho_{W\\cup X}^{\\frac{1}{2n}}$ gives\n\\begin{equation}\n\\label{Cond:WOIF1}\n\\rho_{U\\cup W\\cup X}^{\\frac{1}{n}} = \\rho_{W\\cup X}^{\\frac{1}{2n}} \\rho_{X}^{-\\frac{1}{2n}} \\rho_{U \\cup X}^{\\frac{1}{n}} \\rho_{X}^{-\\frac{1}{2n}} \\rho_{W\\cup X}^{\\frac{1}{2n}}.\n\\end{equation}\nNow, define $T = \\rho_{W\\cup X}^{\\frac{1}{2n}} \\rho_X^{-\\frac{1}{2n}} \\rho_{U\\cup X}^{\\frac{1}{2n}}$ so that $\\rho_{U\\cup W\\cup X}^{\\frac{1}{n}} = TT^\\dagger$. In a similar fashion, $\\rho\\ns{n}_{W|U\\cup X} = \\rho_{W|X}$ can be shown to be equivalent to $\\rho_{U\\cup W\\cup X}^{\\frac{1}{n}} = T^\\dagger T$. 
\n\nNow,\n\\begin{align}\nT^\\dagger & = \\rho_{U\\cup X}^{\\frac{1}{2n}} \\rho_X^{- \\frac{1}{2n}} \\rho_{W\\cup X}^{\\frac{1}{2n}} \\\\\n& = \\rho_X^{\\frac{1}{2n}} \\rho_X^{- \\frac{1}{2n}} \\rho_{U\\cup X}^{\\frac{1}{2n}} \\rho_X^{- \\frac{1}{2n}} \\rho_{W \\cup X}^{\\frac{1}{2n}} \\\\\n& = \\rho_X^{\\frac{1}{2n}} \\rho_X^{- \\frac{1}{2n}} \\rho_{W\\cup X}^{\\frac{1}{2n}} \\rho_X^{- \\frac{1}{2n}} \\rho_{U\\cup X}^{\\frac{1}{2n}} \\label{Cond:CommUse1} \\\\\n& = \\rho_{W\\cup X}^{\\frac{1}{2n}} \\rho_X^{- \\frac{1}{2n}} \\rho_{U\\cup X}^{\\frac{1}{2n}} \\\\\n& = T,\n\\end{align}\nwhere the assumption that $\\rho_X^{- \\frac{1}{2n}} \\rho_{U\\cup X}^{\\frac{1}{2n}}$ commutes with $\\rho_X^{- \\frac{1}{2n}} \\rho_{W\\cup X}$ has been used to derive eq. \\eqref{Cond:CommUse1}. Hence, $T$ is Hermitian and the two conditions are equivalent.\n\nFor the remaining condition note that $\\rho\\ns{n}_{U\\cup W|X} = \\rho\\ns{n}_{U|X} \\rho\\ns{n}_{W|X}$ is equivalent to\n\\begin{align}\n\\rho_{U\\cup W\\cup X}^{\\frac{1}{n}} & = \\rho_{U\\cup X}^{\\frac{1}{n}} \\rho_X^{-\\frac{1}{n}} \\rho_{W\\cup X}^{\\frac{1}{n}} \\\\\n& = \\rho_{U\\cup X}^{\\frac{1}{2n}} \\rho_{U\\cup X}^{\\frac{1}{2n}} \\rho_X^{-\\frac{1}{2n}} \\rho_X^{-\\frac{1}{2n}} \\rho_{W\\cup X}^{\\frac{1}{2n}} \\rho_{W\\cup X}^{\\frac{1}{2n}} \n\\end{align}\nThe commutativity of $\\rho_{U\\cup X}^{\\frac{1}{2n}} \\rho_X^{-\\frac{1}{2n}}$ and $\\rho_X^{-\\frac{1}{2n}} \\rho_{W\\cup X}^{\\frac{1}{2n}}$ then gives\n\\begin{align}\n\\rho_{U\\cup W\\cup X}^{\\frac{1}{n}} & = \\rho_{U\\cup X}^{\\frac{1}{2n}} \\rho_X^{-\\frac{1}{2n}} \\rho_{W\\cup X}^{\\frac{1}{2n}} \\rho_{U\\cup X}^{\\frac{1}{2n}} \\rho_X^{-\\frac{1}{2n}} \\rho_X^{-\\frac{1}{2n}} \\rho_{W\\cup X}^{\\frac{1}{2n}} \\\\\n& = \\rho_X^{\\frac{1}{2n}} \\rho_X^{-\\frac{1}{2n}} \\rho_{U\\cup X}^{\\frac{1}{2n}} \\rho_X^{-\\frac{1}{2n}} \\rho_{W\\cup X}^{\\frac{1}{2n}} \\rho_{U\\cup X}^{\\frac{1}{2n}} \\rho_X^{-\\frac{1}{2n}} \\rho_X^{-\\frac{1}{2n}} \\rho_{W\\cup X}^{\\frac{1}{2n}},\n\\end{align}\nand the commutativity of $ \\rho_X^{-\\frac{1}{2n}} \\rho_{U\\cup X}^{\\frac{1}{2n}}$ and $\\rho_X^{-\\frac{1}{2n}} \\rho_{W\\cup X}^{\\frac{1}{2n}}$ gives\n\\begin{align}\n\\rho_{U\\cup W\\cup X}^{\\frac{1}{n}} & = \\rho_X^{+\\frac{1}{2n}} \\rho_X^{-\\frac{1}{2n}} \\rho_{W \\cup X}^{\\frac{1}{2n}} \\rho_X^{-\\frac{1}{2n}} \\rho_{U\\cup X}^{\\frac{1}{2n}} \\rho_{U\\cup X}^{\\frac{1}{2n}} \\rho_X^{-\\frac{1}{2n}} \\rho_X^{-\\frac{1}{2n}} \\rho_{W\\cup X}^{\\frac{1}{2n}} \\\\\n& = \\rho_{W\\cup X}^{\\frac{1}{2n}} \\rho_X^{-\\frac{1}{2n}} \\rho_{U\\cup X}^{\\frac{1}{n}} \\rho_X^{-\\frac{1}{n}} \\rho_{W\\cup X}^{\\frac{1}{2n}},\n\\end{align}\nwhich is equivalent to $\\rho\\ns{n}_{U|W\\cup X} = \\rho\\ns{n}_{U|X}$.\n\\end{proof}\n\nTheorem \\ref{Cond:EqThe} relates the conditions eqs. 
(\\ref{Cond:CIo1}-\\ref{Cond:CIo3}) for a fixed value of $n$, but the conditions for different values of $n$ can also be related via the following corollary.\n\n\\begin{Cor}\n\\label{Cond:InductCor}\nFor fixed $n$, if $\\rho_X^{-\\frac{1}{2n}}\\rho_{U\\cup X}^{\\frac{1}{2n}}$ and its adjoint commute with $\\rho_X^{-\\frac{1}{2n}}\\rho_{W\\cup X}^{\\frac{1}{2n}}$, then $\\rho\\ns{n}_{U|W\\cup X} = \\rho\\ns{n}_{U|X}$ implies $\\rho\\ns{2n}_{U\\cup W|X} = \\rho\\ns{2n}_{U|X}\\rho\\ns{2n}_{W|X}$.\n\\end{Cor}\n\\begin{proof}\nIn the preceding proof it was shown that $\\rho\\ns{n}_{U|W\\cup X} = \\rho\\ns{n}_{U|X}$ is equivalent to $\\rho_{U\\cup W \\cup X}^{\\frac{1}{n}} = TT^{\\dagger}$, where $T = \\rho_{W \\cup X}^{\\frac{1}{2n}} \\rho_X^{-\\frac{1}{2n}} \\rho_{U\\cup X}^{\\frac{1}{2n}}$, and that the commutativity conditions imply that $T$ is Hermitian. Therefore, $\\rho_{U \\cup W\\cup X}^{\\frac{1}{n}} = \\left ( T^{\\dagger} \\right )^2$, which implies $\\rho_{U\\cup W\\cup X}^{\\frac{1}{2n}} = T^{\\dagger} = \\rho_{U\\cup X}^{\\frac{1}{2n}} \\rho_X^{-\\frac{1}{2n}} \\rho_{W\\cup X}^{\\frac{1}{2n}}$. The latter is straightforwardly equivalent to $\\rho\\ns{2n}_{U\\cup W|X} = \\rho\\ns{2n}_{U|X}\\rho\\ns{2n}_{W|X}$\n\\end{proof}\n\nPutting these results together leads to a set necessary and sufficient condition for conditional independence.\n\n\\begin{Cor}\nIf $\\rho_X^{-\\frac{1}{2n}}\\rho_{U\\cup X}^{\\frac{1}{2n}}$ and its adjoint commute with $\\rho_X^{-\\frac{1}{2n}}\\rho_{W\\cup X}^{\\frac{1}{2n}}$ for every $n$, then any of the conditions given in eqs. (\\ref{Cond:CIo1}-\\ref{Cond:CIo3}) imply that $S(U:W|X) = 0$.\n\\end{Cor}\n\\begin{proof}\nUnder these commutativity conditions, theorem \\ref{Cond:EqThe} implies that eqs. (\\ref{Cond:CIo1}-\\ref{Cond:CIo3}) are equivalent for any fixed $m$ and corollary \\ref{Cond:InductCor} shows that $\\rho\\ns{2m}_{U\\cup W|X} = \\rho\\ns{2m}_{U|X}\\rho\\ns{2m}_{W|X}$ can be derived from $\\rho\\ns{m}_{U|W \\cup X} = \\rho\\ns{m}_{U|X}$. By applying theorem \\ref{Cond:EqThe} with $n=2m$, it follows that $\\rho\\ns{m}_{U|W\\cup X} = \\rho\\ns{m}_{U|X}$ implies $\\rho\\ns{2m}_{U|W\\cup X} = \\rho\\ns{2m}_{U|X}$. By induction, this implies that $\\rho\\ns{2^s m}_{U|W\\cup X} = \\rho\\ns{2^s m}_{U|X}$ for any positive integer $s$. Taking the limit $s \\rightarrow \\infty$ gives $\\rho\\ensuremath{^{(\\infty)}}_{U|W\\cup X} = \\rho\\ensuremath{^{(\\infty)}}_{U|X}$, which implies $S(U:W|X) = 0$ by theorem \\ref{Cond:RuskaiThe}.\n\\end{proof}\n\nWe now turn to the algebraic approach to proving converse results. Firstly, note that eq.~\\eqref{Cond:CIo3} implies that $\\rho\\ns{n}_{U|X}$ and $\\rho\\ns{n}_{W|X}$ commute, since $\\rho\\ns{n}_{U \\cup W|X}$ is Hermitian. It can be shown that whenever two operators $A_{U\\cup X}\\otimes I_W$ and $I_U\\otimes B_{W \\cup X}$ commute there exists a decomposition of $\\mathcal{H}_X$ as in eq.~\\eqref{eq:decomp_X} such that\n\\begin{align}\nA_{U X} &= \\sum^d_{j = 1} a_{U X_j^L} \\otimes I_{X^R_j} \\ \\mathrm{and} \\label{Cond:Decomp1} \\\\\nB_{W X} &= \\sum^d_{j = 1} I_{X^L_j}\\otimes b_{ X_j^RW}, \\label{Cond:Decomp2}\n\\end{align}\nso eq. \\eqref{Cond:CIo3} implies that $\\rho\\ns{n}_{U|X}$ and $\\rho\\ns{n}_{W|X}$ have this structure, as would be expected if the joint state is conditionally independent and hence satisfies eq.~\\eqref{Cond:Hayden}. However, eq. 
\\eqref{Cond:Hayden} implies an additional constraint that has not been used so far, namely that $\\rho_X$ also respects the same tensor product structure on $\\mathcal{H}_X$, i.e. $\\rho_X$ is of the form\n\\begin{equation}\n\\rho_X = \\sum_{j = 1}^d p_j \\sigma_{X_j^L} \\otimes \\tau_{X_j^R}.\n\\end{equation}\nMore generally, we will say that an operator $C_X$ is {\\em decomposable with respect to} the pair of commuting operators $A_{U\\cup X}$ and $B_{W \\cup X}$ if it has the same algebraic structure on $\\mathcal{H}_X$, i.e. if\n\\begin{equation}\nC_{X} = \\sum^d_{j = 1} c_{X_j^L} \\otimes c'_{X_j^R},\n\\label{def:decomposable}\n\\end{equation}\nfor some factorization of $\\mathcal{H}_X$, such that eqs. \\eqref{Cond:Decomp1} and \\eqref{Cond:Decomp2} hold.\nImposing the commutativity of $\\rho\\ns{n}_{U|X}$ and $\\rho\\ns{n}_{W|X}$, together with the decomposability of $\\rho_X$ with respect to $\\rho\\ns{n}_{U|X}$ and $\\rho\\ns{n}_{W|X}$, as additional constraints is enough to show directly that any of eqs.~(\\ref{Cond:CIo1}-\\ref{Cond:CIo4}) implies conditional independence for all values of $n$. \n\n\n\\section{Graphical Models}\n\n\\label{Graph}\n\nIn this section, quantum conditional independence is used to define quantum Graphical Models that generalize their classical counterparts. The main focus is on quantum Markov Networks and $n$-Bifactor Networks, since these allow for the simplest formulation of the Belief Propagation algorithms to be described in \\S\\ref{QBP}. \\S\\ref{Graph:MN} reviews the definition of classical Markov Networks and the Hammersley-Clifford theorem, which gives an explicit representation for the probability distributions associated with classical Markov Networks. Motivated by this, \\S\\ref{Graph:QGS} defines the class of quantum $n$-Bifactor Networks, which are the most general class of networks on which our Belief Propagation algorithms operate. \\S\\ref{Graph:DMG} reviews the theory of dependency models and graphoids, which is useful for proving theorems about Graphical Models, and shows that quantum conditional independence can be used to define a graphoid. \\S\\ref{Graph:HC} defines quantum Markov Networks and gives some partial characterization results for the associated quantum states, along similar lines to the Hammersley-Clifford theorem. Most of these definitions and characterization results are summarized in Fig. \\ref{fig:worldview}.\n\nThe remaining two subsections briefly outline two other quantum Graphical Models: Quantum Factor Graphs in \\S\\ref{Graph:FG} and Quantum Bayesian Networks in \\S\\ref{Graph:BN}. These structures are equivalent from the point of view of the efficiency of Belief Propagation algorithms, since it is always possible to convert them into $n$-Bifactor Networks and vice-versa with only a linear overhead in graph size.
An explicit method for converting a quantum factor graph into a quantum $1$-Bifactor Network is given because factor graphs are used in the application to quantum error correction developed in \\S\\ref{App:QEC}.\n\n\\subsection{Classical Markov Networks}\n\n\\label{Graph:MN}\n\n\\begin{figure}\n\\center\\includegraphics{MN}\n\\caption{The equalities $H(a:d\\cup e\\cup f | b \\cup c) = 0$, $H(f:a\\cup b \\cup c \\cup d | e) = 0$, and $H(a\\cup b : e \\cup f | c \\cup d) = 0$ are examples of constraints that are satisfied when $(G,P(V))$ is a Markov Network.}\n\\label{fig:MN}\n\\end{figure}\n\nLet $G = (V,E)$ be an undirected graph and suppose that each vertex $v \\in V$ is associated with a random variable, also denoted $v$. Let $P(V)$ be the joint distribution of the variables. $(G,P(V))$ is a \\emph{Classical Markov Network} if for all $U \\subseteq V$, $H(U:V - (n(U) \\cup U)|n(U)) = 0$, where $n(U)$ is the set of nearest neighbors of $U$ in $G$ (see Fig.~\\ref{fig:MN}). Further, if $P(V)$ is strictly positive for all possible valuations of the variables, then $(G,P(V))$ is called a \\emph{Positive Classical Markov Network}. For such positive networks there is a powerful characterization theorem \\cite{Gri73a,Bes74a}.\n\\begin{The}[Hammersley-Clifford \\cite{HC71a}]\n\\label{thm:HC}\n$(G,P(V))$ is a positive classical Markov network iff it can be written as\n\\begin{equation}\n\\label{Graph:HCE}\nP(V) = \\frac{1}{Z} \\prod_{C \\in \\mathfrak{C}} \\psi(C),\n\\end{equation}\nwhere $\\mathfrak{C}$ is the set of cliques of $G$, $\\psi(C)$ is a positive function defined on the random variables in $C$ and $Z$ is a normalization factor.\n\\end{The}\nA set of vertices $C \\subseteq V$ in a graph is a clique if $\\forall u,v \\in C$, $u \\neq v \\rightarrow (u,v) \\in E$, i.e. every vertex in $C$ is connected to every other vertex in $C$ by an edge. Note that the decomposition in eq.~\\eqref{Graph:HCE} is generally not unique, even up to normalization. A distribution of the form of eq.~\\eqref{Graph:HCE} is said to factorize with respect to the graph $G$.\n\nMarkov chains are a special case of Markov Networks in which the graph is a chain. These are included in the slightly more general class of networks where the graph is a tree. For trees the only cliques are the individual vertices and the pairs of vertices that are connected by an edge, and the associated probability distributions have a representation in terms of marginal and mutual probability distributions of the form\n\\begin{equation}\n\\label{Graph:HCMut}\nP(V) = \\prod_{v \\in V} P(v) \\prod_{(u,v) \\in E} P(u:v),\n\\end{equation}\nwhich generalizes the decomposition for three variable Markov chain given in eq. \\eqref{Cond:MCMDecomp}. For more general networks wherein the graph has cycles, there is no Hammersley-Clifford decomposition in which the functions $\\psi(C)$ are marginal and mutual probability distributions.\n\nThe Hammersley-Clifford decomposition can be put in a form more familiar to physicists by introducing a positive constant $\\beta$ and defining the functions $H(C) = - \\beta^{-1} \\log \\psi(C)$, which are always well defined since $\\psi(C)$ is positive. Then eq.~\\eqref{Graph:HCE} can be written as\n\\begin{equation}\nP(V) = \\frac{1}{Z} \\exp \\left ( -\\beta \\sum_{C \\in \\mathfrak{C}} H(C) \\right ),\n\\end{equation}\nwhich is a Gibbs state for a system with a Hamiltonian $\\sum_{C \\in \\mathfrak{C}} H(C)$ and partition function $Z$. 
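\nAs a minimal numerical illustration of the simplest tree case (our own example, with arbitrary positive functions), the following Python sketch builds a distribution of the Hammersley-Clifford form of eq.~\\eqref{Graph:HCE} on the chain with vertices $a$, $b$, $c$ and edges $(a,b)$, $(b,c)$, and verifies that the conditional mutual information $H(a:c|b)$ vanishes, as required by the Markov property for this graph.\n\\begin{verbatim}\nimport numpy as np\n\nrng = np.random.default_rng(0)\nna, nb, nc = 3, 4, 2                       # alphabet sizes of a, b, c\n\n# strictly positive vertex and edge functions psi\npsi_a = rng.random(na) + 0.1\npsi_b = rng.random(nb) + 0.1\npsi_c = rng.random(nc) + 0.1\npsi_ab = rng.random((na, nb)) + 0.1\npsi_bc = rng.random((nb, nc)) + 0.1\n\nP = (psi_a[:, None, None] * psi_b[None, :, None] * psi_c[None, None, :]\n     * psi_ab[:, :, None] * psi_bc[None, :, :])\nP = P / P.sum()                            # the 1/Z normalization\n\ndef H(Q):\n    Q = Q[Q > 0]\n    return float(-(Q * np.log2(Q)).sum())\n\n# H(a:c|b) = H(ab) + H(bc) - H(b) - H(abc)\ncmi = H(P.sum(axis=2)) + H(P.sum(axis=0)) - H(P.sum(axis=(0, 2))) - H(P)\nprint(cmi)                                 # zero up to floating-point error\n\\end{verbatim}\n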
This is a generalization of the lattice models studied in statistical physics to arbitrary graphs. Indeed, if $G$ is a lattice, then, as for trees, the only cliques are the individual vertices and pairs of vertices connected by an edge, so for lattices the edges represent local nearest-neighbor interactions. \n\nIn many applications, such as in statistical physics, the functions $\\psi(C)$ are often constants for cliques containing three or more vertices even in the case where the graph has cliques with more than two vertices. In this case, we again have that the only nontrivial functions are defined on the vertices and edges of the graph, so the state can be written as\n\\begin{equation}\n\\label{Graph:CGS}\nP(V) = \\frac{1}{Z} \\prod_{v \\in V} \\psi(v) \\prod_{(u,v) \\in E} \\psi(u:v).\n\\end{equation}\nHere, the edge functions are denoted $\\psi(u:v)$ because of the close parallel with eq.~\\eqref{Graph:HCMut}, but they are general positive functions rather than mutual distributions. We adopt the terminology \\emph{bifactor distribution} to describe distributions of the form of eq.~\\eqref{Graph:CGS} and \\emph{Bifactor Network} for the pair $(G,P(V))$. For example, the distribution associated with a local nearest-neighbor model on an arbitrary graph, such as the spin-glasses studied in statistical physics, would be a bifactor distribution. \n\n\\subsection{Quantum Bifactor Networks}\n\n\\label{Graph:QGS}\n\nA proper generalization of Markov Networks to quantum theory involves the replacement of random variables with quantum systems and the replacement of classical conditional independence with its quantum counterpart. This theory is developed in the following sections, but it is convenient to first introduce a class of states that parallels the classical bifactor distributions of eq.~\\eqref{Graph:CGS}.\n\nLet $G = (V,E)$ be a graph, let each vertex $v \\in V$ be associated to a quantum system with Hilbert space $\\mathcal{H}_v$. Let $\\mathcal{H}_V = \\bigotimes_{v \\in V} \\mathcal{H}_v$ and consider the class of states $\\rho_V$ that can be expressed as\n\\begin{equation}\n\\label{Graph:QGSEPre}\n\\rho_V = \\frac 1 Z \\left(\\bigotimes_{u\\in V} \\mu_u\\right) \\pns{n} \\left(\\left (\\pns{n} \\right )_{(v,w) \\in E} \\nu_{v:w}\\right),\n\\end{equation}\nwhere $Z$ is normalization constant, the $\\mu_u$'s are operators on $\\mathcal{H}_u$ and the $\\nu_{v:w} = \\nu_{w:v}$ are operators on $\\mathcal{H}_v\\otimes\\mathcal{H}_w$. As stated, this expression is ambiguous because the $\\pns{n}$ product is neither commutative or associative apart from in the limit $n \\rightarrow \\infty$. To avoid this ambiguity we impose the additional constraint that $[\\nu_{u:v},\\nu_{w:x}]=0$ for finite $n$, in which case the expression $\\left ( \\pns{n} \\right )_{(v,w) \\in E} \\nu_{v:w}$ reduces to $\\prod_{(v,w) \\in E} \\nu_{v:w}$. 
The state $\\rho_V$ is an \\emph{$n$-bifactor state} if it can be written as \n\\begin{equation}\n\\label{Graph:QGSE}\n\\rho_V = \\frac 1 Z \\left(\\bigotimes_{u\\in V} \\mu_u\\right) \\pns{n} \\left(\\prod_{(v,w) \\in E} \\nu_{v:w}\\right),\n\\end{equation}\nwith $[\\nu_{u:v},\\nu_{w:x}]=0$, and it is an \\emph{$\\infty$-bifactor state} if it can be written as\n\\begin{equation}\n\\label{Graph:QGSEInf}\n\\rho_V = \\frac 1 Z \\left(\\bigotimes_{u\\in V} \\mu_u\\right) \\ensuremath{\\odot} \\left(\\ensuremath{\\odot}_{(v,w) \\in E} \\nu_{v:w}\\right),\n\\end{equation}\nwith no commutativity constraint on the $\\nu_{v:w}$.\nThe pair $(G,\\rho_V)$ is referred to as a quantum \\emph{$n$-Bifactor Network}, or \\emph{$\\infty$-Bifactor Network}, respectively.\n\nIt turns out that not every quantum Bifactor Network is a quantum Markov Network, but the quantum generalizations of Belief Propagation algorithms to be developed in \\S\\ref{QBP} can be formulated for any Bifactor Network. Therefore, readers who are mainly interested in algorithms and applications rather than proofs can skip to \\S\\ref{QBP}, perhaps pausing to read \\S\\ref{Graph:FG} on the way in order to understand the application to quantum error correction. \n\nThe next goal is to formulate the theory of quantum Markov Networks and provide characterization theorems analogous to the Hammersley-Clifford theorem. In order to do so it is convenient to first introduce the theory of dependency models and graphoids, which is useful for proving theorems about Graphical Models.\n\n\\subsection{Dependency Models and Graphoids}\n\n\\label{Graph:DMG}\n\nGraphs and conditional independence relations share a number of important properties that are responsible for the structure of Graphical Models. These properties are also shared by a number of other mathematical structures and they can be abstracted into structures known as dependency models and graphoids, which were introduced by Gieger, Verma, and Pearl \\cite{VP90a, GVP90a}. Here, the theory is briefly reviewed and quantum conditional independence is shown to also give rise to a graphoid.\n\nA \\emph{dependency model} $M$ over a finite set $V$ is a tripartite relation over disjoint subsets of $V$. The statement that $(U,W,X) \\in M$ will be denoted $I(U,W|X)$, with a possible subscript on the $I$ to denote the type of dependency model. $I(U,W|X)$ should be taken to mean that ``$U$ and $W$ only interact via $X$'', or that ``$U$ and $W$ are independent given $X$''. \n\n\\begin{Exa}\nAn \\emph{Undirected Graph Dependency Model} $I_G$ is defined in terms of an undirected graph $G$. Let $V$ be the set of vertices of $G$ and then let $I_G(U,W|X)$ if every path from a vertex in $U$ to a vertex in $W$ passes through a vertex in $X$. $I_G$ is often called the \\emph{Global Markov Property}.\n\\end{Exa}\n\n\\begin{Exa}\nA \\emph{Probabilistic Dependency Model} $I_P$ is defined in terms of a probability distribution $P(V)$ over a set $V$ of random variables. $I_P(U,W|X)$ is true if $U$ and $W$ are conditionally independent given $X$. \n\\end{Exa}\n\n\\begin{Exa}\nA \\emph{Quantum Dependency Model} $I_\\rho$ is defined in terms of a density operator $\\rho_V$ acting on the tensor product of Hilbert spaces labeled by elements of a set $V$. 
$I_\\rho(U,W|X)$ is true if $U$ and $W$ are quantum conditionally independent given $X$.\n\\end{Exa}\n\nA \\emph{graphoid} is a dependency model that for all disjoint $U,W,X,Y \\subseteq V$satisfies the following axioms:\n\\begin{align}\n\\text{Symmetry:}\\quad \t\t& I(U,W|X) \\Rightarrow I(W,U|X) \\\\\n\\text{Decomposition:}\\quad \t& I(U,W\\cup Y|X) \\Rightarrow I(U,W|X) \\\\\n\\text{Weak Union:}\\quad \t\t& I(U,W \\cup Y|X) \\Rightarrow I(U,W|X \\cup Y) \\\\\n\\text{Contraction:}\\quad \t\t& I(U,W|X) \\,\\, \\text{and} \\,\\, I(U,Y|X \\cup W) \\Rightarrow I(U,W \\cup Y|X).\n\\end{align}\nA \\emph{positive graphoid} is a graphoid that also satisfies the additional axiom\n\\begin{align}\n\\text{Intersection:}\\quad\t\t& I(U,W |X \\cup Y) \\,\\, \\text{and} \\,\\, I(U,Y|W \\cup X) \\Rightarrow I(U,W \\cup Y|X).\n\\end{align}\n\n\\begin{The}\nThe quantum dependency model is a graphoid.\n\\end{The}\n\\begin{proof}\nSymmetry is immediate because $S(U:W|X)$ is invariant under exchange of $U$ and $W$. Decomposition and Weak Union follow from the strong subadditivity inequality. Specifically, for $A,B,C \\subseteq V$, strong subadditivity asserts that $S(A:B|C) \\geq 0$, or in terms of von Neumann entropies\n\\begin{equation}\n\\label{Cond:SS}\nS(A\\cup C) + S(B\\cup C) - S(C) - S(A\\cup B \\cup C) \\geq 0.\n\\end{equation}\nDecomposition asserts that if $S(U:W \\cup Y|X) = 0$ then $S(U:W|X) = 0$. This is true if $S(U:W \\cup Y|X) - S(U:W|X) \\geq 0$, since $S(U:W|X)$ is guaranteed to be positive by strong subadditivity. Expanding $S(U:W \\cup Y|X) - S(U:W|X)$ and canceling terms gives\n\\begin{equation}\n\\begin{split}\nS(U:W \\cup Y|X) - S(U:W|X) = & S(U \\cup W \\cup X) + S(W \\cup X \\cup Y) \\\\\n& - S(W \\cup X) - S(U \\cup W \\cup X \\cup Y),\n\\end{split}\n\\end{equation}\nbut the right hand side is positive by eq.~\\eqref{Cond:SS} with $A= U, B = Y, C = W\\cup X$.\n\nWeak Union is proved via a similar argument applied to $S(U:W \\cup Y|X) - S(U:W|X \\cup Y)$. It follows from eq.~\\eqref{Cond:SS} by taking $A = U, B = Y, C = X$. Finally, contraction follows from noting that $S(U:W|X) + S(U:Y|X \\cup W) = S(U:W \\cup Y|X)$, which is straightforward to show by expanding in terms of von Neumann entropies.\n\\end{proof}\n\nThe well-known analogous result for classical probability distributions follows immediately because classical probability distributions can be represented by density matrices that are diagonal in an orthonormal product basis, and for such states the von Neumann entropies of subsystems are equal to the Shannon entropies of the corresponding marginal distributions. Additionally, if $P(V)$ is positive for all possible valuations of the variables then the associated dependency model is actually a positive graphoid. The analogous quantum property would be to require that $\\rho_V$ is a strictly positive operator, i.e. it is of full rank, but we have not been able to prove that this property implies intersection.\n\nThe undirected graph dependency model is also a positive graphoid. The proof is straightforward, so it is not given here. 
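\nSince all of these axioms are statements about vanishing conditional mutual informations, they are easy to probe numerically. The following Python sketch (our own illustration, using an arbitrary full-rank four-party state on qubits $U$, $W$, $X$, $Y$) computes conditional mutual informations from von Neumann entropies via partial traces, checks that strong subadditivity holds, and verifies the identity $S(U:W|X) + S(U:Y|X \\cup W) = S(U:W \\cup Y|X)$ used in the contraction step above.\n\\begin{verbatim}\nimport numpy as np\n\nrng = np.random.default_rng(1)\ndims = (2, 2, 2, 2)                         # subsystems U, W, X, Y\nD = int(np.prod(dims))\n\nG = rng.normal(size=(D, D)) + 1j * rng.normal(size=(D, D))\nrho = G @ G.conj().T\nrho = rho / np.trace(rho)                   # random full-rank density operator\n\ndef reduced(keep):\n    # partial trace keeping the subsystems listed (in increasing order) in keep\n    t = rho.reshape(dims + dims)\n    for s in sorted(set(range(len(dims))) - set(keep), reverse=True):\n        t = np.trace(t, axis1=s, axis2=s + t.ndim // 2)\n    d = int(np.prod([dims[k] for k in keep]))\n    return t.reshape(d, d)\n\ndef S(subsystems):\n    lam = np.linalg.eigvalsh(reduced(sorted(subsystems)))\n    lam = lam[lam > 1e-12]\n    return float(-(lam * np.log2(lam)).sum())\n\ndef cmi(U, W, X):\n    # S(U:W|X) = S(UX) + S(WX) - S(X) - S(UWX)\n    return S(U | X) + S(W | X) - S(X) - S(U | W | X)\n\nU, W, X, Y = {0}, {1}, {2}, {3}\nassert cmi(U, W, X) >= -1e-9                # strong subadditivity\nlhs = cmi(U, W, X) + cmi(U, Y, X | W)\nrhs = cmi(U, W | Y, X)\nassert abs(lhs - rhs) < 1e-9                # identity behind contraction\nprint(cmi(U, W, X), lhs, rhs)\n\\end{verbatim}\n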
The following theorem is important for the theory of Markov networks.\n\\begin{The}[Lauritzen \\cite{Lau06a}]\n\\label{Cond:UDGraph}\nThe undirected graph dependency model is equivalent to the dependency model obtained by setting \n$I\\left (U,V - (U\\cup n(U))|n(U) \\right )$ for all $U \\subseteq V$, where $n(U)$ is the set of nearest neighbors of $U$, and demanding closure under the positive graphoid axioms.\n\\end{The}\n\nThe condition $I\\left (U,V - (U\\cup n(U))|n(U) \\right )$ defines the \\emph{Local Markov Property} on a graph. Note that although its closure under the positive graphoid axioms is equivalent to the Global Markov Property, this is not the case for a graphoid that doesn't satisfy intersection \\cite{Lau06a}.\n\n\\subsection{Quantum Markov Networks}\n\n\\label{Graph:HC}\n\nUsing the terminology of the previous section, the definition of a classical Markov Network can be conveniently reformulated as a pair $(G,P(V))$, where $G=(V,E)$ is an undirected graph and $P(V)$ is a probability distribution over random variables represented by the vertices, such that the graphoid $I_P$ satisfies the local Markov property with respect to the graph $G$. The definition of a quantum Markov network can now be obtained by replacing the probabilistic dependency model with a quantum dependency model.\n\nLet $G = (V,E)$ be an undirected graph and suppose that each vertex $v \\in V$ is associated with a quantum system, also denoted $v$, with Hilbert space $\\mathcal{H}_{v}$. Let $\\rho_V$ be a state on $\\mathcal{H}_V = \\bigotimes_{v \\in V} \\mathcal{H}_v$. $(G,\\rho_V)$ is a \\emph{Quantum Markov Network} if the graphoid $I_\\rho$ satisfies the local Markov property with respect to the graph $G$. Further, if $\\rho_V$ is of full rank, then $(G,\\rho_V)$ is called a \\emph{Positive Quantum Markov Network}. Note that unlike in the classical case, we cannot conclude that the global Markov property holds for positive quantum Markov networks because the intersection axiom has not been proved.\n\nThe remainder of this section provides some partial characterization results for quantum Markov networks, along the lines of the Hammersley-Clifford theorem. The most generally applicable of these results makes use of the $\\ensuremath{\\odot}$ product.\n\\begin{The}\n\\label{Graph:QHC}\nLet $G = (V,E)$ be an undirected graph and let $\\mathfrak{C}$ be the set of cliques of $G$. If $(G,\\rho_V)$ is a positive quantum Markov network then there exist positive operators $\\sigma_C$ acting on the cliques of $G$, i.e. $C \\in \\mathfrak{C}$, such that\n\\begin{equation}\n\\label{Graph:QHCDecomp}\n\\rho_V = \\bigodot_{C \\in \\mathfrak{C}} \\sigma_C .\n\\end{equation}\n\\end{The}\nThis theorem is analogous to one direction of the Hammersley-Clifford theorem and the proof is very similar to a standard proof for the classical case \\cite{Pol04a}, but is somewhat involved so it is given in appendix \\ref{Proof}. However, unlike the classical case, the converse does not hold, i.e. there are states of the form eq. \\eqref{Graph:QHCDecomp} that do not satisfy the local Markov property as illustrated by the following example. 
\n\n\\begin{Exa}\n\\label{ex:heisenberg}\nConsider a chain of 3 qubits $A$, $B$, and $C$ coupled through an anti-ferromagnetic Heisenberg interaction $H = \\sigma^x_A\\sigma^x_BI_C + \\sigma^y_A\\sigma^y_BI_C + \\sigma^z_A\\sigma^z_BI_C + I_A\\sigma^x_B\\sigma^x_C + I_A\\sigma^y_B\\sigma^y_C + I_A\\sigma^z_B\\sigma^z_C$ where $\\sigma^x$, $\\sigma^y$, and $\\sigma^z$ denote the Pauli operators\n\\begin{equation}\n\\sigma^x= \n\\left( \\begin{array}{cc}\n0 & 1 \\\\\n1 & 0\n\\end{array} \\right),\\ \\ \n\\sigma^z= \n\\left( \\begin{array}{cc}\n1 & 0 \\\\\n0 & -1\n\\end{array} \\right), \\ \\ \n\\mathrm{and}\\ \\ \n\\sigma^y= \\sigma^z \\sigma^x.\n\\end{equation}\nThe Gibbs state $\\rho_{A\\cup B \\cup C}(\\beta) = \\frac{1}{Z(\\beta)} \\exp(-\\beta H)$ has the form eq. \\eqref{Graph:QHCDecomp}, but for any finite $\\beta$ it has a non-zero mutual information between $A$ and $C$ conditioned on $B$, as shown in Fig.~\\ref{Heisenberg}.\n\\begin{figure}[h!]\n\\center \\includegraphics[height=2.5in]{Heis}\n\\caption{Conditional mutual information for a 3-vertex anti-ferromagnetic Heisenberg spin-$\\frac 12$ chain as a function of inverse temperature $\\beta$.}\n\\label{Heisenberg}\n\\end{figure}\n\\end{Exa}\n\nFor trees, a decomposition into reduced and mutual density operators analogous to eq. \\eqref{Graph:HCMut} is possible. For this, we need the following lemma.\n\n\\begin{Lem}\n\\label{Graph:Remove}\nLet $G= (V,E)$ be a graph, let $(G,\\rho_V)$ be a quantum Markov network and let $u \\in V$. Let $G' = (V',E')$ be the graph obtained by removing $u$ from $V$ and removing all edges that connect $u$ to any other vertex from the graph. Let $G'' = (V',E'')$ be the graph obtained by adding to $G'$ an edge between every pair of distinct neighbors of $u$ in the original graph $G$. Let $\\rho_{V'} = \\PTr{u}{\\rho_{V}}$. Then $(G'',\\rho_{V'})$ is a quantum Markov network.\n\\end{Lem}\n\\begin{proof}\nFor $U \\subset V$, let $U_u = U-u$ if $u \\in U$ and $U_u = U$ otherwise, and denote by $n_G(U_u)$ and $n_{G''}(U_u)$ the neighbors of $U_u$ in the graphs $G$ and $G''$ respectively. It must be shown that $I_{\\rho_V}(U, V-(U \\cup n_G(U)) | n_G(U))$ for all $U \\subset V$ implies $I_{\\rho_{V'}}(U_u, V'-(U_u \\cup n_{G''}(U_u)) | n_{G''}(U_u))$ for every $U_u \\subset V'$. By symmetry, we can assume without loss of generality that $u \\in U$. There are two different cases to consider:\n\n\\noindent{\\bf Case I:} $n_G(u) \\cap U \\neq \\emptyset$.\\\\\nThis implies that $n_{G''}(U_u) = n_G(U)$ and so $V' - (U_u \\cup n_{G''}(U_u)) = V - (U \\cup n_G(U))$. We conclude that $I_{\\rho_{V'}}(U_u, V'-(U_u \\cup n_{G''}(U_u)) | n_{G''}(U_u))$ is equivalent to $I_{\\rho_V}(U - u, V-(U \\cup n_G(U)) | n_G(U))$, and the result follows from decomposition.\n \n\\noindent{\\bf Case II:} $n_G(u) \\cap U = \\emptyset$.\\\\\nThis implies that $n_{G''}(U_u) = n_G(U_u)$. Consider the local Markov property on the original graph $G$ applied to $U_u$: $I_{\\rho_{V}}(U_u, V-(U_u \\cup n_{G}(U_u)) | n_{G}(U_u))$, which is equivalent to $I_{\\rho_{V}}(U_u, u \\cup V'-(U_u \\cup n_{G''}(U_u)) | n_{G''}(U_u))$, and the result follows from decomposition. \n\\end{proof}\n\n\\begin{The}\nLet $G= (V,E)$ be a tree.
If $(G,\\rho_V)$ is a positive quantum Markov network then it can be written as\n\\begin{equation}\n\\rho_V = \\left (\\bigotimes_{v \\in V} \\rho_v \\right ) \\pns{n} \\left ( \\prod_{(v,u) \\in E} \\rho\\ns{n}_{v:u} \\right ).\n\\label{eq:standard_graph_state}\n\\end{equation}\n\\label{The:mutual}\n\\end{The}\n\n\\begin{proof}\nThe proof is by induction on the number of vertices in the tree. It is clearly true for a single vertex, so consider a tree $G=(V,E)$ with $N$ vertices and choose a leaf vertex $u \\in V$. Construct the quantum Markov network $(G'',\\rho_{V'})$ as in lemma \\ref{Graph:Remove}. Since $u$ is a leaf it only has one neighbor in $G$, denoted $w$, so the only difference between $G$ and $G''$ is that $u$ and the single edge connecting $u$ to the rest of the graph have been removed. By the inductive assumption, $\\rho_{V'}$ has a decomposition of the form\n\\begin{equation}\n\\label{Graph:InductAssump}\n\\rho_{V'} = \\left (\\bigotimes_{v \\in V'} \\rho_v \\right ) \\pns{n} \\left ( \\prod_{(v,x) \\in E''} \\rho\\ns{n}_{v:x} \\right ).\n\\end{equation} \nGenerally, $\\rho_{V} = \\rho_{V' \\cup \\{u\\}} = \\rho_{V'} \\pns{n} \\rho\\ns{n}_{u|V'}$. The local Markov property implies that $I_\\rho (u,V' - w|w)$, so that $\\rho_{u|V'} = \\rho_{u|w}$, which in turn can be written as $\\rho_{u|w} = \\rho_u \\pns{n} \\rho_{u:w}$, so \n\\begin{equation}\n\\rho_V = \\rho_{V'} \\pns{n} \\left ( \\rho_u \\pns{n} \\rho_{u:w}\\right ).\n\\end{equation} \nEvery term in eq.~\\eqref{Graph:InductAssump} commutes with $\\rho_u$, because they are defined on different tensor product factors. Also, $\\rho_{u:w}$ commutes with all the other mutual density operators, either because they act on different tensor product factors or because the fact that $w$ is the only neighbor of $u$ implies that $u$ is quantum conditionally independent of any other subsystem given $w$. Collecting the factors then yields the form of eq.~\\eqref{eq:standard_graph_state} for $\\rho_V$.\n\\end{proof}\n\nIn the classical case, the Hammersley-Clifford decomposition is not necessarily unique, and when the graph is a tree the decomposition into marginal and mutual distributions is only one possibility. Similarly, a state $\\rho_V$ might have a decomposition of the form of eq.~\\eqref{eq:standard_graph_state} but with more general operators in place of the mutual and marginal states. This provides another motivation for the definition of an $n$-bifactor state that was given in eq.~\\eqref{Graph:QGSE}. As mentioned in \\S\\ref{Graph:QGS}, not all $n$-bifactor states are quantum Markov networks, but a subset of them are, as shown by the following theorem.\n\n\\begin{The}\n\\label{Graph:THCC}\nLet $G = (V,E)$ be a tree with each vertex $v \\in V$ associated to a quantum system with Hilbert space $\\mathcal{H}_v$. Let $\\mathcal{H}_V = \\bigotimes_{v \\in V} \\mathcal{H}_v$ and let $\\rho_V$ be an $n$-bifactor state on $\\mathcal{H}_V$. If $\\mu_v$ is decomposable with respect to all pairs $\\nu_{u:v}$ and $\\nu_{w:v}$, then $(G, \\rho_V)$ is a quantum Markov network.\n\\end{The}\n\nThe notion of decomposability used in the statement of this theorem is defined in eq. \\eqref{def:decomposable}. The proof is straightforward and we leave it as an exercise. \n\n\\subsection{Other Graphical Models}\n\n\\label{Graph:OM}\n\nIn this section, quantum generalizations of two other Graphical Models are described: Factor Graphs and Bayesian Networks. Generally, the choice of which model to use depends on the application and Belief Propagation algorithms have been developed for all of them in the classical case.
For example, Factor Graphs arise naturally in the theory of error correcting codes, Bayesian Networks are commonly used to model causal reasoning in artificial intelligence, and Markov Networks are useful in statistical physics. However, it is now understood that the classical versions of these three models are interconvertable, and that upon such conversion the different Belief Propagation algorithms are all equivalent in complexity \\cite{AM00a, YFW02a, KFL01a}. Some similar results also hold for the quantum case, as we illustrate by showing how a quantum factor graph can be converted into a $1$-Bifactor Network. This construction is used in the application to quantum error correction described in \\S\\ref{App:QEC}. \n\n\\subsubsection{Quantum Factor Graphs}\n\n\\label{Graph:FG}\n\n\\begin{figure}\n\\center\\includegraphics{FG}\n\\caption{Factor graph representation of the state $(\\Ket{000} + \\Ket{111})_{uvw}$, with $\\mu_u = \\mu_v = \\mu_w = I$ and $X_a = (I+\\sigma^z_u\\otimes \\sigma^z_v)$, $X_b = (I+\\sigma^x_u\\otimes \\sigma^x_v\\otimes\\sigma^x_w)$, and $X_c = (I+\\sigma^z_v\\otimes\\sigma^z_w)$.}\n\\label{fig:FG}\n\\end{figure}\n\nA \\emph{quantum factor graph} consists of a pair $(G, \\rho_V)$, where $G = (U,E)$ is a bipartite graph and $\\rho_V$ is a quantum state. A bipartite graph is an undirected graph for which the set of vertices can be partitioned into two disjoint sets, $V$ and $F$, such that $(v,f) \\in E$ only if $v \\in V$ and $f \\in F$. The vertices in $V$ are referred to as ``variable nodes\" and those in $F$ as ``function nodes\". Each variable node $v$ is associated with a quantum system, also labeled $v$, with a Hilbert space $\\mathcal{H}_v$, and $\\rho_V$ is a state on $\\bigotimes_{v \\in V} \\mathcal{H}_v$. The Hilbert space associated to a function node $f$ is the tensor product of the Hilbert spaces of the adjacent variable nodes\\footnote{The following equality is not just meant in the sense of an isomorphism, they are the same Hilbert spaces.}: $\\mathcal{H}_f = \\bigotimes_{v \\in n(f)} \\mathcal{H}_v$. The state associated with a factor graph is of the form\n\\begin{equation}\n\\label{Graph:FGE}\n\\rho_V = \\frac 1Z \\prod_{f \\in F} X_f \\ensuremath{\\star} \\bigotimes_{v \\in V} \\mu_v \n\\end{equation}\nwhere $\\mu_v$ is an operator on $\\mathcal{H}_v$, $X_f$ is an operator on $\\mathcal{H}_f$ and $[X_f,X_g] = 0$. \n\nFor example, such a state would be obtained after performing a sequence of projective von Neumann measurements on a product state of the variable nodes (see Fig.~\\ref{fig:FG}). More precisely, for each $f \\in F$, let $\\{P_f^j\\}$ be a complete set of orthogonal projectors, and let $\\bigotimes _{v \\in V} \\mu_v$ be the initial state of $V$. When the projective measurements $\\{P_f^j\\}$ are performed at each function node and commuting outcomes $P_f^j = X_f$ are obtained, the post-measurement state is of the form of eq. \\eqref{Graph:FGE}. Similarly, factor graph states could be obtained from more general POVM measurements $\\{E_f^j\\}$, provided the state update rule $\\rho_V \\rightarrow \\frac{(E_f^j)^{\\frac{1}{2}} \\rho_V (E_f^j)^{\\frac{1}{2}}}{\\Tr{E_f^j \\rho_V}}$ is used. In that case, the $X_f$ could be any positive operator rather than being restricted to projectors as in the case of a von Neumann measurement. 
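\n\nAs a concrete check of eq.~\\eqref{Graph:FGE}, consider the factor graph of Fig.~\\ref{fig:FG}. With $\\mu_u = \\mu_v = \\mu_w = I$ the $\\ensuremath{\\star}$ product reduces to an ordinary matrix product, and since the three function-node operators mutually commute and are (twice) projectors, the resulting state is the projector onto the GHZ state $(\\Ket{000} + \\Ket{111})/\\sqrt 2$. The following minimal Python sketch (ours, not part of the construction above; it assumes the qubit ordering $u,v,w$) verifies this numerically.\n\\begin{verbatim}\nimport numpy as np\n\n# Pauli matrices and the single-qubit identity\nI2 = np.eye(2, dtype=complex)\nsx = np.array([[0, 1], [1, 0]], dtype=complex)\nsz = np.array([[1, 0], [0, -1]], dtype=complex)\n\ndef kron(*ops):\n    out = np.eye(1, dtype=complex)\n    for op in ops:\n        out = np.kron(out, op)\n    return out\n\n# Function-node operators of Fig. FG, qubit ordering (u, v, w)\nX_a = kron(I2, I2, I2) + kron(sz, sz, I2)\nX_b = kron(I2, I2, I2) + kron(sx, sx, sx)\nX_c = kron(I2, I2, I2) + kron(I2, sz, sz)\n\n# With mu = I the bifactor product is an ordinary product (eq. Graph:FGE)\nrho = X_a @ X_b @ X_c\nrho = rho / np.trace(rho)\n\n# Projector onto the GHZ state (|000> + |111>)/sqrt(2)\nghz = np.zeros(8, dtype=complex)\nghz[0] = ghz[7] = 1 / np.sqrt(2)\nassert np.allclose(rho, np.outer(ghz, ghz.conj()))\n\\end{verbatim}\nThe same computation goes through for any set of commuting positive operators $X_f$, in which case the resulting factor graph state is in general mixed.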
\n\nTo convert a factor graph into a $1$-Bifactor Network, we need to treat the function nodes as distinct quantum systems, and so endow them with their own Hilbert spaces $\\mathcal{H}_f = \\bigotimes_{v \\in n(f)} \\mathcal{H}_{R_v^f}$ where $ \\mathcal{H}_{R_v^f}$ is isomorphic to $ \\mathcal{H}_{v}$. The system $R_v^f$ is called a reference system for $v$ in $f$. Then, the state of the function nodes can be written on the graph $G = (U,E)$, where $U = V\\cup F$, $\\rho_V = \\PTr{F}{\\rho_{U}}$ and\n\\begin{equation}\n\\rho_U = \\frac 1Z \\bigotimes_{u \\in U} \\mu_u \\ensuremath{\\star} \\prod_{(v,f) \\in E} \\nu_{v:f},\n\\end{equation}\nwhere for $u \\in F$, $\\mu_u = X_u^T$, $\\nu_{v:f} = d_v\\kb{\\Phi}{\\Phi}_{v\\cup R_v^f} \\otimes I_{f-R^f_v}$ and $\\Ket\\Phi_{v\\cup R_v^f} = \\frac{1}{\\sqrt d_v} \\sum_{j=1}^{d_v} \\Ket{j}_v\\Ket{j}_{R_v^f}$ denotes the maximally entangled state between $v$ and its reference $R_v^f$. \n\n\\subsubsection{Quantum Bayesian Networks}\n\n\\label{Graph:BN}\n\n\\begin{figure}\n\\center\\includegraphics{BN}\n\\caption{This directed acyclic graph has two distinct ancestral orderings: $(a,b,c,d)$ and $(a,c,b,d)$. The equalities $S(d:a|b\\cup c) = 0$ and $S(b:d\\cup d | a) = 0$ are examples of constraints that are satisfied when $(G,\\rho_V)$ is a Quantum Bayesian Network.}\n\\label{fig:BN}\n\\end{figure}\n\nApart from Markov Networks, there are other Graphical Models that make use of the theory of dependency models and graphoids. Bayesian Networks provide an example, and they are commonly applied in expert systems to model causal reasoning \\cite{Nea90a,Nea04a}. The basic idea is to replace the undirected graph of a Markov network with a Directed Acyclic Graph (DAG), wherein the directed edges represent direct cause-effect relationships. The quantum graphoid can be used to give a straightforward generalization of the classical networks, which we only treat briefly here. To describe the generalization, a few definitions and facts about DAGs are required.\n\nFor a vertex $v$ in a DAG $G = (V,E)$, let $m(v)$ denote the parents of $v$, i.e. $m(v) = \\{u \\in V|(u,v) \\in E\\}$. The set of ancestors of $v$ is denoted $a(v)$ and consists of those vertices $u$ for which there exists a path in the graph starting at $u$ and ending at $v$. Conversely, the set of descendants of $v$ is denoted $d(v)$ and consists of those vertices $u$ for which there exists a path in the graph starting at $v$ and ending at $u$. The set of parents of a subset $U \\subseteq V$ of vertices is defined as $m(U) = \\cup_{u \\in U} m(u) - U$ and similarly $a(U) = \\cup_{u \\in U} a(u) - U$ and $d(U) = \\cup_{u \\in U} d(u) - U$. The set of nondescendants of a subset $U \\subseteq V$ of vertices is defined to be $nd(U) = V - (d(U)\\cup U)$. Note that the vertices in $U$ are not considered to be nondescendants of $U$ for technical convenience. Finally, every DAG has at least one ancestral ordering of its vertices $(v_1,v_2,\\ldots,v_n)$, such that if $v_j \\in a(v_k)$ then $j < k$ (see Fig. \\ref{fig:BN}).\n\nA \\emph{Quantum Bayesian Network} is a pair $(G,\\rho_V)$, where $G = (V,E)$ is a DAG, each vertex $v \\in V$ is associated with a quantum system, also denoted $v$, with Hilbert space $\\mathcal{H}_v$, and $\\rho_V$ is a quantum state on $\\mathcal{H}_V = \\bigotimes_{v \\in V} \\mathcal{H}_v$. 
The state $\\rho_V$ satisfies the conditional independence constraints $I_\\rho (U, nd(U)-m(U)|m(U) )$ for all subsets $U \\subseteq V$.\n\nThe definition of a classical Bayesian Network is obtained by replacing the quantum systems with classical random variables. It can be shown that $(G,P(V))$ is a classical Bayesian Network iff $P(V) = \\prod_{v \\in V} P(v|m(v))$, and a partial quantum generalization of this can be obtained using the conditional density operator.\n\nDue to the nonassociativity of the $\\pns{n}$ products, expressions like $A \\pns{n} B \\pns{n} C$ are ambiguous. It is convenient to adopt the convention that they are evaluated left-to-right, so that $A \\pns{n} B \\pns{n} C = \\left ( A \\pns{n} B \\right ) \\pns{n} C$. Similarly, we adopt the convention that\n\\begin{equation}\n\\left ( \\pns{n} \\right )_{j=1}^N A_j = \\left ( \\left ( \\left ( A_1 \\pns{n} A_2 \\right ) \\pns{n} A_3 \\right ) \\ldots \\right ) \\pns{n} A_N.\n\\end{equation}\n\n\\begin{The}\nIf $(G,\\rho_V)$ is a Quantum Bayesian Network and $(v_1,v_2,\\ldots,v_N)$ is an ancestral ordering of $V$ then\n\\begin{equation}\n\\rho_V = \\left ( \\pns{n} \\right )_{j=1}^N \\rho\\ns{n}_{v_j|m(v_j)}.\n\\end{equation}\n\\end{The}\n\\begin{proof}\nFor any ordering $(v_1,v_2,\\ldots,v_N)$ of the vertices, an arbitrary state can always be written as\n\\begin{equation}\n\\rho_V = \\left ( \\pns{n} \\right )_{j=1}^{N} \\rho\\ns{n}_{v_{j}|v_{j - 1} v_{j - 2} \\ldots v_1}.\n\\end{equation}\nThis is a quantum generalization of the chain rule for conditional probabilities, which follows straightforwardly from the definition of conditional density operators. If $(v_1,v_2,\\ldots,v_N)$ is in fact an ancestral ordering, then $\\{v_{j-1}, v_{j - 2}, \\ldots, v_1\\} \\subseteq nd(v_{j})$, so $I_\\rho (v_{j}, nd(v_{j})|m(v_{j}))$ implies that $\\rho\\ns{n}_{v_j|v_{j-1} v_{j-2} \\ldots v_1} = \\rho\\ns{n}_{v_j|m(v_j)}$.\n\\end{proof}\n\n\n\\section{Quantum Belief Propagation}\n\n\\label{QBP}\n\nIn this section, we discuss algorithms for solving the inference problem that we started with in \\S\\ref{Problem} for the case of $n$-Bifactor Networks. In fact, we start with the seemingly simpler problem of computing the reduced density operators of the state on the vertices and on pairs of vertices connected by an edge, and then present a simple modification of the algorithm to solve the inference problem for local measurements. \n\nRecall that $n$-bifactor states are of the form\n\\begin{equation}\n\\label{Graph:QGSERem}\n\\rho_V = \\frac 1 Z \\left(\\bigotimes_{u\\in V} \\mu_u\\right) \\pns{n} \\left(\\prod_{(v,w) \\in E} \\nu_{v:w}\\right),\n\\end{equation}\nand that the operators associated with vertices and edges do not have to be straightforwardly related to the reduced and mutual density operators. Therefore, it is not clear a priori that even the simpler task can be done efficiently. {\\em Quantum Belief Propagation} (QBP) algorithms are designed to solve this problem by exploiting the special structure of $n$-bifactor states. Since the class of states under consideration is different for each value of $n$, there is not one but a family of algorithms. The algorithm that is designed to solve inference problems on $n$-Bifactor Networks is denoted QBP$^{(n)}$. \n\nTo avoid cumbersome notation, focus will be given to $n$-bifactor states with $n < \\infty$. Recall that the operators $\\nu_{u:v}$ defining these states mutually commute. This is not true of $\\infty$-bifactor states. 
Nevertheless, a Belief Propagation algorithm for $\\infty$-bifactor states can be readily defined from the finite-$n$ one, by replacing {\\em all} products appearing in eqs.~(\\ref{message}-\\ref{belief2}) by the $\\odot$ product. Under this modification, the convergence Theorem \\ref{thm:QBP} applies to $\\infty$-Bifactor Networks, and its proof only requires straightforward modifications.\n\nThe remainder of this section is structured as follows. \\S\\ref{QBP:Desc} gives a description of the QBP algorithms and \\S\\ref{QBP:Conv} shows that QBP$\\ns{n}$ converges on trees if the $n$-Bifactor Network is also a quantum Markov Network and that QBP$\\ns{1}$ converges on trees in general. In both cases, the algorithm converges in a time that scales linearly with the diameter of the tree. Finally, \\S\\ref{QBP:Inf} explains how to modify the algorithm to solve inference problems for local measurements.\n\n\\subsection{Description of the Algorithm}\n\n\\label{QBP:Desc}\n\nTo describe the operation of the QBP algorithms, it is helpful to imagine that the graph $G$ represents a network of computers with a processor situated at each vertex. The algorithm could equally well be implemented on a single processor, in which case the network is just a convenient fiction. Pairs of processors are connected by a communication channel if there is an edge between the corresponding vertices. The processor at vertex $u$ has a memory that stores the value of $\\mu_u$ as well as the value of $\\nu_{u:v}$ for each vertex $v$ that is adjacent to $u$ in the graph. The task assigned to each processor is to compute the local reduced state $\\rho_u$ and the joint states $\\rho_{u\\cup v}$\\footnote{Of course, it would be sufficient to only have one processor compute $\\rho_{u\\cup v}$ for each edge.}. At each time step $t$, the processor at $u$ updates its ``beliefs'' about $\\rho_u$ and $\\rho_{u\\cup v}$ via an iterative formula. These beliefs are denoted $b_u\\ns{n}(t)$ and $b_{uv}\\ns{n}(t)$, and are supposed to be approximations to the true reduced states $\\rho_u$ and $\\rho_{u\\cup v}$ based on the information available to the processor at time step $t$. Since the reduced states may depend on information stored at other vertices, the processors pass operator-valued messages $m\\ns{n}_{u \\rightarrow v}(t)$ along the edges at each time step in order to help their neighbors. The message $m\\ns{n}_{u \\rightarrow v}(t)$ is an operator on $\\mathcal{H}_v$ and is initialized to the identity operator $m\\ns{n}_{u\\rightarrow v}(0) = I_v$ at $t = 0$. For $t > 0$ it is computed via the iterative formula\n\\begin{equation}\nm^{(n)}_{u \\rightarrow v}(t) = \\frac 1Y \\PTr{u}{\\mu_u \\pns{n} \\Bigg[ \\Big\\{\\prod_{v' \\in n(u)-v} m^{(n)}_{v' \\rightarrow u}(t-1)\\Big\\} \\pns{n} \\nu_{u:v} \\Bigg]}.\n\\label{message}\n\\end{equation}\nHere, $Y$ is an arbitrary normalization factor that should be chosen to prevent the matrix elements of $m\\ns{n}_{u \\rightarrow v}(t)$ from becoming increasingly small as the algorithm proceeds. It is convenient to choose $Y$ such that $\\PTr{v}{m\\ns{n}_{u \\rightarrow v}(t)} = 1$. 
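\n\nTo make eq.~\\eqref{message} concrete, the following minimal Python sketch (ours; the helper names are illustrative) performs a single message update for the case $n=1$, writing the bifactor product as $A \\ensuremath{\\star} B = A^{1/2} B A^{1/2}$ (cf. the proof of Theorem~\\ref{thm:commute1}) and lifting operators on $\\mathcal{H}_u$ to $\\mathcal{H}_u \\otimes \\mathcal{H}_v$ by tensoring with the identity. The nested products are evaluated from the inside out, as in eq.~\\eqref{message}.\n\\begin{verbatim}\nimport numpy as np\nfrom scipy.linalg import sqrtm\n\ndef star(a, b):\n    # 1-bifactor product a * b = a^(1/2) b a^(1/2), for positive a\n    ra = sqrtm(a)\n    return ra @ b @ ra\n\ndef ptrace_first(op, d1, d2):\n    # partial trace over the first factor of an operator on C^d1 (x) C^d2\n    return np.trace(op.reshape(d1, d2, d1, d2), axis1=0, axis2=2)\n\ndef message_update(mu_u, incoming, nu_uv, d_u, d_v):\n    # One application of eq. (message) with n = 1.\n    # incoming: messages m_{v'->u}(t-1) for v' in n(u)-v, operators on H_u;\n    # by Proposition prop:commute they mutually commute, so their\n    # product is again a positive operator.\n    prod = np.eye(d_u, dtype=complex)\n    for m in incoming:\n        prod = prod @ m\n    inner = star(np.kron(prod, np.eye(d_v)), nu_uv)   # {prod} * nu_{u:v}\n    outer = star(np.kron(mu_u, np.eye(d_v)), inner)   # mu_u * [ ... ]\n    m_out = ptrace_first(outer, d_u, d_v)\n    return m_out / np.trace(m_out)                    # normalization Y\n\\end{verbatim}\nIterating this update along the edges of a tree and combining the stationary messages according to the belief formulas given below reproduces the exact one- and two-vertex reduced density operators, as established in \\S\\ref{QBP:Conv}.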
\n\n\nThe beliefs about the local density operator $\\rho_u$ at time $t$ are given by the simple formula\n\\begin{equation}\nb^{(n)}_u(t) = \\frac 1{Y'} \\mu_u \\pns{n} \\prod_{v' \\in n(u)} m^{(n)}_{v' \\rightarrow u}(t), \\label{belief1}\n\\end{equation}\nwhere $Y'$ is again a normalization factor that should be chosen to make $\\PTr{u}{b^{(n)}_u(t)} = 1$.\nOn the other hand, the beliefs about $\\rho_{u\\cup v}$ also depend on the messages received by the processor at $v$, so we have to imagine that each vertex shares its messages with its neighbors. Having done so, the beliefs about $\\rho_{u\\cup v}$ are computed via\n\\begin{equation}\nb^{(n)}_{uv}(t) = \\frac 1{Y''} (\\mu_u \\mu_v) \\pns{n} \\Bigg[\\Big\\{\\prod_{w \\in n(u)-v} m^{(n)}_{w \\rightarrow u}(t) \\prod_{w' \\in n(v)-u} m^{(n)}_{w' \\rightarrow v}(t)\\Big\\} \\pns{n} \\nu_{u:v} \\Bigg] \\label{belief2},\n\\end{equation}\nwhere $Y''$ is again a normalization factor.\n\nThe beliefs obtained from the QBP$^{(n)}$ algorithm on input $\\{\\mu_u\\}_{u\\in V}$ and $\\{\\nu_{u:v}\\}_{(u,v) \\in E}$ after $t$ time steps are denoted $[b^{(n)}_{u}(t), b^{(n)}_{uv}(t) ] = \\mathrm{QBP}_t^{(n)}(\\mu_u,\\nu_{u:v})$. The goal of the next section is to provide conditions under which the beliefs represent the exact solution to the inference problem, i.e. to find states and values of $t$ such that $\\mathrm{QBP}_t\\ns{n}(\\mu_u,\\nu_{u:v}) = [\\rho_u,\\rho_{u\\cup v}]$. \n\\subsection{Convergence on Trees}\n\n\\label{QBP:Conv}\n\nAt time $t$, the beliefs $b^{(n)}_{u}(t)$ and $b^{(n)}_{uv}(t)$ represent estimates of the reduced states $\\rho_u$ and $\\rho_{u\\cup v}$ of the input $n$-bifactor state $\\rho_V$. \nNote that when the $\\mu_u$ and the $\\nu_{u:v}$ all commute with one another and are diagonal in local basis, the QBP$^{(n)}$ algorithms all coincide for different $n$ (including $n=\\infty$) and correspond to the well known classical Belief Propagation algorithm. This algorithm always converges on trees in a time that scales like the diameter of the tree. Its convergence on general graphs is not fully understood and constitutes an active area of research \\cite{Yed01a, YFW02a}. In the quantum setting, the $\\mu_u$ and the $\\nu_{u:v}$ do not commute in general, but for finite $n$, the $\\nu_{u:v}$ commute with each other by assumption. This has straightforward consequence that will be of use later.\n\n\\begin{Prop}\n\\label{prop:commute}\nFor all $u,v \\in V$, $x \\in n(u)$, and $w \\in n(v)$, the following commutation relations hold $[\\nu_{u:v},m^{(n)}_{x\\rightarrow u}(t)] = 0$ and $[m^{(n)}_{w\\rightarrow v}(t),m^{(n)}_{x\\rightarrow u}(t)] = 0$ .\n\\end{Prop}\n\nBefore proving the convergence of Quantum Belief Propagation, the following classical example can help build intuition of its workings, and also serves to outline the crucial steps in proving convergence.\n\n\\begin{figure}\n\\center\\includegraphics{belief}\n\\caption{Belief $b_{uv}$ is a function of $\\mu_u$, $\\mu_v$, $\\nu_{u:v}$, and the incoming messages at vertices $u$ and $v$, except $m_{u\\rightarrow v}$ and $m_{v\\rightarrow u}$.}\n\\label{fig:belief}\n\\end{figure}\n\n\\begin{Exa}\n\\label{ex:classical}\nConsider the function $P$ of $N$ discrete variables $x_j \\in \\{1,2,\\ldots,d\\}$ \n\\begin{equation}\nP(x_1,x_2,\\ldots,x_N) = \\psi(x_1,x_2)\\psi(x_2,x_3)\\ldots\\psi(x_{N-1},x_N)\n\\label{eq:classic_ex}\n\\end{equation}\nwhich could be for instance a classical bifactor distribution on a chain with $N$ sites. 
To evaluate the marginal function $P(x_N) = \\sum_{x_1,x_2,\\ldots,x_{N-1}} P(x_1,x_2,\\ldots,x_N)$, one can proceed directly and carry the sum over $d^N$ terms. A more efficient solution is obtained by invoking the distributive law to reorder the various sums and products into\n\\begin{equation*}\nP(x_N) = \\sum_{x_{N-1}}\\Big(\\psi(x_{N-1},x_N)\\Big( \\ldots \\Big(\\sum_{x_2} \\psi(x_2,x_3)\\Big(\\sum_{x_1} \\psi(x_1,x_2) \\Big)\\Big) \\ldots\\Big) \\Big),\n\\end{equation*}\nand performing the sums sequentially, starting with $\\sum_{x_1}$, then $\\sum_{x_2}$, and so on\n\\begin{eqnarray*}\nP(x_N) &=& \\sum_{x_{N-1}}\\Big(\\psi(x_{N-1},x_N)\\Big( \\ldots \\Big(\\sum_{x_2} \\psi(x_2,x_3)M_{1\\rightarrow 2}(x_2)\\Big) \\ldots\\Big) \\Big) \\\\\n&=& \\sum_{x_{N-1}}\\Big(\\psi(x_{N-1},x_N)\\Big( \\ldots M_{2\\rightarrow 3}(x_3) \\ldots\\Big) \\Big) \\\\\n&&\\vdots \\\\\n&=& \\sum_{x_{N-1}} \\psi(x_{N-1}:x_N) M_{N-2 \\rightarrow N-1}(x_{N-1}) \n\\end{eqnarray*}\nwhere the ``messages\" are defined recursively $M_{j \\rightarrow j+1}(x_{j+1}) = \\sum_{x_{j}} \\psi(x_j:x_{j+1}) M_{j-1 \\rightarrow j}(x_j)$, with $M_{1 \\rightarrow 2} = \\sum_{x_1} \\psi(x_1:x_2)$. Each of these steps involves the sum of $d^2$ terms, so $P(x_N)$ can be computed with order $Nd^2$ operations. \n\\end{Exa}\n\nThis example differs from the Belief Propagation algorithm described in the previous section in three important aspects. Firstly, it relied on the distributive law, which does not hold in general for the $\\pns{n}$ product, i.e. $\\PTr{u}{X_{uv} \\pns{n} Y_{vw}} \\neq \\PTr{u}{X_{uv}} \\pns{n} Y_{vw}$ in general. This will motivate Theorems \\ref{thm:commute} and \\ref{thm:commute1}, that establish necessary conditions for the validity of the distributive law. Secondly, the graph in that example is a chain, whereas Belief Propagation operates on any graph. However, Belief Propagation is only guaranteed to converge on trees, and the above example generalizes straightforwardly to such graphs. Thirdly, the messages in the example must be computed in a prescribed order: $M_{i-1\\rightarrow i}$ is required to compute $M_{i \\rightarrow i+1}$. This last point is important and deserves an extensive explanation. \n\nSuppose that instead of computing the messages $M_{i\\rightarrow i+1}$ sequentially, messages at each vertex were computed at every time step, following the rule $m_{i \\rightarrow i\\pm1}(t,x_{i\\pm1}) = \\sum_{x_i} m_{i\\mp1 \\rightarrow i}(t-1,x_i)\\psi(x_i:x_{i\\pm1})$, as in eq. \\eqref{message}, with the initialization $m_{i\\pm1 \\rightarrow i}(0,x_i) = 1$. Then, one can easily verify that for $t\\geq i$, $m_{i \\rightarrow i+1}(t,x_{i+1}) = M_{i \\rightarrow i+1}(x_{i+1})$. In other words, the messages $m_{i\\rightarrow i+1}$ become time independent after a time equal to the distance between vertex $i$ the beginning of the chain. This observation can in fact be generalized as follows.\n\n\\begin{figure}\n\\center\\includegraphics{depth}\n\\caption{For $(u,v) \\in E$, the graph $G_v^u$ is obtained from $G$ by considering $u$ as the root and removing the subtree associated to vertex $v$. In this example, $depth(G_v^u) = 2$.}\n\\label{fig:depth}\n\\end{figure} \n\n\\begin{Lem}\nWhen $G$ is a tree, the QBP$^{(n)}$ messages $m^{(n)}_{u\\rightarrow v}(t)$ are time independent for $t> depth(G_v^u)$, where $G_v^u$ is the tree obtained from $G$ by choosing $u$ as the root, and removing the subtree associated to $v$ (see Fig. \\ref{fig:depth}).\n\\end{Lem}\n\n\\begin{proof}\nThe proof is by induction. 
If $u$ is a leaf, it has a unique neighbor $n(u)$ and $m_{u \\rightarrow n(u)}^{(n)}(t) = \\PTr{u}{\\mu_u \\pns{n} \\nu_{u:n(u)}}$ which is time independent. If $u$ is not a leaf, it has two neighbors $L(u)$ and $R(u)$. Clearly, if $m_{L(u)\\rightarrow u}^{(n)}(t)$ is time independent for $t \\geq t^*$, then $m_{u\\rightarrow R(u)}^{(n)}(t) = \\PTr{u}{\\mu_u \\pns{n}\\big[m_{L(u)\\rightarrow u}^{(n)}(t-1) \\pns{n} \\nu_{u:R(u)}\\big]}$ is time independent for $t \\geq t^*+1$. \n\\end{proof}\n\nWhen operated on a tree, all beliefs computed by QBP algorithm converge to a steady state after a time equal to the diameter of the tree. Note that when the graph contains loop, the beliefs do not necessarily reach a steady state. It remains to be shown that on trees, this steady state is the correct solution. For this, we need a technical result that requires some new notation. Let $U$ and $W$ be two non-intersecting subsets of $V$. Define the two subsets of edges $E_U = \\{ (u,w) \\in E : u,w \\in U\\}$ and $E_{U:W}= \\{ (u,w) \\in E : u \\in U\\ {\\mathrm and}\\ w \\in W\\}$. Let $\\Gamma_U = \\bigotimes_{u \\in U} \\mu_u$ and for any $F \\subset E$, let $\\Lambda_{F} = \\prod_{(u,w) \\in F} \\nu_{v:w}$. \n\n\\begin{The}\\label{thm:commute}\nLet $(G, \\rho_V)$ be an $n$-Bifactor Network with graph $G = (V,E)$. Let $U,W,X$ be non-intersecting subsets of $V$ such that $U\\cup W \\cup X =V$. When $S(U:X|W) = 0$, the following diagram is commutative.\n\\begin{equation}\\begin{CD}\n \\Gamma_{U\\cup W} \\pns{n} (\\Lambda_{E_{U\\cup W}} \\Lambda_{E_{U\\cup W:X}}) \n @>{\\text{Tr}_U}>> \n \\PTr{U}{\\Gamma_{U\\cup W} \\pns{n} (\\Lambda_{E_{U\\cup W}} \\Lambda_{E_{U\\cup W:X}})}\\\\\n @VV{\\Gamma_X \\pns{n}(\\cdot \\Lambda_{E_X})}V \n @VV{\\Gamma_X \\pns{n}(\\cdot \\Lambda_{E_X})}V \\\\\n \\rho_V = \\Gamma_V \\pns{n} \\Lambda_{E_V} \n @>{\\text{Tr}_U}>> \n \\PTr{U}{\\rho_V}\n\\end{CD}\\label{eq:commute_diag}\\end{equation}\n\\end{The}\n\n\\begin{proof}\nThe down-right path is the simplest. The first equality follows from the fact that $\\Lambda_{E_X}$ commutes with $\\Gamma_{U\\cup W}$ and all other $\\Lambda_E$'s, and the definition $\\rho_V = (\\Gamma_{U \\cup W} \\otimes \\Gamma_X ) \\pns{n} (\\Lambda_{E_{U\\cup W}} \\Lambda_{E_{U\\cup W:X}} \\Lambda_{E_X})$. The second equality is just a definition. The right-down path uses the representation of states that saturate strong subadditivity eq.~\\eqref{Cond:Hayden}, which implies that $\\rho_V$ has a decomposition of the form $\\rho_V = \\sum^d_{j = 1} p_j \\sigma_{U W_j^{(1)}} \\otimes \\tau_{W_j^{(2)} X}$. 
First observe that \n\\begin{align}\n\\Gamma_{U \\cup W} \\star^{(n)} (\\Lambda_{E_{U\\cup W}} \\Lambda_{E_{U\\cup W:X}} ) \n&= (\\Gamma_X^{-1}\\pns{n} \\rho_V) \\Lambda_{E_X}^{-1} \\\\\n&= \\Big(\\Gamma_X^{-1}\\pns{n} \\sum^d_{j = 1} p_j \\sigma_{U W_j^{(1)}} \\otimes \\tau_{W_j^{(2)} X}\\Big)\\Lambda_{E_X}^{-1} \\\\\n&= \\sum^d_{j = 1} p_j \\sigma_{U W_j^{(1)}} \\otimes \\Big[\\big(\\Gamma_X^{-1}\\pns{n} \\tau_{W_j^{(2)} X}\\big)\\Lambda_{E_X}^{-1}\\Big].\n\\end{align}\nIt follows that\n\\begin{align*}\n\\Gamma_X\\pns{n}\\left[\\PTr{U}{\\Gamma_{U \\cup W} \\star^{(n)} (\\Lambda_{E_{U\\cup W}} \\Lambda_{E_{U\\cup W:X}} )}\\Lambda_{E_X} \\right] \n&= \\sum^d_{j = 1} p_j \\sigma_{W_j^{(1)}} \\otimes \\tau_{W_j^{(2)} X}\\\\\n&= \\PTr{U}{\\rho_V}.\n\\end{align*}\n\\end{proof}\n\nSpecializing to the case $n=1$ enables a stronger result to be derived that does not require independence assumptions.\n\n\\begin{The}\\label{thm:commute1}\nLet $(G,\\rho_V)$ be a 1-Bifactor Network with graph $G = (V,E)$. Let $U,W,X$ be non-intersecting subsets of $V$ such that $U\\cup W \\cup X =V$. The following diagram is commutative.\n\\begin{equation}\\begin{CD}\n \\Gamma_{U} \\ensuremath{\\star} (\\Lambda_{E_{U\\cup W}} \\Lambda_{E_{U\\cup W:X}}) \n @>{\\text{Tr}_U}>> \n \\PTr{U}{\\Gamma_{U} \\ensuremath{\\star} (\\Lambda_{E_{U\\cup W}} \\Lambda_{E_{U\\cup W:X}})}\\\\\n @VV{\\Gamma_{W\\cup X} \\ensuremath{\\star}(\\cdot \\Lambda_{E_X})}V \n @VV{\\Gamma_{W\\cup X} \\ensuremath{\\star}(\\cdot \\Lambda_{E_X})}V \\\\\n \\rho_V = \\Gamma_V \\ensuremath{\\star} \\Lambda_{E_V} \n @>{\\text{Tr}_U}>> \n \\PTr{U}{\\rho_V}\n\\end{CD}\\end{equation}\n\\end{The}\n\n\\begin{proof}\nThe theorem follows simply from the cyclic property of the partial trace:\n\\begin{align}\n\\PTr{U}{\\rho_V}\n&= \\PTr{U}{[\\Gamma_U^\\frac 12 \\otimes \\Gamma_{W\\cup X}^\\frac 12] \\Lambda_{E} [\\Gamma_U^\\frac 12 \\otimes \\Gamma_{W\\cup X}^\\frac 12]} \\\\\n&= \\Gamma_{W\\cup X}^\\frac 12 \\PTr{U}{ \\Gamma_U \\Lambda_{E}} \\otimes \\Gamma_{W\\cup X}^\\frac 12 \\\\\n&= \\Gamma_{W\\cup X}^\\frac 12 \\PTr{U}{ \\Gamma_U \\Lambda_{E_{U\\cup W}} \\Lambda_{E_{U\\cup W:X}}} \\Lambda_{E_X} \\otimes \\Gamma_{W\\cup X}^\\frac 12.\n\\end{align}\n\\end{proof}\n\nWe are now positioned to state and prove the main result of this section. \n\n\\begin{The}\n\\label{thm:QBP}\nLet $(G,\\rho_V)$ be an $n$-Bifactor Network with graph $G = (V,E)$, and let $[b^{(n)}_{u}(t), b^{(n)}_{uv}(t) ] = \\mathrm{QBP}_t^{(n)}(\\mu_u,\\nu_{u:v})$. If $(G,\\rho_V)$ is a quantum Markov network and $G$ is a tree, then for all $t \\geq diameter(G)$, $b^{(n)}_u(t) = \\rho_u$ and $b^{(n)}_{uv}(t) = \\rho_{u\\cup v}$.\n\\end{The}\n\n\\begin{proof}\nFirst, observe that $b^{(n)}_u(t) = \\PTr{v}{b^{(n)}_{uv}(t)}$, so it is sufficient to prove that $b^{(n)}_{uv}(t) = \\rho_{u\\cup v}$. Consider $u \\cup v$ to be the root of the tree. We proceed by induction, repeatedly tracing out leaves from the bifactor state except $u$ and $v$ until we are left with only vertices $u$ and $v$. Set $G(0) = G$ and let $G(t) = (V(t),E(t))$ be the tree left after $t$ such rounds of removing leaves. Denote the leaves of $G(t)$ apart from $u$ and $v$ by $l(t)$, the children of $x$ by $c(x)$, and the unique parent of $x$ by $m(x)$. 
At $t=0$, consider tracing out a leaf $w$ of $G$:\n\\begin{align}\n\\PTr{w}{\\rho_V}\n&= \\PTr{w}{(\\mu_w\\otimes \\Gamma_{V-w})\\pns{n} (\\nu_{w:m(w)} \\Lambda_{E_{V-w}})}\\\\\n&= \\Gamma_{V-w} \\pns{n}\\left[\\PTr{w}{\\mu_w \\pns{n} \\nu_{w:m(w)}}\\Lambda_{E_{V-w}}\\right] \\\\\n&= \\Gamma_{V-w} \\pns{n}\\left[m_{w\\rightarrow m(w)}^{(n)}(1)\\Lambda_{E_{V-w}}\\right]\n\\end{align}\nwhere we have used Theorem~\\ref{thm:commute} going from the first to the second line. Since this holds for all leaves, we conclude that \n\\begin{equation}\n\\PTr{l(0)}{\\rho_{V}} = \\Gamma_{V(1)} \\pns{n} \\left(\\prod_{x \\in l(0)} \\prod_{y \\in c(x)} m_{y\\rightarrow x}^{(n)}(1)\\Lambda_{E(1)}\\right).\n\\end{equation}\nWe thus make the inductive assumption that\n\\begin{equation}\n\\label{Graph:IndAss}\n\\rho_{V(t)} = \\Gamma_{V(t)} \\pns{n} \\left(\\prod_{x \\in l(t)} \\prod_{y \\in c(x)} m_{y\\rightarrow x}^{(n)}(t)\\Lambda_{E(t)}\\right).\n\\end{equation}\nIt follows that\n\\begin{align}\n\\rho_{V(t+1)} \n&= \\PTr{l(t)}{\\rho_{V(t)}} \\\\\n&= \\PTr{l(t)}{\\Gamma_{V(t)} \\pns{n} \\left[\\prod_{x \\in l(t)} \\prod_{y \\in c(x)} m_{y\\rightarrow x}^{(n)}(t)\\Lambda_{E(t)}\\right]} \\\\\n&= \\PTr{l(t)}{\\Gamma_{V(t+1)} \\pns{n} \\left[\\prod_{x \\in l(t)} \\mu_x \\pns{n}\\Big( \\prod_{y \\in c(x)} m_{y\\rightarrow x}^{(n)}(t) \\nu_{x:m(x)} \\Lambda_{E(t+1)}\\Big)\\right]} \\\\\n&= \\Gamma_{V(t+1)} \\pns{n} \\left[\\prod_{x \\in l(t)} \\PTr{x}{ \\mu_x \\pns{n}\\Big( \\prod_{y \\in c(x)} m_{y\\rightarrow x}^{(n)}(t) \\nu_{x:m(x)}\\Big)} \\Lambda_{E(t+1)}\\right] \\\\\n&= \\Gamma_{V(t+1)} \\pns{n} \\left[\\prod_{x \\in l(t)} m_{x\\rightarrow m(x)}^{(n)}(t+1) \\Lambda_{E(t+1)}\\right] \\\\\n&= \\Gamma_{V(t+1)} \\pns{n} \\left[\\prod_{x \\in l(t+1)} \\prod_{y \\in c(x)} m_{y\\rightarrow x}^{(n)}(t+1)\\Lambda_{E(t+1)}\\right]\n\\end{align}\nwhich is again of the form of eq.~\\eqref{Graph:IndAss} with $t$ replaced by $t+1$, so the decomposition holds for all $t$ by induction. We have again used Theorem~\\ref{thm:commute} in going from the third to the fourth line. When $V(t)$ contains only $u$ and $v$, this reduces to $\\rho_{u\\cup v} = b\\ns{n}_{uv}(t)$, which is what we set out to prove. \n\\end{proof}\n\nOnce again, specializing to the case $n=1$ enables a stronger result to be derived that does not rely on independence assumptions.\n\n\\begin{Cor}\n\\label{cor:QBP1}\nLet $(G,\\rho_V)$ be a $1$-Bifactor Network with graph $G = (V,E)$, and let $[b_{u}(t), b_{uv}(t) ] = \\mathrm{QBP}^{(1)}_t(\\mu_u,\\nu_{u:v})$. If $G$ is a tree, then for all $t \\geq diameter(G)$, $b_u(t) = \\rho_u$ and $b_{uv}(t) = \\rho_{u\\cup v}$.\n\\end{Cor}\n\n\\begin{proof} This Corollary is a consequence of Theorem~\\ref{thm:commute1} and the fact that the proof of Theorem~\\ref{thm:QBP} only relies on the commutativity of the diagram eq.~\\eqref{eq:commute_diag}.\n\\end{proof}\n\nThis last result gives us additional information about the structure of correlations in 1-bifactor states, which is captured by the following corollary.\n\n\\begin{Cor}\nLet $(G,\\rho_V)$ be a $1$-Bifactor Network on graph $G = (V,E)$. If $G$ is a tree, then the mutual density operators commute: $[\\rho_{u:v},\\rho_{w:x}] = 0$ for all $(u,v)$ and $(w,x) \\in E$.\n\\end{Cor}\n\n\\begin{proof}\nThe only non-trivial case is $[\\rho_{u:v},\\rho_{v:w}]$ with $u \\neq w$. 
Let $[b_{u}(t), b_{uv}(t) ] = \\mathrm{QBP}^{(1)}_t(\\mu_u,\\nu_{u:v})$ and denote\n\\begin{equation}\nA_{u-v}(t) = \\prod_{w\\in n(u)-v} m_{w\\rightarrow u}(t).\n\\end{equation}\nObserve that $A_{u-v}(t)$ is an operator on $\\mathcal{H}_u$, and by Proposition~\\ref{prop:commute}, $[A_{u-v}(t),\\nu_{u:w}] = 0$ for all $u$, $v$, and $w\\in V$. From Theorem~\\ref{thm:QBP}, we have for $t \\geq diameter(G)$\n\\begin{equation}\n[\\rho_{u:v},\\rho_{v:w}] = \n[ A_{u-v}(t) A_{v-u}(t) \\nu_{u:v}, A_{v-w}(t) A_{w-v}(t) \\nu_{v:w} ] = 0.\n\\end{equation}\n\\end{proof}\n\nCorollary~\\ref{cor:QBP1} shows that for 1-bifactor states on trees, QBP$^{(1)}$ enables an efficient evaluation of the one-vertex and two-vertex reduced density operators $\\rho_{u}$ for all $u \\in V$ and $\\rho_{u\\cup v}$ for all $(u,v) \\in E$. Can this result be generalized to arbitrary bifactor states? This question is of interest since, as we will detail in \\S\\ref{App:Stat}, the Gibbs states used in statistical physics are $\\infty$-bifactor states. However, it is known that approximating the ground state energy of a two-local Hamiltonian on a chain is QMA-complete \\cite{AGK07a,Ira07a}\\footnote{QMA stands for Quantum Merlin-Arthur and is the natural quantum generalization of the classical complexity class NP. So to the best of our knowledge, solving a QMA-complete problem would require an exponential amount of time even on a quantum computer.}. Knowledge of $\\rho_{u\\cup v}$ leads to an efficient evaluation of the energy. Therefore, without any independence assumptions, it is unlikely that an efficient QBP algorithm for $n$-Bifactor Networks will converge to the correct marginals for $n>1$. This contrasts with classical BP, which always converges to the exact solution on trees. However, \\S\\ref{Heur:Rep} gives a QBP algorithm that solves the inference problem for any $n$-bifactor state on a tree in a time that scales exponentially with $n$. \n\n\\subsection{Solving Inference Problems}\n\n\\label{QBP:Inf}\n\nWe close this section with a discussion of how the QBP algorithm can solve inference problems when local measurements are performed on a bifactor state. In other words, for an outcome of a local measurement on a subsystem $U$ described by a POVM element $E^{(j)}_U = \\bigotimes_{u \\in U} E_u^{(j)}$, we are interested in evaluating the marginal states $\\rho_{u|E^{(j)}_U}$ and $\\rho_{u\\cup v|E^{(j)}_U}$ conditioned on the outcome, where\n\\begin{align}\n\\rho_{u|E^{(j)}_U} & = \\frac 1 Y \\PTr{V - u}{(E^{(j)}_U)^{\\frac{1}{2}} \\rho_V (E^{(j)}_U)^{\\frac{1}{2}}} \\\\\n\\rho_{u\\cup v|E^{(j)}_U} & = \\frac 1 Y \\PTr{V - \\{u,v\\}}{(E^{(j)}_U)^{\\frac{1}{2}} \\rho_V (E^{(j)}_U)^{\\frac{1}{2}}},\n\\end{align}\nand $Y$ is a normalization factor.\nFor $u,v \\notin U$, this amounts to a local modification of the bifactor state that accounts for the action of the measurement, the QBP algorithm being otherwise unaltered. We focus on 1-Bifactor Networks and return to the general case at the end of this section.\n\n\\begin{The}\n\\label{thm:update}\nLet $(G,\\rho_V)$ be a 1-Bifactor Network with $G = (V,E)$ a tree. For $U \\subset V$, let $\\{E^{(j)}_U\\} = \\Big\\{\\bigotimes_{u \\in U} E_u^{(j)}\\Big\\}$ be a POVM on the subsystem $U$ and let $W = V-U$. Define $\\mu_u^{(j)} = \\mu_u \\ensuremath{\\star} E_u^{(j)} $ for $u\\in U$ and $\\mu_u^{(j)} = \\mu_u$ for $u \\in W$. Let $[b_{u}(t), b_{uv}(t) ] = \\mathrm{QBP}^{(1)}_t(\\mu_u^{(j)},\\nu_{u:v})$. 
Then for all $t \\geq diameter(G)$, $b_u(t) = \\rho_{u|E_U^{(j)}}$ for all $u \\in W$ and $b_{uv}(t) = \\rho_{u\\cup v|E_U^{(j)}}$ for all $(u,v) \\in E_W$.\n\\end{The} \n\n\\begin{proof}\nThe reduced state on $W$ conditioned on the measurement outcome $E_U^{(j)}$ is given by\n\\begin{align}\n\\rho_{W|E_U^{(j)}} & = \\frac 1Y \\PTr{U}{(E_U^{(j)})^{\\frac{1}{2}} \\rho_V (E_U^{(j)})^{\\frac{1}{2}}} \\\\\n&= \\frac 1Y \\prod_{\\substack{v \\in W \\\\ u\\in U}} \\prod_{(w,x) \\in E} \\mu_v^\\frac 12 \\PTr{U}{(E_u^{(j)})^{\\frac{1}{2}} \\mu_u^\\frac 12 \\nu_{w:x} \\mu_u^\\frac 12 (E_u^{(j)})^{\\frac{1}{2}}}\\mu_v^\\frac 12 \\\\\n&= \\frac 1Y \\prod_{\\substack{v \\in W \\\\ u\\in U}} \\prod_{(w,x) \\in E} \\mu_v^\\frac 12 \\PTr{U}{ \\nu_{w:x} \\mu_u^\\frac 12 E_u^{(j)} \\mu_u^\\frac 12 }\\mu_v^\\frac 12 \\\\\n&= \\frac 1Y \\prod_{\\substack{v \\in W \\\\ u\\in U}} \\prod_{(w,x) \\in E} \\Big(\\mu_v^{(j)}\\Big)^\\frac 12 \\PTr{U}{ \\Big(\\mu_u^{(j)}\\Big)^\\frac 12 \\nu_{w:x} \\Big(\\mu_u^{(j)}\\Big)^\\frac 12 }\\Big(\\mu_v^{(j)}\\Big)^\\frac 12 .\n\\end{align}\nThe result thus follows from Corollary~\\ref{cor:QBP1}.\n\\end{proof}\n\n\nThe result of Theorem~\\ref{thm:update} can easily be extended to compute the conditional marginal state $\\rho_{u|E^{(j)}_U}$ and $\\rho_{u\\cup v|E^{(j)}_U}$ for any $u$ and $v$, not just those in $W = V-U$. This is achieved by altering the beliefs as follows:\n\\begin{equation}\nb_u(t) = \\frac 1Z E_u^{(j)} \\ensuremath{\\star} \\mu_u \\ensuremath{\\star} \\prod_{v' \\in n(u)} m_{v' \\rightarrow u}(t)\n\\end{equation}\nfor $u \\in U$, \n\\begin{equation}\nb_{uv}(t) = \\frac 1Z E_{uv}^{(j)} \\ensuremath{\\star} (\\mu_u \\mu_v)\\ensuremath{\\star} \\Bigg[\\prod_{w \\in n(u)-v} m_{w \\rightarrow u}(t) \\prod_{w' \\in n(v)-u} m_{w' \\rightarrow v}(t) \\ensuremath{\\star} \\nu_{u:v} \\Bigg]\n\\end{equation}\nwith $E_{uv}^{(j)} = E_{u}^{(j)} \\otimes I_v$ when $u\\in U$ and $v \\in W$ and $E_{uv}^{(j)} = E_{u}^{(j)} \\otimes E_{v}^{(j)}$ when $u,v \\in U$. The proof is straightforward and we omit it.\n\nTheorem~\\ref{thm:update} shows how QBP leads to an efficient algorithm for solving inference problems on 1-bifactor states on trees with local measurements. This immediately implies an efficient algorithm for general $n$-bifactor states when $(G,\\rho_V)$ is a quantum Markov network. Indeed, Theorem~\\ref{thm:QBP} demonstrates that in that case the QBP$^{(n)}$ algorithm can be used to efficiently compute the marginal density operators $\\rho_{u\\cup v}$ for all $(u,v) \\in E$. From these, one can straightforwardly obtain the marginal operators $\\rho_u$ for all $u \\in V$ and mutual operators $\\rho_{u:v}$ for all $(u,v) \\in E$. Theorem~\\ref{The:mutual} states that $\\rho_V$ can be represented as a 1-bifactor state in terms of its marginal and mutual operators. The inference problem can then be solved using the QBP$^{(1)}$ algorithm as explained above. \n\n\\section{Heuristic Methods}\n\n\\label{Heur}\n\nThe previous section provided conditions under which QBP algorithms give exact solutions to inference problems on $n$-Bifactor Networks. Namely, the underlying graph must be a tree, and the state must be either a quantum Markov network or a 1-bifactor state. When these conditions are not met, QBP algorithms may still be used as heuristic methods to obtain approximate solutions to the inference problem, although in general these approximations will be uncontrolled. 
\n\nTo draw a parallel, classical Belief Propagation algorithms have found applications in numerous distinct scientific fields where they are sometimes known under different names: Gallager decoding, Viterbi's algorithm, sum-product, and iterative turbo decoding in information theory; the cavity method and the Bethe-Peierls approximation in statistical physics; and the junction-tree and Shafer-Shenoy algorithms in machine learning, to name a few. In many of these examples, BP algorithms exhibit good performance on graphs with loops, even though the algorithm does not converge to the exact solution on such graphs. In fact, ``Loopy Belief Propagation'' is often the best known heuristic method to find approximate solutions to hard problems. Important examples include the near-Shannon capacity achieving turbo-codes and low density parity check codes. On the other hand, there are known examples for which loopy BP fails to converge, and its general realm of applicability is not yet fully understood. \n\nAs in the classical case, one can expect loopy QBP to give reasonable approximations in some circumstances, for instance when the size of typical loops is very large. Intuitively, one expects a local algorithm to be relatively insensitive to the large scale structure of the underlying graph. However, quantum inference problems also pose a new challenge. Quite apart from issues regarding the graph's topology, an $n$-bifactor state with $n>1$ may not obey the independence conditions required to ensure the convergence of QBP. The goal of this section is to suggest three techniques that are expected to improve the performance of QBP in such circumstances. \n\n\\subsection{Coarse-graining}\n\n\\label{Heur:CG}\n\nBy definition, a quantum Markov network has the property that the correlations from one vertex to the rest of the graph are screened off by its neighbors. When this property fails, QBP will not in general produce the correct solution to an inference problem. Coarse graining is a simple way of modifying a graph in such a way that the state may be closer to forming a quantum Markov network with respect to the new graph than it was with respect to the original graph. \n\nA coarse graining of a graph $G = (V,E)$ is a graph $\\tilde G = (\\tilde V, \\tilde E)$, where $\\tilde V$ is a partition of $V$ into disjoint subsets, and $(U,W) \\in \\tilde{E}$ if there is an edge connecting a vertex in $U$ to a vertex in $W$ in $G$. The coarse-grainings that are of most interest are those that partition $V$ into connected sets of vertices (see Fig.~\\ref{CG} for an example). It is an elementary exercise to show that if $(G,\\rho_V)$ is an $n$-Bifactor Network, then $(\\tilde G, \\rho_{\\tilde{V}})$ is an $n$-Bifactor Network for any coarse graining $\\tilde G$. The intuition for why coarse graining might get us closer to a Markov network is that it effectively ``thickens'' the neighborhood of each vertex, which may then be more efficient at screening off correlations. This intuition is illustrated in Fig.~\\ref{CG} and is supported by the fact that Markov networks are fixed points of the coarse graining procedure, i.e. if $\\tilde G$ is a coarse-graining of $G$, then $(\\tilde G, \\rho_{\\tilde{V}})$ is a quantum Markov network whenever $(G,\\rho_V)$ is a Markov network.\n\n\\begin{figure}[h!]\n\\center \\includegraphics[height=1.6in]{CG}\n\\caption{Example of a coarse-grained graph. Figure a) shows in light gray the neighborhood of the darkened vertex in the original graph. 
In b) the dashed ellipses represent coarse-grained vertices. The neighborhood of the darkened coarse-grained vertex is represented by the light gray set.}\n\\label{CG}\n\\end{figure}\n\nAlso note that every graph $G$ can be turned into a tree by a suitable coarse graining. When the obtained Bifactor Network is a Markov Network or when $n=1$, QBP is then guaranteed to converge to the exact solution. The Hilbert space dimension at the vertices of the coarse-grained graph is bounded by an exponential in the tree-width of $G$, so this technique is efficient only for graph of $O(\\log(N))$ tree-width. \n\n\\subsection{Sliding window QBP}\n\n\\label{Heur:SW}\n\nSliding window QBP is similar in spirit to coarse-graining but is mainly suitable for chains (although the idea is easily generalized to arbitrary trees of low degree). Consider an $n$-bifactor state $\\rho_V$ on a one dimensional lattice $G = (V,E)$ with $V = \\{v_1,v_2,\\ldots,v_N\\}$ and $E = \\{(v_j,v_{j+1})\\}_{j=1,\\ldots ,N-1}$. When $(G,\\rho_V)$ is not a quantum Markov Network, the diagram of eq.~\\eqref{eq:commute_diag} will generally fail to be commutative. The commutativity of this diagram is essential for the success of QBP, as for instance it implies\n\\begin{align}\n\\PTr{v_1}{(\\mu_{v_1} \\otimes \\mu_{v_2}) \\pns{n} (\\nu_{v_1:v_2} \\nu_{v_2:v_3})} &= \\mu_{v_2} \\pns{n}\\left[ \\PTr{v_1}{\\mu_{v_1} \\pns{n} \\nu_{v_1:v_2}} \\pns{n} \\nu_{v_2:v_3}\\right] \\\\\n&= \\mu_{v_2} \\pns{n} \\big[ m_{v_1\\rightarrow v_2} \\pns{n} \\nu_{v_2:v_3} \\big].\n\\end{align}\nThus, the Hilbert space of vertex $v_1$ is traced out before operators on vertex $v_3$ are brought into the picture. This enables the algorithm to progress along the lattice by evaluating a cumulative operator of constant dimension (i.e. the messages), much in the spirit of the transfer matrix of statistical physics. Without the Markov property, this is generally not possible. \n\nHowever, when vertices separated by a distance $\\ell$ are conditionally independent given the vertices between them, sliding window QBP can be operated efficiently to produce the exact solution of the inference problem. This works by defining new message operators\n\\begin{equation}\n\\tilde m_{v_{j +\\ell-1}\\rightarrow v_{j+\\ell}} = \\PTr{\\{v_1,v_2, \\ldots ,v_j\\}}{\\left[\\bigotimes_{k = 1}^{\\ell + j-1} \\mu_{v_k}\\right] \n\\pns{n} \\left[\\prod_{k = 1}^{\\ell+j-1} \\nu_{v_k:v_{k+1}}\\right]}\n\\end{equation}\nwhich act on $\\mathcal{H}_{v_{j+1}} \\otimes \\mathcal{H}_{v_{j+2}} \\otimes \\ldots \\mathcal{H}_{v_{j+\\ell}}$. When\n\\begin{equation}\nS(v_j:v_{j+\\ell}|\\{v_{j+1}, v_{j+2}, \\ldots , v_{j+\\ell-1}\\}) = 0\n\\label{correlation-length}\n\\end{equation}\nfor all $v_j \\in V$, we have the equality\n\\begin{equation}\n\\tilde m_{v_{j+\\ell} \\rightarrow v_{j+\\ell+1}} = \\PTr{v_{j+1}}{\\mu_{v_{j+\\ell}}\\pns{n}\\big[\\tilde m_{v_{j+\\ell-1}\\rightarrow v_{j+\\ell}} \\pns{n} \\nu_{v_{j+\\ell}:v_{j+\\ell+1}}\\big]},\n\\end{equation}\nso inference problems can be solved exactly with operators whose dimension grow exponentially with the $\\ell$ rather than the lattice size $N$. In particular, this method can be applied to spin-systems that have a finite correlation length because then eq.~\\eqref{correlation-length} can be expected to hold approximately for some finite $\\ell$.\n\n\\subsection{Replicas}\n\n\\label{Heur:Rep}\n\nThe method of replicas maps $n$-bifactor states to 1-bifactor states on which QBP$^{(1)}$ can be implemented without concerns for independence. 
This is achieved by replacing the systems $v$ on each vertex of the graph $G$ by $n$ replicas, so that the Hilbert space associated to vertex $v$ becomes $\\mathcal{H}_v^{\\otimes n}$. As a consequence, the algorithm suffers an overhead exponential in $n$. The name ``replica\" is borrowed from the analogous technique used in the study of classical quenched disordered systems. The validity of this technique is based on the following observation. \n\n\\begin{Prop}\\label{prop:product}\nLet $\\{\\mathcal{H}_j\\}_{j=1,\\ldots,n}$ be isomorphic Hilbert spaces. Let $T^{(n)}$ be the operator that cyclicly permutes these $n$ systems. Let $A_1$ be an arbitrary operator on $\\mathcal{H}_1$, and define $A_j = (T^{(n)})^{j-1} A_1 (T^{(n)\\dagger})^{j-1}$ to be the corresponding operators on $\\mathcal{H}_j$. Then for any set of operators $\\{A_1^{(k)}\\}$ on $\\mathcal{H}_1$, the following equality holds\n\\begin{equation}\nA_1^{(1)}A_1^{(2)}\\ldots A_1^{(n)} = \\PTr{2,3,\\ldots, n}{[A_1^{(1)}\\otimes A_2^{(2)}\\otimes\\ldots\\otimes A_n^{(n)}]T^{(n)}}.\n\\end{equation}\n\\end{Prop}\n\nWe are now in a position to formalize the replica method. \n\n\\begin{The} Let $(G,\\rho_V)$ be an $n$-Bifactor Network, with operators $\\mu_u$ and $\\nu_{u:v}$. Then, $\\rho_V$ is locally isomorphic to a 1-bifactor state with Hilbert spaces comprising $n$ replicas of the original system $\\mathcal{H}_u' = \\mathcal{H}_{u_1} \\otimes \\mathcal{H}_{u_2} \\otimes \\ldots \\otimes \\mathcal{H}_{u_n}$ for all $u \\in V$. The partial isomorphism at vertex $u$ is given by $\\PTr{u_2,u_3,\\ldots,u_n}{(T^{(n)\\dagger}_u)^\\frac 12 \\cdot (T^{(n)}_u)^\\frac 12}$. More precisely, we claim that\n\\begin{equation}\n\\rho_V = \\PTr{\\{u_2,u_3,\\ldots, u_n\\}_{u \\in V}}{U^\\dagger \\left(\\bigotimes_{u\\in V} \\tilde \\mu_u\\right) \\ensuremath{\\star} \\left(\\prod_{(v,w) \\in E} \\tilde\\nu_{v:w}\\right) U}\n\\end{equation}\nwhere\n\\begin{align}\n\\tilde \\mu_u &= \\left(\\mu_u^{\\frac{1}{n}} \\right)^{\\otimes n} \\left(T_u^{(n)}\\right) \\\\\n\\tilde \\nu_{u:v} &= \\left(\\nu_{u:v}^{\\frac 1n}\\right)^{\\otimes n} \\\\\nU &= \\bigotimes_{u \\in V} (T^{(n)}_u)^\\frac 12\n\\end{align}\nare operators on $\\mathcal{H}_u'$\n\\end{The}\n\n\\begin{proof}\nFirst, note that $T_u^{(n)}$ commutes with $\\left(\\mu_u^{\\frac{1}{n}} \\right)^{\\otimes n}$, so $\\tilde \\mu_u^\\frac 12 = \\left(\\mu_u^{\\frac{1}{2n}} \\right)^{\\otimes n} \\left(T_u^{(n)}\\right)^\\frac 12 = \\left(T_u^{(n)}\\right)^\\frac 12 \\left(\\mu_u^{\\frac{1}{2n}} \\right)^{\\otimes n}$. 
Thus\n\\begin{align}\n& \\PTr{\\{u_2,u_3,\\ldots, u_n\\}_{u \\in V}}{U^\\dagger \\left(\\bigotimes_{u\\in V} \\tilde \\mu_u\\right) \\ensuremath{\\star} \\left(\\prod_{(v,w) \\in E} \\tilde\\nu_{v:w}\\right)U} \\\\\n&= \\PTr{\\{u_2,u_3,\\ldots, u_n\\}_{u \\in V}}{U^\\dagger \\left(\\bigotimes_{u\\in V} T_u^{(n)} \\left(\\mu_u^{\\frac 1n} \\right)^{\\otimes n}\\right) \\ensuremath{\\star} \\left(\\prod_{(v,w) \\in E} \\left(\\nu_{u:v}^{\\frac 1n}\\right)^{\\otimes n} \\right)U} \\\\\n&= \\PTr{\\{u_2,u_3,\\ldots, u_n\\}_{u \\in V}}{\\left(\\bigotimes_{u\\in V} \\left(\\mu_u^{\\frac 1n} \\right)^{\\otimes n}\\right) \\ensuremath{\\star} \\left(\\prod_{(v,w) \\in E} \\left(\\nu_{u:v}^{\\frac 1n}\\right)^{\\otimes n} \\right) \\bigotimes_{u \\in V} T_u^{(n)} } \\\\\n&= \\PTr{\\{u_2,u_3,\\ldots, u_n\\}_{u \\in V}}{\\left[\\left(\\bigotimes_{u\\in V} \\mu_u^{\\frac 1n} \\right) \\ensuremath{\\star} \\left(\\prod_{(v,w) \\in E} \\nu_{u:v}^{\\frac 1n} \\right)\\right]^{\\otimes n} \\bigotimes_{u \\in V} T_u^{(n)} } \\\\\n&= \\left[\\left(\\bigotimes_{u\\in V} \\mu_u^{\\frac 1n} \\right) \\ensuremath{\\star} \\left(\\prod_{(v,w) \\in E} \\nu_{u:v}^{\\frac 1n} \\right)\\right]^n = \\rho_V\n\\end{align}\nwhere we used Proposition~\\ref{prop:product} to obtain the last line. \n\\end{proof}\n\nSince the dimension of the Hilbert at each vertex grows exponentially with $n$, the QBP$^{(1)}$ algorithm used to solve the corresponding inference problem suffers an exponential overhead. One can make a replica symmetry ansatz, assuming that the state is symmetric under exchange of replica systems at any given vertex. Since the symmetric subspace of $\\mathcal{H}_v^{\\otimes n}$ grows polynomially\\footnote{More precisely, it grows as $\\binom{n+d-1}{n} \\approx n^{d-1}$.} with $n$, QBP algorithm can be executed efficiently. The validity of this ansatz cannot be verified in general, but it may serve as a good heuristic method. \n\n\n\\section{Applications}\n\n\\label{App}\n\nThis section explains in some detail how QBP can be used as a heuristic algorithm to find approximate solutions to important problems in quantum error correction and the simulation of many-body quantum systems. The focus will be on the reduction of well established problems to inference problems on $n$-Bifactor Networks. One can make use of the techniques discussed in the previous section whenever the resulting Graphical Model does not meet the requirements to ensure convergence of QBP, or when these conditions cannot be verified efficiently.\n\n\\subsection{Quantum Error Correction}\n\n\\label{App:QEC}\n\nMaximum-likelihood decoding is an important task in quantum error correction (QEC). As in classical error correction, this problem reduces to the evaluation of marginals on a factor graph, also called Tanner graph in this context. More precisely, for independent error models, the quantum channel conditioned on error syndrome is a 1-bifactor state. As a consequence, qubit-wise maximum likelihood decoding of a QEC stabilizer code reduces to an inference problem on a 1-Bifactor Network. Thus, there is no independence condition that needs to be verified, although the graph will generally contain loops. Before demonstrating this reduction, a brief summary of stabilizer QEC is in order, see \\cite{Got97a} for more details. For details on the use of Belief Propagation for the decoding of classical error correction codes, the reader is referred to the text of MacKay \\cite{Mac03a} and forthcoming book of Richardson and Urbanke \\cite{RU05a}. 
\n\nConsider a collection of $N$ two-dimensional quantum systems (qubits) $V = \\{u\\}_{u=1,\\ldots,N}$ with $\\mathcal{H}_u = \\mathbb{C}^2$. A QEC code is a subspace $\\mathcal{C} \\in \\mathcal{H}_V$ that is the $+1$ eigensubspace of a collection of commuting operators $S_j$, $j=1,\\ldots N-K$, called stabilizer generators. Each stabilizer generator is a tensor product of Pauli operators on a subset $U_j$ of $V$:\n\\begin{equation}\nS_j = \\bigotimes_{u \\in U_j} \\sigma^{\\alpha^u_j}_u\n\\end{equation}\nwhere $\\alpha_j^u \\in \\{x,y,z\\}$. When the stabilizer generators are multiplicatively independent, the code encodes $K$ qubits, i.e. $\\mathcal{C}$ has dimension $2^K$. For each $j=1,\\ldots N-K$, define the two projectors $P_j^\\pm = (I \\pm S_j)\/2$. The code space is therefore defined as $\\mathcal C = (\\prod_j P_j^+) \\mathcal H_V$. \n\nError correction consists of three steps. First, the system $V$ is prepared in a code state $\\rho_V$ supported on $\\mathcal C$, in such a way that $P_j^+ \\rho_V P_j^+ = \\rho_V$ for all $j$. The state is then subjected to the channel $\\rho_V \\rightarrow \\mathcal E_{V|V}(\\rho_V)$. Second, each stabilizer generator $S_j$ is measured, yielding an outcome $s_j = \\pm$ with probability $\\Tr{P^\\pm_j \\mathcal{E}_{V|V}(\\rho_V)}$. The collection of all $N-K$ measurement outcomes $s_j$, called the error syndrome, is denoted ${\\bf s} = (s_1,s_2,\\ldots s_{N-K})\\in \\{-,+\\}^{N-K}$. Third, the channel $\\mathcal E_{V|V}$ is updated conditioned the error syndrome $\\bf s$. Based on this updated channel, the optimal recovery is computed and implemented. \n\nThe computationally difficult step in the above protocol consists in conditioning the channel on the error syndrome. To understand this problem, it is useful to express the channel in a Kraus form $\\mathcal{E}_{V|V}(\\rho_V) = \\sum_k M^{(k)}_{V|V} \\rho_V M^{(k)\\dagger}_{V|V}$ where $\\{M^{(k)}\\}$ are operators on $\\mathcal{H}_V$. When $s_j = +$, we learn that the error that has affected the state commutes with $S_j$, while $s_j = -$ indicates that the error anti-commutes with $S_j$. To update the channel conditioned on the error syndrome $s_j = +$ say, we first decompose each Kraus operator $M^{(k)}_{V|V}$ as the sum of an operator that commutes with $S_j$ and an operator that does not commute with $S_j$: $M^{(k)}_{V|V} = M^{(k)+}_{V|V} + M^{(k)\\prime}_{V|V}$ where $ M^{(k)+}_{V|V} = P_j^+ M^{(k)}_{V|V} P_j^+$ and $ M^{(k)\\prime}_{V|V} = M^{(k)}_{V|V} - M^{(k)+}_{V|V}$. The updated channel is obtained by throwing away the primed component $ M^{(k)\\prime}_{V|V}$ of each Kraus operator, and renormalizing. \n\nIn what follows, we demonstrate how the conditional channel can be expressed as a factor graph. This is most easily done using the Jamio\\l kowski representation of quantum channels. For each quantum system $v$, let $R_v$ denote a reference for $v$, with Hilbert space $\\mathcal{H}_{R_v} \\simeq \\mathcal H_v$. Define the maximally entangled state between system $v$ and its reference by $\\Ket\\Phi_{vR_v} = \\frac{1}{\\sqrt d} \\sum_j \\Ket{j}_v\\Ket{j}_{R_v}$. Then, the Jamio\\l kowski representation of a channel $\\mathcal E_{V|V}$ is a density operator $\\rho_{\\overline V}$ on $\\mathcal H_{\\overline V} = \\mathcal H_V \\otimes \\mathcal H_{R_V}$ given by $\\rho_{\\overline V} = (\\mathcal E_{V|V} \\otimes \\mathcal I_{R_V|R_V})(\\kb{\\Phi}{\\Phi}_{VR_V})$, where $\\mathcal I$ denotes the identity channel. 
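\n\nFor concreteness, the following minimal Python sketch (ours; the single-qubit depolarizing channel with error probability $p$ is an illustrative choice, not a construction taken from the text) computes the Jamio\\l kowski matrix of a single-qubit channel from its Kraus operators.\n\\begin{verbatim}\nimport numpy as np\n\nI2 = np.eye(2, dtype=complex)\nsx = np.array([[0, 1], [1, 0]], dtype=complex)\nsy = np.array([[0, -1j], [1j, 0]], dtype=complex)\nsz = np.array([[1, 0], [0, -1]], dtype=complex)\n\n# Maximally entangled state |Phi> of a qubit and its reference\nphi = np.zeros(4, dtype=complex)\nphi[0] = phi[3] = 1 / np.sqrt(2)\nphi_proj = np.outer(phi, phi.conj())\n\ndef jamiolkowski(kraus_ops):\n    # (E (x) Id)(|Phi><Phi|): the channel acts on the system factor only\n    rho = np.zeros((4, 4), dtype=complex)\n    for k in kraus_ops:\n        kk = np.kron(k, I2)\n        rho += kk @ phi_proj @ kk.conj().T\n    return rho\n\np = 0.1  # depolarizing error probability (illustrative value)\nkraus = [np.sqrt(1 - p) * I2] + [np.sqrt(p / 3) * s for s in (sx, sy, sz)]\nrho_bar = jamiolkowski(kraus)\nassert np.isclose(np.trace(rho_bar), 1.0)\n\\end{verbatim}\n\n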
For independent error models considered here, $\\rho_{\\overline V} = \\bigotimes_{u \\in V} \\rho_{\\overline u}$.\n\nFor each stabilizer generator $S_j$, denote $\\overline{S}_j = \\bigotimes_{u \\in U_j} \\sigma^{\\alpha_j^u}_u \\otimes \\sigma^{\\alpha_j^u}_{R_u}$, and construct the associated projectors $\\overline{P}_j^\\pm = (I \\pm \\overline{S}_j)\/2$. An important property of these operator is that they fix the maximally entangled state $\\overline{S}_j \\Ket{\\Phi}_{VR_V} = \\overline{P}_j^+ \\Ket{\\Phi}_{VR_V} = \\Ket{\\Phi}_{VR_V}$. Let $E$ be an operator on $V$. If $E$ commutes with $S_j$, we have $\\overline{P}_j^+ (E\\otimes I_{R_V}) \\Ket{\\Phi}_{VR_V} = (E\\otimes I_{R_V}) \\Ket{\\Phi}_{VR_V}$ and $\\overline{P}_j^- (E\\otimes I_{R_V}) \\Ket{\\Phi}_{VR_V} = 0$, while if $E$ anti-commutes with $S_j$, the same identities hold with $\\overline P_j^+$ and $\\overline P_j^-$ exchanged. It follows from this observation that conditioned on the error syndrome $\\bf s$, the channel is described by the Jamio\\l kowski matrix\n\\begin{equation}\n\\rho_{\\overline V|{\\bf s}} = \\frac 1Z \\prod_{j} \\overline P_j^{s_j} \\ensuremath{\\star} \\bigotimes_{v \\in V} \\rho_{\\overline v}, \n\\end{equation}\nthat is a quantum factor graph. \n\nThere are a number of relevant quantities that can be evaluated from this factor graph. For instance, one can efficiently evaluate the conditional channel on any constant size set of qubits $W \\subset V$ vial partial trace. This is useful in iterative decoding schemes such as those used for quantum turbo-codes \\cite{OPT07a} and low density parity check codes \\cite{COT05a}. In those cases, the conditional channel on $W$ can only be evaluated approximately since it requires loopy QBP. The factor graph also enables exact evaluation of the logical error in a concatenated block coding scheme \\cite{Pou06b} such as used in fault-tolerant protocols. \n\n\\subsection{Simulation of Many-Body Quantum Systems}\n\n\\label{App:Stat}\n\nIn statistical physics, the state of a many-body quantum system $V$ is a Gibbs state $ \\rho_V = \\frac 1Z \\exp(-\\beta H)$ for some Hamiltonian $H$, where $\\beta = 1\/T$ is the inverse temperature. Typically, $H$ is the sum of single and two-body interactions $H = \\sum_{u \\in V} H_u + \\sum_{(u,w) \\in E} H_{uv}$ on some graph $G = (V,E)$. Understanding the correlations present in these states is a great challenge in theoretical physics. In this section, we describe how QBP can serve as an heuristic method to accomplish this task approximately. For an account of the use of Belief Propagation in classical statistical mechanical systems, we refer the reader to the text of M\\'ezard and Montanari \\cite{MM07a}.\n\nDefining $\\mu_u = \\exp(-\\beta H_u)$ and $\\nu_{v:w} = \\exp(-\\beta H_{vw})$ gives an expression for $\\rho_V$ of the form of eq.~\\eqref{Graph:QGSEInf}:\n\\begin{equation}\n\\rho_V = \\left(\\bigotimes_{v \\in V} \\mu_v\\right) \\odot \\left(\\bigodot_{(v,w) \\in E} \\nu_{v:w}\\right)\n\\end{equation}\nThus, $\\rho_V$ is an $\\infty$-bifactor state. As mentioned in \\S\\ref{QBP}, a QBP$^{(\\infty)}$ algorithm can easily be formulated for this type of bifactor state, and still converge to the exact solutions of the corresponding inference problem when $\\rho_V$ is a quantum Markov network and $G$ is a tree. This requires replacing all matrix products $\\prod$ by the commutative product $\\odot$ in the defining equations of QBP$^{(\\infty)}$ eqs.~(\\ref{message}-\\ref{belief2}). 
The proof of the convergence Theorem~\\ref{thm:QBP} under these more general conditions follows essentially the same reasoning. \n\nTo obtain a bifactor state that satisfies the commutation condition $[\\nu_{u:v},\\nu_{w:x}]=0$, it is possible to coarse-grain $G$ in such a way that the resulting interactions between coarse-grained neighbors commute. Consider for instance a one dimensional chain $G = (V,E)$ with $V = \\{u\\}_{u=1,\\ldots,N}$ and $E = \\{(u,u+1)\\}_{u=1,\\ldots,N-1}$. We can construct a coarse-grained graph $\\tilde G$ by identifying all vertices $2 u-1$ and $2 u$ for $u=1,\\ldots,\\lfloor\\frac N 2\\rfloor$. The state $\\rho_V$ is then an $\\infty$-bifactor state on $\\tilde G$, with operators\n\\begin{align}\n\\tilde\\mu_u &= \\mu_{2u-1} \\odot \\mu_{2u} \\odot \\nu_{2u-1:2u} \\\\\n\\tilde\\nu_{u:u+1} &= \\nu_{2u:2u+1},\n\\end{align}\nsatisfying $[\\tilde{\\nu}_{u:u+1},\\tilde{\\nu}_{v:v+1}] = 0$ for all $u$ and $v$.\nThus, $\\infty$-bifactor states are commonplace in quantum many-body physics. Unfortunately, the convergence of the QBP algorithm in this case requires the state to be a quantum Markov network, which cannot be tested directly in general. As we will now explain, it is often possible to reasonably approximate a Gibbs state by an $n$-bifactor state with finite $n$, and sometimes even with $n=1$.\n\nA simple way to obtain an $n$-bifactor state is to approximate $\\odot$ by $\\pns{n}$ for some large value of $n$. In the context of many-body physics, this is called a Trotter-Suzuki decomposition of the Gibbs state, and becomes more accurate as the ratio $\\beta/n$ decreases. The QBP$^{(n)}$ algorithm can then be operated on this $n$-bifactor state, but its convergence again requires some independence condition that cannot be verified systematically. Alternatively, one can use the replica method described in \\S\\ref{Heur:Rep} and solve the inference problem exactly with QBP$^{(1)}$, but with an increase in complexity exponential in $n$. The replica method is then reminiscent of the well known correspondence between quantum statistical mechanics in $d$ dimensions and classical statistical mechanics in $d+1$ dimensions, where the extra dimension represents inverse temperature. \n\nThe 1-bifactor states also capture the correlations of some non-trivial quantum many-body systems. {\\em Valence bond solid} (VBS) states were introduced in Refs.~\\cite{AKLT87a,AKLT88a} as exact ground states (i.e. $T=0$ Gibbs states) of spin systems with interesting properties. Recent work has generalized these constructions to {\\em matrix product states} (MPS) in one dimension \\cite{FNW92a,Vid04a,Vid06a}, and {\\em projected entangled-pair states} (PEPS) for higher dimensions \\cite{VC04a,SDV06a}. These form an important class of states for the description of quantum many-body systems. For instance, {\\em density matrix renormalization group} (DMRG) \\cite{Whi92a} --- one of the most successful methods for the numerical study of spin chains --- is now understood as a variational method over MPS \\cite{OR95a,DMNS98a,VPC04b}. All these states are instances of 1-bifactor states.\n\n\\begin{figure}[h!]\n\\center \\includegraphics[height=1.6in]{VBS}\n\\caption{Projected entangled pair state on a two-dimensional square lattice. The vertices are indicated by dashed circles. Each $\\bullet$---$\\bullet$ represents a maximally entangled state of dimension $D$ shared between neighboring vertices. A partial isometry $A_u: (\\mathbb{C}^D)^{\\otimes c_u} \\rightarrow \\mathbb{C}^d$ is applied at each vertex, where $c_u$ is the degree of vertex $u$. 
}\n\\label{VBS}\n\\end{figure}\n\nFor the sake of simplicity, we will demonstrate this claim for one-dimensional MPS, but the same argument holds for higher dimensions. The MPS $\\Ket\\Psi$ is a pure state of a collection of $N$ $d$-dimensional quantum systems arranged on a one-dimensional lattice. Each vertex $u$ is assigned two ``virtual particles\" $L_u$ and $R_u$, where $L$ and $R$ stand for left and right (see Fig.~\\ref{VBS} for an illustration of this construction in two dimensions). Each of these particles is assigned a Hilbert space $\\mathcal{H}_{L_u} = \\mathcal{H}_{R_u} = \\mathbb{C}^D$. Initially, the right particle of vertex $u$ is in a maximally entangled state with the left particle of vertex $u+1$; $\\Ket{\\Phi}_{R_u\\cup L_{u+1}} = \\frac{1}{\\sqrt D} \\sum_{\\alpha=1}^D \\Ket{\\alpha}_{R_u}\\Ket{\\alpha}_{L_{u+1}}$ where $\\Ket\\alpha$ are orthogonal basis vectors for $\\mathbb{C}^D$. (The lattice can be closed to form a circle, in which case we identify $N+1 = 1$.) The initial state is therefore $\\Ket{\\Phi_0} = \\bigotimes_u \\Ket{\\Phi}_{R_u\\cup L_{u+1}}$. \n\nTo obtain the MPS, apply an operator $A_u: \\mathcal{H}_{L_u}\\otimes \\mathcal{H}_{R_u} \\rightarrow \\mathbb{C}^d$\n\\begin{equation}\nA_u = \\sum_{j=1}^d \\sum_{\\alpha,\\beta = 1}^D A_u^{j,\\alpha,\\beta} \\kb{j}{\\alpha,\\beta}\n\\end{equation}\nto each vertex of the lattice. The vectors $\\Ket j$ form an orthogonal basis for $\\mathbb{C}^d$. The resulting state is\n\\begin{equation}\n\\Ket{\\Psi} = \\bigotimes_{u=1}^N A_u \\Ket{\\Phi_0} \n\\propto \\sum_{j_1,j_2,\\ldots,j_N=1}^d \\Tr{B_1^{j_1}B_2^{j_2}\\ldots B_N^{j_N}} \\Ket{j_1,j_2,\\ldots,j_N}\n\\label{eq:MPS}\n\\end{equation}\nwhere the matrices $B_u^j$ are the submatrices of $A_u$ with matrix elements $(B_u^j)_{(\\alpha,\\beta)} = A_u^{j,\\alpha,\\beta}$. \n\n\nFor the corresponding 1-bifactor state, the underlying graph $G = (V,E)$ is also a one dimensional lattice $V = \\{1,2,\\ldots,N\\}$ and $E = \\{(1,2),(2,3),\\ldots ,(N-1,N)\\}$. The Hilbert space associated to vertex $u$ is $\\mathcal{H}_u = \\mathbb{C}^D \\otimes\\mathbb{C}^D$. As above, it is convenient to imagine that each vertex $u$ is composed of two $D$-dimensional subsystems $L_u$ and $R_u$. Then, up to a local isometry, the MPS of eq.~\\eqref{eq:MPS} can be expressed as a 1-bifactor state eq.~\\eqref{Graph:QGSE} with\n\\begin{equation}\n\\mu_u = A_u^\\dagger A_u \\quad \\mathrm{and}\\quad \\nu_{u:v} = \\kb{\\Phi}{\\Phi}_{R_u\\cup L_v}.\n\\end{equation}\nMoreover, the operators $\\nu_{u:v}$ mutually commute. To see the relation with eq.~\\eqref{eq:MPS}, note that the operators $A_u$ admit the polar decomposition $A_u = U_u \\sqrt{A_u^\\dagger A_u} = U_u \\mu_u^\\frac 12$. \\footnote{Note that $\\mu_u$ has rank $\\leq d$. 
This can be seen straightforwardly by writing $(\\mu_u)_{(\\alpha,\\beta),(\\gamma,\\delta)} = \\sum_{j=1}^d A_u^{*j,\\alpha,\\beta}A_u^{j,\\gamma,\\delta}$, i.e. $\\mu_u = \\sum_{j=1}^d \\kb{h_u^j}{h_u^j}$ where $|h_u^j\\rangle = \\sum_{\\alpha,\\beta} A_u^{j,\\alpha,\\beta} \\Ket{\\alpha,\\beta} \\in \\mathcal{H}_u$.} The matrix $U_u$ is a partial isometry $\\mathcal{H}_u \\rightarrow \\mathbb{C}^d$ and\n\\begin{align}\n\\kb\\Psi\\Psi &= \\frac 1Z \\left(\\prod_{u \\in V} A_u\\right) \\kb{\\Phi_0}{\\Phi_0}\\left(\\prod_{u \\in V} A^\\dagger_u\\right) \\\\\n&= \\frac 1Z \\left(\\prod_{u \\in V} U_u\\mu_u^\\frac 12 \\right) \\left(\\prod_{(v,w) \\in E} \\nu_{v:w}\\right) \\left(\\prod_{u \\in V} \\mu_u^\\frac 12 U^\\dagger_u \\right) \\\\\n&= \\frac 1Z \\left(\\prod_{u \\in V} U_u \\right) \\left(\\bigotimes_{u \\in V} \\mu_u \\right) \\ensuremath{\\star} \\left( \\prod_{(v,w) \\in E} \\nu_{v:w}\\right) \\left(\\prod_{u \\in V} U^\\dagger_u \\right)\n\\end{align}\nas claimed. \n\nBifactor states are thus relevant to the description of quantum many-body systems. QBP can sometimes be used to efficiently compute correlation functions, but in general for spatial dimension larger than one, its convergence is not guaranteed. This is mainly due to the presence of small loops in the underlying graph. Partial solutions have been proposed to overcome this difficulty \\cite{VC04a}, and it is conceivable that techniques from loopy Belief Propagation and its generalizations \\cite{YFW02a} will improve these algorithms. As in the classical case, however, QBP may be more appropriate for the study of quantum systems on irregular sparse graphs, such as those encountered in classical spin glasses. \n\nFinally, it should be noted that the Markov conditions required to certify the convergence of QBP --- or the associated coarse-grained Markov conditions as explained in the previous section --- are weaker than those typically studied in statistical physics, namely the vanishing of connected correlation functions beyond some length scale. For pure quantum states, the two notions coincide and are equivalent to the absence of long-range entanglement. At finite temperature, however, the state is mixed and the vanishing of the mutual information between vertices $u$ and $u+\\ell$ conditioned on vertices $u+1,\\ldots,u+\\ell-1$, eq.~\\eqref{correlation-length}, does not imply the absence of connected correlations $\\langle A_uA_{u+\\ell}\\rangle = \\Tr{\\rho_V A_u A_{u+\\ell}} - \\Tr{\\rho_V A_u}\\Tr{\\rho_VA_{u+\\ell}}$.\n\n\\section{Related Work}\n\n\\label{Relate}\n\nIn this section, our approach to quantum Graphical Models and Belief Propagation is compared to other proposals that have appeared in the literature. Firstly, Tucci has developed an approach to quantum Bayesian Networks \\cite{Tuc95a}, Markov Networks \\cite{Tuc07a} and Belief Propagation \\cite{Tuc98a} based on a different analogy between quantum theory and classical probability, namely the idea that probabilities should be replaced by complex valued amplitudes. Tucci's models require that these amplitudes should factorize according to conditions similar to those used in classical Graphical Models. One disadvantage of this is that the definition requires a fixed basis to be chosen for the system at each vertex of the graph, and the factorization condition for Bayesian Networks is not preserved under changes of this basis. In contrast, our definition of quantum conditional independence is based on an explicitly basis independent quantity, so it does not have this problem. 
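Concretely, the basis-independent quantity in question is the conditional mutual information: writing $S(\\cdot)$ for the von Neumann entropy, for a tripartite density operator $\\rho_{ABC}$ it reads\n\\begin{equation}\nI(A:C|B)=S(\\rho_{AB})+S(\\rho_{BC})-S(\\rho_{B})-S(\\rho_{ABC}),\n\\end{equation}\nan expression built entirely from entropies of reduced density operators, and hence independent of any choice of local basis. 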
Another difficulty with using amplitudes is that they are only well-defined for pure states, so that mixed states have to be represented as purifications on larger networks. In our approach, density operators are taken as primary, so mixed states can be represented without purification. On the other hand, Tucci's definitions can easily accommodate unitary time evolution, whereas we do not have a general treatment of dynamics in our approach at the present time. A related definition of quantum Markov Networks, also based on amplitudes but without a development of the corresponding Belief Propagation algorithm, has been proposed by La Mura and Swiatczak \\cite{LMS07a}, to which similar comments apply.\n\nThere has also been work on Quantum Markov networks within the quantum probability literature \\cite{Lei01a,AF03a,AF03b}, although Belief Propagation has not been investigated in this literature. This is closer to the spirit of the present work, in the sense that it is based on the generalization of classical probability to a noncommutative, operator-valued probability theory. These works are primarily concerned with defining the Markov condition in such a way that it can be applied to systems with an infinite number of degrees of freedom, and hence an operator algebraic formalism is used. This is important for applications to statistical physics because the thermodynamic limit can be formally defined as the limit of an infinite number of systems, but it is not so important for numerical simulations, since these necessarily operate with a finite number of discretized degrees of freedom. Also, conditional independence is defined there in a different way, via quantum conditional expectations, rather than via the approach based on conditional mutual information and conditional density operators used in the present work. Nevertheless, it seems likely that there are connections to our approach that should be investigated in future work. \n\nLastly, during the final stage of preparation of this manuscript, two related papers have appeared on the physics arXiv. An article by Laumann, Scardicchio and Sondhi \\cite{LSS07a} used a QBP-like algorithm to solve quantum models on sparse graphs. Hastings \\cite{Has07b} proposed a QBP algorithm for the simulation of quantum many-body systems based on ideas similar to the ones presented here. The connection between the two approaches, and in particular the application of the Lieb-Robinson bound \\cite{LR72a} to conditional mutual information, is worthy of further investigation. 
That all classical Bifactor Networks are Markov Networks is the Hammersley-Clifford Theorem \\ref{thm:HC}, and convergence of Belief Propagation on trees follows from Theorem \\ref{thm:QBP}. }\n\\label{fig:worldview}\n\\end{figure}\n\nIn this paper, we have presented quantum Graphical Models and Belief Propagation based on the idea that quantum theory is a noncommutative, operator-valued, generalization of probability theory. Our main results are summarized in Fig. \\ref{fig:worldview}. We expect these methods to have significant applications in quantum error correction and the simulation of many-body quantum systems. We are currently in the process of implementing these algorithms numerically in both of these contexts. Belief Propagation based decoding of several types of quantum error correction codes has already been implemented quite successfully, e.g. on concatenated block codes \\cite{Pou06b}, turbo codes \\cite{OPT07a}, and sparse codes \\cite{COT05a}. However, for the noise models considered there, the corresponding bifactor states only involve commuting operators and thus the corresponding inference problem could be solved by means of a classical Belief Propagation algorithm. We conclude with several open questions suggested by this work. \n\nIn the context of many-body physics, it would be interesting to relate the class of solutions obtained by QBP to other approximation schemes used in statistical physics, much in the spirit of the work of Yedidia \\cite{Yed01a} in the classical setting. A related problem would be to understand how the different classes of bifactor states relate to each other. We suspect that when the Hilbert space dimension at each vertex of the graph is held fixed, the $n$-bifactor states on that graph form a subset of the $m$-bifactor states when $n<m$.\n\nThe completed non-holomorphic Eisenstein series for the modular group $\\Gamma=\\operatorname{PSL}(2,\\Z)$ is defined for $z=x+iy$ in the upper half-plane $\\Hb$ and $\\Re(s)>1$ by\n\\begin{equation}\nE^\\ast(z,s):=\\pi^{-s}\\Gamma(s)\\left(\\frac{1}{2}\\sum_{(m,n)\\in\\Z^2\\setminus (0,0)}\n\\frac{y^s}{|mz+n|^{2s}}\\right).\n\\end{equation}\nIt is well known that for fixed $z$, $E^\\ast(z,s)$ admits a meromorphic \ncontinuation to the whole $s$-plane, and satisfies the functional equation\n\\[E^\\ast(z,s)=E^\\ast(z,1-s).\\]\nIts only singularities are simple poles at $s=1$ and $s=0$. As a function of\n$z$, it is invariant under $\\Gamma$\n\\[E^\\ast(\\gamma z,s)=E^\\ast(z,s), \\quad \\gamma\\in\\Gamma.\\]\nIn particular $E^\\ast(z,s)$ is invariant under $z\\mapsto z+1$ and so it has a\nFourier expansion\n\\[E^\\ast(x+iy,s)=\\sum_{n\\in\\Z}a_n(y,s)e^{2\\pi inx}.\\]\nThe zeroth Fourier coefficient $\\varphi_0(y,s)$ is given by\n\\[\\varphi_0(y,s)=\\Lambda(2s)y^s+\\Lambda(2s-1)y^{1-s},\\]\nwhere $\\Lambda(s)$ is the completed zeta function\n\\[\\Lambda(s)=\\pi^{-s\/2}\\Gamma\\left(\\frac{s}{2}\\right)\\zeta(s).\\]\nLet $a>0$. The zeros of $\\varphi_0(a,s)$ \nhave been studied by various authors. Hejhal\n\\cite[Proposition 5.3(f)]{He} proved that for all $a\\ge 1$, the complex zeros\nof $\\varphi_0(a,s)$ are on the critical line $\\Re(s)=1\/2$. Lagarias and Suzuki \n\\cite{LS} reproved this result and also determined the occurrence of real\nzeros. Ki \\cite{Ki} proved that all complex zeros are simple.\nPutting these results together, we have the following theorem.\n\\begin{theo}\\label{th0.1}\nFor each $a\\ge 1$ all complex zeros of $\\varphi_0(a,s)$ are simple and lie on\nthe \ncritical line $\\Re(s)=1\/2$. 
Moreover there is a critical value\n$a^\\ast=4\\pi e^{-\\gamma}=7.055...$\nsuch that the following holds:\n\\begin{itemize}\n\\item[1)] For $1\\le a\\le a^\\ast$ all zeros are on the critical line.\n\\item[2)] For $a>a^\\ast$ there are exactly two zeros off the critical\n line. These \nare real simple zeros $\\rho_a, 1-\\rho_a$ with $\\rho_a\\in(1\/2,1)$. The zero\n$\\rho_a$ is a nondecreasing function of $a$ and $\\lim_{a\\to\\infty}\\rho_a=1$.\n\\end{itemize}\n\\end{theo}\nThe first aim of this paper is to point out that there is actually a\nspectral interpretation of the zeros of $\\varphi_0(a,s)$, which \ngives another proof of this theorem. For $a> 0$ let $\\Delta_a$ be the\ncut-off Laplacian introduced by Lax and Phillips \\cite{LP}.\nIt acts in the subspace $\\H_a\\subset L^2(\\Gamma\\ba \\Hb)$ of all\n$f\\in L^2(\\Gamma\\ba \\Hb)$ satisfying\n$\\int_0^1f(x+iy)\\,dx=0 $ for almost all $y\\ge a$.\nThe cut-off Laplacian $\\Delta_a$ is a nonnegative self-adjoint operator with\npure point spectrum. The spectrum has been studied by Colin de Verdi\\`ere \n\\cite{CV}. Let \n\\begin{equation}\\label{0.2}\nc(s)=\\frac{\\Lambda(2s-1)}{\\Lambda(2s)},\\quad s\\in\\C.\n\\end{equation}\nThe following theorem is an immediate consequence of \\cite[Th\\'eor\\`eme 5]{CV}.\n\n\\begin{theo}\\label{th0.2}\nFor every $a>0$, the spectrum of $\\Delta_a$ is the union of the cuspidal\neigenvalues \n$0<\\lambda_1\\le\\lambda_2\\le\\cdots\\to\\infty$ of $\\Delta$ and a sequence of\neigenvalues\n$$0<\\mu_0(a)< \\mu_1(a)<\\cdots$$\nwith the following properties:\n\\begin{enumerate}\n\\item[1)] Each eigenvalue $\\mu_j(a)$ is a decreasing function of $a$.\n\\item[2)] If $a\\ge1$, each eigenvalue $\\mu_j(a)$ has multiplicity 1. Moreover\n $\\lim_{a\\to\\infty}\\mu_0(a)=0$ and if $j\\ge 1$, then $\\mu_j(a)\\ge 1\/4$ and\n $\\lim_{a\\to\\infty}\\mu_j(a)= 1\/4$. \n\\item[3)] Let $a\\ge1$. Then the map $s\\mapsto s(1-s)$ is a bijection between\n the zeros $\\rho\\not=1\/2$ of $\\varphi_0(a,s)$ and the eigenvalues\n $\\mu_j(a)\\not=1\/4$ of $\\Delta_a$.\n\\item[4)] $1\/2$ is a zero of $\\varphi_0(a,s)$ for all $a>0$. \nThere is $j$ with $\\mu_j(a)=1\/4$ if and only if $c^\\prime(1\/2)=-2\\log a$.\n\\end{enumerate}\n\\end{theo}\nThis theorem implies that for $a\\ge 1$ there is at most one pair of zeros\n$\\rho_a$, $1-\\rho_a$ of $\\varphi_0(a,s)$ off the\nline $\\Re(s)=1\/2$. The simplicity of \nthe zeros follows from the consideration of the corresponding eigenfunctions\nof $\\Delta_a$. \n\nThe main purpose of this article is to extend these results to the constant\nterm of other Eisenstein series. In the \npresent paper we will consider the Eisenstein series attached to\n$\\operatorname{PSL}(2,\\cO_K)$, \nwhere $\\cO_K$ is the ring of integers of\nan imaginary quadratic field $K=\\Q(\\sqrt{-D})$ of class number one, $D$ being\na square free positive integer. These are exactly the fields\n$\\Q(\\sqrt{-D})$ with $D=1,2,3,7,11,19,43,67,163$ \\cite{St}. Let $d_K$ be the\ndiscriminant of $K$ and let $\\zeta_K(s)$ be the Dedekind zeta function of\n$K$. Let\n\\begin{equation}\n\\Lambda_K(s)=\\left(\\frac{\\sqrt{|d_K|}}{2\\pi}\\right)^s\\Gamma(s)\\zeta_K(s)\n\\end{equation}\nbe the completed zeta function. 
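To fix ideas, in the simplest case $K=\\Q(i)$ one has $d_K=-4$ and $\\zeta_{\\Q(i)}(s)=\\zeta(s)L(s,\\chi_{-4})$, where $\\chi_{-4}$ denotes the nontrivial Dirichlet character modulo $4$, so that\n\\[\\Lambda_{\\Q(i)}(s)=\\pi^{-s}\\Gamma(s)\\,\\zeta(s)L(s,\\chi_{-4}).\\]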
\nThen $\\Lambda_K(s)$ satisfies the functional equation \n$\\Lambda_K(s)=\\Lambda_K(1-s)$.\nFor $a>0$ let \n\\begin{equation}\n\\varphi_{K}(a,s):=a^s\\Lambda_K(s)+a^{2-s}\\Lambda_K(s-1).\n\\end{equation}\nNote that $\\varphi_{K}(a,s)$ is a Dirichlet series which satisfies the\nfunctional equation $\\varphi_{K}(a,s)=\\varphi_{K}(a,2-s)$.\nLet $\\xi_K(s)=s(s-1)\\Lambda_K(s)$. Then $\\xi_K(s)$ is entire. Put\n\\[a^\\ast_K:=\\exp\\left(1+\\frac{\\xi_K^\\prime(1)}{\\xi_K(1)}\\right). \\]\nThen our main result is the following theorem.\n\\begin{theo}\\label{th0.3}\nLet $K=\\Q(\\sqrt{-D})$ be an imaginary quadratic field of class number one. \nThen for each $a\\ge 1$ all complex zeros of \n$\\varphi_K(a,s)$ are simple and lie on the line $\\Re(s)=1$. Moreover \n\\begin{enumerate}\n\\item[1)] For $a>\\max\\{a^\\ast_K,1\\}$ there are exactly two zeros off the \ncritical line.\nThese are simple zeros $\\rho_a,2-\\rho_a$ with $\\rho_a\\in(1,2)$. The zero\n$\\rho_a$ is a nondecreasing function of $a$, and $\\lim_{a\\to\\infty}\\rho_a=2$. \n\\item[2)] If $a^\\ast_K\\ge 1$ and $1\\le a\\le a^\\ast_K$, then\n all zeros of $\\varphi_K(a,s)$ are on the critical line.\n\\end{enumerate}\n\\end{theo} \nTo prove Theorem \\ref{th0.3}, we extend Colin de Verdi\\`ere's Theorem \n\\cite[Th\\'eor\\`eme 5]{CV} to our setting. Let $\\cO_K$ be the ring of integers\nof $K$ and let $\\Gamma=\\operatorname{PSL}(2,\\cO_K)$. Then $\\Gamma$ is a discrete subgroup of \n$\\operatorname{PSL}(2,\\C)$ which acts properly discontinuously on the 3-dimensional\nhyperbolic space $\\Hb^3$. The quotient $\\Gamma\\ba\\Hb^3$ has finite volume\nand a single cusp at $\\infty$. The function $\\varphi_K(y,s)$ arises as the \nzeroth Fourier coefficient of the modified Eisenstein series attached to the\ncusp $\\kappa=\\infty$. Let $\\Delta_a$ be the corresponding cut-off Laplacian\non $\\Gamma\\ba\\Hb^3$. Then we generalize Theorem \\ref{th0.2} to this setting.\nAs above, this leads to a spectral interpretation of the zeros of\n$\\varphi_K(a,s)$, and we deduce Theorem \\ref{th0.3} from this spectral\ninterpretation. \n\nThe constant $a^\\ast_K$ can be computed using the Kronecker limit formula \n\\cite[p. 273]{La} \nand the Chowla-Selberg formula \\cite[(2), p.110]{SC}.\nFor example, we get\n\\[a^\\ast_{\\Q(i)}=\\frac{4\\pi^2 e^{2+\\gamma}}{\\Gamma(1\/4)^4}\\approx 3.00681,\n\\quad\na^\\ast_{\\Q(\\sqrt{-2})}=\\frac{8\\pi^2e^{2+\\gamma}}\n{\\left(\\Gamma\\left(\\frac{1}{8}\\right)\\Gamma\\left(\\frac{3}{8}\\right)\\right)^{2}}\n\\approx 3.2581 .\\] \nThus $a^\\ast_{\\Q(\\sqrt{-D})}>1$ for $D=1,2$ and therefore, in the range \n$1\\le a\\le a^\\ast_{\\Q(\\sqrt{-D})}$ all zeros of\n$\\varphi_{\\Q(\\sqrt{-D})}(a,s)$ are \non the line $\\Re(s)=1$. \n\nThe method used to prove Theorem \\ref{th0.3} can be extended so that\nan arbitrary number field $K$ of class number one can be treated. \nThe underlying global Riemannian symmetric \nspace is $X=\\Hb^{r_1}\\times (\\Hb^3)^{r_2}$, where $r_1$ is the number of real\nplaces and $r_2$ the number of pairs of complex conjugate places of $K$. The\nstructure of $\\Gamma\\ba X$ is slightly more complicated; however,\nthe proof of the corresponding statement is completely analogous. More\ngenerally, it seems to be possible to prove similar results for the\nconstant terms of rank one cuspidal Eisenstein series attached to\n$\\operatorname{PSL}(n,\\Z)$. \n \n\\section{The cut-off Laplacian}\n\\setcounter{equation}{0}\n\nThe cut-off Laplacian can be defined for every rank one locally symmetric \nspace. 
In the present paper we are dealing only with hyperbolic\nmanifolds of dimension 2 and 3. So we discuss only the case of a\nhyperbolic 3-manifold. The general case is similar.\n\nLet \n\\[\\Hb^3=\\{(x_1,x_{2},y)\\in\\R^3\\colon y>0\\}\\]\nbe the hyperbolic 3-space with its hyperbolic metric\n\\[ds^2=\\frac{dx_1^2+dx_{2}^2+dy^2}{y^2}.\\]\nThe hyperbolic Laplacian $\\Delta$ is given by\n\\[\\Delta=-y^2\\left(\\frac{\\partial^2}{\\partial x_1^2}\n+\\frac{\\partial^2}{\\partial x_2^2}+ \n\\frac{\\partial^2}{\\partial y^2}\\right)+y\\frac{\\partial}{\\partial y}.\\]\nIf we regard $\\Hb^3$ as the set of all quaternions $z+yj$ with $z\\in\\C$\nand $y>0$, then $G=\\operatorname{PSL}(2,\\C)$ is the group of all orientation preserving\nisometries of $\\Hb^3$. It acts by linear fractional transformations, i.e., for\n$\\gamma=\\begin{pmatrix}a&b\\\\c&d\\end{pmatrix}\\in G$,\n\\[\\gamma(w)=(aw+b)(cw+d)^{-1},\\quad w\\in\\Hb^3.\\]\nLet\n$$B(\\C):=\\left\\{\\begin{pmatrix}\\lambda&z\\\\0&\\lambda^{-1}\\end{pmatrix}\\colon\n \\lambda\\in\\C^\\ast,\\; z\\in\\C\\right\\}\/\\{\\pm \\operatorname{Id}\\},\\quad\nN(\\C):=\\left\\{\\begin{pmatrix}1&z\\\\0&1\\end{pmatrix}\\colon z\\in\\C\\right\\}.$$ \nLet $\\Gamma\\subset G$ be a discrete subgroup of finite co-volume. \nLet $\\kappa_1=\\infty, \\kappa_2,...,\\kappa_m\\in \\Pf^1(\\C)$ be a complete set of\n$\\Gamma$-inequivalent cusps, and let $\\Gamma_i$ be the stabilizer of\n$\\kappa_i$ in $\\Gamma$. Choose $\\sigma_i\\in G$ such that\n$\\sigma_i(\\kappa_i)=\\infty$, $i=1,...,m$. Then\n\\begin{equation}\\label{1.1}\n\\sigma_i\\Gamma_i\\sigma_i^{-1}\\cap\nN(\\C)=\\left\\{\\begin{pmatrix}1&\\lambda\\\\0&1\\end{pmatrix}\\colon \n\\lambda\\in L_i\\right\\},\n\\end{equation}\nwhere $L_i$ is a lattice in $\\C$ \\cite[Theorem 2.1.8]{EGM}. Note that\nfor $D\\not=1,3$ the intersection actually coincides with \n$\\sigma_i\\Gamma_i\\sigma_i^{-1}$. \nChoose closed fundamental sets $\\cP_i$ for\nthe action of $\\sigma_i\\Gamma_i \\sigma_i^{-1}$ \non $\\Pf^1(\\C)\\setminus\\{\\infty\\}=\\C$. For $T>0$ define\n\\[\\tilde \\cF_i(T):=\\left\\{ (z,y)\\in\\Hb^3\\colon z\\in\\cP_i,\\;\\; y\\ge T\\right\\}.\\] \nLet $\\cF_i(T)=\\sigma_i^{-1}(\\tilde \\cF_i(T))$. There exists $T_i>0$ such \nthat\nany two points in $\\cF_i(T_i)$ are $\\Gamma$-equivalent if and only if they are\n$\\Gamma_i$-equivalent. For such a choice of $T_1,...,T_m$ there exists a\ncompact subset $\\cF_0\\subset \\Hb^3$ such that\n\\begin{equation}\\label{1.2}\n\\cF:=\\cF_0\\cup \\cF_1(T_1)\\cup\\cdots\\cup \\cF_m(T_m)\n\\end{equation}\nis a fundamental domain for $\\Gamma$ \\cite[Proposition 2.3.9]{EGM}. Let\n\\begin{equation}\\label{1.3}\nb:=\\max\\{T_1,...,T_m\\}.\n\\end{equation}\nBy $C^\\infty_c(\\Gamma\\ba\\Hb^3)$ we denote\nthe space of $\\Gamma$-invariant $C^\\infty$-functions on $\\Hb^3$ with compact \nsupport in $\\cF$. 
For $f\\in C^\\infty_c(\\Gamma\\ba \\Hb^3)$ let\n$$\\parallel f\\parallel^2=\\int_{\\cF}|f(x_1,x_2,y)|^2\\;\\frac{dx_1dx_2dy}{y^3},$$\nand let $L^2(\\Gamma\\ba \\Hb^3)$ be the completion of $C^\\infty_c(\\Gamma\\ba\n\\Hb^3)$ with respect to this norm.\nSimilarly let\n$$\\parallel df\\parallel^2=\\int_{\\cF} df\\wedge\\ast \\overline{df}\n=\\int_{\\cF}\\parallel\ndf(x_1,x_2,y)\\parallel^2\\;\\frac{dx_1dx_2dy}{y}.$$ \nLet $H^1(\\Gamma\\ba \\Hb^3)$ denote the completion of $C^\\infty_c(\\Gamma\\ba\n\\Hb^3)$ with respect to the norm\n$$\\parallel f\\parallel^2_1:=\\parallel f\\parallel^2+\\parallel df\\parallel^2.$$\nDenote by $|\\cP_i|$ the Euclidean area of the fundamental domain \n$\\cP_i\\subset\\C$ of\n$\\sigma_i\\Gamma_i\\sigma_i^{-1}$.\nFor $f\\in L^2(\\Gamma\\ba \\Hb^3)$ let \n\\[f_{j,0}(y)=\\frac{1}{|\\cP_i|}\\int_{\\cP_i} f(\\sigma_i^{-1}(x_1,x_2,y))\\;dx_1dx_2\\]\nbe the zeroth coefficient of the Fourier expansion of $f$ at the cusp\n$\\kappa_j$. Note that for $f\\in H^1(\\Gamma\\ba \\Hb^3)$, each $f_{j,0}$ belongs to\n$H^1(\\R^+)$ and therefore, each $f_{j,0}$ is a continuous function on $\\R^+$. \nFor $a>0$ let\n\\[H^1_a(\\Gamma\\ba \\Hb^3):=\\left\\{f\\in H^1(\\Gamma\\ba \\Hb^3)\\colon f_{j,0}(y)=0\\;\n\\;{\\mathrm for}\\;y\\ge a,\\;j=1,...,m\\right\\}.\\]\nThen $H^1_a(\\Gamma\\ba\\Hb^3)$ is a closed subspace of $H^1(\\Gamma\\ba\\Hb^3)$. \nHence the quadratic form \n\\[q_a(f)=\\parallel df\\parallel^2,\\quad f\\in H^1_a(\\Gamma\\ba\\Hb^3),\\]\nis closed. Let $\\Delta_a$ denote the self-adjoint operator which represents\nthe quadratic form $q_a$. It acts in the Hilbert space $\\H_a$ which is the\nclosure of $H^1_a(\\Gamma\\ba\\Hb^3)$ in $L^2(\\Gamma\\ba\\Hb^3)$. By definition,\n$\\Delta_a$ is nonnegative. Its domain can be described as follows.\nLet $\\psi_{j,a}\\in\\cD^\\prime(\\Gamma\\ba\\Hb^3)$ be defined by \n\\[\\psi_{j,a}(f):=f_{j,0}(a),\\quad f\\in C^\\infty_c(\\Gamma\\ba\\Hb^3).\\]\nLet $b>0$ be defined by (\\ref{1.3}). \n\\begin{lem} Let $a\\ge b$. Then the domain of $\\Delta_a$ consists of all \n$f\\in H^1_a(\\Gamma\\ba \\Hb^3)$ for which there exist $C_1,...,C_m\\in\\C$ \nsuch that\n\\begin{equation}\\label{1.4}\n\\Delta f-\\sum_{j=1}^mC_j\\psi_{j,a}\\in L^2(\\Gamma\\ba \\Hb^3).\n\\end{equation}\n\\end{lem}\nThe proof of the lemma is analogous to the proof of Th\\'eor\\`em 1 in\n\\cite{CV}. \nLet $f\\in \\operatorname{dom}(\\Delta_a)$. By the lemma, there exist $C_1,...,C_m\\in\\C$ such \nthat (\\ref{1.4}) holds. Then $\\Delta_af$ is given by\n\\begin{equation}\\label{1.5}\n\\Delta_af=\\Delta f-\\sum_{j=1}^mC_j\\psi_{j,a}.\n\\end{equation}\n\n\nFurthermore, we have\n\\begin{lem}\\label{l1.2}\n$\\Delta_a$ has a compact resolvent.\n\\end{lem}\n\\begin{proof} The proof is a simple extension of the argument in \n\\cite[p.206]{LP}. \n\\end{proof}\nSo the spectrum of $\\Delta_a$ consists of a sequence of eigenvalues \n$0\\le \\lambda_1(a)\\le \\lambda_2(a)\\le\\cdots\\to\\infty$ with finite\nmultiplicities. To describe the eigenvalues and eigenfunctions of $\\Delta_a$\nmore explicitely, we need to consider the Eisenstein series. Let $\\sigma_j\\in\nG$, $j=1,...,m$, be as above. \nFor each cusp $\\kappa_i$, the Eisenstein series $E_i(w,s)$ attached to\n$\\kappa_i$ is defined to be\n\\[E_i(w,s):=\\sum_{\\gamma\\in\\Gamma_i\\ba \\Gamma}y(\\sigma_i\\gamma w)^s,\\quad\n\\Re(s)>2,\\]\nwhere $y(\\sigma_i\\gamma w)$ denotes the $y$-component of $\\sigma_i\\gamma w$. 
\nSelberg has shown that $E_i(w,s)$ can be meromorphically continued in $s$ to \nthe whole complex plane $\\C$, and it is an automorphic eigenfunction of\n$\\Delta$ with \n$$\\Delta E_i(w,s)=s(2-s)E_i(w,s).$$\nSince $E_i(w,s)$ is $\\Gamma$-invariant, it is invariant under the stabilizer\n$\\Gamma_j$ of the cusp $\\kappa_j$ and therefore, admits a Fourier expansion\nat $\\kappa_j$. The constant Fourier coefficient is given by\n\\begin{equation}\\label{1.3a}\n\\delta_{ij}y^s+C_{ij}(s)y^{2-s}.\n\\end{equation}\nPut\n\\[C(s):=\\left(C_{ij}(s)\\right)_{i,j=1,...,m}.\\]\nThis is the so called ``scattering matrix''. The scattering matrix and the \nEisenstein series satisfy the following system of functional equations.\n\\begin{equation}\\label{1.7}\n\\begin{split}\n&C(s)C(2-s)=\\operatorname{Id},\\\\\n&E_i(w,s)=\\sum_{j=1}^m C_{ij}(s)E_j(w,2-s), \\quad i=1,...,m.\n\\end{split}\n\\end{equation}\n\nNow recall that a square integrable eigenfunction $f$ of $\\Delta$ is called \ncuspidal, if the zeroth Fourier coefficient $f_{j,0}$ of $f$\nat the cusp $\\kappa_j$ vanishes for all $j=1,...,m$. Denote by\n$S(\\lambda;\\Gamma)$ the space of cuspidal eigenfunctions of $\\Delta$ with \neigenvalue $\\lambda$. A function $f\\in C^0(\\Gamma\\ba \\Hb^3)$ is called of \nmoderate growth, if there exists $R\\in\\R^+$ such that\n\\[f(\\sigma^{-1}_i(x_1,x_2,y))=O(y^R),\\quad {\\mathrm\n for}\\;\\;(x_1,x_2,y)\\in\\cF_i(b),\\; i=1,...,m.\\] \nIt follows from the Fourier expansion in the cusps \\cite[Section 3.3.3]{EGM}\nthat a cuspidal eigenfunction $f$ of $\\Delta$ is rapidly decreasing in each\ncusp. Therefore for $\\psi\\in\\E(\\lambda,\\Gamma)$ and $\\varphi\\in\nS(\\lambda,\\Gamma)$ , the inner product $\\langle \\psi, \\varphi\\rangle$ \nis well defined. For $\\lambda\\in\\C$ let\n\\begin{equation}\n\\E(\\lambda;\\Gamma):=\\left\\{\\psi\\in L^2_{\\loc}(\\Gamma\\ba\\Hb^3)\\colon\n \\Delta\\psi=\\lambda\\psi,\\;\\psi\\;\\;{\\mathrm is \\; of\\;\n moderate\\; growth},\\;\\psi\\perp S(\\lambda;\\Gamma)\\right\\}.\n\\end{equation}\n\\begin{lem}\\label{l1.3} Let $m$ be the number of cusps of $\\Gamma\\ba\\Hb^3$. \nThen \n\\begin{enumerate}\n\\item[1)] For every $\\lambda\\in\\C$ we have $\\dim\\E(\\lambda;\\Gamma)\\le m$. \n\\item[2)] Suppose that $\\lambda=s(2-s)$, where $\\Re(s)\\ge 1$, $s\\not=1$,\n and $s$ is not a pole of any Eisenstein series. Then\n$E_1(w,s),...,E_m(w,s)$ is a basis of $\\E(\\lambda;\\Gamma)$.\n\\end{enumerate}\n\\end{lem}\n\\begin{proof}\nThe proof is analogous to the proof of Satz 11 in \\cite[p.171]{Ma}. Let \n$\\phi,\\psi\\in\\E(\\lambda;\\Gamma)$ and $\\lambda=s(2-s)$. The zeroth Fourier \ncoefficient of $\\phi$, $\\psi$ at the $j$-th cusp is given by\n\\[\\phi_{0,j}(y)=a_jy^s+b_jy^{2-s},\\quad\n\\psi_{0,j}(y)=c_jy^s+d_jy^{2-s},\\;s\\not=1,\\] \nand for $s=1$ by\n\\[\\phi_{0,j}(y)=a_jy+b_jy\\log y,\\quad \\psi_{0,j}(y)=c_jy+d_jy\\log y.\\]\nLet $\\chi_{j,[a,\\infty)}$ denote the characteristic function of $\\Gamma_i\\ba\n\\cF_i(a)$ in $\\Gamma\\ba\\Hb^3$. Let\n\\[\\phi_a=\\phi-\\sum_{j=1}^m\\chi_{j,[a,\\infty)}\\phi_{0,j},\\quad \n\\psi_a=\\psi- \\sum_{j=1}^m\\chi_{j,[a,\\infty)}\\psi_{0,j}\\] \nbe the truncations of $\\phi$ and $\\psi$, respectively, at level $a\\ge b$. \nLet $s\\not=1$. 
Integrating by parts, we get\n\\begin{equation}\\label{1.9}\n\\begin{split}\n0=\\int_{\\Gamma\\ba \\Hb^3}(\\phi_a\\Delta\\psi_a-\\psi_a\\Delta\\phi_a)\\;d{\\mathrm \nvol}&=\n\\sum_{j=1}^m\\left(\\psi_{0,j}(y)\\frac{\\partial\\phi_{0,j}}{\\partial y}(y)-\n\\phi_{0,j}(y)\\frac{\\partial\\psi_{0,j}}{\\partial y}(y)\\right)\\bigg|_{y=a}\\\\\n&=(2-2s)\\sum_{j=1}^m(a_jd_j-b_jc_j).\n\\end{split}\n\\end{equation}\nFor $s=1$ the right hand side equals $\\sum_{j=1}^m(a_jd_j-b_jc_j)$.\n\nAn element of $\\E(\\lambda)$ is uniquely determined by its zeroth Fourier \ncoefficients. Define $i\\colon \\E(\\lambda)\\to \\C^{2m}$ by\n$i(\\phi):=(a_1,b_1,...,a_m,b_m),$\nwhere $a_j,b_j$ are the $0$-th Fourier coefficients of $\\phi$ at the $j$-th\ncusp. Then $i$ is an \nembedding. Let \n\\[I(x,y):=\\sum_{j=1}^m(x_{2j-1}y_{2j}-x_{2j}y_{2j-1})\\]\nbe the standard symplectic form on $\\C^{2m}$. Then by (\\ref{1.9}) and the\ncorresponding statement for $s=1$,\n$\\E(\\lambda)$ is an isotropic subspace for $I$. Hence $\\dim\\E(\\lambda)\\le m$. \n\nIf $s\\not=1$ and $s$ is not a pole of $E_j(w,s)$, $j=1,...,m$, the Eisenstein \nseries have linearly independent $0$-th Fourier coefficients given by \n(\\ref{1.3a}). This implies that $E_1(w,s),...,E_m(w,s)$ are linearly\nindependent and hence, form a basis of $\\E(\\lambda,\\Gamma)$, where\n$\\lambda=s(2-s)$.\n\\end{proof}\nWe are now ready to describe the spectrum of $\\Delta_a$. Let\n$0=\\Lambda_0<\\Lambda_1<\\cdots<\\Lambda_N<1$ be the eigenvalues of $\\Delta$ with\nnon-cuspidal eigenfunctions. Then $\\Lambda_j$ has the form \n$\\Lambda_j=\\sigma_j(2-\\sigma_j)$ with $\\sigma_j\\in (1,2]$ and $\\sigma_j$ is a \nsimple pole of some \nEisenstein series $E_k(w,s)$. The corresponding residue is an \neigenfunction of $\\Delta$ with eigenvalue $\\Lambda_j$ \n\\cite[Proposition 6.2.2]{EGM}.\nThe following theorem\nis an extension of Theorem 5 in \\cite{CV} to the 3-dimensional case.\n\\begin{theo}\\label{th1.4}\nThe spectrum of $\\Delta_a$ is the union of the cuspidal \neigenvalues $(\\lambda_i)_i $ of $\\Delta$ and a sequence\n$0<\\mu_0(a)\\le \\mu_1(a)\\le \\cdots $\nwith the following properties:\n\\begin{enumerate}\n\\item[1)] Each $\\mu_j(a)$ is a decreasing function of $a$,\n\\item[2)] Let $a\\ge b$. Then each $\\mu_j(a)$ has multiplicity $\\le m$, and\n the map $s\\mapsto s(2-s)$ is a bijection between the\neigenvalues $\\mu_j(a)\\not\\in\\{1,\\Lambda_0,...,\\Lambda_N\\}$ and the zeros of\n$\\rho\\in\\C\\setminus\\{1,\\sigma_0,...,\\sigma_N\\}$ of \n$\\varphi(s):=\\det(C(s)+a^{2s-2}\\operatorname{Id})$.\n\\end{enumerate}\n\\end{theo}\n\\begin{proof}\nLet $f\\in L^2(\\Gamma\\ba\\Hb^3)$ be a cuspidal eigenfunction of $\\Delta$.\nThen $f$ belongs to\n$H^1_a(\\Gamma\\ba \\Hb^3)$ and $\\Delta f$ is square integrable. Hence\nby (\\ref{1.4}) and (\\ref{1.5}), $f\\in\\operatorname{dom}(\\Delta_a)$ and $\\Delta_af=\\Delta f$.\nThus all cuspidal eigenfunctions of $\\Delta$ are also eigenfunctions of\n$\\Delta_a$ with the same eigenvalues. \n\nFor 1)\nobserve that for $a\\le a^\\prime$ we have\n\\[H^1_a(\\Gamma \\ba\\Hb^3)\\subset H^1_{a^\\prime}(\\Gamma \\ba\\Hb^3).\\]\nSince $\\Delta_a$ is a non-negative self-adjoint operator with compact\nresolvent, 1) follows from the mini-max characterization of its eigenvalues.\n\n\n\n2) Let $\\Phi$ be an eigenfunction of $\\Delta_a$ with eigenvalue\n$\\mu_j(a)=s_j(2-s_j)$ and suppose that $\\Phi\\perp\nS(\\mu_j(a);\\Gamma)$. Let $\\Phi_{i,0}$ be the zeroth Fourier coefficient of\n$\\Phi$ at the cusp $\\kappa_i$. 
There exists $1\\le i\\le m$ such that\n$\\Phi_{i,0}\\not\\equiv 0$. Let $\\hat\\Phi_{i,0}$ denote the analytic continuation\nof $\\Phi_{i,0}$ from $(0,a)$ to $\\R^+$. Put\n\\[\\hat\\Phi(w):=\\Phi(w)+\\sum_{i=1}^m\\chi_{i,[a,\\infty)}\\hat\\Phi_{i,0}.\\]\nThen $\\hat\\Phi\\in C^\\infty(\\Gamma\\ba \\Hb^3)$ and $\\Delta \\hat\\Phi\n=\\mu_j(a)\\hat\\Phi$. \nMoreover $\\hat\\Phi$\nis of moderate growth and $\\hat\\Phi\\perp S(\\mu_j(a),\\Gamma)$. Therefore $\\hat\\Phi\\in\\E(\\mu_j(a);\\Gamma)$. \nBy Lemma \\ref{l1.3},\n1), it follows that the multiplicity of $\\mu_j(a)$ is bounded by $m$.\n\nNext suppose that\n$\\mu_j(a)=s_j(2-s_j)\\not\\in\\{1,\\Lambda_0,...,\\Lambda_N\\}$. We can \nassume that $\\Re(s_j)\\ge 1$. As explained above, $s_j$ is not a pole of any\nEisenstein series $E_k(w,s)$, $k=1,...,m$. Therefore, by Lemma \\ref{l1.3} \nthere exist \n$c_1,...,c_m\\in\\C$ such that \n\\[\\hat\\Phi(w)=\\sum_{i=1}^m c_i E_i(w,s_j).\\]\nBy construction, the zeroth Fourier coefficient $\\hat\\Phi_{l,0}(y)$ of \n$\\hat\\Phi(w)$ at\n$\\kappa_l$ vanishes at $y=a$ for all $l=1,...,m$. By (\\ref{1.3a}) this implies\n\\[\\sum_{i=1}^mc_i\\left(\\delta_{il}a^{s_j}+C_{il}(s_j)a^{2-s_j}\\right)=0,\\quad\nl=1,...,m.\\]\nHence $\\det(C(s_j)+a^{2s_j-2}\\operatorname{Id})=0$.\n\nNow assume that $\\det(C(s_j)+a^{2s_j-2}\\operatorname{Id})=0$, $\\Re(s_j)\\ge 1$,\n and $s_j\\not\\in\\{1,\\sigma_0,...,\\sigma_N\\}$. Then $s_j$ is not a pole of any\nEisenstein series. Let $0\\not=\\psi\\in\\C^m$ such\nthat\n\\begin{equation}\\label{1.10}\nC(s_j)\\psi=-a^{2s_j-2}\\psi.\n\\end{equation}\nLet $\\psi=(c_1,...,c_m)$ and set\n\\[\\hat\\Phi(w):=\\sum_{k=1}^m c_kE_k(w,s_j).\\] \nBy (\\ref{1.3a}) and (\\ref{1.10}), the $0$-th Fourier coefficient of \n$\\hat\\Phi(w)$ at the cusp $\\kappa_l$ is given by\n\\begin{equation}\n\\sum_{k=1}^m\nc_k\\left(\\delta_{kl}y^{s_j}+C_{kl}(s_j)y^{2-s_j}\\right)\n=c_l\\left(y^{s_j}-a^{2s_j-2}y^{2-s_j}\\right). \n\\end{equation}\nTherefore, the $0$-th Fourier coefficient $\\hat\\Phi_{l,0}(y)$ vanishes at\n$y=a$ for all $l=1,...,m$. Put\n\\[\\Phi(w)=\\hat\\Phi(w)-\\sum_{i=1}^m\\chi_{i,[a,\\infty)}\\hat\\Phi_{i,0}(y).\\]\nThen $\\Phi\\in H^1_a(\\Gamma\\ba\\Hb^3)$ and it follows from the description of\nthe domain of $\\Delta_a$ that $\\Phi\\in\\operatorname{dom}(\\Delta_a)$ and \n$\\Delta_a \\Phi=s_j(2-s_j)\\Phi$. \n\\end{proof}\n\nNow assume that $\\Gamma\\ba \\Hb^3$ has a single cusp $\\kappa=\\infty$. Then\nthere is a single \nEisenstein series $E(w,s)$, which is given by\n\\[E(w,s)=\\sum_{\\gamma\\in\\Gamma_\\infty\\ba\\Gamma}y(\\gamma w)^s,\\quad \\Re(s)>2.\\]\nThe zeroth Fourier coefficient of $E(w,s)$ at $\\infty$ equals\n\\begin{equation}\n\\varphi_0(y,s):=y^s+c(s)y^{2-s},\n\\end{equation}\nwhere $c(s)$ is a meromorphic function of $s\\in\\C$. \nThe following facts are well known \n\\cite[pp.243-245]{EGM}.\nThe poles of $E(w,s)$ in the half-plane $\\Re(s)>1$ are \ncontained in the interval $(1,2]$ and are all simple. The residue of $E(w,s)$\nat a pole \n$\\sigma\\in (1,2]$ is a square integrable eigenfunction of $\\Delta$ with \neigenvalue $\\sigma(2-\\sigma)$, which is non-cuspidal. Moreover all\nnon-cuspidal eigenfunctions of $\\Delta$ are obtained in this way. Thus\nthe non-cuspidal eigenvalues have multiplicity one. 
So in this case we get \nthe following refinement of Theorem \\ref{th1.4}.\n\\begin{theo}\\label{th1.5}\nAssume that $\\Gamma\\ba\\Hb^3$ has a single cusp.\nThen the spectrum of $\\Delta_a$ is the union of the cuspidal \neigenvalues $(\\lambda_i)_i $ of $\\Delta$ and a sequence\n$0<\\mu_0(a)< \\mu_1(a)< \\cdots $\nwith the following properties:\n\\begin{enumerate}\n\\item[1)] Each $\\mu_j(a)$ is a decreasing function of $a$;\n\\item[2)] Let $a\\ge b$. Then each $\\mu_j(a)$ has multiplicity 1 and the map\n$s\\mapsto s(2-s)$ is a bijection between the zeros $\\rho\\not=1$ of \n$\\varphi_0(a,s)$ and the eigenvalues $\\mu_j(a)\\not=1$ of $\\Delta_a$. \n\\item[3)] There is $j$ with $\\mu_j(a)=1$ if and only if $c(1)=-1$ and\n $c^\\prime(1)=-2\\log a$. \n\\item[4)] Let $0=\\Lambda_0<\\Lambda_1<\\cdots<\\Lambda_N<1$ be the\n eigenvalues of $\\Delta$ \nwith non-cuspidal eigenfunctions. If $a\\ge b$, then \n\\[0=\\Lambda_0<\\mu_0(a)<\\Lambda_1<\\mu_1(a)<\\cdots <\\Lambda_N< \n\\mu_N(a).\\]\nMoreover, \n\\begin{equation}\n\\lim_{a\\to\\infty}\\mu_j(a)=\\begin{cases}\\Lambda_j&,\\; 0\\le j\\le N;\\\\\n1&, \\;j>N.\n\\end{cases}\n\\end{equation}\n\\end{enumerate}\n\\end{theo}\n\\begin{proof}\n1) follows from Theorem \\ref{th1.4}. If $s\\not\\in\\{1,\\sigma_0,...,\\sigma_N\\}$,\nthen 2) follows also from 2) of Theorem \\ref{th1.4}. Now suppose that $s_0$\nis a pole of $E(w,s)$ in the half-plane $\\Re(s)\\ge 1$. Then the residue\n$\\psi$ of $E(w,s)$ at $s_0$ is an eigenfunction of $\\Delta$ with non-vanishing\n$0$-th Fourier coefficient. Moreover $\\psi\\perp S(\\lambda,\\Gamma)$.\nTherefore $\\psi\\in\\E(\\lambda,\\Gamma)$, where \n$\\lambda=s_0(2-s_0)$. By Lemma \\ref{l1.3} it follows that\n $\\dim\\E(\\lambda,\\Gamma)=1$.\nMoreover the constant term $\\psi_0(y)$ of $\\psi$ has the form \n$\\psi_0(y)=cy^{2-s_0}$. In particular, it never vanishes on $\\R^+$. On the other\nhand, if $\\Phi$ were an eigenfunction of $\\Delta_a$ with eigenvalue $\\lambda$,\nthen the corresponding eigenfunction $\\hat\\Phi\\in\\E(\\lambda,\\Gamma)$, \nconstructed in the proof of Theorem \\ref{th1.4}, would have a constant term\nwhich \nvanishes at $y=a$. Hence the eigenvalues $\\Lambda_j$ cannot be eigenvalues of\n$\\Delta_a$. This proves 2).\n\nFor 3) we note that $c(1)^2=1$ by the\nfunctional equation (\\ref{1.7}). Thus $c(1)=\\pm 1$. If $c(1)=-1$, then \n$E(w,1)\\equiv 0$. Put\n$\\psi(w):=\\frac{d}{ds}E(w,s)\\big|_{s=1}.$ Then $\\Delta \\psi=\\psi$ and the \nzeroth Fourier coefficient of $\\psi$ is given by\n\\[\\psi_0(y)=(2\\log y+c^\\prime(1))y.\\]\nSuppose that $c^\\prime(1)=-2\\log a$. Put \n$\\hat \\psi_a(w):=\\psi(w)-\\chi_{[a,\\infty)}\\psi_0(y(w))$. Then $\\hat \\psi_a$\nis in the domain of $\\Delta_a$ and $\\Delta_a\\hat\\psi_a=\\hat\\psi_a$. \n\nFor the other direction suppose that $\\hat\\psi$ is an eigenfunction of\n$\\Delta_a$ with eigenvalue \n$1$ and $\\hat\\psi\\perp S(1,\\Gamma)$. Let $\\hat\\psi_0(y)$ be the $0$-th Fourier\ncoefficient of $\\hat\\psi$. \nExtend $\\hat\\psi_0(y)$ in the obvious way from $(b,a)$ to a smooth function \n$\\psi_0$ on $(b,\\infty)$. Set\n\\[\\psi:=\\hat\\psi+\\chi_{[a,\\infty)}\\psi_{0}.\\]\nThen $\\psi$ is smooth, of moderate growth, satisfies $\\Delta\\psi=\\psi$ and\n$\\psi\\perp S(1,\\Gamma)$. Thus $\\psi\\in\\E(1,\\Gamma)$. Therefore by Lemma\n\\ref{l1.3} it follows that $\\dim\\E(1,\\Gamma)=1$. On the other hand, we have\n$0\\not=\\frac{d}{ds}E(w,s)\\big|_{s=1}\\in\\E(1,\\Gamma)$. 
\nTherefore it follows that there exists\n$c\\not=0$ such that $\\psi(w)=c\\frac{d}{ds}E(w,s)\\big|_{s=1}$. \nComparing the constant terms, it follows that $c^\\prime(1)=-2\\log a$. \n\n4) follows from the mini-max principle and the fact that $\\cup_{a\\ge b}\nH^1_a(\\Gamma \\ba\\Hb^3)$ is dense in $H^1(\\Gamma \\ba\\Hb^3)$.\n\\end{proof}\n\n\\section{Hyperbolic surfaces of finite area}\n\\setcounter{equation}{0}\n\nIn this section we prove Theorem \\ref{th0.2} and deduce Theorem\n\\ref{th0.1} from it.\n\nLet $\\Gamma=\\operatorname{PSL}(2,\\Z)$ be the modular group. Then $\\Gamma\\ba \\Hb$ has a single\ncusp $\\kappa=\\infty$. As fundamental domain we take the standard domain\n\\[F:=\\left\\{z\\in\\Hb\\colon |\\Re(z)|<1\/2,\\,|z|>1\\right\\}.\\]\nSo we can take $b=1$. \nThe Eisenstein series attached to the cusp $\\infty$ is given by\n\\[E(z,s)=\\sum_{\\gamma\\in\\Gamma_\\infty\\ba\\Gamma}\\Im(\\gamma z)^s=\\sum_{(m,n)=1}\n\\frac{y^s}{|mz+n|^{2s}},\\quad \\Re(s)>1.\\]\nThe constant term of $E(z,s)$ equals\n\\begin{equation}\\label{2.1}\ny^s+c(s)y^{1-s},\n\\end{equation}\nwhere $c(s)$ is the meromorphic function defined by (\\ref{0.2}). \n\n\\noindent\n{\\bf Proof of Theorem 0.2:}\nSince the zeros of the completed zeta function $\\Lambda(s)$ are all contained in the\nstrip $0<\\Re(s)<1$, it follows that $\\Lambda(2s-1)$ and $\\Lambda(2s)$ have no\ncommon zeros. This implies that the zeros of \n\\[\\varphi_0(a,s)=\\Lambda(2s)a^s+\\Lambda(2s-1)a^{1-s}\\]\ncoincide with the zeros of\n$a^s+c(s)a^{1-s}$. Now note that $\\Lambda(s)$ has poles of order 1 at $s=1,0$.\nThe residue at $s=1$ is 1 and the residue at $s=0$ is $-1$. Hence\n\\begin{equation}\\label{2.2}\nc(1\/2)=\\lim_{s\\to 1\/2}\\frac{\\Lambda(2s-1)}{\\Lambda(2s)}=-1.\n\\end{equation}\nNow the statements 1), 3) and 4) follow immediately from \\cite[Th\\'eor\\`eme\n5]{CV}. For 2) we use that by Roelcke \\cite{Ro}, the smallest positive\neigenvalue $\\lambda_1$ of $\\Delta$ on $\\Gamma\\ba \\Hb$ satisfies\n$\\lambda_1>1\/4$. Then 2) follows from Th\\'eor\\`eme 5, (iii), of \\cite{CV}. \n\n\\noindent\n{\\bf Proof of Theorem 0.1:} Let $a\\ge 1$ and let $\\rho\\not=1\/2$ be a zero of\n$\\varphi_0(a,s)$. Then by Theorem 0.2, 3), $\\rho(1-\\rho)$ is an eigenvalue of\n$\\Delta_a$. Hence $\\rho(1-\\rho)$ is a non-negative real number. If \n$\\rho(1-\\rho)> 1\/4$, then $\\rho$ is a complex zero with $\\Re(\\rho)=1\/2$. \nBy Theorem 0.2, 2), there exist at most two zeros $\\rho_a$, $1-\\rho_a$ with \n$0<\\rho_a(1-\\rho_a)<1\/4$, and we\nmay assume that \n$\\rho_a\\in (1\/2,1)$. We have $\\mu_0(a)<1\/4$, if $a>a^\\ast$, and $\\mu_0(a)>1\/4$, if\n$1\\le a<a^\\ast$; hence the real zeros $\\rho_a$, $1-\\rho_a$ occur exactly when $a>a^\\ast$. \n\nWe now turn to the case of an imaginary quadratic field $K$ of class number one\nand $\\Gamma=\\operatorname{PSL}(2,\\cO_K)$. For $a>0$, the zeros of \n\\[\\varphi_K(a,s)=a^s\\Lambda_K(s-1)+a^{2-s}\\Lambda_K(s)\\]\ncoincide with the zeros of $E_0(a,s)$. \n\nWe can now apply Theorem \\ref{th1.5}. By (\\ref{3.2}) we can take $b=1$.\nLet $a\\ge 1$ and let $\\rho\\not=1$ be a zero of $\\varphi_K(a,s)$. Then by\nTheorem \\ref{th1.5}, 2), $\\rho(2-\\rho)$ is an eigenvalue of $\\Delta_a$. This\nimplies \nthat if $\\rho(2-\\rho)\\ge 1$, then $\\rho=1+ir$ with $r\\in\\R$, $r\\not=0$. \nIf $\\rho(2-\\rho)< 1$, then $\\rho$ is real. Thus all complex zeros of \n$\\varphi_K(a,s)$ are on the line $\\Re(s)=1$. \n\nFor the real zeros we need to consider the non-cuspidal eigenvalues \n of $\\Delta$. 
By the spectral resolution of the Laplacian, these eigenvalues \nare in \none-to-one correspondence with the poles of the Eisenstein series $E(w,s)$ in \nthe interval $(1,2]$ \\cite[Proposition 6.2.2]{EGM}.\n On the other hand, using the\nMaass-Selberg relations, it follows that\nthe poles of $E(w,s)$ in $(1,2]$ coincide with the poles of the scattering\nmatrix $c_K(s)$ in $(1,2]$. Since $\\Lambda_K(s)$ has no zeros in $\\Re(s)>1$\nand the only pole in $\\Re(s)>0$ is a simple pole at $s=1$, it follows from\n(\\ref{3.4}) that the only pole of $c_K(s)$ in $\\Re(s)>1$ is a simple pole at \n$s=2$ which corresponds to the eigenvalue 0. This shows that $\\Delta$ has no\nnon-cuspidal eigenvalues in the interval $(0,1)$. \n By Theorem \\ref{th1.5}, 4), it follows that $\\varphi_K(a,s)$ has at most \ntwo real zeros $\\rho_a$ and $2-\\rho_a$ with\n$\\rho_a\\in (1,2)$. We have $\\mu_0(a)<1$, if $a>a^\\ast_K$, and $\\mu_0(a)>1$, if\n$a<a^\\ast_K$.\n\n\\begin{lemma}\\label{gin>gin}\nLet $\\sigma$ and $\\tau$ be two term orders on $S_{[n]}$.\nThen $\\mbox{\\upshape Gin}_{\\sigma}(I)\\geq_\\sigma \\mbox{\\upshape Gin}_{\\tau}(I)$\nfor any homogeneous ideal $I\\subset S_{[n]}$.\n\\end{lemma} \n\\smallskip\\noindent {\\it Proof: \\ }\nLet $f_1, \\ldots, f_t$ be a basis of $I_d$, and let \n$g$ be a generic $n \\times n$\nupper-triangular matrix. \nSince $M'_d:=\\mbox{\\upshape in}_{>_{\\tau}}(g(f_1)\\wedge\\ldots\\wedge g(f_t))$ appears\nin $g(f_1)\\wedge\\ldots\\wedge g(f_t)$ with a non-zero coefficient, it follows\nthat \n$M_d:=\\mbox{\\upshape in}_{\\sigma}(g(f_1)\\wedge\\ldots\\wedge g(f_t)) \\geq_{\\sigma} M'_d$\n(for every $d\\geq 0$).\nProposition \\ref{gin_constr} implies the lemma.\n\\hfill$\\square$\\medskip\n\nWe remark that a stronger version of Lemma \\ref{gin>gin} was proved in \n\\cite[Cor.~1.6]{Conca}.\n\nAnother ingredient needed for defining revlex shifting\nis the notion of the squarefree operation. This is a bijection $\\Phi$\nbetween the set of all monomials in $\\{x_i : i\\in \\mathbb{N}\\}$ and the set of all\nsquarefree monomials in $\\{x_i : i\\in \\mathbb{N}\\}$, defined by\n$$\\Phi(\\prod_{j=1}^k x_{i_j})=\\prod_{j=1}^k x_{i_j+j-1}, \\mbox{ where }\n i_1\\leq i_2\\leq \\ldots\\leq i_k.\n$$\n Note that for a monomial $m\\in S_{[n]}$, \n$\\Phi(m)$ may not belong to $S_{[n]}$. However, the graded \nreverse lexicographic order \n has the following remarkable property \n\\cite[Lemma 6.3(ii)]{K91}, \\cite[Lemma 1.1]{AHH}: if \n$m$ is a minimal generator of $\\mbox{\\upshape Gin}_{\\mbox{\\upshape {\\tiny rl}}} I_\\Gamma$ \n(where $\\Gamma$ is a simplicial \ncomplex on $[n]$), then $\\Phi(m)$ is an element of $S_{[n]}$.\nThis leads to the following definition (due to Kalai):\n\\begin{definition} \\label{equiv}\nLet $\\Gamma$ be a simplicial complex on the vertex set $[n]$.\nThe reverse lexicographic shifting of $\\Gamma$,\n$\\Delta_{\\mbox{\\upshape {\\tiny rl}}}(\\Gamma)$, is a simplicial complex on $[n]$ whose \nStanley-Reisner ideal is given by\n$$I_{\\Delta_{\\mbox{\\upshape {\\tiny rl}}}(\\Gamma)}=\\langle \\Phi(m) \\; : \\; m\\in\nG(\\mbox{\\upshape Gin}_{\\mbox{\\upshape {\\tiny rl}}} I_\\Gamma) \\rangle,$$\nwhere $G(I)$ denotes the set of\nthe minimal generators of a monomial ideal $I$.\n\\end{definition}\n\nWe now provide a new and simple proof of Eq.~(\\ref{P5})\n (due originally to Aramova, Herzog, and Hibi \\cite{AHH}).\n\\begin{theorem} \\label{AHH}\nThe revlex shifting $\\Delta_{\\mbox{\\upshape {\\tiny rl}}}$ satisfies all the conditions of \nTheorem \\ref{stable}. 
Thus $\\Delta_{\\mbox{\\upshape {\\tiny rl}}}(\\Gamma)=\\Gamma$ for every\n shifted complex $\\Gamma$.\n\\end{theorem} \n\\smallskip\\noindent {\\it Proof: \\ } It is well-known that (symmetric) revlex shifting satisfies all\nthe conditions of Theorem \\ref{stable}, except possibly for property\n(2) whose proof appears to be missing in the literature (for the\nexterior version of algebraic shifting it was recently verified by\nNevo \\cite{Nevo}): the fact that $\\Delta(\\Gamma)$ is a shifted\nsimplicial complex follows from Lemma~\\ref{cone}(1); property (1) is\n\\cite[Lemma 6.3(i)]{K91}; property (3) is a consequence of \nLemma~\\ref{cone}(3); property (4) follows from \\cite[Cor.~8.25]{H} asserting\nthat $\\beta_i(\\Gamma)=\\beta_i(\\Delta(\\Gamma))$ for all $i$. To prove\nproperty (2) it suffices to check that $\\Delta(\\Gamma)$ and\n$\\Delta(\\Gamma\\star \\{n+1\\})$ have the same set of minimal nonfaces\n(equivalently, $I_{\\Delta(\\Gamma)}\\subset S_{[n]}$ and\n$I_{\\Delta(\\Gamma\\star \\{n+1\\})}\\subset S_{[n+1]}$ have the same set\nof minimal generators). This follows from Definition \\ref{equiv} and\nLemma \\ref{cone}(4). \\hfill$\\square$\\medskip\n\n\\paragraph{\\bf Remarks} \\quad\n\n(1)\n We note that to verify the inequality $\\sum \\beta_i(\\Gamma)\\leq \\sum\n \\beta_i(\\Delta_{\\mbox{\\upshape {\\tiny rl}}}(\\Gamma))$ one does not need to use the fact\n that $\\beta_i(\\Gamma)=\\beta_i(\\Delta_{\\mbox{\\upshape {\\tiny rl}}}(\\Gamma))$ for all\n $i$, which is a consequence of the deep result due to\n Bayer--Charalambous--Popescu \\cite{BCP} and Aramova--Herzog~\\cite{AH} \nthat revlex shifting preserves extremal (algebraic) Betti\n numbers. Instead one can use the standard flatness argument (see\n \\cite[Thm.~3.1]{H}) to show that $\\beta_{i,j}(I_\\Gamma) \\leq\n \\beta_{i,j}(\\mbox{\\upshape Gin}_{\\mbox{\\upshape {\\tiny rl}}}(I_\\Gamma)) = \\beta_{i,j}(I_{\\Delta(\\Gamma)})$ \nfor all $i$, $j$, where the equality comes from\n the fact that $\\Phi$ applied to (minimal generators of)\n a strongly stable ideal\n $\\mbox{\\upshape Gin}_{\\mbox{\\upshape {\\tiny rl}}}(I_\\Gamma)$ preserves algebraic Betti numbers (see\n \\cite[Lemma 2.2]{AHH}). 
The Hochster formula \\cite{Hoc} then asserts\n that the reduced Betti numbers of a simplicial complex are equal to\n certain algebraic graded Betti numbers of its Stanley-Reisner ideal.\n\n(2)\n In algebraic terms, the statement of\nTheorem \\ref{AHH} translates to the fact that if\n$I\\subset S_{[n]}$ is a squarefree strongly stable ideal, then\n$\\Phi(\\mbox{\\upshape Gin}_{\\mbox{\\upshape {\\tiny rl}}}(I))=I$, where \n$\\Phi(\\mbox{\\upshape Gin}_{\\mbox{\\upshape {\\tiny rl}}}(I)):=\\langle \\Phi(m): m \\in G(\\mbox{\\upshape Gin}_{\\mbox{\\upshape {\\tiny rl}}}(I)) \\rangle\n$.\nHence $\\mbox{\\upshape Gin}_{\\mbox{\\upshape {\\tiny rl}}}(I)=\\langle\\Phi^{-1}(\\mu) : \\mu\\in G(I)\\rangle$, that is,\n computing the revlex Gin of a squarefree strongly stable ideal $I$\nsimply amounts to applying $\\Phi^{-1}$ to the minimal generators\nof $I$.\n\n(3) Our proof (as well as the original proof in \\cite{AHH})\nof the equation $\\Phi(\\mbox{\\upshape Gin}_{\\mbox{\\upshape {\\tiny rl}}}(I))=I$ for a squarefree strongly\nstable ideal $I$ works only over a field ${\\bf k}$ of characteristic zero.\nWe however do not know of any counterexamples in the case of a field\nof positive characteristic.\n\n\n\n\\section{Combinatorics of USLIs, almost USLIs, and lex Gins} \n \\label{USLI_section}\nIn this section we introduce and study \nthe class of universal squarefree lexsegment ideals (USLIs)\nand the class of almost USLIs. These notions turn out to be crucial \nin the proof of Theorem \\ref{Thm2}.\nTo allow for infinitely generated ideals \n(as we need in the following section) we \nconsider the system of rings \n$S_{[n]}$, $n\\in\\mathbb{N}$,\nendowed with natural embeddings $S_{[n]}\\subseteq S_{[m]}$ for $m\\geq n$, and\nprovide definitions suitable\nfor the direct limit \nring $S=\\lim_{n\\rightarrow\\infty}S_{[n]}={\\bf k}[x_i : i\\in\\mathbb{N}]$.\n\n\nRecall that a squarefree monomial ideal \n$I\\subset S$ ($I\\subset S_{[n]}$, respectively) is a \n{\\em squarefree lexsegment ideal} of $S$ ($S_{[n]}$, respectively)\n if for every\nmonomial $m\\in I$ and every\nsquarefree monomial $m'\\in S$ ($m'\\in S_{[n]}$, respectively)\nsuch that $\\deg(m')=\\deg(m)$ and $m'>_{\\mbox{\\upshape {\\tiny lex}}} m$, \n$m'$ is an element of $I$ as well.\n\\begin{definition} \\label{USLI}\nAn ideal $L$ of $S$ (or of $S_{[n]}$)\nis\na {\\em universal squarefree lexsegment ideal} (abbreviated USLI)\nif it is finitely generated in each degree and $LS$ is\na squarefree lexsegment ideal of $S$. \nEquivalently, an ideal \n$L=L(k_\\bullet)$ (here $k_\\bullet=\\{k_i\\}_{i\\in\\mathbb{N}}$ is a \nsequence of nonnegative integers) is a \nUSLI with $k_i$ minimal generators\nof degree $i$ (for $i\\in\\mathbb{N}$) if and only if\n the set of minimal generators of $L$, \n$G(L)$, is given by\n$$G(L)=\n\\bigcup_{r=1}^{\\infty}\\left\\{\n(\\prod_{j=1}^{r-1} x_{R_j})\\cdot x_l \\;:\\; R_{r-1}+1 \n\\leq l \\leq R_r-1\\right\\}, \\mbox{ where } R_j=j+\\sum_{i=1}^j k_i. \n \n$$\n\\end{definition}\n\nThe easiest way to verify the description of the set\n$G(L)=\\{m_1 >_{\\mbox{\\upshape {\\tiny lex}}} m_2 >_{\\mbox{\\upshape {\\tiny lex}}} \\cdots \\ >_{\\mbox{\\upshape {\\tiny lex}}} m_s >_{\\mbox{\\upshape {\\tiny lex}}} \\cdots \\}$ \nof a USLI $L$ is by induction on $s$. 
Indeed, if $m_1, \\cdots, m_s$ satisfy\nthe above description and\n$m_s=(\\prod_{j=1}^{r-1} x_{R_j})\\cdot x_l$, \nthen there are two possibilities for\n$m_{s+1}$: either $\\deg(m_{s+1})=\\deg(m_s)=r$ (equivalently, $l<R_r-1$), or\n$\\deg(m_{s+1})=r'>r$ (equivalently, $l=R_r-1$ and $k_i=0$ for all $r<i<r'$).\nIn the former case, since $m_s>_{\\mbox{\\upshape {\\tiny lex}}} m_{s+1}$ and since $m_s$ is the \nimmediate lex-predecessor of\n$m':=(\\prod_{j=1}^{r-1} x_{R_j})\\cdot x_{l+1}$,\nit follows that\n$m'\\geq_{\\mbox{\\upshape {\\tiny lex}}} m_{s+1}\\in L$ \nwhich together\nwith $L$ being a USLI implies that \n$m'\\in L$. Since $m'$ is not divisible by any of $m_1, \\cdots, m_s$,\nthis yields\n$m_{s+1}=m'$. The treatment\nof the latter case is similar: just observe that every squarefree monomial\nof degree $r'$ that is lex-smaller than\n $m':=(\\prod_{j=1}^{r-1} x_{R_j})\\cdot (\\prod_{j=1}^{r'-r+1} x_{l+j})=\n (\\prod_{j=1}^{r'-1} x_{R_j})\\cdot x_{R_{r'-1}+1}$ is divisible by at least\none of\n$m_1, \\ldots, m_s$ and hence is in $L-G(L)$, while $m'$ is not divisible\nby any of $m_1, \\cdots, m_s$.\n\n\\begin{example} \\quad\n\\begin{enumerate} \n\\item\n The ideal $\\langle x_1x_2, x_1x_3,\n x_2x_3\\rangle$ (the Stanley-Reisner ideal of three isolated points)\n is a lexsegment in $S_{[3]}$, but is not a lexsegment in $S$, and\n hence is not a USLI. \n\n\\item\n The ideal $I = \\langle x_1x_2, x_1x_3,\n x_1x_4x_5x_6x_7\\rangle$ is the USLI with $k_1 = 0, k_2 = 2, k_3 =\n k_4 = 0, k_5 = 1$ and $k_i = 0$ for all $i > 5$. In this example,\n check that $R_1 = 1, R_2 = 4, R_3 = 5, R_4 = 6$ and $R_5 = 8$. \n\\end{enumerate}\n\\end{example}\n\nNote that every USLI is a squarefree strongly stable ideal, and hence\nis the Stanley-Reisner ideal of a shifted (possibly infinite)\nsimplicial complex (we refer to such a complex as a {\\em USLI complex}).\nAll complexes considered in this section are assumed to be finite.\n\nThe following lemma describes certain combinatorial properties of \nUSLI complexes. This lemma together with\n Lemmas \\ref{main_lemma} and \\ref{Pardue_lemma}\nbelow provides a key step in the proof of \nTheorem \\ref{Thm2}.\n\\begin{lemma} \\label{comb_USLI}\nLet $\\Gamma$ be a USLI complex on the vertex set $[n]$\nwith $I_\\Gamma=L(k_\\bullet)$. \n\\begin{enumerate}\n\\item If $I_\\Gamma\\neq 0$ and $k_d$ is the last nonzero entry\nin the sequence $k_\\bullet$, then $\\Gamma$ has exactly $d$ facets. \nThey are given by\n$$F_i=\\left\\{\\begin{array}{ll}\n \\{R_j : 1\\leq j \\leq i-1\\}\\cup [R_i+1, n] & \\mbox{ if $1\\leq\n i\\leq d-1$,}\\\\ \n \\{R_1, \\ldots, R_{d-1}\\}\\cup [R_d, n] & \\mbox{ if $i=d$.}\n\\end{array}\\right.\n$$\n\\item If $\\Gamma'$ is a shifted complex on $[n]$ such that \n$f(\\Gamma)=f(\\Gamma')$, then $\\Gamma=\\Gamma'$. (In other words,\nevery USLI complex is the only shifted complex in its $f$-class.)\n\\end{enumerate}\n\\end{lemma} \n\\smallskip\\noindent {\\it Proof: \\ } We verify part (1) by induction on $n+d+\\sum k_i$. The assertion\nclearly holds if $d=1$ or if $\\sum k_i=1$. 
For instance, if $d=1$ and $k_1=n$\n(equivalently, $R_1=n+1$), then $F_1=[n+1, n]=\\emptyset$ is the only \nfacet of $\\Gamma$.\n\nNote that $R_d$\nis the index of the first variable that does\nnot divide any of the minimal generators of $I_\\Gamma$.\nThus if $R_d\\leq n$, then $\\Gamma=\\mbox{\\upshape lk}\\,_\\Gamma(n)\\star\\{n\\}$, and we are done\nby applying induction hypothesis to the USLI complex $\\mbox{\\upshape lk}\\,_\\Gamma(n)$.\nSo assume that $R_d=n+1$.\nThen $\\mbox{\\upshape lk}\\,_\\Gamma(n)$ and $\\mbox{\\upshape ast}\\,_\\Gamma(n)$\nare easily seen to be the USLI complexes on the vertex set $[n-1]$\nwhose Stanley-Reisner ideals are given\nby $L_1=L(k_1, \\ldots, k_{d-2}, k_{d-1}+1)$ and \n$L_2=L(k_1, \\ldots, k_{d-1}, k_d-1)$,\nrespectively. \nHence by induction hypothesis the complex\n$\\mbox{\\upshape lk}\\,_\\Gamma(n)\\star\\{n\\}$ has exactly $d-1$ facets, namely\nthe sets $F_1, \\ldots, F_{d-1}$ from the list above.\nNow if $k_d>1$, then by induction hypothesis\nthe facets of $\\mbox{\\upshape ast}\\,_\\Gamma(n)$ are the sets $F_1-\\{n\\}, \\ldots,\nF_{d-1}-\\{n\\}, F_d$. Since\n$\\Gamma= (\\mbox{\\upshape lk}\\,_\\Gamma(n)\\star\\{n\\}) \\cup \\mbox{\\upshape ast}\\,_\\Gamma(n)$,\nit follows that $\\max(\\Gamma)=\\{F_1, \\ldots, F_d\\}$.\nSimilarly, if $k_d=1$ and $k_j$ is the last nonzero\nentry in the sequence $(k_1, \\ldots, k_{d-1})$, then \n the facets of $\\mbox{\\upshape ast}\\,_\\Gamma(n)$ are the sets $F_1-\\{n\\}, \\ldots,\nF_{j-1}-\\{n\\}, F_d$, and the result follows in this case as well.\n\nTo prove part (2) we induct on $n$. The assertion is obvious for $n=1$.\nFor $n>1$ we consider two cases.\n\n{\\bf Case 1:} $R_d\\leq n$. In this case \n$\\Gamma=\\mbox{\\upshape lk}\\,_\\Gamma(n)\\star\\{n\\}$, so $\\beta_i(\\Gamma)=0$ for all $i$.\nSince among all squarefree strongly stable\nideals with the same Hilbert function\nthe squarefree lexsegment ideal has the largest \nalgebraic Betti numbers \\cite[Thm.~4.4]{AHHlex}, \nand since by Hochster's formula \\cite{Hoc}, \n$\\beta_{n-i-1}(\\Lambda)=\\beta_{i-1,n}(I_\\Lambda)$ \nfor any simplicial complex $\\Lambda$ on the vertex set $[n]$,\n it follows that $\\beta_i(\\Gamma')\\leq\\beta_i(\\Gamma)=0$,\nand so $\\beta_i(\\Gamma')=0$ for all $i$. Since $\\Gamma'$ is shifted,\nLemma \\ref{betti} implies that all facets of $\\Gamma'$ contain $n$.\nThus $\\Gamma'=\\mbox{\\upshape lk}\\,_{\\Gamma'}(n)\\star\\{n\\}$, and the assertion follows\nfrom induction hypothesis applied to \n$\\mbox{\\upshape lk}\\,_\\Gamma(n)$ and $\\mbox{\\upshape lk}\\,_{\\Gamma'}(n)$.\n\n{\\bf Case 2:} $R_d=n+1$. In this case all facets of $\\Gamma$ but $F_d$ \ncontain vertex $n$ (this follows from part (1) of the Lemma), and we infer\nfrom Lemma \\ref{betti} that\n$$\\beta_i(\\Gamma)=\\left\\{\\begin{array}{ll} 0, \\mbox{ if $i\\neq d-2$} \\\\\n 1, \\mbox{ if $i= d-2$.} \\\\\n \\end{array}\n \\right.\n$$\nRecall the Euler-Poincar\\'e formula asserting that for any simplicial\ncomplex $\\Lambda$, \n$$\\sum_{j\\geq -1}(-1)^j f_j(\\Lambda) \n= \\sum_{j\\geq -1}(-1)^j \\beta_j(\\Lambda)\n=:\\widetilde{\\chi}(\\Lambda).$$\nTherefore, $\\widetilde{\\chi}(\\Gamma')=\\sum_{j\\geq -1}(-1)^j f_j(\\Gamma')=\n\\sum_{j\\geq -1}(-1)^j f_j(\\Gamma)=\\widetilde{\\chi}(\\Gamma)=(-1)^{d-2}$, and\nhence not all Betti numbers of $\\Gamma'$ vanish. The\nsame reasoning as in Case 1 then shows that \n$\\beta_i(\\Gamma')=\\beta_i(\\Gamma)$ for all $i$. 
Applying\nLemma \\ref{betti} once again, we obtain that \n$\\Gamma'=(\\mbox{\\upshape lk}\\,_{\\Gamma'}(n)\\star\\{n\\})\\cup\\{ F'\\}$, where $|F'|=d-1$\nand $F'$\nis the only facet of $\\Gamma'$ that does not contain $n$. Thus \n$f(\\mbox{\\upshape lk}\\,_{\\Gamma}(n))= f(\\mbox{\\upshape lk}\\,_{\\Gamma'}(n))$ and\n $f(\\mbox{\\upshape ast}\\,_{\\Gamma}(n))= f(\\mbox{\\upshape ast}\\,_{\\Gamma'}(n))$, and so\n$\\mbox{\\upshape lk}\\,_{\\Gamma}(n)=\\mbox{\\upshape lk}\\,_{\\Gamma'}(n)$ and \n$\\mbox{\\upshape ast}\\,_{\\Gamma}(n)=\\mbox{\\upshape ast}\\,_{\\Gamma'}(n)$ (by induction hypothesis),\nyielding that $\\Gamma=\\Gamma'$.\n\\hfill$\\square$\\medskip\n\nWe now turn to the class of {\\em almost USLIs}. \n(Recall our convention that lower degree monomials are \nlex-larger than higher degree monomials.)\n\n\n\\begin{definition}\nLet $I\\subset S$ (or $I\\subset S_{[n]}$)\nbe a squarefree strongly stable monomial ideal with \n$G(I)=\\{m_1>_{\\mbox{\\upshape {\\tiny lex}}} \\ldots >_{\\mbox{\\upshape {\\tiny lex}}} m_l>_{\\mbox{\\upshape {\\tiny lex}}} m_{l+1} \\}$. \nWe say that $I$ is {\\em an almost USLI}\nif $I$ is not a USLI, but $L=\\langle m_1, \\ldots, m_l\\rangle$ is a USLI.\nWe say that a simplicial complex $\\Gamma$ is {\\em an almost USLI complex} \nif $I_\\Gamma$ is an almost USLI.\n\\end{definition}\n\nAs we will see in the next section (see also Lemma \\ref{Pardue_lemma} below),\n what makes almost USLI complexes noninvariant under\nlex shifting is the following combinatorial property. (We recall that the \n{\\em regularity} of a finitely generated stable monomial ideal $I$, $\\mbox{\\upshape reg}(I)$,\nis the maximal degree of its minimal generators.)\n\n\\begin{lemma} \\label{main_lemma}\nLet $\\Gamma$ be an almost USLI complex.\nThen $|\\max(\\Gamma)|>\\mbox{\\upshape reg}(I_\\Gamma)$.\n\\end{lemma}\n\n\\smallskip\\noindent {\\it Proof: \\ } \nAssume $\\Gamma$ is a simplicial complex on $[n]$\nwith $G(I_\\Gamma)=\\{m_1>_{\\mbox{\\upshape {\\tiny lex}}}\\ldots>_{\\mbox{\\upshape {\\tiny lex}}}m_l>_{\\mbox{\\upshape {\\tiny lex}}}m_{l+1}\\}$.\nWe have to show that $|\\max(\\Gamma)|>\\deg(m_{l+1})=:d$.\nWe verify this by induction on $d$. To simplify the notation\n assume without loss of generality that every singleton \n$\\{i\\}\\subset[n]$ is a vertex of $\\Gamma$\n(equivalently, $I_\\Gamma$ has no generators of degree 1).\nIf there are generators of degree 1 then the proof given below can\nbe modified by letting the index $R_1$ play the role of the index $1$. \nAs $I_\\Gamma$ is an almost USLI, and so\n$\\langle m_1, \\ldots, m_l\\rangle$ is a USLI,\nthis leaves two possible cases:\n\n{\\bf Case 1:}\n{\\em $m_1, \\ldots, m_l$ are divisible by $x_1$, but\n$m_{l+1}$ is not divisible by $x_1$.}\nSince $I_\\Gamma$ is squarefree strongly stable, it follows that \n$m_{l+1}=\\prod_{j=2}^{d+1}x_j$. In this case each set $F_i=[n]-\\{1, i\\}$,\n$i=2, \\ldots, d+1$, is a facet of $\\Gamma$. \n(Indeed the product $\\prod\\{x_j : j\\in F_i\\}$\n is not divisible by $m_{l+1}$,\nand it is also not divisible by $x_1$, and hence\nby $m_1, \\ldots, m_{l}$, implying that $F_i$ is a face. 
To show that $F_i$\nis a maximal face observe that \n $F_i\\cup \\{i\\}$ \ncontains the support of $m_{l+1}$, and hence is not a face,\nbut then shiftedness of $\\Gamma$ implies that\nneither is $F_i\\cup\\{1\\}$.)\nSince there also should be a facet containing $1$, we conclude\nthat $\\max(\\Gamma)\\geq d+1>\\deg(m_{l+1})$, \ncompleting the proof of this case.\n\n{\\bf Case 2:} \n{\\em All minimal generators of $I$ are divisible by $x_1$.}\nIn this case consider an almost USLI\n$I_\\Gamma':=\\langle x_1, m_1\/x_1, \\ldots, m_{l+1}\/x_1 \\rangle$.\n By induction hypothesis $\\Gamma'$\nhas $s>\\deg(m_{l+1})-1$ facets which we denote by $F_1, \\ldots, F_s$.\nOne easily verifies that\n$\\max(\\Gamma)=\\left\\{\\{1\\}\\cup F_1, \\ldots, \\{1\\}\\cup F_s, [2,n]\\right\\},\n$\nand so $|\\max(\\Gamma)|=s+1>\\deg(m_{l+1})$.\n\\hfill$\\square$\\medskip\n\n\nWe close this section with an algebraic lemma that relates regularity of\n$\\mbox{\\upshape Gin}_{\\mbox{\\upshape {\\tiny lex}}}(I_\\Gamma)$ to the number of facets of $\\Gamma$ (for an arbitrary\ncomplex $\\Gamma$).\n\n\\begin{lemma} \\label{Pardue_lemma}\nFor a (finite) simplicial complex $\\Gamma$, \n$\\mbox{\\upshape reg}(\\mbox{\\upshape Gin}_{\\mbox{\\upshape {\\tiny lex}}}(I_{\\Gamma}))\\geq |\\max(\\Gamma)|$.\n\\end{lemma}\n\\smallskip\\noindent {\\it Proof: \\ }\nThis fact is a corollary of \\cite[Lemma 23]{Pardue}\napplied to squarefree (and hence radical) ideal $I_\\Gamma\\in S_{[n]}$. \nFor $\\sigma\\subseteq[n]$, we denote by $P_\\sigma$ the (prime)\nideal in $S_{[n]}$ generated by $\\{x_j : j\\notin\\sigma\\}$. It is well known\nthat $I_\\Gamma$ has the following prime decomposition:\n$\nI_\\Gamma=\\cap_{\\sigma\\in\\max(\\Gamma)} P_\\sigma.\n$\n Thus the variety of $I_\\Gamma$, $\\mathcal{V}(I_\\Gamma)$,\nis the union (over $\\sigma\\in\\max(\\Gamma)$)\nof the irreducible subvarieties $\\mathcal{V}(P_\\sigma)$.\nEach such subvariety is a \nlinear subspace of ${\\bf k}^n$ of codimension $n-|\\sigma|$.\n\\cite[Lemma 23]{Pardue} then implies that the monomial $m:=\\prod x_i^{r_i}$,\nwhere $r_i=|\\{\\sigma\\in\\max(\\Gamma): |\\sigma|=n-i\\}|$,\nis a minimal generator of $\\mbox{\\upshape Gin}_{\\mbox{\\upshape {\\tiny lex}}}(I_\\Gamma)$.\nHence $\\mbox{\\upshape reg}(\\mbox{\\upshape Gin}_{\\mbox{\\upshape {\\tiny lex}}}(I_\\Gamma))\\geq \\deg(m)=|\\max(\\Gamma)|$.\n\\hfill$\\square$\\medskip\n\n\n\n\n\n\n\\section{Lex shifting, $B$-numbers and the limit complex} \n \\label{infinite_section}\nIn this section after defining the notion of\nlexicographic shifting and the notion of $B$-numbers \n(a certain analog of the Hilbert function) we prove Theorem~\\ref{Thm2}. \nWe remark that extending the notion of algebraic shifting to an arbitrary term \norder\n$\\succ$ is not entirely automatic since the $\\Phi$-image of the set of\nminimal generators of $\\mbox{\\upshape Gin}_{\\succ}(I_\\Gamma)\\subset S_{[n]}$, \n$G(\\mbox{\\upshape Gin}_{\\succ}(I_\\Gamma))$, may not be a subset of $S_{[n]}$.\n This however can be easily corrected if one considers the system of rings \n$S_{[n]}$, $n\\in\\mathbb{N}$,\nendowed with natural embeddings $S_{[n]}\\subseteq S_{[m]}$ \nfor $m\\geq n$, and makes \nall the computations in the direct limit ring \n$S=\\lim_{n\\rightarrow\\infty}S_{[n]}={\\bf k}[x_i : i\\in\\mathbb{N}]$. 
This is the approach\nwe adopt here.\n We work with the class of monomial ideals $I\\subset S$\nfinitely generated in each degree.\nThroughout this section we use the graded \nlexicographic term order on $S$.\n\n\\begin{definition} \\label{gin_def}\nLet $I$ be a monomial ideal of $S$ that is finitely generated\nin each degree.\nDefine \n$$\\mbox{\\upshape Gin}_{\\mbox{\\upshape {\\tiny lex}}}(I):=\\lim_{n\\rightarrow\\infty}\\,\n\\left(\\mbox{\\upshape Gin}_{\\mbox{\\upshape {\\tiny lex}}}(I\\cap S_{[n]})\\right)S,\n$$ \nwhere we consider $I\\cap S_{[n]}$ as an ideal of $S_{[n]}$.\n\\end{definition}\nSince the $d$-th component of $\\mbox{\\upshape Gin}_{\\mbox{\\upshape {\\tiny lex}}}(I\\cap S_{[n]})$ depends only on the\n$d$-th component of $I\\cap S_{[n]}$,\n or equivalently on the minimal generators of \n$I\\cap S_{[n]}$ of degree $\\leq d$,\nLemma \\ref{cone}(4) implies that \n$\\mbox{\\upshape Gin}_{\\mbox{\\upshape {\\tiny lex}}}(I)$ is well-defined and that for every\n$d$ there is $n(d)$ such that\n$(\\mbox{\\upshape Gin}_{\\mbox{\\upshape {\\tiny lex}}}I)_d=((\\mbox{\\upshape Gin}_{\\mbox{\\upshape {\\tiny lex}}}(I\\cap S_{[n]}))S)_d$ \nfor all $n\\geq n(d)$.\nThus $\\mbox{\\upshape Gin}_{\\mbox{\\upshape {\\tiny lex}}}(I)$ is a monomial ideal finitely generated\nin each degree. (It is finitely generated if $I$ is.)\n Moreover, it follows from Lemma \\ref{cone}(1) that\n$\\mbox{\\upshape Gin}_{\\mbox{\\upshape {\\tiny lex}}}(I)$ is a strongly stable ideal. \n\n\nRecall that the squarefree operation $\\Phi$ \ntakes monomials of $S$ to squarefree\nmonomials of $S$. \nIf $I\\subset S$ is a monomial ideal finitely generated in each degree,\nwe define $\\Phi(I):=\\langle \\Phi(m) : m\\in G(I)\\rangle$, where $G(I)$ is\nthe set of minimal generators of $I$.\n\\begin{definition}\nLet $I$ be a homogeneous ideal of $S$ that is finitely generated\nin each degree. The {\\em lexicographic shifting} \nof $I$ is the squarefree strongly stable ideal \n$\\Delta_{\\mbox{\\upshape {\\tiny lex}}}(I)=\\Phi(\\mbox{\\upshape Gin}_{\\mbox{\\upshape {\\tiny lex}}}(I))$. \nThe {\\em $i$-th lexicographic shifting} of $I$\nis the ideal $\\Delta_{\\mbox{\\upshape {\\tiny lex}}}^i(I)$, where $\\Delta_{\\mbox{\\upshape {\\tiny lex}}}^i$ stands \nfor $i$ successive applications of $\\Delta_{\\mbox{\\upshape {\\tiny lex}}}$. \nWe also define the {\\em limit ideal}\n$\\overline{\\Delta}(I):=\\lim_{k\\rightarrow\\infty}\\Delta_{\\mbox{\\upshape {\\tiny lex}}}^k(I)$.\n\\end{definition}\n\nThe rest of the section is devoted to the proof of Theorem \\ref{Thm2}.\nFirst however we digress and review\nseveral facts on algebraic Betti numbers (defined by Eq.~(\\ref{alg_betti})).\n\n\n\\begin{lemma} \\label{betti-prop}\nLet $I$ and $J$ be monomial ideals of $S_{[n]}$.\n\\begin{enumerate}\n\\item If $I_j=J_j$ for all $0\\leq j\\leq j_0$, then \n$\\beta_{i,j}(I)=\\beta_{i,j}(J)$ for all $i$ and all $j\\leq j_0$. \n\\item The Betti numbers of $I\\subset S_{[n]}$ coincide with those\nof $IS_{[n+1]}\\subset S_{[n+1]}$, that is,\n $\\beta_{i,j}(I)=\\beta_{i,j}(IS_{[n+1]})$ for all $i, j$.\n\\end{enumerate}\n\\end{lemma}\n\\smallskip\\noindent {\\it Proof: \\ }\nPart (1) follows from the standard facts that\n$$\\beta_{i,j}(I)=\n\\dim_{\\bf k} \\mbox{\\upshape Tor}_i^{S_{[n]}}({\\bf k}, I)_{j}=\n\\dim_{\\bf k} \\mbox{\\upshape Tor}_i^{S_{[n]}}(I, {\\bf k})_{j},$$ \nwhere we identify ${\\bf k}$ with the $S_{[n]}$-module \n$S_{[n]}\/\\langle x_1, \\ldots, x_n\\rangle$. 
\nFor part (2) note that if $\\mathbb{F}$ is\nthe free minimal \nresolution of $I$ over $S_{[n]}$, then $\\mathbb{F}\\otimes_{S_{[n]}} S_{[n+1]}$ \nis the free minimal resolution of $IS_{[n+1]}$ over $S_{[n+1]}$,\nyielding the lemma.\n\\hfill$\\square$\\medskip\n\nThe above properties allow to extend the definition\nof the Betti numbers to the class of monomial ideals of $S$\nthat are finitely generated in each degree.\n\n\\begin{definition} \\label{betti_def}\nLet $I\\subset S$ be a monomial ideal finitely generated in each\ndegree.\n Define \n$$\\beta_{i,j}(I):=\n\\lim_{n\\rightarrow\\infty}\\beta_{i,j}(I\\cap S_{[n]}) \\quad \\mbox{for all } \ni, j\\geq0,\n$$\nwhere we consider $I\\cap S_{[n]}$ as an ideal of $S_{[n]}$.\n\\end{definition}\nWe remark that since $I$ is finitely generated in each degree,\nfor a fixed $j_0$\nthere exists $n_0$ such that $(I\\cap S_{[n+1]})_j=((I\\cap S_{[n]})S_{[n+1]})_j$\nfor all $0\\leq j \\leq j_0$ and $n\\geq n_0$.\n Hence it follows from Lemma \\ref{betti-prop}\nthat (for a fixed $i$)\nthe sequence $\\{\\beta_{i,j_0}(I\\cap S_{[n]})\\}_{n\\in\\mathbb{N}}$\nis a constant for indices starting with $n_0$, and thus\n $\\beta_{i,j_0}(I)$ is well-defined. \n\n\nThe Betti numbers of strongly stable ideals (of $S_{[n]}$) were computed by\nEliahou and Kervaire \\cite{ElKer}, and the analog of this\nformula for squarefree strongly stable ideals (of $S_{[n]}$) was established\nby Aramova, Herzog, and Hibi \\cite{AHHlex}. Definition \\ref{betti_def}\nallows to state these results as follows. (For a monomial $u$\ndefine $m(u):=\\max\\{i : x_i|u\\}$.)\n\\begin{lemma} \\label{EK}\nLet $I\\subset S$ be a monomial ideal finitely generated in each degree, \nlet $G(I)$ denote its set of minimal generators, and let \n$G(I)_j=\\{u\\in G(I): \\deg u=j\\}$. \n\\begin{enumerate}\n\\item If $I$ is strongly stable, then\n$\\beta_{i, i+j}(I)=\\sum_{u\\in G(I)_j} {m(u)-1 \\choose i}$;\n\\item If $I$ is squarefree strongly stable, then\n$\\beta_{i, i+j}(I)=\\sum_{u\\in G(I)_j} {m(u)-j \\choose i}$.\nIn particular, if $I=L(k_\\bullet)$ is a USLI, then\n$\\beta_{i, i+j}(I)=\\sum_{l=1}^{k_j}{k_1+\\ldots+k_{j-1}+l-1 \\choose i}$.\n\\end{enumerate}\n\\end{lemma}\n\nUsing the notion of the Betti numbers, one can define \na certain analog of the Hilbert function ---\n the $B$-numbers --- of a monomial ideal $I$ of $S$ that is finitely generated \nin each degree.\n\n\\begin{definition} \\label{B-definition}\nLet $I\\subset S$ (or $I\\subset S_{[n]}$)\nbe a monomial ideal finitely generated in each degree, and \nlet $\\beta_{i,j}(I)$ be its graded Betti numbers. Define\n$$\nB_j(I):=\\sum_{i=0}^j (-1)^i\\beta_{i,j}(I) \\quad \\mbox{ for all }\nj\\geq 0 \\quad (\\mbox{e.g., $B_0=0$ and $B_1(I)=|G(I)_1|$}).\n$$\n The sequence \n$B(I):=\\{B_j(I): j\\geq 1\\}$ is called the {\\bf $B$-sequence} of $I$. \n\\end{definition}\n\\begin{remark}\nIt is well known and is easy to prove (see \\cite[Section 1B.3]{Eis2})\n that for every $n\\in\\mathbb{N}$ the polynomial\n $\\sum_j B_j(I\\cap S_{[n]})x^j$\nequals $(1-x)^n \\text{Hilb}(I\\cap S_n, x)$, where \n$\\text{Hilb}(I\\cap S_n, x)$ is the Hilbert series of $I\\cap S_{[n]}$. 
\nIn particular, if $\\Gamma$ is a $(d-1)$-dimensional\nsimplicial complex on $[n]$ and \n$I_{\\Gamma}\\subset S_{[n]}$\nis its Stanley-Reisner ideal then \n\\[\n\\frac{1-\\sum_j B_j(I_\\Gamma)x^j}{(1-x)^n} = \\text{Hilb}(S_{[n]}\/I_\\Gamma, x)\n=\\sum_{i=0}^{d} \\frac{f_{i-1}(\\Gamma)x^i}{(1-x)^i}=\n\\frac{\\sum_{i=0}^d h_i(\\Gamma)x^i}{(1-x)^{d}}, \n\\]\nwhere $\\{h_i(\\Gamma)\\}_{i=0}^d$ is the $h$-vector of $\\Gamma$ \\cite{St}.\n(Recall that\n$h_j=\\sum_{i=0}^j (-1)^{j-i}{d-i \\choose j-i}f_{i-1}$ for\n$0\\leq j \\leq d$. In particular, $h_1=f_0-d$.)\nThus \n$\\sum_j B_j(I_\\Gamma)x^j=1-(1-x)^{h_1}\\sum_i h_ix^i$\n(if one assumes that $\\{i\\}\\in\\Gamma$ for every $i\\in[n]$), and so\nthe $h$-vector of $\\Gamma$ \ndefines the $B$-sequence of $I_\\Gamma$.\n\\end{remark}\n\nThe following lemma provides the\nanalog of the ``$f(\\Gamma)=f(\\Delta_{\\mbox{\\upshape {\\tiny rl}}}(\\Gamma))$-property\".\n\\begin{lemma} \\label{cone2}\nIf $I\\subset S$ is a monomial ideal that is\n finitely generated in each degree, then \nthe ideals $I$ and $\\Delta_{\\mbox{\\upshape {\\tiny lex}}}(I)$ have the same $B$-sequence.\nIn particular, if $I$ is finitely generated, then for a sufficiently large $n$,\nthe ideals $I\\cap S_{[n]}$ and $\\Delta_{\\mbox{\\upshape {\\tiny lex}}}(I)\\cap S_{[n]}$ \nhave the same Hilbert function\n(in $S_{[n]}$).\n\\end{lemma}\n\\smallskip\\noindent {\\it Proof: \\ }\nSince for every $n\\in\\mathbb{N}$ the ideals\n$I\\cap S_{[n]}$ and $\\mbox{\\upshape Gin}_{\\mbox{\\upshape {\\tiny lex}}}(I\\cap S_{[n]})$ have the same Hilbert function\n(in $S_{[n]}$) (see Lemma \\ref{cone}), and since \n$B_i(I)=\\lim_{n\\rightarrow\\infty} B_i(I\\cap S_{[n]})$, \nthe above remark implies that $B(I)=B(\\mbox{\\upshape Gin}_{\\mbox{\\upshape {\\tiny lex}}}(I))$. Finally,\n since $\\mbox{\\upshape Gin}_{\\mbox{\\upshape {\\tiny lex}}}(I)$\nis a strongly stable ideal (Lemma \\ref{cone}), we infer (by comparing\nthe two formulas of Lemma \\ref{EK})\nthat \n$\\beta_{i,j}(\\mbox{\\upshape Gin}_{\\mbox{\\upshape {\\tiny lex}}}(I))=\\beta_{i,j}(\\Phi\\mbox{\\upshape Gin}_{\\mbox{\\upshape {\\tiny lex}}}(I))=\\beta_{i,j}\n(\\Delta_{\\mbox{\\upshape {\\tiny lex}}}(I))$ for all $i, j$,\nand so $B(\\mbox{\\upshape Gin}_{\\mbox{\\upshape {\\tiny lex}}}(I))=B(\\Delta_{\\mbox{\\upshape {\\tiny lex}}}(I))$.\nThe result follows. \\hfill$\\square$\\medskip\n\n\n\n\nNow we are ready to verify the first part of Theorem \\ref{Thm2}. In fact\nwe prove the following slightly more general result.\n\n\\begin{theorem} \\label{main}\nLet $I$ be a squarefree strongly stable ideal of $S$ finitely \ngenerated in each degree. \nThen $\\Delta_{\\mbox{\\upshape {\\tiny lex}}}(I)>_{\\mbox{\\upshape {\\tiny lex}}} I$ unless $I$ is a USLI in which case\n$\\Delta_{\\mbox{\\upshape {\\tiny lex}}}(I)= I$. \nMoreover if $I$ is finitely\ngenerated and is not a USLI, then \nall ideals in the sequence\n$\\{\\Delta^i_{\\mbox{\\upshape {\\tiny lex}}}(I)\\}_{i\\geq 0}$ are distinct. \n\\end{theorem}\n\n\n\\smallskip\\noindent {\\it Proof: \\ } There are several possible cases.\n\n{\\bf Case 1:} $I=L(k_\\bullet)$ is a USLI. To prove that \n$\\Delta_{\\mbox{\\upshape {\\tiny lex}}}(I)=I$,\nit suffices to show that for every $d\\geq 1$, \n$\\Delta_{\\mbox{\\upshape {\\tiny lex}}}(L(k^{(d)})= L(k^{(d)})$, where \n$k^{(d)}:=\\{k_1, \\ldots, k_d, 0,0,\\ldots\\}$ is the sequence $k_\\bullet$\ntruncated at $k_d$.\nBut this is immediate from Lemmas \\ref{comb_USLI}(2) and \\ref{cone2}. 
\nIndeed, for $n=n(d)$ sufficiently large\nthe simplicial complexes on the vertex set $[n]$ whose Stanley-Reisner\nideals are given by $\\Delta_{\\mbox{\\upshape {\\tiny lex}}}(L(k^{(d)})\\cap S_{[n]}$ and \n$L(k^{(d)})\\cap S_{[n]}$, respectively,\nare shifted and have the same $f$-numbers. \nSince the second complex is a USLI complex,\nit follows that those complexes, and hence their ideals, coincide. \n \n{\\bf Case 2:} $I=\\langle m_1, \\ldots, m_l, m_{l+1} \\rangle$\n is an almost USLI.\nLet $n$ be the largest index of a variable appearing in $\\prod_{i=1}^{l+1}m_i$,\nand let $\\Gamma$ be a simplicial complex on $[n]$ with \n$I_\\Gamma = I\\cap S_{[n]}$.\nThen\n$$\\mbox{\\upshape reg}(\\Delta_{\\mbox{\\upshape {\\tiny lex}}}(I))=\\mbox{\\upshape reg}(\\mbox{\\upshape Gin}_{\\mbox{\\upshape {\\tiny lex}}}(I_\\Gamma)) \n\\stackrel{\\mbox{\\tiny {Lemma \\ref{Pardue_lemma}}}}{\\geq} |\\max(\\Gamma)|\n\\stackrel{\\mbox{\\tiny {Lemma \\ref{main_lemma}}}}{>}\\mbox{\\upshape reg}(I_\\Gamma)=\\mbox{\\upshape reg}(I),\n$$\nyielding that $\\Delta_{\\mbox{\\upshape {\\tiny lex}}}(I)\\neq I$ in this case.\nMoreover, since by Eq.~(\\ref{P5}), \n$\\Phi(\\mbox{\\upshape Gin}_{\\mbox{\\upshape {\\tiny rl}}}(I_\\Gamma))=I_\\Gamma$ and since $\\Phi$ is a \nlex-order preserving map,\nwe infer from Lemma \\ref{gin>gin} that \n$\\Phi(\\mbox{\\upshape Gin}_{\\mbox{\\upshape {\\tiny lex}}}(I_\\Gamma))\\geq_{\\mbox{\\upshape {\\tiny lex}}} \\Phi(\\mbox{\\upshape Gin}_{\\mbox{\\upshape {\\tiny rl}}}(I_\\Gamma))\n=I_\\Gamma$,\nand hence that $\\Delta_{\\mbox{\\upshape {\\tiny lex}}}(I)>_{\\mbox{\\upshape {\\tiny lex}}} I$.\n\n{\\bf Case 3:} I is squarefree strongly stable, but is not a USLI. \nIn this case we sort $G(I)=\\{m_1, \\ldots, m_l, m_{l+1}, \\ldots\\}$ by graded lex-order\nand assume that $m_{l+1}$ is the first non-USLI generator of $I$.\nLet \n$I_1=\\langle m_1, \\ldots, m_l \\rangle$ and let \n$I_2=\\langle m_1, \\ldots, m_{l+1} \\rangle$. \nThen $I_1$ is a USLI, $I_2$ is an almost USLI, and $I_1\\subset I_2\\subseteq I$. \nHence by the previous two cases \n$I_1=\\Delta_{\\mbox{\\upshape {\\tiny lex}}}(I_1)\\subset\\Delta_{\\mbox{\\upshape {\\tiny lex}}}(I_2)$ \nand \n$\\Delta_{\\mbox{\\upshape {\\tiny lex}}}(I_2)>_{\\mbox{\\upshape {\\tiny lex}}} I_2$, and so \n there exists a monomial $m$, $m_l>_{\\mbox{\\upshape {\\tiny lex}}} m>_{\\mbox{\\upshape {\\tiny lex}}} m_{l+1}$, such that\n$m \\in G(\\Delta_{\\mbox{\\upshape {\\tiny lex}}}(I_2)) \\subseteq G(\\Delta_{\\mbox{\\upshape {\\tiny lex}}}(I))$. \nThus\n$\\Delta_{\\mbox{\\upshape {\\tiny lex}}}(I)>_{\\mbox{\\upshape {\\tiny lex}}} I$. \n\nFinally to show that for a finitely generated ideal $I$,\nall ideals in the sequence $\\{\\Delta^i_{\\mbox{\\upshape {\\tiny lex}}}(I)\\}_{i\\geq 0}$ are distinct, \nit suffices to check that none of those ideals is a USLI. \nThis is an immediate corollary of Lemmas \\ref{comb_USLI}(2) and \\ref{cone2}. \\hfill$\\square$\\medskip\n\n\n\nOur next goal is to prove the second part of Theorem \\ref{Thm2}. \nTo do that we fix a sequence of integers\n$B=\\{B_j : j\\geq 1\\}$ and study the class $\\mathcal{M}(B)$ of all monomial ideals $I\\subset S$ \nthat are finitely generated in each degree and satisfy $B(I)=B$.\n\n\\begin{lemma}\nThere is at most one USLI in the class $\\mathcal{M}(B)$.\n\\end{lemma}\n\\smallskip\\noindent {\\it Proof: \\ } \nRecall that a USLI $L=L(k_\\bullet)$ \nis uniquely defined by its $k$-sequence\n$k_\\bullet=\\{k_i : i\\geq 1\\}$, where $k_i=\\beta_{0,i}(L)=|G(L)_i|$. 
\nRecall also that \n$B(L)$ is a function of \n$k_\\bullet$ (see Lemma \\ref{EK}(2)), and so to complete the proof it suffices\nto show that this function is\none-to-one, or more precisely that\n $k_j$ is determined by\n$k_1, \\ldots, k_{j-1}, B_j$ (for every $j\\geq 1$). \nAnd indeed, \n\\begin{eqnarray*}\nk_j&=&\\beta_{0,j}(L)=B_j-\\sum_{i=1}^j (-1)^i\\beta_{i,j}(L) \n \\quad (\\mbox{by definition of } B_j)\\\\\n&=& B_j-\\sum_{i=1}^j(-1)^i \\sum_{l=1}^{k_{j-i}}{k_1+\\ldots+k_{j-i-1}+l-1 \\choose i}\n\\quad (\\mbox{by Lemma } \\ref{EK}(2)). \n\\end{eqnarray*}\n\\hfill$\\square$\\medskip\n\nNow we are ready to prove (the slightly more general \nversion of) the second part of Theorem \\ref{Thm2}.\n\\begin{theorem}\nFor every ideal $I\\in\\mathcal{M}(B)$, the limit ideal $\\overline{\\Delta}_{\\mbox{\\upshape {\\tiny lex}}}(I)$ \nis well defined and is the unique USLI of $\\mathcal{M}(B)$.\n\\end{theorem}\n\\smallskip\\noindent {\\it Proof: \\ }\nFix $I\\in \\mathcal{M}(B)$. To show that $\\overline{\\Delta}_{\\mbox{\\upshape {\\tiny lex}}}(I)$ \nis well defined, it suffices to check that for every $d\\geq 0$,\nthere exists $s=s(d)$ such that \n\\begin{equation} \\label{stab}\nG(\\Delta^{s}_{\\mbox{\\upshape {\\tiny lex}}}(I))_{\\leq d}=G(\\Delta^{s+1}_{\\mbox{\\upshape {\\tiny lex}}}(I))_{\\leq d} \n\\end{equation}\n(where $G(J)_{\\leq d}:=\\cup_{j\\leq d} G(J)_j$),\nand hence that all ideals $\\Delta^{i}_{\\mbox{\\upshape {\\tiny lex}}}(I)$, $i\\geq s$,\nhave the same $d$-th homogeneous component.\nWe verify this fact by showing that the collection of all \npossible sets of minimal generators \n\\begin{equation} \\label{finite}\n \\mathcal{G}_{\\leq d}:=\\{ G(J)_{\\leq d} : \n J\\in\\mathcal{M}(B), J \\mbox{ is squarefree strongly stable}\\} \n \\quad \\mbox{is finite}.\n\\end{equation}\n(This yields (\\ref{stab}), since all ideals $\\Delta^{i}_{\\mbox{\\upshape {\\tiny lex}}}(I)$, $i\\geq 1$,\nare squarefree strongly stable, and since \n$\\Delta^{i}_{\\mbox{\\upshape {\\tiny lex}}}(I)\\leq_{\\mbox{\\upshape {\\tiny lex}}} \\Delta^{i+1}_{\\mbox{\\upshape {\\tiny lex}}}(I)$ \nby Theorem \\ref{main}.)\nEq.~(\\ref{finite}) can be easily proved by induction.\n It clearly holds for \n$d=0$. Now if $J\\in\\mathcal{M}(B)$ is squarefree strongly stable, then\nby Lemma \\ref{EK}(2) and Definition \\ref{B-definition}, \n$$ |G(J)_d|=\\beta_{0,d}(J)=\nB_d-\\sum_{i=1}^{d}(-1)^i\\sum_{u\\in G(J)_{d-i}}{m(u)-(d-i) \\choose i},\n$$\nso assuming that the collection \n$\\mathcal{G}_{\\leq d-1}$ is finite, or equivalently that the set of integers\n$\\{m(u): u\\in G(J)_{\\leq d-1}\\in\\mathcal{G}_{\\leq d-1}\\}$ is bounded \n(say by $n(d)$), \nwe obtain that \nthere exists a constant $g(d)$ such that\n$|G(J)_d|\\leq g(d)$ for all squarefree strongly stable ideals $J\\in\\mathcal{M}(B)$.\nBut then the squarefree strongly stable property implies that\n$m(u)< n(d)+g(d)+d$ for every $u\\in G(J)_{\\leq d}\\in \\mathcal{G}_{\\leq d}$,\nand (\\ref{finite}) follows.\n\nThe second part of the statement is now immediate:\nindeed if $G(\\Delta^s(I))_{\\leq d} = G(\\Delta^{s+1}(I))_{\\leq d}$,\n then by Theorem \\ref{main}, \n$G(\\Delta^s(I))_{\\leq d}= G(\\overline{\\Delta}(I))_{\\leq d}$ \nis the set of minimal generators of a USLI.\n\\hfill$\\square$\\medskip\n\n\\section{Remarks on other term orders}\nWe close the paper by discussing several results and conjectures\nrelated to algebraic shifting with respect to arbitrary term orders. 
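As a brief aside before turning to general term orders: the correspondence between $k_\\bullet$ and $B(L)$ used in the two preceding proofs is easy to make concrete on a computer. The short Python script below is only an illustrative sketch and is not part of the formal development; all function names are ours. It implements the USLI Betti numbers of Lemma \\ref{EK}(2), the $B$-numbers of Definition \\ref{B-definition}, and the recursion recovering $k_j$ from $B_j$ and $k_1, \\ldots, k_{j-1}$, and checks them on the USLI $\\langle x_1x_2, x_1x_3, x_1x_4x_5x_6x_7\\rangle$ from the example above.
\\begin{verbatim}
from math import comb

def beta(k, i, d):
    # beta_{i, i+d}(L(k)) for generator degree d (Lemma EK, part 2);
    # k[1], k[2], ... is the k-sequence, k[0] is unused.
    if d < 1 or d >= len(k) or k[d] == 0:
        return 0
    prefix = sum(k[1:d])  # k_1 + ... + k_{d-1}
    return sum(comb(prefix + l - 1, i) for l in range(1, k[d] + 1))

def B(k, j):
    # B_j(L(k)) = sum_i (-1)^i beta_{i,j}; a generator contributing to
    # total degree j in homological degree i has degree d = j - i.
    return sum((-1) ** i * beta(k, i, j - i) for i in range(j))

def k_from_B(Bs, jmax):
    # recover k_j from B_j and k_1, ..., k_{j-1}, as in the uniqueness proof
    k = [0] * (jmax + 1)
    for j in range(1, jmax + 1):
        k[j] = Bs[j] - sum((-1) ** i * beta(k, i, j - i) for i in range(1, j))
    return k

k = [0, 0, 2, 0, 0, 1]                      # k_1 = 0, k_2 = 2, k_3 = k_4 = 0, k_5 = 1
Bs = [0] + [B(k, j) for j in range(1, 9)]   # B_1, ..., B_8
assert k_from_B(Bs, 5) == k                 # the B-sequence pins down the USLI
\\end{verbatim}
With this aside complete, we return to shifting with respect to arbitrary term orders.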
\n To this end, we say that an order $\\succ$ \non monomials of $S$ is a {\\em term order} if \n$x_i\\succ x_{i+1}$ for $i\\geq 1$, \n$m\\succ m'$ as long as $\\deg(m)<\\deg(m')$,\nand the restriction of $\\succ$\n to $S_{[n]}$\nis a term order on $S_{[n]}$ for all $n\\geq 1$. In addition,\nwe restrict our discussion\nonly to those term orders on $S$ that are compatible with the squarefree\noperation $\\Phi$, that is, $\\Phi(m)\\succ\\Phi(m')$ if $m\\succ m'$.\n\nSimilarly to Definition \\ref{gin_def}, for a term order $\\succ$ on $S$ and\na homogeneous ideal $I\\subset S$ that is finitely generated in each degree,\nwe define $\\Delta_\\succ(I):=\\Phi(\\mbox{\\upshape Gin}_\\succ(I))$. Thus $\\Delta_\\succ(I)$\nis a squarefree strongly stable ideal that has the same $B$-sequence as $I$.\n(Indeed, the proof of Lemma \\ref{cone2} carries over to this more \ngeneral case.)\n\nWe say that a squarefree monomial ideal $I\\subset S$ is a {\\em US$\\succ$I}\nif for every monomial $m\\in I$ and every squarefree monomial $m'$\nsuch that\n$\\deg(m)=\\deg(m')$ and $m'\\succ m$, $m'$ is an element of $I$ as well. \nBeing US$\\succ$I implies being squarefree strongly stable.\n\nIn view of Theorems \\ref{Thm2} and \\ref{AHH}\nit is natural to ask the following:\n\\begin{enumerate}\n\\item Does $\\Delta_\\succ(I)=I$ hold for every US$\\succ$I I?\n\\item Is there a term order $\\succ$ other than the lexicographic order\nfor which the equality $\\Delta_\\succ(I)=I$ implies that $I$ is a \nUS$\\succ$I?\n\\item Is there a term order $\\succ$ other than the \nreverse lexicographic order such that the equation $\\Delta_\\succ(I)=I$ \nholds for all squarefree strongly stable ideals $I$?\n\\end{enumerate}\n\nThe next proposition answers the first question in the affirmative.\n\\begin{proposition}\nIf $I$ is a US$\\succ$I, then $\\Delta_\\succ(I)=I$\nfor every term order on $S$ that is compatible with $\\Phi$.\n\\end{proposition}\n\\smallskip\\noindent {\\it Proof: \\ } Exactly as in the proof of Theorem \\ref{main} (see the\nlast three lines of Case 2), \none can show that $\\Delta_\\succ(I)\\succeq I$. Hence either\n$\\Delta_\\succ(I)= I$, in which case we are done, or the \n$\\succ$-largest monomial, $m$, in the symmetric difference of \n$G(\\Delta_\\succ(I))$ and $G(I)$ is an element of $G(\\Delta_\\succ(I))$.\nSince $I$ is a US$\\succ$I, we obtain in the latter case that \n$G(\\Delta_\\succ(I))_i=G(I)_i$ for all $i<\\deg(m)$ and\n$$\nG(I)_{i_0}=\\{m'\\in G(\\Delta_\\succ(I))_{i_0} : m'\\succ m\\}\n\\quad \\mbox{ for } i_0=\\deg(m),\n$$\nthat is, $G(I)_{i_0}$ is a strict subset of $ G(\\Delta_\\succ(I))_{i_0}$.\nThis is however impossible, since it contradicts the fact that\nthe ideals $I$ and $\\Delta_\\succ(I)$ have the same $B$-sequence.\n\\hfill$\\square$\\medskip\n\nThe answer to the second question is negative as follows from \nthe following result.\n\n\\begin{proposition}\nIf $I$ is a USLI, then $\\Delta_\\succ(I)=I$ for all term orders $\\succ$.\n\\end{proposition}\nWe omit the proof as it is completely analogous to that of \nTheorem \\ref{main}, Case 1.\n\nWhile we do not know the answer to the third question, we believe that it\nis negative. In fact it is tempting to conjecture that the following holds.\nLet $\\succ$ be a term order on $S$ other than the (graded) \nreverse lexicographic order, and let $k\\geq 2$ be the smallest degree\non which $\\succ$ and revlex disagree. 
Write $m_i$ to denote the $i$th \nsquarefree monomial of $S$ of degree $k$ with respect to the revlex order.\n(It is a fundamental property of the revlex order that every squarefree \nmonomial of $S$ of degree $k$ is of the form $m_i$ for some finite $i$.)\n\n\\begin{conjecture}\nLet $i_0\\geq 1$ be the smallest index for which \n$I_{i_0}:=\\langle m_1, \\cdots, m_{i_0}\\rangle$ is not a US$\\succ$I. \nThen $\\Delta_{\\succ}(I_{i_0})\\neq I_{i_0}$.\n\\end{conjecture}\n\n\n\\section*{Acknowledgments} We are grateful to Aldo Conca for\nhelpful discussions and to the anonymous referees for insightful comments.\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{INTRODUCTION}\n\nMonocular Visual Odometry (MVO) is a popular method for camera pose estimation, but due to the scale ambiguity \\cite{song2015high,zhou2016reliable, wu2020eao,2009Absolute,2010scale}, the MVO system cannot provide real odometry data. Therefore, an accurate and robust scale recovery algorithm is of great significance in the application of MVO \\cite{hawkeye-zhu}.\nThe key of scale recovery is to integrate absolute reference information, such as the gravity orientation from IMU \\cite{qin2018vins} or the depth measured by Lidar \\cite{zhang2014real, li2017hybrid}. The baseline of stereo cameras can also serve as such a reference \\cite{mur2017orb}. However, these sensors are not always available in real-world applications. Moreover, a complicated sensor calibration and fusion process is needed to align the MVO system with other sensors. \n\nAnother frequently used method for removing scale ambiguity is to take as reference the height of a mounted camera above the ground plane, which remains a stable signal during the navigation of vehicles. The idea is to estimate a ratio between the real camera height and the relative one calculated from image features. The ratio then can be used to recover the real scale. The advantages of this method are significant since it does not depend on other sensors and is with high feasibility. The method is also regarded as one of the most promising solutions in this research area.\n\nPrior work like \\cite{zhou2019ground,zhou2016reliable,song2015high,1211380} typically leverages the results of feature matching to calculate the homography matrix and then decompose the matrix to estimate the parameters of the ground plane, based on which the relative camera height can be obtained. The major deficiency of this method is that the decomposition is very sensitive to noises and multiple solutions exist, which requires additional operations to eliminate the ambiguity. Some other work like \\cite{grater2015robust,1211380} chooses to directly fit the ground plane using feature points that lie on the ground, e.g., the center regions of a road. By removing low-quality image features outside the target region, the robustness is improved. However, the target region may be occluded sometimes, which interferes with the detection of image features, thus degrading the accuracy of scale recovery.\n\n\nIn recent work \\cite{yin2017scale,andraghetti2019enhancing,xue2020toward,wagstaff2020self, wang2020tartanair}, deep learning based MVO algorithms are proposed, in which the camera pose with a real scale is directly predicted by the neural network in an end-to-end manner. Such methods have received much attention in recent years, but their generalization ability across different scenarios is very limited \\cite{wang2020tartanair}. 
Some other deep learning based methods take scale recovery as an independent problem. For instance, DNet \\cite{xue2020toward} is proposed to perform ground segmentation and depth regression simultaneously. Based on the predicted depth points within the ground region, a dense geometrical constraint is then formulated to help recover the scale. In \\cite{wagstaff2020self}, a scale-recovery loss is developed based on the idea of enforcing the consistency between the known camera height and the predicted one. Constrained by this loss, the neural network can predict more accurate ego poses. Nonetheless, these methods usually require a large-scale training process, and the computational cost is prohibitively expensive.\n\nIn this paper, we propose a light-weight method for accurate and robust scale recovery based on plane geometry. The method includes an efficient Ground Point Extraction (GPE) algorithm based on Delaunay triangulation \\cite{shewchuk1996triangle} and a Ground Points Aggregation (GPA) algorithm for aggregating ground points from consecutive frames. Based on these two algorithms, a large number of high-quality ground points are selected. For scale recovery, we first formulate a least-square problem to fit the ground plane and then estimate the relative camera height to calculate the real scale. By leveraging the high-quality points and a RANSAC-based optimizer, the scale can be estimated accurately and robustly. Benefiting from the light-weight design of the algorithms, our method can achieve a 20Hz running frequency on the benchmark dataset. \n\nThe main contributions of this work are as follows:\n\\begin{itemize}\n\t\\item We propose a GPE algorithm based on Delaunay triangulation, which can accurately extract ground points. \n\t\\item We propose a GPA algorithm that can effectively aggregate local ground points and perform robust optimization of the ground plane parameters. \n\t\\item Based on the proposed algorithms, we implement a real-time MVO system with accurate and robust scale recovery, aiming to reduce scale drift and provide accurate odometry in long-distance navigations without loop closure. \n\n\\end{itemize}\n\n\n\\begin{figure*}[ht]\n\t\\centering\n\t\\includegraphics[height=6.3cm]{sys2.pdf}\n\t\\caption{Overview of the proposed system. There are two parallel threads in the system: 1) The MVO thread takes image frames as input and estimates the current camera pose, 2) The GPE-GPA thread fetches image features from the MVO thread and selects high-quality points for ground plane estimation.}\n\n\t\\label{overview}\n\t\\vspace{-2mm}\n\\end{figure*}\n\n\n\n\\section{System Overview}\nThe notations used in this paper are as follows:\n\\begin{itemize}\n\t\\item $T_t\\in{R^{4\\times4}}$ - The camera pose of image frame $I_t$ in the global frame, which is composed of a camera orientation $R_t\\in{R^{3\\times3}}$ and a translation $\\boldsymbol{t}_t\\in{R^{3\\times1}}$.\n\t\\item $T_{t,t-1}$ - The relative pose of frame $I_{t-1}$ w.r.t. 
frame $I_t$.\n\t\\item $K$ - The intrinsic matrix of a pinhole camera model.\n\t\\item $\\mathbf{x}_i, \\mathbf{u}_i$ - The 3D map point in camera frame and its corresponded 2D point on image plane after projection.\n\n\t\\item $\\mathbf{n}_t, h_t$ - The plane parameters, i.e., $\\mathbf{n}_t \\cdot \\mathbf{x}_i-h_t=\\mathbf{0}$, where $\\mathbf{n}_t$ is the normal vector and $h_t$ is the distance to the plane.\n\t\\item $h, {h}^{\\dagger}, h^*$ -- The calculated camera height form image features, the estimated camera height after scale recovery, and the real camera height.\n\\end{itemize}\n\n\n\\subsection{Problem Definition}\nGiven consecutive image frames from a calibrated monocular camera, our goal is to estimate the absolute scale of camera poses and then recover the real camera trajectory by making use of the prior known camera height $h^*$. Under scale ambiguity, the camera height $h$ calculated from image features maintains a ratio with the real one, i.e., $s=h^*\/h$. Therefore, scale recovery is essentially to compute $s$, and the key lies in the accurate estimation of the ground plane.\n\n\n\n\n\n\n\\subsection{System Architecture}\nThe proposed system in this work is shown in Fig. \\ref{overview}. There are two parallel threads in the system: The first one is the MVO thread, which takes consecutive images as input and estimates the current camera pose, e.g., the ORB-SLAM2 framework. The second thread is used to run the GPE and GPA algorithms for scale recovery. The proposed system is based on such an assumption that the ground is locally flat and can be approximated by a plane with a surface normal. The workflow of the second thread is as follows.\n\nAs shown in the red block in Fig.\\ref{overview}, for each image frame from the MVO thread, the Delaunay triangulation is first applied to segment the matched feature points into a set of triangles. Each triangle is then back-projected into the camera frame, and the associated plane parameters are also estimated. After that, several geometrical constraints are leveraged to select and then refine ground points. \n\nNote that selected ground points are not enough for an accurate estimation of the plane parameters. We thus propose the GPA algorithm to aggregate ground points from multiple frames using a sliding windows method, as shown in the orange block of Fig.\\ref{overview}. Based on the aggregated local points, a robust parameter estimation procedure is then performed to fit the ground plane. Accordingly, the relative camera height of each frame can be estimated, and the absolute camera trajectory is recovered, shown in the blue block of Fig.\\ref{overview}.\n\n\n\n\\section{Ground Plane Estimation}\n\n\\subsection{Ground Point Extraction}\nFor a given set of matched feature points $\\mathbf{u}^t_i$, $i\\in\\{1,2,\\dots,N\\}$, in the current image frame $I_t$, the Delaunay triangulation uses each of the feature points as a triangle vertex. We back-project the triangles from the image plane into the current camera frame and denote them by $\\Delta_i^t, i\\in\\{1,2,\\dots M\\}$ associated with a set of vertices $\\mathbf{x}^t_{ij},j\\in\\{1,2,3\\}$. 
The normal vector $\\mathbf{n}_i^t = ({n}_{i,x}^t, {n}_{i,y}^t, {n}_{i,z}^t)$ of each triangle can be obtained by the cross product,\n\\begin{equation}\n\t\\mathbf{n}_i^t=\\frac{(\\mathbf{x}^t_{i1}-\\mathbf{x}^t_{i2})\\times(\\mathbf{x}^t_{i1}-\\mathbf{x}^t_{i3})}{||(\\mathbf{x}^t_{i1}-\\mathbf{x}^t_{i2})\\times(\\mathbf{x}^t_{i1}-\\mathbf{x}^t_{i3})||_2},\n\t\\label{est-n}\n\\end{equation}\nwhere $\\mathbf{n}_i^t$ has an unit length. For each vertex of the triangle, the following geometrical constraint then holds:\n\\begin{equation}\n\t\\mathbf{n}^t_i \\cdot \\mathbf{x}^t_{ij}-h^t_i=0.\n\t\\label{est-h}\n\\end{equation}\n\nTherefore, $h^t_i$ can also be estimated. Here, we also add two additional constraints, i.e., ${n}_{i,y}^t>0$ and $h^t_i>0$, based on the fact that the camera is mounted on the top of the vehicle and is above the ground plane, as shown in Fig.\\ref{overview}.\n\nNote that the triangles are scattered in the whole image plane, hence we need to identify the ones located on the ground plane, named \\textit{ground triangles}, for estimating the plane parameters. Based on the fact that the normal of a ground triangles is orthogonal to camera translation $\\boldsymbol{t}_{t,t-1}$, and that the pitch angle of the camera is zero, the ground triangles can be identified by testing with the following constraints,\n\\begin{equation}\n\t\\begin{aligned}\n\t\t\\arccos(\\mathbf{n}^t_i, \\boldsymbol{t}_{t,t-1}) &= 0, \\\\\n\t\t|\\arctan(-\\frac{R_{32}}{R_{33}})| = 0,\\, &R_{33}\\neq0. \\\\\n\t\\end{aligned}\n\\end{equation}\n\nIn practice, the equality condition cannot be strictly satisfied. We thus set a tolerance value of $5^\\circ$ in the actural implementation.\n\n\nFor ground triangles that satisfy the above constraints, their vertices are categorized into a new point set $\\tilde{\\mathbf{x}}^t_{ij}, i\\in\\{1,2,\\dots K\\}, j\\in\\{1,2,3\\}, K\\small{<}M$. Since the same vertex point may be shared by multiple triangles, we also need to remove the repetitive ones from the point set. This will ensure the same contribution of each point to the ground plane estimation. \n\nThe ground points are now initially segmented out, denoted by $\\tilde{\\mathbf{x}}^t_{k} \\in \\mathcal{G}$, but there may still exist some outliers introduced by moving objects and some remote points.\nTo further improve the quality of $\\mathcal{G}$, a RANSAC-based method is leveraged to optimize $\\tilde{\\mathbf{x}}^t_{k}$, which minimizes a plane-distance error as follows,\n\\begin{equation}\n\t\\min_{\\tilde{\\mathbf{x}}^t_{g} \\in \\mathcal{G}} \\;\\sum_{g=1}^{|\\mathcal{G}|}||\\mathbf{n}^t\\cdot\\tilde{\\mathbf{x}}^t_{g}-h^t||_2.\n\t\\label{ransac}\n\\end{equation}\n\nIn the implementation, we randomly sample three points to estimate a new plane with eq. \\eqref{est-n}-\\eqref{est-h}, and then we calculate the total distance of the remaining points to the estimated plane. Such a process repeats $Z$ times, and the plane that induces the minimum total distance error is reserved. The points with a distance larger than $0.01$m to the reserved plane are then removed.\nThis is actually a more strict criterion for ground point selection. After this process, only high-quality ground points are reserved. In Alg. 
\\ref{ag1}, we present a complete procedure of the GPE algorithm, which gives more details about the proposed implementation.\n\n\n\\begin{algorithm}[t]\n\t\\caption{ Ground Point Extraction (GPE) }\n\t\\label{ag1}\n\n\t\\KwIn{ $\\mathbf{u}^t_i$ , $R_{t,t-1}$, $\\boldsymbol{t}_{t,t-1}$}\n\t\\KwOut{$\\{\\tilde{\\mathbf{x}}^t_{g}\\}$}\n\t\n\n\t\n\t$\\{\\Delta_i^t\\}$$\\gets$ \\textsc{DelaunayTriangulation}($\\mathbf{u}^t_i$) \n\t\n\n\t\n\n\t\n\n\t\n\ttriangles points set $\\{\\tilde{\\mathbf{x}}^t_{ij}\\}\\gets\\emptyset$\n\t\n\tsegmented points set $\\{\\tilde{\\mathbf{x}}^t_{k}\\}\\gets\\emptyset$\n\t\n\tground points set $\\mathcal{G}_{best}\\gets\\emptyset$\n\t\n\ttemp ground points set $\\mathcal{G}_k\\gets\\emptyset$\n\t\n\t\n\t\n\t\\For{each $\\mathbf{u}_{ij}^t\\in\\{\\Delta_i^t\\}, j=\\{1,2,3\\}$ }{\n\t\t\n\t\t\n\t\t\n\t\tback-project $\\mathbf{x}^t_{ij}$=$K^{-1}\\mathbf{u}_{ij}^t$\n\t\t\n\t\tcalculate $\\mathbf{n}_i^t$ by Eq.(1)\n\t\t\n\t\tcalculate $h_i^t$ by Eq.(2)\n\t\t\n\t\t\\If{$\t|\\arctan(-\\frac{R_{32}}{R_{33}})|<\\theta_{1}$ \\& $h^t_i>0$}\n\t\t{\n\t\t\t\\If{$\\arccos(\\mathbf{n}^t_i, \\boldsymbol{t}_{t,t-1})<\\theta_{2}$}\n\t\t\t{\n\t\t\t\t\n\t\t\t\t\n\t\t\t\t$\\{\\tilde{\\mathbf{x}}^t_{ij}\\}\\cup\\mathbf{x}^t_{ij}$\t\n\t\t\t}\n\t\t}\t\n\t}\n\n\tdelete repeate vertices and get $ \\{\\tilde{\\mathbf{x}}^t_{k}\\}\\subset\\{\\tilde{\\mathbf{x}}^t_{ij}\\}$\n\t\n\n\t{\/* \\emph{ensure enough points} *\/} \\\\\n\t\\While{size ($ \\{\\tilde{\\mathbf{x}}^t_{k}\\}$) $>$ 5 }{ \n\t\t\n\t\t\\For{iterations $$ size $(\\mathcal{G}_{best})$ }\n\t\t\t{\n\t\t\t\t$\\mathcal{G}_{best}\\gets\\mathcal{G}_k$ \\tcp{best model select}\n\t\t\t}\t\n\t\t\t\n\t\t}\n\t\t$\\{\\tilde{\\mathbf{x}}^t_{g}\\}\\gets\\mathcal{G}_{best}$\n\t\t\n\t\t\\Return {$\\{\\tilde{\\mathbf{x}}^t_{g}\\}$}\n\t}\n\\end{algorithm}\t\n\n\n\n\\subsection{Ground Point Aggregation}\n\nDue to the strict criteria by GPE, the inliers are not enough for accurate estimation of the ground plane. Therefore, we propose the GPA algorithm to aggregate ground points from consecutive image frames. As shown in Fig. \\ref{lpg}, we leverage the sliding window method to select image frames, and a frame buffer is maintained to store the camera poses and ground points in the current window. At each time step, with the arrival of a new image frame, we update the buffer and then estimate the ground plane by solving a least-squares problem. \n\n\n\n\\begin{figure}[t]\n\t\\centering\n\t\\includegraphics[height=5.5cm]{SW.pdf}\n\t\\caption{Illustration of the GPA algorithm. In the bottom-left figure, the red dots indicate ground points, and the green line segment is the normal of each triangle. In the bottom-right figure, the red quadrilateral is the estimated ground plane based on the aggregated ground points in the current window.}\n\t\\label{lpg}\n\\end{figure}\n\nFrom the MVO thread, we can get the pose $T_t$ and the inliers $\\mathbf{x}^t_g$ of each frame from the buffer. 
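Before describing the aggregation itself, we give a minimal Python sketch of the per-frame extraction step of Alg. \\ref{ag1} that produces these buffered inliers. It is illustrative only: the function and variable names are ours, the 3D feature positions are assumed to come from the (up-to-scale) VO map rather than from a plain back-projection, the pitch-angle test on $R_{t,t-1}$ is omitted for brevity, and the thresholds simply mirror the values quoted above; the released implementation may differ in detail.
\\begin{verbatim}
import numpy as np
from scipy.spatial import Delaunay

def extract_ground_points(uv, xyz, t_rel,
                          ang_tol_deg=5.0, dist_tol=0.01, iters=200, seed=0):
    # uv   : (N, 2) matched feature positions in the image (used to triangulate)
    # xyz  : (N, 3) the same features as 3D points in the current camera frame
    # t_rel: (3,)   relative camera translation t_{t,t-1}
    t_hat = t_rel / (np.linalg.norm(t_rel) + 1e-12)
    cand = set()
    for ia, ib, ic in Delaunay(uv).simplices:
        a, b, c = xyz[ia], xyz[ib], xyz[ic]
        n = np.cross(a - b, a - c)              # Eq. (1): triangle normal
        if np.linalg.norm(n) < 1e-9:
            continue
        n = n / np.linalg.norm(n)
        if n[1] < 0:                            # enforce n_y > 0 (camera above ground)
            n = -n
        h = float(n @ a)                        # Eq. (2): plane offset of the triangle
        # deviation of the normal from being orthogonal to the motion direction
        dev_deg = np.degrees(np.arcsin(min(abs(float(n @ t_hat)), 1.0)))
        if h > 0 and dev_deg < ang_tol_deg:
            cand.update((int(ia), int(ib), int(ic)))
    pts = xyz[sorted(cand)]
    if len(pts) < 3:
        return pts
    # RANSAC refinement of Eq. (4): keep the plane with the smallest total distance
    rng = np.random.default_rng(seed)
    best_err, best_mask = np.inf, np.ones(len(pts), dtype=bool)
    for _ in range(iters):
        p1, p2, p3 = pts[rng.choice(len(pts), 3, replace=False)]
        n = np.cross(p1 - p2, p1 - p3)
        if np.linalg.norm(n) < 1e-9:
            continue
        n = n / np.linalg.norm(n)
        dist = np.abs(pts @ n - float(n @ p1))
        if dist.sum() < best_err:
            best_err, best_mask = dist.sum(), dist < dist_tol
    return pts[best_mask]
\\end{verbatim}
We now return to the aggregation of the buffered inliers.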
We then can transform each of the inliers, denoted by $\\mathbf{x}^t_{gi}$, into the global frame,\n\\begin{equation}\n\t\\mathbf{p}_i=T_t \\,[\\mathbf{x}^t_{gi}, 1]^T.\n\\end{equation}\n\nSuppose there are $N$ local ground points in the buffer, the least-squares problem that minimizes the plane-distance error of these points is formulated as follows:\n\\begin{equation}\n\t\\begin{aligned}\n\t\t\\min_{\\boldsymbol{\\mu}}\\sum_{i=1}^{N}\\|\\boldsymbol{\\mu}^T \\mathbf{p}_i\\|_2&=\\min_{\\boldsymbol{\\mu}}\\boldsymbol{\\mu}{P_t}{P_t}^T\\boldsymbol{\\mu}^T, \\\\\n\t\t\\boldsymbol{\\mu} = &[\\mathbf{n}_t, -h_t]^T,\n\t\\end{aligned}\n\t\\label{mat}\n\\end{equation}\nwhere ${P_t}=[\\mathbf{p}_1, \\mathbf{p}_2, \\cdots, \\mathbf{p}_N ]\\in R^{4\\times N}$. Equation (\\ref{mat}) can be rewritten as follows:\n\\begin{equation}\n\t\\begin{aligned}\n\t\t&\\min_{\\boldsymbol{\\mu}} \\boldsymbol{\\mu} Q_t\\boldsymbol{\\mu}^T, \\\\ \n\t\tQ_t&={P}_t{P}_t^T\\in R^{4\\times4},\t\n\t\\end{aligned}\n\t\\label{opt}\n\\end{equation}\nwhich can then be efficiently solved by the SVD method.\n\nTo further improve the estimation accuracy of $\\boldsymbol{\\mu}$, we also introduce a weighting matrix $\\Sigma={\\sigma^{-2}_z}I$, where $\\sigma_z$ is the normalized distance of the point depth to their mean value. As a result, the matrix $Q$ in Eq. \\eqref{opt} becomes,\n\\begin{equation}\n\tQ_t={P}_t \\Sigma {P}_t^T\\in R^{4\\times4}.\n\\end{equation}\n\nAnother important refinement on the estimated plane parameter is to conduct a RANSAC-based optimization, which shares the same idea as Eq. \\eqref{ransac}. In each iteration of the optimization, we first estimate $\\boldsymbol{\\mu}$, and then calculate the distance between $\\mathbf{p}_i$ and the estimated plane. Points with a distance larger than $0.01$m are removed, and the remaining is then leveraged to estimate a new $\\boldsymbol{\\mu}$. Such a process continues until convergence. We denote the final plane normal by $\\mathbf{n}_t^*$ and the reserved ground points by $\\mathbf{p}_k^*, k\\in\\{1, 2, \\cdots, K\\}$. The relative camera height then can be calculated by projecting the camera center to the ground plane:\n\\begin{equation}\n\th^t_j=\\mathbf{n}_t^* \\cdot (\\mathbf{p}_c-\\mathbf{p}_k^*),\n\t\\label{multi-h}\n\\end{equation}\nwhere $\\mathbf{p}_c$ is the camera center of frame $I_t$ in the global frame. It is worth noting that there are $K$ estimated camera heights, which will be further processed to recover a smooth scale. Details of the GPA algorithm are presented in Alg. 
\\ref{alg2}.\n\n\\begin{algorithm}[t]\n\t\\label{alg2}\n\t\\caption{Ground Points Aggregation (GPA)}\n\t\n\t\\KwIn{ $I_t$, $\\{\\mathbf{x}^t_{g}\\}$, $T_t$}\n\t\n\t\\KwOut{$\\{h^t\\}$}\n\t\n\tbuffer $\\{queue\\}\\gets \\emptyset$\n\t\n\tinliers in global frame $\\{\\mathbf{p}_t\\} \\gets \\emptyset$\n\t\n\tinliers in current frame $\\{\\mathbf{x}_g^t\\} \\gets \\emptyset$\n\t\n\treserved ground points $\\{\\mathbf{p}_k^*\\}\\gets \\emptyset$\n\t\n\tcamera heights $\\{h_t\\}\\gets \\emptyset$\n\t\n\t\n\t\\While{$I_t$ is not empty}{\n\t\t$\\{queue\\} \\cup \\{I_t\\} $\t\n\t\t\n\t\t\\If{$size(\\{queue\\})>4$}{\n\t\t\t\n\t\t\t\\textsc{pop$\\_$front}($\\{queue\\}$) \n\t\t\t\n\t\t\t\\tcp{fixed number of frames}\n\t\t\t\\For{$each$ $I_t$ in $\\{queue\\}$}{\n\t\t\t\tcalculate $\\mathbf{p}_i$ by Eq.(5)\n\t\t\t\t\n\t\t\t\t$\\{\\mathbf{p}_t\\} \\cup \\{\\mathbf{p}_i\\} $\n\t\t\t}\n\t\t\t\n\t\t\tMatrix $P_t$\n\t\t\t\n\t\t\tcalculate $\\Sigma={\\sigma^{-2}_z}I$ \\tcp{Weighting Matrix}\n\t\t\t\n\t\t\tcalculate $Q_t$ by Eq.(8)\n\t\t\t\n\t\t\t$\\boldsymbol{\\mu} \\gets \\textsc{SVDDecomposition}(Q)$\n\t\t\t\n\t\t}\n\t\n\t\t\n\t\t\\For{iteration $ d_{thresh}$}{\n\t\t\t\t\n\t\t\t\tRemove $\\mathbf{p}_s$ in $\\{\\mathbf{p}_t\\}$\n\t\t\t\t\n\t\t\t\t\n\t\t\t}\n\t\t\t$\\{\\mathbf{p}_k^*\\}\\gets \\{\\mathbf{p}_t\\}$\n\t\t\t\n\t\t\tcalsulate $Q_s$ by Eq.(8)\n\t\t\t\n\t\t\t$Q \\gets Q_s$\n\t\t\t\n\t\t\t$\\boldsymbol{\\mu_s} \\gets \\textsc{SVDDecomposition}(Q)$\n\t\t\t\n\t\t\t$\\boldsymbol{\\mu} \\gets \\boldsymbol{\\mu_s}$ \\tcp{update model}\n\t\t\t\n\t\t\t\n\t\t\t\n\t\t}\n\t\t\n\t\t\\For{$\\mathbf{p}^*$ in $\\{\\mathbf{p}_k^*\\}$}{\n\t\t\tcalculate $h_j^t$ by Eq.(9)\n\t\t\t\n\t\t\t$\\{h^t\\}\\cup \\{h_j^t\\}$\n\t\t}\n\t\t\\Return{$\\{h^t\\}$}\n\t}\n\t\n\t\n\n\\end{algorithm}\n\n\\subsection{Filter-Based Scale Recovery}\nAfter we compute the relative camera height $h$ of each frame, the scale factor is then obtained by $s_t=h^*\/h$, the motion scale of each frame is recovered by\n\\begin{equation}\n\t\\boldsymbol{t}^{\\dagger}_{t,t-1}=s_t \\cdot \\boldsymbol{t}_{t,t-1}.\n\\end{equation}\n\nCorresponding to the multiple $h$ values in Eq. \\eqref{multi-h}, there are also multiple estimated scales.\n\n\nBy plotting the scaled camera heights of each frame in the figure, shown in Fig. \\ref{gaussian}, we find the data do not strictly follow a Gaussian distribution. Therefore, we choose the median point as the scale of the current frame. In the time domain, a moving average filter is applied, shown in Fig. \\ref{filter-b}, which can give a more smooth result.\n\n\\begin{figure}[!htbp]\n\t\\centering\t\n\t\\subfigure[The distribution of the scaled camera heights.]{\n\t\t\\includegraphics[width=0.90\\linewidth]{distribution.pdf}\n\t\t\\label{gaussian}\n\t}\n\t\\subfigure[The estimated camera height on sequence-02 and -05.]{\n\t\t\\includegraphics[width=0.50\\linewidth]{Figure01.png}\\hspace{-5mm}\n\t\t\\includegraphics[width=0.50\\linewidth]{Figure02.png}\n\t\t\\label{filter-b}\n\t}\t\n\t\\centering\n\t\\caption{Demonstration of the filter-based scale recovery. The green points are the scaled camera heights. The red curve is the smoothed one.}\n\t\\label{filter}\n\\end{figure}\n\n\n\\section{Experiments}\nWe conduct experiments to evaluate the performance of our proposed method. The MVO system used in the experiments is implemented based on ORB-SLAM2, and the proposed scale recovery method is integrated as an independent thread. The system architecture is demonstrated in Fig. \\ref{overview}. 
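Before turning to the benchmark evaluation, the scale-recovery chain of the two preceding subsections can be summarized in a short illustrative Python sketch. Again, the function names are ours, the exact form of the depth weighting is one possible reading of the normalization described above, and the smoothing window length is not specified in the text; the sketch is not claimed to match the released implementation.
\\begin{verbatim}
import numpy as np

def fit_ground_plane(points_w, eps=1e-6):
    # weighted SVD solution of Eqs. (6)-(8); points_w: (N, 3) aggregated
    # ground points of the current sliding window, in the world frame
    z = points_w[:, 2]
    dev = np.abs(z - z.mean())
    sigma = dev / (dev.mean() + eps)     # one reading of the normalized depth distance
    w = 1.0 / (sigma ** 2 + eps)         # diagonal entries of Sigma
    P = np.hstack([points_w, np.ones((len(points_w), 1))])  # rows are [p_i^T, 1]
    Q = (P * w[:, None]).T @ P           # the 4x4 matrix of Eq. (8), transposed convention
    mu = np.linalg.svd(Q)[2][-1]         # minimizer of mu Q mu^T over unit vectors
    n = mu[:3] / np.linalg.norm(mu[:3])
    h = -mu[3] / np.linalg.norm(mu[:3])
    return n, h                          # plane n . p = h with a unit normal

def frame_scale(n, ground_pts_w, cam_center_w, real_height):
    # Eq. (9) and s_t = h^*/h: one height per reserved ground point,
    # with the median taken as the value for the current frame
    heights = np.abs((cam_center_w - ground_pts_w) @ n)
    return real_height / np.median(heights)

def smooth_scales(scales, win=5):
    # moving-average filter over consecutive frames, as for the smoothed curve above
    return np.convolve(np.asarray(scales), np.ones(win) / win, mode="same")
\\end{verbatim}
We now turn to the evaluation.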
The KITTI dataset \\cite{geiger2012we} is adopted as the benchmark dataset, in which sequence-01 is not used since it fails most feature-based VO systems. All the experiments are conducted using a laptop with Intel(R) Core(TM) i5-6300HQ CPU @ 2.30 GHz. \n\n\\begin{figure*}[ht]\n\t\\centering\n\t\\includegraphics[height=8.2cm]{trajectoryv2.pdf}\n\t\\caption{The re-scaled trajectories on KITTI dataset. The blue and green trajectories are generated by our system and ORB-SLAM2-noLC, respectively.}\n\t\\label{fig3}\n\\end{figure*}\n\n\\subsection{Qualitative Evaluation}\n\nThe qualitative evaluation results of the proposed method are visualized in Fig. \\ref{fig3}. The trajectories outputted by the system are recovered using the proposed method, which means similarity transformation is not necessary when compared with ground-truth trajectories\\cite{grupp2017evo}. The baseline trajectories, indicated by green color in Fig. \\ref{fig3}, are generated by ORB-SLAM2 with loop closure detection disabled, denoted by ORB-SLAM2-noLC. We can see that our re-scaled trajectories can eliminate scale drift and form correct loops, which demonstrates the effectiveness of the proposed method. \n\nThe comparison of trajectory length between the ground truth and the proposed method is shown in Table \\ref{tabel-1}, in which the Relative Length Error (RLE) is computed by $e=|l_{gt}-l_{ours}|\/|l_{gt}|$, where $l$ is the length of a trajectory.\nFor sequence-02, -04, -06, -07, and -10, the RLE is less than 1$\\%$. The high performance is due to the fact that most roads in these sequences are straight lines and contain rich features. Sequence-00 and -08 are more complicated cases, in which the scenario is composed of a lot of curves and turns. The path lengths are relatively longer than other sequences. The RLE is thus slightly higher, 2.17$\\%$ and 2.72$\\%$, respectively. 
Nevertheless, the results show that the proposed system can estimate an accurate scale of the entire trajectory.\n\n\\begin{table}[t]\n\t\\renewcommand\\tabcolsep{16.0pt} \n\t\\caption{Comparison of re-scaled trajectory \\protect\\\\length with ground truth}\n\t\\begin{center}\n\t\t\\label{table_time}\t\n\t\t\\begin{tabular}{lllllll}\n\t\t\t\\toprule\n\t\t\tSeq & GT (m)&Ours (m) & RLE (\\%)\\\\\n\t\t\t\\midrule\n\t\t\t00 & 3645.213 & 3724.419 & 2.173 \\\\\n\t\t\t02 & 5071.151 &5067.233 &0.757 \\\\\n\t\t\t03 &547.774 &560.888 &0.558 \\\\\n\t\t\t04 &391.861 &393.645 &0.455 \\\\\n\t\t\t05 &2175.159 &2205.576 &1.398 \\\\\n\t\t\t06 &1235.982 &1232.876 &0.251 \\\\\n\t\t\t07 &709.065 &694.697 &0.767 \\\\\n\t\t\t08 &3137.398 &3222.795 &2.722 \\\\\n\t\t\t09 &1738.601 &1705.051 &1.929 \\\\\n\t\t\t10 &911.399 &919.518 &0.890 \\\\\n\t\t\t\\bottomrule \t\t\n\t\t\\end{tabular}\n\t\\end{center}\n\t\\label{tabel-1}\n\\end{table}\n\\renewcommand{\\arraystretch}{1} \n\\begin{table*}[t]\n\t\\centering\n\t\\fontsize{6}{8.4}\\selectfont\n\t\\begin{threeparttable}\n\t\t\\caption{comparison of average translation errors and rotation errors with the latest visual odometry methods on kitti dataset}\n\t\t\\label{table2}\n\t\t\\begin{tabular}{ccccccccccccccccccc}\n\t\t\t\\toprule\n\t\t\t\\multirow{3}{*}{Seq}& \t\n\t\t\t\\multicolumn{2}{c}{ORB-SLAM2-noLC \\cite{mur2017orb}}\n\t\t\t&\\multicolumn{2}{c}{ VISO2-M\\cite{2011StereoScan}}\n\t\t\t&\\multicolumn{2}{c}{VISO2-Stereo\\cite{2011StereoScan}}\n\t\t\t&\\multicolumn{2}{c}{Song et.al\\cite{song2015high}}\n\t\t\t&\\multicolumn{2}{c}{Zhou et.al\\cite{zhou2019ground}}\n\t\t\t&\\multicolumn{2}{c}{Brandon et.al\\cite{wagstaff2020self}}\n\t\t\t&\\multicolumn{2}{c}{DNet\\cite{xue2020toward}}\n\t\t\t\n\t\t\t&\\multicolumn{2}{c}{Ours}\\cr\n\t\t\t\\cmidrule(lr){2-3} \\cmidrule(lr){4-5}\n\t\t\t\\cmidrule(lr){5-6} \\cmidrule(lr){6-7}\n\t\t\t\\cmidrule(lr){8-9} \\cmidrule(lr){10-11}\n\t\t\t\\cmidrule(lr){12-13} \\cmidrule(lr){14-15}\n\t\t\t\\cmidrule(lr){16-17}\n\t\t\t&Trans&Rot&Trans&Rot&Trans&Rot&Trans&Rot\n\t\t\t&Trans&Rot&Trans&Rot&Trans&Rot&Trans&Rot\\cr\n\t\t\t&(\\%)&(deg\/m)&(\\%)&(deg\/m)&(\\%)&(deg\/m)&(\\%)&(deg\/m)\n\t\t\t&(\\%)&(deg\/m)&(\\%)&(deg\/m)&(\\%)&(deg\/m)&(\\%)&(deg\/m)\\cr\n\t\t\t\\midrule\n\t\t\t00&20.8&$-$&11.91&0.0209&2.32&0.0109&2.04&0.0048&2.17&0.0039&1.86& &1.94&$-$ &\\textbf{1.41}&0.0054\\cr\n\t\t\t\n\t\t\t01&$-$&$-$&$-$&$-$&$-$&$-$&$-$ &$-$ & $-$&$-$ &$-$ &$-$ &$-$&$-$&$-$&$-$\\cr\n\t\t\t\n\t\t\t02&9.52&$-$&3.33&0.0114&\\textbf{2.01}&0.0074&1.50&0.0035& $-$&$-$ &2.27&$-$ &3.07&$-$ &2.18&0.0046\\cr\n\t\t\t\n\t\t\t03&$-$&$-$&10.66&0.0197&2.32&0.0107&3.37&0.0021&2.70&0.0044&$-$& $-$&$-$&$-$ &\\textbf{1.79}&0.0041\\cr\n\t\t\t\n\t\t\t04&$-$&$-$&7.40&0.0093&0.99&0.0081&2.19&0.0028&$-$ &$-$ &$-$&$-$ &$-$&$-$ &\\textbf{1.91}&0.0021\\cr\n\t\t\t\n\t\t\t05&18.63&$-$&12.67&0.0328&1.78&0.0098&1.43&0.0038&$-$ &$-$ &\\textbf{1.50}&$-$ &3.32&$-$ &1.61&0.0064\\cr\n\t\t\t\n\t\t\t06&18.98&$-$&4.74&0.0157&\\textbf{1.17}&0.0072&2.09&0.0081& $-$& $-$&2.05&$-$ &2.74&$-$ &2.03&0.0044\\cr\n\t\t\t\n\t\t\t07&13.82&$-$&$-$&$-$&$-$&$-$&$-$&$-$&$-$ &$-$ &1.78& $-$&2.74& $-$ &\\textbf{1.77} &0.0230\\cr\n\t\t\t\n\t\t\t08&22.06&$-$&13.94&0.0203&2.35&0.0104&2.37&0.0044& $-$& $-$&2.05&$-$ &2.72&$-$ &\\textbf{1.51}&0.0076\\cr\n\t\t\t\n\t\t\t09&12.74&$-$&4.04&0.0143&2.36&0.0094&1.76&0.0047&$-$ &$-$ &\\textbf{1.50}&$-$ &3.70& $-$ &1.77&0.0118\\cr\n\t\t\t\n\t\t\t10&4.86&$-$&25.20&0.0388&1.37&0.0086&2.12&0.0085&2.09&0.0054&3.70& &5.09& &\\textbf{1.25} &0.0031& 
\\cr\n\t\t\t\\midrule\n\t\t\tAvg&18.17& $-$&14.39&0.0245&2.32&0.0095&2.03&\\textbf{0.0045}&2.32&0.045&2.03 & $-$&3.17&$-$&\\textbf{1.72} & 0.0068&\\cr\n\t\t\t\n\t\t\t\\bottomrule\n\t\t\\end{tabular}\n\t\\end{threeparttable}\n\\end{table*}\n\n\n\n\n\\subsection{Quantitative Evaluation}\n\n{The quantitative comparison} between our method and the baseline methods, including \\cite{wagstaff2020self,xue2020toward,song2015high,zhou2019ground,2011StereoScan,mur2017orb}, is presented in Table \\ref{table2}. The average translation error and rotation error are adopted as evaluation metrics.\n\nWe can see that ORB-SLAM2-noLC and VISO2-M have the worst performance due to the lack of loop closure detection. The scale drift of the two methods induces a large translation error, 18.17$\\%$ and 14.39$\\%$ respectively, while the VO systems with scale recovery all maintain a low translation error, $<4\\%$. It can also be seen that a MVO system with scale recovery \\cite{song2015high, zhou2019ground, xue2020toward, wagstaff2020self} can exhibit competitive performance with a stereo VO system like VISO2-M \\cite{2011StereoScan}, which significantly demonstrates the importance of scale recovery for MVO. \n\nIn terms of monocular systems, we can see our proposed method achieves the minimum translation error while maintaining a competitive performance on rotation error. The methods proposed by Song \\textit{et al.} \\cite{song2015high} and Zhou \\textit{et al.} \\cite{zhou2019ground} can not work with sequence-07, because they both rely on a fixed region to extract ground points, whereas occlusions by moving vehicles occur frequently in this sequence. In contrast with \\cite{song2015high, zhou2019ground}, the proposed method works well with sequence-07 with a translation error of 1.77$\\%$, benefiting from the GPA algorithm. \n\n\nIn \\cite{xue2020toward}, a deep neural network, named DNet, is proposed for monocular depth prediction and scale recovery. Compared with this method, our method shows a better accuracy in all the sequences. In \\cite{wagstaff2020self}, a real-time MVO system is implemented based on self-supervised learning techniques. This method can slightly outperform our proposed method in sequence-05 and -09, but has a much lower accuracy in sequence-00, -08, and -10. A similar phenomenon can be observed when comparing with \\cite{song2015high}. This indicates a significant variance on the performance of \\cite{wagstaff2020self}. Actually, this is the limitation of most deep learning based methods, which has been discussed in detail by \\cite{wang2020tartanair}. \n\nThe comparative experiments in Table \\ref{table2} significantly verify the effectiveness of our method and demonstrates the advantages over the latest methods in the literature.\n\\subsection{Efficiency Evaluation}\nAnother significant advantage of our method lies in its high efficiency. We evaluate the run-time of our system on all the KITTI sequences mentioned above, and the experiment repeats five times. The median run-time is reported in Fig. \\ref{time}. In all the experiments, the MVO thread requires $50$-$55$ ms, while the GPE and GPA requires less than $10$ ms, which makes the system suitable for real-time applications. \n\\section{CONCLUSIONS}\n\nIn this work, we present a light-weight MVO system with accurate and robust scale recovery, aiming to reduce scale drift and provide accurate odometry in long-distance navigations without loop closure. 
We solve the scale ambiguity for MVO by implementing our GPE-GPA algorithm for selecting high-quality points and optimizing them in a local sliding window. Sufficient data and robust optimizer provide accurate metric trajectory leveraging the ratio of the estimated camera height and the real one. Extensive experiments show that our proposed framework can achieve state-of-the-art accuracy and recover a metric trajectory without additional sensors. The system is designed to be a light-weight framework, which can achieve real-time performance with 20 Hz running frequency. Our proposed light-weight MVO system facilitates the localization and navigation of low-cost autonomous vehicles in long-distance scenarios.\n\nFurther study into integrating the uncertainty of plane estimation will be considered, which will further improve the accuracy of scale recovery. The light-weight neural network for ground segmentation will also be considered to help constrain the extraction of high-quality ground points.\n\n\n\n\\begin{figure}[ht]\n\t\\centering\n\t\\includegraphics[scale=0.45]{time-consum2.pdf}\n\t\\caption{Run-time of our MVO system on KITTI dataset. The time costs of MVO thread, the GPE algorithm, and the GPA algorithm are reported.}\n\t\\label{time}\n\\end{figure}\n\\enlargethispage{-7.8cm}\n\n\n\\bibliographystyle{IEEEtran}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} diff --git a/data_all_eng_slimpj/shuffled/split2/finalzzbzkr b/data_all_eng_slimpj/shuffled/split2/finalzzbzkr new file mode 100644 index 0000000000000000000000000000000000000000..d185fc3752b475e690a2be1fd6170d604d2adadc --- /dev/null +++ b/data_all_eng_slimpj/shuffled/split2/finalzzbzkr @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}\\label{sec:intro}\n\nChiral Perturbation Theory (ChPT) is the effective field theory of QCD at low energies \\cite{weinberg75,sielder}. \nIts paradigmatic application is the purely mesonic sector in $SU(2)$.\\footnote{Which even presents one corner of \nconcern due to the enhanced role of the right-hand-cut in the isoscalar scalar pion-pion scattering \n\\cite{npa620,nd,zeros,gamma,alba12,bernasigma,truong1988}, with an important \nimpact as well in the pion-nucleon ($\\pi N$) sector \\cite{sainiosumrule,aco2013,sigma2012}.}\nIts extension to the one-baryon sector presents some complications due to the \nlarge nucleon mass that does not vanish in the chiral limit \\cite{sainio88,manohar91}, which posed interesting problems to \nthe theory.\\footnote{A faster stabilization of the chiral series in this case has been recently accomplished \\cite{sigma2012,aco2013,strangeness14} by combining the covariant formalism of the Extended on Mass Shell Regularization Scheme (EOMS) \\cite{eoms} with the explicit inclusion \nof the $\\Delta(1232)$ in the $\\delta$-counting \\cite{delta-c} .}\n For reviews on ChPT on these topics see e.g. \\cite{bijnens09,ecker,bernard,pich,ulf}.\n\nThe extension of ChPT to systems with a larger baryonic number was \nconsidered in Ref.~\\cite{weinn}, where the chiral counting is applied to the calculation \nof the multi-nucleon potential. In these cases one also has to face the problem associated with the \ninfrared enhancement associated with the small nucleon kinetic energies, which requires \nto resum the infinite string of diagrams due to the iteration of intermediate multi-nucleon states. \nThe extension of the chiral power counting to finite density system, including the contributions of multi-nucleon \nreducible diagrams, is given in Ref.~\\cite{finiteden}. 
For related reviews see e.g. \n\\cite{Epelbaum:2008ga,Machleidt:2011zz,Epelbaum:2005pn,Bedaque:2002mn,vanKolck:1999mw}.\n\n The application of the set up of Ref.~\\cite{weinn} to nucleon-nucleon ($NN$) scattering \nhas been phenomenologically successful \\cite{ordo94,entem,thesis,epe042}. However, the sensitivity \nof the results on the values of the cutoff taken to solve the associated Lippmann-Schwinger equation for the \niteration of two-nucleon intermediate states has given rise to a flurry of publications, whose fair \n and comprehensive consideration is beyond this introduction. For more detailed accounts on this \nrespect the reader is referred to \\cite{Machleidt:2011zz,pavon06,nogga,kswa,phillipssw,pavon11,longyang,zeoli,Epelbaum:2008ga}.\n\n\nWe continue here the application of the $N\/D$ method \\cite{chew} to $NN$ scattering \n extending the previous work of Refs.~\\cite{paper1,paper2,gor2013}. For this method the dynamical \ninput is not the $NN$ potential but the discontinuity of a $NN$ partial-wave amplitude along the \nleft-hand-cut (LHC), which is denoted in the following by $2i\\Delta(A)$. Here $A$ is \nthe center of mass (c.m.) three-momentum squared of a $NN$ state. In other words, \n$\\Delta(A)$ is the imaginary part of a $NN$ partial-wave amplitude along the LHC, that \n extends for real $A$ with $A<-M_\\pi^2\/4$, being $M_\\pi$ the pion ($\\pi$) mass. \nThe function $\\Delta(A)$ is due to the multi-exchange of pions driving the finite-range nuclear \nforces, while in a low-energy effective field theory the short-range nuclear forces are accounted for by \nlocal interactions of zero range that do not contribute to $\\Delta(A)$ for finite $A$.\n The two-nucleon irreducible contributions to $\\Delta(A)$ are amenable to a straightforward ChPT expansion, \nin much the same way as discussed in Ref.~\\cite{weinn} for the calculation of the chiral $NN$ potential.\nHowever, $\\Delta(A)$ has also contributions from two-nucleon reducible diagrams but, \nas explained in Ref.~\\cite{gor2013}, these contributions require to cut all \nthe pion lines simultaneously when iterating one-pion exchange (OPE). \nIn this way, when including an extra $NN$ intermediate state in the iteration of the unitarity two-nucleon \ndiagrams their contribution to $\\Delta(A)$ starts further away in the LHC. It then results that the $n$th iteration of \n two-nucleon intermediate states, which at least requires $n+1$ OPE ladders, \n gives contribution to $\\Delta(A)$ only for $A<-(n+1)^2 M_\\pi^2\/4$. \nThis makes that its relevance \nfor physical values of $A$ ($A\\geq 0$) in the low-energy region clearly dismisses \nwith increasing $n$. As a result, because of the chiral expansion together with this other effect that\n numerically suppresses the proliferation of two-nucleon reducible diagrams in the calculation of $\\Delta(A)$, \none can determine this function reliably in ChPT.\\footnote{Notice that the suppression \nof the iteration of two-nucleon reducible diagrams only occurs for $\\Delta(A)$, and it does not occur to any \nother ``component'' of a $NN$ partial wave amplitude.}\n\n In Refs.~\\cite{paper1,paper2} the $N\/D$ method was \nsolved with $\\Delta(A)$ calculated at leading order (LO) from OPE, while in Ref.~\\cite{gor2013}\nthe NLO contributions to $\\Delta(A)$ were also included. \nThese contributions comprise two-nucleon irreducible two-pion exchange and once-iterated OPE, whose sum gives \nthe leading two-pion exchange (TPE). 
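To put numbers on this suppression, recall that $\Delta(A)$ from OPE is non-vanishing for $A<-M_\pi^2/4$, i.e. for c.m. three-momenta $|k|\geq M_\pi/2\simeq 69$~MeV along the LHC, while the once-iterated OPE entering the leading TPE only contributes for $A<-M_\pi^2$ ($|k|\geq M_\pi\simeq 138$~MeV), and the next iteration for $A<-9M_\pi^2/4$ ($|k|\geq 3M_\pi/2\simeq 207$~MeV). For physical $A\geq 0$ the dispersive denominators $|k^2-A|$ are then bounded from below by $(n+1)^2 M_\pi^2/4$, which illustrates why the successive iterations become progressively less relevant at low energies.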
\nReference~\\cite{gor2013} obtained a clear improvement in the reproduction of the phase shifts and mixing angles \n given by the Nijmegen partial-wave analysis (PWA) \\cite{Stoks:1994wp} as compared with the LO \nstudy, so that a global and rather good agreement is achieved at NLO. \n We want to give one step forward and consider here the next-to-next-to-leading (NNLO) contributions to \n$\\Delta(A)$, which are given by the imaginary part along the LHC of the \ntwo-nucleon irreducible TPE diagrams with a NLO $\\pi N$ vertex in Heavy-Baryon ChPT (HBChPT) \\cite{peripheral}. \nWe see that the chiral expansion within our approach is well behaved, so that there is a steady improvement \nin the reproduction of the Nijmegen PWA results when passing from LO to NLO and then to NNLO, where a quite good\n reproduction of the Nijmegen PWA is finally obtained. \nThis is accomplished in a progressive and smooth way, without violent variations in the results obtained at \nevery order.\\footnote{This was not the case in previous studies, e.g. in the model calculation of $NN$ scattering by Ref.~\\cite{lutz} \nthat uses a modified version of the $N\/D$ method by truncating the integrals along the LHC with a sharp cutoff.}\n In addition, we deal with convergent integrals by taking enough number of subtractions so that the above \nreferred regulator dependence that arises when solving the Lippmann-Schwinger equation with a chiral $NN$ potential \nis avoided by construction in our approach. An interesting outcome from our study is that we\n corroborate the long-range correlations between the effective range and scattering length for each of the $NN$ $S$ waves, \n$^1S_0$ and $^3S_1$, when only the corresponding scattering length is taken as experimental input. \nThese correlations, first noticed in Ref.~\\cite{pavon06}, were also obtained in the NLO $N\/D$ study of Ref.~\\cite{gor2013}, and \nwithin our approach they are deduced solely from basic principles of $NN$ partial-wave amplitudes, namely, \n chiral symmetry, unitarity and analyticity.\nThey are typically fulfilled at the level of around a $10\\%$ when comparing with \nthe experimental values for the effective ranges. \nWe should say that we can proceed further and include more subtractions, so that we can implement within our formalism \nthe exact values of the effective ranges, something not possible in the tight scheme of Ref.~\\cite{pavon06}.\n \nRegarding the subtraction constants we elaborate below a chiral power counting for them, by \ntaking into account the change in their values due to variations in the subtraction point.\n We show that at NLO and NNLO in the calculation of $\\Delta(A)$ one properly takes twice-subtracted dispersion relations (DRs). \nNevertheless, on top of this criterion we impose that one should obtain the proper threshold behavior \nfor higher partial waves, as well as having meaningful solutions of the integral equations (IEs) \nthat result from the corresponding DRs.\\footnote{By a meaningful solution we mean here a mathematical solution to the IE \nthat does not depend on the the number of points employed and \nin the arbitrary large extension of the LHC on which they lie \n when performing the numerical discretization to solve the IE.} \nThese two requirements often imply the necessity of taking more than two subtractions \nin the corresponding DRs relations. 
\nRegarding the number of subtractions used to guarantee the threshold behavior \nfor higher partial waves we use here the formalism developed in Ref.~\\cite{gor2013}, so that partial waves with \norbital angular momentum $\\ell\\geq 1$ and mixing partial waves with total angular momentum $J\\geq 1$ vanish \nat threshold as $ A^\\ell$ and $A^J$, respectively. \nThis requires to take at least $\\ell$ or $J$ subtractions, in order, \nwith $\\ell-1$ or $J-1$ free parameters, respectively. \nBut at the end, as emphasized in Ref.~\\cite{gor2013}, \nnone or only one of the resulting subtraction constants for a given partial wave with $\\ell>1$ (or $J>1$ for a mixing wave) is necessary to reproduce data. \nThis interesting point, which allows to treat easily higher partial waves, is called in Ref.~\\cite{gor2013} the \nprinciple of maximal smoothness.\n\n In our study we have also paid special attention to the issue concerning the impact on the results of the rather large size \nof the NLO $\\pi N$ counterterms, typically denoted by $c_i$ \\cite{ulf}, which first appear \n in the calculation of $\\Delta(A)$ at NNLO. \nIt is discussed in Ref.~\\cite{epe04} that the $\\pi N$ monomials, proportional to the $c_i$ counterterms, \n produce a too large contribution to the $NN$ potential at medium and short distances when it is \ncalculated at NNLO in dimensional regularization, which worsens the properties of the chiral expansion. \nBecause of this Ref.~\\cite{epe04} argued to better use a cutoff regularization to calculate the NNLO potential, or equivalently, \nto cut the energy spectral representation of the NNLO $NN$ potential at around the chiral symmetry breaking scale. \nThis last point would be equivalent to truncate the full extent of the LHC in our dispersive integrals. \n However, it is interesting to remark that we do not need to do that in order to obtain a good reproduction of the Nijmegen \nPWA when employing $\\Delta(A)$ determined up to NNLO. \nIn fact, we observe that the definitive improvement of our results compared \nwith the Born approximation does not arise by \nmodifying the two-nucleon \nirreducible diagrams at NNLO, but by performing the iteration of two-nucleon unitarity diagrams as required\n by analyticity and unitarity in a well-defined way.\n\n\nAfter this introduction we review the $N\/D$ method for coupled and uncoupled partial waves in \nSec.~\\ref{unformalism}.\n The function $\\Delta(A)$, calculated in ChPT up to NNLO, is discussed in Sec.~\\ref{delta}, \nwhere we also elaborate the chiral power counting for the subtraction constants.\n Sections \\ref{1s0} to \n\\ref{gi5w} are devoted to discuss the application of the $N\/D$ method to \n the different $NN$ partial waves up to $J=5$.\n There it is shown that a quite good reproduction of the Nijmegen PWA phase shifts and mixing angles results.\n In these sections we also compare \nwith the Born approximation for higher partial waves and discuss on the relative importance of the different contributions \nto $\\Delta(A)$.\n Our concluding remarks are given in Sec.~\\ref{conc}. \nFinally, we discuss in Appendix \\ref{appen:vs} a method to calculate higher order shape parameters \n of the $NN$ $S$ waves. \n\n\n\n\\section{The $N\/D$ method}\n\\label{unformalism}\n\nA detailed presentation of the formalism for the $N\/D$ method \\cite{chew}\n can be found in Ref.~\\cite{gor2013}.\n Here we only reproduce the main facets of the approach. 
\n\n\\subsection{Uncoupled partial waves}\n\\label{upw}\nAn uncoupled $NN$ partial wave is written as the quotient of two functions, where the\n numerator is the function $N(A)$ and the denominator is $D(A)$. \nThen, one writes\n\\begin{align}\nT(A)&=\\frac{N(A)}{D(A)}~,\n\\label{eq.ta}\n\\end{align}\nwith $T(A)$ the corresponding $NN$ partial wave in the c.m. frame. In the following we use the spectroscopic notation and denote by \n$^{2S+1}L_J$ the different $NN$ partial waves with $S$ the total spin, $L$ the orbital angular momentum and $J$ the \ntotal angular momentum. The point for the splitting of $T(A)$ in two functions is because $N(A)$ has only LHC while $D(A)$ \nhas only right-hand cut (RHC), also called unitarity cut.\n The following expressions for the discontinuities of the functions $N(A)$ and $D(A)$ \nalong their respective cuts then arise,\n\\begin{align}\n\\mathrm{Im} D(A)=-\\rho(A) N(A)~,~A>0~,\\nonumber \\\\\n\\mathrm{Im} N(A)=\\Delta(A) D(A)~,~A0~.\n\\end{align}\n In terms of $1\/T(A)$ this can be recast simply as \n\\begin{align}\n\\mathrm{Im}\\,\\frac{1}{T(A)}=-\\rho(A)~,~ A>0~.\n\\label{invTun}\n\\end{align} \nWith this normalization the relation between the $T$ and $S$ matrices is $S(A)=1+2i\\rho(A) T(A)$. \nThe discontinuity of a $NN$ partial wave \n$T(A)$ along the LHC is given by $2i \\Delta(A)$, which directly implies the second expression in Eq.~\\eqref{disconts}. \n\n\n Standard DRs for the functions $D(A)$ and $N(A)$ are derived in Ref.~\\cite{gor2013} under the assumption that \nthe function $D(A)$ does not diverge faster than a polynomial of degree $n_0$ for $A\\to \\infty$. Then for $n>n_0$ one \ncan write \\cite{gor2013}\n\\begin{align}\nD(A)&=\\sum_{i=1}^n \\delta_i (A-C)^{i-1}-\\frac{(A-C)^n}{\\pi}\\int_0^\\infty dq^2\\frac{\\rho(q^2)N(q^2)}{(q^2-A)(q^2-C)^n}~,\\nonumber\\\\\nN(A)&=\\sum_{i=1}^n \\nu_i (A-C)^{i-1}+\\frac{(A-C)^n}{\\pi}\\int_{-\\infty}^L dk^2\\frac{\\Delta(k^2)D(k^2)}{(k^2-A)(k^2-C)^n}~,\n\\label{standardr}\n\\end{align}\nwhere $C$ is the subtraction point. Notice that the same number of subtractions is taken both in $D(A)$ and $N(A)$. \nThe argument given in Ref.~\\cite{gor2013} makes use of the fact that $N(A)=T(A)D(A)$ and $T(A)$,\n because of unitarity, vanish at least as $A^{-1\/2}$ for $A\\to +\\infty$. \n As a result if $D(A)$ diverges at most as $A^{n_0}$ then $N(A)$ does not diverge faster than $A^{n_0-1\/2}$. \nHere we take into account the\nSugawara and Kanazawa theorem \\cite{barton,suga}, as a consequence of which any function \nlike $D(A)$ or $N(A)$ with only one cut of infinite extent along the\n real axis has the same limit for $A\\to \\infty$ in any direction of the $A$-complex plane. \n In addition, it is clear from Eq.~\\eqref{standardr} and the standard theory of DRs \\cite{spearman}, \nthat we can take different values for the corresponding subtraction points for each function \nseparately. Indeed, for many partial waves we will take the subtractions for the function $D(A)$ in two \ndifferent subtraction points, one at $C=0$ and the other at $C=-M_\\pi^2$. This is motivated by the fact that \nwe impose the normalization \n\\begin{align}\nD(0)=1~,\n\\label{nor.da}\n\\end{align}\nwhich can always be done by dividing simultaneously $D(A)$ and $N(A)$ by a constant without altering their ratio \ncorresponding to $T(A)$, Eq.~\\eqref{eq.ta}. 
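For later use we quote explicitly the phase-space factor and the basic unitarity integral in this normalization. Combining Eq.~\eqref{invTun} with the effective-range formula of Eq.~\eqref{efr1} below one reads off $\rho(A)=m\sqrt{A}/(4\pi)$, with $m$ the nucleon mass, so that the unitarity integral entering the dispersive representations that follow, the combination denoted $g(A,B)$ [Eq.~\eqref{gdef}], can be evaluated in closed form,
\begin{align}
g(A,B)=\frac{1}{\pi}\int_0^\infty dq^2\,\frac{\rho(q^2)}{(q^2-A)(q^2-B)}
=\frac{m}{4\pi\left[\sqrt{-B}-i\sqrt{A}\right]}~,\qquad B<0~,
\end{align}
for physical $A\geq 0$ (approached from the upper half plane), with $-i\sqrt{A}\to\sqrt{-A}$ for $A<0$, as is the case along the LHC.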
\n In this way, one subtraction for $D(A)$ is always taken at $C=0$ in order \nto guarantee straightforwardly the normalization Eq.~\\eqref{nor.da}.\n\nTo solve $D(A)$ in terms of the input $\\Delta(A)$ and the subtraction constants we substitute in Eq.~\\eqref{standardr} the expression \nfor $N(A)$ into the DR of $D(A)$, so that we end with the following IE for $D(A)$ with $A0~.\n\\label{nuij.def}\n\\end{align}\n From Eq.~\\eqref{relst} it is straightforward to obtain the following expressions for the $\\nu_{ij}(A)$ \\cite{paper2,gor2013}, \n\\begin{align}\n\\nu_{11}(A) & = \\rho(A) \\left[ 1- \\frac{\\frac{1}{2}\\sin^2 2\\epsilon_J}{1-\\cos 2\\epsilon_J \\cos 2\\delta_1} \\right]^{-1} ~,\\nonumber\\\\\n\\nu_{22}(A) & = \\rho(A) \\left[ 1- \\frac{\\frac{1}{2}\\sin^2 2\\epsilon_J}{1-\\cos 2\\epsilon_J \\cos 2\\delta_2} \\right]^{-1}~,\\nonumber \\\\\n\\nu_{12}(A) & = 2 \\rho(A) \\frac{\\sin(\\delta_1 + \\delta_2)}{\\sin 2\\epsilon_J} \\label{nuij}~.\n\\end{align}\nIn terms of them we have the analogous DRs for $D(A)$ and $N(A)$ of Eq.~\\eqref{standardr}, but now distinguishing \nbetween the different $D_{ij}(A)$ and $N_{ij}(A)$ such that $t_{ij}(A)=N_{ij}(A)\/D_{ij}(A)$, and employing $ \\nu_{ij}(A)$ \ninstead of simply $\\rho(A)$. The following expressions are obtained \\cite{gor2013}:\n\\begin{align}\nD_{ij}(A)&=\\sum_{p=1}^n \\delta^{(ij)}_p (A-C)^{p-1}-\\sum_{p=1}^n \\nu^{(ij)}_p\\frac{(A-C)^n}{\\pi}\\int_0^\\infty dq^2\\frac{\\nu_{ij}(q^2)}{(q^2-A)(q^2-C)^{n-p+1}}\\nonumber\\\\\n&+\\frac{(A-C)^n}{\\pi^2}\\int_{-\\infty}^L dk^2\\frac{\\Delta_{ij}(k^2)D_{ij}(k^2)}{(k^2-C)^n}\\int_0^\\infty dq^2\\frac{\\nu_{ij}(q^2)}{(q^2-A)(q^2-k^2)}~,\\\\\nN_{ij}(A)&=\\sum_{p=1}^n \\nu^{(ij)}_p (A-C)^{p-1}+\\frac{(A-C)^n}{\\pi}\\int_{-\\infty}^L dk^2\\frac{\\Delta_{ij}(k^2)D_{ij}(k^2)}{(k^2-A)(k^2-C)^n}~.\n\\label{standardrcc}\n\\end{align}\n Here, we also impose the normalization condition at $A=0$,\n\\begin{align}\nD_{ij}(0)=1~.\n\\end{align}\nOf course, the same remark concerning the subtraction point as done in Sec.~\\ref{upw} is also in order here. Namely, \nwe can use different subtraction points for the functions $D_{ij}(A)$ and $N_{ij}(A)$, as well as to use even different subtraction \npoints in the same function, as we will do below for $D_{ij}(A)$. \n\n\\subsection{Higher partial waves}\n\\label{hpw}\n\nAn uncoupled $NN$ partial wave with $\\ell\\geq 1$ should vanish at threshold as $A^\\ell$.\n Similarly for a coupled partial wave we have the analogous results but in terms of $\\ell_{ij}\\equiv (\\ell_i+\\ell_j)\/2$, with $i,~j=1,~2$. \n As discussed in Ref.~\\cite{gor2013} this threshold behavior is enforced by taken at least $\\ell$ \nor $\\ell_{ij}$ subtractions\n at $C=0$ in the DR for $N(A)$ in Eq.~\\eqref{standardr} or Eq.~\\eqref{standardrcc}, respectively, \nand setting $\\nu_p=0$ ($\\nu_p^{(ij)}=0$) for $p=1,\\ldots,\\ell$ ($\\ell_{ij}$). 
In this way we end with the DRs:\n\\begin{align}\n&\\underline{\\mathrm{Uncoupled~ case}:}\\nonumber \\\\\n\\label{highd}\nD(A)&=1+\\sum_{p=2}^{\\ell}\\delta_p A^{p-1}+\\frac{A^\\ell}{\\pi}\\int_{-\\infty}^L dk^2\\frac{\\Delta(k^2)D(k^2)}{(k^2)^\\ell}g(A,k^2)~,\\\\\n\\label{highn}\nN(A)&=\\frac{A^\\ell}{\\pi}\\int_{-\\infty}^\\ell dk^2\\frac{\\Delta(k^2)D(k^2)}{(k^2)^\\ell (k^2-A)}~,\\\\\n\\label{tayloruc}\n\\delta_p&=\\frac{1}{(p-1)!}D^{(p-1)}(0)~,~p=2,3,\\ldots\\\\\n&\\underline{\\mathrm{Coupled~ case}:}\\nonumber \\\\\n\\label{highdcc}\nD_{ij}(A)&=1+\\sum_{p=2}^{\\ell_{ij}}\\delta^{(ij)}_p A(A-C)^{p-2} +\\frac{A(A-C)^{\\ell_{ij}-1}}{\\pi} \n\\int_{-\\infty}^L\\!\\! dk^2 \\frac{\\Delta_{ij}(k^2)D_{ij}(k^2)}{(k^2)^{\\ell_{ij}}} g_{ij}(A,k^2,C;\\ell_{{ij}-1})~,\\\\\n\\label{highncc}\nN_{ij}(A)&=\\frac{A^{\\ell_{ij}}}{\\pi}\\int_{-\\infty}^L \\!\\!dk^2\\frac{\\Delta_{ij}(k^2)D_{ij}(k^2)}{(k^{2})^{\\ell_{ij}}(k^2-A)}~,\\\\\n\\delta_p^{(ij)}&=\\frac{(-1)^p}{C^{p-1}}\\left[\n\\sum_{n=0}^{p-2}\\frac{(-1)^n}{n!}C^n D^{(n)}_{ij}(C)-1\n\\right]~,~p=2,3,\\ldots\n\\label{taylor}\n\\end{align}\nwhere we have denoted the derivative of $D(A)$ of order $n$ by $D^{(n)}(A)$. In addition, we have introduced the function $g_{ij}(A,k^2,C;m)$ defined \nas\n\\begin{align}\ng_{ij}(A,k^2,C;m)&=\\frac{1}{\\pi}\n\\int_0^\\infty\\!\\! dq^2\n\\frac{ \\nu_{ij}(q^2) (q^2)^m }{ (q^2-A) (q^2-k^2) (q^2-C)^{m}}~,\n\\label{gij.a.k2.c} \n\\end{align}\nwhich can be expressed algebraically as a combination of $g(A,B)$'s, Eq.~\\eqref{gdef}, with \ndifferent arguments.\n\nAlthough in this way there is a proliferation of subtraction constants \n(which are not constrained) in the function $D(A)$ as $\\ell$ ($\\ell_{ij}$)\n grows, most of them play a negligible role. \nThis is so because $NN$ partial waves with $\\ell$ or $\\ell_{ij}$ greater than 2 are \n quite perturbative \\cite{peripheral,gor2013}. \nIn practical terms we have found in our NNLO study, as well as in the previous \n one at NLO \\cite{gor2013}, that for higher partial waves\n only $\\delta_{\\ell}$ (or $\\delta^{(ij)}_{\\ell_{ij}}$), if any, \n is needed to fit data, with the rest of them fixed to zero. Furthermore,\n no significant improvement in the reproduction of data or in the fitted values is observed \nby releasing $\\delta_i$ or $\\delta_i^{(ij)}$ with $i<\\ell$ or $\\ell_{ij}$, respectively, so that the fit is stable. This is called in Ref.~\\cite{gor2013} \nthe {\\it principle of maximal smoothness} because it implies for the uncoupled case \nthat the derivatives of $D(A)$ at $A=0$ with order $< \\ell-1$ are zero, as it follows from Eqs.~\\eqref{highd} and \n\\eqref{tayloruc}. Similarly, for the coupled case\n it implies that $D_{ij}(C)=1$ and $D_{ij}^{(n)}(C)=0$ for $1\\leq n \\leq \\ell_{ij}-3$, cf. Eqs.~\\eqref{highdcc} and \\eqref{taylor}. \n In some cases, it happens that $\\delta_\\ell$ or $\\delta_{\\ell_{ij}}^{(ij)}$ is also zero and then we say that for this partial wave the subtraction constants have the {\\it pure perturbative values}. \n\nWe further illustrate in this work the perturbative character of $NN$ partial waves with $\\ell\\,(\\ell_{ij})\\geq 3$ by \ncomparing the full outcome from the $N\/D$ method with the perturbative result corresponding to the leading Born \napproximation, cf. Sec.~\\ref{born}. 
In this case there is no dependence on any of the \nsubtraction constants $\\delta_p$ or $\\delta_p^{(ij)}$ and, indeed, we show below that the results are typically \nrather similar to the full ones, although the latter reproduce closer the Nijmegen PWA, as one should expect.\n\n\n\\section{The input function $\\Delta(A)$}\n\\label{delta}\nThe discontinuity along the LHC of a NN partial wave, $2i \\Delta(A)$, is taken from the \ncalculation of Ref.~\\cite{peripheral} in Baryon ChPT (BChPT) up to ${\\cal O}(p^3)$ or NNLO, which includes OPE plus leading and subleading TPE.\n At this order $\\Delta(A)$ \nfor a given partial wave diverges at most as $\\lambda (-A)^{3\/2}$ for $A\\to-\\infty$, with $\\lambda$ a constant. \nAs discussed in Ref.~\\cite{gor2013}, when $\\lambda<0$ one can have solutions for the integral \nequation providing $D(A)$ for $A2$ ($J>2$ subtractions), cf. Sec.~\\ref{hpw}.\n\n\n\\section{Uncoupled $^1 S_0$ wave}\n\\label{1s0} \n\nIn this section we study the $^1 S_0$ partial wave. We first take the once-subtracted DRs:\n\\begin{align}\n\\label{onceD}\nD(A)&=1-\\nu_1 A g(A,0)+\\frac{A}{\\pi}\\int_{-\\infty}^L dk^2\\frac{\\Delta(k^2)D(k^2)}{k^2}g(A,k^2)~,\\\\\n\\label{onceN}\nN(A)&=\\nu_1+\\frac{A}{\\pi}\\int_{-\\infty}^Ldk^2\\frac{\\Delta(k^2)D(k^2)}{k^2(k^2-A)}~.\n\\end{align}\nWe have one free parameter $ \\nu_1$ that can be fixed in terms of the $^1S_0$ scattering length $a_s$ \n\\begin{align}\n\\nu_1=-\\frac{4\\pi a_s}{m}~,\n\\label{1s0nu1}\n\\end{align}\nwith the experimental value $a_s=-23.76\\pm 0.01$~fm \\cite{thesis}. \n\nThe phase shifts obtained by solving the IE of Eq.~\\eqref{onceD} are shown in \n Fig.~\\ref{fig:1fp1s0} as a function of the c.m. three-momentum, denoted by $p$ ($p= \\sqrt{A}$) \nin the axis of abscissas. \nThe (red) hatched area corresponds to our results from Eqs.~\\eqref{onceD}-\\eqref{1s0nu1}\n with $\\Delta(A)$ calculated up-to-and-including ${\\cal O}(p^3)$ contributions and by taking into account the variation in the results from the different values employed for the NLO $\\pi N$ ChPT counterterms in Table~\\ref{tab:cis}.\nOur present results are compared with \n the neutron-proton ($np$) $^1S_0$ phase shifts of the Nijmegen PWA \\cite{Stoks:1994wp} (black dashed line), \n the OPE results of Ref.~\\cite{paper1} (blue dotted line) and the NLO results of Ref.~\\cite{gor2013} (magenta solid line).\n As we see, the Nijmegen PWA phase shifts are better reproduced at lower energies at NNLO than at smaller orders, \nthough one also observes an excess of repulsion at this order. \n\n\n\\begin{figure}\n\\begin{center}\n\\includegraphics[width=.7\\textwidth]{.\/1s0.ps} \n\\end{center}\n\\caption[pilf]{\\protect { (Color online.) Phase shifts of the $^1S_0$ $NN$ partial wave where the number \nof subtractions taken is indicated by the value of $n$ given in the legend of each type of line. \n The once-subtracted DR results are shown by the (red) hatched areas at NNLO, the \n(magenta) solid lines at NLO \\cite{gor2013} and the (blue) dotted lines at LO (OPE) \\cite{paper1}. \nThe twice-subtracted DR results correspond to the (cyan) band at NNLO and \nthe (green) dash-dotted line at NLO \\cite{gor2013}. \n The Nijmegen PWA phase shifts are shown by the (black) dashed lines.\n}\n\\label{fig:1fp1s0}\n}\n\\end{figure}\n\nNext, we work out the effective range expansion (ERE) parameters for the $^1S_0$ . 
\nBy taking into account the relation in our normalization\n\\begin{align}\n\\frac{4\\pi}{m}\\frac{D}{N}=-\\frac{1}{a_s}+\\frac{1}{2}r_s A+\n\\sum_{i=1}^{10}v_i A^i - i\\sqrt{A}+{\\cal O}(A^{11})~,\n\\label{efr1}\n\\end{align}\nwith $r_s$ the $^1S_0$ effective range and the shape parameters $v_i$, $i=2,\\ldots,10$. \n We designate by $I_m$, $m=1,2,\\ldots$, the integral along the LHC,\n\\begin{align}\n I_{2n}&=\\int_{-\\infty}^Ldk^2\\frac{\\Delta(k^2)D(k^2)}{(k^2)^n}~,\\nonumber\\\\\n I_{2n+1}&=\\int_{-\\infty}^Ldk^2\\frac{\\Delta(k^2)D(k^2)}{(k^2)^n\\sqrt{-k^2}}~.\\nonumber\\\\\n\\end{align}\nFrom Eqs.~\\eqref{onceD}, \\eqref{onceN} and \\eqref{efr1} we derive the following expressions for \n$r_s$ and the shape parameters in the ERE up to $i=4$\n\\begin{align}\nr_s&=-\\frac{m (a_s I_3+I_4)}{2 \\pi ^2 a_s^2} ~,\\nonumber\\\\\nv_2&=-\\frac{m \\left(I_4 m (a_s I_3+I_4)+4 \\pi ^2 a_s (a_s\n I_5+I_6)\\right)}{16 \\pi ^4 a_s^3}~,\\nonumber\\\\\nv_3&=-\\frac{m \\left[16 \\pi ^4 a_s^2 (a_s I_7+I_8)+I_4^2 m^2 (a_s I_3+I_4)+4\n \\pi ^2 a_s m (a_s I_3 I_6+a_s I_4 I_5+2 I_4 I_6)\\right]}{64 \\pi\n ^6 a_s^4}~\\,\\\\\nv_4&=-\\frac{m}{256 \\pi ^8 a_s^5}\\left[64 \\pi ^6 a_s^3 (a_s I_9+I_{10})+16 \\pi ^4 a_s^2 m \\left(a_s (I_3\n I_8+I_4 I_7+I_5 I_6)+2 I_4 I_8+I_6^2\\right)+I_4^3 m^3\n (a_s I_3+I_4)\\right.\\nonumber\\\\\n&\\left.+4 \\pi ^2 a_s I_4 m^2 (2 a_s I_3 I_6+a_s I_4\n I_5+3 I_4 I_6)\\right]~.\n\\label{ere.1s0}\n\\end{align}\nFor higher order shape parameters is more efficient to use the numerical method developed in Appendix \\ref{appen:vs}, \nto which we refer.\n\nThe resulting values for $r_s$ and \n the shape parameters $v_i$, $i=1,\\ldots,6$, are given in \nTable~\\ref{table:vs1s0a} and for $v_i$, $i=7,\\ldots, 10$ are shown \nin Table~\\ref{table:vs1s0b} in the second and third rows for NLO and \nNNLO, respectively. The latter are indicated by NNLO-I.\n These results are compared \nwith the results from the calculation based on the NNLO $NN$ potential of Refs.~\\cite{epe04} and \\cite{thesis},\n and with the Nijmegen PWA values.\n Our results for $v_3$ and $v_4$ are very similar to those obtained in Ref.~\\cite{thesis}. The difference between \n\\cite{thesis} and \\cite{epe04} stems from the fact that in the latter reference a different method to regularize pion exchanges was introduced, the so-called spectral function regularization, instead of the dimensional regularization used in Ref.~\\cite{thesis}. We also observe a clear improvement in the reproduction of the ERE parameters from NLO to NNLO. \nAt NLO the errors in Tables~\\ref{table:vs1s0a} and \\ref{table:vs1s0b} reflect the numerical uncertainty in the \ncalculation of higher order derivatives. At NNLO in addition they take into account the spread in the results \nfrom the different sets of $c_i$'s used. 
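To make the numerical procedure concrete, the following minimal sketch in Python discretizes the LHC and solves the linear system that results from Eq.~\eqref{onceD}, extracting afterwards $r_s$ from Eq.~\eqref{ere.1s0}. The closed form used for $g(A,B)$ corresponds to the normalization $\rho(A)=m\sqrt{A}/(4\pi)$ implied by Eq.~\eqref{efr1}; the function \texttt{Delta}, the grid parameters and the quadrature weights are illustrative choices (in particular \texttt{Delta} is a decaying toy stand-in for the ChPT discontinuity), so the printed number is not to be compared with the entries of Table~\ref{table:vs1s0a}.
\begin{verbatim}
import numpy as np

hbarc = 197.327               # MeV fm
m, Mpi = 938.92, 138.03       # nucleon and pion masses [MeV]
L   = -0.25*Mpi**2            # onset of the LHC [MeV^2]
a_s = -23.76/hbarc            # 1S0 scattering length [MeV^-1]
nu1 = -4.0*np.pi*a_s/m        # Eq. (1s0nu1) [MeV^-2]

def g(A, B):   # (1/pi) Int_0^oo dq^2 rho(q^2)/[(q^2-A)(q^2-B)] for A,B <= 0
    return m/(4.0*np.pi*(np.sqrt(-A) + np.sqrt(-B)))

def Delta(k2): # toy discontinuity along the LHC (NOT the ChPT result)
    return -1e-4*np.sqrt(-k2 + L)*Mpi**2/(Mpi**2 - k2)

# grid on the LHC from -Lam2 up to L; enlarge both until the output is stable
Lam2 = 2000.0**2
k2 = -np.logspace(np.log10(Lam2), np.log10(-L), 500)  # increasing towards L
w  = np.gradient(k2)                                  # crude quadrature weights

# discretized Eq. (onceD):  D_i = b_i + sum_j K_ij D_j  for A_i on the LHC
b = 1.0 - nu1*k2*g(k2, 0.0*k2)
K = (k2[:, None]/np.pi)*w[None, :]*Delta(k2)[None, :] \
    * g(k2[:, None], k2[None, :])/k2[None, :]
D = np.linalg.solve(np.eye(k2.size) - K, b)

# I_3, I_4 as defined above, and r_s from Eq. (ere.1s0), converted to fm
I3 = np.sum(w*Delta(k2)*D/(k2*np.sqrt(-k2)))
I4 = np.sum(w*Delta(k2)*D/k2**2)
r_s = -m*(a_s*I3 + I4)/(2.0*np.pi**2*a_s**2)*hbarc
print("illustrative r_s [fm] =", r_s)
\end{verbatim}
As stressed above, both the number of grid points and the extension of the LHC have to be increased until the output is stable against further changes; the sketch keeps them small only for readability.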
\n\\begin{table}\n\\begin{center}\n\\begin{tabular}{|l|l|l|l|l|l|l|}\n\\hline\n &$r_s$ & $v_2$ & $v_3$ & $v_4$ & $v_5$ & $v_6$ \\\\\n\\hline\nNLO & $2.32$ & $-1.08$ & 6.3 & $-36.2$ & 225 & $-1463$ \\\\\n\\hline\nNNLO-I & $2.92(6)$ & $-0.32(8)$ & $4.9(1)$ & $-27.7(8)$ & $177(4)$ & $-1167(30)$ \\\\\n\\hline\nNNLO-II & $2.699(4)$ & $-0.657(3)$ & $5.20(2)$ & $-30.39(9)$ & $191.9(6)$ & $-1263(3)$ \\\\\n\\hline\nRef.~\\cite{thesis} & $2.68$ & $-0.61$ & $5.1$ & $-30.0$ & & \\\\\n\\hline\nRef.~\\cite{epe04} & $2.62\\sim 2.67$ & $-0.52\\sim -0.48$ & $4.0\\sim 4.2$ & $-20.5\\sim -19.9$ & & \\\\\n\\hline\nRef.~\\cite{Stoks:1994wp} & $2.68$ & $-0.48$ & $4.0$ & $-20.0$ & & \\\\\n\\hline\n\\end{tabular}\n\\caption{Values for effective range $r_s$ [fm] and the shape parameters \n$v_i$, $i=2,\\ldots,6$ in units of fm$^{2i-1}$\n for our present results at NNLO with once-subtracted DRs [Eq.~\\eqref{onceD}] (NNLO-I in the third row) \nand with twice-subtracted DRs [Eq.~\\eqref{twiceD}] (NNLO-II in the fourth row). \nThe second row shows the results at NLO with once-subtracted DRs [Eq.~\\eqref{onceD}].\n We also give the values obtained by using the NNLO $NN$ potential in \nRefs.\\cite{thesis} and \\cite{epe04} (fifth and\n sixth rows, respectively). The values corresponding to the Nijmegen PWA \\cite{Stoks:1994wp}, \n as obtained in Refs.~\\cite{epe04,thesis}, are given in the last row. \n\\label{table:vs1s0a}}\n\\end{center}\n\\end{table}\n\n\\begin{table}\n\\begin{center}\n\\begin{tabular}{|l|l|l|l|l|}\n\\hline\n & $v_7\\times 10^{-1}$ & $v_8\\times 10^{-2}$ & $v_{9}\\times 10^{-3}$ & $v_{10}\\times 10^{-4}$ \\\\\n\\hline\nNLO & $985$ & $-681$ & 480 & $-344(3)$ \\\\\n\\hline\nNNLO-I & $795(18)$ & $-554(12)$ & $393(8)$ & $-284(6)$ \\\\\n\\hline\nNNLO-II & $857.1(1.9)$ & $-595.7(1.3)$ & $421.7(9)$ & $-304(3)$ \\\\\n\\hline\n\\end{tabular}\n\\caption{Values for the shape parameter $v_i$, $i=7,\\ldots,10$ in units of fm$^{2i-1}$. \nFor the meanings of the rows see Table~\\ref{table:vs1s0a}.\n\\label{table:vs1s0b}}\n\\end{center}\n\\end{table}\n\n\n\nFrom Eq.~\\eqref{ere.1s0} we can also derive a power series expansion of the ERE parameters as a function of $a_s$, as it was done previously for $r_s$ in Ref.~\\cite{gor2013} at NLO. We refer to that reference for further details. The important point is that $D(A)$ satisfies the linear IE of Eq.~\\eqref{onceD} with an inhomogeneous term that is a polynomial of first degree in $a_s$. As a result, $D(A)=D_0(A)+a_s D_1(A)$, with $D_0(A)$ and $D_1(A)$ independent of $a_s$. This also implies that the different $I_n$ can be expressed as $I_n^{(0)}+a_s I_n^{(1)}$ with $I_n^{(0)}$ and $I_n^{(1)}$ independent of $a_s$. \nIn this way, the ERE parameters satisfies the following expansions \n\\begin{align}\nr_s&=\\alpha_0+\\frac{\\alpha_{-1}}{a_s}+\\frac{\\alpha_{-2}}{a_s^2}~,\\nonumber\\\\\nv_n&=\\sum_{m=-n-1}^0 \\frac{v_n^{(m)}}{a_s^m}~,\n\\label{expvs.1s0}\n\\end{align}\nwith the coefficients $\\alpha_i$ and $v_n^{(i)}$ independent of $a_s$. 
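For instance, inserting $I_n=I_n^{(0)}+a_s I_n^{(1)}$ into the expression for $r_s$ in Eq.~\eqref{ere.1s0} one finds
\begin{align}
r_s=-\frac{m}{2\pi^2}\left[I_3^{(1)}+\frac{I_3^{(0)}+I_4^{(1)}}{a_s}+\frac{I_4^{(0)}}{a_s^2}\right]~,
\end{align}
so that $\alpha_0=-m I_3^{(1)}/(2\pi^2)$, $\alpha_{-1}=-m\,(I_3^{(0)}+I_4^{(1)})/(2\pi^2)$ and $\alpha_{-2}=-m I_4^{(0)}/(2\pi^2)$, with $I_n^{(0)}$ and $I_n^{(1)}$ the integrals $I_n$ evaluated with $D_0(A)$ and $D_1(A)$, respectively. The analogous decompositions for the $v_n$ in Eq.~\eqref{expvs.1s0} follow in the same manner.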
\nThe relation between $r_s$ and $a_s$ was first realized in Ref.~\\cite{pavon06} in the context of $NN$ scattering.\\footnote{The correlation between the effective range and the scattering length\n in Eq.~\\eqref{expvs.1s0} was derived earlier in atomic physics for Van der Waals potentials\n \\cite{flambaum}, and throughly confronted with data \\cite{calle2010}.} The explicit expressions \nof $\\alpha_i$ ($i=-2,-1,0)$ in terms of $D_0(A)$ and $D_1(A)$ were given in Ref.~\\cite{gor2013}.\n Its values at NNLO are\n\\begin{align}\n\\alpha_0&= 2.61\\sim 2.73~\\text{fm}~,\\nonumber\\\\\n\\alpha_{-1}&=-5.93\\sim -5.65~\\text{fm}^2~,\\nonumber\\\\\n\\alpha_{-2}&=5.92\\sim 6.12~ \\text{fm}^3~.\n\\label{exp.1s0}\n\\end{align}\n The expressions for the coefficients $v_n^{(m)}$ in Eq.~\\eqref{expvs.1s0} can also be worked straightforwardly \nin terms of $D_0(A)$ and $D_1(A)$ by the interested reader. For conciseness we do not reproduce them here. \nThe results in Eq.~\\eqref{exp.1s0} are perfectly compatible with those obtained in the first entry of \nRef.~\\cite{pavon06}, $\\alpha_0=2.59\\sim 2.67$~fm, $\\alpha_{-1}=-5.85\\sim -5.64$~fm$^2$ and $\\alpha_{-2}=5.95\\sim \n6.09$~fm$^3$. This reference employs the chiral $NN$ potential in a Lippmann-Schwinger equation that is \n renormalized with boundary conditions and imposing \nthe hypothesis of orthogonality of the wave functions determined with\n different energy.\\footnote{Since the potentials involved are singular this orthogonality condition is imposed in \n the formalism of Ref.~\\cite{pavon06}.} \nIn our case, however, the expansions in Eq.~\\eqref{expvs.1s0} are consequences of basic principles of a $NN$ \npartial wave like unitarity, analyticity and chiral symmetry. \n The resulting phase shifts in Fig.~\\ref{fig:1fp1s0} from Eq.~\\eqref{onceD}, and shown by the (red) hatched area,\n are also coincident with those obtained by Ref.~\\cite{pavon06}. \nThey are also rather similar to those obtained when employing only one contact term in the third entry of Ref.~\\cite{phillipssw},\n which studies the independence of its results as a function of the cutoff used to solve the Lippmann-Schwinger equation.\n Nevertheless, in this case the NNLO chiral potential is calculated by truncating its spectral representation \\cite{epe04}, \n while Ref.~\\cite{pavon06} uses the dimensional regularized result \n(which requires to take to infinity the cutoff(s) used in Ref.~\\cite{phillipssw}.)\n \n\n\nNext, we consider the twice-subtracted DRs: \n\\begin{align}\n\\label{twiceD}\nD(A)&=1+\\delta_2 A-\\nu_1\\frac{A(A+M_\\pi^2)}{\\pi}\\int_0^\\infty dq^2\\frac{\\rho(q^2)}{(q^2-A)(q^2+M_\\pi^2)q^2}\n-\\nu_2 A(A+M_\\pi^2) g(A,-M_\\pi^2)\\nonumber\\\\\n&+\\frac{A(A+M_\\pi^2)}{\\pi^2}\\int_{-\\infty}^L dk^2\\frac{\\Delta(k^2)D(k^2)}{(k^2)^2} \n\\int_0^\\infty dq^2\\frac{\\rho(q^2)q^2}{(q^2-A)(q^2+M_\\pi^2)(q^2-k^2)}\n~, \\\\\n\\label{twiceN}\nN(A)&=\\nu_1+\\nu_2 A+\\frac{A^2}{\\pi}\\int_{-\\infty}^L dk^2\\frac{\\Delta(k^2)D(k^2)}{(k^2-A)(k^2)^2}~,\n\\end{align}\nwhere the two subtractions in the function $N(A)$ and one for $D(A)$ are taken at $C=0$, while\n the other subtraction in $D(A)$ is placed at $C=-M_\\pi^2$. 
\nTaking into account Eq.~\\eqref{gdef} it is straightforward to rewrite\n\\begin{align}\n\\frac{1}{\\pi}\\int_0^\\infty dq^2\\frac{\\rho(q^2)q^2}{(q^2-A)(q^2-k^2)(q^2-C)}=\\frac{C g(A,C) - k^2 g(A,k^2)}{C-k^2}~.\n\\end{align} \n The subtraction constant $\\nu_1$ is given by Eq.~\\eqref{1s0nu1}, while $\\nu_2$ and $\\delta_2$ \nare directly fitted to \n the $np$ Nijmegen PWA phase shifts.\\footnote{Since Ref.~\\cite{Stoks:1994wp} does not provide \nerrors we always perform a least square fit, without weighting.} \n The best fit occurs for\n\\begin{align}\n\\nu_2&=-23(1)~M_\\pi^{-4}\\nonumber\\\\\n\\delta_2&=-8.0(3)~M_\\pi^{-2}~,\n\\label{nu2delta2}\n\\end{align} \nwhere the intervals of values stem from the uncertainty due to the different values of $c_i$'s taken. \nThe reproduction of data is very good, as shown by the (cyan) filled area in Fig.~\\ref{fig:1fp1s0} which \nlies on top of the Nijmegen PWA $np$ phase shifts. In the same figure \n we show by the (green) dash-dotted line \nthe twice-subtracted DR result at NLO, which reproduces \nthe Nijmegen data equally well as obtained at NNLO, with \n the fitted values $\\nu_2=-11.9$~$M_\\pi^{-4}$ and $\\delta_2=-4.6~M_\\pi^{-2}$. \n The resulting ERE shape parameters for the fit in Eq.~\\eqref{nu2delta2} \nare shown in the fourth rows of Tables~\\ref{table:vs1s0a} and \\ref{table:vs1s0b}, where \n we observe a remarkable good agreement with Ref.~\\cite{thesis}. \nWe predict $r_s=2.70$~fm which is compatible with its experimental value \n$r_s=2.75\\pm 0.05$~fm \\cite{thesis}. A similar good reproduction of the $^1S_0$ phase shifts is also achieved by Ref.~\\cite{phillipssw} in \nterms of two contact terms, although in this case there is a strong sensitivity on the cutoff employed to \nregularize the Lippmann-Schwinger equation near those values that give rise to poles in the domain of validity \nof the effective field theory. \n\n The value of $\\nu_2$ in \nEq.~\\eqref{nu2delta2} is rather large, of similar size in absolute value to $\\nu_1\\simeq 31~M_\\pi^{-2}$, Eq.~\\eqref{1s0nu1}. A linear correlation between $\\nu_2$ and $\\delta_2$ can be observed in a $\\chi^2$ contour plot, along which \n there is an absolute minimum corresponding to the parameters given \nin Eq.~\\eqref{nu2delta2}. \nThe subtraction constant $\\nu_2$ that results from the once-subtracted DR \n Eq.~\\eqref{onceN}, and that we denote by $\\nu_2^{\\mathrm{pred}}$, is given by the \nexpression\n\\begin{align}\n\\nu^{\\mathrm{pred}}_2&=\\frac{1}{\\pi}\\int_{-\\infty}^L dk^2\\frac{\\Delta(k^2)D(k^2)}{(k^2)^2}~,\n\\label{nu2predicted}\n\\end{align}\nwith the numerical value $\\nu_2^{\\mathrm{pred}}\\simeq -6.0$, $-6.4$ and $-7.5\\pm 0.2~M_\\pi^{-4}$ when \n $\\Delta(A)$ is calculated up to ${\\cal O}(p^0)$, ${\\cal O}(p^2)$ and ${\\cal O}(p^3)$, respectively.\n The difference between the predicted and fitted values for $\\nu_2$ at NLO \n is denoted by $\\delta\\nu_2^{(0)}$.\n The superscript takes into account the chiral order for $\\nu_2$, ${\\cal O}(p^{-2+m})$ according to the new \ncontribution to $\\Delta(A)$ of ${\\cal O}(p^m)$, Eq.~\\eqref{summarypwc}. \n The value obtained is $\\delta \\nu_2^{(0)}\\simeq -5.5~M_\\pi^{-4}$. At NNLO in order \nto calculate $\\delta \\nu_2^{(1)}$ one has to subtract $\\delta\\nu_2^{(0)}$ to the difference \nbetween the fitted value in Eq.~\\eqref{nu2delta2} and the predicted one from Eq.~\\eqref{nu2predicted}. 
\n Then, one has $\\delta\\nu_2^{(1)}\\simeq -15+5.5=-9.5~M_\\pi^{-4}$.\n This implies that in order to overcome the excess of repulsion at NNLO one needs\n to incorporate a significant contribution from short-distance physics to give account \nof ``missing physics'', beyond the pure long-range physics\\footnote{We mean here the physics driven by the \n multi-pion exchanges giving rise to the LHC and to $\\Delta(A)$.}\n that stems from the once-subtracted DR case and that is not able to provide an accurate \nreproduction of data as shown in Fig.~\\ref{fig:1fp1s0} by the (red) hatched areas. \n The large value for $\\delta\\nu_2^{(1)}$ is mainly due to the ${\\cal O}(p^2)$ $\\pi N$ counterterms \n$c_i$'s, which in turn are dominated by the $\\Delta(1232)$ resonance contribution \\cite{bernard93,aco2012}. \n This can be easily seen by performing a fit to data in which we set $c_i=0$ for all of them.\n A good reproduction \nof the Nijmegen PWA phase shifts results but now $\\delta\\nu_2^{(1)}\\simeq -1.5~M_\\pi^{-4}$, which is \nmuch smaller than $\\delta\\nu_2^{(0)}$, with a ratio $\\delta\\nu_2^{(1)}\/\\delta\\nu_2^{(0)}\\sim 30\\% \\sim \n{\\cal O}(p)$. This indicates that once the large contributions \n that stem from the $c_i$ coefficients are discounted a quite natural (baryon) chiral expansion emerges. \n \nRegarding the absolute value of $\\delta\\nu_2^{(0)}$ one should expect on dimensional grounds that\n\\begin{align}\n|\\delta\\nu_2^{(0)}|\\sim \\frac{4\\pi\\,|a_s|}{m \\Lambda^2}~,\n\\label{abvalue_v2}\n\\end{align}\nwith $\\Lambda$ the expansion scale. The factor $4\\pi\/m$ is due to our normalization, cf. Eq.~\\eqref{efr1}. \nThere should be also another contribution to $\\delta\\nu_2^{(0)}$ not proportional to $a_s$, but since \nthe scattering length is so large the contribution shown in Eq.~\\eqref{abvalue_v2}\n is expected to be the most important. For $\\Lambda\\simeq 350$~MeV, one would have \n$|\\delta\\nu_2^{(0)}|\\sim 5~M_\\pi^{-4}$, which is very similar indeed to the reported value above. This \nvalue of $\\Lambda$ is also consistent with the ratio $\\delta\\nu_2^{(1)}\/\\delta\\nu_2^{(0)}\\sim 1\/3$ given above \nas $M_\\pi\/\\Lambda \\sim 1\/3$.\n\nLet us consider now the relevance of the different contributions to $\\Delta(A)$ by evaluating the double\n integral in Eq.~\\eqref{twiceD}, namely,\n\\begin{align}\n\\frac{A(A+M_\\pi^2)}{\\pi^2}\\int_{-\\infty}^L dk^2\\frac{\\Delta(k^2)D(k^2)}{(k^2)^2}\\int_0^\\infty dq^2\\frac{\\rho(q^2)q^2}{(q^2-A)(q^2+M_\\pi^2)(q^2-k^2)}~,\n\\label{1s0.quanty}\n\\end{align}\n with the full result for $D(A)$ but with $\\Delta(A)$ in the integrand of Eq.~\\eqref{1s0.quanty} evaluated \npartially with some contributions or all of them. The result of this exercise is given\n in the left panel of Fig.~\\ref{fig:1s0quanty} for the $c_i$ coefficients of Ref.~\\cite{epe12}, \n collected in the first row of Table~\\ref{tab:cis}. \n In turn, we show directly $\\Delta(A)$ along the LHC in the right panel of Fig.~\\ref{fig:1s0quanty}. \n The (black) dash-dotted lines correspond to OPE, \nthe (blue) dotted lines take into account the full ${\\cal O}(p^2)$ TPE,\n including both two-nucleon reducible and irreducible TPE, and \n the (cyan) double-dotted lines contain the ${\\cal O}(p^3)$ two-nucleon irreducible TPE. \nIn the right panel we show by the (cyan) filled area the variation in the ${\\cal O}(p^3)$ irreducible TPE \ncontribution by varying between the different sets of $c_i$'s from Refs.~\\cite{epe04} and \\cite{aco2013}, as discussed above. 
\nThis band indicates a large source of uncertainty in $\\Delta(A)$.\n In the left panel the (red) solid line results by keeping all the contributions to $\\Delta(A)$, and \none can quantify from this panel the fact that the ${\\cal O}(p^3)$ irreducible TPE is the largest subleading contribution.\n At $\\sqrt{A}=100$~MeV it is around 28\\% of the OPE contribution, and it raises with energy\n so that at $\\sqrt{A}=200$~MeV it is 44\\% and at 300~MeV it becomes 66\\%. The increase in energy \nof the relative size of the subleading TPE contribution should be expected because at low energies\n the suppression mechanism due to the earlier onset of the OPE source of $\\Delta(A)$ along the LHC at $L$ is more efficient.\n In addition, it is well-known that the $\\Delta(1232)$ plays a prominent role in $\\pi N$ scattering because\n its proximity to the $\\pi N$ threshold and its strong coupling to this channel.\n This manifests in the large size of the LECs $c_3$ and $c_4$ in Table~\\ref{tab:cis} due to the $\\Delta(1232)$ contribution \nto them, evaluated in Refs.~\\cite{bernard93,aco2012}. \nThe large impact of the $\\Delta(1232)$ is the well-known reason for the large size of subleading TPE,\n but once its leading effects are taken into account at $ {\\cal O}(p^3)$ the chiral expansion\n stabilizes \\cite{entem,epe04}, as we have also concluded in the discussion following Eq.~\\eqref{nu2predicted}. \n In the following, we skip the discussion on the relative importance of the different contributions \nto $\\Delta(A)$ for those $NN$ partial waves with a similar situation to the one discussed \nconcerning the $^1S_0$. \n\n\\begin{figure}\n\\begin{center}\n\\begin{tabular}{cc}\n\\includegraphics[width=.4\\textwidth]{.\/lhcd1s0.ps} & \n\\includegraphics[width=.4\\textwidth]{.\/dis.lhc1s0.ps}\n\\end{tabular}\n\\caption{ { (Color online.) Left panel: different contributions to the integral in Eq.~\\eqref{1s0.quanty} for the $^1S_0$.\n Right panel: contributions to $\\Delta(A)$. These contributions comprise OPE (black dash-dotted line), \nleading TPE (blue dotted line) and \nthe subleading TPE contribution, shown by the (cyan) double-dotted line in the left panel \nand by the (cyan) filled area in the right one. The total result, only shown for the left panel, \nis the (red) solid line.}\n\\label{fig:1s0quanty}}\n\\end{center}\n\\end{figure}\n\n\\section{Uncoupled $P$ waves}\n\\label{pw} \nIn this section we discuss the application of the method to the uncoupled $P$ waves. \n At NNLO one has for these waves that \n\\begin{align}\n\\lambda_P=\\lim_{A\\to-\\infty}\\frac{\\Delta(A)}{(-A)^{(3\/2)}}>0~,\n\\label{unlambdap}\n\\end{align}\nso that, according to the results of Ref.~\\cite{gor2013}, its Proposition 4, a once-subtracted DR for $D(A)$, \nEq.~\\eqref{standardr}, does not converge and more subtractions should be taken. Then, we directly discuss \nthe twice- and three-time subtracted DRs. \n\nThe twice-subtracted DRs are given by:\n\\begin{align}\n\\label{twiceDNl}\nD(A)&=1+\\delta_2 A-\\nu_2 A^2 g(A,0)\n+\\frac{A^2}{\\pi}\\int_{-\\infty}^L dk^2\\frac{\\Delta(k^2)D(k^2)}{(k^2)^2}g(A,k^2)~,\\nonumber \\\\\nN(A)&=\\nu_2 A+\\frac{A^2}{\\pi}\\int_{-\\infty}^L dk^2\\frac{\\Delta(k^2)D(k^2)}{(k^2-A)(k^2)^2}~,\n\\end{align}\nwith all the subtractions in Eq.~\\eqref{standardr} taken at $C=0$. 
\n The three-time subtracted DRs are:\n\\begin{align}\nD(A)&=1+\\delta_2 A + \\delta_3 A(A+M_\\pi^2)+(\\nu_2-\\nu_3 M_\\pi^2) A(A+M_\\pi^2)^2 \\frac{\\partial g(A,-M_\\pi^2)}{\\partial M_\\pi^2}\n-\\nu_3 \\, A(A+M_\\pi^2)^2 g(A,-M_\\pi^2)\\nonumber\\\\\n&+\\frac{A(A+M_\\pi^2)^2}{\\pi}\\int_{-\\infty}^Ldk^2 \\frac{\\Delta(k^2)D(k^2)}{(k^2)^3}g(A,k^2,-M_\\pi^2;2)~,\\nonumber\\\\\nN(A)&=\\nu_2 A+ \\nu_3 A^2+\\frac{A^3}{\\pi}\\int_{-\\infty}^L dk^2\\frac{\\Delta(k^2)D(k^2)}{(k^2-A)(k^2)^3}~.\n\\label{pw.tdr}\n\\end{align}\nHere all the subtractions in $N(A)$ and one in $D(A)$ are taken at $C=0$, while the other two subtractions \nin $D(A)$ are taken at $C=-M_\\pi^2$. This is done in order to avoid handling an infrared diverging \nintegral along the RHC multiplying $\\nu_2$ that would result if all the subtractions were taken at $C=0$. \nThe function $g(A,k^2,C;m)$ appearing in Eq.~\\eqref{pw.tdr} is defined as\n\\begin{align}\ng(A,k^2,C;m)=\\int_0^\\infty\\!\\! dq^2\\frac{\\rho(q^2) (q^2)^m}{(q^2-A)(q^2-k^2)(q^2-C)^m}~.\n\\label{def.el.gm}\n\\end{align}\n\nIn all the cases the subtraction constant $\\nu_2$ is fixed in terms of the \nscattering volume, $a_V$, \n\\begin{align}\n\\nu_2=4\\pi a_V\/m~.\n\\label{pnu2fix}\n\\end{align}\nFor $a_V$ we take the values $0.890$, $-0.543$ and $-0.939~M_\\pi^{-3}$ for the partial waves $^3P_0$, $^3P_1$ and $^1P_1$, \nin order, as deduced from Ref.~\\cite{Stoks:1994wp}.\n\n\\subsection{$^3P_0$ wave}\n\\label{3p0}\n\n\n\\begin{figure}\n\\begin{center}\n\\includegraphics[width=.5\\textwidth]{.\/3p0.ps} \n\\caption{ {\\small }\n\\label{fig:3p0}\n Phase shifts of the $^3P_0$ $NN$ partial wave. \n The three-time subtracted DR results at NNLO are shown by the (red) hatched area and the twice-subtracted DR results at NLO \\cite{gor2013} are given by the (magenta) solid line. \nThe (blue) dotted line corresponds to the OPE results \\cite{paper1} and \nthe Nijmegen PWA phase shifts are shown by the (black) dashed lines.}\n\\end{center}\n\\end{figure}\n\n \n For the $^3P_0$ wave the twice-subtracted DRs at NNLO, Eq.~\\eqref{twiceDNl}, do not provide \nstable results under the increase in absolute value \nof the lower limit of integration along the LHC. \nHowever, the three-time subtracted DRs, Eq.~\\eqref{pw.tdr}, are convergent and provide meaningful results. \n Notice that, as stated in Sec.~\\ref{nschpt}, \non top of the number of subtractions required by the chiral counting, two at NNLO,\n we impose the requirement of having well-defined IEs providing stable \n solutions. \nRegarding the subtractions constants $\\nu_3$, $\\delta_2$ and $\\delta_3$ in Eq.~\\eqref{pw.tdr}, \nwe can fix $\\nu_3=0$ because it plays a negligible role in the fits and, if released, the fit remains stable. \nThe fitted values for $\\delta_2$ and $\\delta_3$ are\n\\begin{align}\n\\delta_2&= 2.82(5)~M_\\pi^{-2}\\nonumber\\\\\n\\delta_3&=0.18(6) ~M_\\pi^{-4}~,\n\\end{align}\nwhere the intervals of values take into account the dispersion in the results \nthat stems from the different sets of $c_i$'s in Table~\\ref{tab:cis}.\n The phase shifts calculated, shown by the (red) hatched area \nin Fig.~\\ref{fig:3p0}, reproduce exactly the Nijmegen PWA phase shifts \\cite{Stoks:1994wp}, \ngiven by the (black) dashed line. Indeed, the two lines overlap each other. \nThe results with different sets of values for the $c_i$ counterterms cannot be distinguished either between each other. 
\n The (magenta) solid line shows the results with twice-subtracted DRs at NLO \\cite{gor2013}, \nwhich are already almost on top of the data, and the OPE results \\cite{paper1} are shown by the (blue) dotted line. \n We have also checked that a tree-time-subtracted DR at LO and NLO provide already a prefect reproduction of data as well. \nThen, the wave $^3P_0$ studied at ${\\cal O}(p^3)$ is not a good partial wave to learn above chiral dynamics, because \n independently of order up to which $\\Delta(A)$ is calculated the reproduction of data is excellent when \nthree-subtractions are taken. \n\n\n\n\\subsection{$^3P_1$ wave}\n\\label{3p1}\n\n\\begin{figure}\n\\begin{center}\n\\includegraphics[width=.5\\textwidth]{.\/3p1.ps} \n\\caption{ {\\small }\n\\label{fig:3p1}\n Phase shifts of the $^3P_1$ $NN$ partial wave. \n The three-time subtracted DR results at NNLO are shown by the (red) hatched area. \nThe (blue) dotted line corresponds to the OPE results \\cite{paper1} and\n the Nijmegen PWA phase shifts are shown by the (black) dashed lines.}\n\\end{center}\n\\end{figure}\n\nFor this partial wave the situation is similar to that discussed for the $^3P_0$. \nThe twice-subtracted DRs, Eq.~\\eqref{twiceDNl}, do not provide stable results and we have \nto consider then the three-time subtracted DRs, Eq.~\\eqref{pw.tdr}. \n The free parameters are $\\delta_2$ and $\\delta_3$, with $\\nu_3$ fixed to 0 (the fit is stable if this \nsubtraction constant is released). The fitted values are\n\\begin{align}\n\\delta_2& = 2.7(1)~M_\\pi^{-2},\\nonumber\\\\\n\\delta_3& = 0.47(3)~M_\\pi^{-4}~.\n\\end{align}\n\nThe resulting phase shifts are shown in Fig.~\\ref{fig:3p1} by the (red) hatched area and \n reproduce perfectly the Nijmegen PWA phase shifts (shown by \nthe black dashed line), independently of the set of values\n for the $c_i$'s chosen from Refs.~\\cite{epe12,aco2013} in Table~\\ref{tab:cis}. \n At NLO \\cite{gor2013} it is also necessary to take three-subtracted DRs in order to obtain stable results and the reproduction \nof data is equally perfect. \nThis is why we have not included the NLO results in Fig.~\\ref{fig:3p1}. \n Similarly to the $^3P_0$ case, we cannot discern the impact of chiral dynamics at ${\\cal O}(p^3)$ once three-time subtracted DRs \nare considered.\n\n\\subsection{$^1P_1$ wave}\n\\label{1p1}\n\n\\begin{figure}\n\\begin{center}\n\\includegraphics[width=.5\\textwidth]{.\/1p1.ps} \n\\caption{ {\\small }\n\\label{fig:1p1}\n Phase shifts of the $^1P_1$ $NN$ partial wave. \n The twice subtracted DR results at NNLO are shown by the (red) hatched area, while at NLO \\cite{gor2013}\ncorrespond to the (magenta) solid line. \nThe (blue) dotted line corresponds to the OPE results \\cite{paper1} and \nthe Nijmegen PWA phase shifts are shown by the (black) dashed lines.}\n\\end{center}\n\\end{figure}\n \nFor this partial wave the twice-subtracted DR results from Eq.~\\eqref{twiceDNl} are quite stable at low energies. \nThe free parameters are now $\\nu_2$ and $\\delta_2$. \nThe resulting fitted value for $\\delta_2$ to the Nijmegen PWA phase shifts is \n\\begin{align}\n\\delta_2 =0.4(1)~M_\\pi^{-2}~,\n\\end{align}\nwith the variation in the value due to the set of $c_i$'s taken [$\\nu_2$ is given by Eq.~\\eqref{pnu2fix}]. \nWe show by the (red) hatched area in Fig.~\\ref{fig:1p1} our results by employing the different $c_i$ sets of values. 
\nFor this case the curves obtained with the $c_i$ from \\cite{aco2013}, by reproducing the $\\pi N$ phase shifts with \nLorentz covariant EOMS BChPT, are the closest to data and determine the upper\n limit of the hatched area in Fig.~\\ref{fig:1p1}. \n The improvement in the reproduction of data for the $^1P_1$ partial wave by the twice-subtracted DRs at NNLO compared with the \nresults obtained at NLO with the same number of subtractions \n (hatched area versus (magenta) solid line) is a notorious effect from $\\pi N$ physics. One should \nnotice that for the $^1P_1$ wave the dispersive integral on the r.h.s. of Eq.~\\eqref{twiceDNl} for the function $D(A)$ is clearly \ndominated by the OPE contribution\n This is the reason why for the $^1P_1$ one does not need to take three subtractions but two are enough. \n Although, as much as for the other partial waves discussed until now, the ${\\cal O}(p^3)$ two-nucleon irreducible \nTPE is the dominant contribution between the subleading effects to $\\Delta(A)$. \n\n\n\n\n\\section{Uncoupled $D$ waves}\n\\label{dw}\n\n\\begin{figure}\n\\begin{center}\n\\begin{tabular}{cc}\n\\includegraphics[width=.4\\textwidth]{.\/1d2.ps} & \n\\includegraphics[width=.4\\textwidth]{.\/3d2.ps} \n\\end{tabular}\n\\caption[pilf]{\\protect { (Color online.) Phase shifts for $^1D_2$ (left panel) and $^3D_2$ (right panel). \n The (red) hatched areas correspond to the NNLO results while the (magenta) solid lines are \nthe NLO outcome \\cite{gor2013}. In both cases twice-subtracted DRs are used. The \nphase shifts in the Born approximation are shown by the (cyan) filled bands, \nthe OPE result from Ref.~\\cite{paper1} is the (blue) dotted lines and \n the Nijmegen PWA phase shifts are given by the (black) dashed lines.}\n\\label{fig:dw} }\n\\end{center}\n\\end{figure}\n\n\nHere, we discuss the $D$ waves. In order to preserve the right threshold behavior we \nemploy the twice-subtracted DRs of Eqs.~\\eqref{highd} and \\eqref{highn} with $\\ell=2$. \nFor the uncoupled $D$ waves one has that\n\\begin{align}\n\\lambda_D&=\\lim_{A\\to -\\infty}\\frac{\\Delta(A)}{(-A)^{3\/2}}<0\n\\end{align}\nand for this sign we do not have numerical problems \nin the solution of the resulting IE even for diverging $\\Delta(A)$ \\cite{gor2013}.\n\n\\begin{align}\n\\label{twiceDNl_Dw}\nD(A)&=1+\\delta_2 A\n+\\frac{A^2}{\\pi}\\int_{-\\infty}^L dk^2\\frac{\\Delta(k^2)D(k^2)}{(k^2)^2}g(A,k^2)~,\\nonumber \\\\\nN(A)&=\\frac{A^2}{\\pi}\\int_{-\\infty}^L dk^2\\frac{\\Delta(k^2)D(k^2)}{(k^2-A)(k^2)^2}~.\n\\end{align}\n\nThe only free parameter per partial wave is $\\delta_2=D^{(1)}(0)$ which is fitted to the \nNijmegen PWA phase shifts. Taking into account the \ndifferent sets of values for the $c_i$ counterterms we have the following \nresults,\n\\begin{align}\n^1D_2:& ~D^{(1)}(0)= 0.07(1) ~M_\\pi^{-2},\\nonumber\\\\\n^3D_2:& ~D^{(1)}(0)= -0.017(3)~M_\\pi^{-2}~.\n\\end{align} \nThe reproduction of data is excellent as shown by the (red) \nhatched areas in Fig.~\\ref{fig:dw}, where the phase shifts for the $^1D_2$ are given in the left panel \nand those of the $^3D_2$ in the right one. Our results indeed overlap the Nijmegen PWA phase shifts given by \nthe (black) dashed lines. \nReference~\\cite{gor2013} obtained the (magenta) solid line making use also of twice-subtracted DRs at NLO. \n We see a remarkable improvement from NLO to NNLO due to the inclusion of NLO $\\pi N$ dynamics, \nparticularly for the $^1D_2$ partial wave. 
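Note that, in contrast to Eqs.~\eqref{1s0nu1} and \eqref{pnu2fix} for the $S$ and $P$ waves, no $\nu$-type subtraction constant enters Eq.~\eqref{twiceDNl_Dw}. Since $D(0)=1$, the threshold behavior of an uncoupled $D$ wave is fixed, once $\delta_2$ is fitted, by
\begin{align}
\lim_{A\to 0}\frac{T(A)}{A^2}=\frac{1}{\pi}\int_{-\infty}^L dk^2\,\frac{\Delta(k^2)D(k^2)}{(k^2)^3}~,
\end{align}
that is, the corresponding threshold parameters are outputs of the approach rather than inputs.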
\n\n\\subsection{Perturbative and Born approximation phase shifts}\n\\label{born}\n\nThe higher is the orbital angular momentum $\\ell$ the more perturbative is expected to be the \ncorresponding $NN$ partial wave. \nThis statement was studied in detail in the perturbative study of \n Ref.~\\cite{peripheral} by making use \nof the one-loop approximation in BChPT.\n Indeed, we can easily obtain from \nour formalism both the leading perturbative solution to the \nIEs of the $N\/D$ method in powers of $\\Delta(A)$, as well as the leading term in the Born series approximation \nfor the chiral $NN$ amplitude calculated up to ${\\cal O}(p^3)$ in Ref.~\\cite{peripheral}. \nThe point is that for a \nweak interaction (small $\\Delta(A)$ at low three-momentum) one can expect that $D(A)\\simeq 1$ at low energies.\n It is then reasonable to consider that substituting $D(A)\\to 1$ in the integral on the r.h.s. of \nEq.~\\eqref{highn} would be meaningful to calculate $N(A)$, \n because we have a rapid converging integral due to the factor $(k^2)^\\ell$ in the denominator for a sufficiently \nlarge value of $\\ell$.\\footnote{Of course, the precise meaning of this statement could vary from one case to \nthe other due to characteristic facets of the considered partial wave.} The perturbative \nresult for $N(A)$, denoted by $N_p(A)$, is then\n\\begin{align}\nN^{(p)}(A)=\\frac{A^\\ell}{\\pi}\\int_{-\\infty}^L dk^2\\frac{\\Delta(k^2)}{(k^2)^\\ell (k^2-A)}~.\n\\label{eq.hw.per}\n\\end{align}\nHad we included only the two-nucleon irreducible contributions to $\\Delta(A)$, which is then \ndenoted as $\\Delta_B(A)$, \nthe previous integral becomes the DR representation of the $NN$ potential that \nwe denominate $N_B(A)$,\n\\begin{align}\nN_B(A)=\\frac{A^\\ell}{\\pi}\\int_{-\\infty}^L dk^2\\frac{\\Delta_B(k^2)}{(k^2)^\\ell (k^2-A)}~.\n\\label{eq.nborn}\n\\end{align}\nThis is due to the fact that the $NN$ potential projected in a given \npartial wave is an analytical function that only has LHC and it can be \nwritten in terms of a DR along the latter cut. \n We have checked numerically that the DR representation \nEq.~\\eqref{eq.nborn} for the $NN$ potential coincides\n with its explicit partial wave decomposition taking into account \nthe expressions given in Ref.~\\cite{peripheral}. \n In our notation the relation between $N_B(A)$ and the phase shifts in the Born approximation, $\\delta_B(A)$, reads\n\\begin{align}\n\\delta_B(A)=\\rho(A) N_B(A)~.\n\\label{deltab}\n\\end{align}\nAn analogous expression holds for the perturbative phase shifts $\\delta^{(p)}(A)$ calculated in terms of $N^{(p)}(A)$. \n The difference between the perturbative phase shifts and the Born approximation ones \nfor $\\ell\\geq 2$ is typically not very significant and quite small. \nIn the following we compare our full results with $\\delta_B(A)$, \nsince these phase shifts can be also calculated straightforwardly in potential models.\nWe proceed in the same way for the coupled channel case as well by evaluating $N_{ij}(A)$ in the Born approximation \nby substituting $D_{ij}(A)\\to 1$ in Eq.~\\eqref{highncc}, and keeping only the two-nucleon irreducible contributions \nto $ \\Delta_{ij}(A)$.\n\nTurning back to the uncoupled $D$ waves we also show in Fig.~\\ref{fig:dw} the leading Born approximation phase shifts \nobtained from the NNLO two-nucleon irreducible contributions to $\\Delta(A)$ by the (cyan) filled areas.\n One observes that these curves are quite different from our full results \ngiven by the hatched areas. 
This clearly indicates that the perturbative \ntreatment of the $NN$ $D$ waves is not accurate.\n\n\\section{Uncoupled $F$ waves}\n\\label{fw}\n\n\\begin{figure}\n\\begin{center}\n\\begin{tabular}{cc}\n\\includegraphics[width=.4\\textwidth]{.\/1f3.ps} & \n\\includegraphics[width=.4\\textwidth]{.\/3f3.ps} \n\\end{tabular}\n\\caption[pilf]{\\protect { (Color online.) Phase shifts for $^1F_3$ (left panel) and $^3F_3$ (right panel). \n The (red) hatched areas correspond to the NNLO results while the (magenta) solid lines are \nthe NLO outcome. In both cases three-time-subtracted DRs are used.\nThe (cyan) filled bands give $\\delta_B(A)$, \nthe OPE result from Ref.~\\cite{paper1} is the (blue) dotted lines and \nthe Nijmegen PWA phase shifts correspond to the (black) dashed lines.}\n\\label{fig:fw} }\n\\end{center}\n\\end{figure}\n\nFor the $F$ waves we have three subtractions with two free parameters $\\delta_2$ and $\\delta_3$. We fix $\\delta_2=0$ in the following (according to the principle of maximal smoothness) and fit $\\delta_3$ to data. \n At NNLO the fitted values for $D^{(2)}(0)=2 \\delta_3$, Eq.~\\eqref{tayloruc}, are: \n\\begin{align}\n^1F_3:~& D^{(2)}(0)= 0.057(3)~M_\\pi^{-4}~,\\nonumber\\\\\n^3F_3:~& D^{(2)}(0)= 0.035(5)~M_\\pi^{-4}~,\n\\end{align}\nwhere the variation in the values is due to the different sets of $c_i$ counterterms employed. \nThe NNLO results are shown by the (red) hatched areas in Fig.~\\ref{fig:fw}, which reproduce the \nNijmegen PWA phase shifts (black dashed line) better than the NLO results (magenta lines) and \nthe perturbative phase shifts (cyan filled areas). This improvement \nis particularly noticeable for the $^3F_3$ partial wave.\n\nWe also observe that for the $F$ waves the phase shifts in the leading Born approximation, Eq.~\\eqref{deltab}, \nrun much closer to our full results than for the $D$ waves, which clearly indicates that \n$F$ waves are more perturbative. \n Nevertheless, the relative deviation of the perturbative results compared \nwith the full solution is still around 50\\% at the end of the interval shown in Fig.~\\ref{fig:fw}. \n A similar conclusion on the more perturbative nature of the $F$ waves \nwas also reached in the purely perturbative study of Ref.~\\cite{peripheral} by comparing with experimental data. \n However, here we can also compare with the full unambiguous solution of the corresponding IE. \n For example, we can learn from Fig.~\\ref{fig:fw}\n that the widths of the (cyan) filled bands for the Born approximation results \nreflect a much larger dependence on the $c_i$ \ncoefficients than the one corresponding to the full nonperturbative results given by the (red) hatched\n areas.\n Thus, the failure reported in Refs.~\\cite{epe04,thesis} to reproduce simultaneously \nthe $D$ and $F$ waves with the NNLO chiral potential \ncalculated in dimensional regularization in Ref.~\\cite{peripheral}, \n attributed there to the large values of the $c_i$ counterterms, does not occur within our approach. \nNamely, we are able to \ndescribe properly both the uncoupled $D$ and $F$ waves, Figs.~\\ref{fig:dw} and \\ref{fig:fw}, respectively, \nand the dependence on the precise set of $c_i$'s taken is quite mild for the full results.
\nIndeed, our calculation at NNLO describes the Nijmegen PWA phase shifts better than the NLO ones \\cite{gor2013},\n which is not the case for all of these waves in Ref.~\\cite{thesis}, \nbased on the (modified) Weinberg approach, when comparing their NLO and NNLO results.\n\n \n\\begin{figure}\n\\begin{center}\n\\begin{tabular}{cc}\n\\includegraphics[width=.4\\textwidth]{.\/lhcd1f3.ps} & \n\\includegraphics[width=.4\\textwidth]{.\/dis.lhc.1f3.ps} \\\\\n\\includegraphics[width=.4\\textwidth]{.\/lhcd3f3.ps} & \n\\includegraphics[width=.4\\textwidth]{.\/dis.lhc.3f3.ps} \\\\\n\\end{tabular}\n\\caption{ { (Color online.) Left panels: different contributions to the integral on the \nr.h.s. of Eq.~\\eqref{highd} for $\\ell=3$. Right panels: the corresponding contributions to $\\Delta(A)$. \nThe first row corresponds to $^1F_3$ and the second to $^3F_3$.\n The meanings of the lines are the same as in Fig.~\\ref{fig:1s0quanty}. \nFor definiteness we consider the $c_i$'s given in the last row of Table~\\ref{tab:cis}. }\n\\label{fig:unFquanty}}\n\\end{center}\n\\end{figure}\n\nThe increase in the perturbative character of the $F$ waves can also be seen by considering the \nrelevance of the different contributions of $\\Delta(A)$ to the integral on the r.h.s. of\n Eq.~\\eqref{highd}, proceeding in a similar way to that already performed for the $^1S_0$ partial wave \nin Sec.~\\ref{1s0}.\n The result is shown in the left panels of Fig.~\\ref{fig:unFquanty}, where\n the first row corresponds to $^1F_3$ and the second to $^3F_3$.\n In the right panels we show directly the different contributions to $\\Delta(A)$. \nThe meanings of the lines \nin Fig.~\\ref{fig:unFquanty} are the same as in Fig.~\\ref{fig:1s0quanty}, though here \n the $c_i$'s are taken from Ref.~\\cite{aco2013}, given in the last row of Table~\\ref{tab:cis}, \nwhich is enough for the present purposes. \nNotice that now a qualitatively different situation is found with respect to what is shown in \nFig.~\\ref{fig:1s0quanty}, which also holds for the $P$ and $D$ waves discussed in Secs.~\\ref{pw} and \\ref{dw}. \n For the $F$ and higher waves the subleading two-nucleon irreducible TPE contribution is much less important and \nOPE is by far the dominant contribution, as expected for a perturbative high-$\\ell$ wave.\n \n\n \n\n\n\\section{Uncoupled $G$ waves}\n\\label{gw}\n\n\\begin{figure}\n\\begin{center}\n\\begin{tabular}{cc}\n\\includegraphics[width=.4\\textwidth]{.\/1g4.ps} & \n\\includegraphics[width=.4\\textwidth]{.\/3g4.ps} \n\\end{tabular}\n\\caption[pilf]{\\protect { (Color online.) Phase shifts for $^1G_4$ (left panel) and $^3G_4$ (right panel). \n The (red) hatched areas correspond to the NNLO results while the (magenta) solid lines are \nthe NLO outcome. In both cases four-time-subtracted DRs are used. \n The (cyan) filled areas represent\n the outcome from the leading Born approximation, \n the OPE result from Ref.~\\cite{paper1} is the (blue) dotted lines \nand the Nijmegen PWA phase shifts are the (black) dashed lines.}\n\\label{fig:gw} }\n\\end{center}\n\\end{figure}\n\nFor the $G$ waves we have four subtractions, of which $\\delta_i$ $(i=2,3,4)$ are free but, according to the \nprinciple of maximal smoothness, all of them are fixed to 0 except \n$\\delta_4=D^{(3)}(0)\/3!$, which is fitted to data. \nAt NNLO the fitted values for $D^{(3)}(0)$ are: \n\\begin{align}\n^1G_4:~& D^{(3)}(0)=-0.014(2)~M_\\pi^{-6}~,\\nonumber\\\\\n^3G_4:~& D^{(3)}(0)=-0.055(5)~M_\\pi^{-6}~,\n\\end{align}\nwhere the variation in the values is due to the different sets of $c_i$ counterterms employed.
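\n For clarity, the relation between the fitted derivatives and the subtraction constants used here and in the previous sections follows from Taylor expanding $D(A)$ around $A=0$. Assuming that Eq.~\\eqref{highd} has the structure illustrated by Eq.~\\eqref{twiceDNl_Dw}, with the dispersive integral multiplied by an explicit factor $A^{\\ell}$ and $g(A,k^2)$ regular at $A=0$, one has\n\\begin{align}\nD(A)=1+\\sum_{n=1}^{\\ell-1}\\delta_{n+1}A^{n}+\\frac{A^{\\ell}}{\\pi}\\int_{-\\infty}^L dk^2\\frac{\\Delta(k^2)D(k^2)}{(k^2)^{\\ell}}\\,g(A,k^2)\n\\;\\Longrightarrow\\; D^{(n)}(0)=n!\\,\\delta_{n+1}~,\\quad n\\leq \\ell-1~,\n\\end{align}\nso that $\\delta_2=D^{(1)}(0)$ for the $D$ waves, $\\delta_3=D^{(2)}(0)\/2$ for the $F$ waves and $\\delta_4=D^{(3)}(0)\/3!$ for the $G$ waves, as quoted above.\n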
\nThe corresponding results are shown by the (red) hatched areas in Fig.~\\ref{fig:gw}. \nFor both partial waves the actual dependence on the $c_i$ coefficients for the resulting phase shifts \nis almost negligible and the hatched areas degenerate to lines. \n The low-energy results are very similar at NLO and NNLO and reproduce the Nijmegen \nPWA phase shifts quite well.\n These results are better than the perturbative ones in the Born approximation, Eq.~\\eqref{deltab}, which \nare shown by the (cyan) filled areas. \nAs indicated for the uncoupled $F$ waves here OPE overwhelmingly dominates \n the different contribution to the dispersive integral on the r.h.s. of Eq.~\\eqref{highd}. \nThis indicates that these waves are rather perturbative, though still we observe differences \naround 30\\% for $p\\lesssim 300$~MeV in Fig.~\\ref{fig:gw} between the full and perturbative results.\n\n\n\n\n\\section{Uncoupled $H$ waves}\n\\label{hw}\n\n\\begin{figure}\n\\begin{center}\n\\begin{tabular}{cc}\n\\includegraphics[width=.4\\textwidth]{.\/1h5.ps} & \n\\includegraphics[width=.4\\textwidth]{.\/3h5.ps} \n\\end{tabular}\n\\caption[pilf]{\\protect { (Color online.) Phase shifts for $^1H_5$ (left panel) and $^3H_5$ (right panel). \n The (red) hatched areas correspond to the NNLO results while the (magenta) solid lines are \nthe NLO outcome. \n The (cyan) filled bands correspond to $\\delta_B(A)$, \nthe OPE result from Ref.~\\cite{paper1} is the (blue) dotted lines and \nthe Nijmegen PWA is the (black) dashed lines.}\n\\label{fig:hw} }\n\\end{center}\n\\end{figure}\n\nFor the case of the uncoupled $H$ waves, $^1H_5$ and $^3H_5$, we apply the five-time subtracted DRs of \nEqs.~\\eqref{highd} and \\eqref{highn} with $\\ell=5$. \nWe fit $\\delta_5=D^{(4)}(0)\/4 !$ to the Nijmegen PWA phase shifts, which for $\\ell\\geq 5$ \ncorrespond to those obtained from the $NN$ potential model of Ref.~\\cite{obe}, while $\\delta_{2,3,4}$ are fixed to 0 (principle of maximal smoothness). \nWe obtain the fitted values: \n\\begin{align}\n^1H_5:~D^{(4)}(0)&=0.156~ M_\\pi^{-8}~,\\nonumber\\\\\n^3H_5:~D^{(4)}(0)&=0.066~M_\\pi^{-8}.\n\\end{align} \nThe resulting fit is stable if we release $\\delta_i$ $(i=2,3,4)$. \nThe phase shifts obtained are shown by the (red) hatched areas in Fig.~\\ref{fig:hw} by taking into account the spread of the results \ndepending of the set of $c_i$'s chosen. \nIn this figure the left panel corresponds to $^1H_5$ and the right one to $^3H_5$. \n For the former the resulting curve indeed overlaps the Nijmegen PWA phase shifts \\cite{Stoks:1994wp}. \nWe also show by the (cyan) filled bands the phase shifts in the leading Born approximation\n which run rather close to \nthe full results, indeed for the $^3H_5$ case the (cyan) filled band is overlapped by the (red) hatched one. \nThis clearly indicates the perturbative nature for the $H$ waves. \nFor them it is also true that OPE overwhelmingly dominates \nthe dispersive integral on the r.h.s. of Eq.~\\eqref{highd}, which is also the expected behavior \nfor a perturbative partial wave.\nNotice that for the $^1H_5$ wave the dependence on the actual values of the \n$c_i$ coefficients is so small that at the end the hatched and filled areas collapse to lines.\n For the $^3H_5$ case there is a visible, albeit small, dependence on the set of $c_i$'s employed. \nIn both cases the NNLO results reproduce the Nijmegen PWA phase shifts closer than the NLO and OPE results. 
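\n\n For reference, using the relation $\\delta_5=D^{(4)}(0)\/4!$ quoted above, these fitted values correspond to the subtraction constants\n\\begin{align}\n^1H_5:~\\delta_5=\\frac{0.156}{24}\\simeq 6.5\\times 10^{-3}~M_\\pi^{-8}~,\\qquad\n^3H_5:~\\delta_5=\\frac{0.066}{24}\\simeq 2.8\\times 10^{-3}~M_\\pi^{-8}~.\n\\end{align}\n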
\n\n \n\n\\begin{figure}\n\\begin{center}\n\\begin{tabular}{cc}\n\\includegraphics[width=.4\\textwidth]{.\/N3h5.ps} & \n\\includegraphics[width=.4\\textwidth]{.\/D3h5.ps} \n\\end{tabular}\n\\caption[pilf]{\\protect { (Color online.) The functions $N(A)$ and $N_p(A)$ are shown by the (red) solid and (blue) \ndash-dotted lines in the left panel, respectively. The real part of the function $D(A)$ is plotted in the right panel. }\n\\label{fig:hw.nd} }\n\\end{center}\n\\end{figure}\n\nIt is interesting to discuss in this case the behavior of the function $N(A)$ compared with $N_p(A)$, given in Eq.~\\eqref{eq.hw.per}. \nThe main point is that here both $N(A)$ and $D(A)$ have a zero at around 450~MeV.\n We consider only the $^3H_5$ wave because a similar discussion would follow for $^1H_5$, \nwhich we skip for brevity.\n In the left panel of Fig.~\\ref{fig:hw.nd} we show by the (red) solid line the full $N(A)$ \n and by the (blue) dash-dotted line the perturbative result $N_p(A)$. \nWe see that they are very similar, as expected for a partial wave with an $\\ell$ as high as 5.\n In addition, we display in the right panel of the same figure the real part of $D(A)$ from Eq.~\\eqref{highd}, \nwhich is very close to 1, as expected for a situation with a weak interaction as well. \nAll these curves are obtained by employing the $c_i$'s from Ref.~\\cite{aco2013}.\n A bit higher in energy both $N_p(A)$ and $N(A)$ have a zero \n at around $\\sqrt{A}=450$~MeV.\n Since $T(A)=N(A)\/D(A)$ this would imply that $T(A)=0$ at that energy, which is at odds \nwith the values of the phase shifts given by the Nijmegen PWA \\cite{Stoks:1994wp}, which do not vanish at this point. \nThe only remedy is that $D(A)$ also vanishes at the same point, so that \n one has a 0\/0 limit that is finally finite. \nThis is indeed the case, and it is the reason why \n $D(A)$ starts to decrease for $\\sqrt{A}>200$~MeV in Fig.~\\ref{fig:hw.nd}.\n\n\nAnother question of interest is what we have gained by solving Eq.~\\eqref{highd} exactly instead of \n using only the perturbative solution, Eq.~\\eqref{eq.hw.per}, or the Born approximation, Eq.~\\eqref{eq.nborn}, with \nthe related $\\delta_B(A)$, Eq.~\\eqref{deltab}. \nThe main point that one should consider \n in connection with this question is that by solving the full and nonperturbative Eq.~\\eqref{highd} \n(furthermore, in good agreement with data) one can then state that Eq.~\\eqref{eq.hw.per} is a perturbation of a \n well-defined and existing nonperturbative solution. \nBy solving exactly Eq.~\\eqref{highd} we have needed to consider explicitly $\\delta_5$ as a free parameter for the uncoupled \n$H$ waves and fit it to the Nijmegen PWA. \nIndeed, $\\delta_5$ is not only necessary for a good fit, \n but it is also required in order to keep $D(A)\\simeq 1$ at low \nthree-momentum. \nOtherwise, the contribution from the dispersive integral to $D(A)$ on the r.h.s. of Eq.~\\eqref{highn} would \nbe too large and negative and would render the function $N(A)$ too strong, in plain disagreement with $N_p(A)$. \nNotice as well that in the case of the partial wave $^1H_5$ a better reproduction of data is achieved than with $\\delta_B(A)$. \n It is also worth recalling the previous finding in Sec.~\\ref{fw} for the $F$ waves, \nwhere the full results show a much smaller dependence on \nthe set of $c_i$ coefficients used than the perturbative or Born approximation phase shifts, cf. 
Fig.~\\ref{fig:fw}.\n\n\n\\section{Coupled $^3S_1-{^3D_1}$ waves}\n\\label{sd12w}\n\n\nWe start our study of the $^3S_1-{^3D_1}$ coupled-partial-wave system in terms of just one free parameter, \nthat we choose as the pole position of the deuteron in the $A$-complex plane, $k^2_d=-m E_d$, with $E_d= 2.225$~MeV \nthe deuteron binding energy. \n Thus we implement once-subtracted DRs for the $^3S_1$ and twice-subtracted ones for the $^3D_1$. \nIn the case of the mixing partial wave we have a mixed situation with a once-subtracted DR \nfor $N_{12}(A)$ and a twice-subtracted one for $D_{12}(A)$.\n In this way we guarantee both the right threshold behavior as well as the experimental \n deuteron-pole position in all the partial waves. \nWe write now explicitly the DRs considered. For the $^3S_1$ one has,\n\\begin{align}\n\\label{3s1_a}\nD_{11}(A)&=1-\\frac{A}{k_d^2}\\frac{g_{11}(A,0)}{g_{11}(k_d^2,0)}\n+\\frac{A}{\\pi}\\int_{-\\infty}^L dk^2 \\frac{\\Delta_{11}(k^2)D_{11}(k^2)}{k^2}\n\\Bigg[g_{11}(A,k^2)-g_{11}(A,0)\\frac{g_{11}(k_d^2,k^2)}{g_{11}(k_d^2,0)}\\Bigg]~,\\nonumber\\\\\nN_{11}(A)&=\\nu_1^{(11)}+\\frac{A}{\\pi}\\int_{-\\infty}^L dk^2\\frac{\\Delta_{11}(k^2)D_{11}(k^2)}{k^2(k^2-A)}~,\n\\end{align}\nwith all the subtractions taken at $A=0$ and the new function $g_{ij}(A)$ is defined as\n \\begin{align}\ng_{ij}(A,k^2)&=\\frac{1}{\\pi}\\int_0^\\infty dq^2\\frac{\\nu_{ij}(q^2)}{(q^2-A)(q^2-k^2)}~,\n\\label{gij}\n\\end{align}\n The subtraction constant $\\nu_1$ in $N_{11}(A)$ is fixed by imposing that $D_{11}(k_d^2)=0$,\n\\begin{align}\n\\nu_1^{(11)}&=\\frac{1}{k_d^2 \\,g_{11}(k_d^2,0)}\\Bigg[\n1+\\frac{k_d^2}{\\pi}\\int_{-\\infty}^L dk^2 \\frac{\\Delta_{11}(k^2) D_{11}(k^2)}{k^2}g_{11}(k^2,k_d^2)\n\\Bigg]~,\n\\label{once_nu0}\n\\end{align}\na result that is already implemented in Eq.~\\eqref{3s1_a} for $D_{11}(A)$.\n \nThe corresponding DRs for the $^3D_1$ and the mixing wave can be grouped together in the same form,\n\\begin{align}\n\\label{3sd1_a}\nD_{ij}(A)&=1-\\frac{A}{k_d^2}+\\frac{A(A-k_d^2)}{\\pi}\\int_{-\\infty}^L dk^2\\frac{\\Delta_{ij}(k^2)D_{ij}(k^2)}{(k^2)^{\\ell_{ij}}}\ng_{ij}^{(d)}(A,k^2;\\ell_{ij})~,\\nonumber\\\\\nN_{ij}(A)&=\\frac{A^{\\ell_{ij}}}{\\pi}\\int_{-\\infty}^L dk^2\\frac{\\Delta_{ij}(k^2)D_{ij}(k^2)}{(k^2)^{\\ell_{ij}}(k^2-A)}~.\n\\end{align}\nwhere $\\ell_{12}=1$ and $\\ell_{22}=2$ and all the subtractions for the $N_{ij}(A)$ are taken at $A=0$,\n while in the function $D(A)$ one is taken at $A=0$ and \nthe other at $A=k_d^2$. The function $g^{(d)}_{ij}(A,k^2;m)$ is defined as\n\\begin{align}\ng^{(d)}_{ij}(A,k^2;m)&=\\frac{1}{\\pi}\\int_0^\\infty dq^2\\frac{\\nu_{ij}(q^2)(q^2)^{m-1}}{(q^2-A)(q^2-k^2)(q^2-k_d^2)}~.\n\\label{gijd}\n\\end{align}\n\nThe results obtained by solving the IEs for the functions $D_{ij}(A)$ along the LHC\n from Eqs.~\\eqref{3s1_a} and \\eqref{3sd1_a} are \nshown in Fig.~\\ref{fig:3sd1_a} by the (cyan) filled areas.\n These results are indicated as NNLO-I and \n all the subtraction constants are fixed in terms of $k_d^2$, without any other freedom. 
\n The spread in the results originates by taking different sets of $c_i$'s from Refs.~\\cite{epe12,aco2013} \nand varying the input in the iterative procedure.\n The present NNLO calculation from Eqs.~\\eqref{3s1_a} and \\eqref{3sd1_a}\n reproduces the Nijmegen PWA mixing angle $\\epsilon_1$ much better than the NLO \nresult from the same set of equations, which is shown by the (magenta) dot-dashed lines.\n This improvement in the description of $\\epsilon_1$ \nwhen passing from NLO to NNLO is also seen in Ref.~\\cite{thesis} by \nemploying the Weinberg scheme. \nThe $^3S_1$ phase shifts are also reproduced better at NNLO than at NLO, while \nthe $^3D_1$ phase shifts are somewhat worse described by the former. \nOur results for the $^3S_1$ and $^3D_1$ phase shifts are quite similar to those obtained \nin Ref.~\\cite{pavon06}, but not for $\\epsilon_1$ where our outcome is closer to the Nijmegen PWA. \nThe comparison is not so straightforward with the results of Ref.~\\cite{phillipssw}, which depend\n very much on the type of chiral $NN$ potential used.\nFor the $^3S_1-{^3D_1}$ coupled partial waves we do not show the Born approximation results in Fig.~\\ref{fig:3sd1_a} because \nthey are specially poor, see e.g. Refs.\\cite{epe04,peripheral} for the $^3D_1$ phase shifts.\n\n\\begin{figure}[h]\n\\begin{center}\n\\begin{tabular}{cc}\n\\includegraphics[width=.4\\textwidth]{.\/3s1_all.ps} & \n\\includegraphics[width=.4\\textwidth]{.\/3d1_all.ps}\\\\ \n\\includegraphics[width=.4\\textwidth]{.\/e1_all.ps} \n\\end{tabular}\n\\caption[pilf]{\\protect { (Color online.) From left to right and top to bottom:\n Phase shifts for $^3S_1$, $^3D_1$ and the mixing angle $\\epsilon_1$, respectively.\nThe (cyan) filled areas correspond to the NNLO-I outcome obtained by solving \n Eqs.~\\eqref{3s1_a} and \\eqref{3sd1_a}. The hatched areas with (red) crossed lines are the NNLO-II results \n that stem from Eqs.~\\eqref{2subs.3s1}, \\eqref{2subs.mix} and \\eqref{3sd1_a}. \nIn addition, for the $^3D_1$ we show by the hatched areas with (gray) parallel lines the results obtained \nby employing three-time subtracted DRs for $^3D_1$, Eq.~\\eqref{3d1.extra}. \nAs usual, the (magenta) dot-dashed lines are the NLO phase shifts and mixing angle, \n the LO ones are given by the (blue) dotted lines and the \nNijmegen PWA results correspond to the (black) dashed lines.}\n\\label{fig:3sd1_a} }\n\\end{center}\n\\end{figure}\n\nWe can also predict from Eqs.\\eqref{3s1_a} and \\eqref{3sd1_a} the $^3S_1$ scattering length ($a_t$) and effective range ($r_t$). The former is given in terms of $\\nu_1^{(11)}$, Eq.~\\eqref{once_nu0}, as \n\\begin{align}\na_t&=-\\frac{m \\nu_1^{(11)}}{4\\pi}~.\n\\end{align} \nRegarding $r_t$ we can proceed similarly as discussed in detail in Ref.~\\cite{gor2013} \nwhere the following expression is derived, \n\\begin{align}\nr_t=-\\frac{m}{2\\pi^2 a_t}\\int_{-\\infty}^L dk^2\\frac{\\Delta_{11}(k^2)D_{11}(k^2)}{(k^2)^2}\\left\\{\\frac{1}{a_t}+\\frac{4\\pi k^2}{m}g_{11}(0,k^2) \\right\\}-\\frac{8}{m}\\int_0^\\infty dq^2\\frac{\\nu_{11}(q^2)-\\rho(q^2)}{(q^2)^2}~,\n\\label{3s1.rt}\n\\end{align}\nThis equation also exhibits a correlation between $r_t$ and $a_t$, although in a more complicated manner than \nfor the $^1S_0$ partial wave, as shown in Eq.~\\eqref{expvs.1s0}, because $\\nu_{11}(A)$ depends nonlinearly on $D_{11}(A)$. 
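\n As a rough numerical illustration of the relation between $a_t$ and $\\nu_1^{(11)}$ given above (the precise numbers depend on the masses employed), taking $m\\simeq 939$~MeV, $M_\\pi\\simeq 138$~MeV and $\\hbar c\\simeq 197.3$~MeV\\,fm, the NNLO-I value $a_t\\simeq 5.52$~fm of Table~\\ref{table:eta} corresponds to\n\\begin{align}\n\\nu_1^{(11)}=-\\frac{4\\pi a_t}{m}\\simeq -\\frac{4\\pi\\times 3.86~M_\\pi^{-1}}{6.80~M_\\pi}\\simeq -7.1~M_\\pi^{-2}~.\n\\end{align}\n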
\n \nAnother observable that we also consider is the slope at threshold of $\\epsilon_1$, \nindicated as $a_\\varepsilon$, and defined by \n\\begin{align}\na_\\varepsilon=\\lim_{A\\to 0^+}\\frac{\\sin 2\\epsilon_1}{A^\\frac{3}{2}}=1.128~M_\\pi^{-3}~,\n\\label{avarepsilon}\n\\end{align}\nwhere the numerical value is deduced from the Nijmegen PWA phase shifts. From the DRs in Eq.~\\eqref{3sd1_a} we obtain \nthe following expression for $a_\\varepsilon$,\n\\begin{align}\na_\\varepsilon&=\\frac{m}{4\\pi^2}\\int_{-\\infty}^L dk^2\\frac{\\Delta_{ij}(k^2)D_{ij}(k^2)}{(k^2)^2}~.\n\\label{aep.predicted}\n\\end{align}\n\nIt is also interesting to diagonalize the $^3S_1-{^3D_1}$ $S$-matrix around the deuteron pole position. This can \nbe done by means of a real orthogonal matrix \\cite{swart},\n\\begin{align}\n{\\cal O}&=\\left(\n\\begin{array}{ll}\n \\cos \\varepsilon_1 & -\\sin\\varepsilon_1 \\\\\n\\sin\\varepsilon_1 & \\cos \\varepsilon_1\n\\end{array}\n\\right)~.\n\\label{mat.ort}\n\\end{align}\nSuch that\n\\begin{align}\nS&={\\cal O}\\left(\n\\begin{array}{ll}\nS_0 & 0 \\\\\n0 & S_2\n\\end{array}\n\\right) {\\cal O}^T~,\n\\label{eigen3s1}\n\\end{align}\nwith $S_0$ and $S_2$ the $S$-matrix eigenvalues. The asymptotic $D\/S$ ratio of the deuteron, $\\eta$, \ncan be expressed in terms of $\\varepsilon_1$ as\n\\begin{align}\n\\eta=-\\tan \\varepsilon_1~.\n\\label{eta.ep}\n\\end{align}\nThe residue of $S_0$ at the deuteron pole position is denoted by $N_p^2$,\n\\begin{align}\nN_p^2&=\\lim_{A\\rightarrow k_d^2} \\left(\\sqrt{-k_d^2}+i\\sqrt{A}\\right) S_0~.\n\\end{align} \n\nAs discussed in Ref.~\\cite{cohen} the shape parameters are a good testing ground for the range of \napplicability of the underlying EFT. \nWe then study our results for the shape parameters of the lowest \neigenphase $\\delta_0$ (also called $^3S_1$ eigenphase), Eq.~\\eqref{eigen3s1}, with the diagonalization of the \n$S$-matrix performed in the physical region $A\\geq 0$,\\footnote{This can also be done in terms of an \northogonal matrix Eq.~\\eqref{mat.ort} because of two-body unitarity.}\n \\begin{align}\n\\sqrt{A}\\,\\text{cot}\\delta_0=-\\frac{1}{a_t}+\\frac{1}{2}r_t A +\\sum_{i=2}^{10}v_i A^i+{\\cal O}(A^{11})~.\n\\label{3s1.ere}\n\\end{align}\n\n\n\\begin{table}\n\\begin{center}\n\\begin{tabular}{|l|l|l|l|l|l|}\n\\hline\n & $a_t$ [fm] & $r_t$ [fm] & $\\eta$ & $ N_p^2$ [fm$^{-1}$] & $a_\\varepsilon$ [$M_\\pi^{-3}$] \\\\\n\\hline\nNLO & 5.22 & 1.47 & 0.0295 & 0.714 & 1.372 \\\\\n\\hline\nNNLO-I & $5.52(3)$ & $1.89(3)$ & $0.0242(3)$ & $0.818(10)$& $1.270(9)$\\\\\n\\hline\nNNLO-II & $5.5424^\\star$ & $ 1.759^\\star$ & $0.02535(13)$ & $0.78173(2)$ & $1.293(8)$\\\\\n\\hline\nRef.~\\cite{swart}& $5.4194(20)$ & $1.7536(25)$ & $0.0253(2)$ & $0.7830(15)$ & \\\\\n\\hline\nRef.~\\cite{thesis} & $5.424$ & $1.753$ & 0.0245 & & \\\\ \n\\hline\n\\end{tabular}\n\\caption{Values for $a_t$, $r_t$, $\\eta$, $N_p^2$ and $a_\\varepsilon$. The results \n predicted from Eqs.~\\eqref{3s1_a} \n and \\eqref{3sd1_a} are given in the second (NLO) and third row (NNLO-I). \nThe values given in the fourth row (NNLO-II) are obtained once $a_t$ and $r_t$ are fixed to the experimental \n figures, which is indicated by a star on top of the values. \nWe also show the results from Refs.~\\cite{swart} and \\cite{thesis} in the fifth and sixth rows, respectively. 
\n\\label{table:eta}}\n\\end{center}\n\\end{table}\n\nThe scattering length and effective range in the previous equation are the same as given above because \ncoupled-wave effects with the $^3D_1$ only affects the shape parameters $v_i$, $i\\geq 2$. The values obtained at NLO and NNLO from Eqs.~\\eqref{3s1_a} and \\eqref{3sd1_a} for $a_t$, $r_t$, $\\eta$, \n$N_p^2$ and $a_\\varepsilon$ are shown in Table~\\ref{table:eta} in the \nsecond and third rows, respectively. \nWe observe that the numbers at NNLO (indicated by NNLO-I) are already \n rather close to those of Ref.~\\cite{swart}, obtained from the Nijmegen PWA of $n p$ data,\n and Ref.~\\cite{thesis}. \nIt is interesting to remark that our value for $r_t$ is a prediction in terms of only one \nsubtraction constant (fixed by the deuteron pole position) and \n$NN$ forces stemming from $\\pi N$ physics. This value deviates from experiment \n $r_t=1.759\\pm 0.005$~fm \\cite{thesis} around a $10\\%$ at NNLO ($\\sim 20\\%$ at NLO), while the relative \n experimental error is around $3\\%$. \n Other determinations for the parameter $\\eta$, not shown in Table~\\ref{table:eta}, are $\\eta=0.0256(4)$ \\cite{rodning}, \n$\\eta=0.0271(4)$ \\cite{ericson:82}, \n$\\eta=0.0263(13)$ \\cite{conzett:79} and \n$\\eta=0.0268(7)$ \\cite{martorell}.\n\n\\begin{table}\n\\begin{center}\n\\begin{tabular}{|l|l|l|l|l|l|}\n\\hline\n & $v_2$ & $v_3$ & $v_4$ & $v_5$ & $v_6$ \\\\\n\\hline\nNLO & -0.10572(12)& 0.8818(11) & $-5.427(11)$ & 36.73(11) & $-259.9(1.1)$ \\\\\n\\hline\nNNLO-I & 0.157(22)& 0.645(9) & $-3.41(13)$ & 23.2(8) & $-161(6)$ \\\\\n\\hline\nNNLO-II & $0.0848(4)$ & $0.762(7)$ & $-4.33(2)$ & $29.0(2)$ & $-198(2)$\\\\\n\\hline\nRef.~\\cite{swart} & $0.040(7)$ & $0.673(2)$ & $-3.95(5)$ & $27.0(3)$ & \\\\\n\\hline\nRef.~\\cite{thesis} & $0.046$ & $0.67$ & $-3.9$ & & \\\\\n\\hline\n\\end{tabular}\n\\caption{Values for the shape parameters $v_i$, $i=2,\\ldots,6$ in units of fm$^{2i-1}$.\n The results \n predicted from Eqs.~\\eqref{3s1_a}\n and \\eqref{3sd1_a} are given in the second (NLO) and third row (NNLO-I). \nThe errors for the NLO results correspond entirely to the numerical accuracy in the calculation. \nThose values corresponding to NNLO-II are given in the fourth row. \nThe values from Refs.~\\cite{swart} and \\cite{thesis} appear in the fifth and sixth rows, in order.\n\\label{table:vs3s1a}}\n\\end{center}\n\\end{table}\n\\begin{table}\n\\begin{center}\n\\begin{tabular}{|l|l|l|l|l|}\n\\hline\n & $v_7$ & $v_8\\times 10^{-1}$ & $v_{9}\\times 10^{-2}$ & $v_{10}\\times 10^{-3}$ \\\\\n\\hline\nNLO & 1867(11) & $-1375(11)$ & $1008(11)$ & $-760(12)$ \\\\\n\\hline\nNNLO-I & 1161(41) & $-840(30)$ & $625(22)$ & $-463(17)$ \\\\ \n\\hline\nNNLO-II& $1426(13)$ & $-1015(15)$ & $764(17)$ & $-545(20)$\\\\\n\\hline \n\\end{tabular}\n\\caption{Values for the shape parameters $v_i$, $i=7,\\ldots,10$ in units of fm$^{2i-1}$. \nFor the meanings of the rows see Table~\\ref{table:vs3s1a}.\n\\label{table:vs3s1b}}\n\\end{center}\n\\end{table}\n\nThe values for the shape parameters $v_i$, $i=2,\\ldots,6$, are given in Table~\\ref{table:vs3s1a} and \nfor $i=7,\\ldots,10$ in Table~\\ref{table:vs3s1b}. Up to \nour knowledge the values of the shape parameters with $i>5$ were not given before. 
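 In practice, and as follows directly from Eq.~\\eqref{3s1.ere}, the shape parameters are the Taylor coefficients of the eigenphase effective-range function,\n\\begin{align}\nv_i=\\frac{1}{i!}\\,\\frac{d^i}{dA^i}\\Big[\\sqrt{A}\\,\\text{cot}\\delta_0(A)\\Big]\\bigg|_{A=0}~,\\qquad i\\geq 2~,\n\\end{align}\nso that quoting $v_{10}$ requires numerical derivatives of the calculated amplitude up to tenth order.\n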
We detail in \nAppendix \\ref{appen:vs} the numerical method that allows us to perform the appropriate \nderivatives up to such high orders.\\footnote{For example, in Ref.~\\cite{swart} it is stated that their numerical \nsetup is not precise enough to calculate $v_6$ and that this already casts doubts on the numerical \n accuracy for $v_5$.} \n We could have also given shape parameters of even higher orders\n within a numerical precision of a few per cent, \nbut this is skipped because of their apparently little relevance in practice.\n One can appreciate the numerical precision in the calculation of the shape parameters by considering the errors in \n Tables~\\ref{table:vs3s1a} and \\ref{table:vs3s1b} for the NLO results, which entirely correspond to the numerical accuracy.\n Notice that for the highest shape parameter shown, $v_{10}$, its relative error is 1.5$\\%$, \njust slightly worse than for $v_9$ with a relative error of 1.1\\%. \nWe then see that, as the order of the shape parameter increases, \nthe numerical accuracy worsens only gradually. \nMoreover, the errors at NNLO additionally take into account \nthe variation in the results from the different sets of $c_i$'s employed and the dependence on the \ninput for starting the iterative process.\n For the shape parameters with large order, $i\\geq 5$, their absolute values increase \ntypically as ${\\cal O}(1\/M_\\pi)^{2i-1}$, which is the expected behavior for long-range interactions\n mediated by OPE. It is clear from Table~\\ref{table:vs3s1a} that the shape parameters $v_i$, $i=2,\\ldots,5$, \npredicted by the NNLO-I calculation (third row) are typically\n closer to the values of Refs.~\\cite{swart,thesis} than those at NLO (second row). \nThis is a positive feature indicating a well-behaved expansion of the results obtained by \napplying the $N\/D$ method with the discontinuity $\\Delta(A)$ expanded in BChPT.\n\n\nAccording to the power counting for the subtraction constants, Eq.~\\eqref{summarypwc}, at NNLO it is appropriate\n to consider twice-subtracted DRs. \nFor the $^3S_1-{^3D_1}$ system this implies taking into account two more free parameters for the $^3S_1$ \nwave and one more for the mixing partial wave.\n The three parameters for the $^3S_1$ wave are fixed in terms \nof the experimental values of $k_d^2$, $r_t$ and $a_t$.\nThe DR for the $^3D_1$ wave is the same as in Eq.~\\eqref{3sd1_a}. \n The twice-subtracted DRs now taken for the $^3S_1$ partial wave are\n\\begin{align}\nD_{11}(A)&=1-\\frac{A}{k_d^2}-\\nu_1^{(11)} \\,A(A-k_d^2) g_{11}^{(d)}(A,0;1)\n-\\nu_2^{(11)} \\,A(A-k_d^2) g_{11}(A,k_d^2)\\nonumber\\\\\n&+\\frac{A(A-k_d^2)}{\\pi}\\int_{-\\infty}^L dk^2\\frac{\\Delta_{11}(k^2)D_{11}(k^2)}{(k^2)^2} \ng_{11}^{(d)}(A,k^2;2)~,\\nonumber\\\\\nN_{11}(A)&=\\nu_1^{(11)}+\\nu_2^{(11)}\\,A+\\frac{A^2}{\\pi}\\int_{-\\infty}^L dk^2\\frac{\\Delta_{11}(k^2)D_{11}(k^2)}{(k^2)^2(k^2-A)}~,\\nonumber\\\\\n\\nu_1^{(11)}&=-\\frac{4\\pi a_t}{m}~,\\nonumber\\\\\n\\nu_2^{(11)}&=\\frac{\\nu_1^{(11)}}{\\nu_1^{(11)} \\,k_d^2\\, g_{11}(0,k_d^2)-1}\\left\\{\n\\frac{1}{k_d^2}+a_t\\Bigg(\n\\frac{4 k_d^2}{m}\\int_0^\\infty dq^2\\frac{\\nu_{11}(q^2)-\\rho(q^2)}{(q^2)^2(q^2-k_d^2)}+\\frac{1}{\\sqrt{-k_d^2}}\n-\\frac{r_t}{2}\\Bigg)\\right. \\nonumber\\\\\n&\\left. 
+\\frac{k_d^2}{\\pi}\\int_{-\\infty}^L dk^2 \\frac{\\Delta_{11}(k^2)D_{11}(k^2)}{(k^2)^2}g_{11}(k_d^2,k^2)\n\\right\\}~.\n\\label{2subs.3s1}\n\\end{align}\n\nFor the mixing partial wave the DRs are \n\\begin{align}\nD_{12}(A)&=1-\\frac{A}{k_d^2}-\\nu_2^{(12)} A(A-k_d^2)g_{12}(A,k_d^2)\n+\\frac{A(A-k_d^2)}{\\pi}\\int_{-\\infty}^L dk^2\\frac{\\Delta_{12}(k^2)D_{12}(k^2)}{(k^2)^2}g_{12}^{(d)}(A,k^2;2)~,\\nonumber\\\\\nN_{12}(A)&=\\nu_2^{(12)} A+\\frac{A^2}{\\pi}\\int_{-\\infty}^L dk^2\\frac{\\Delta_{12}(k^2)D_{12}(k^2)}{(k^2)^2(k^2-A)}~.\n\\label{2subs.mix}\n\\end{align}\n The results obtained by solving the IEs of Eqs.~\\eqref{2subs.3s1}, \\eqref{2subs.mix} and Eq.~\\eqref{3sd1_a} \nwith $\\ell_{22}=2$ are denoted in the following by NNLO-II and correspond to the (red) hatched areas with crossed lines \nin Fig.~\\ref{fig:3sd1_a}. \nIt turns out that we cannot obtain a solution of the resulting IE for $D_{12}(A)$ for \nan arbitrary value of $\\nu_2^{(12)}$.\n We have further checked this statement by employing the following expression for $\\nu_2^{(12)}$,\n\\begin{align}\n\\nu_{2}^{(12)}&=\\frac{\\Theta}{2\\pi}\\int_{-\\infty}^L dk^2\\frac{\\Delta_{ij}(k^2)D_{ij}(k^2)}{(k^2)^2}~.\n\\end{align}\nHere, the integral is the same as in Eq.~\\eqref{aep.predicted}, so that if we take $\\Theta=1$ we would simply \nrewrite the IE of Eq.~\\eqref{3sd1_a} in terms of twice-subtracted DRs. Then, we vary $\\Theta$ and whenever we find \na meaningful solution the obtained value for $a_\\varepsilon=m\\nu_2^{(12)}\/2\\pi$ is always basically the same, \n$a_\\varepsilon\\simeq 1.30~M_\\pi^{-3}$.\n In our opinion, this difficulty of our approach in reproducing the value for $a_\\varepsilon$ \n that follows from the Nijmegen PWA, Eq.~\\eqref{avarepsilon}, casts doubt on this number. \n Notice that the calculated values for $\\epsilon_1$ at low momentum, \ne.g. for $\\sqrt{A}\\lesssim 100$~MeV, lie on top of the curve for the Nijmegen PWA results,\n as shown in the third panel of Fig.~\\ref{fig:3sd1_a} by the coincident hatched and filled areas that overlap the Nijmegen PWA line. \n The phase shifts and $\\epsilon_1$ are quite similar to the NNLO-I results obtained in terms of just one free parameter.\nNevertheless, the $^3S_1$ phase shifts for NNLO-II are closer to the Nijmegen PWA ones \nat lower energies, but the change for this S-wave by going from once- to twice-subtracted DRs\n is much less noticeable than in the case of the partial wave $^1S_0$, discussed in Sec.~\\ref{1s0}. \nWe can also see in the fourth row of\n Table~\\ref{table:eta} that the NNLO-II values for $\\eta$ and $N_p^2$ are compatible \nwith those of Ref.~\\cite{swart}, which is quite remarkable.\n The value for $a_\\varepsilon$ mentioned above is shown in \nthe last column of the same table.\n The shape parameters are shown in the fourth rows of Tables~\\ref{table:vs3s1a} \nand \\ref{table:vs3s1b}, where we observe \na better agreement with the numbers given in Ref.~\\cite{swart} for \n $v_4$ and $v_5$ than for $v_2$ and $v_3$. The variation of the values\n between NNLO-I and NNLO-II for the higher order shape parameters allows us to estimate \nin a conservative way the systematic uncertainty affecting their calculation.
\n\nOn the other hand, we would like to elaborate further on the fact that at NNLO the results for the $^3D_1$ phase \nshifts do not still offer a good reproduction of the Nijmegen PWA ones, being even worse than those obtained \nat NLO.\n In Ref.~\\cite{epe04} one can find a discussion on the difficulties arisen in their calculation \n because of the large values of the NLO $\\pi N$ \ncounterterms, namely $c_3$ and $c_4$, in order to reproduce simultaneously the $D$ and $F$ waves \nwithin the Weinberg scheme using the NNLO chiral potential \ncalculated in dimensional regularization. \n Considering this observation \n we obtain that when all the $c_i=0$ our NNLO result \nfor $\\delta_2$ is then essentially the same as the NLO one in Fig.~\\ref{fig:3sd1_a}, \ncorresponding to the (magenta) dot-dashed line. \n In view of this, we study now the influence in the results by including one more subtraction in the DRs for $^3D_1$ \nwith the aim of determining whether this worsening \nis an effect that can be counterbalanced in a natural way at ${\\cal O}(p^4)$. \n In this way we use the same twice-subtracted DRs for $^3S_1$ and the mixing partial wave \ngiven in Eqs.~\\eqref{2subs.3s1} and \\eqref{2subs.mix}, respectively, while \n the following three-time subtracted DRs are used for the $^3D_1$\n\\begin{align}\nD_{22}(A)&=1-\\frac{A}{k_d^2}+\\delta_3^{(22)} A(A-k_d^2)\n-\\nu_3^{(22)} A(A-k_d^2)^2\\frac{\\partial g_{22}^{(d)}(A,0;2)}{\\partial k_d^2}\\nonumber\\\\\n&+\\frac{A(A-k_d^2)^2}{\\pi}\\int_{-\\infty}^L dk^2 \\frac{\\Delta_{22}(k^2)D_{22}(k^2)}{(k^2)^3} \n\\frac{\\partial g_{22}^{(d)}(A,k^2;3)}{\\partial k_d^2}~,\\nonumber\\\\\nN_{22}(A)&=\\nu_3^{(22)} A^2+\\frac{A^3}{\\pi}\\int_{-\\infty}^L dk^2 \\frac{\\Delta_{22}(k^2)D_{22}(k^2)}{(k^2)^3(k^2-A)}~,\n\\label{3d1.extra}\n\\end{align}\nwith two additional subtraction constants $\\delta_3^{(22)}$ and $\\nu_3^{(22)}$. \nConsidering the results obtained from the twice-subtracted DRs for all the waves in the system\n $^3S_1-{^3D_1}$, and denoting by $\\hat{D}_{22}(A)$ the function $D_{22}(A)$ obtained then, we have \nthe following predictions for the subtraction constants $\\delta_3^{(22)}$ and $\\nu_3^{(22)}$, \n\\begin{align}\n\\nu_3^{\\mathrm{pred}}&=\\frac{1}{\\pi}\\int_{-\\infty}^L dk^2 \\frac{\\Delta_{22}(k^2)\\hat{D}_{22}(k^2)}{(k^2)^3}~,\\nonumber\\\\\n\\delta_3^{\\mathrm{pred}}&=\\frac{1}{\\pi}\\int_{-\\infty}^L dk^2 \\frac{\\Delta_{22}(k^2) \\hat{D}_{22}(k^2)}{(k^2)^2}g_{22}^{(d)}(k^2,k_d^2;2)~.\n\\label{nudelta2pre}\n\\end{align}\nThe numerical values that stem from the previous expressions are \n$\\delta_3^{\\mathrm{pred}}\\simeq 1~m_\\pi^{-4}$ and $\\nu_3^{\\mathrm{pred}}\\simeq -2.5~m_\\pi^{-6}$~. 
A fit \nto the $^3D_1$ phase shifts only requires varying $\\nu_3^{(22)}$ around that value, with the final \nresult $\\nu^{(22)}_3= -2.05(5)$~$M_\\pi^{-6}$, while \n$\\delta^{(22)}_3$ is kept at its predicted value.\n Thus, only a relatively small change of around 20\\% in \n $\\nu_3^{(22)}$ \n with respect to the value predicted by the twice-subtracted DRs in Eq.~\\eqref{nudelta2pre} is necessary in order to end up with a much better\n reproduction of the \n$^3D_1$ phase shifts, compatible with the Nijmegen PWA, as shown \n by the hatched areas with (gray) parallel lines in Fig.~\\ref{fig:3sd1_a} (denoted as \nNNLO-III results).\n Since the reproduction of \nthe $^3S_1$ phase shifts and mixing angle $\\epsilon_1$ is the same as the one obtained already in terms of the \ntwice-subtracted DRs, the so-called NNLO-II results, we do not show them, \nnor the values for the other parameters given in Tables~\\ref{table:eta}, \\ref{table:vs3s1a} and \\ref{table:vs3s1b}, \nwhich would also be basically coincident with the NNLO-II ones in these tables. \n\n We now elaborate on the difference between the value of $\\nu_3^{(22)}$ fitted and the one predicted, $\\nu_3^{\\mathrm{pred}}$. \nAccording to the power counting of Sec.~\\ref{nschpt}, cf. Eq.~\\eqref{summarypwc}, $\\nu_3^{(22)}={\\cal O}(p^{-1})$\n in our present NNLO calculation. If we consider that this difference is an effect that stems from the ${\\cal O}(p^4)$ \ncontributions to $\\Delta(A)$, which are not considered here yet, one would have that \n$\\delta\\nu_3 \\equiv \\nu_3^{(22)}-\\nu_3^{\\mathrm{pred}}\\simeq 0.6~M_\\pi^{-6} \n={\\cal O}(p^0)$. It also follows then that nominally\n $\\delta \\nu_3\/\\nu_3^{\\mathrm{pred}}={\\cal O}(p)$ and, taking into account the \n numerical values,\n\\begin{align}\n\\frac{\\delta \\nu_3}{\\nu_3^{\\mathrm{pred}}}=0.23\\sim \\frac{M_\\pi}{\\Lambda}~,\n\\end{align}\nwe can estimate that $\\Lambda\\sim 4 M_\\pi$, which is \n similar to the estimate of $\\Lambda$ obtained in Sec.~\\ref{1s0} \nfor the $^1S_0$ partial wave.\n As a result, $\\delta \\nu_3$ is consistent with a naturally sized ${\\cal O}(p^4)$ effect.\n\nThe fact that the matrix of limiting values\n\\begin{align}\nM_{ij}=\\lim_{A\\to -\\infty}\\frac{\\Delta_{ij}(A)}{(-A)^{3\/2}}\n\\label{mij.3s1}\n\\end{align}\nhas two negative eigenvalues is certainly related to the possibility of obtaining \nmeaningful DRs with only one free parameter, as first obtained in this section. \nWe base this statement on the necessary condition of Ref.~\\cite{gor2013} \nfor obtaining meaningful once-subtracted DRs for $\\lambda<0$,\n a condition also introduced in Sec.~\\ref{nschpt}. \nIndeed, since the mixing between different partial waves is very small \nthese eigenvalues are given in good approximation by $M_{11}$ and $M_{22}$; this rule applies \nnot only to the $^3S_1-{^3D_1}$ coupled waves but to any other one.\n\n\\section{Coupled $^3P_2-{^3F_2}$ waves}\n\\label{pf2w}\n\n\\begin{figure}[h]\n\\begin{center}\n\\begin{tabular}{cc}\n\\includegraphics[width=.4\\textwidth]{.\/3p2.ps} & \n\\includegraphics[width=.4\\textwidth]{.\/3f2.ps}\\\\ \n\\includegraphics[width=.4\\textwidth]{.\/e2.ps} \n\\end{tabular}\n\\caption[pilf]{\\protect { (Color online. 
From top to bottom and left to right: Phase shifts for $^3P_2$, $^3F_2$ and the mixing angle $\\epsilon_2$, respectively.\nThe (red) hatched areas correspond to the NNLO results and the (cyan) filled bands are the leading Born approximation results.\n The NLO phase shifts and mixing angle are shown by the (magenta) dot-dashed lines and the LO ones are given by the \n (blue) dotted lines.\n The Nijmegen PWA phase shifts correspond to the (black) dashed lines.\n} \n\\label{fig:3pf2} }\n\\end{center}\n\\end{figure}\n\nWe dedicate this section to the study of the coupled wave system $^3P_2-{^3F_2}$. By direct computation \none has in this case that \n\\begin{align}\n\\lambda_{11}=\\lim_{A\\to -\\infty}\\frac{\\Delta_{11}(A)}{(-A)^{3\/2}}>0~,\n\\label{couplambdap}\n\\end{align}\nwhich requires one to consider DRs with more than one subtraction for the $^3P_2$ wave \\cite{gor2013}.\n Indeed, similarly to the $^3P_0$ and $^3P_1$ partial waves, studied in Secs.~\\ref{3p0} and \\ref{3p1}, respectively, \nwe need to take at least three subtractions in the DRs for the $^3P_2$ wave in order to obtain \nstable and meaningful results.\n Thus, we have the following three-time subtracted DRs for the $^3P_2$ wave, \n\\begin{align}\n\\label{d.3p2}\nD_{11}(A)&=1+\\delta^{(11)}_2 A+\\delta^{(11)}_3 A(A-C)-\\nu^{(11)}_{2}\\frac{A(A-C)^2}{\\pi}\\int_0^\\infty dq^2\\frac{\\nu_{11}(q^2)}{(q^2-A)(q^2-C)^2}\\nonumber\\\\\n&-\\nu^{(11)}_3\\frac{A(A-C)^2}{\\pi}\\int_0^\\infty dq^2\\frac{\\nu_{11}(q^2)q^2}{(q^2-A)(q^2-C)^2}\\nonumber\\\\\n&+\\frac{A(A-C)^2}{\\pi}\\int_{-\\infty}^L dk^2\n\\frac{\\Delta_{11}(k^2)D_{11}(k^2)}{(k^2)^3} g_{11}(A,k^2,C;2)~,\\\\\n\\label{n.3p2}\nN_{11}(A)&=\\nu^{(11)}_2 A+\\nu^{(11)}_3 A^2+\\frac{A^3}{\\pi}\\int_{-\\infty}^L dk^2 \\frac{\\Delta_{11}(k^2)D_{11}(k^2)}{(k^2)^3(k^2-A)}~.\n\\end{align}\nWith respect to the mixing and $^3F_2$ partial waves we use the standard formalism \n for the coupled waves given in Eqs.~\\eqref{highdcc} and \\eqref{highncc} with $\\ell_{12}=2$ and $\\ell_{22}=3$, respectively. \nAs a result 2 and 3 subtractions are taken in order.\n\n As usual for the $P$ waves, we fix $\\nu_2^{(11)}=4\\pi a_V\/m$ by requiring the exact reproduction of the $^3P_2$ scattering volume \nextracted from the Nijmegen PWA \\cite{Stoks:1994wp},\n\\begin{align}\na_V=0.0964~M_\\pi^{-3}~,\n\\end{align}\nwhile $\\nu_3^{(11)}$ is fitted to the results of this PWA. \nRegarding the subtraction constants $\\delta_i^{(11)}$, $i=1,$~2, we follow the principle of maximal smoothness in virtue of which we fix $\\delta^{(11)}_{2}=0$\n and fit $D_{11}^{(1)}(-M_\\pi^2)$.\\footnote{In the following we use \n$D_{ij}^{p-2}(-M_\\pi^2)$ as free parameter in terms of which one can calculate $\\delta_{p}^{(ij)}$ from Eq.~\\eqref{taylor}.} \n The resulting fitted values are: \n\\begin{align}\nD_{11}^{(11)}(-M_\\pi^2)&=0.025(5)~M_\\pi^{-2}~,\\nonumber\\\\\n\\nu_3^{(11)}&=0.155(5)~M_\\pi^{-6}~,\\\\\nD_{22}^{(11)}(-M_\\pi^2)&=0.011(4)~M_\\pi^{-2}~,\n\\end{align}\n with the interval of values reflecting the dependence on the $c_i$'s chosen.\n The free parameter associated with the mixing wave is fixed to its pure\n perturbative value, cf. Sec.~\\ref{hpw}, $D_{12}(-M_\\pi^2)=1$.\n\nAll in all the resulting phase shifts are shown by the (red) hatched areas in Fig.~\\ref{fig:3pf2}. 
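\n For orientation, the value of the subtraction constant $\\nu_2^{(11)}$ fixed above from the scattering volume can be checked explicitly: taking $m\\simeq 939~\\text{MeV}\\simeq 6.80\\,M_\\pi$ (with $M_\\pi\\simeq 138$~MeV), one finds\n\\begin{align}\n\\nu_2^{(11)}=\\frac{4\\pi a_V}{m}\\simeq\\frac{4\\pi\\times 0.0964~M_\\pi^{-3}}{6.80~M_\\pi}\\simeq 0.178~M_\\pi^{-4}~,\n\\end{align}\nwhich is the number quoted for this constant in Table~\\ref{tab:allparam}.\n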
\nIn Fig.~\\ref{fig:3pf2} we see a clear improvement at NNLO in the reproduction \nof the $^3P_2$ phase shifts compared with the results at NLO, given by the (magenta) dot-dashed lines, so that now the (red) hatched area overlaps the Nijmegen PWA phase shifts.\n The $^3F_2$ phase shifts and mixing angle $\\epsilon_2$ are reproduced with a similar quality to that already achieved at NLO.\n We also give by the (cyan) filled bands the results obtained by the leading Born approximation, Eq.~\\eqref{deltab}, \nwith $\\Delta(A)$ calculated at NNLO. Due to the fact that the latter diverges as $(-A)^{3\/2}$ for $A\\to-\\infty$, at least two \nsubtractions have to be taken in the DR for $N_B(A)$, Eq.~\\eqref{eq.nborn}. \nThis is immediately accomplished for the $D$ and \nhigher partial waves, but for a $P$-wave with $\\ell=1$ one needs to include one \nextra subtraction.\n In particular, for our present case we use Eq.~\\eqref{n.3p2} with $D_{11}(A)\\to 1$ and with $\\Delta_{11}(k^2)$\n restricted to its two-nucleon irreducible contributions, with the subtraction constants $\\nu^{(11)}_{2}$ and $\\nu^{(11)}_{3}$ \ntaking the same values as discussed before. \nWe see that our full results provide a clear improvement in the reproduction of the Nijmegen PWA phase shifts and \nmixing angle with respect to the Born approximation. \n One should mention that the Born approximation phase shifts for \n$^3F_2$ and $^3F_3$ have a striking resemblance to the full NNLO results of Ref.~\\cite{thesis} obtained within the Weinberg \nscheme. \n We have obtained this improvement without diminishing the strength of the TPE at NNLO, as is advocated in Ref.~\\cite{epe04}. \n As a result, our full results are not as sensitive to the particular set of $c_i$'s taken as \npreviously thought in the literature on the basis of the results of Refs.~\\cite{thesis,epe04}.\n\n\\section{Coupled $^3D_3-{^3G_3}$ waves}\n\\label{dg3w}\n\n\\begin{figure}[h]\n\\begin{center}\n\\begin{tabular}{cc}\n\\includegraphics[width=.4\\textwidth]{.\/3d3.ps} & \n\\includegraphics[width=.4\\textwidth]{.\/3g3.ps}\\\\ \n\\includegraphics[width=.4\\textwidth]{.\/e3.ps} \n\\end{tabular}\n\\caption[pilf]{\\protect { (Color online.) From top to bottom and left to right: Phase shifts for $^3D_3$, $^3G_3$ and the mixing angle $\\epsilon_3$, in order.\nThe (red) hatched areas correspond to the NNLO results and the (cyan) filled bands are the leading order Born approximation.\n The NLO results are shown by the (magenta) dot-dashed lines and the LO ones are given by the \n (blue) dotted lines. The Nijmegen PWA phase shifts correspond to the (black) dashed lines.}\n\\label{fig:3dg3} }\n\\end{center}\n\\end{figure}\n\nFor the study of the $^3D_3-{^3G_3}$ coupled waves we follow the formalism for coupled waves, Eqs.~\\eqref{highdcc} and \n\\eqref{highncc}, with $\\ell_{11}=2$, $\\ell_{12}=3$ and $\\ell_{22}=4$, so that $\\ell_{ij}$ subtractions are taken in the DRs for the \ncoupled wave $ij$. \n Regarding the free parameters we follow the principle of maximal smoothness, although for the mixing wave \nthe subtraction constants take their pure perturbative values, so that we fit to data \n$D_{11}(-M_\\pi^2)$ and $D_{22}^{(2)}(-M_\\pi^2)$, with the resulting values:\n\\begin{align}\nD_{11}(-M_\\pi^2)&=0.90(5)~,\\nonumber\\\\\nD_{22}^{(2)}(-M_\\pi^2)&=-0.09(1)~M_\\pi^{-4}~,\n\\label{fit.3dg3}\n\\end{align}\n The interval of values in Eq.~\\eqref{fit.3dg3} reflects the dependence on the set of values considered for the $c_i$'s.
\nThe resulting phase shifts are shown by the (red) hatched areas in Fig.~\\ref{fig:3dg3}.\n Importantly at NNLO the phase shifts for the $^3D_3$ wave follow closely the Nijmegen PWA phase shifts \nso that a remarkable improvement is obtained in comparison with both \nthe NLO and Born results.\n Notice that this is accomplished without any need of dismissing the strength of\n TPE as directly obtained from the NLO $\\pi N$ amplitudes. \nWe have been able to improve the situation by taking into account the subtraction constant $ \\delta_2$ or $D_{11}(-M_\\pi^2)$, \nwhose presence is required by the nonperturbative unitarity implementation\\footnote{In more general terms, by generating the analytical properties \nassociated with the RHC while respecting unitarity in the full amplitudes.} at NNLO, cf. Eq.~\\eqref{summarypwc}.\n We also observe a good reproduction of the Nijmegen PWA results for the waves $^3G_3$ and $\\epsilon_3$, \nwhich are already well reproduced at NLO \\cite{gor2013} as shown by the (magenta) dot-dashed lines.\n \n\n\n\n\\section{Coupled $^3F_4-{^3H_4}$ waves}\n\\label{gh4w}\n\n\\begin{figure}[h]\n\\begin{center}\n\\begin{tabular}{cc}\n\\includegraphics[width=.4\\textwidth]{.\/3f4.ps} & \n\\includegraphics[width=.4\\textwidth]{.\/3h4.ps}\\\\ \n\\includegraphics[width=.4\\textwidth]{.\/e4.ps} \n\\end{tabular}\n\\caption[pilf]{\\protect { (Color online.) From top to bottom and left to right: Phase shifts for $^3F_4$, $^3H_4$ and the mixing angle $\\epsilon_4$, in order.\nThe (red) hatched areas correspond to the NNLO results and the (cyan) filled ones to the leading\nBorn approximation.\n The NLO results are shown by the (magenta) solid line and the LO ones are given by the \n (blue) dotted lines. \nThe Nijmegen PWA phase shifts are given by the (black) dashed lines.}\n\\label{fig:3fh4} }\n\\end{center}\n\\end{figure}\n\nThe discussion of the $^3F_4-{^3H_4}$ coupled-wave system follows the standard formalism for coupled waves, Eq.~\\eqref{highdcc} and \\eqref{highncc},\n with $\\ell_{11}=3$, $\\ell_{12}=4$ and $\\ell_{22}=5$. The free parameters are then fitted to data according to the \n principle of maximal smoothness. However, for $^3H_4$ and the mixing partial wave there is no improvement in the reproduction of data\n with respect to the situation in which the pure perturbative values are taken, so that \nat the end we only have to fit $D_{11}^{(1)}(-M_\\pi^2)$ to the Nijmegen PWA results. \nThe fitted value is\n\\begin{align}\nD_{11}^{(1)}(-M_\\pi^2)&= -0.009(3)~M_\\pi^{-2}~.\n\\label{free.fh4}\n\\end{align}\n The resulting phase shifts and mixing angle are shown by the (red) hatched areas in Fig.\\ref{fig:3fh4},\nwith the width of the band reflecting the dependence on values for the $\\pi N$ NLO counterterms. \nOne can observe a clear improvement in the description of the $^3F_4$ phase shifts compared with the results from OPE (blue dotted lines), NLO \n(magenta dot-dashed lines) and leading Born approximation (cyan filled areas). \n Similarly to the $^3D_3$ wave in the previous section, \n this improvement is related with the effect of the subtraction constant $\\delta_3^{(11)}$ which \nis not directly related with an improvement in the \ncalculation of $\\Delta_{11}(A)$, and hence of the $NN$ potential. 
\nLet us recall that the subtraction constants $\\delta_p^{(ij)}$ arise because of the rescattering process that the \n$N\/D$ method allows to treat in a clear and well-defined way, \novercoming the obscurities that still remain in the literature associated with the use of the cutoff regularized Lippmann-Schwinger with \na higher-order $NN$ potential. \nFor the mixing angle $\\epsilon_4$ the quality in the reproduction of data is similar to that obtained \nby the other approximations just quoted. \nHowever, for the $^3 H_4$ phase shifts the outcome at NNLO is a bit worse than at \nNLO and OPE, though one should also notice the tiny values for the $^3H_4$ phase shifts so that this discrepancy \nis certainly small in absolute value. \nWe have also checked that it cannot be removed by releasing the other subtraction constants $\\delta_p^{(22)}$, with \n$p=2$, 3 and 4. \nLikely, the origin of this difference in the $^3H_4$ phase shifts between our full results and the Nijmegen PWA \ncan be tracked back to the change in the leading Born approximation once the ${\\cal O}(p^3)$ two-nucleon irreducible \ncontributions are included in $\\Delta_{22}(A)$.\n\n\n\\section{Coupled $^3G_5-{^3I_5}$ waves}\n\\label{gi5w}\n\n\\begin{figure}[h]\n\\begin{center}\n\\begin{tabular}{cc}\n\\includegraphics[width=.4\\textwidth]{.\/3g5.ps} & \n\\includegraphics[width=.4\\textwidth]{.\/3i5.ps}\\\\ \n\\includegraphics[width=.4\\textwidth]{.\/e5.ps} \n\\end{tabular}\n\\caption[pilf]{\\protect { (Color online.) From top to bottom and left to right: Phase shifts for $^3G_5$, $^3I_5$ and the mixing angle $\\epsilon_5$, in order.\nThe (red) hatched areas correspond to the NNLO results and the filled ones to the leading\nBorn approximation.\n The NLO results are shown by the (magenta) dot-dashed line and the LO ones are given by the \n (blue) dotted lines.\n The Nijmegen PWA phase shifts are shown by the (black) dashed lines.}\n\\label{fig:3gi5} }\n\\end{center}\n\\end{figure}\n\n\nThe standard formalism for coupled waves with high angular momentum, Eqs.~\\eqref{highdcc} and \\eqref{highncc}, is followed here \nwith $\\ell_{11}=4$, $\\ell_{12}=5$ and $\\ell_{22}=6$.\n The application of the principle of maximal smoothness \nto fit the free parameters provides a good reproduction of the Nijmegen PWA phase shifts \\cite{Stoks:1994wp}.\\footnote{For $5\\leq J\\leq 8$ the Nijmegen PWA phase shifts \\cite{Stoks:1994wp} are those obtained from the $NN$ potential model of Ref.~\\cite{obe}.}\nThe range of values obtained for the free parameters $D_{11}^{(2)}(-M_\\pi^2)$ and $D_{22}^{4}(-M_\\pi^2)$ is\n\\begin{align}\nD_{11}^{(2)}(-M_\\pi^2)&=-0.0025(5)~M_\\pi^{-4}~,\\nonumber\\\\\nD_{22}^{(4)}(-M_\\pi^2)&=-0.0125(5)~M_\\pi^{-8}~,\n\\label{free.gi5}\n\\end{align}\nwhile basically the same results are obtained for any $D_{12}^{(3)}(-M_\\pi^2)\\leq 0~M_\\pi^{-6}$. \n The results are shown in Fig.\\ref{fig:3gi5} by the (red) hatched areas whose widths take into account the \nuncertainty from the set of $c_i$'s taken and some numerical noise from the iterative process. \n A clear improvement results in the description of the $^3G_5$ phase shifts compared with the OPE (blue dotted lines), NLO (magenta dot-dashed lines)\n and leading Born approximation results (cyan filled areas). 
\nIt is worth stressing that this partial wave cannot be well reproduced\neven at NNNLO in the Weinberg potential scheme neither by \nkeeping a finite value for the three-momentum cutoff entering \nin the solution of the Lippmann-Schwinger equation \\cite{epe042}, \nnor by sending it to $\\infty$ as in Ref.~\\cite{zeoli}.\n A similar situation occurs too \nfor the leading Born approximation results at NNLO, as shown by the (cyan) filled area in the first panel, \n a result also obtained in Ref.~\\cite{epe04}. \nEven more, the modification of the TPE mechanism proposed in this reference by making use of the \nso-called spectral-function regularization is inoperative here to provide \nan improvement in the Born approximation results.\n A similar problem was also observed in the perturbative calculation at NNNLO in \nRef.~\\cite{entem}.\n From ours results this is not surprising because the improvement in the reproduction of the \nNijmegen PWA phase shifts for the $^3G_5$ wave is accomplished through the subtraction constant $\\delta_4^{(11)}$. \nThis constant is directly related to the $NN$ rescattering (from which the final function $D_{11}(A)$ stems nonperturbatively) \nand not to the $NN$ potential or $\\Delta_{22}(A)$.\n In the case of the mixing angle $\\epsilon_5$ and the \n$^3I_5$ phase shifts there is a slight worsening in the reproduction of Nijmegen \nPWA compared with the NLO ones, but still our results run very close to the Nijmegen PWA ones.\n\n\n\\begin{table}\n\\begin{center}\n{\\small\n\\begin{tabular}{|l|l|l|}\n\\hline\nWave & Type of DRs & Parameters \\\\\n\\hline\n$^1S_0$ & 1DR & $\\nu_1=30.69$ \\\\\n & 2DR & $\\nu_1=30.69$~,~$\\nu_2=-23(1)$, $\\delta_2=-8.0(3)$ \\\\\n\\hline\n$^3P_0$ & 3DR & $\\nu_2=1.644$~,~$\\delta_2=2.82(5)$~,~$\\delta_3=0.18(6)$ \\\\\n\\hline\n$^3P_1$ & 3DR & $\\nu_2=-1.003$~,~$\\delta_2=2.7(1)$~,~$\\delta_3=0.47(3)$ \\\\\n\\hline\n$^1 P_1$ & 2DR & $\\nu_2=-1.723$~,~$\\delta_2=0.4(1)$ \\\\\n\\hline\n$^1D_2$ & LTS & $D^{(1)}(0)=0.07(1)$ \\\\\n\\hline\n$^3D_2$ & LTS & $D^{(1)}(0)=-0.017(3)$ \\\\\n\\hline\n$^1F_3$ & LTS & $D^{(2)}(0)=0.057(3)$ \\\\\n\\hline\n$^3F_3$ & LTS & $D^{(2)}(0)= 0.035(5)$ \\\\\n\\hline\n$^1G_4$ & LTS & $D^{(3)}(0)=-0.014(2)$ \\\\\n\\hline\n$^3G_4$ & LTS & $D^{(3)}(0)=-0.055(5)$ \\\\\n\\hline\n$^1H_5$ & LTS & $D^{(4)}(0)=0.156$ \\\\\n\\hline\n$^3H_5$ & LTS & $D^{(4)}(0)=0.066$ \\\\\n\\hline\n$^3S_1-{^3D_1}$ & $1$DR $^3S_1$, 2DR $^3D_1$, mixing & $E_d$ \\\\\n & 2DR all & $a_t$, $r_t$, $E_d$ \\\\\n & 2DR $^3S_1$, mixing, 3DR $^3D_1$ & $a_t$, $r_t$, $E_d$, $\\nu_3^{(22)}=-2.05(5)$ \\\\ \n\\hline\n$^3P_2-{^3F_2}$ & 3DR for $^3P_2$ and LTS for the others & $\\nu^{(11)}_2=0.178$~,~$D^{(1)}_{11}(-M_\\pi^2)=0.025(5)$~,~\n$\\nu_3^{(11)}=0.155(5)$\\\\\n & & $D_{22}(-M_\\pi^2)=0.011(4)$\\\\\n\\hline\n$^3D_3-{^3G_3}$ & LTS & $D_{11}(-M_\\pi^2)=0.90(5)$~,~$D^{(2)}_{22}(-M_\\pi^2)=-0.09(1)$ \\\\\n\\hline\n$^3F_4-{^3H_4}$ & LTS & $D_{11}^{(1)}(-M_\\pi^2)=-0.009(3)$ \\\\\n\\hline\n$^3G_5-{^3I_5}$ & LTS & $D_{11}^{(2)}(-M_\\pi^2)=-0.0025(5)$~,~$D_{22}^{(4)}(-M_\\pi^2)=-0.0125(5)$ \\\\\n\\hline\n\\end{tabular} }\n\\caption[pilf]{\\protect { We give in the columns from left to right, in order, \nthe partial wave, the type of DRs employed to study it\n and the values for the free parameters involved. 
}\n\\label{tab:allparam} }\n\\end{center}\n\\end{table}\n\nFinally, we give in Table~\\ref{tab:allparam} the values of the free parameters employed in the different partial waves according \nto the type of DRs employed, which is indicated in the second column.\n We follow the notation $m$DR, with $m=1,2,\\ldots$, already introduced in Ref.~\\cite{gor2013}, \n which should be read as an $m$-time subtracted DR.\n For the higher $NN$ partial waves we use the abbreviation LTS to indicate that \n$\\ell$ (or $J$ for the mixing partial waves) subtractions are taken to satisfy the threshold behavior, following \nthe standard formalism explained in Sec.~\\ref{hpw}.\n According to the \nprinciple of maximal smoothness, only the highest derivative $D^{(n)}(C)$ \nrelated to the subtraction constants in $D(A)$ is not fixed to its perturbative value \n(1 for $n=0$ and 0 for $n\\neq 0$) but released, \nif appropriate. The units correspond to appropriate powers of $M_\\pi^2$, although they are not explicitly shown. \nThere is a proliferation of free parameters for the $P$ waves because for them $\\lambda>0$, Eqs.~\\eqref{unlambdap} \nand \\eqref{couplambdap}, so that, except for the $^1P_1$ wave, three-time-subtracted DRs are needed.\n This could be a specific feature of the NNLO calculation of $\\Delta(A)$ that has to be investigated at\n higher orders.\\footnote{If then $\\lambda<0$ \none would need to invoke fewer free parameters for the $P$ waves than in Table~\\ref{tab:allparam}.}\n \n\n\\section{Conclusions}\n\\label{conc}\n\nWe have discussed in this paper the application of the $N\/D$ method when its dynamical input, namely, \nthe imaginary part of the $NN$ partial waves along the LHC, is calculated in ChPT up to NNLO. \nIt then comprises OPE, leading and subleading two-nucleon irreducible TPE and once-iterated OPE~\\cite{peripheral}. \nWe have obtained a quite good reproduction of the Nijmegen PWA phase shifts and mixing angles, in better agreement \nthan that achieved in the previous lower-order studies at LO \\cite{paper1,paper2} and NLO \\cite{gor2013}. In particular, our NNLO results are able to reproduce \n the phase shifts for the triplet waves with $\\ell_{11}=J-1$, $^3P_2$, $^3D_3$, $^3F_4$ \nand $^3G_5$, while at NLO they were not properly accounted for.\nWe do not need to modify the NNLO two-nucleon irreducible diagrams (or chiral $NN$ potential) in order to obtain such a good agreement \nwith the Nijmegen PWA, contrary to common wisdom. \nThe point that stems from our study is that one should perform in a well-defined way\n the iteration of diagrams along the RHC, which are responsible \nfor unitarity and analyticity attached to this cut, rather than reshuffling the $NN$ potential with contributions \nfrom higher orders. \nIn this respect, the use of DRs allows one to perform the iteration of two-nucleon intermediate states independently of any regulator. We have also compared our full results for the higher partial waves with the Born approximation. From this comparison, as well as from \nthe direct study \nof the importance of the different contributions of $\\Delta(A)$ to the dispersive integrals, \nit follows that the $NN$ $D$ waves cannot be treated perturbatively.\n\nIt is also worth remarking that up to the order studied here we reproduce the long-range correlation between the \neffective ranges and the scattering lengths for the $NN$ $S$ waves when only once-subtracted DRs are applied.
\nIn this way one can predict values for the $S$-wave effective ranges in agreement with experiment to within about $10 \%$. \nWe have also elaborated a chiral power counting for the subtraction constants, so that twice-subtracted DRs \nare appropriate when $\\Delta(A)$ is calculated at NLO and NNLO. \nFrom these considerations it also turns out that the chiral power expansion is made over a scale $\\Lambda\\sim 400$~MeV. \nAn interesting extension of the present work would be to consider further the impact of higher orders in $\\Delta(A)$, which are already partially calculated in \nthe literature, in order to settle the applicability of the $N\/D$\nmethod to $NN$ scattering in ChPT with a high degree of accuracy.\n\n\n\n\\section*{Acknowledgments}\n This work is partially funded by MINECO (Spain) and ERDF (EU) grant FPA2010-17806, and by the Fundaci\\'on S\\'eneca grant 11871\/PI\/09.\n We also acknowledge the financial support from the EU Research Infrastructure\nIntegrating Activity\n ``Study of Strongly Interacting Matter\" (HadronPhysics2, grant n. 227431)\nunder the Seventh Framework Programme of the EU and \nthe Consolider-Ingenio 2010 Programme CPAN (CSD2007-00042). \n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nIn this paper, we study the following problem raised by Coman-Guedj-Zeriahi.\n\\begin{prob}[\\cite{CGZ13}]\\label{prob:1}\nLet $(X,\\omega)$ be a compact K\\\"ahler manifold of complex dimension $n$, equipped with a K\\\"ahler metric $\\omega$. Let $V\\subset X$ be a complex submanifold of complex dimension $k>0$. Does the following hold:\n\\begin{align*}\n\\mbox{Psh}(V,\\omega|_V)=\\mbox{Psh}(X,\\omega)|_V?\n\\end{align*}\n\\end{prob}\n\nRecently, there has been much progress on this problem. \n\\begin{itemize}\n\\item When $\\omega$ is a Hodge metric and $\\varphi$ is a smooth quasi-psh function on $V$, such that $\\omega|_V+\\sqrt{-1}\\partial\\bar\\partial \\varphi>0$, then Problem \\ref{prob:1} has a positive answer by Schumacher \\cite{Sch98}.\n\\item When $\\omega$ is a Hodge metric, then Problem \\ref{prob:1} has a positive answer by Coman-Guedj-Zeriahi \\cite{CGZ13}, and when $\\omega$ is a K\\\"ahler metric and $\\varphi$ is a smooth quasi-psh function on $V$, such that $\\omega|_V+\\sqrt{-1}\\partial\\bar\\partial \\varphi>0$, then Problem \\ref{prob:1} has a positive answer in the same paper \\cite{CGZ13}.\n\\item When $\\omega$ is a K\\\"ahler metric and $\\varphi$ is a quasi-psh function on $V$, which has analytic singularities, such that $\\omega|_V+\\sqrt{-1}\\partial\\bar\\partial \\varphi>\\epsilon\\omega|_V$ for some $\\epsilon>0$, there is a quasi-psh function $\\Phi$ on $X$, such that $\\Phi|_V=\\varphi$ and $\\omega+\\sqrt{-1}\\partial\\bar\\partial \\Phi>\\epsilon'\\omega$ on $X$ by Collins-Tosatti \\cite{CT14}.\n\\item Suppose $\\omega$ is a K\\\"ahler metric and $\\varphi$ is a quasi-psh function with arbitrary singularity on $V$, such that $\\omega|_V+\\sqrt{-1}\\partial\\bar\\partial \\varphi>\\epsilon\\omega|_V$ for some $\\epsilon>0$. 
If moreover $V$ has a holomorphic tubular neighborhood in $X$, then there is a quasi-psh function $\\Phi$ on $X$, such that $\\Phi|_V=\\varphi$ and $\\omega+\\sqrt{-1}\\partial\\bar\\partial \\Phi>\\epsilon'\\omega$ on $X$ for some $\\epsilon'>0$, by Wang-Zhou \\cite{WZ20}.\n\\end{itemize}\n\nThe main theorem of this paper is as follows.\n\\begin{thm}\\label{thm:main}\nLet $(X,\\omega)$ be a compact K\\\"ahler manifold of complex dimension $n$, equipped with a K\\\"ahler metric $\\omega$. Let $V\\subset X$ be a complex submanifold of complex dimension $k>0$. Let $\\varphi$ be a quasi-psh function with arbitrary singularity on $V$, such that $\\omega|_V+\\sqrt{-1}\\partial\\bar\\partial \\varphi>\\epsilon\\omega|_V$ for some $\\epsilon>0$. Suppose that there is an open neighborhood $U$ of $V$ in $X$, and a holomorphic retraction $\\pi:U\\rightarrow V$. Then there is a quasi-psh function $\\Phi$ on $X$, such that $\\Phi|_V=\\varphi$ and $\\omega+\\sqrt{-1}\\partial\\bar\\partial \\Phi>\\epsilon'\\omega$ on $X$ for some $\\epsilon'>0$.\n\\end{thm}\n\n\\begin{rem}\nThe main theorem is slightly stronger than the result in \\cite{WZ20}: it weakens the assumption that $V$ has a holomorphic tubular neighborhood structure in $X$ to the assumption that $V$ has a holomorphic retraction structure in $X$. By a holomorphic retraction, we mean that there is an open neighborhood $U$ of $V$ in $X$, and a holomorphic map $\\pi:U\\rightarrow V$, such that $\\pi|_V:V\\rightarrow V$ is the identity map. Without the holomorphic tubular neighborhood structure, we need to compute the complex Hessian of the square of the distance function to $V$ on $X$. The main advantage of this generalization is that there are many examples of compact K\\\"ahler manifolds which are not necessarily projective.\n\\end{rem}\nWe also consider the extension of K\\\"ahler currents in a big class.\n\\begin{thm}\nLet $(X,\\omega)$ be a compact K\\\"ahler manifold of complex dimension $n$, and $V\\subset X$ be a complex submanifold of positive dimension. Suppose that $V$ has a holomorphic retraction structure in $X$.\nLet $\\alpha\\in H^{1,1}(X,\\mathbb R)$ be a big class such that $E_{nK}(\\alpha)\\subset V$. \nThen any K\\\"ahler current in $\\alpha|_V$ is the restriction of a K\\\"ahler current in $\\alpha$.\n\\end{thm}\n\n\\subsection*{Acknowledgement}\nThe second author would like to thank G. Hosono and T. Koike for helpful discussions, and especially T. 
Koike for sharing the note \\cite{Ko21}.\n\n\n\n\n\n\n\n\\section{Complex Hessian of square of distance to a complex submanifold}\\label{sect:dist}\nWe follow Matsumoto's notations in \\cite{Ma}.\nLet $(M,g)$ be a $C^\\infty$ Rimannian manifold of dimension $n$.\nFor $x,y\\in M$, we denote by $\\delta(x,,y)$ the distance between $x$ and $y$ induced by the metric $g$.\n\nIt is known that for any $p\\in M$, there is an open coordinate neighborhood $U$ of $p$,\nchoose a coordinate $x_1,x_2,\\cdots,x_n$ on $U$, with $x(p)=0$ and\n$g_{ij}=g(\\frac{\\partial}{\\partial x_i},\\frac{\\partial}{\\partial x_j}),\\, i,j=1,2,\\cdots,n$.\nFor $v=(v_1,v_2,\\cdots,v_n)\\in\\mathbb{R}^n$,\nwe may view $v\\in T_xM$ as $\\sum_{j=1}^{n}v_j\\frac{\\partial}{\\partial x_j}|_x$.\nWe may shrink $U$ if necessary, there is an open neighborhood $B\\subset \\mathbb{R}^n$ of $0$,\nsuch that $\\Phi(x,v)=(x,\\exp_x(v))$\nis bijection from $U\\times B$ to $\\Phi(U\\times B)$, both $\\Phi$ and $\\Phi^{-1}$ are $C^\\infty$.\nAs $\\Phi(x,0)=(x,x)$, and from the property of exponential mapping $y=\\exp_x v$,\nwe can get\n\\begin{equation}\\label{eq1}\nJ\\Phi(0,0)\n=\\left[\n\\begin{array}{cc}\nI & 0\\\\\nI & I\n\\end{array}\n\\right].\n\\end{equation}\n\nAs $\\Phi(U\\times B)$ is an open neighborhood of $(p,p)$,\nwe may take an open set $V\\subset U$, such that $p\\in V$, and $\\Phi(U\\times B)\\supset V\\times V$.\nWrite $(x,v(x,y))=\\Phi^{-1}(x,y)$, then\n$$y=\\exp_x(v(x,y)),\\, \\delta(x,y)^2=\\sum_{i,j=1}^n g_{ij}(x)v_i(x,y)v_j(x,y)$$\nand from (\\ref{eq1}), we have\n\\begin{equation}\\label{eq2}\nv(0,0)=0, \\,\\, \\frac{\\partial v_i}{\\partial y_j}(0,0)=-\\frac{\\partial v_i}{\\partial x_j}(0,0)=\\delta_{ij},\\, 1\\leq i,j\\leq n.\n\\end{equation}\nLet $S\\in M$ be a $C^\\infty$ submanifold of $M$ with $dim S=k$, $0\\leq rk\n\\end{cases}\n\\end{split}\n\\end{equation}\n\nNow we let $(X,\\omega)$ be a compact Hermitian manifold with a Hermitian metric $\\omega$. Let $g$ be the Riemannian metric on $X$ induced by $\\omega$. Let $V\\subset X$ be a complex submanifold of complex dimension $r>0$. Fix any $p\\in V$. There is a holomorphic coordinate $(U, z=(z_1,\\cdots, z_k, z_{k+1}, \\cdots,z_n))$ centered at $p$ in $X$, such that $U\\cap V=\\{z_{k+1}=\\cdots=z_n=0\\}$, and $g_{ij}(0)=\\delta_{ij}$ for $i,j=1,\\cdots, 2n$, here we write $z_i=x_{2i-1}+\\sqrt{-1}x_{2i}$. Since $$\\frac{\\partial}{\\partial z_i}=\\frac{1}{2}(\\frac{\\partial}{\\partial x_{2i-1}}-\\sqrt{-1}\\frac{\\partial}{\\partial x_{2i}}), \\frac{\\partial}{\\partial\\bar z_i}=\\frac{1}{2}(\\frac{\\partial}{\\partial x_{2i-1}}+\\sqrt{-1}\\frac{\\partial}{\\partial x_{2i}}),$$\nwe get that \n\\begin{align}\\label{equ: chessian}\n\\frac{\\partial^2h}{\\partial z_i\\partial \\bar z_j}=\\frac{1}{4}\\left(\\frac {\\partial^2h}{\\partial x_{2i-1}\\partial x_{2j-1}} +\\frac{\\partial^2h}{\\partial x_{2i}\\partial x_{2j}}-\\sqrt{-1}\\frac{\\partial^2h}{\\partial x_{2j}\\partial x_{2j-1}} +\\sqrt{-1}\\frac{\\partial^2h}{\\partial x_{2j-1}\\partial x_{2j}} \\right).\n\\end{align}\n\nCombining (\\ref{eq 7}) and (\\ref{equ: chessian}), we obtain the following \n\n\\begin{prop}\\label{prop:hessian}Let $(X,\\omega)$ be a complex $n$-dimensional Hermitian manifold with a Hermitian metric $\\omega$. Let $V\\subset X$ be a complex submanifold of complex dimension $k$. 
Let $p\\in V$ be an arbitrarily fixed point in $V$, then there is a holomorphic coordinate chart $(U,z=(z_1,\\cdots,z_k,z_{k+1},\\cdots, z_n))$ centered at $p$ such that $U\\cap V=\\{z_{k+1}=\\cdots=z_n=0\\}$ and \n\t\\begin{align*}\n\t\\frac{\\partial^2h}{\\partial z_i\\partial \\bar z_j}(0)=\n\t\\begin{cases}\n\t0 \\quad\\quad i \\; \\text{or} \\; j\\leq k \\\\\n\t2\\delta_{ij} \\quad i,j>k.\n\t\\end{cases}\n\t\\end{align*}\n\t\\end{prop}\n\n\\section{Proof of the main theorem}\n\nIn this section, we give the proof of Theorem \\ref{thm:main}. The idea of the proof is similar with that in \\cite{WZ20}. The main difference lies in the construction of the local uniform extension. For the sake of completeness, we give the detailed proof.\n\n\\begin{lemma}[\\cite{BK07, WZ20}]\\label{key lemma}\n\tLet $\\varphi$ be a quasi-psh function on a compact Hermitian manifold $(X,\\omega)$, such that $\\omega+\\sqrt{-1}\\partial\\bar{\\partial}\\varphi\\geq \\varepsilon\\omega$ and $\\varphi0$ converging to $0$, satisfying the following\n\t\\begin{itemize}\n\t\t\\item [(a)] $\\varphi_m\\searrow \\varphi$;\n\t\t\\item [(b)]$\\omega+\\sqrt{-1}\\partial\\bar{\\partial}\\varphi_m\\geq (\\varepsilon-\\varepsilon_m)\\omega$;\n\t\t\\item[(c)] $\\varphi_m\\leq -\\frac{C}{2}$.\n\t\\end{itemize}\n\\end{lemma}\n\n\\begin{lemma}[c.f. \\cite{DP04}]\\label{reference function}\n\tThere exists a function $F:X\\rightarrow [-\\infty, +\\infty)$ which is smooth on $X\\setminus V$, with logarithmic singularities along $V$, and such that $\\omega+\\sqrt{-1}\\partial\\bar{\\partial}F\\geq \\varepsilon \\omega$ is a K\\\"{a}hler current on $X$.\n\tBy subtracting a large constant, we can make that $F<0$ on $X$.\n\\end{lemma}\n\n\nLet $T=\\omega|_V+\\sqrt{-1}\\partial\\bar{\\partial}\\varphi\\geq \\varepsilon\\omega|_V$ be the given K\\\"{a}hler current in the K\\\"{a}hler class $[\\omega|_V]$, where $\\varphi$ is a strictly $\\omega|_V$-psh function.\nBy subtracting a large constant, we may assume that $\\sup_V \\varphi<-C$ for some positive constant $C$.\n\nBy Lemma \\ref{key lemma}, we have that there is a sequence of non-increasing smooth strictly $\\omega|_V$-psh functions $\\varphi_{m}$ on $V$,\nand a decreasing sequence of positive numbers $\\varepsilon_m$ such that as $m\\rightarrow \\infty$\n\\begin{itemize}\n\n\t\\item $\\varphi_{m} \\searrow \\varphi$;\n\t\\item$\\omega|_V+\\sqrt{-1}\\partial\\bar{\\partial}\\varphi_m> \\frac{\\varepsilon}{2}\\omega|_V$;\n\t\\item $\\varphi_m\\leq -\\frac{C}{2}$.\n\\end{itemize}\n\nWe say a smooth strictly $\\omega|_V$-psh function $\\phi$ on $V$ satisfies \\textbf{assumption $\\bigstar_{\\varepsilon, C}$}, if $\\omega|_V+\\sqrt{-1}\\partial\\bar{\\partial}\\phi>\\frac{\\varepsilon}{2}\\omega|_V$ and $\\phi<-\\frac{C}{2}$.\n\nNote that for all $m\\in \\mathbb{N}^+$, $\\varphi_m$ satisfy \\textbf{assumption $\\bigstar_{\\varepsilon, C}$}. In the following, we will extend all the $\\varphi_m$ simultaneously to non-increasing strictly $\\omega$-psh functions on the ambient manifold $X$.\n\n\\noindent\\textbf{Step1: Local uniform extensions of $\\varphi_m$ for all $m$.} Let $U\\subset X$ be an open neighborhood of $V$ and let $r:U\\rightarrow V$ be a holomorphic retraction. Let $\\phi$ be a function satisfying \\textbf{assumption $\\bigstar_{\\varepsilon, C}$}. 
Let $h$ be the square of the distance function , which is a smooth function defined in \\S\\ref{sect:dist}.\n We define \t\n\\begin{align*}\n\\bar{\\phi}:=\\phi\\circ r+Ah\n\\end{align*}\nwhere $A$ is a positive constant to be determined later. \n\nFix arbitrary $p\\in V$, choose a holomorphic coordinate chart $(W_p,z=(z_1,\\cdots,z_n))$ centered at $p$ and $W_p\\cap V=\\{z_{k+1}=\\cdots=z_n=0\\}$, $g_{i\\bar j}(0)=\\delta_{ij}$ and $W\\subset U$, where $\\omega=\\sum_{i,j=1}^ng_{i\\bar j}dz_i\\wedge d\\bar z_j$. Then on $W_p$, we have that \n\\begin{align*}\n\\bar{\\phi}(z):=(\\phi\\circ r)(z)+Ah(z).\n\\end{align*}\nNote that on $W_p$, \n\\begin{align*}\n\\omega+\\sqrt{-1}\\partial\\bar\\partial \\bar\\phi(z)&=(\\omega-r^*(\\omega|_V))+r^*(\\omega|_V+\\sqrt{-1}\\partial\\bar\\partial\\phi)+A\\sqrt{-1}\\partial\\bar\\partial h\\\\\n&\\geq (\\omega-r^*(\\omega|_V))+\\varepsilon r^*(\\omega|_V)+A\\sqrt{-1}\\partial\\bar\\partial h\n\\end{align*}\nThe second inequality follows from the fact that $\\omega|_V+\\sqrt{-1}\\partial\\bar\\partial\\phi\\geq \\varepsilon\\omega|_V$ on $V$ and $r$ is a holomorphic retraction map. The key point is that the last term in above inequality is independent of $\\phi$.\n\n\n\n\\begin{claim}\\label{claim: 1}There is an open neighborhood $W_p$ (independent of $\\phi$), of $p$ in $U$, and positive constants $A>0$ and $\\varepsilon'>0$ (independent of $\\phi$), such that on $W_p$,\n\t\n\\begin{align*}\n\\omega+\\sqrt{-1}\\partial\\bar\\partial \\bar\\phi\\geq \\frac{\\varepsilon'}{2}\\omega \\mbox{~~and~~} \\bar\\phi\\leq -\\frac{C}{4}.\n\\end{align*}\n\\end{claim}\n\\begin{proof}Under the local coordinate chosen as above, one can see that \n\t\\begin{align*}\n\t&r(z)=(r_1(z_1,\\cdots,z_n),\\cdots, r_k(z_1,\\cdots,z_n),0,\\cdots,0);\\\\\n&\tr(z_1,\\cdots, z_k,0,\\cdots, 0)=(z_1,\\cdots,z_k,0,\\cdots,0);\\\\\n&dr_i(z_1,\\cdots,z_k,0,\\cdots,0)=dz_i+\\sum_{k+1\\leq j\\leq n}\\frac{\\partial r_i}{\\partial z_j}(z_1,\\cdots, z_k,0\\cdots,0)dz_j.\n\t\\end{align*}\nSince $\\omega|_V=\\sum_{1\\leq i,j\\leq k}g_{i\\bar j}(z_1,\\cdots,z_k,0,\\cdots,0)dz_i\\wedge d\\bar z_j$, it follows that at $(z_1,\\cdots, z_k,0,\\cdots,0)$, \n\\begin{align*}\nr^*(\\omega|_V)=&\\sum_{1\\leq i,j\\leq k}g_{i\\bar j}(dz_i+\\sum_{k+1\\leq l\\leq n}\\frac{\\partial r_i}{\\partial z_l}dz_l)\\wedge(d\\bar z_j+\\sum_{k+1\\leq m\\leq n}\\frac{\\partial \\bar r_j}{\\partial \\bar z_m} d\\bar z_m)\\\\\n=&\\sum_{1\\leq i,j\\leq k}g_{i\\bar j}dz_i\\wedge d\\bar z_j+\\sum_{1\\leq i\\leq k,k+1\\leq m\\leq n}\\sum_{1\\leq j\\leq k}g_{i\\bar j}\\frac{\\partial \\bar r_j}{\\partial \\bar z_m}dz_i\\wedge d\\bar z_m\\\\\n&+\\sum_{1\\leq j\\leq k,k+1\\leq l\\leq n}\\sum_{1\\leq i\\leq k}g_{i\\bar j}\\frac{\\partial r_i}{\\partial \\bar z_l}dz_i\\wedge d\\bar z_l\n+\\sum_{k+1\\leq l,m\\leq n}\\sum_{1\\leq i,j\\leq k}g_{i\\bar j}\\frac{\\partial r_i}{\\partial z_l}\\frac{\\partial \\bar r_j}{\\partial \\bar z_m}dz_l\\wedge d\\bar z_m.\n\\end{align*}\n\nThus, at $(z_1,\\cdots,z_k,0,\\cdots,0)$, we get the following\n\\begin{align*}\n\\omega+\\sqrt{-1}\\partial\\bar\\partial \\bar\\phi(z)\\geq &\\sum_{1\\leq i,j\\leq k}(\\varepsilon g_{i\\bar j}+Ah_{i\\bar j})dz_i\\wedge d\\bar z_j+\\sum_{1\\leq i\\leq k,k+1\\leq m\\leq n}(g_{i\\bar m}+Ah_{i\\bar m}+(\\varepsilon-1)\\sum_{1\\leq j\\leq k}g_{i\\bar j}\\frac{\\partial \\bar r_j}{\\partial \\bar z_m})dz_i\\wedge d\\bar z_m\\\\\n&+\\sum_{1\\leq j\\leq k,k+1\\leq l\\leq n}(g_{j\\bar l}+Ah_{j\\bar l}+(\\varepsilon-1)\\sum_{1\\leq i\\leq k}g_{i\\bar j}\\frac{\\partial 
r_i}{\\partial \\bar z_l})dz_i\\wedge d\\bar z_l\\\\\n&+\\sum_{k+1\\leq i,j\\leq n}(g_{i\\bar j}+Ah_{i\\bar j}+(\\varepsilon-1)\\sum_{1\\leq i,j\\leq k}g_{i\\bar j}\\frac{\\partial r_i}{\\partial z_l}\\frac{\\partial \\bar r_j}{\\partial \\bar z_m})dz_i\\wedge d\\bar z_j.\n\\end{align*}\nSince $(g_{i\\bar j})_{1\\leq i,j\\leq k}$ is positive definite, from Proposition \\ref{prop:hessian}, we can see that when $A>0$ is sufficiently large (independent of $\\phi$), there is an open neighborhood $W_p$ (independent of $\\phi$), such that the conclusion of Claim \\ref{claim: 1} holds.\n\n\n\n\t\\end{proof}\n\nTo emphasis the uniformity, it is worth to point out again that the chosen of the open set $W_p$, and the constant $\\varepsilon'$ is independent of $\\phi$, as long as $\\phi$ satisfies \\textbf{assumption $\\bigstar_{\\varepsilon,C}$}.\nWe call the above data $ (W_p,\\varepsilon',-\\frac{C}{4},\\bar\\phi)$ an \\textbf{admissible local extension} of $\\phi$.\n\n\nSince all the $\\varphi_m$ satisfy the same \\textbf{assumption $\\bigstar_{\\varepsilon,C}$}, thus near $p$, we can choose a \\textbf{uniform admissible local extension $ (W_p,A,\\varepsilon',-\\frac{C}{4},\\bar\\varphi_m)$} of $\\varphi_m$, for all $m\\in\\mathbb{N}^+$.\nSince $V$ is compact, one may choose an open neighborhood $W$ of $V$ in $X$, and universal constants $A>0$ and $\\varepsilon'>0$, such that the functions $\\widetilde \\varphi_m:=\\varphi_m\\circ r+Ah$ are defined on $W$, such that $\\omega+i\\partial\\bar\\partial \\widetilde \\varphi_m\\geq \\varepsilon'\\omega$ on $W$ for all $m$. Since $\\{\\varphi_m\\}$ is a non-increasing sequence, one obtains that $\\{\\widetilde{\\varphi}_m\\}$ is a non-increasing sequence.\n\n\\noindent\\textbf{Step 2: Global extensions of $\\varphi_m$ for all $m$.} Up to shrinking, we may assume that $\\widetilde{\\varphi}_m$ are defined on the closure of $W$ for all $m\\in \\mathbb{N}^+$.\nLet $F$ be the quasi-psh function in Lemma \\ref{reference function}.\nNear $\\partial W$ (the boundary of $W$), the function $F$ is smooth, and $\\sup_{\\partial W}\\widetilde{\\varphi}_{1}=-C''$ for some positive constant $C''>0$.\nNow we choose a small positive $\\nu$, such that $\\inf_{\\partial W}(\\nu F)>-\\frac{C''}{2}$ and $\\omega+i\\partial\\bar{\\partial}\\nu F\\geq\\varepsilon'\\omega$.\nThus $\\nu F >\\widetilde{\\varphi}_{1}\\geq \\widetilde{\\varphi}_m$ in a neighborhood of $\\partial W$ for all $m\\in \\mathbb{N}^+$, since $\\widetilde{\\varphi}_m$ is non-increasing.\nTherefore, we can finally define\n\\begin{align*}\n\\Phi_m=\\left\\{\n\\begin{array}{ll}\n\\max\\{\\widetilde{\\varphi}_m, \\nu F\\}, & \\hbox{on $W$;} \\\\\n\\nu F, & \\hbox{on $X\\setminus W$,}\n\\end{array}\n\\right.\n\\end{align*}\nwhich is defined on the whole of $X$. It is easy to check that $\\Phi_m$ satisfies the following properties:\n\\begin{itemize}\n\t\\item $\\Phi_m$ is non-increasing in $m$,\n\t\\item $\\Phi_m\\leq 0$ for all $m\\in \\mathbb{N}^+$,\n\t\\item $\\omega+i\\partial\\bar{\\partial}\\Phi_m\\geq \\varepsilon'\\omega$ for all $m\\in \\mathbb{N}^+$,\n\t\\item $\\Phi_m|_V=\\varphi_m$ for all $m\\in \\mathbb{N}^+$.\n\\end{itemize}\n\n\\noindent\\textbf{Step 3: Taking limit to complete the proof of Theorem \\ref{thm:main}.}\nFrom above steps, we get a sequence of non-increasing, non-positive strictly $\\omega$-psh functions $\\Phi_m$ on $X$. 
Then either $\\Phi_m\\rightarrow -\\infty $ uniformly on $X$, or $\\Phi:=\\lim\\limits_m\\Phi_m\\in$ Psh$(X,\\omega)$.\nSince $\\Phi_m|_V=\\varphi_m\\searrow \\varphi\\not\\equiv -\\infty$, the first case cannot occur.\nMoreover, we can see that $\\Phi:=\\lim\\limits_m\\Phi_m$ is a strictly $\\omega$-psh function on $X$ from the property $\\omega+i\\partial\\bar{\\partial}\\Phi_m\\geq \\varepsilon'\\omega$ for all $m\\in \\mathbb{N}^+$, and $\\Phi|_V=\\lim\\limits_m\\Phi_m|_V=\\lim\\limits_m\\varphi_m=\\varphi$.\nIt follows that $(\\omega+i\\partial\\bar{\\partial}\\Phi)|_V=\\omega|_V+i\\partial\\bar{\\partial}\\varphi$.\nThus we complete the proof of Theorem \\ref{thm:main}.\n\n\n\\begin{rem} By arguments similar to those in \\cite{WZ20}, we can get the following extension result for K\\\"ahler currents in a big class.\n\t\\begin{thm}\n\t\tLet $(X,\\omega)$ be a compact K\\\"ahler manifold of complex dimension $n$, and $V\\subset X$ be a complex submanifold of positive dimension. Suppose that $V$ has a holomorphic retraction structure in $X$.\n\t\tLet $\\alpha\\in H^{1,1}(X,\\mathbb R)$ be a big class such that each irreducible component of $E_{nK}(\\alpha)$ either does not intersect $V$ or is contained in $V$. \n\t\tThen any K\\\"ahler current in $\\alpha|_V$ is the restriction of a K\\\"ahler current in $\\alpha$.\n\t\\end{thm}\n\t\\end{rem}\n\n\\section{Examples}\n\nIn \\cite{HK20}, Hosono-Koike point out that in Nakayama's example and Zariski's example, the submanifolds have a holomorphic tubular neighborhood structure in the ambient manifold, and thus a holomorphic retraction structure. \n\n\\noindent{\\textbf{Product manifold.}} Let $Y_1$ and $Y_2$ be two compact K\\\"ahler manifolds and set $X:=Y_1\\times Y_2$. Fix an arbitrary point $p\\in Y_2$ and let $V=Y_1\\times p$; then the natural map $\\pi:Y_1\\times Y_2\\rightarrow Y_1\\times p$ serves as a holomorphic retraction map. \n\n\nAn interesting example of a non-product manifold, communicated to us by Koike \\cite{Ko21}, is the following famous example of Serre.\n\n\\noindent{\\textbf{Serre's example.}} Let $X:=\\mathbb P_{[x;y]}\\times \\mathbb C_z\/\\sim$, where $\\tau\\in \\mathbb H$, with $\\mathbb H$ the upper half plane, and $$([x;y],z)\\sim ([x;y+x],z+1)\\sim ([x;y+\\bar\\tau\\cdot x],z+\\tau).$$\nLet $V:=\\{x=0\\}\\subset X$, a submanifold of $X$ which is obviously isomorphic to the elliptic curve $\\mathbb C\/\\langle 1,\\tau\\rangle$. It is easy to check that the projection map $\\pi: X\\rightarrow \\mathbb C\/\\langle 1,\\tau\\rangle=:V$ is a holomorphic retraction. It can also be verified that $V$ does not have a holomorphic tubular neighborhood structure in $X$.\n\\begin{rem}\n\tIn \\cite{Ko21}, Koike gives a very interesting proof of Theorem \\ref{thm:main} for Serre's example, which however does not seem applicable to the general case treated in this paper.\n\\end{rem}\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\n\n\\section{{\\bf Introduction}}\n\n\nThe loop space $LM$ of a manifold $M$ appears frequently in mathematics and\n mathematical physics. In this paper, using an infinite dimensional version of\n Chern-Simons theory associated to the Wodzicki residue for\n pseudodifferential operators ($\\Psi{\\rm DO}$s), we develop a computable theory of secondary\n characteristic classes on the tangent\n bundle to loop spaces. We apply these secondary classes to distinguish circle actions\n on $S^2\\times S^3$, and we prove that $\\pi_1({\\rm Diff} (S^2\\times S^3))$ is infinite. 
To our knowledge, these applications are the first examples of nonzero Wodzicki-type characteristic classes. \n\n\nSince Chern-Weil and Chern-Simons theory are geometric, it is necessary\nto understand connections and curvature on loop spaces. A Riemannian metric\n$g$ \non $M$ induces a family of metrics $g^s$ on $LM$ parametrized by a Sobolev\nspace parameter $s \\geq 0$, where $s=0$ gives the usual $L^2$ metric, and the\nsmooth case is a kind of limit as $s\\to\\infty.$ Thus we think of\n$s$ as a regularizing parameter, and pay attention to the parts of the theory which\nare independent of $s$. \n\nIn Part I, we compute the connection and curvature for the Levi-Civita\nconnection for $g^s$ for $s>\\frac{1}{2}$. The \nclosed form expressions obtained for the Levi-Civita connection for general $LM$ \nextend Freed's results for loop groups \\cite{Freed}. The connection and \ncurvature forms take values in\nzeroth order $\\Psi{\\rm DO}$s acting on a trivial bundle over $S^1$. \nFor Wodzicki-Chern-Simons classes, we only need\nthe principal and subprincipal symbols for these\nforms, which we calculate.\n\n\n\n\nIn Part II, we develop a theory of Chern-Simons classes on loop spaces.\nThe structure group for the Levi-Civita connection for\n$(LM, g^s)$ is the set of invertible zeroth order $\\Psi{\\rm DO}$s, so we need\n invariant polynomials on the corresponding Lie algebra. The naive choice is\nthe standard polynomials $\\operatorname{Tr}(\\Omega^k)$ of the curvature $\\Omega = \\Omega^s$,\nwhere Tr is the operator trace. However, $\\Omega^k$ is zeroth order\nand hence not trace class, and in any case the operator trace\nis impossible to compute in general. Instead, as in \\cite{P-R2} we use the \nWodzicki residue, the only trace on the full\nalgebra of $\\Psi{\\rm DO}$s. \nFollowing Chern-Simons\n \\cite{C-S} as much as possible, we build a theory of\nWodzicki-Chern-Simons (WCS) classes, which gives classes in $H^{2k-1}(LM^{2k-1})$ associated to partitions of $k$.\n\nThere are two main differences from the finite\ndimensional theory. The absence of a Narasimhan-Ramanan universal connection\ntheorem means that we do not have a theory of differential characters\n\\cite{Ch-Si}. However, since we have a family of connections on $LM$, we can define real valued, not just ${\\mathbb R}\/{\\mathbb Z}$-valued, WCS classes. \n\n In contrast to the operator trace, the Wodzicki residue is locally\n computable, so we can write explicit expressions for the WCS classes.\nIn particular, we can see how the WCS classes depend on the Sobolev parameter $s$, and \nhence define a ``regularized\" or $s$-independent WCS classes. \nThe local expression also yields some vanishing results for WCS classes. More importantly,\nwe produce a nonvanishing \nWCS class on $L(S^2\\times S^3).$ This leads to the topological results described in the first paragraph.\n\nFor related results on characteristic classes on infinite rank bundles with a group of $\\Psi{\\rm DO}$s as structure group, see \n \\cite{lrst, P-R2}.\n \n\n\\medskip\nThe paper is organized as follows. Part I treates the family of metrics $g^s$ on $LM$\nassociated to $(M,g)$. \\S2 discusses connections associated to $g^s.$ After some preliminary material,\nwe compute the Levi-Civita connection for $s=0$ (Lemma \\ref{lem:l2lc}), $s=1$\n(Theorem \\ref{old1.6}), $s\\in {\\mathbb Z}^+$ (Theorem \\ref{thm:sinz}), and general \n$s>\\frac{1}{2}$ (Theorem \\ref{thm25}). These connections allow us to track how the geometry\nof $LM$ depends on $s$. 
\n\nBoth the Levi-Civita and $H^s$ connections have connection and curvature forms taking \nvalues in $\\Psi{\\rm DO}$s of order zero. In \\S3, we compute the symbols of these forms needed in Part II. In \\S4, we show that our results extend Freed's on loop groups \\cite{Freed}.\n\nPart II covers Wodzicki-Chern-Simons classes. In \\S5, we review the finite dimensional\nconstruction of Chern and Chern-Simons classes, and use the Wodzicki residue to define Wodzicki-Chern (WC) and WCS classes (Definition \\ref{def:WCS}). We prove the necessary vanishing\nof the WC classes for mapping spaces (and in particular for $LM$) in Proposition\n\\ref{prop:maps}. In Theorem \\ref{thm:5.5}, we give the explicit local expression for the\nrelative WCS class $CS_{2k-1}^W(g)\\in H^{2k-1}(LM^{2k-1})$ associated to the trivial\npartition of $k$. We then define the regularized or $s$-independent WCS class. \nIn Theorem \\ref{WCSvan}, we give a vanishing result\nfor WCS classes. \n\nIn particular, the WCS class which is the analogue of the classical dimension three Chern-Simons class vanishes on loop spaces of \n $3$-manifolds, so we look for nontrivial examples on $5$-manifolds.\n In \\S\\ref{dimfive}, we use a Sasaki-Einstein\nmetric constructed in \\cite{gdsw} to produce a nonzero WCS class $CS_5^W\\in\nH^5(L(S^2\\times\nS^3)).$ We prove $CS_5^W\\neq 0$ by an exact computer calculation showing\n $\\int_{[a^L]} CS_5^W \\neq 0$, where\n$[a^L]\\in H_5(LM)$ is a cycle associated to a simple\ncircle action on $S^2\\times S^3.$ From this\nnonvanishing, we conclude both that the circle action is not \nsmoothly homotopic to the trivial action and that $\\pi_1({\\rm Diff} (S^2\\times S^3))$ is infinite.\nWe expect other similar results in the future. \n\n\n\nOur many discussions with Sylvie Paycha are gratefully\nacknowledged. We also thank Kaoru Ono and Dan Freed for pointing out errors in previous versions of \nthe paper.\n\n\n\n\n\\bigskip\n\n\n\\large\n\\noindent {{\\bf Part I. The Levi-Civita Connection on the Loop Space $LM$}}\n\\normalsize\n\n\\bigskip\n\nIn this part, we compute the Levi-Civita connection on $LM$\nassociated to a Riemannian metric on $M$ and a Sobolev parameter $s=0$ \nor $s>\\frac{1}{2}.$ The standard $L^2$ metric on $LM$ is the case $s=0$, and otherwise we avoid technical issues by assuming that $s$ is greater than the critical exponent $\\frac{1}{2}$ for analysis on bundles over $S^1.$ \n In \\S\\ref{LCconnection}, the\nmain results are Lemma \\ref{lem:l2lc}, Theorem \\ref{old1.6}, \nTheorem \\ref{thm:sinz}, and Theorem \\ref{thm25},\nwhich compute the Levi-Civita connection for $s =0$, $s=1$, \n$s\\in {\\mathbb Z}^+$, and general $s >\\frac{1}{2},$ respectively.\n\n\nIn\n\\S3, we compute the relevant symbols of the connection one-forms and the\ncurvature two-forms. In \\S4, we compare our results with work of Freed\n\\cite{Freed} on loop groups. \n\n\n\n\\section{{\\bf The Levi-Civita Connection for Sobolev Parameter $s\\geq 0$}}\n\\label{LCconnection}\n\nThis section covers background material and computes the Levi-Civita\nconnection on $LM$ for Sobolev parameter $s=0$ and $s>\\frac{1}{2}$. \nIn \\S2.1, we review\nmaterial on $LM$, and in \\S2.2 we review pseudodifferential operators and the\nWodzicki residue. In \\S2.3, we give the crucial computations of the Levi-Civita connections\nfor $s=0,1$.\nThis computation is extended to $s\\in {\\mathbb Z}^+$ in \\S2.4, and to general $s>\\frac{1}{2}$ in\n\\S2.5. 
In \\S2.6, we discuss how the geometry of $LM$ forces an extension of\nthe structure group of $LM$ from a gauge group to a group of bounded\ninvertible $\\Psi{\\rm DO}$s.\n \n\\subsection{{\\bf Preliminaries on $LM$}}\n\n${}$\n\\medskip\n\nLet $(M, \\langle\\ ,\\ \\rangle)$ \nbe a closed, connected, oriented Riemannian $n$-manifold with loop space $LM\n= C^\\infty(S^1,M)$ of smooth loops. \n$LM$ is a smooth infinite dimensional Fr\\'echet manifold, but it is\n technically simpler \nto work \nwith the smooth Hilbert manifold $H^{s'}(S^1,M)$ of loops in some Sobolev class $s' \\gg 0,$\nas we now recall. For $\\gamma\\in LM$, the formal\ntangent space $T_\\gamma LM$ is \n$\\Gamma(\\gamma^*TM)$, the space\n of smooth sections of the pullback bundle $\\gamma^*TM\\to\nS^1$. The actual tangent space of $H^{s'}(S^1, M)$ at $\\gamma$ is \n$H^{s'-1}(\\gamma^*TM),$ the sections of $\\gamma^*TM$ of Sobolev class $s'-1.$\nWe will fix $s'$ and use $LM, T_\\gamma LM$ for $H^{s'}(S^1, M), H^{s'-1}(\\gamma^*TM)$, respectively.\n\nFor each $s>1\/2,$ we can complete $\\Gamma(\\gamma^*TM\\otimes {\\mathbb C})$\n with respect to the Sobolev inner product\n \\begin{equation}\\label{eq:Sob1}\n\\langle X,Y\\rangle_{s}=\\frac{1}{2\\pi}\\int_0^{2\\pi} \\langle(1+\\Delta)^{s}\nX(\\alpha),Y(\\alpha)\n\\rangle_{\\gamma (\\alpha)}d\\alpha,\\ X,Y\\in \\Gamma(\\gamma^*TM).\n\\end{equation}\nHere $\\Delta=D^*D$, with $D=D\/d\\gamma$ the covariant derivative along\n$\\gamma$. (We use this notation instead of the classical $D\/dt$ to keep track\nof $\\gamma$.)\nWe need the complexified pullback bundle $\\gamma^*TM\\otimes {\\mathbb C}$, denoted from now on\njust as $\\gamma^*TM$, in order to apply the\npseudodifferential operator $(1+\\Delta)^{s}.$\nThe construction of $(1+\\Delta)^{s}$ is reviewed in\n\\S\\ref{pdoreview}. \nWe denote this completion by $H^{s'}(\\gamma^*TM)$. We can consider the\n$s$ metric on $TLM$ for any $s\\in {\\mathbb R}$, but we will only consider \n$s=0$ or $1\/2 < s\\leq s'-1.$\n\n\nA small real neighborhood $U_\\gamma$ \nof the zero section in $H^{s'}(\\gamma^*TM)$ is a\ncoordinate chart near $\\gamma\\in LM$ \nvia the pointwise exponential map\n\\begin{equation}\\label{pointwiseexp}\n\\exp_\\gamma:U_\\gamma\n\\to L M, \\ X \\mapsto \n\\left(\\alpha\\mapsto \\exp_{\\gamma(\\alpha)} X(\\alpha)\\right). \n\\end{equation}\nNote that the domain of the exponential map is not contained in $T_\\gamma LM.$\nThe differentiability of the transition functions $\\exp_{\\gamma_1}^{-1}\\cdot\n\\exp_{\\gamma_2}$ is proved in\n\\cite{E} and \\cite[Appendix A]{Freed1}.\nHere $\\gamma_1, \\gamma_2$ are close loops in the sense that\na geodesically convex neighborhood of $\\gamma_1(\\theta)$ contains\n$\\gamma_2(\\theta)$ and vice versa for all $\\theta.$\nSince \n$\\gamma^*TM$\nis (noncanonically) isomorphic to the trivial bundle ${\\mathcal R} =\nS^1\\times {\\mathbb C}^n\\to S^1$, \nthe model space for $LM$ is the set of \n$H^{s'}$ sections of this trivial bundle. \nThe $s$ metric is a weak Riemannian metric for $s1$ (see\ne.g.~\\cite{fgl} in general and \\cite{ponge} for the case $M=S^1.$). \n\n\nThe Wodzicki residue will be used in\nPart II to define characteristic classes on $LM$. In our particular case, the operator $P$ \nwill be an $\\Psi{\\rm DO}$ of order $-1$ acting on sections of a bundle over $S^1$\n (see (\\ref{cswint})), so\n$\\sigma_{-1}(P)$ is globally defined. Of course, $\\int_{S^*S^1}\n\\operatorname{tr}\\sigma_{-1}(P) d\\xi d\\theta = 2\\int_{S^1}\\operatorname{tr}\\sigma_{-1}(P)d\\theta$. 
It is\neasy to check that this integral,\nwhich strictly speaking involves a choice of cover of $S^1$ and a partition of unity,\nequals the usual $2\\int_0^{2\\pi} \\operatorname{tr}\\sigma_{-1}(P) d\\theta.$\n\n\n\n\n\n\n\n\n\n\n\\subsection{The Levi-Civita Connection for $s=0, 1$}\n${}$\n\\medskip\n\n\nThe smooth Riemannian manifold $LM = H^{s'}(S^1,M)$ has tangent bundle $TLM$ with \n$T_\\gamma LM = H^{s'-1}(\\gamma^*TM).$ For the $s'-1$ metric on $TLM$ (i.e., \n$s = s'-1$ in (\\ref{eq:Sob1})), \nthe \nLevi-Civita connection exists and is determined by the six term formula\n\\begin{eqnarray}\\label{5one}\n2\\ip{\\con{s}{X}Y,Z}_{s} &=& X\\ip{Y,Z}_{s}+Y\\ip{X,Z}_{s}-Z\\ip{X,Y}_{s}\\\\\n&&\\qquad +\\ip{[X,Y],Z}_{s}+\\ip{[Z,X],Y}_{s}-\\ip{[Y,Z],X}_s\\nonumber\n\\end{eqnarray}\n\\cite[Ch. VIII]{lang}. The point is that each term on the RHS of (\\ref{5one}) \nis\na {\\it continuous} linear functional $T_i:H^{s=s'-1}(\\gamma^*TM) \\to {\\mathbb C}$ in $Z$. Thus \n$T_i(Z) = \\ip{T_i'(X,Y),Z}_s$ for a unique $T'(X,Y)\\in H^{s'-1}(\\gamma^*TM)$, and $\\con{s}{Y}X\n= \\frac{1}{2}\\sum_i T'_i.$ \n\nIn general, the Sobolev parameter $s$ in (\\ref{eq:Sob1}) differs from the parameter $s'$ defining the loop space. We discuss how this affects the existence of a Levi-Civita connection. \n\n\\begin{rem}\\label{lcrem} For general $s >\\frac{1}{2}$, the Levi-Civita connection for the $H^s$ \nmetric is guaranteed to exist on the bundle $H^s(\\gamma^*TM)$, as above. However, it is inconvenient to have the bundle depend on the Sobolev parameter, for several reasons: \n(i) $H^s(\\gamma^*TM)$ is strictly speaking not the tangent bundle of $LM$, (ii) for the\n$L^2$ ($s=0$) metric, the Levi-Civita connection should be given by the Levi-Civita connection on $M$ applied pointwise along the loop (see Lemma \\ref{lem:l2lc}), and on $L^2(\\gamma^*TM)$ this would have to be interpreted in the distributional sense; (iii) to compute Chern-Simons classes on\n$LM$ in Part II, we need to compute with a pair of connections corresponding to $s=0, s=1$ on the\nsame bundle. These problems are not fatal: (i) and (ii) are essentially aesthetic issues,\nand for (iii), the connection one-forms will take values in zeroth order $\\Psi{\\rm DO}$s, which are bounded operators on any \n$H^{s'-1}(\\gamma^*TM)$, so $s' \\gg 0$ can be fixed. \n\nThus it is more convenient\nto fix $s'$ and consider the family of $H^s$ metrics on $TLM$ for \n$\\frac{1}{2} < s < s'-1$. \n However, the existence of the Levi-Civita connection for the $H^s$ metric is trickier.\n For a sequence $Z\\in H^{s'-1} = H^{s'-1}(\\gamma^*TM)$ with $Z\\to 0$\n in $H^{s'-1}$ or in \n $H^s$, the RHS of (\\ref{5one}) goes to $0$ for fixed $X, Y\\in H^s.$ Since\n $H^{s'-1}$ is dense in $H^{s}$, the RHS of (\\ref{5one}) extends to a continuous linear functional on $H^s$. Thus the RHS of (\\ref{5one}) is given by\n $\\langle L(X,Y), Z\\rangle_s$ for some $L(X,Y)\\in H^s.$ We set $\\nabla^{s}_YX = \n \\frac{1}{2}L(X,Y)$. Note that even if we naturally demand that\n $X, Y\\in H^{s'-1}$, we only get $\\nabla^s_YX\\in H^s\\supset H^{s'-1}$ without additional work. 
Part of the content of Theorem \\ref{thm25} is that the Levi-Civita connection exists in the {\\it strong sense}: given a tangent vector $X\\in H^{s'-1}(\\gamma^*TM)$ and a smooth vector field\n$Y(\\eta)\\in H^{s'-1}(\\eta^*TM)$ for all $\\eta$,\n $\\nabla^s_XY(\\gamma)\\in H^{s'-1}(\\gamma^*TM).$ See Remark 2.6.\n \n \n\n\\end{rem}\n\n\n\n\nWe need to discuss local coordinates on $LM$.\nFor motivation, recall that\n\\begin{equation}\\label{lie}[X,Y]^a = X(Y^a)\\partial_a - Y(X^a)\\partial_a\n\\equiv \\delta_X(Y) -\\delta_Y(X)\n\\end{equation}\nin local coordinates on a finite dimensional manifold. Note that\n$X^i\\partial_iY^a = X(Y^a) =\n(\\delta_XY)^a$ in this notation.\n\nLet $Y$ be a vector field on $LM$, and let $X$ be a tangent vector at\n$\\gamma\\in LM.$ The local variation $\\delta_XY$ of $Y$ in the direction of $X$ at $\\gamma$ is \ndefined as usual: let $\\gamma(\\varepsilon,\\theta)$ be a family of loops in $M$\nwith $\\gamma(0,\\theta) = \\gamma(\\theta), \\frac{d}{d\\varepsilon}|_{_{\\varepsilon=0}}\n\\gamma(\\varepsilon,\\theta) = X(\\theta).$ Fix $\\theta$, and let $(x^a)$ be\ncoordinates near $\\gamma(\\theta)$. We call these coordinates \n{\\it manifold coordinates.} Then\n$$\\delta_XY^a(\\gamma)(\\theta) \\stackrel{{\\rm def}}{=}\n\\frac{d}{d\\varepsilon}\\biggl|_{_{_{\\varepsilon =0}}} Y^a(\\gamma(\\varepsilon,\\theta)).$$\nNote that $\\delta_XY^a = (\\delta_XY)^a$ by definition.\n\n\\begin{rem} Having $(x^a)$ defined only near a fixed $\\theta$ is inconvenient.\nWe can find coordinates that work for all $\\theta$ as follows. For\n fixed $\\gamma$, there is an $\\varepsilon$ such that for all $\\theta$,\n $\\exp_{\\gamma(\\theta)} X$ is inside the cut locus of $\\gamma(\\theta)$ if\n $X\\in T_{\\gamma(\\theta)}M$ has $|X|<\\varepsilon.$ Fix such an $\\varepsilon.$ Call \n $X\\in H^{s'-1}(\\gamma^*TM)$ {\\it\n short} if $|X(\\theta)|<\\varepsilon$ for all $\\theta.$ Then\n$$U_\\gamma = \\{\\theta \\mapsto \\exp_{\\gamma(\\theta)}X(\\theta) | X\\ {\\rm is\\\n short}\\}\\subset LM$$\nis a coordinate neighborhood of $\\gamma$ parametrized by $\\{ X: X\\ {\\rm is\\ \n short}\\}.$ \n\n We know\n$H^{s'-1}(\\gamma^*TM) \\simeq H^{s'-1}(S^1\\times {\\mathbb R}^n)$ noncanonically, so\n$U_\\gamma$ is parametized by short sections of $H^{s'-1}(S^1\\times {\\mathbb R}^n)$ for\na different $\\varepsilon.$ In particular, we have a smooth diffeomorphism $\\beta$ from\n$U_\\gamma$ to short sections of $H^{s'-1}(S^1\\times {\\mathbb R}^n)$.\n\nPut coordinates $(x^a)$ on ${\\mathbb R}^n$, which we identify canonically with the fiber ${\\mathbb R}^n_\\theta$\nover $\\theta$ in $S^1\\times {\\mathbb R}^n$. For $\\eta\\in\nU_\\gamma$, we have $\\beta(\\eta) = (\\beta(\\eta)^1(\\theta),...,\\beta(\\eta)^n(\\theta)).$\nAs with finite dimensional coordinate systems, we will drop $\\beta$ and just\nwrite\n$\\eta = (\\eta(\\theta)^a).$ These coordinates work for all\n$\\eta$ near $\\gamma$ and for all $\\theta.$ The definition of $\\delta_XY$ above carries over to exponential coordinates.\n\nWe will call these coordinates {\\it exponential coordinates}.\n\\end{rem}\n\n(\\ref{lie}) continues to hold\n for vector fields on $LM$, in either \n manifold or exponential coordinates.\n To see this, one checks that the coordinate-free proof that $L_XY(f) =\n [X,Y](f)$ for $f\\in C^\\infty(M)$ (e.g.~\\cite[p.~70]{warner}) carries over to\n functions on $LM$. 
In brief, the usual proof involves a map $H(s,t)$ of a\n neighborhood of the origin in ${\\mathbb R}^2$ into $M$, where $s,t$ are parameters for\n the flows of $X, Y,$ resp. For $LM$, we have a map $H(s,t,\\theta)$, where\n $\\theta$ is the loop parameter. \nThe usual proof uses\n only $s, t$ differentiations,\n so $\\theta$ is unaffected. The point is that the $Y^i$ are local functions\n on the $(s,t,\\theta)$ parameter space, whereas\nthe $Y^i$ are not\n local functions on $M$ at points where loops cross or self-intersect.\n\nWe first compute the $L^2$ ($s=0$) Levi-Civita connection invariantly and in \nmanifold coordinates.\n\n\n\n\\begin{lem} \\label{lem:l2lc} Let $\\nabla^{LC}$ be the Levi-Civita connection on $M$. \n Let $\\operatorname{ev}_\\theta:LM\\to M$ be $\\operatorname{ev}_\\theta(\\gamma) = \\gamma(\\theta).$ \n Then $D_XY(\\gamma)(\\theta) \\stackrel{\\rm def}{=} \n (\\operatorname{ev}_\\theta^*\\nabla^{LC})_XY(\\gamma)(\\theta)$ is the $L^2$ Levi-Civita connection on $LM$. In manifold coordinates,\n \\begin{equation}\\label{l2lc} (D_XY)^a(\\gamma)(\\theta) = \\delta_XY^a(\\gamma)(\\theta) +\n \\cch{bc}{a}(\\gamma(\\theta))X^b(\\gamma)(\\theta) Y^c(\\gamma)(\\theta).\n \\end{equation}\n\\end{lem}\n\\medskip\n\nAs in Remark \\ref{lcrem}, we may assume that\n$X, Y\\in H^{s'-1}(\\gamma^*TM)$ with $s' \\gg 0$, so (\\ref{l2lc}) makes sense.\n\n\\begin{proof} $\\operatorname{ev}_\\theta^*\\nabla^{LC}$ is a connection on\n$\\operatorname{ev}_\\theta^*TM\\to LM$. We have\n$\\operatorname{ev}_{\\theta,*}(X) = X(\\theta)$. If $U$ is a coordinate\nneighborhood on $M$ near some $\\gamma(\\theta)$, then on $\\operatorname{ev}_\\theta^{-1}(U)$, \n\\begin{eqnarray*}(\\operatorname{ev}_\\theta^*\\nabla^{LC})_XY^a(\\gamma)(\\theta) &=& (\\delta_{X}Y)^a(\\gamma)(\\theta) +\n((\\operatorname{ev}_\\theta^*\\omega^{LC}_{X})Y)^a (\\theta)\\\\\n&=& (\\delta_{X}Y)^a(\\gamma)(\\theta) + \n \\chw{b}{c}{a}(\\gamma(\\theta))X^b(\\gamma)(\\theta) Y^c(\\gamma)(\\theta)\n\\end{eqnarray*}\nSince $\\operatorname{ev}_\\theta^*\\nabla^{LC}$ is a connection, for each fixed $\\theta$, $\\gamma$ and $X\\in\n T_\\gamma LM$, \n $Y\\mapsto$\\\\\n $ (\\operatorname{ev}^*_\\theta\\nabla^{LC})_XY(\\gamma)$\n has Leibniz rule with respect to\nfunctions on $LM$. Thus $D$ is a connection on $LM.$\n\n\n$D$ is torsion free, as from the local expression\n $D_XY - D_YX = \\delta_XY - \\delta_YX = [X,Y].$\n\nTo show that $D_XY$ is compatible with the $L^2$\n metric, first recall that for a function $f$ on $LM$, $D_Xf = \\delta_Xf =\n \\frac{d}{d\\varepsilon}|_{_{\\varepsilon=0}}f(\\gamma(\\varepsilon,\\theta))$ for $X(\\theta)\n = \\frac{d}{d\\varepsilon}|_{_{\\varepsilon=0}}\\gamma(\\varepsilon, \\theta).$\n (Here $f$ depends only on\n $\\gamma$.) 
Thus (suppressing the partition of unity, which is independent of $\\varepsilon$)\n\\begin{eqnarray*} D_X\\langle Y,Z\\rangle_0 &=& \n \\frac{d}{d\\varepsilon}\\biggl|_{_{_{\\varepsilon=0}}}\\int_{S^1} g_{ab}(\\gamma(\\varepsilon,\\theta))\nY^a(\\gamma(\\varepsilon,\\theta))Z^b(\\gamma(\\varepsilon,\\theta))d\\theta\\\\\n&=& \\int_{S^1}\\partial_c\ng_{ab}(\\gamma(\\varepsilon,\\theta))\nX^cY^a(\\gamma(\\varepsilon,\\theta))Z^b(\\gamma(\\varepsilon,\\theta))d\\theta\\\\\n&&\\qquad + \\int_{S^1} \ng_{ab}(\\gamma(\\varepsilon,\\theta))\n(\\delta_XY)^a(\\gamma(\\varepsilon,\\theta))Z^b(\\gamma(\\varepsilon,\\theta))d\\theta\\\\\n&&\\qquad \n+ \\int_{S^1} \ng_{ab}(\\gamma(\\varepsilon,\\theta))\nY^a(\\gamma(\\varepsilon,\\theta))(\\delta_XZ)^b(\\gamma(\\varepsilon,\\theta))d\\theta\\\\\n&=& \\int_{S^1}\\Gamma{}_{c a}^{e}g_{eb} X^cY^aZ^b +\n\\Gamma{}_{c b}^{e}g_{ae}X^cY^aZ^b\\\\\n&&\\qquad +g_{ab}(\\delta_XY)^aZ^b + g_{ab}Y^a(\\delta_X Z)^bd\\theta\\\\\n&=& \\langle D_XY,Z\\rangle_0 + \\langle Y, D_XZ\\rangle_0.\n\\end{eqnarray*}\n\\end{proof}\n\n\n\\begin{rem} The local expression for $D_XY$ also holds in exponential coordinates. More precisely, let $(e_1(\\theta),...,e_n(\\theta))$\nbe a global frame of $\\gamma^*TM$ given by the trivialization of\n$\\gamma^*TM.$ Then $(e_i(\\theta))$ is also naturally a frame of\n$T_XT_{\\gamma(\\theta)}M$ for all $X\\in T_{\\gamma(\\theta)}M.$ We use\n$\\exp_{\\gamma(\\theta)}$ to pull back the metric on $M$ to a metric on\n$T_{\\gamma(\\theta)}M$: \n$$g_{ij}(X) = (\\exp^*_{\\gamma(\\theta)}g)(e_i, e_j) =\n g(d(\\exp_{\\gamma(\\theta)})_X (e_i), d(\\exp_{\\gamma(\\theta)})_X\n (e_j))_{\\exp_{\\gamma(\\theta)}X}.$$\nThen the Christoffel symbols \n$\\Gamma_{b c}^{a}(\\gamma(\\theta))$\nare computed with respect to\n this metric. For example, the term $\\partial_\\ell g_{bc}$ means $e_\\ell\n g(e_a, e_b)$, etc. The proof that $D_XY$ has the local expression (\\ref{l2lc}) \n then carries over to exponential coordinates.\n\n\\end{rem}\n\nThe $s=1$ Levi-Civita connection on $LM$ is given as follows.\n\n\\begin{thm} \\label{old1.6}\nThe $s=1$ Levi-Civita connection $\\nabla = \\nabla^1$ on $LM$ is given at the loop\n$\\gamma$ by\n\\begin{eqnarray*} \\nabla_XY &=& D_XY + \\frac{1}{2}(1+\\Delta)^{-1}\\left[\n-\\nabla_{\\dot\\gamma}(R(X,\\dot\\gamma)Y) - R(X,\\dot\\gamma)\\nabla_{\\dot\\gamma} Y\\right.\\\\\n&&\\qquad \\left. -\\nabla_{\\dot\\gamma}(R(Y,\\dot\\gamma)X) - R(Y,\\dot\\gamma)\\nabla_{\\dot\\gamma} X\\right.\\\\\n&&\\qquad \\left. +R(X,\\nabla_{\\dot\\gamma} Y)\\dot\\gamma - R(\\nabla_{\\dot\\gamma} X, Y)\\dot\\gamma\\right].\n\\end{eqnarray*}\n\\end{thm}\n\n\nWe prove this in a series of steps. 
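\nAs an illustrative special case (not needed for the proof), note that if $(M,g)$ is flat, then $R\\equiv 0$ and every curvature term in Theorem \\ref{old1.6} vanishes, so that\n$$\\nabla^1_XY = D_XY;$$\nthus for a flat metric the $s=1$ and $L^2$ Levi-Civita connections coincide, and in general the curvature terms measure the difference between $\\nabla^1$ and the pointwise connection $D$.\n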
The assumption in the next Proposition will be dropped later.\n\n\n\\begin{prop} \\label{old1.3}\nThe Levi-Civita connection for the $s=1$ metric is given by\n$$\\nabla_X^1Y = D_XY + \\frac{1}{2}(1+\\Delta)^{-1}[D_X, 1+\\Delta]Y +\n \\frac{1}{2}(1+\\Delta)^{-1}[D_Y, 1+\\Delta]X \n+ A_XY,$$\nwhere we assume that for $X, Y\\in H^{s'-1}$, $A_XY$ is well-defined by\n\\begin{equation}\\label{insert2}-\\frac{1}{2}\\langle [D_Z,1+\\Delta]X,Y\\rangle_0 = \\langle A_XY,Z\\rangle_1.\n\\end{equation}\n\\end{prop}\n\n\n\n\n\\begin{proof} By Lemma \\ref{lem:l2lc},\n\\begin{eqnarray*} X\\langle Y,Z\\rangle_1 &=& X\\langle (1+\\Delta)Y,Z\\rangle_0 =\n \\langle D_X((1+\\Delta)Y),Z\\rangle_0 + \\langle (1+\\Delta)Y, D_XZ\\rangle_0\\\\\nY\\langle X,Z\\rangle_1 &=& \\langle D_Y((1+\\Delta)X),Z\\rangle_0 + \\langle (1+\\Delta)X,\nD_YZ\\rangle_0\\\\\n-Z\\langle X,Y\\rangle_1 &=& -\\langle D_Z((1+\\Delta)X),Y\\rangle_0 - \\langle (1+\\Delta)X,\nD_ZY\\rangle_0\\\\\n\\langle [X,Y],Z\\rangle_1 &=& \\langle(1+\\Delta)(\\delta_XY - \\delta_YX), Z\\rangle_0\n= \\langle (1+\\Delta)(D_XY - D_YX),Z\\rangle_0\\\\\n\\langle[Z,X],Y\\rangle_1 &=& \\langle(1+\\Delta)(D_ZX-D_XZ),Y\\rangle_0\\\\\n-\\langle[Y,Z],X\\rangle_1 &=& -\\langle(1+\\Delta)(D_YZ-D_ZY),X\\rangle_0.\n\\end{eqnarray*}\nThe six terms on the left hand side must sum up to $2\\langle \\nabla^1_XY,Z\\rangle_1$ \nin the sense of Remark \\ref{lcrem}.\nAfter some cancellations, for $\\nabla =\\nabla^1$ we get\n\\begin{eqnarray*} 2\\langle\\nabla_XY,Z\\rangle_1 &=& \\langle D_X((1+\\Delta)Y),Z\\rangle_0 +\n \\langle D_Y((1+\\Delta)X),Z\\rangle_0\\nonumber\\\\\n&&\\qquad + \\langle (1+\\Delta)(D_XY - D_YX),Z\\rangle_0 - \\langle\n D_Z((1+\\Delta)X),Y\\rangle_0\\nonumber\\\\\n&&\\qquad +\\langle(1+\\Delta)D_ZX),Y\\rangle_0\\nonumber\\\\\n&=& \\langle (1+\\Delta)D_XY,Z\\rangle_0 + \\langle [D_X,1+\\Delta] Y, Z\\rangle_0\\\\\n&&\\qquad + \\langle (1+\\Delta)D_YX,Z\\rangle_0 + \\langle [D_Y,1+\\Delta] X, Z\\rangle_0\\nonumber\\\\\n&&\\qquad + \\langle (1+\\Delta)(D_XY - D_YX),Z\\rangle_0 -\\langle\n [D_Z,1+\\Delta]X,Y\\rangle_0\\\\\n&=& 2\\langle D_XY,Z\\rangle_1 + \\langle (1+\\Delta)^{-1}[D_X,1+\\Delta]Y,Z\\rangle_1\\\\\n&&\\qquad + \\langle (1+\\Delta)^{-1}[D_Y,1+\\Delta]X,Z\\rangle_1 +2\n\\langle A_XY,Z\\rangle_1.\n\\end{eqnarray*}\n\n\\end{proof}\n\nNow we compute the bracket terms in the Proposition. We have $[D_X,1+\\Delta] =\n[D_X,\\Delta]$. Also,\n$$0 = \\dot\\gamma\\langle X, Y\\rangle_0 = \\langle\\nabla_{\\dot\\gamma}X,Y\\rangle_0\n+ \\langle X,\\nabla_{\\dot\\gamma}Y\\rangle_0,$$\nso \n\\begin{equation}\\label{one}\\Delta = \\nabla_{\\dot\\gamma}^* \\nabla_{\\dot\\gamma}\n = -\\nabla_{\\dot\\gamma}^2.\n\\end{equation}\n\n\\begin{lem} $[D_X,\\nabla_{\\dot\\gamma}]Y = R(X,\\dot\\gamma)Y.$\n\\end{lem}\n\n\\begin{proof} \nNote that $\\gamma^\\nu, \\dot\\gamma^\\nu$ are locally defined functions on \n$S^1\\times LM.$\nLet $\\tilde\\gamma:\n[0,2\\pi]\\times (-\\varepsilon,\\varepsilon)\\to M$ be a smooth map with $\\tilde\\gamma(\\theta,0) =\n\\gamma(\\theta)$, and\n$\\frac{d}{d\\tau}|_{\\tau = 0}\\tilde\\gamma(\\theta,\\tau) = Z(\\theta).$\nSince $(\\theta,\\tau)$ are coordinate functions on\n$S^1\\times (-\\varepsilon,\\varepsilon)$, we have\n\\begin{eqnarray}\\label{badterms} Z(\\dot\\gamma^\\nu) &=& \\delta_Z(\\dot\\gamma^\\nu) = \\ptau{Z}(\\dot\\gamma^\\nu) =\n\\dtau\\right.\\left(\\frac{\\partial}{\\partial\\theta}\n(\\tilde\\gamma(\\theta,\\tau)^\\nu\\right)\\\\\n&=& \\frac{\\partial}{\\partial\\theta}\n\\dtau\\right. 
\\tilde\\gamma(\\theta,\\tau)^\\nu = \\partial_\\theta Z^\\nu \\equiv\n\\dot Z^\\nu.\\nonumber\n\\end{eqnarray}\n\n We compute\n\\begin{eqnarray*} \n(D_X\\nabla_{\\dot\\gamma} Y)^a\n&=& \\delta_X(\\nabla_{\\dot\\gamma} Y)^a +\n \\chw{b}{c}{a}X^b\\nabla_{\\dot\\gamma} Y^c\\\\\n&=& \\delta_X(\\dot\\gamma^j\\partial_jY^a + \\chw{b}{c}{a}\\dot\\gamma^bY^c\n+ \\chw{b}{c}{a}X^b(\\dot\\gamma^j\\partial_jY^c + \\chw{e}{f}{c}\\dot\\gamma^e Y^f)\\\\\n&=& \\dot X^j\\partial_jY^a + \\dot\\gamma^j\\partial_j\\delta_XY^a +\n \\partial_m\\chw{b}{c}{a}X^m\\dot\\gamma^bY^\n+ \\chw{b}{c}{a}\\dot X^bY^c + \\chw{b}{c}{a}\\dot\\gamma^b\\delta_XY^c\\\\\n&&\\qquad \n+ \\chw{b}{c}{a}X^b\\dot\\gamma^j\\partial_jY^c +\n \\chw{b}{c}{a}\\chw{e}{f}{c}X^b\\dot\\gamma^eY^f.\\\\\n(\\nabla_{\\dot\\gamma} D_XY)^a &=& \\dot\\gamma^j(\\partial_j(D_XY)^a +\n \\chw{b}{c}{a}\\dot\\gamma^b (D_XY)^c)\\\\\n&=& \\dot\\gamma^j\\partial_j(\\delta_XY^a + \\chw{b}{c}{a}X^b Y^c) \n+ \\chw{b}{c}{a}\\dot\\gamma^b(\\delta_XY^c + \\chw{s}{f}{c}X^eY^f)\\\\\n&=& \\dot\\gamma^j\\partial_j\\delta_XY^a + \\dot\\gamma^j\\partial_j\\chw{b}{c}{a}X^bY^c +\n \\chw{b}{c}{a}\\dot X^bY^c\n+ \\chw{b}{c}{a}X^b\\dot Y^c + \\chw{b}{c}{a}\\dot\\gamma^b\\delta_XY^c \\\\\n&&\\qquad + \\chw{b}{c}{a}\\chw{e}{f}{c}\\dot\\gamma^b X^eY^f.\n\\end{eqnarray*}\nTherefore\n\\begin{eqnarray*} (D_X\\nabla_{\\dot\\gamma}Y - \\nabla_{\\dot\\gamma}D_XY)^a &=& \\partial_m\n \\chw{b}{c}{a}X^m\\dot\\gamma^bY^c - \\partial_j \\chw{b}{c}{a}\\dot\\gamma^j X^bY^c\n + \\chw{b}{c}{a}\\chw{e}{f}{c}X^b\\dot\\gamma^e Y^f \\\\\n&&\\qquad \n-\\chw{b}{c}{a}\\chw{e}{f}{c}\\dot\\gamma^b X^e Y^f \\\\\n&=& (\\partial_j \\Gamma_{bc}^{a} - \\partial_b \\chw{j}{c}{a} +\\chw{j}{e}{a}\\chw{b}{c}{e}-\n\\chw{b}{e}{a}\\chw{j}{c}{e})\\dot\\gamma^b X^j Y^c \\\\\n&=& R_{jbc}^{\\ \\ \\ a}X^j\\dot\\gamma^b Y^c,\n\\end{eqnarray*}\nso \n$$D_X\\nabla_{\\dot\\gamma}Y - \\nabla_{\\dot\\gamma}D_XY = R(X,\\dot\\gamma)Y.$$\n\\end{proof}\n\n\\begin{cor}\\label{cor:zero}\n At the loop $\\gamma$, $[D_X,\\Delta]Y = -\\nabla_{\\dot\\gamma}(R(X,\\dot\\gamma)Y) -\n R(X,\\dot\\gamma)\\nabla_{\\dot\\gamma}Y.$ In particular, $[D_X,\\Delta]$ is a zeroth order\n operator.\n\\end{cor}\n\n\\begin{proof} \n\\begin{eqnarray*} [D_X,\\Delta]Y &=& (-D_X\\nabla_{\\dot\\gamma}\\ndg + \\nabla_{\\dot\\gamma}\\ndg D_X)Y \\\\\n&=& -(\\nabla_{\\dot\\gamma} D_X\\nabla_{\\dot\\gamma} Y+ R(X,\\dot\\gamma)\\nabla_{\\dot\\gamma} Y) +\\nabla_{\\dot\\gamma}\\ndg D_XY\\\\\n&=& -(\\nabla_{\\dot\\gamma}\\ndg D_XY + \\nabla_{\\dot\\gamma}(R(X,\\dot\\gamma)Y) + R(X,\\dot\\gamma)\\nabla_{\\dot\\gamma} Y) \n+\\nabla_{\\dot\\gamma}\\ndg D_XY\\\\\n&=& -\\nabla_{\\dot\\gamma}(R(X,\\dot\\gamma)Y) - R(X,\\dot\\gamma)\\nabla_{\\dot\\gamma} Y.\n\\end{eqnarray*}\n\\end{proof}\n\nNow we complete the proof of Theorem \\ref{old1.6}, showing in the process that $A_XY$ exists. 
\n\\medskip\n\n\\noindent {\\it Proof of Theorem \\ref{old1.6}.}\n By Proposition \\ref{old1.3} and Corollary \\ref{cor:zero}, we have\n\\begin{eqnarray*} \\nabla_XY &=& D_XY + \\frac{1}{2}(1+\\Delta)^{-1}[D_X,1+\\Delta]Y +\n (X\\leftrightarrow Y) + A_XY\\\\\n&=& D_XY + \\frac{1}{2}(1+\\Delta)^{-1}(-\\nabla_{\\dot\\gamma}(R(X,\\dot\\gamma)Y) - R(X,\\dot\\gamma)\\nabla_{\\dot\\gamma} Y) + \n (X\\leftrightarrow Y) + A_XY,\n\\end{eqnarray*}\nwhere $(X\\leftrightarrow Y)$ denotes the previous term with $X$ and $Y$ switched.\n\nThe curvature tensor satisfies \n$$-\\langle Z, R(X,Y)W\\rangle = \\langle R(X,Y)Z,W\\rangle = \\langle R(Z,W)X,Y\n\\rangle$$\n pointwise, so\n\\begin{eqnarray*} \\langle A_XY,Z\\rangle_1 &=&\n -\\frac{1}{2}\\langle[D_Z,1+\\Delta]X,Y\\rangle_0\\\\\n&=& -\\frac{1}{2}\\langle (-\\nabla_{\\dot\\gamma}(R(Z,\\dot\\gamma)X) - R(Z,\\dot\\gamma)\\nabla_{\\dot\\gamma} X,Y\\rangle_0\\\\\n&=& -\\frac{1}{2} \\langle R(Z,\\dot\\gamma)X,\\nabla_{\\dot\\gamma} Y\\rangle_0 + \\frac{1}{2}\\langle\n R(Z,\\dot\\gamma)\\nabla_{\\dot\\gamma} X,Y\\rangle_0\\\\\n&=& -\\frac{1}{2} \\langle R(X,\\nabla_{\\dot\\gamma} Y)Z,\\dot\\gamma\\rangle_0 + \\frac{1}{2}\\langle R(\\nabla_{\\dot\\gamma} X,\n Y)Z,\\dot\\gamma\\rangle_0\\\\\n&=& \\frac{1}{2}\\langle Z, R(X,\\nabla_{\\dot\\gamma} Y)\\dot\\gamma\\rangle_0 - \\frac {1}{2} \\langle Z,\n R(\\nabla_{\\dot\\gamma} X,Y)\\dot\\gamma\\rangle_0\\\\\n&=&\\frac{1}{2}\\langle Z, (1+\\Delta)^{-1}(R(X,\\nabla_{\\dot\\gamma} Y)\\dot\\gamma - R(\\nabla_{\\dot\\gamma} X, Y)\\dot\\gamma)\\rangle_1.\n\\end{eqnarray*}\nThus $A_XY$ must equal $\\frac{1}{2} (1+\\Delta)^{-1}(R(X,\\nabla_{\\dot\\gamma} Y)\\dot\\gamma - R(\\nabla_{\\dot\\gamma} X, Y)\\dot\\gamma).$ \nThis makes sense: for $X, Y\\in H^{s'-1}$, \n $A_XY\\in H^{s'}\\subset H^1,$ since $R$ is zeroth order. \n\n\\hfill$\\Box$\n\\medskip \n\n\n\n\\begin{rem} Locally on $LM$, we should have \n$D_XY = \\delta_X^{LM}Y + \\omega_X^{LM}(Y)$. \nNow\n$\\delta_X^{LM}Y$ can only mean $\\frac{d}{d\\tau}|_{\\tau =\n 0}\\frac{d}{d\\epsilon}|_{\\epsilon = 0}\\gamma(\\epsilon,\\tau,\\theta)$, where\n$\\gamma(0,0,\\theta) = \\gamma(\\theta)$, ${d\\epsilon}|_{\\epsilon =\n 0}\\gamma(\\epsilon,0,\\theta) = X(\\theta)$, \n${d\\tau}|_{\\tau = 0}\\gamma(\\epsilon,\\tau,\\theta) = Y_{\\gamma(\\epsilon, 0,\\cdot)}(\\theta).$\n In other words, $\\delta_X^{LM}Y$ equals $ \\delta_XY$.\nSince $D_XY^a = \\delta_XY^a + \\chw{b}{c}{a}(\\gamma(\\theta))$, the connection one-form for the $L^2$ Levi-Civita connection on $LM$ is given by\n$$\\omega^{LM}_X(Y)^a(\\gamma)(\\theta) = \\chw{b}{c}{a}(\\gamma(\\theta))X^bY^c\n= \\omega^M_X(Y)^a(\\gamma(\\theta)).$$\n\\end{rem}\n\nBy this remark, we get\n\\begin{cor}\\label{cor2} The connection one-form $\\omega = \\omega^1$ for $\\nabla^1$ in \nexponential coordinates is\n\\begin{eqnarray}\\label{two}\\omega_X(Y)(\\gamma)(\\theta) &=& \\omega^M_X(Y)(\\gamma(\\theta))\n + \n\\frac{1}{2}\\bigl\\{(1+\\Delta)^{-1}\\left[\n-\\nabla_{\\dot\\gamma}(R(X,\\dot\\gamma)Y) - R(X,\\dot\\gamma)\\nabla_{\\dot\\gamma} Y\\right.\\nonumber\\\\\n&&\\qquad \\left. 
-\\nabla_{\\dot\\gamma}(R(Y,\\dot\\gamma)X) - R(Y,\\dot\\gamma)\\nabla_{\\dot\\gamma} X\\right.\\\\\n&&\\qquad \\left.+R(X,\\nabla_{\\dot\\gamma} Y)\\dot\\gamma - R(\\nabla_{\\dot\\gamma} X, Y)\\dot\\gamma\\right]\\bigr\\}(\\theta).\\nonumber\n\\end{eqnarray}\n\\end{cor}\n\n\n\\subsection{The Levi-Civita Connection for $s\\in{\\mathbb Z}^+$}\n${}$\n\\medskip\n\nFor $s>\\frac{1}{2}$, the proof of Prop.~\\ref{old1.3} extends directly to give\n\n\\begin{lem} \\label{lem: LCs}\nThe Levi-Civita connection for the $H^s$ metric is given by\n$$\\nabla_X^sY = D_XY + \\frac{1}{2}(1+\\Delta)^{-s}[D_X, (1+\\Delta)^s]Y +\n \\frac{1}{2}(1+\\Delta)^{-s}[D_Y, (1+\\Delta)^s]X \n+ A_XY,$$\nwhere we assume that for $X, Y\\in H^{s'-1}$, $A_XY\\in H^s$ is characterized by\n\\begin{equation}\\label{axy}\n-\\frac{1}{2}\\langle [D_Z,(1+\\Delta)^s]X,Y\\rangle_0 = \\langle A_XY,Z\\rangle_s.\n\\end{equation}\n\\end{lem}\n\\bigskip\n\nWe now compute the bracket terms.\n\n\\begin{lem}\\label{bracketterms}\nFor $s\\in {\\mathbb Z}^+$, at the loop $\\gamma$,\n\\begin{equation}\\label{bracket}\n[D_X,(1+\\Delta)^s]Y = \\sum_{k=1}^s(-1)^k\\left(\\begin{array}{c}s\\\\k\\end{array}\\right)\n\\sum_{j=0}^{2k-1} \\nabla_{\\dot\\gamma}^j(R(X,\\dot\\gamma)\\nabla_{\\dot\\gamma}^{2k-1-j}Y).\n\\end{equation}\nIn particular, $[D_X,(1+\\Delta)^s]Y$ is a $\\Psi{\\rm DO}$ of order $2s-1$ in either $X$ or $Y$.\n\\end{lem}\n\n\\begin{proof} The sum over $k$ comes from the binomial expansion of $(1+\\Delta)^s$, so\nwe just need an inductive formula for \n$[D_X,\\Delta^s].$ \nThe case $s=1$ is Proposition \\ref{old1.3}. For the induction step, we have\n\\begin{eqnarray*} [D_X,\\Delta^s] &=& D_X\\Delta^{s-1}\\Delta - \\Delta^sD_X\\\\\n&=& \\Delta^{s-1}D_X\\Delta + [D_X,\\Delta^{s-1}]\\Delta - \\Delta^sD_X\\\\\n&=& \\Delta^sD_X +\\Delta^{s-1}[D_X,\\Delta] + [D_X,\\Delta^{s-1}]\\Delta\n-\\Delta^sD_X\\\\\n&=& \\Delta^{s-1}(-\\nabla_{\\dot\\gamma}(R(X,\\dot\\gamma)Y) -R(X,\\dot\\gamma)\\nabla_{\\dot\\gamma}Y)\\\\\n&&\\qquad - \n\\sum_{j=0}^{2s-3}(-1)^{s-1}\n\\nabla^j_{\\dot\\gamma}(R(X,\\dot\\gamma)\\nabla_{\\dot\\gamma}^{2k-j-1}(-\\nabla^2_{\\dot\\gamma}Y)\\\\\n&=& (-1)^{s-1}(-\\nabla_{\\dot\\gamma}^{2s-1}(R(X,\\dot\\gamma)Y) - (-1)^{s-1}\\nabla_{\\dot\\gamma}^{2s-2}(R(X,\\dot\\gamma)\\nabla_{\\dot\\gamma}Y)\\\\\n&&\\qquad + \\sum_{j=0}^{2s-3}(-1)^{s}\n\\nabla^j_{\\dot\\gamma}(R(X,\\dot\\gamma)\\nabla_{\\dot\\gamma}^{2k-j-1}(-\\nabla^2_{\\dot\\gamma}Y)\\\\\n&=& \\sum_{j=0}^{2s-1}(-1)^s \\nabla_{\\dot\\gamma}^j(R(X,\\dot\\gamma)\\nabla_{\\dot\\gamma}^{2k-1-j}Y).\n\\end{eqnarray*} \n\\end{proof}\n\nWe check that $A_XY$ is a $\\Psi{\\rm DO}$ in $X$ and $Y$ for $s\\in {\\mathbb Z}^+.$\n\n\\begin{lem} \\label{insert3} For $s\\in{\\mathbb Z}^+$ and fixed $X, Y\\in H^{s'-1}$, $A_XY$ in (\\ref{axy})\n is an explicit $\\Psi{\\rm DO}$ in $X$ and $Y$ of order at most $-1.$\n\n\\end{lem}\n\n\\begin{proof} By (\\ref{bracket}), for $j, 2k-1-j \\in \\{0,1,...,2s-1\\}$, a typical term on\nthe left hand side of (\\ref{axy}) is\n\\begin{eqnarray*} \\ipo{\\nabla^j_{\\dot\\gamma}(R(Z,\\dot\\gamma)\\nabla_{\\dot\\gamma}^{2k-1-j}X)}{Y} &=& \n(-1)^j\n \\ipo{R(Z,\\dot\\gamma)\\nabla_{\\dot\\gamma}^{2k-1-j}X}{\\nabla^j_{\\dot\\gamma} Y}\\\\\n&=& (-1)^j\\int_{S^1} g_{i\\ell} (R(Z,\\dot\\gamma)\\nabla_{\\dot\\gamma}^{2k-1-j}X)^i(\\nabla^j_{\\dot\\gamma} Y)^\\ell d\\theta\\\\\n&=& (-1)^j\\int_{S^1} g_{i\\ell} Z^k R_{krn}^{\\ \\ \\ i}\\dot\\gamma^r (\\nabla_{\\dot\\gamma}^{2k-1-j}X)^n (\\nabla_{\\dot\\gamma}^jY)^\\ell d\\theta\\\\\n&=& (-1)^j\n \\int_{S^1} g_{tm}g^{kt} 
g_{i\\ell} Z^m R_{krn}^{\\ \\ \\ i}\\dot\\gamma^r (\\nabla_{\\dot\\gamma}^{2k-1-j}X)^n (\\nabla_{\\dot\\gamma}^jY)^\\ell d\\theta\\\\\n&=& (-1)^j \\ipo{Z}\n{g^{kt} g_{i\\ell} R_{krn}^{\\ \\ \\ i}\\dot\\gamma^r (\\nabla_{\\dot\\gamma}^{2k-1-j}X)^n (\\nabla_{\\dot\\gamma}^jY)^\\ell\\partial_t}\\\\ \n&=& (-1)^j\\ipo{Z}{R^t_{\\ rn\\ell} \\dot\\gamma^r (\\nabla_{\\dot\\gamma}^{2k-1-j}X)^n (\\nabla_{\\dot\\gamma}^jY)^\\ell\\partial_t}\\\\ \n&=&(-1)^{j+1} \\ipo{Z}{R^{\\ \\ \\ t}_{n\\ell r} \\dot\\gamma^r (\\nabla_{\\dot\\gamma}^{2k-1-j}X)^n (\\nabla_{\\dot\\gamma}^jY)^\\ell\\partial_t}\\\\\n&=& (-1)^{j+1}\\ipo{Z}{R(\\nabla_{\\dot\\gamma}^{2k-1-j}X,\\nabla_{\\dot\\gamma}^jY)\\dot\\gamma}\\\\\n&=& (-1)^{j+1} \\ips{Z}{(1+\\Delta)^{-s} R(\\nabla_{\\dot\\gamma}^{2k-1-j}X,\\nabla_{\\dot\\gamma}^jY)\\dot\\gamma}.\n \\end{eqnarray*}\n (In the integrals and inner products, the local expressions are in fact globally defined one-forms on $S^1$, resp.~vector fields along $\\gamma$, so we do not need a partition of unity.)\n$(1+\\Delta)^{-s} R(\\nabla_{\\dot\\gamma}^{2k-1-j}X,\\nabla_{\\dot\\gamma}^jY)\\dot\\gamma$ is of order at most $-1$ in either $X$ or $Y$, so this term is in \n$H^{s'}\\subset H^s.$ Thus the last inner product is well defined. \n\\end{proof}\n\nBy (\\ref{axy}), (\\ref{bracket}) and the proof of Lemma \\ref{insert3}, we get\n$$A_XY = \\sum_{k=1}^s(-1)^k\\left(\\begin{array}{c}s\\\\k\\end{array}\\right)\n\\sum_{j=0}^{2k-1} (-1)^{j+1} (1+\\Delta)^{-s} R(\\nabla_{\\dot\\gamma}^{2k-1-j}X,\\nabla_{\\dot\\gamma}^jY)\\dot\\gamma.$$\nThis gives:\n\n\\begin{thm} \\label{thm:sinz}\nFor $s\\in{\\mathbb Z}^+$, the Levi-Civita connection for the $H^s$ metric at the\nloop $\\gamma$ is given by\n\\begin{eqnarray*} \\nabla_X^sY(\\gamma) &=& D_XY(\\gamma) + \\frac{1}{2}(1+\\Delta)^{-s}\n \\sum_{k=1}^s(-1)^k\\left(\\begin{array}{c}s\\\\k\\end{array}\\right)\n\\sum_{j=0}^{2k-1} \\nabla_{\\dot\\gamma}^j(R(X,\\dot\\gamma)\\nabla_{\\dot\\gamma}^{2k-1-j}Y)\\\\\n&&\\qquad + (X\\leftrightarrow Y)\\\\\n &&\\qquad \n + \\sum_{k=1}^s(-1)^k\\left(\\begin{array}{c}s\\\\k\\end{array}\\right)\n\\sum_{j=0}^{2k-1} (-1)^{j+1} (1+\\Delta)^{-s} R(\\nabla_{\\dot\\gamma}^{2k-1-j}X,\\nabla_{\\dot\\gamma}^jY)\\dot\\gamma.\n\\end{eqnarray*}\n\\end{thm}\n\n\\subsection{The Levi-Civita Connection for General $s>\\frac{1}{2}$}\n\n${}$\n\\medskip\n\nIn this subsection, we show that the $H^s$ Levi-Civita connection for general $s>\\frac{1}{2}$ exists in the strong sense of Remark \\ref{lcrem}.\nThe formula is less explicit than in the \n$s\\in {\\mathbb Z}^+$ case, but is good enough for symbol calculations.\n\n\n\n\nBy Lemma \\ref{lem: LCs}, we have to examine the term $A_XY$, which, if it exists, is\n characterized by (\\ref{axy}):\n$$-\\frac{1}{2}\\ipo{[D_Z,(1+\\Delta)^s]X}{Y} = \\ips{A_XY}{Z}$$\nfor $Z\\in H^s$. As explained in Remark \\ref{lcrem}, we may take\n$X, Y\\in H^{s'-1}.$\nThroughout this section we assume that $s'\\gg s$.\n\nThe following lemma extends Lemma \\ref{bracketterms}.\n\\begin{lem}\\label{pdo}\n (i) For fixed $Z\\in H^{s'-1}$, $[D_Z,(1+\\Delta)^s] X$ is a $\\Psi$DO of\n order $2s-1$ in $X$. 
For ${\\rm Re}(s)\\neq 0$, the principal symbol of $[D_Z,(1+\\Delta)^s]$ is \n linear in $s$.\n \n (ii) For fixed $X\\in H^{s'-1}$, $[D_Z,(1+\\Delta)^s]X$ is a $\\Psi{\\rm DO}$ \n of order $2s-1$ in $Z$.\n\\end{lem}\n\nAs usual, ``of order $2s-1$\" means ``of order at most $2s-1.$\" \n\n\n\n\\begin{proof}\n(i) For $f:LM\\to {\\mathbb C}$, we get $[D_Z,(1+\\Delta)^s]fX = f[D_Z,(1+\\Delta)^s]X$, since $[f,(1+\\Delta)^s]=0.$ \nTherefore, $[D_Z,(1+\\Delta)^s]X$ depends only on $X|_\\gamma.$\n\nBy Lemma \\ref{lem:l2lc}, $D_Z = \\delta_Z + \\Gamma \\cdot Z$ in shorthand exponential \ncoordinates. The Christoffel symbol term is zeroth order and $(1+\\Delta)^s$ has scalar leading order symbol, so $[\\Gamma\\cdot Z,(1+\\Delta)^s]$ has order $2s-1.$ \n\nFrom the integral expression for \n$(1+\\Delta)^s$, it is immediate that \n\\begin{eqnarray}\\label{immediate}\n[\\delta_Z,(1+\\Delta)^s]X &=& (\\delta_Z(1+\\Delta)^s) X + (1+\\Delta)^s\\delta_Z X - (1+\\Delta)^s\\delta_ZX\\\\\n&=& (\\delta_Z(1+\\Delta)^s) X.\\nonumber\n\\end{eqnarray}\n$\\delta_Z(1+\\Delta)^s$ is a limit of differences of $\\Psi{\\rm DO}$s on bundles isomorphic to $\\gamma^*TM$.\nSince the algebra of $\\Psi{\\rm DO}$s is closed in the Fr\\'echet topology\nof all $C^k$ seminorms\nof symbols and smoothing terms\non compact sets, $\\delta_Z(1+\\Delta)^s$ is a $\\Psi{\\rm DO}.$\n\nSince $(1+\\Delta)^s$ has order $2s$ and has scalar leading order symbol, \n$[D_Z,(1+\\Delta)^s]$ have order $2s-1$. For later purposes (\\S3.2), we compute some explicit symbols. \n\nAssume Re$(s)<0.$ As in the construction of $(1+\\Delta)^s$,\n we will compute what the symbol asymptotics\nof $\\delta_Z(1+\\Delta)^s$ should\nbe, and then construct an operator with these asymptotics.\nFrom the functional calculus for unbounded operators, we have\n\\begin{eqnarray}\\label{funcalc}\n\\delta_Z(1+\\Delta)^s &=& \\delta_Z\\left(\\frac{i}{2\\pi}\\int_\\Gamma\n\\lambda^s(1+\\Delta-\\lambda)^{-1}d\\lambda\\right)\\nonumber\\\\\n&=& \\frac{i}{2\\pi}\\int_\\Gamma\n\\lambda^s\\delta_Z (1+\\Delta-\\lambda)^{-1}d\\lambda\\\\\n&=& -\\frac{i}{2\\pi}\\int_\\Gamma\n\\lambda^s (1+\\Delta-\\lambda)^{-1} (\\delta_Z\\Delta)\n(1+\\Delta-\\lambda)^{-1}d\\lambda,\\nonumber\n\\end{eqnarray}\nwhere $\\Gamma$ is a contour around the spectrum of $1+\\Delta$, and the\nhypothesis on $s$ justifies the exchange of $\\delta_Z$ and the integral. 
The\noperator $A =\n(1+\\Delta-\\lambda)^{-1} \\delta_Z\\Delta (1+\\Delta-\\lambda)^{-1}$ is a $\\Psi{\\rm DO}$\nof order $-3$\n with top order symbol\n\\begin{eqnarray*} \\sigma_{-3}(A)(\\theta,\\xi)^\\ell_j &=&\n(\\xi^2-\\lambda)^{-1}\\delta^\\ell_k (-2Z^i\\partial_i{\\operatorname{ch}}{\\nu}{\\mu}{k}\\dot\\gamma^\\nu\n -2{\\operatorname{ch}}{\\nu}{\\mu}{k}\\dot Z^\\nu) \\xi (\\xi^2-\\lambda)^{-1}\\delta^\\mu_j\\\\\n&=&\n(-2Z^i\\partial_i{\\operatorname{ch}}{\\nu}{j}{\\ell}\\dot\\gamma^\\nu\n -2{\\operatorname{ch}}{\\nu}{j}{\\ell}\\dot Z^\\nu)\n\\xi (\\xi^2-\\lambda)^{-2}.\n\\end{eqnarray*}\nThus the top order symbol of $\\delta_Z(1+\\Delta)^s$ should be\n\\begin{eqnarray}\\label{tsmo}\n \\sigma_{2s-1}(\\delta_Z(1+\\Delta)^s)(\\theta,\\xi)^\\ell_j\n&=& -\\frac{i}{2\\pi}\\int_\\Gamma\n\\lambda^s (-2Z^i\\partial_i{\\operatorname{ch}}{\\nu}{j}{\\ell}\\dot\\gamma^\\nu\n -2{\\operatorname{ch}}{\\nu}{j}{\\ell}\\dot Z^\\nu)\n\\xi (\\xi^2-\\lambda)^{-2} d\\lambda \\nonumber\\\\\n&=& \\frac{i}{2\\pi}\\int_\\Gamma s\\lambda^{s-1}\n(-2Z^i\\partial_i{\\operatorname{ch}}{\\nu}{j}{\\ell}\\dot\\gamma^\\nu\n -2{\\operatorname{ch}}{\\nu}{j}{\\ell}\\dot Z^\\nu)\n\\xi (\\xi^2-\\lambda)^{-1} d\\lambda \\nonumber\\\\\n&=& s(-2Z^i\\partial_i{\\operatorname{ch}}{\\nu}{j}{\\ell}\\dot\\gamma^\\nu\n -2{\\operatorname{ch}}{\\nu}{j}{\\ell}\\dot Z^\\nu)\\xi (\\xi^2)^{s-1}.\n\\end{eqnarray}\nSimilarly, all the terms in the symbol asymptotics for $A$ are of the form\n$B^\\ell_j \\xi^n(\\xi^2-\\lambda)^m$ for some matrices $B^\\ell_j =\nB^\\ell_j(n,m).$ This produces a symbol sequence $\n\\sum_{k\\in {\\mathbb Z}^+}\\sigma_{2s-k}$, and there exists a $\\Psi{\\rm DO}$ $P$ with $\\sigma(P) =\n\\sum \\sigma_{2s-k}$. (As in \\S\\ref{pdoreview}, we \nfirst produce operators $P_i$ on a coordinate cover $U_i$ of $S^1$, \nand then set\n$P = \\sum_i\\phi_iP_i\\psi_i$.) The construction\ndepends on\nthe choice of local coordinates\ncovering $\\gamma$, the partition of unity and cutoff\nfunctions as above, and a cutoff function in $\\xi$; as\nusual, different choices change the operator by a smoothing operator.\nStandard estimates \nshow that $P-\\delta_Z(1+\\Delta)^s$ is a smoothing\noperator; this verifies explicitly that $\\delta_Z(1+\\Delta)^s$ is a $\\Psi{\\rm DO}$ of order $2s-1.$\n\n\nFor Re$(s) >0$, motivated by differentiating $(1+\\Delta)^{-s}\\circ(1+\\Delta)^s = {\\rm\n Id}$, we set\n\\begin{equation}\\label{abc}\n\\delta_Z(1+\\Delta)^s = -(1+\\Delta)^s\\circ\\delta_Z(1+\\Delta)^{-s}\\circ(1+\\Delta)^s.\n\\end{equation}\nThis is again a $\\Psi{\\rm DO}$ of order $2s-1$ with principal symbol linear in $s$. 
\n\n (ii) As a $\\Psi{\\rm DO}$ of order $2s$, $(1+\\Delta)^s$ has the expression\n $$(1+\\Delta)^s X(\\gamma)(\\theta) = \\int_{T^*S^1}\n e^{i(\\theta-\\theta')\\cdot \\xi}\n p(\\theta,\\xi) X(\\gamma)(\\theta')d\\theta' d\\xi,$$\n where we omit the cover of $S^1$ and its partition of unity on the right hand side.\n Here $p(\\theta,\\xi)$ is the symbol of $(1+\\Delta)^s$, which has the asymptotic expansion\n $$\n p(\\theta,\\xi) \\sim \\sum_{k=0}^\\infty p_{2s-k }(\\theta,\\xi).$$\n The covariant derivative along $\\gamma$ on\n$Y\\in\\Gamma(\\gamma^*TM)$ is given by\n\\begin{eqnarray*}\\frac{DY}{d\\gamma} &=&\n(\\gamma^*\\nabla^{M})_{\\partial_\\theta}(Y) =\n\\partial_\\theta Y + (\\gamma^*\\omega^{M})(\\partial_\\theta)(Y)\\\\\n&=& \\partial_\\theta(Y^i)\\partial_i + \\dot\\gamma^t Y^r\n\\Gamma^j_{tr}\\partial_j,\n\\end{eqnarray*}\nwhere $\\nabla^{M}$ is the Levi-Civita connection on $M$ and $\\omega^{M}$ is the\nconnection one-form in exponential coordinates on $M$. \nFor $\\Delta =\n(\\frac{D}{d\\gamma})^* \\frac{D}{d\\gamma}$, an integration by parts using the\nformula\n$\\partial_tg_{\\ell r} = \\Gamma_{\\ell t}^ng_{rn} + \\Gamma_{rt}^ng_{\\ell n}$ gives\n$$(\\Delta Y)^k = -\\partial^2_\\theta Y^k\n-2\\Gamma_{\\nu\\mu}^k\\dot\\gamma^\\nu\\partial_\\theta Y^\\mu -\\left\n( \\partial_\\theta\\Gamma_{\\nu\\delta}^k\\dot\\gamma^\\nu\n+\\Gamma_{\\nu\\delta}^k\\ddot\\gamma^\\nu +\n\\Gamma_{\\nu\\mu}^k\\Gamma_{\\varepsilon\\delta}^\\mu\\dot\\gamma^\\varepsilon\\dot\\gamma^\\nu\\right)\nY^\\delta.$$\nThus $p_{2s}(\\theta, \\xi) = |\\xi|^{2s}$ is independent of $\\gamma$, but the lower order symbols depend on \n derivatives of both $\\gamma$ and the metric on $M$. \n \n We have\n \\begin{eqnarray} [D_Z,(1+\\Delta)^s]X(\\gamma)(\\theta) &=&\n D_Z \\int_{T^*S^1}\n e^{i(\\theta-\\theta')\\cdot \\xi}\n p(\\theta,\\xi) X(\\gamma)(\\theta')d\\theta' d\\xi\\label{216}\\\\\n &&\\quad - \\int_{T^*S^1}\n e^{i(\\theta-\\theta')\\cdot \\xi}\n p(\\theta,\\xi) D_ZX(\\gamma)(\\theta')d\\theta' d\\xi. \\label{217}\n \\end{eqnarray}\n In local coordinates, (\\ref{216}) equals\n \\begin{eqnarray} \\label{218}\n \\lefteqn{\n\\left[ D_Z \\int_{T^*S^1}\n e^{i(\\theta-\\theta')\\cdot \\xi}\n p(\\theta,\\xi) X(\\gamma)(\\theta')d\\theta' d\\xi\\right]^a}\\nonumber\\\\\n &=& \\delta_Z\\left[\n \\int_{T^*S^1}\n e^{i(\\theta-\\theta')\\cdot \\xi}\n p(\\theta,\\xi) X(\\gamma)(\\theta')d\\theta' d\\xi\\right]^a(\\theta)\\\\\n &&\\quad + \\Gamma^a_{bc} Z^b(\\gamma)(\\theta) \n \\left[ \\int_{T^*S^1}\n e^{i(\\theta-\\theta')\\cdot \\xi}\n p(\\theta,\\xi) X(\\gamma)(\\theta')d\\theta' d\\xi\\right]^c(\\theta).\\nonumber\n \\end{eqnarray}\n Here we have suppressed matrix indices in $p$ and $X$.\nWe can bring $\\delta_Z$ past the integral on the right hand side of (\\ref{218}). 
If\n$\\gamma_\\epsilon$ is a family of curves with $\\gamma_0 = \\gamma, \\dot\\gamma_\\epsilon = Z$, then\n$$\\delta_Zp(\\theta, \\xi) = \\frac{d}{d\\epsilon}\\biggl|_{_{_{\\epsilon=0}}} p(\\gamma_\\epsilon,\n\\theta,\\xi) = \\frac{d\\gamma_\\epsilon^k}{d\\epsilon}\\biggl|_{_{_{\\epsilon=0}}}\n\\partial_k p(\\gamma,\\theta, \\xi) = Z^k(\\gamma(\\theta))\\cdot \\partial_k p(\\theta,\\xi).$$ Substituting this into (\\ref{218}) gives\n\\begin{eqnarray}\\label{219}\n\\lefteqn{\n\\left[ D_Z \\int_{T^*S^1}\n e^{i(\\theta-\\theta')\\cdot \\xi}\n p(\\theta,\\xi) X(\\gamma)(\\theta')d\\theta' d\\xi\\right]^a}\\nonumber\\\\\n&=& \n\\lefteqn{\n\\left[ \\int_{T^*S^1}\n e^{i(\\theta-\\theta')\\cdot \\xi}\n Z^k(\\gamma)(\\theta)\\cdot \\partial_k p(\\theta,\\xi) X(\\gamma)(\\theta')d\\theta' d\\xi\\right]^a}\\\\\n &&\\quad \n+ \\Gamma^a_{bc} Z^b(\\gamma)(\\theta) \n \\left[ \\int_{T^*S^1}\n e^{i(\\theta-\\theta')\\cdot \\xi}\n p(\\theta,\\xi) X(\\gamma)(\\theta')d\\theta' d\\xi\\right]^c(\\theta).\\nonumber\\\\\n&&\\qquad + \n \\left[ \\int_{T^*S^1}\n e^{i(\\theta-\\theta')\\cdot \\xi}\n p(\\theta,\\xi) \\delta_Z X(\\gamma)(\\theta')d\\theta' d\\xi\\right]^c(\\theta).\\nonumber\n\\end{eqnarray}\nSimilarly, (\\ref{217}) equals\n\\begin{eqnarray}\\label{220}\n\\lefteqn{\\left[\\int_{T^*S^1}\n e^{i(\\theta-\\theta')\\cdot \\xi}\n p(\\theta,\\xi) D_ZX(\\gamma)(\\theta')d\\theta' d\\xi\\right]^a}\\nonumber\\\\\n &=& \\left[\\int_{T^*S^1}\n e^{i(\\theta-\\theta')\\cdot \\xi}\n p(\\theta,\\xi) \\delta_ZX(\\gamma)(\\theta')d\\theta' d\\xi\\right]^a\\\\\n &&\\quad + \n \\int_{T^*S^1}\n e^{i(\\theta-\\theta')\\cdot \\xi}\n p(\\theta,\\xi)^a_e\\Gamma^e_{bc}Z^b(\\gamma)(\\theta') X^c(\\gamma)(\\theta')d\\theta' d\\xi.\n \\nonumber\n \\end{eqnarray}\n Substituting (\\ref{219}), (\\ref{220}), into (\\ref{216}), (\\ref{217}), respectively, gives\n \\begin{eqnarray}\\label{221}\n\\lefteqn{( [D_Z,(1+\\Delta)^s]X(\\theta))^a}\\\\\n&=&\nZ^b(\\theta)\\cdot\\left[\\int_{T^*S^1} e^{i(\\theta-\\theta')\\cdot\\xi}\\left(\\partial_bp^a_e(\\theta,\\xi) +\n\\Gamma_{bc}^a(\\gamma(\\theta)p_e^c(\\theta, \\xi)\\right)X^e(\\theta')d\\theta'd\\xi\\right]\\nonumber\\\\\n&&\\quad - \\int_{T^*S^1} e^{i(\\theta-\\theta')\\cdot\\xi}p(\\theta,\\xi)^a_e\\Gamma_{bc}^e\n(\\gamma(\\theta'))Z^b(\\theta') X^c(\\theta')d\\theta' d\\xi,\\nonumber\n \\end{eqnarray}\n where $X(\\theta') = X(\\gamma)(\\theta)$ and similarly for Z.\n \n The first term on the right hand side of (\\ref{221}) is order zero in $Z$; note that\n $0<2s-1$, since $s>\\frac{1}{2}$. For the last term in (\\ref{221}), we do a change of variables typically used in the proof that the composition of $\\Psi{\\rm DO}$s is a $\\Psi{\\rm DO}.$ Set\n \\begin{equation}\\label{221a}q(\\theta, \\theta', \\xi)^a_b = p(\\theta,\\xi)^a_e \\Gamma_{bc}^e(\\gamma(\\theta'))X^c\n (\\theta'),\n \\end{equation}\n so the last term equals\n \\begin{eqnarray*}\n(PZ)^a(\\theta) &\\stackrel{\\rm def}{=}& \\int_{T^*S^1} e^{i(\\theta-\\theta')\\cdot\\xi}q(\\theta, \\theta', \\xi)^a_b Z^b(\\theta') d\\theta' d\\xi\\\\\n &=& \\int_{T^*S^1} e^{i(\\theta-\\theta')\\cdot\\xi}q(\\theta, \\theta', \\xi)^a_b e^{i(\\theta'-\\theta'')\n \\cdot\\eta} Z^b(\\theta'') d\\theta'' d\\eta \\ d\\theta' d\\xi,\n \\end{eqnarray*}\n by applying Fourier transform and its inverse to $Z$. 
A little algebra gives\n \\begin{equation}\\label{222}\n (PZ)^a(\\theta) = \\int_{T^*S^1} e^{i(\\theta-\\theta')\\cdot\\eta}r(\\theta,\\eta)^a_b Z^b(\\theta')\n d\\theta' d\\eta,\n \\end{equation}\n with \n \\begin{eqnarray*}r(\\theta, \\eta) &=& \\int_{T^*S^1} e^{i(\\theta-\\theta')\\cdot(\\xi-\\eta)}\n q(\\theta,\\theta', \\xi) d\\theta' d\\xi\\\\\n &=& \\int_{T^*S^1} e^{it\\cdot\\xi} q(\\theta,\\theta - t, \\eta + \\xi) dt\\ d\\xi.\n \\end{eqnarray*} \n In the last line we continue to abuse notation by treating the integral in local coordinates in \n $t = \\theta-\\theta'$ lying in an interval $I\\subset {\\mathbb R}$ and implicitly\n summing over a cover and partition of unity of $S^1;$ thus we can consider $q$ as a compactly supported function in $t\\in{\\mathbb R}.$\n Substituting in the Taylor expansion of $q(\\theta,\\theta - t, \\eta + \\xi)$ in $\\xi$ gives in local coordinates\n \\begin{eqnarray}\\label{223a}\n r(\\theta, \\eta) &=& \\int_{T^*{\\mathbb R}} e^{it\\cdot \\xi} \\left[ \\sum_{\\alpha, |\\alpha|=0}^N\n \\frac{1}{\\alpha!} \\partial_\\xi^\\alpha|_{\\xi=0} q(\\theta,\\theta-t, \\eta+\\xi)\\xi^\\alpha + {\\rm O}\n (|\\xi|^{N+1})\\right] dt \\ d\\xi\\nonumber\\\\\n &=& \\sum_{\\alpha, |\\alpha|=0}^N \\frac{i^{|\\alpha|}}{\\alpha!} \\partial^\\alpha_t\\partial^\\alpha\n _\\xi q(\\theta, \\theta, \\eta) + {\\rm O} (|\\xi|^{N+1}).\n \\end{eqnarray}\n Thus $P$ in (\\ref{222}) is a $\\Psi{\\rm DO}$ with apparent top order symbol \n $q(\\theta, \\theta, \\eta)$, which by (\\ref{221a}) has order $2s.$ The top order symbol can be computed in any local coordinates on $S^1$ and $\\gamma^*TM$. If we choose\n manifold coordinates (see \\S2.3) which are \nRiemannian normal coordinates centered at $\\gamma(\\theta)$, the Christoffel symbols vanish at this point, and\n so\n $$q(\\theta, \\theta, \\eta)^a_b = p(\\theta,\\xi)^a_e\\Gamma_{bc}^e(\\gamma(\\theta)) X^c(\\theta)\n =0.$$\n Thus $P$ is in fact of order $2s-1$, and so both terms on the right hand side of (\\ref{221}) have order at most $2s-1$.\n\n\\end{proof}\n\n\n\\begin{rem} (i) For $s\\in{\\mathbb Z}^+$, $\\delta_Z(1+\\Delta)^s$ differs\nfrom the usual definition by a smoothing operator. 
\n\n(ii) For all $s$, the proof of Lemma \\ref{pdo}(i) shows that\n$\\sigma(\\delta_Z(1+\\Delta)^s) = \\delta_Z(\\sigma((1+\\Delta)^s)).$\n\n\\end{rem}\n\nWe can now complete the computation of the Levi-Civita connection for general $s.$\n\nLet $[D_\\cdot,(1+\\Delta)^s]X^*$ be the formal $L^2$ adjoint of $[D_\\cdot,(1+\\Delta)^s]X$.\nWe abbreviate $[D_\\cdot,(1+\\Delta)^s]X^*(Y)$ by $[D_Y,(1+\\Delta)^s]X^*.$\n\n\n\n\\begin{thm} \\label{thm25} (i) For $s>\\frac{1}{2}$, \nthe Levi-Civita connection for the $H^s$ metric is given by\n\\begin{eqnarray}\\label{quick}\\nabla_X^sY &=& D_XY + \\frac{1}{2}(1+\\Delta)^{-s}[D_X, (1+\\Delta)^s]Y +\n \\frac{1}{2}(1+\\Delta)^{-s}[D_Y, (1+\\Delta)^s]X \\nonumber\\\\\n&&\\quad -\\frac{1}{2} (1+\\Delta)^{-s}[D_Y,(1+\\Delta)^s]X^*.\n\\end{eqnarray}\n\n(ii) The connection one-form $\\omega^s$ in exponential coordinates is given by\n\\begin{eqnarray}\\label{223}\\lefteqn{\\omega^s_X(Y)(\\gamma) (\\theta)}\\\\\n&=& \\omega^M_X(Y)(\\gamma(\\theta)) + \n\\left(\\frac{1}{2}(1+\\Delta)^{-s}[D_X, (1+\\Delta)^s]Y +\n \\frac{1}{2}(1+\\Delta)^{-s}[D_Y, (1+\\Delta)^s]X \\right.\\nonumber\\\\\n &&\\quad \\left.\n-\\frac{1}{2} (1+\\Delta)^{-s}[D_Y,(1+\\Delta)^s]X^*\\right)(\\gamma)(\\theta).\\nonumber\n\\end{eqnarray}\n\n(iii) The connection one-form takes values in zeroth order $\\Psi{\\rm DO}$s.\n\\end{thm}\n\n\\begin{proof} Since $[D_Z,(1+\\Delta)^s]X$ is a $\\Psi{\\rm DO}$ in $Z$ of order $2s-1$, its formal adjoint is\na $\\Psi{\\rm DO}$ of the same order. Thus\n$$\\langle [D_Z,(1+\\Delta)^s]X,Y\\rangle_0 = \\langle Z, [D_\\cdot, (1+\\Delta)^s]X^*(Y)\\rangle_0\n= \\langle Z, (1+\\Delta)^{-s}[D_Y,(1+\\Delta)^s]X^*\\rangle_s.$$\nThus $A_XY$ in (\\ref{axy}) satisfies\n$A_XY = -\\frac{1}{2}(1+\\Delta)^{-s}[D_Y,(1+\\Delta)^s]X^*.$ Lemma \\ref{lem: LCs} applies to all $s>\\frac{1}{2}$, \nso (i) follows. (ii) follows as \nin Corollary \\ref{cor2}. Since $\\omega^M$ is zeroth order and all \nother terms have order $-1$, (iii) holds as well.\n\\end{proof}\n\n\\begin{rem} This theorem implies that the Levi-Civita connection exists for the \n$H^s$ metric in the strong sense: for $X\\in T_\\gamma LM =H^{s'-1}(\\gamma^*TM)$\nand $Y\\in H^{s'-1}(\\cdot^*TM)$ a smooth vector field on $LM = H^{s'}(S^1,M)$,\n $\\nabla^s_XY(\\gamma)\\in H^{s'-1}(\\gamma^*TM).$ (See Remark\n2.1.) Indeed, each term except $D_XY$ on the right hand side of (\\ref{quick}) has order\n$-1$ in $Y$, and so takes $H^{s'-1}$ to $H^{s'}\\subset H^{s'-1}.$ For $D_XY = \\delta_XY + \\Gamma\\cdot Y$, $\\Gamma$ is zeroth order and so bounded on $H^{s'-1}.$ Finally, \nthe definition of a smooth vector field on $LM$ implies that $\\delta_XY$ stays in $H^{s'-1}$\nfor all $X$.\n\\end{rem}\n\n\n\n\n\n\n\n\\subsection{{\\bf Extensions of the Frame Bundle of $LM$}}\\label{extframe}\n\nIn this subsection we discuss the choice of structure group for the\n$H^s$ Levi-Civita connections on $LM.$\n\nLet ${\\mathcal H}$ be the Hilbert space\n $H^{s_0}(\\gamma^*TM)$ for \na fixed $s_0$ and $\\gamma.$ \n Let $GL({\\mathcal H})$ be the group of bounded invertible linear\noperators on ${\\mathcal H}$; inverses of elements are bounded by the closed graph\ntheorem. 
$GL({\\mathcal H})$ has the subset\ntopology of the norm topology on ${\\mathcal B}({\\mathcal H})$, the bounded linear\noperators on ${\\mathcal H}$.\n$GL({\\mathcal H})$ is an infinite dimensional Banach Lie group, as a group which\nis an open subset of the infinite dimensional Hilbert manifold \n${\\mathcal B}({\\mathcal H})$\n\\cite[p.~59]{Omori}, and has Lie algebra \n${\\mathcal B}({\\mathcal H})$. Let $\\Psi{\\rm DO}_{\\leq 0}, \n\\Psi{\\rm DO}_0^*$ denote the algebra of classical\n$\\Psi{\\rm DO}$s of nonpositive order \nand the group of invertible zeroth order $\\Psi{\\rm DO}$s, respectively,\nwhere all $\\Psi{\\rm DO}$s act on ${\\mathcal H}.$ \nNote that $\\Psi{\\rm DO}_0^*\\subset GL({\\mathcal H}).$\n \n\\begin{rem} \nThe inclusions of $\\Psi{\\rm DO}_0^*, \\Psi{\\rm DO}_{\\leq 0}$ into $GL({\\mathcal H}), {\\mathcal\n B}({\\mathcal H})$ are trivially continuous in the subset topology.\nFor the Fr\\'echet topology on $\\Psi{\\rm DO}_{\\leq 0}$, \nthe inclusion is \ncontinuous as in \\cite{lrst}.\n\\end{rem}\n\nWe recall\nthe relationship between\n the connection one-form $\\theta$ on the frame bundle $FN$ of a\n manifold $N$\nand\nlocal expressions for the connection on $TN.$ For $U\\subset N$,\n let $\\chi:U\\to FN$ be a local section.\n A metric connection $\\nabla$ on $TN$ with local\nconnection one-form $\\omega$ determines a connection $\\theta_{FN}\\in\n \\Lambda^1(FN, {\\mathfrak o}(n))$ on $FN$\nby {\\it (i)} $\\theta_{FN}$ is the Maurer-Cartan one-form on each fiber,\nand {\\it (ii) }\n$\\theta_{FN}(Y_u)=\\omega (X_p),$ for $ Y_u=\\chi_*X_p$\n\\cite[Ch.~8, Vol.~II]{Spi}, or equivalently\n$\\chi^*\\theta_{FN} = \\omega.$\n\n\nThis applies to $N=LM.$\nThe frame bundle $FLM\\to LM$ is constructed\nas in the finite dimensional case. The\nfiber over $\\gamma$ is isomorphic to the gauge group ${\\mathcal G}$ of ${\\mathcal R}$\nand fibers are glued by the transition functions for\n$TLM$. Thus the frame bundle is\ntopologically a\n${\\mathcal G}$-bundle.\n\nHowever, by Theorem \\ref{thm25},\nthe Levi-Civita connection one-form $\\omega^s_X$\ntakes\nvalues in $\\Psi{\\rm DO}_{\\leq 0}$. \nThe curvature two-form $\\Omega^{s} = d_{LM}\\omega^{s} + \\omega^{s}\\wedge\n\\omega^s$ also takes values in $\\Psi{\\rm DO}_{\\leq 0}.$ (Here $d_{LM}\\omega^{s}(X,Y)$\nis defined by the Cartan formula for the exterior derivative.)\nThese\nforms should take values in the Lie algebra of the structure\ngroup. Thus we should extend the structure group to the Fr\\'echet Lie group\n $\\Psi{\\rm DO}_0^*$, since its Lie\nalgebra is $\\Psi{\\rm DO}_{\\leq 0}.$ \nThis leads to an extended\nframe bundles, also denoted $FLM$. The transition\n functions are unchanged, since \n${\\mathcal G} \\subset \\Psi{\\rm DO}_0^*$.\n Thus $(FLM,\\theta^s)$ as a geometric\nbundle (i.e.~as a bundle with connection $\\theta^s$ associated to\n$\\nabla^{1,s}$) is a $\\Psi{\\rm DO}_0^*$-bundle.\n\nIn summary, for the Levi-Civita connections we have\n$$ \\begin{array}{ccc}\n{\\mathcal G}&\\longrightarrow &FLM\\\\\n& & \\downarrow\\\\\n& & LM\n\\end{array}\n\\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \n\\begin{array}{ccc}\n\\Psi{\\rm DO}_0^*&\\longrightarrow &(FLM,\\theta^s)\\\\\n& & \\downarrow\\\\\n& & LM\n\\end{array}\n$$\n\n\n\\begin{rem}\\label{rem:ext} If\n we extend the structure group of the frame bundle with\n connection from $\\Psi{\\rm DO}_0^*$ to\n $GL({\\mathcal H})$, the frame bundle becomes trivial by Kuiper's theorem. 
\n \nThus\nthere is a potential loss of information if \nwe pass to the larger frame\n bundle. \n\n\nThe situation is similar to the following examples. Let $E\\to S^1$ be\nthe $GL(1,{\\mathbb R})$ (real line)\n bundle with gluing functions (multiplication by) $1$ at $1\\in\nS^1$ and $2$ at $-1\\in S^1.$ $E$ is trivial as a $GL(1,{\\mathbb R})$-bundle, \nwith global section $f$ with $\\lim_{\\theta\\to -\\pi^+}f(e^{i\\theta}) = 1, \nf(1) = 1,\n\\lim_{\\theta\\to\\pi^-}f(e^{i\\theta}) = 1\/2.$ \nHowever, as a $GL(1,{\\mathbb Q})^+$-bundle, $E$ is nontrivial, as a\nglobal section is locally constant. As a second example,\n let $E\\to M$ be a nontrivial\n$GL(n,{\\mathbb C})$-bundle. Embed ${\\mathbb C}^n$ into a Hilbert space ${\\mathcal H}$, and extend $E$\nto an $GL({\\mathcal H})$-bundle ${\\mathcal E}$ \nwith fiber ${\\mathcal H}$ and with the transition functions for $E$ (extended by the identity in\ndirections perpendicular to the image of $E$). Then ${\\mathcal E}$ is\ntrivial.\n\n\\end{rem}\n\n\n\n\\section{{\\bf Local Symbol Calculations}}\\label{localsymbols}\n\nIn this section, we compute the $0$ and $-1$ order symbols of the\nconnection one-form and the curvature two-form of\n the $s=1$ Levi-Civita connection. \n We also compute the $0$ and $-1$ order symbols of the\nconnection one-form for the general $s>\\frac{1}{2}$ connection, and the $0$ order symbol of the \ncurvature of the general $s$ connection.\n These results are used in the calculations of Wodzicki-Chern-Simons\nclasses in \\S6. The formulas show that\nthe $s$-dependence of these symbols is\nlinear, which will be used to define regularized Wodzicki-Chern-Simons classes\n(see Definition \\ref{def:regularized}).\n\n\\subsection{{\\bf Connection and Curvature Symbols for $s=1$}}\n${}$\n\\medskip\n\nIn this subsection $\\omega = \\omega^1, \\Omega = \\Omega^1.$\n\nUsing Corollary \\ref{cor2}, we can compute these symbols easily. \n\n\\begin{lem} \\label{old2.1}\n(i) At $\\gamma(\\theta)$,\n$\\sigma_0(\\omega_X)^a_b = (\\omega^M_X)^a_b = \\chw{c}{b}{a}X^c.$\n\n(ii) \\begin{eqnarray*}\n\\frac{1}{i|\\xi|^{-2}\\xi}\\sigma_{-1}(\\omega_X) &=& \\frac{1}{2}(-2R(X,\\dot\\gamma)\n-R(\\cdot,\\dot\\gamma)X + R(X,\\cdot)\\dot\\gamma).\n\\end{eqnarray*}\nEquivalently,\n\\begin{eqnarray*}\n\\frac{1}{i|\\xi|^{-2}\\xi}\\sigma_{-1}(\\omega_X)^a_b &=& \n\\frac{1}{2}\n(-2R_{cdb}^{\\ \\ \\ a} -R_{bdc}^{\\ \\ \\ a} + R_{cbd}^{\\ \\ \\ a})X^c\\dot\\gamma^d.\n\\end{eqnarray*}\n\\end{lem}\n\n\\begin{proof} (i) For $\\sigma_0(\\omega_X)$, the only term in (\\ref{two}) of order\n zero is the Christoffel term. \n\n(ii) For $\\sigma_{-1}(\\omega_X)$, label the last six terms on the right hand side of\n(\\ref{two}) by (a), ..., (f). 
By Leibniz rule for the tensors, the only\nterms of order $-1$ come from:\nin (a), $-\\nabla_{\\dot\\gamma}(R(X,\\dot\\gamma)Y) = -R(X, \\dot\\gamma) \\nabla_{\\dot\\gamma}Y +$ lower order in\n$Y$;\nin (b), the term $-R(X, \\dot\\gamma) \\nabla_{\\dot\\gamma}Y$;\nin (c), the term $-R(\\nabla_{\\dot\\gamma}Y, \\dot\\gamma)X$;\nin (e), the term $R(X,\\nabla_{\\dot\\gamma}Y)\\dot\\gamma.$\n\n\nFor any vectors $Z, W$, the curvature endomorphism $R(Z, W): TM\\to TM$ has\n$$R(Z,W)^a_b = R_{c d b}^{\\ \\ \\ a}Z^cW^d.$$\n Also, since $(\\nabla_{\\dot\\gamma}Y)^a =\n\\frac{d}{d\\theta}Y^a $ plus zeroth order terms,\n$\\sigma_{1}(\\nabla_{\\dot\\gamma}) =\ni\\xi\\cdot Id.$\nThus in (a) and (b), \n$\\sigma_1(-R(X, \\dot\\gamma) \\nabla_{\\dot\\gamma})^a_b = -R_{cdb}^{\\ \\ \\ a}X^c\\dot\\gamma^d\\xi.$\n\nFor (c), we have $-R(\\nabla_{\\dot\\gamma}Y, \\dot\\gamma)X = -R_{cdb}^{\\ \\ \\\n a}(\\nabla_{\\dot\\gamma}Y)^c\\dot\\gamma^d X^b\\partial_a$, so the top order symbol is\n$-R_{cdb}^{\\ \\ \\ a}\\xi\\dot\\gamma^dX^b = -R_{bdc}^{\\ \\ \\ a}\\xi\\dot\\gamma^d X^c.$\n\nFor (e), we have $R(X,\\nabla_{\\dot\\gamma}Y)\\dot\\gamma = R_{cdb}^{\\ \\ \\ a}X^c(\\nabla_{\\dot\\gamma}Y)^d\n\\dot\\gamma^b\\partial_a$, so the top order symbol is \n$R_{cdb}^{\\ \\ \\ a}X^c\\xi \\dot\\gamma^b = R_{cbd}^{\\ \\ \\ a}X^c\\xi \\dot\\gamma^d.$\n\nSince the top order symbol of $(1+\\Delta)^{-1}$ is $|\\xi|^{-2}$, adding these four terms\nfinishes the proof. \n\\end{proof}\n \n We now compute the top symbols of the curvature tensor. $\\sigma_{-1}(\\Omega)$ involves\n the covariant derivative of the curvature tensor on $M$, but fortunately this symbol\n will not be needed in Part II.\n \n\n\\begin{lem}\\label{old2.2}\n(i) \n$\\sigma_0(\\Omega(X,Y))^a_b = R^M(X,Y)^a_b = R_{cdb}^{\\ \\ \\ a}X^cY^d.$\n\n(ii) \\begin{eqnarray*}\n\\frac{1}{i|\\xi|^{-2}\\xi}\\sigma_{-1}(\\Omega(X,Y)) &=&\n\\frac{1}{2}\\left(\\nabla_X[-2R(Y,\\dot\\gamma) - R(\\cdot,\\dot\\gamma)Y +\n R(Y,\\cdot)\\dot\\gamma]\\right.\\\\\n&&\\qquad \\left. - (X\\leftrightarrow Y) \\right.\\\\\n&&\\qquad\\left. 
- [-2R([X,Y],\\dot\\gamma) -R(\\cdot,\\dot\\gamma)[X,Y] + R([X,Y],\\cdot)\\dot\\gamma] \\right).\n\\end{eqnarray*}\nEquivalently, in Riemannian normal coordinates on $M$ centered at $\\gamma(\\theta)$,\n\\begin{eqnarray}\\label{moc}\n\\frac{1}{i|\\xi|^{-2}\\xi}\\sigma_{-1}(\\Omega(X,Y))^a_b &=& \\frac{1}{2}\nX[(-2R_{cdb}^{\\ \\ \\ a} -R_{bdc}^{\\ \\ \\ a} + R_{cbd}^{\\ \\ \\\n a})\\dot\\gamma^d]Y^c - (X\\leftrightarrow Y)\\nonumber\\\\\n&=&\\frac{1}{2}\nX[-2R_{cdb}^{\\ \\ \\ a} -R_{bdc}^{\\ \\ \\ a} + R_{cbd}^{\\ \\ \\\n a}]\\dot\\gamma^dY^c -(X\\leftrightarrow Y)\\\\\n&&\\qquad +\n\\frac{1}{2}[-2R_{cdb}^{\\ \\ \\ a} -R_{bdc}^{\\ \\ \\ a} + R_{cbd}^{\\ \\ \\\n a}]\\dot X^dY^c - (X\\leftrightarrow Y).\\nonumber\n\\end{eqnarray}\n\\end{lem}\n\n\n\n\n\n\\begin{proof} \n(i)\n\\begin{eqnarray*} \\sigma_0(\\Omega(X,Y))^a_b &=& \\sigma_0((d\\omega +\n \\omega\\wedge\\omega)(X,Y))^a_b\\\\\n&=& [(d\\sigma_0(\\omega) + \\sigma_0(\\omega)\\wedge\\sigma_0(\\omega))(X,Y)]^a_b\\\\\n&=& [(d\\omega^M + \\omega^M\\wedge\\omega^M)(X,Y)]^a_b\\\\\n&=& R^M(X,Y)^a_b = R_{cdb}^{\\ \\ \\ a}X^cY^d.\n\\end{eqnarray*}\n\n(ii) Since $\\sigma_0(\\omega_X)$ is independent of $\\xi$, after dividing by\n$i|\\xi|^{-2}\\xi$ we have\n\\begin{eqnarray*}\\sigma_{-1}(\\Omega(X,Y))^a_b &=& (d\\sigma_{-1}(\\omega)\n (X,Y))^a_b + \\sigma_0(\\omega_X)^a_c\\sigma_{-1}(\\omega_Y)^c_b\n+ \\sigma_{-1}(\\omega_X)^a_c\\sigma_{0}(\\omega_Y)^c_b\\\\\n&&\\qquad\n-\\sigma_0(\\omega_Y)^a_c\\sigma_{-1}(\\omega_X)^c_b\n+ \\sigma_{-1}(\\omega_Y)^a_c\\sigma_{0}(\\omega_X)^c_b.\n\\end{eqnarray*}\nAs an operator on sections of $\\gamma^*TM$, \n$\\Omega^{LM} - \\Omega^M$ has order $-1$ so $\\sigma_{-1}(\\Omega^{LM})\n= \\sigma_{-1}(\\Omega^{LM} -\\Omega^M)$ is independent of coordinates.\nIn Riemannian normal coordinates at $\\gamma(\\theta)$, $\\sigma_0(\\omega_X) = \\sigma_0(\\omega_Y) = 0$, so\n\\begin{eqnarray*}\\sigma_{-1}(\\Omega(X,Y))^a_b &=&\n (d\\sigma_{-1}(\\omega)(X,Y))^a_b\\\\\n&=& X(\\sigma_{-1}(\\omega_Y))^a_b - Y(\\sigma_{-1}(\\omega_X))^a_b\n -\\sigma_{-1}(\\omega_{[X,Y]})^a_b\\\\\n&=& \\frac{1}{2} X[(-2R_{cdb}^{\\ \\ \\ a} -R_{bdc}^{\\ \\ \\ a} + R_{cbd}^{\\ \\ \\\n a})Y^c\\dot\\gamma^d] - (X\\leftrightarrow Y)\\\\\n&&\\qquad -\\frac{1}{2}( -2R_{cdb}^{\\ \\ \\ a} -R_{bdc}^{\\ \\ \\ a} + R_{cbd}^{\\ \\ \\\n a})[X,Y]^c\\dot\\gamma^d.\n\\end{eqnarray*}\nThe terms involving $X(Y^c) - Y(X^c) - [X,Y]^c$ cancel (as they must, since the symbol two-form \ncannot involve derivatives of $X$ or $Y$). Thus\n$$\\sigma_{-1}(\\Omega(X,Y))^a_b = \\frac{1}{2} X[(-2R_{cdb}^{\\ \\ \\ a} \n-R_{bdc}^{\\ \\ \\ a} + R_{cbd}^{\\ \\ \\ a})Y^c\\dot\\gamma^d] - (X\\leftrightarrow Y).$$\n\nThis gives the first coordinate expression in (\\ref{moc}). The second expression follows from\n $X(\\dot\\gamma^d) = \\dot X^d $ (see (\\ref{badterms})).\n\nTo convert from the coordinate expression to the covariant expression, we follow the \nusual procedure of changing ordinary derivatives to covariant derivatives and adding bracket terms. 
For example,\n\\begin{eqnarray*}\\nabla_X(R(Y,\\dot\\gamma)) &=& (\\nabla_XR)(Y,\\dot\\gamma) + R(\\nabla_XY,\\dot\\gamma)\n + R(Y,\\nabla_X\\dot\\gamma) \\\\\n&=& X^iR_{cdb\\ ;i}^{\\ \\ \\ a}Y^c\\dot\\gamma^d + R(\\nabla_XY,\\dot\\gamma) + R_{cdb}^{\\ \\ \\\n a}Y^c(\\nabla_X\\dot\\gamma)^d.\n\\end{eqnarray*}\nIn Riemannian normal coordinates at\n$\\gamma(\\theta)$, we have $X^iR_{cdb\\ ;i}^{\\ \\ \\ a} = X^i\\partial_i R_{cdb}^{\\\n \\ \\ a} = X(R_{cdb}^{\\ \\ \\ a})$ and $(\\nabla_X\\dot\\gamma)^d = X(\\dot\\gamma^d).$ \n \n Thus \n $$\\nabla_X(R(Y,\\dot\\gamma)) -(X\\leftrightarrow Y) - R([X,Y],\\dot\\gamma) = X(R_{cdb}^{\\ \\ \\ a}\\dot\\gamma^d)Y^c - (X\\leftrightarrow Y).$$\n The other terms are handled similarly.\n \\end{proof} \n\n\\subsection{{\\bf Connection and Curvature Symbols for General $s$}}\n${}$\n\\medskip\n\nThe noteworthy feature of these computations is the linear dependence of $\\sigma_{-1}(\\omega^{s})$ on $s$. \n\nLet $g$ be the Riemannian metric on $M$.\n\n\\begin{lem}\\label{lem:33}\n(i) At $\\gamma(\\theta)$,\n$\\sigma_0(\\omega^s_X)^a_b = (\\omega^M_X)^a_b = \\chw{c}{b}{a}X^c.$\n\n(ii) $\\sigma_0(\\Omega^s(X,Y))^a_b = R^M(X,Y)^a_b = R_{cdb}^{\\ \\ \\ a}X^cY^d.$\n\n\n(iii) $\n\\frac{1}{i|\\xi|^{-2}\\xi}\\sigma_{-1}(\\omega^s_X)^a_b = s T(X,\\dot\\gamma, g)$,\nwhere $T(X, \\dot\\gamma, g)$ is tensorial and independent of $s$. \n\\end{lem}\n\n\n\\begin{proof} (i) By Lemma \\ref{pdo}, the only term of order zero in (\\ref{223}) \n is $\\omega^M_X.$\n\n(ii) The proof of Lemma \\ref{old2.2}(ii) carries over. \n\n(iii) By Theorem \\ref{thm25}, we have to compute $\\sigma_{2s-1}$ for $[D_X,(1+\\Delta)^s]$, \n$[D_\\cdot,(1+\\Delta)^s]X$, and $[D_\\cdot,(1+\\Delta)^s]X^*$, as \n$\\sigma_{-1} ((1+\\Delta)^{-s}[D_X,(1+\\Delta)^s]) = |\\xi|^{-2s}\\sigma_{2s-1}([D_X,(1+\\Delta)^s])$, etc.\n\nWrite $D_X = \\delta_X + \\Gamma\\cdot X$ in shorthand. Since $(1+\\Delta)^s$ has scalar leading order symbol, $[\\Gamma\\cdot X,(1+\\Delta)^s]$ has order $2s-1.$ Thus\nwe can compute $\\sigma_{2s-1}([\\Gamma\\cdot X,(1+\\Delta)^s])$ in any coordinate system. \nIn Riemannian normal coordinates centered at $\\gamma(\\theta)$, as in the proof of Lemma \\ref{pdo}(ii), the Christoffel symbols vanish.\nThus $\\sigma_{2s-1}([\\Gamma\\cdot X,(1+\\Delta)^s]) =0.$\n\nBy (\\ref{tsmo}), $\\sigma_{2s-1}([\\delta_X,(1+\\Delta)^s])$ is $s$ times a tensorial expression in $X, \\dot\\gamma, g$,\nsince $\\partial_i\\Gamma_{\\nu j}^{\\ell} = \\frac{1}{3}(R_{i\\nu j}^{\\ \\ \\ \\ell} +\nR_{ij\\nu }^{\\ \\ \\ \\ell})$ in normal coordinates. The term with $\\Gamma$ vanishes, so \n$\\sigma_{2s-1}( [D_X,(1+\\Delta)^s]) $ is $s$ times this tensorial expression.\n\nThe argument for $\\sigma_{2s-1}([D_\\cdot,(1+\\Delta)^s]X)$ is similar. The term\nwith \n$\\Gamma $ vanishes. \nBy (\\ref{222}), (\\ref{223a}), \n$$\\sigma_{2s-1}([\\delta_\\cdot,(1+\\Delta)^s]X)^a_b = \ni\\sum_j\\partial_t^j\\partial_\\xi^j|_{t=0, \\xi=0} (p(\\theta, \\eta+\\xi)^a_e\\Gamma_{bc}^e(\\gamma(\\theta-t)) X^c(\\theta-t)).$$\nBy (\\ref{tsmo}), the right hand side is linear in $s$ for Re$(s) <0$. By (\\ref{abc}), this implies \nthe linearity in $s$ for Re$(s)>0.$ \n\nSince $\\sigma_{2s-1}([D_\\cdot,(1+\\Delta)^s]X^*) = (\\sigma_{2s-1}([D_\\cdot, (1+\\Delta)^s]X))^*$, this \nsymbol is also linear in $s$.\n\\end{proof} \n\n\n\\section{{\\bf The Loop Group Case}}\n\nIn this section, we relate our work to Freed's work on based loop groups\n$\\Omega G$\n\\cite{Freed}. 
We find a particular representation of the loop algebra that\ncontrols \nthe order of the curvature of the $H^1$ metric on $\\Omega G.$\n\n\n$\\Omega G\\subset LG$ has tangent space $T_\\gamma\\Omega G\n= \\{X\\in T_\\gamma LG: X(0) = X(2\\pi) = 0\\}$ in some Sobolev topology. \nInstead of using \n$D^2\/d\\gamma^2$ to define the Sobolev spaces, the usual choice is\n$\\Delta_{S^1} = -d^2\/d\\theta^2$ coupled to the\nidentity operator on the Lie algebra ${\\mathfrak g}$. Since this operator has\nno kernel on $T_\\gamma\\Omega M$, \n$1 + \\Delta$ is replaced by\n $\\Delta$. These changes in the $H^s$ inner product\ndo not alter the spaces of Sobolev sections, but the $H^s$ metrics on $\\Omega G$ are \nno longer induced from a metric on $G$ as in the previous sections.\n\n\nThis simplifies the calculations of the Levi-Civita connections.\nIn particular,\\\\\n $[D_Z,\\Delta^s] = 0$, so there is no term $A_XY$ as in (\\ref{axy}). \nAs a result, one can work directly with the six term formula (\\ref{5one}).\nFor $X, Y, Z$ left\ninvariant vector fields, the first three terms on the right hand side of \n(\\ref{5one}) vanish. Under the standing assumption that $G$ has a left\ninvariant, \nAd-invariant inner product,\none obtains\n$$2\\nabla^{(s)}_XY = [X,Y] + \\Delta^{-s}[X,\\Delta^sY] +\n\\Delta^{-s}[Y,\\Delta^sX]$$\n\\cite{Freed}. \n\n\nIt is an interesting question to compute the order of the curvature operator\nas a function of $s$. For based loops, Freed proved that this order is at\nmost $-1$. In \\cite{andres}, it is\nshown that the order of $\\Omega^s$ is at most $-2$ for all $s\\neq 1\/2, 1$ on\nboth $\\Omega G$ and $LG$, and is exactly $-2$ for $G$ nonabelian. \n For the case $s=1$, we have a much stronger result. \n\n\n\\begin{prop} The curvature of the\nLevi-Civita connection for the $H^1$ inner product on $\\Omega\nG$ associated to $-\\frac{d^2}{d\\theta^2}\\otimes {\\rm Id}$ is a $\\Psi{\\rm DO}$ of order $-\\infty.$\n\\end{prop}\n\n\\noindent {\\sc Proof:}\nWe give two quite different proofs. \n\nBy \\cite{Freed}, the $s=1$ curvature operator $\\Omega = \\Omega^{1}$\nsatisfies\n$$\\left\\langle \\Omega(X,Y)Z,W\\right\\rangle_1 = \\left(\\int_{S^1}[Y,\\dot\nZ],\\int_{S^1}[X,\\dot W]\\right)_{\\mathfrak g} - (X\\leftrightarrow Y),$$\nwhere the inner product is the Ad-invariant form on the Lie algebra ${\\mathfrak g}$. We\nwant to write the right hand side of \nthis equation as an $H^1$ inner product with $W$, in\norder to recognize $\\Omega(X,Y)$ as a $\\Psi{\\rm DO}.$\n\n\nLet $\\{e_i\\}$ be an orthonormal basis of ${\\mathfrak g}$, considered\nas a left-invariant frame of $TG$ and as global sections of $\\gamma^*TG.$\n Let $\\cc{i}{j}{k} = ([e_i,e_j],\ne_k)_{\\mathfrak g}$ be the structure constants of ${\\mathfrak g}.$\n(The Levi-Civita connection on left invariant vector fields\nfor the left invariant metric is\ngiven by $\\nabla_XY = \\frac{1}{2}[X,Y]$, so the structure constants\nare twice the Christoffel symbols.) 
For $X = X^ie_i =\nX^i(\\theta)e_i, Y = Y^je_j,$ etc., integration by parts \ngives\n$$\\left\\langle\\Omega(X,Y)Z,W\\right\\rangle_1 = \\left(\\int_{S^1} \\dot\nY^iZ^jd\\theta\\right)\\left( \\int_{S^1}\\dot X^\\ell W^m d\\theta\\right)\n\\cc{i}{j}{k}\\cc{\\ell}{m}{n}\\delta_{kn} - (X\\leftrightarrow Y).$$\nSince\n$$\\int_{S^1}\\cc{\\ell}{m}{n}\\dot X^\\ell W^m =\n\\int_{S^1}\\left(\\delta^{mc}\\cc{\\ell}{c}{n}\\dot X^\\ell\ne_m,W^be_b\\right)_{\\mathfrak g} = \\left\n\\langle \\Delta^{-1}(\\delta^{mc}\\cc{\\ell}{c}{n} \\dot X^\\ell e_m),\nW\\right\\rangle_1,$$\nwe get\n\\begin{eqnarray*}\n\\langle\\Omega(X,Y)Z,W\\rangle_1 &=& \\left\\langle\n \\left[\\int_{S^1} \\dot Y^i Z^j\\right]\n\\cc{i}{j}{k}\\delta_{kn}\\delta^{ms}\\cc{\\ell}{s}{n} \\Delta^{-1}(\\dot\nX^\\ell e_m),W\\right\\rangle_1- (X\\leftrightarrow Y)\\\\\n&=&\\left\\langle \\left[ \\int_{S^1}\na_j^k(\\theta,\\theta')Z^j(\\theta')d\\theta'\\right] e_k,W\\right\\rangle_1,\n\\end{eqnarray*}\nwith\n\\begin{equation}\\label{a}a_j^k(\\theta,\\theta') = \\dot Y^i(\\theta')\n\\cc{i}{j}{r}\\delta_{rn}\\delta^{ms}\\cc{\\ell}{s}{n} \n\\left( \\Delta\n^{-1}( \\dot X^\\ell\ne_m)\\right)^k(\\theta) - (X\\leftrightarrow Y).\n\\end{equation}\n\n\n\n\nWe now show that $Z\\mapsto \\left(\\int_{S^1}\na_j^k(\\theta,\\theta')Z^j(\\theta')d\\theta'\\right)e_k$ is a smoothing\noperator. Applying Fourier transform and Fourier inversion to $Z^j$\nyields\n\\begin{eqnarray*} \\int_{S^1} a_j^k(\\theta,\\theta')Z^j(\\theta')d\\theta'\n&=& \\int_{S^1\\times{\\mathbb R}\\times S^1}\na_j^k(\\theta,\\theta')e^{i(\\theta'\n-\\theta'')\\cdot\\xi}Z^j(\\theta'')d\\theta''d\\xi d\\theta'\\\\\n&=&\n\\int_{S^1\\times{\\mathbb R}\\times S^1} \\left[ a_j^k(\\theta,\\theta')e^{-i(\\theta\n-\\theta')\\cdot\\xi}\\right]e^{i(\\theta\n-\\theta'')\\cdot\\xi}Z^j(\\theta'')d\\theta''d\\xi d\\theta',\n\\end{eqnarray*}\nso $\\Omega(X,Y)$ is a $\\Psi{\\rm DO} $ with symbol \n\\begin{equation}\\label{b} b_j^k(\\theta,\\xi) =\n\\int_{S^1} a_j^k(\\theta,\\theta') e^{i(\\theta-\\theta')\\cdot\\xi} d\\theta',\n\\end{equation}\nwith the usual mixing of local and global notation.\n\nFor fixed $\\theta$,\n(\\ref{b}) contains the Fourier transform of $\\dot Y^i(\\theta')$ and $\\dot X^i(\\theta')$, as\nthese are the only $\\theta'$-dependent terms in (\\ref{a}).\nSince the\nFourier transform is taken in a local chart with respect to a\npartition of unity, and since in each chart $\\dot Y^i$ and $\\dot X^i$ times the\npartition of unity function is compactly supported, the Fourier\ntransform of $a_j^k$ in each chart is rapidly decreasing. Thus\n$b_j^k(\\theta,\\xi)$ is the product of a rapidly decreasing function\nwith $e^{i\\theta\\cdot\\xi}$, and hence is of order $-\\infty.$\n\n\nWe now give a second proof. For all $s$,\n$$\\nabla_X Y = \\frac{1}{2}[X,Y] -\\frac{1}{2} \\Delta^{-s}[\\Delta^sX,Y]\n+\\frac{1}{2}\\Delta^{-s}[X,\\Delta^sY].$$\nLabel the terms on the right hand side (1) -- (3).\n As an operator on $Y$ for fixed $X$, the symbol of (1) is\n$\\sigma((1))^a_\\mu = \\frac{1}{2}X^ec_{\\varepsilon\\mu}^a.$\n Abbreviating $\\xii{-s}$ by $\\xi^{-2s}$, we have\n\\begin{eqnarray*} \\sigma((2))^a_\\mu &\\sim & -\\frac{1}{2}c_{\\varepsilon\\mu}^a\n\\left[ \\xi^{-2s}\\Delta^sX^\\varepsilon -\\frac{2s}{i}\\xi^{-2s-1}\n\\partial_\\theta\\Delta^s X^\\varepsilon \\right.\\\\\n&&\\ \\ \\ \\left. 
+\\sum_{\\ell=2}^\\infty\\frac{(-2s)(-2s-1)\n\\ldots(-2s-\\ell+1)}{i^\\ell \\ell!}\\xi^{-2s-\\ell}\n\\partial_\\theta^\\ell\\Delta^s X^\\varepsilon \\right]\\\\\n\\sigma((3))^a_\\mu &\\sim & \\frac{1}{2}c_{\\varepsilon\\mu}^a\n\\left[ X^\\varepsilon+ \\sum_{\\ell=1}^\\infty \\frac{(-2s)(-2s-1)\n\\ldots(-2s-\\ell+1)}{i^\\ell \\ell!} \\xi^{-\\ell}\\partial_\\theta^\\ell X^\\varepsilon\\right].\n\\end{eqnarray*}\nThus\n\\begin{eqnarray}\\label{fourone}\n\\sigma(\\nabla_X)^a_\\mu &\\sim& \\frac{1}{2}c_{\\varepsilon\\mu}^a\\left[ 2X^\\varepsilon\n -\\xi^{-2s}\\Delta^sX^\\varepsilon\n+\\frac{2s}{i}\n\\xi^{-2s-1}\\partial_\\theta\\Delta^sX^\\varepsilon\\right. \\nonumber\\\\\n&&\\ \\ \\\n -\\sum_{ \\ell=2}^\\infty\\frac{(-2s)(-2s-1)\\ldots(-2s-\\ell+1)}{i^\\ell \\ell!}\n\\xi^{-2s-\\ell}\\partial_\\theta^\\ell\\Delta^s X^\\varepsilon \\\\\n&&\\ \\ \\ \\left. + \\sum_{\\ell=1}^\\infty \\frac{(-2s)(-2s-1)\n\\ldots(-2s-\\ell+1)}{i^\\ell \\ell!} \\xi^{-\\ell}\\partial_\\theta^\\ell\n X^\\varepsilon. \\right].\\nonumber\n\\end{eqnarray}\n\nSet $s=1$ in (\\ref{fourone}), and replace $\\ell$\nby\n$\\ell-2$ in the first infinite sum. Since $\\Delta = -\\partial_\\theta^2$, a\nlittle algebra gives\n\\begin{equation}\\label{fourtwo}\n\\sigma(\\nabla_X)^a_\\mu \\sim c_{\\varepsilon\\mu}^a\\sum_{\\ell=0}^\\infty\n\\frac{(-1)^\\ell}{i^\\ell}\n\\partial_\\theta^\\ell X^\\varepsilon\\xi^{-\\ell}\n= {\\operatorname{ad\\,}}\\left( \\sum_{\\ell=0}^\\infty\n\\frac{(-1)^\\ell}{i^\\ell}\\partial_\\theta^\\ell\nX\\xi^{-\\ell}\n\\right).\n\\end{equation}\n\nDenote the infinite sum in the last term of (\\ref{fourtwo})\nby $W(X,\\theta,\\xi)$. The map\n$X\\mapsto W(X,\\theta,\\xi)$ takes the Lie algebra of left invariant vector\nfields on $LG$ to the Lie algebra\n$L{\\mathfrak g}[[\\xi^{-1}]], $\nthe space of formal $\\Psi{\\rm DO}$s of nonpositive integer order on the trivial bundle\n$S^1\\times{\\mathfrak g} \\to S^1$, where the Lie bracket on the\ntarget involves multiplication of power series and bracketing in\n${\\mathfrak g}.$ We claim that this map is a Lie algebra homomorphism.\nAssuming this, we see that\n\\begin{eqnarray*} \\sigma\\left(\\Omega(X,Y)\\right) &=&\n \\sigma\\left([\\nabla_X,\\nabla_Y] -\\nabla_{[X,Y]}\\right)\n\\sim \\sigma\\left( [{\\operatorname{ad\\,}} W(X), {\\operatorname{ad\\,}} W(Y)] - {\\operatorname{ad\\,}} W([X,Y]) \\right)\\\\\n&=& \\sigma\\left( {\\operatorname{ad\\,}} ( [W(X), W(Y)]) - {\\operatorname{ad\\,}} W([X,Y]) \\right) = 0,\n\\end{eqnarray*}\nwhich proves that $\\Omega(X,Y)$ is a smoothing operator.\n\nTo prove the claim,\nset $X = x^a_n\\eff{n}e_a, Y =y^b_m\\eff{m}e_b$.\nThen\n\\begin{eqnarray*} W([X,Y]) &=&\n W( x^ny^m\\eff{(n+m)}c_{ab}^k e_k) =\\sum_{\\ell=0}^\\infty \\frac{(-1)^\\ell}\n{i^\\ell } c_{ab}^k \\partial_\\theta^\\ell\n \\left(x^a_ny^b_m\\eff{(n+m)}\\right) \\xi^{-\\ell}e_k\\\\\n {[} W(X) , W(Y)]\n&=& \\sum_{\\ell=0}^\\infty \\sum_{p+q = \\ell}\n\\frac{(-1)^{p+q}}{i^{p+q}} \\partial_\\theta^p \\left(\nx^a_n\\eff{n}\\right) \\partial_\\theta^q\n\\left( y^b_m\\eff{m}\\right)\\xi^{-(p+q)}c_{ab}^k e_k,\n\\end{eqnarray*}\nand these two sums are clearly equal.\n\\hfill $\\Box$\n\n\\bigskip\n\nIt would be interesting to understand how the map $W$ fits into the\nrepresentation theory of the loop algebra $L{{\\mathfrak g}}.$ \n\\bigskip\n\n\\large\n\\noindent {{\\bf Part II. 
Characteristic Classes on $LM$}}\n\\normalsize\n\\bigskip\n\n\nIn this part, we construct a general theory of Chern-Simons\nclasses on certain infinite rank bundles including the frame\/tangent bundle of \nloop spaces,\nfollowing the construction of primary characteristic classes\nin \\cite{P-R2}. The primary classes vanish on the tangent bundles of\nloop spaces, which forces the\nconsideration of secondary classes. \nThe key ingredient is to replace the ordinary matrix trace in the Chern-Weil\ntheory of\n invariant polynomials\non finite dimensional Lie groups with the Wodzicki residue on invertible bounded\n$\\Psi{\\rm DO}$s. \n\nAs discussed in the Introduction, there are absolute and relative versions of Chern-Simons theory. We use the relative version, which assigns an odd degree form to a pair \nof connections.\nIn particular, for $TLM$, we can use the $L^2$ (i.e. $s=0$) and\n $s=1$ Levi-Civita connections to form Wodzicki-Chern-Simons (WCS) classes associated to a metric on $M$. \n \n In \\S\\ref{CSCLS}, we develop the general theory of Wodzicki-Chern and WCS classes for \n bundles with structure group $\\Psi{\\rm DO}_0^*$, the group of invertible classical zeroth order pseudodifferential operators. We show the vanishing of \n the Wodzicki-Chern classes of $LM$ and more general mapping spaces. \nAs in finite dimensions, we show the existence of WCS classes in \n $H^n(LM,{\\mathbb C})$ if dim$(M) = n$ is odd (Definition \\ref{def:WCS})\n and give the local expression for the WCS classes associated to the Chern character\n (Theorem \\ref{thm:5.5}).\n In Theorem \\ref{WCSvan}, we prove that the\n Chern character WCS class vanishes if dim$(M) \\equiv 3\n \\ ({\\rm mod}\\ 4)$.\nIn \\S\\ref{dimfive}, we associate to every circle action $a:S^1\\times M^n\\to M^n$\n an $n$-cycle $[a]$\n in $LM$. For a specific metric on $S^2\\times S^3$ and a specific circle action $a,$\n we prove via exact computer calculations that the WCS class is nonzero by integrating it over $[a].$\n Since the corresponding integral for the cycle associated to the trivial action \n is zero, $a$ cannot be homotoped to the trivial action. \nWe use this result to prove that $\\pi_1({\\rm Diff}\n (S^2\\times S^3))$ is infinite.\n\n\nThroughout this part, $H^*$ always refers to de Rham cohomology for complex valued forms. By \\cite{beggs}, $H^*(LM)\\simeq H^*_{\\rm sing}(LM,{\\mathbb C}).$\n\n\n\\section{{\\bf Chern-Simons Classes on Loop Spaces}}\\label{CSCLS}\n\nWe begin in \\S5.1 with a review of Chern-Weil and Chern-Simons theory in\nfinite dimensions, following \\cite{C-S}. \n\nIn\n\\S5.2, we discuss Chern-Weil and Chern-Simons theory on a class of infinite rank bundles\nincluding the frame bundles of loop spaces. As in \\S2.7, the geometric\nstructure group of these bundles\n is $\\Psi{\\rm DO}_0^*$, so we need a trace on the Lie algebra\n$\\Psi{\\rm DO}_{\\leq 0}$ to define invariant polynomials. There are two\ntypes of traces, one given by taking the zeroth order symbol and one given by\nthe Wodzicki residue \\cite{paycha-lescure}, \\cite{ponge}. Here we only consider the \nWodzicki residue trace. \n\n\nUsing this trace, we generalize the usual definitions of Chern and Chern-Simons classes in\nde Rham cohomology. In particular,\ngiven a $U(n)$-invariant polynomial $P$ of degree $k$, we define a corresponding \nWCS class $CS^W_P\\in H^{2k-1}(LM)$ if dim$(M) = 2k-1.$ We are forced to consider these secondary classes, because the Wodzicki-Chern classes of mapping spaces\n${\\rm Maps} (N,M)$ vanish. 
In Theorem \\ref{thm:5.5}, we give an exact expression for the WCS\nclasses associated to the Chern character.\nIn Theorem \\ref{WCSvan}, we show that these WCS classes in $H^{4k+3}(LM^{4k+3})$\nvanish; in contrast, in finite dimensions, the Chern-Simons classes associated to the Chern character vanish in $H^{4k+1}(M^{4k+1}).$\n\n\n\n\n\\subsection{{\\bf Chern-Weil and Chern-Simons Theory for Finite Dimensional\n Bundles} }\n\nWe first review the Chern-Weil construction. \nLet $G$ be a finite dimensional Lie group with Lie algebra ${\\mathfrak g}$, and let\n $G\\to F\\to M$ be a principal $G$-bundle over a manifold $M$. \nSet $\n {\\mathfrak g}^k={\\mathfrak g}^{\\otimes k}$ and let\n\\begin{equation*}I^k(G)\n= \\{P:{\\mathfrak g}^k\\to {\\mathbb C}\\ | P\\ \\text{symmetric,\n multilinear, Ad-invariant}\\}\n\\end{equation*}\nbe the degree $k$ Ad-invariant polynomials on ${\\mathfrak g}.$\n\n\n\\begin{rem}\nFor classical Lie groups $G$, $I^k(G)$ is generated by the polarization of\nthe Newton polynomials $\\operatorname{Tr}(A^\\ell)$, where $\\operatorname{Tr}$ is the usual trace on finite\ndimensional matrices.\n\\end{rem}\n\n\nFor $\\phi\\in\\Lambda^\\ell(F,{\\mathfrak g}^k)$, $P\\in I^k(G)$, set\n$P(\\phi)=P\\circ \\phi\\in\\Lambda^\\ell(F)$. \n\n\n\\begin{thm}[The Chern-Weil Homomorphism \\cite{K-N}] \\label{previous}\nLet $F\\to M$ have a connection $\\theta$ with curvature $\\Omega_F\\in\n\\Lambda^2(F,{\\mathfrak g})$. For $P\\in I^k(G)$, $P(\\Omega_F)$ is a closed\n invariant real form on $F$, and so\ndetermines a closed form\n$P(\\Omega_M)\\in \\Lambda^{2k}(M)$.\nThe Chern-Weil map\n\\begin{equation*}\n\\oplus_{k}I^k(G)\\to H^{*}(M), \\ P\\mapsto [P(\\Omega_M)]\n\\end{equation*}\nis a well-defined algebra homomorphism, and in particular is independent of the choice of\nconnection on $F$.\n\\end{thm} \n\nThe proof depends on:\n\\begin{itemize}\n\\item (The {\\it commutativity property}) \nFor $\\phi\\in\\Lambda^{\\ell}(F,{\\mathfrak g}^k)$, \n\\begin{equation}\\label{eq:deri}\nd(P(\\phi))=P(d\\phi).\n\\end{equation}\n\\item (The {\\it infinitesimal invariance property})\nFor $\\psi_i\\in\\Lambda^{\\ell_i}(F,{\\mathfrak g})$, $\\phi\\in\\Lambda^{1}(F,{\\mathfrak g})$ and $P\\in\n I^k(G)$, \n\\begin{equation}\\label{eq:inva}\n\\sum^k_{i=1}\n(-1)^{\\ell_1+\\dots+\\ell_i}P(\\psi_1\\wedge\\dots\\wedge[\\psi_i,\\phi]\\wedge\\dots\n\\psi_l)=0. \n\\end{equation}\n\\end{itemize}\n$[P(\\Omega_M)]$ is\ncalled the {\\it characteristic class} of $P$. For example, the characteristic class\n associated to $\\operatorname{Tr}(A^k)$ is the k${}^{\\rm th}$ component of the Chern character of $F$.\n\n\nPart of the theorem's content\nis that for any two connections on $F$,\n$P(\\Omega_1) - P(\\Omega_0) = \ndCS_P(\\theta_1,\\theta_0)$ \nfor some odd form $CS_P(\\nabla_1, \\nabla_0)$. Explicitly, \n\\begin{equation}\\label{5.1}\nCS_P(\\theta_1,\\theta_0) = \\int_0^1 P(\\theta_1-\\theta_0,\\overbrace{\\Omega_t,...,\\Omega_t}^{k-1})\n\\ dt\n\\end{equation}\nwhere \n$$\\theta_t = t\\theta_0+(1-t)\\theta_1,\\ \\ \\Omega_t = d\\theta_t+\\theta_t\\wedge\\theta_t$$ \\cite[Appendix]{chern}. 
\n\n\\begin{rem}\nFor $F\\stackrel{\\pi}{\\to} M$, \n$\\pi^*F\\to F$ is trivial.\nTake $\\theta_1$ to be the flat connection on $\\pi^*F$\nwith respect to a fixed trivialization.\nLet $\\theta_1$ also\ndenote the connection $\\chi^*\\theta_1$ on $F$, \nwhere $\\chi$ is the global section of $\\pi^*F.$ For any other connection $\\theta_0$ on $F$, $\\theta_t = t\\theta_0, \\Omega_t = t\\Omega_0 + (t^2-t)\\theta_0\\wedge \\theta_0$. \n Assume an invariant polynomial $P$ takes values in ${\\mathbb R}.$ Then we\nobtain the formulas for the transgression form $TP(\\Omega_1)$\non $F$: for \n\\begin{equation}\\label{eq:ChernSimons}\n\\phi_t =t\\Omega_1+\\frac{1}{2}(t^2-t)[\\theta,\\theta],\\ \\ \nTP(\\theta)=l\\int_0^1 P(\\theta\\wedge \\phi^{k-1}_t)dt,\n\\end{equation}\n$dTP(\\theta)=P(\\Omega_1)\\in \\Lambda^{2l}(F)$\n\\cite{C-S}. $TP(\\Omega_1)$ pushes down to an ${\\mathbb R}\/{\\mathbb Z}$-class on $M$,\nthe absolute Chern-Simons class.\n\\end{rem}\n\nAs usual, these formulas carry over to connections $\\nabla = d+\\omega$\non vector bundles $E\\to M$ in the form\n\\begin{equation}\\label{5.11}\nCS_{P}(\\nabla_1,\\nabla_0) = \\int_0^1 P(\\omega_1-\\omega_0,\\Omega_t,..., \n\\Omega_t)\\ dt,\n\\end{equation}\nsince $\\omega_1-\\omega_0$ and\n$\\Omega_t$ are globally defined forms. \n\n \n \n \\subsection{{\\bf Chern-Weil and Chern-Simons Theory for $\\Psi{\\rm DO}_0^*$-Bundles}}\n\n \n Let $\\mathcal E\\to\\mathcal M$ be an infinite rank bundle\n over a paracompact Banach manifold\n $\\mathcal M$, with the fiber of $\\mathcal E$ \n modeled on a fixed Sobolev class of sections of \n a finite rank hermitian vector bundle $E\\to N$, and with structure group $\\pdo_0^*(E)$. \n For such $\\pdo_0^*$-bundles,\n we can produce\n primary and secondary characteristic classes \n once we choose a trace on $\\Psi{\\rm DO}_{\\leq 0}(E)$.\n Since the adjoint action of $\\pdo_0^*$ on $\\Psi{\\rm DO}_{\\leq 0}$ is by conjugation, a trace on $\\Psi{\\rm DO}_{\\leq 0}$ will extend to a polynomial on forms\nsatisfying (\\ref{eq:deri}), (\\ref{eq:inva}), so the finite dimensional proofs extend. \n \nThese traces were classified in \\cite{lesch-neira, paycha-lescure}, although there are slight variants\nin our special case $N= S^1$ \\cite{ponge}. Roughly speaking, the traces fall into two classes, the leading order symbol trace \\cite{P-R2} and the Wodzicki residue. In this paper,\nwe consider only the Wodzicki residue, and refer to \\cite{lrst} for the leading order symbol\ntrace.\n\nFor simplicity, we mainly restrict to the generating invariant polynomials $P_k(A) = A^k$, and \nonly consider $\\mathcal E = TLM$, which we recall is the complexified tangent bundle. We will work with vector bundles rather than principal bundles. \n\n\n\n\\begin{defn} \\label{def:WCS}\n(i) The k${}^{\\rm th}$ {\\it Wodzicki-Chern (WC) form} of a $\\Psi{\\rm DO}_0^*$-connection\n$\\nabla$ on $TLM$ with curvature $\\Omega$ is\n\\begin{equation}\\label{5.1a}\nc_k^W(\\Omega)(\\gamma) =\\frac{1}{k!}\n \\int_{S^*S^1}\\operatorname{tr}\\sigma_{-1}(\\Omega^{k}) \\ d\\xi dx.\n\\end{equation}\nHere we recall that for each $\\gamma\\in LM$,\n$\\sigma_{-1}(\\Omega^k)$ is a $2k$-form with values in endomorphisms\n of a trivial bundle\nover $S^*S^1$. 
\n\n\n\n(ii) The k${}^{\\rm th}$ {\\it Wodzicki-Chern-Simons (WCS) form} of two $\\Psi{\\rm DO}_0^*$-connections \n$\\nabla_0,\\nabla_1$ on $TLM$ is\n\\begin{eqnarray}\\label{5.22}\nCS^W_{2k-1}(\\nabla_1,\\nabla_0) &=&\\frac{1}{k!}\n \\int_0^1 \\int_{S^*S^1}\\operatorname{tr}\\sigma_{-1}((\\omega_1-\\omega_0)\\wedge \n(\\Omega_t)^{k-1})\\ dt\\\\ \n&=&\\frac{1}{k!} \\int_0^1 {\\rm res}^{\\rm w} \n[(\\omega_1-\\omega_0)\\wedge \n(\\Omega_t)^{k-1}]\\ dt.\\nonumber\n\\end{eqnarray}\n\n(iii) The k${}^{\\rm th}$ {\\it Wodzicki-Chern-Simons form} associated to a Riemannian metric \n$g$ \non $M$, denoted $CS^W_{2k-1}(g)$, is $CS^W_{2k-1}(\\nabla_1,\\nabla_0)$, where $\\nabla_0, \\nabla_1$ refer to the \n$L^2$ and $s=1$ Levi-Civita connections on $LM$, respectively.\n\n\n(iv) Let $\\Sigma = \\{\\sigma\\}$ be the group of permutations of $\\{1,...,k\\}$. Let $I:\n1\\leq i_1< ...< i_\\ell = k$ be a partition of $k$ (i.e. with $i_0=0$, $\\sum_{j=1}^\\ell\n (i_j-i_{j-1}) = k$). For the symmetric, $U(n)$-invariant, \nmultilinear form on ${\\mathfrak u}(n)$\n\\begin{eqnarray*} P_I(A_1,A_2,...,A_k) &=& \\frac{1}{k!}\n\\sum_\\sigma \\operatorname{tr}(A_{\\sigma(1)}\\cdot...\\cdot A_{\\sigma(i_1)})\n\\operatorname{tr}(A_{\\sigma(i_1+1)}\\cdot...\\cdot A_{\\sigma(i_2)})\\\\\n&&\\qquad \\cdot ...\\cdot \\operatorname{tr}(A_{\\sigma(i_{\\ell-1}+1)}\n\\cdot ...\\cdot A_{\\sigma(k)}),\n\\end{eqnarray*}\ndefine the symmetric, $\\pdo_0^*$-invariant, multilinear form on $\\Psi{\\rm DO}_{\\leq 0}$ by\n\\begin{eqnarray*} P_I^W(B_1,...,B_k) &=& \\frac{1}{k!}\n \\sum_\\sigma\\left( \\int_{S^*S^1} \\operatorname{tr}\\sigma_{-1}(B_{\\sigma(1)}\\cdot...\\cdot B_{\\sigma(i_1)}) \\right.\\\\\n&&\\qquad \\left. \\cdot\n \\int_{S^*S^1} \\operatorname{tr}\\sigma_{-1}\n(B_{\\sigma(i_1+1)}\\cdot...\\cdot B_{\\sigma(i_2)})\\right. \\\\\n&&\\qquad \\left. \\cdot ...\\cdot \\int_{S^*S^1} \\operatorname{tr}\\sigma_{-1}(B_{\\sigma(i_{\\ell-1}+1)} \n\\cdot ...\\cdot B_{\\sigma(k)})\\right).\n\\end{eqnarray*}\n The {\\it Wodzicki-Chern form associated\nto $P_I$} for a $\\Psi{\\rm DO}_0^*$-connection on $TLM$ with curvature $\\Omega$ is \n\\begin{eqnarray}\\label{wcpi} c_{P_I}^W(\\Omega) &=& \nP_I^W(\\Omega,\\Omega,...,\\Omega)\\\\\n &=& \\frac{1}{k!}\n\\int_{S^*S^1} \\operatorname{tr}\\sigma_{-1}(\\Omega^{k_1}) \\cdot \\int_{S^*S^1} \\operatorname{tr}\\sigma_{-1}(\\Omega^{k_2})\n\\cdot\n...\\cdot \\int_{S^*S^1} \\operatorname{tr}\\sigma_{-1}(\\Omega^{k_\\ell})\\nonumber\\\\\n&=& \\frac{k_1!k_2!\\cdot...\\cdot k_\\ell !}{k!} \nc_{k_1}^W(\\Omega)c_{k_2}^W(\\Omega)\\cdot ...\\cdot c_{k_\\ell}^W(\\Omega),\\nonumber\n\\end{eqnarray}\nwhere $k_1=i_1-i_0, k_2 = i_2-i_1,...,\nk_\\ell = i_\\ell - i_{\\ell-1}$.\n\nSetting $K = (k_1,...,k_\\ell)$, we also denote $c_{P_I}^W(\\Omega)$ by $c_K^W(\\Omega).$\n\n\n\n(v) \nLet $\\nabla_0,\\nabla_1$ be $\\Psi{\\rm DO}_0^*$-connections on $TLM$ with connection forms\n$\\omega_0, \\omega_1,$ respectively. The \n {\\it Wodzicki-Chern-Simons form associated to $P_I$ and $\\nabla_0, \\nabla_1$}\nis\n$$\nCS^W_{P_I}(\\nabla_1,\\nabla_0) = \\int_0^1 P_I^W(\\omega_1-\\omega_0,\\Omega_t,...,\n\\Omega_t)dt. $$ \n\\end{defn}\n\nIn (iv) and (v), we do not bother with a normalizing constant, since we do not claim that \nthere is a normalization which gives classes with integral periods. \nNote that the k${}^{\\rm th}$ WCS class is associated to $P_k(A_1,...,A_k) = \\operatorname{tr}(A_1\\cdot\n...\\cdot A_k)$, i.e. 
the partition $K = (k)$, or in other words to the polynomial giving the \nk${}^{\\rm th}$ component of the Chern character.\n\nAs in finite dimensions, $c_k^W(\\nabla)$ is a closed $2k$-form, with de Rham cohomology\nclass $c_k(LM)$\n independent of $\\nabla$, as $c_k^W(\\Omega_1) - c_k^W(\\Omega_0) =\ndCS^W_{2k-1}(\\nabla_1,\\nabla_0).$ \n \n\\begin{rem} It is an interesting question to determine all the $\\Psi{\\rm DO}_0^*$-invariant polynomials on \n$\\Psi{\\rm DO}_{\\leq 0}.$ As above, $U(n)$-invariant polynomials combine with the Wodzicki residue \n(or the other traces on $\\Psi{\\rm DO}_{\\leq 0}$) to give $\\Psi{\\rm DO}_0^*$-polynomials,\nbut there may be others. \n\\end{rem} \n\n\n\nThe tangent space $TLM$, and more generally mapping spaces\nMaps$(N,M)$ with $N$ closed\nhave vanishing Wodzicki-Chern classes. Here we take a Sobolev topology on Maps$(N,M)$ for some\nlarge Sobolev parameter, so that Maps$(N,M)$ is a paracompact Banach manifold.\nWe denote the de Rham class of $c_{P_I}^W(\\Omega)$ for a connection on $\\mathcal E$ by\n$c_{P_I}(\\mathcal E).$ \n\n\n\n\\begin{prop} \\label{prop:maps} Let $N, M$ be closed manifolds, and let {\\rm Maps}${}_f(N,M)$ denote\nthe component of a fixed $f:N\\to M$. Then the cohomology classes\n$c_{P_I}^W({\\rm Maps}_f(N,M)) $ of $T{\\rm Maps}(M,N)$ vanish.\n\\end{prop}\n\n\\begin{proof}\nFor $TLM$, the $L^2$ connection in Lemma \\ref{lem:l2lc}\nhas curvature $\\Omega$ which is a multiplication operator. Thus $\\sigma_{-1}(\\Omega)$ and hence $\\sigma_{-1}(\\Omega^{i})$ are zero, \nso the WC forms $c_{P_I}(\\Omega)$ also vanish.\n\nFor $n\\in N$ and $h:N\\to M$,\nlet $\\operatorname{ev}_n: {\\rm Maps}_f(N,M)$ be $\\operatorname{ev}_n(h) = h(n).$ \n Then $D_XY(h)(n) \\stackrel{\\rm def}{=} \n (\\operatorname{ev}_h^*\\nabla^{LC,M})_XY(h)(n)$ is the $L^2$ Levi-Civita connection on \\\\\nMaps$(N,M).$\nAs in Lemma \\ref{lem:l2lc},\nthe curvature of $D$ is a \na multiplication operator. Details are left to the reader.\n\\end{proof}\n\n\n\\begin{rem} (i) These mapping spaces fit into the framework of the Families Index Theorem in \nthe case of a trivial fibration\n$Z\\to M\\stackrel{\\pi}{\\to} B$ of closed manifolds. Given a\nfinite rank bundle $E\\to M$, we get an associated infinite rank bundle ${\\mathcal E} = \\pi_*E\n\\to B$. For the fibration $N\\to N\\times {\\rm Maps}(N,M)\\to {\\rm Maps}(N,M)$ and $E = {\\rm\n ev}^*TM$, ${\\mathcal E}$ is $T{\\rm Maps}(N,M).$ A connection $\\nabla$ on $E$ induces a connection $\\nabla^{{\\mathcal E}}$ on ${\\mathcal E}$ defined by\n\\begin{equation*}\n(\\nabla^{{\\mathcal E}}_Z s)(b)\n(z)=\\left( (\\operatorname{ev}^*\\theta^u)_{(Z,0)} u_s\\right)(b,z).\n\\end{equation*}\nHere $u_s(b,z)=s(b)(z)$.\nThe curvature $\\Omega^{{\\mathcal E}}$ satisfies\n\\begin{equation*}\\label{eq:pullback}\n\\Omega^{{\\mathcal E}}(Z,W)s(b)(z)=(\\operatorname{ev}^*\\Omega ) ((Z,0),(W,0)) u_s(b,z).\n\\end{equation*}\nThis follows from\n\\begin{equation*}\n\\Omega^{{\\mathcal E}}(Z,W)s(b)(z)= [\\nabla^{{\\mathcal E}}_Z \\nabla^{{\\mathcal E}}_W\n-\\nabla^{{\\mathcal E}}_W \\nabla^{{\\mathcal E}}_Z -\\nabla^{{\\mathcal E}}_{[Z,W]}]\ns(b)(z).\n\\end{equation*}\nThus the connection and curvature forms take values in multiplication operators, and \nso $c_k^W({\\mathcal E}) = 0.$\n\n\nIf the fibration is nontrivial, the connection on ${\\mathcal E}$ depends on the choice of a horizontal complement to $TZ$ in $TM$, and the corresponding connection and curvature forms take\nvalues in first order differential operators. 
\n\n(ii) In finite dimensions, odd Chern forms of complexified real bundles like\\\\\n $T{\\rm Maps}(N,M)$ vanish, because the form involves a composition of an odd number of skew-symmetric matrices. In contrast, odd WC forms involve terms like\n$\\sigma_{-1}(\\Omega^1)\\wedge\\Omega^M\\wedge...\\wedge\\Omega^M,$ where $\\Omega^1$ is the curvature of the $s=1$ Levi-Civita connection. By Lemma \n\\ref{old2.2}(ii), $\\sigma_{-1}(\\Omega^1)$ is not skew-symmetric as an endomorphism. Thus\nit is not obvious that the odd WC forms vanish.\n\nSimilarly, in finite dimensions the Chern-Simons form for the odd Chern classes of complexified real bundles vanish, but this need not be the case for WCS forms. In fact, we will produce nonvanishing \nWCS classes associated to $c_3^W(TLM^5)$ in \\S\\ref{dimfive}.\n\n\\end{rem}\n\n\nIn finite dimensions, Chern classes are topological obstructions to the\nreduction of the structure group and geometric obstructions to the existence\nof a flat connection. \nWodzicki-Chern classes for $\\Psi{\\rm DO}_0^*$-bundles \nare also topological and geometric obstructions, but\nthe geometric information is a little more refined due to the grading on the\nLie algebra \n $\\Psi{\\rm DO}_{\\leq 0}$.\n\n\\begin{prop}\n Let ${\\mathcal E}\\to{\\mathcal B}$ be an infinite rank $\\pdo_0^*$-bundle, for\n $\\pdo_0^*$ acting on\n$E\\to N^n$. \nIf ${\\mathcal E}$ admits a reduction to the gauge group ${\\mathcal G}(E)$, then\n $c_k^W({\\mathcal E}) = 0$ for all $k$, and hence $c_{P_I}^W({\\mathcal E}) =0$ for all $P_I$.\nIf ${\\mathcal E}$ admits a \n $\\pdo_0^*$-connection whose\n curvature has order $-k$, then $\n c_{\\ell}({\\mathcal E}) =0$ for $\\ell \\geq [n\/k].$\n \\end{prop}\n\n\\begin{proof} If the structure group of ${\\mathcal E}$ reduces to the gauge\n group, there exists a connection one-form\n with values in Lie$({\\mathcal G}) = {\\rm End}(E)$, the Lie algebra of multiplication\n operators. Thus the Wodzicki residue of powers of the curvature vanishes,\n so the Wodzicki-Chern classes vanish.\nFor the second statement, the order of the curvature is less than\n$-n$ for $\\ell \\geq [n\/k]$, so the Wodzicki residue\n vanishes in this range. \n \\end{proof}\n \n However, we do not have examples of nontrivial WC classes; cf.~\\cite{lrst}, where it is \n conjectured that these classes always vanish. \n \\bigskip\n\n\n\n\nThe relative WCS form is not difficult to compute. 
\n\n\\begin{prop} Let $\\sigma$ be in the group of permutations of $\\{1,\\ldots,2k-1\\}.$ Then\n\\begin{eqnarray}\\label{5.4}\n\\lefteqn{CS^W_{2k-1}(g)(X_1,...,X_{2k-1}) }\\\\\n&=&\n\\frac{2}{(2k-1)!} \\sum_{\\sigma} {\\rm sgn}(\\sigma) \\int_{S^1}\\operatorname{tr} [\n(-2R(X_{\\sigma(1)},\\dot\\gamma)\n-R(\\cdot,\\dot\\gamma)X_{\\sigma(1)} + R(X_{\\sigma(1)},\\cdot)\\dot\\gamma) \\nonumber\\\\\n&&\\qquad \n\\cdot (\\Omega^M)^k(X_{\\sigma(2)},..X_{\\sigma(2k-1)} )].\\nonumber\n\\end{eqnarray}\n\\end{prop}\n\n\n\n\n\\begin{proof}\n$$\\sigma_0((\\omega_1-\\omega_0)_X)^a_b = \\Gamma_{cb}^aX^c -\\Gamma_{cb}^aX^c = 0.$$\nThus \n\\begin{equation}\\label{cswint}\nCS^W_{2k-1}(g) = \\int_0^1 \\int_{S^*S^1}\\operatorname{tr}\\sigma_{-1}(\\omega_1-\\omega_0)\\wedge (\\sigma_0(\\Omega_t))^k\\ dt.\n\\end{equation}\nMoreover,\n\\begin{eqnarray*}\\sigma_0(\\Omega_t) &=& td(\\sigma_0(\\omega_0)) + (1-t)d(\\sigma_0(\\omega_1)) \\\\\n&&\\qquad + \n(t\\sigma_0(\\omega_0) + (1-t)\\sigma_0(\\omega_1))\\wedge (t\\sigma_0(\\omega_0) + (1-t)\\sigma_0(\\omega_1))\\\\\n&=& d\\omega^M + \\omega^M\\wedge \\omega^M\\\\\n&=& \\Omega^M.\n\\end{eqnarray*}\nTherefore\n\\begin{equation}\\label{5.3}\nCS^W_{2k-1}(g) = \\int_0^1 \\int_{S^*S^1}\\operatorname{tr} [\\sigma_{-1}(\\omega_1)\n\\wedge (\\Omega^M)^k]\\ dt,\n\\end{equation}\nsince $\\sigma_{-1}(\\omega_0) = 0.$ We can drop the integral over $t$. \nThe integral over the $\\xi$ variable contributes a factor of $2$: the integrand has\na factor of $|\\xi|^{-2}\\xi$, which equals $\\pm 1$ on the two components of $S^*S^1$.\nSince the fiber of $S^*S^1$ at a fixed $\\theta$ consists of two points \nwith opposite orientation, the ``integral\" over each fiber is $1-(-1) = 2.$ \nThus\n\\begin{eqnarray}\\label{5.4a} \\lefteqn{\nCS^W_{2k-1}(g)(X_1,...X_{2k-1}) } \\\\\n&=& = \\frac{2}{(2k-1)!} \\sum_\\sigma {\\rm sgn}(\\sigma) \\int_{S^1}\\operatorname{tr}[\n(-2R(X_{\\sigma(1)},\\dot\\gamma)\n-R(\\cdot,\\dot\\gamma)X_{\\sigma(1)} + R(X_{\\sigma(1)},\\cdot)\\dot\\gamma)\\nonumber\\\\\n&&\\qquad\n\\cdot (\\Omega^M)^k(X_{\\sigma(2)},..X_{\\sigma(2k-1)} )]\\nonumber\n\\end{eqnarray}\nby Lemma \\ref{old2.1}.\n\\end{proof} \n\n\nThis produces odd classes in the de Rham cohomology of the loop space of an odd\ndimensional manifold.\n\n\\begin{thm}\\label{thm:5.5}\n (i) Let dim$(M) = 2k-1$ and let $P$ be a $U(n)$-invariant polynomial of degree \n$k.$ Then $c^W_P(\\Omega) \\equiv 0$ for any $\\Psi{\\rm DO}_0^*$-connection $\\nabla$ on\n $TLM.$ Thus $CS^W_P(\\nabla_1,\\nabla_0)$ is closed and defines a \n class $[CS^W_P(\\nabla_1,\\nabla_0)]\\in H^{2k-1}(LM).$ In particular, we can\n define $[CS^W_P(g)]\\in H^{2k-1}(LM)$ for a Riemannian metric $g$ on $M$. \n \n \n \n(ii) For dim$(M) = 2k-1$, the k${}^{\\it th}$ Wodzicki-Chern-Simons form $CS^W_{2k-1}(g)$\nsimplifies to \n \\begin{eqnarray}\\label{csg}\n\\lefteqn{CS^W_{2k-1}(g)(X_1,...,X_{2k-1}) }\\nonumber \\\\\n&=&\n\\frac{2}{(2k-1)!} \\sum_{\\sigma} {\\rm sgn}(\\sigma) \\int_{S^1}\\operatorname{tr}[\n(-R(\\cdot,\\dot\\gamma)X_{\\sigma(1)} + R(X_{\\sigma(1)},\\cdot)\\dot\\gamma)\\\\\n&&\\qquad \n\\cdot (\\Omega^M)^{k-1}(X_{\\sigma(2)},..X_{\\sigma(2k-1)} )].\\nonumber\n\\end{eqnarray}\n\n\n \n\n \\end{thm}\n \n \\begin{proof} (i) Let $\\Omega$ be the curvature of $\\nabla.$\n $c^W_P(\\Omega)(X_1,\\dots, X_{2k})(\\gamma)$ is a sum of monomials of the form\n(\\ref{wcpi}). This is\na $2k$-form on $M$, and hence vanishes. 
\n\n \n (ii) Since\n $$R(X_{1},\\dot\\gamma)\n\\cdot (\\Omega^M)^k(X_{2},..X_{2k-1}) = \n[i_{\\dot\\gamma}\\operatorname{tr}(\\Omega^{k})](X_1,...X_{2k-1}) = \\operatorname{tr}(\\Omega^k)(\\dot\\gamma, X_1,\\ldots,X_{2k-1}),$$\nthe first term on the right hand side of (\\ref{5.4a}) vanishes on a $(2k-1)$-manifold.\n\n\n\n\\end{proof}\n\n\n\n\n\n\n\n\n\\begin{rem} There are several variants to the construction of relative WCS classes.\n\n(i) If we define the transgression form $Tc_k(\\nabla)$ with the Wodzicki residue\nreplacing the trace in (\\ref{eq:ChernSimons}), it is easy to check that $Tc_k(\\nabla)$\ninvolves $\\sigma_{-1}(\\Omega).$ For $\\nabla$ the $L^2$ connection, this WCS class vanishes. For $\\nabla$ the $H^s$ connection, $s>0$, $\\sigma_{-1}(\\Omega)$ involves \nthe covariant derivative of the curvature of $M$ (cf.~Lemma \\ref{old2.2} for $s=1.$) Thus the\nrelative WCS class is easier for computations than the absolute class $[Tc_k(\\nabla)].$\n\n\n(ii) If we define $CS_k^W(g)$ using the Levi-Civita connection for the $H^s$ \nmetric instead of\nthe $H^1$ metric, the WCS class is simply multiplied by the artificial parameter $s$ by \nLemma \\ref{lem:33}. Therefore setting $s=1$ is not only computationally convenient, it \nregularizes the WCS, in that it extracts the $s$-independent information.\nThis justifies the following definition:\n\n\n\\begin{defn} \\label{def:regularized}\nThe {\\it regularized $k^{th}$ WCS class} associated to a Riemannian metric \n$g$ on $M$ is $CS_k^{W, {\\rm reg}}(g) \\stackrel{\\rm def}{=} \nCS_k^W(\\nabla^{1},\\nabla^0)$, where $\\nabla^{1}$ is the $H^1$ connection\nand $\\nabla^0$ is the $L^2$ Levi-Civita connection. \n\\end{defn} \n\n\\end{rem}\n\n\\bigskip\n\nWe conclude this section with a vanishing result that does not have a finite dimensional \nanalogue.\n \\begin{thm} \\label{WCSvan}\n The {\\it k}${}^{\\it th}$ WCS class $CS_k^W(g)$\n vanishes if ${\\rm dim}(M) \\equiv 3 \\ ({\\rm mod}\\ 4).$\n \\end{thm}\n\n\\begin{proof} Let dim$(M) =2 k-1$. Since $\\Omega^M$ takes values in skew-symmetric endomorphisms, \nso does\n$(\\Omega^M)^{k-1}$ if $k$ is even, i.e. if ${\\rm dim}(M) \\equiv 3 \\ ({\\rm mod}\\ 4).$\n The term \n $-R(\\cdot,\\dot\\gamma)X_{\\sigma(1)} + R(X_{\\sigma(1)},\\cdot)\\dot\\gamma$ in (\\ref{csg}) is a symmetric\n endomorphism. For in Riemannian normal coordinates, this term is\n $(-R_{bdca} +R_{cbda})X^c\\dot\\gamma^d \\equiv A_{ab}$, say, so the curvature terms in \n $A_{ab} - A_{ba}$ are\n \\begin{eqnarray*}\n -R_{bdca} +R_{cbda} + R_{adcb} - R_{cadb} &=& -R_{bdca} +R_{cbda} \n + R_{cbad} - R_{dbca}\\\\\n &=& -R_{bdca} +R_{cbda} -R_{cbda} + R_{bdca}=0.\n \\end{eqnarray*}\nThus the integrand in (\\ref{csg}) is the trace of a symmetric endomorphism composed with a skew-symmetric endormorphism, and so \nvanishes.\n\\end{proof}\n\n\\begin{exm} We contrast Theorem \\ref{WCSvan} with the situation in finite dimensions. \nLet dim$(M)=3.$ \nThe only invariant monomials of degree two are $\\operatorname{tr}(A_1A_2)$ and \n$\\operatorname{tr}(A_1)\n\\operatorname{tr}(A_2)$ (corresponding to $c_2$ and $c_1^2$, respectively). \n\nFor $M$, $\\operatorname{tr}(A_1A_2)$ gives rise \nto the classical Chern-Simons invariant for $M$. 
However, the Chern-Simons class associated to \n$\\operatorname{tr}(A_1)\\operatorname{tr}(A_2)$ involves $\\operatorname{tr}(\\omega_1-\\omega_0)\\operatorname{tr}(\\Omega_t)$, \nwhich vanishes since both forms take values in skew-symmetric endomorphisms.\n\nIn contrast, on $LM$ we know that the WCS class $CS^W_3$ associated to \n$\\operatorname{tr}(A_1A_2)$ vanishes. The WCS associated to $\\operatorname{tr}(A_1)\\operatorname{tr}(A_2)$ involves \n$\\operatorname{tr}\\sigma_{-1}(\\omega_1-\\omega_0) = \\operatorname{tr}\\sigma_{-1}(\\omega_1)$ and $\\operatorname{tr}\\sigma_{-1}(\\Omega_t).$ \nBoth $\\omega_1$ and $ \\Omega_t$ take values in skew-symmetric $\\Psi{\\rm DO}$s, but\nthis does not imply that the terms in their symbol expansions are skew-symmetric. In fact, a calculation using Lemma \\ref{old2.1} shows that $\\sigma_{-1}(\\omega_1)$ is not skew-symmetric. \nThus the WCS class associated to $\\operatorname{tr}(A_1)\\operatorname{tr}(A_2)$ may be nonzero.\n\n\n\\end{exm}\n\n\n\n\n\n\n\\section{{\\bf An Application of Wodzicki-Chern-Simons Classes to Circle\n Actions}}\\label{dimfive}\n\n\nIn this section we use WCS classes to distinguish different $S^1$ actions on\n$M=S^2\\times S^3$. We use this to conclude that $\\pi_1({\\rm Diff}(M) , id)$ is infinite. \n\nRecall that $H^*(LM)$ denotes de Rham cohomology of complex valued \nforms. In particular, integration of closed forms over homology cycles gives a pairing of\n$H^*(LM)$ and $H_*(LM,{\\mathbb C})$. \n\n For any closed oriented manifold $M$, let $a_0,a_1:S^1\\times M\\to M$ be two smooth actions. Thus\n$$a_i(0,m) = m, \\ a_i(\\theta,a(\\psi,m)) = a_i(\\theta + \\psi, m).$$\n\n\\begin{defn} (i) $a_0$ and $a_1$\n are {\\it smoothly homotopic} if there exists a smooth map \n$$F:[0,1]\\times S^1\\times M\\to M,\\ F(0,\\theta,m) = a_0(\\theta,m),\\\n F(1,\\theta,m) = a_1(\\theta,m).$$\n\n(ii) $a_0$ and $a_1$ are {\\it smoothly homotopic through actions} if\n $F(t,\\cdot,\\cdot):S^1\\times M\\to M$ is an action for all $t$.\n\n\\end{defn}\n\nWe can rewrite an action in two equivalent ways.\n\n\n\\begin{itemize}\n\\item $a$ determines (and is determined by)\n$a^D:S^1\\to {\\rm Diff} (M)$ given by\n$a^D(\\theta)(m) = a(\\theta,m).$ $a^D(\\theta)$ is a diffeomorphism because \n$$a^D(-\\theta)(a^D(\\theta,m)) = a(-\\theta, a(\\theta,m)) = m.$$\nSince $a^D(0) = id,$ we get a class $[a^D]\\in \\pi_1({\\rm Diff} (M), id)$, the\nfundamental group of ${\\rm Diff} (M)$ based at $id.$ Here Diff$(M)$ is a Banach manifold\nas an open subset of the Banach manifold of ${\\rm Maps} (M) = {\\rm Maps} (M,M)$ of some fixed Sobolev class.\n\n\n\\item $a$ determines (and is determined by)\n$a^L:M\\to LM$ given by $ a^L(m)(\\theta) = a(\\theta,m)$. 
This determines a class\n$[a^L]\\in H_n(LM,{\\mathbb Z})$ with $n = {\\rm dim}(M)$ by setting $[a^L] = a^L_*[M].$\n In concrete terms, if we triangulate $M$ as the $n$-cycle $\\sum_i n_i\\sigma_i$,\nwith $\\sigma_i:\\Delta^n\\to M$, \nthen $[a^L]$ is the homology class of\nthe cycle $\\sum_i n_i (a^L\\circ \\sigma_i).$ \n\n\\end{itemize}\n\nWe give a series of elementary lemmas comparing these maps.\n\n\n\\begin{lem}\\label{lem:one} $a_0$ is smoothly homotopic to $a_1$ through actions iff $[a^D_0] =\n\t [a^D_1]\\in \\pi_1({\\rm Diff} (M), id).$\n\\end{lem}\n\n\\begin{proof} ($\\Rightarrow$) Given $F$ as above, set $G:[0,1]\\times S^1\\to\n\t {\\rm Diff} (M)$ by $G(t,\\theta)(m) = F(t,\\theta,m).$ We have $G(0,\\theta)(m)\n\t = a_0(\\theta,m) = a^D(\\theta)(m)$, $G(1,\\theta)(m) = a_1(\\theta, m)\n\t = a^D_1(\\theta)(m)$.\n$G(t,\\theta)\\in{\\rm Diff} (M)$, because \n$$G(t,-\\theta)(G(t,\\theta)(m)) = F(t,-\\theta,F(t,\\theta,m)) = F(t,0,m) = m.$$\n(This uses that $F(t,\\cdot,\\cdot)$ is an action.)\nSince $F$ is\nsmooth, $G$ is a continuous (in fact, smooth) map of ${\\rm Diff} (M)$.\nThus $a^D_0, a^D_1$ \nare homotopic as elements of\\\\\n ${\\rm Maps} (S^1,{\\rm Diff} (M))$, so $[a^D_0] = [a^D_1].$\n\\bigskip\n\n\\noindent ($\\Leftarrow$) Let $G:[0,1]\\times S^1\\to {\\rm Diff} (M)$ be a continuous\nhomotopy from\n$a^D_0(\\theta) = G(0,\\theta) $ to $a^D_1(\\theta) = G(1,\\theta)$ with $G(t,0) = id$\nfor all $t$. \nIt is possible to approximate\n$G$ arbitrarily well by a smooth map, since $[0,1]\\times S^1$ is compact. Set\n$F:[0,1]\\times S^1\\times M\\to M$ by\n$F(t,\\theta,m) = G(t,\\theta)(m).$ \n $F$ is smooth. Note that\n$F(0,\\theta,m) = G(t,\\theta)(m) = a^D_0(\\theta)(m) = a_0(\\theta,m)$, and\n$F(1,\\theta,m) = a_1(\\theta,m).$ Thus $a_0$ and $a_1$ are smoothly homotopic.\n\\end{proof}\n\n\nThere are similar results for $a^L.$\n\n\\begin{lem}\\label{lem:three} $a_0$ is smoothly homotopic to $a_1$ iff $a^L_0,\na^L_1:M\\to LM$ are smoothly homotopic.\n\\end{lem}\n\n\n\\begin{proof} Let $F$ be the homotopy from $a_0$ to $a_1$. Set\n\t $H:[0,1]\\times M \\to LM $ by $H(t,m)(\\theta) = F(t,\\theta,m).$ Then\n$H(0,m)(\\theta) = F(0,\\theta,m) = a_0(\\theta,m) = a^L_0(m)(\\theta)$,\n $H(1,m)(\\theta) = a^L_1(m)(\\theta),$ so $H$ is a homotopy from $\n a^L_0$ to $ a^L_1.$ \nIt is easy to check that $H$ is smooth.\n\n\nConversely, if $H:[0,1]\\times M \\to LM $ is a smooth homotopy from $a^L_0$ to\n$a^L_1$, set $F(t,\\theta, m) = H(t,m)(\\theta).$ \n\\end{proof}\n\n\\begin{cor}\\label{cor:one} If $a_0$ is smoothly homotopic to $a_1$, then\n$[a^L_0] = [a^L_1]\\in H_n(LM,{\\mathbb Z}).$ \n\\end{cor}\n\n\\begin{proof} By the last Lemma, $a^L_0$ and $a^L_1$ are homotopic. 
Thus \n$[a^L_0] = a^L_{0,*}[M] = a^L_{1,*}[M] = [a^L_1].$\n\\end{proof}\n\nThis yields a technique to use WCS classes to distinguish actions and to investigate \n$\\pi_1({\\rm Diff}(M) ,id).$ From now on, ``homotopic\" means ``smoothly homotopic.\"\n\n\n\\begin{prop} \\label{prop:two} Let dim$(M)=2k-1.$ Let $a_0, a_1:S^1\\times M\\to M$ be actions.\n\n\n(i) If $\\int_{[a^L_0]} CS^W_{2k-1 } \\neq \\int_{[a^L_1]} CS^W_{2k-1 }$, then $a_0$ and $a_1$\n are not homotopic through actions, and $[a^D_0]\\neq [a^D_1]\\in \\pi_1({\\rm Diff} (M),id).$\n\n(ii) If $\\int_{[a_1^L]} CS^W_{2k-1 } \\neq 0,$ then\n $\\pi_1({\\rm Diff} (M), id)$ is infinite.\n\n\\end{prop}\n\n\\begin{proof} \n\n(i) By Stokes' Theorem, $[a^L_0]\\neq [a^L_1]\\in H_n(LM,{\\mathbb C})$.\n By Corollary \\ref{cor:one}, $a_0$ and $a_1$ are not homotopic,\n and hence not homotopic\n through actions. By\nLemma \\ref{lem:one}, $[a^D_0]\\neq [a^D_1]\\in \\pi_1({\\rm Diff} (M),id).$\n\n(ii) Let $a_n$ be the $n^{\\rm th}$ iterate of \n $a_1$, i.e. $a_n(\\theta,m) =\na_1(n\\theta,m).$ \n\nWe claim that \n $\\int_{[a^L_n]}CS^W_{2k-1 } =\nn\\int_{[a^L_1]}CS^W_{2k-1 }$. By (\\ref{5.4}), every term in $CS^W_{2k-1 }$ is of the\nform $\\int_{S^1}\\dot\\gamma(\\theta) f(\\theta)$, where $f$ is a periodic function on the\ncircle. Each loop $\\gamma\\in\na^L_1(M)$ corresponds to the loop $\\gamma(n\\cdot)\\in a^L_n(M).$ Therefore the term\n$\\int_{S^1}\\dot\\gamma(\\theta) f(\\theta)$ is replaced by \n$$\\int_{S^1} \\frac{d}{d\\theta}\\gamma(n\\theta) f(n\\theta)d\\theta \n = n\\int_0^{2\\pi} \\dot\\gamma(\\theta)f(\\theta)d\\theta.$$\nThus $\\int_{[a^L_n]}CS^W_{2k-1 } = n\\int_{[a^L_1]}CS^W_{2k-1 }.$\n By (i), the $[a^L_n]\\in \n\\pi_1({\\rm Diff} (M), id)$\nare all distinct. \n\n\\end{proof}\n\n\n\n\n\\begin{rem}\nIf two actions\nare homotopic through actions,\nthe $S^1$ index of an equivariant operator of the two actions is the same. (Here equivariance\nmeans for each action $a_t, t\\in [0,1].$)\nIn contrast to Proposition \\ref{prop:two}(ii), the $S^1$ index of an equivariant operator\ncannot distinguish actions on odd dimensional manifolds, as the\n$S^1$ index vanishes. This can be seen from the\nlocal version of the\n$S^1$ index theorem \\cite[Thm. 6.16]{BGV}. For the normal bundle to the\nfixed point set is always even dimensional, so the fixed point set consists of\nodd dimensional submanifolds. The integrand in the fixed point submanifold\ncontribution to the $S^1$-index is the constant term in the short time\nasymptotics of the appropriate heat kernel. In odd dimensions, this constant\nterm is zero.\n\n In \\cite{MRT2}, we interpret the $S^1$ index theorem as\nthe integral of an equivariant characteristic class over $[a^L]$.\n\\end{rem}\n\n\nWe now apply these methods to a Sasaki-Einstein metric on $S^2\\times S^3$\nconstructed in \\cite{gdsw}\n to prove the following:\n\n\\begin{thm} (i) There is an $S^1$ action on $S^2\\times S^3$ that is not smoothly homotopic\nto the trivial action.\n\n(ii) $\\pi_1({\\rm Diff} (S^2\\times S^3), id)$ is infinite.\n\\end{thm}\n\nThe content of (i) is that although the $S^1$-orbit\n $\\gamma_x$ through $x\\in S^2\\times S^3$\nis contractible to $x$, the contraction cannot be constructed to be \nsmooth in $x$. 
\n\n\\begin{proof}\nAccording to \\cite{gdsw}, the locally defined metric \n\\begin{eqnarray}\\label{metric} g &=&\\frac{1-cy}{6}(d\\theta^2 + \\sin^2\\theta d\\phi^2) + \n\\frac{1}{w(y)q(y)} dy^2 + \\frac{q(y)}{9}[d\\psi^2 -\\cos\\theta d\\phi^2]\\nonumber\\\\\n&&\\qquad + w(y)\\left[d\\alpha + \\frac{ac-2y+y^2c}{6(a-y^2)}[d\\psi -\\cos\\theta d\\phi]\\right]^2,\n\\end{eqnarray}\nwith \n$$w(y) = \\frac{2(a-y^2)}{1-cy}, q(y) = \\frac{a-3y^2+2cy^3}{a-y^2},$$\nis a family of Sasaki-Einstein metrics on a coordinate ball in the variables\n$(\\phi, \\theta, \\psi, y, \\alpha).$ Here $a$ and $c$ are constants, and we can take $a\\in (0,1], c=1$. \nFor $p,q$ relatively prime, $qT_{WS}$, the diagrams of Fig.~\\ref{fig:feyn} cease to dominate the rate of pair production and there is a further nonperturbative enhancement that cannot easily be understood diagramatically. As can be seen from Eqs.~\\eqref{eq:rate_sphaleron} and \\eqref{eq:rate_sphaleron_high}, the dependence of the rate, $\\Gamma_S$, on the fine-structure constant is non-analytic even after one absorbs one power of $e$ into the electric field.\n\\begin{figure}\n \\includegraphics[width=0.7\\columnwidth]{feyn.pdf}\n \\caption{Feynman diagrams which dominate the rate of thermal Schwinger pair production in the (a) low and (b) intermediate temperature regimes. Double lines denote the electron propagator including the effect of the external electric field to all orders and the wiggly lines denote photons from the thermal bath. In the high temperature regime, the rate is not dominated by a single such Feynman diagram but infinitely many diagrams contribute to the leading approximation to the rate.}\n \\label{fig:feyn}\n\\end{figure}\n\nFor completeness, we note that the addition of a single electromagnetic plane wave to a thermal bath of photons also leads to nonperturbatively enhanced electron-positron pair production. This is true even in the long-wavelength limit, $\\lambda m c\/\\hbar \\to \\infty$, showing the collective, nonperturbative nature of the phenomenon. In this case, the Breit-Wheeler rate is additively enhanced by \\cite{king2012pair},\n\\begin{equation}\n\\Gamma_{\\mr{Plane}} \\approx \\frac{3^{3\/4} e^2 (k_B T)^2 m^2 }{16 \\pi ^{5\/2} \\ensuremath{\\epsilon_0} \\hbar^5} \\left(\\frac{e E \\hbar k_B T}{\n m^3 c^5}\\right)^{1\/4} \n \\mr{e}^{-\\sqrt{\\frac{16c^5 m^3}{3 e E \\hbar k_B T }}}. \\label{eqn:rate_king} \n\\end{equation}\nThis result is valid for $\\sqrt{k_B T E\/(m c^2 E_c)}\\ll 1$. The crucial difference from that of a constant electric field is that the electromagnetic invariant $E^2-c^2B^2$ of a plane wave vanishes. As we will note later, Eq. \\eqref{eqn:rate_king} is orders of magnitude smaller than the thermal Schwinger rate, showing that the absence of the magnetic field is crucial for pair production.\n\n\\section{Observability}\n\nWe would like to understand exactly how high the temperatures and how strong the electric fields need to be to get a measurable rate of pair production. To answer that, we will make a simple comparison with the experiment of Ref.~\\cite{burke1997positron}, which was the first experiment to observe the (multi-photon) Breit-Wheeler process. They observed $106\\pm14$ positrons produced in this way, from a total spacetime interaction volume of order $10^{-21}~\\mr{m}^3 \\mr{s}$ (when integrated over all laser shots). 
Hence we take as our observable reference rate $\\Gamma_{\\mr{Ref}}=10^{23} ~\\mr{m}^{-3}\\mr{s}^{-1}=0.1~\\muup\\mr{m}^{-3}\\muup\\mr{s}^{-1}$, which is approximately Avogadro's number of positrons per metre cubed per second. One can therefore reasonably expect that a normalised rate greater than 1 will be required for the rate of pair production to be measurable. In Fig.~\\ref{fig:allRatesApprox} we show the thermal Schwinger rate in all three regimes, normalised by this reference rate.\n\nThe almost perfectly vertical lines of constant rate in the low temperature regime reflect that, in this regime, the thermal enhancements are small. As such, this regime offers no advantages over pure Schwinger pair production for experimentally observing pair production. On the other hand, in the intermediate and high temperature regimes, the thermal enhancements are very significant. Of these two regimes, observing pair production in the high temperature regime is easier, because the electric field intensities required are orders of magnitude smaller, while the temperatures required are very similar.\n\nFrom Figure \\ref{fig:allRatesApprox} one can see that temperatures around $O(20~\\mr{keV}\/k_B)$ or above are needed in order to produce an observable number of positrons. Perhaps the leading method of producing high temperature thermal photons is with a laser and cavity, or holhraum. The aim of achieving inertial confinement fusion (ICF) has been a powerful incentive in developing these technologies. Thermal distributions of $0.3~\\mr{keV}\/k_B$ have been achieved since 1990, though about $0.4~\\mr{keV}\/k_B$ is likely the upper limit of this approach \\cite{lindl2004physics}. Unfortunately at these temperatures, the thermal enhancement of the Schwinger rate is negligible.\n\nWhen ICF is achieved, the burning thermonuclear plasma leads to significantly higher energy densities. Charged particles in the plasma are expected to reach temperatures from $O(20~\\mr{keV}\/k_B)$ to $O(200~\\mr{keV}\/k_B)$, depending on the composition and size of the burning plasma \\cite{tabak1996role,rose2013electron}. Burning deuterium (D) plasmas are expected to be hotter than burning deuterium-tritium (DT) plasmas, as the peak nuclear reaction rate is at higher energies for D-D nuclear reactions. For a fixed composition, larger plasmas reach higher temperatures.\n\nAs the plasma is not optically thick, the effective temperature of the photons is lower than that of the charged particles. For representative examples, of burning deuterium plasma with radii $r=120~\\muup\\mr{m}$ and $r=150~\\muup\\mr{m}$, one finds that the photon energy density is equal to that of a Planck distribution with two degrees of freedom at $T=22~\\mr{keV}\/k_B$ and $T=26~\\mr{keV}\/k_B$ respectively. The photon distribution can be calculated using the approach of Ref.~\\cite{rose2013electron}. However, the result is further from equilibrium than that of the charged particles. For now, we will assume a thermal distribution of photons, though we will return to this point in Section \\ref{sec:distributions}.\n\n\\section{An experimental scheme} \\label{sec:schematic}\n\n\\begin{figure}\n \\includegraphics[width=1.0\\columnwidth]{experimentSchematic3.pdf}\n \\caption{Schematic of the experimental set-up. 
Two counter propagating high energy beams are focused into an X-ray radiation field produced by a burning fusion.}\n \\label{fig:schematic}\n\\end{figure}\n\nSo, in order to observe the thermal Schwinger process, we would propose combining two lasers with combined intensity $O(10^{23}~\\mr{Wcm}^{-2})$ with a source of thermal photons with temperature $O(20~\\mr{keV}\/k_B)$. A possible schematic for such an experiment is shown in Figure \\ref{fig:schematic}. The region of interest for our purposes is on the left-hand side, outside the ignited thermonuclear plasma. A window is needed to hold up the material expansion long enough to allow the high-intensity lasers to interact with the radiation from the ICF capsule. The wall of the hohlraum would in principle be able to act in this way whilst transmitting the majority of the radiation, though this would require specific design. As long as the distances from the nuclear plasma are small compared with its radius, the geometric reduction of the intensity will not be significant.\n\nThe electric field is provided by a high intensity laser, split into two counter-propagating beams. These are focused so that the magnetic fields cancel in the vicinity of a given point, whereas the electric fields reinforce. Assuming standard parameters for the laser, with wavelength $\\lambda \\sim 0.8~\\muup m$, the field maxima of the two beams are expected to be approximately of size $O(\\lambda^3)$ and of time extent $O(\\lambda\/c)$, with approximately 10-20 field maxima per shot, amounting to a possible pair production region of size $5\\times 10^{-32}~\\mr{m}^3\\mr{s}$. The integrals of the rate over the interaction region can be carried out in the locally constant field approximation (see for example Refs.~\\cite{galtsov1983macroscopic,Gavrilov:2016tuq}), within which the region around the field maxima will dominate the pair production. However, in what follows we simply multiply the rates by the approximate spacetime volume of the field maxima, which is sufficient to get the order of magnitude correct.\n\nTo achieve $\\gtrsim 1$ electron-positron pair produced per shot, requires a rate $5\\times 10^6$ times faster than $\\Gamma_{\\mr{Ref}}$ (see Fig.~\\ref{fig:sphaleron_rate}). Assuming a thermal distribution of photons from the burning nuclear plasma, this could be achieved at\n\\begin{align}\n T_\\star &\\approx 20~\\mr{keV}\/k_B, \\nonumber \\\\\n I_{E\\star} &\\approx 1.3\\times 10^{23}~ \\mr{W cm}^{-2}, \\label{eq:parameters}\n\\end{align}\nwhere $I_E$ refers to the combined intensity of the two beams. This parameter point is shown as a star in Fig.~\\ref{fig:sphaleron_rate}. For comparison, we also consider a second point with a significantly higher production rate, at \n\\begin{align}\n T_\\blacktriangle &\\approx 26~\\mr{keV}\/k_B, \\nonumber \\\\\n I_{E\\blacktriangle} &\\approx 3.7\\times 10^{23}~\\mr{W cm}^{-2}. \\label{eq:parameters_triangle}\n\\end{align}\nFor these two sets of parameters, the numbers of positrons produced per shot via the thermal Schwinger, Breit-Wheeler and pure Schwinger (without thermal enhancement) processes are given in Table \\ref{table:positrons_per_shot}. We also include the nonperturbative enhancement to the number of positrons produced due to only one of the two laser beams, given by Eq. \\eqref{eqn:rate_king}. \n\nNote that the pure Breit-Wheeler process can take place in a larger region than that of the Schwinger pair production, which is not accounted for in Table~\\ref{table:positrons_per_shot}. 
In order to ensure that the thermal Schwinger process dominates, and taking into account its $O(10^6)$ times higher rate, the volume of the interaction region should be significantly less than about $10^7 \\lambda^3\\approx 5\\times 10^{-3}\\mr{mm}^3$. This can be achieved by modifying the diameter of the hohlraum window and by focusing the laser fairly close to the window. Further, the directionality of emitted electrons and positrons can help distinguish between production mechanisms, with the Breit-Wheeler process producing pairs more or less isotropically and the Schwinger process producing pairs along the electric field of the counterpropagating lasers.\n\nIt is encouraging that a relatively small increase in both radiation temperature and laser intensity\nproduces such a significant increase in the production rate. One can also see that the thermal Schwinger process has a huge nonperturbative enhancement. A simple perturbative estimate of the number of positrons produced by the Breit-Wheeler process underestimates the actual number by a factor of $10^6$. Such large enhancements are a generic feature of the thermal Schwinger process in the high temperature regime \\cite{Gould:2018ovk}.\n\n\\begin{table}\n\\begin{center}\n\\begin{tabular}{ c c c c c } \n \\toprule\n & Thermal Schwinger & Breit-Wheeler & Eq. \\eqref{eqn:rate_king} & Schwinger\\\\\n \\colrule\n $\\star$ & 3 & $3\\times 10^{-6}$ & $10^{-167}$ & $10^{-1817}$\\\\\n $\\blacktriangle$ & $1\\times 10^6$ & 1 & $10^{-105}$ & $10^{-1062}$ \\\\\n\\end{tabular}\n\\caption{Numbers of positrons produced per shot via different mechanisms for the two sets of parameters given by Eqs. \\eqref{eq:parameters} and \\eqref{eq:parameters_triangle}. The column labelled Eq. \\eqref{eqn:rate_king} is that for a thermal bath plus a single laser beam (rather than counter-propagating beams).}\n\\label{table:positrons_per_shot}\n\\end{center}\n\\end{table}\n\nOne can also compare the thermal Schwinger rate to that obtained in a thermal bath plus only a single high intensity laser beam (in which case the magnetic field does not cancel). In this case, the enhancement of the rate of pair production due to the high intensity laser is given by Eq.~\\eqref{eqn:rate_king}. For the parameters of either Eqs. \\eqref{eq:parameters} or \\eqref{eq:parameters_triangle}, one finds that the enhancement is negligible and the rate of this process is smaller than the thermal Schwinger rate by a factor of $\\sim 10^{\u2212100}$ or more, as can be seen in Table \\ref{table:positrons_per_shot}.\n\nFor the validity of the locally constant field approximation, it is important that the electric field, as well as the photon distribution from the plasma, are slowly varying on the time and length scales of the pair creation process, described by an instanton. The time scale of the instanton is $t_{\\mr{inst}}\\sim \\hbar\/k_B T$ and the length scale is $\\sqrt{e\/(4\\pi \\ensuremath{\\epsilon_0} E)}$ \\cite{gould2017thermal,Gould:2018ovk}. Using temperature and electric field strengths determined by Eq. \\eqref{eq:parameters}, this amounts to $3\\times 10^{-20}~s\\approx 10^{-11}~\\mr{m}\/c$ and $5\\times 10^{-12}~\\mr{m}$ respectively. The smallest length scale on which the electric field varies is the wavelength of the laser. Assuming a laser with wavelength of $\\lambda \\sim 0.8~\\muup m$, one can safely treat the electric field as constant. 
Further, one would expect the photon distribution from the plasma to vary on a length scale of order the size of the hohlraum window. This will likewise be much larger than the length scale of the instanton, $\\sim 5\\times 10^{-12}~\\mr{m}$, and hence the locally constant field approximation is applicable.\n\n\\begin{figure}\n \\includegraphics[width=1.0\\columnwidth]{sphaleronQEDRateExp2.pdf}\n \\caption[Thermal Schwinger rate at high temperature]{The number of electron-positron pairs produced per shot in the experiment proposed here. The coloured region is the high temperature region, and where the approximations leading to Eq. \\eqref{eq:rate_sphaleron} are valid. The solid black line is the boundary between the intermediate and high temperature regions, defined by $T=T_{WS}$. The dashed black line is defined by $E=0.2e(k_BT)^2\/\\ensuremath{\\epsilon_0} c^2 \\hbar^2$ and the dotted black line by $T=0.2m_ec^2\/k_B$. The star and diamond are the points referred to in Eqs. \\eqref{eq:parameters} and \\eqref{eq:parameters_triangle}.}\n \\label{fig:sphaleron_rate}\n\\end{figure}\n\nIn the region where the electric field and thermal photons collide, electrons and positrons will be produced with an approximately thermal spectrum of velocities and then accelerated in opposite directions antiparallel and parallel respectively to the electric field. Their thermal velocities are isotropic in the lab frame and are expected to be rather large, $\\tfrac{1}{3}\\bar{v^2}\\sim k_B T\/m_e \\sim (0.2 c)^2$. The field then accelerates the particles over a distance $\\lesssim \\lambda$, giving them a highly relativistic velocity, $v\\approx c$, parallel to the electric field and up to energies of order $eE\\lambda \\sim 1~\\mr{GeV}$. Once produced, the electrons and positrons may be deflected in opposite directions with a magnet, after which their momenta can be measured by a calorimeter, as in Ref. \\cite{burke1997positron}. If the combined intensity of the lasers is greater than around $10^{24}~\\mr{Wcm}^{-1}$, a seed electron-positron pair produced by the thermal Schwinger process will induce a cascade of pair production, so amplifying any positive signal~\\cite{Bell:2008zzb,PhysRevLett.105.080402,Nerush:2010fe}.\n\nIn the absence of charged particles, the thermal Schwinger process is the dominant mechanism of electron-positron pair production. However, if charged particles are not adequately shielded, other pair production processes are possible, such as the trident mechanism ($e^-Z\\to e^- e^+e^- Z$) and the Bethe-Heitler process ($\\gamma Z\\to e^+e^- Z$). Another possibility is for non-linear Compton scattering of charged particles in the laser field, producing high energy photons which then take part in the Breit-Wheeler process. Debye screening by charged particles will also inhibit the thermal Schwinger process if the Debye length is not much longer than the length scale of the pair creation process, $\\sqrt{e\/(4\\pi \\ensuremath{\\epsilon_0} E)}$. For the parameters of Eq. \\eqref{eq:parameters}, one requires the density of charged particles to be much less than one per $\\mr{pm}^3$. In the regime we have considered here, the purely thermal and the purely Schwinger pair production rates are orders of magnitude lower than the combination. 
Thus by performing null shots, with either only the burning plasma or only the high intensity laser, one can measure the presence of any backgrounds.\n\n\\section{Photon distributions}\\label{sec:distributions}\n\nLet us return to consider the distribution of photons in the burning plasma. This must be close to equilibrium for our approach to be valid. To investigate this we have solved the Boltzmann equation for the distribution of photons for a range of different plasma sizes and compositions. We have followed the method of Ref.~\\cite{rose2013electron}, including the effect of Compton scattering. For our representative example, of a burning deuterium plasma of radius $r=150~\\muup\\mr{m}$, the photon intensity at the surface of this plasma is shown as the full black line in Fig.~\\ref{fig:intensity}.\n\n\\begin{figure}\n \\includegraphics[width=1.0\\columnwidth]{planckLogIntensities2.pdf}\n \\caption[Photon Intensity]{Photon intensity in a D-burning target of radius $150~\\muup\\mr{m}$, along with various approximations to it. Note that a purely thermal distribution at $T=148~\\mr{keV}\/k_B$ would lie at much higher intensities.}\n \\label{fig:intensity}\n\\end{figure}\n\nEquating the photon energy density to that of a thermal distribution, one finds that the effective temperature of the distribution is $26~\\mr{keV}\/k_B$. Doing the same for the photon number density, one instead finds a somewhat lower effective temperature of $16~\\mr{keV}\/k_B$, showing that the distribution is shifted to higher energies with respect to a thermal distribution. Plotting the photon intensity of a Planck distribution at $T=26~\\mr{keV}\/k_B$, the blue dotted line in Fig.~\\ref{fig:intensity}, one can see the shift to higher energies.\n\nThe high energy tail of the distribution, above about $700~\\mr{keV}$, is an exponential fall off and hence fits well a Boltzmann tail with an effective temperature of $T=148~\\mr{keV}\/k_B$, though scaled down by a normalisation, or equivalently a negative photon chemical potential\\footnote{The photon chemical potential must be zero in equilibrium, but not necessarily out of equilibrium. In this context its presence is natural as photon number conserving processes dominate.}, $\\mu=-1097~\\mr{keV}$, plotted as the dot-dashed green line in Fig.~\\ref{fig:intensity}. At the lowest energies, the distribution rises above this, and can be better described by a purely thermal distribution at a much lower temperature, $T=7.9~\\mr{keV}\/k_B$, plotted as the dashed red line in Fig.~\\ref{fig:intensity}. At intermediate energies, the distribution is not well described by a Bose-Einstein distribution. 
Nevertheless, the overall shape of the distribution is qualitatively similar to a thermal distribution, being smooth and highly occupied with a power-like rise at low energies and an exponential decrease at high energies, though we have used four different effective temperatures to describe different aspects of it, ranging from $7.9~\\mr{keV}\/k_B$ to $148~\\mr{keV}\/k_B$.\n\nIn two counterpropagating laser beams with intensity given by Eq.~\\eqref{eq:parameters} or \\eqref{eq:parameters_triangle}, one finds that the intermediate temperature regime of thermal Schwinger pair production would be reached at temperatures above\n\\begin{align}\n T_{CW,\\star} &=0.20~\\mr{keV}\/k_B, \\nonumber \\\\\n T_{CW,\\blacktriangle} &= 0.32~\\mr{keV}\/k_B, \\label{eq:tcw} \n\\end{align}\nand the high temperature regime would be reached at temperatures above\n\\begin{align}\n T_{WS,\\star} &=2.5~\\mr{keV}\/k_B, \\nonumber \\\\ \n T_{WS,\\blacktriangle} &= 3.7~\\mr{keV}\/k_B. \\label{eq:tws} \n\\end{align}\nAll four effective temperatures we have used to describe the distribution of photons in the burning plasma are well above these temperatures. We thus expect the high temperature regime to provide a better description of pair production in this setup than either the low or intermediate temperature regimes, which would imply that the diagrams of Fig.~\\ref{fig:feyn} do not dominate pair production and there is a nonperturbative enhancement over both pure Schwinger and pure thermal pair production.\n\nPhysically, it is clear that the process of pair production should not depend on the photon gas being precisely in equilibrium: if we use the picture of tunnelling from an excited state, one would expect that it is the energy and density of the photon distribution, rather than the nearness to equilibrium, that matters.\n\nOn the other hand, the condition of equilibrium is necessary for the calculation because it leads to important simplifications in the calculation of the production rate. The nonperturbative calculation of Eq.~\\eqref{eq:rate_sphaleron} \\cite{gould2017thermal,Gould:2018ovk} relied heavily on the Matsubara formalism~\\cite{Bloch1932,Matsubara:1955ws}, which is only valid in equilibrium. In the high temperature regime a resummation of all-orders of the perturbative loop expansion was required. Generalising the result to any out-of-equilibrium distribution is beyond the scope of this paper.\n\nWe note however that the diagrams of Fig.~\\ref{fig:feyn} can be calculated in an arbitrary photon distribution, following the approach of Ref.~\\cite{Torgrimsson:2019sjn}, though in the high temperature regime these diagrams are not dominant. Considering the calculation in this distribution, it can be seen that these diagrams reproduce the perturbative Breit-Wheeler rate up to very small corrections, essentially because the photon gas is highly occupied at energies much greater than $k_B T_{CW}$ (see Eqs.~\\eqref{eq:lowT} and \\eqref{eq:tcw}). Further, perturbative corrections in this distribution due to the high intensity laser require one photon from the high energy tail of the distribution and hence are suppressed by $\\exp(-\\lambda m^2 c^3\/(2\\pi \\hbar k_B T) + \\mu\/k_B T)\\sim 10^{-10^{5}}$, where $T$ and $\\mu$ here refer to the green dot-dashed line in Fig.~\\ref{fig:intensity}. 
Thus any enhancement due to the high intensity laser must be a nonperturbative phenomenon which goes beyond the diagrams of Fig.~\\ref{fig:feyn}.\n\nBecause the full nonperturbative calculation of the rate of pair production is beyond the scope of this paper, the possibility of nonperturbative enhancements in our proposed setup is conjectural. However, as all the effective temperatures we have used to describe the photon distribution are larger than $T_{WS}$, Eq.~\\eqref{eq:tws}, we expect the high temperature regime to best describe the photon distribution in question. We thus expect a nonperturbative enhancement over the perturbative prediction, as is the case in equilibrium where the enhancement to the positron yield was $O(10^6)$. The experiment we have proposed here would be able to test this plausible conjecture, by performing null shots without the counterpropagating laser beams, for which only the perturbative process is possible. This would be able to determine which features of a photon distribution are important for the nonperturbative enhancements to pair production which feature in the thermal Schwinger effect, and how generic such enhancements are.\n\n\\section*{Acknowledgements}\nO.G. would like to thank Holger Gies and Greger Torgrimsson for discussions related to this work. O.G. was supported from the Research Funds of the University of Helsinki. S.M. was supported by Engineering and Physical Sciences Research Council grant No. EP\/M018555\/1 and by Horizon 2020 under European Research Council Grant Agreement No. 682399. A.R. was supported by the U.K. Science and Technology Facilities Council grant ST\/P000762\/1. \n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Background}\n\nIn this paper, let $\\Sigma$ be a fixed, compact, smooth, oriented surface of genus \n$g > 1$. We also denote $\\sigma |dz|^2$ as a {hyperbolic metric} on $\\Sigma$, for conformal \ncoordinates $z$. On $(\\Sigma, \\sigma |dz|^2)$, we denote the Laplacian as\n\\begin{center}\n$\\Delta = {\\frac{4}{\\sigma}}{\\frac{\\partial^2}{\\partial z \\partial \\bar{z}}}$,\n\\end{center}\nwith nonpositive eigenvalues.\n\n\n{Teichm\\\"{u}ller space} is a complex manifold with the {mapping class group} acting by biholomorphisms, of {complex dimension} \n$3g-3$, the number of independent closed curves in any pair-of-pants decomposition \nof the surface. \n\n\nThe {Weil-Petersson} metric is invariant under the action of the {mapping class group}, hence it decends to a \nmetric on the {moduli space}. It is shown that the {Weil-Petersson} metric is K\\\"{a}hlerian (\\cite {Ah1}), \nwith negative {sectional curvature} (\\cite {Tr}, \\cite {Wp3}). The {Weil-Petersson} Riemannian curvature \ntensor is given as the Tromba-Wolpert formula\n(\\cite {Tr}, \\cite {Wp3}):\n\\begin{center} \n$R_{\\alpha\\bar{\\beta}\\gamma\\bar{\\delta}} = \n\\int_{\\Sigma}D(\\mu_{\\alpha}\\bar\\mu_{\\beta})\\mu_{\\gamma}\\bar\\mu_{\\delta}dA + \n\\int_{\\Sigma}D(\\mu_{\\alpha}\\bar\\mu_{\\delta})\\mu_{\\gamma}\\bar\\mu_{\\beta}dA$,\n\\end{center}\nwhere $\\Delta$ is the Laplacian for the {hyperbolic metric} $\\sigma$ on the surface, and \n$D = -2(\\Delta -2)^{-1}$ is a self-adjoint, compact operator.\nThose $\\mu$'s in the formula are tangent vectors, i.e., harmonic {Beltrami differential}s. 
There is a natural \npairing between \n$QD(\\Sigma) =\\{\\phi(z) dz^2\\}$ and $HB(\\Sigma) =\\{\\mu(z) {\\frac{d\\bar{z}}{dz}}\\}$:\n\\begin{center}\n$<\\phi dz^2,\\mu {\\frac{d\\bar{z}}{dz}}> = Re \\int_{\\Sigma}\\phi \\mu dzd\\bar{z}$.\n\\end{center}\n\n\nA remarkable property of the {Weil-Petersson} metric is that it is incomplete (\\cite {Ma}, \n\\cite {Wp1}). This is caused by pinching off at least one short closed geodesic on the \nsurface. The {Weil-Petersson} completion modulo the {mapping class group} is topologically the Deligne-Mumford \ncompactification of the {moduli space}. The compactification divisor, thus \nconsists of a union of lower dimensional {Teichm\\\"{u}ller space}s, each such space consists of noded \n{Riemann surface}s, obtained by pinching nontrivial short closed geodesics on the surface \n(\\cite {B}, \\cite {Ma}). Therefore the compactification divisor can be described \nvia the systole $l_{0}(\\sigma)$ as the set $\\{l_{0}(\\sigma) = 0\\}$.\n\n\nSince we will analyze {harmonic map}s between compact hyperbolic surfaces, we recall some \nfundamental facts here. For a Lipschitz map $w:(\\Sigma, \\sigma |dz|^2) \\rightarrow \n(\\Sigma, \\rho |dw|^2)$, where $\\sigma |dz|^2$ and $\\rho |dw|^2$ are {hyperbolic metric}s on \n$\\Sigma$, and $z$ and $w$ are conformal coordinates on $\\Sigma$, \none follows Sampson (\\cite {S}) to define \n\\begin{center}\n${\\mathcal{H}}(z) = {\\frac{\\rho(w(z))}{\\sigma(z)}}|w_z|^2, {\\mathcal{L}}(z) = \n{\\frac{\\rho(w(z))}{\\sigma(z)}}|w_{\\bar{z}}|^2$.\n\\end{center}\nWe call ${\\mathcal{H}}(z)$ the {\\it holomorphic energy density}, and ${\\mathcal{L}}(z)$ \nthe {\\it anti-holomorphic energy density}. Then the energy density function of $w$ is \nsimply $e(w)= {\\mathcal{H}} + {\\mathcal{L}}$, and the total energy is then given by\n\\begin{center}\n$E(w,\\sigma,\\rho) = \\int_{\\Sigma}e\\sigma|dz|^2$.\n\\end{center}\n\n\nWe also note that the {Jacobian determinant} relative to the $\\sigma$ metric is therefore given by \n$J(z) = {\\mathcal{H}}(z) - {\\mathcal{L}}(z)$.\n\n\nThe map $w$ is called {\\it harmonic} if it is a critical point of this energy \nfunctional, i.e., it satisfies Euler-Lagrange equation:\n\\begin{center}\n$w_{z\\bar{z}}+ {\\frac{\\rho_w}{\\rho}}w_z w_{z\\bar{z}} = 0$.\n\\end{center}\n\n\nThe $(2,0)$ part of the pullback $w^{*}\\rho$ is the so-called {\\it {Hopf differential}}:\n\\begin{center}\n$\\phi(z)dz^2 = (w^{*}\\rho)^{(2,0)} = \\rho(w(z)) w_z {\\bar{w}}_zdz^2$.\n\\end{center}\nIt is routine to check that $w$ is harmonic if and only if $\\phi dz^2 \\in QD(\\Sigma)$, \nand $w$ is conformal if and only if $\\phi=0$.\n\n\nIn our situation, there is a unique \n{harmonic map} $w:(\\Sigma, \\sigma) \\rightarrow (\\Sigma, \\rho)$ in the homotopy class of the \nidentity, moreover, this map $w$ is a diffeoemorphism with positive Jacobian $J$, and \n${\\mathcal{H}}>0$ (\\cite {ES}, \\cite {Hr}, \\cite {S}, \\cite {SY}). \n\n\nA key observation to link the harmonic maps to Teichm\\\"{u}ller theory is that one \nobtains a map from {Teichm\\\"{u}ller space} to $QD(\\Sigma)$, for some fixed {hyperbolic metric} $\\sigma$. More \nspecifically, this map sends any {hyperbolic metric} on $\\Sigma$ to a holomorphic quadratic \ndifferential associated to the unique {harmonic map} in the homotopy \nclass of the identity. 
This map is a diffeomorphism (\\cite {S}, \\cite{Wf1}).\n\n\\section{Proof of Main Theorems} \n\\noindent \nLet $\\mu = \\mu (z){\\frac{d\\bar{z}}{dz}}$ be a unit {Weil-Petersson} normed harmonic {Beltrami differential}, thus \n$\\int_{\\Sigma}|\\mu|^2 dA = 1$. The {Weil-Petersson} {holomorphic sectional curvature} in the direction of $\\mu$ is given by\n\\begin{center}\n$K_h = -2\\int_{\\Sigma}D(|\\mu|^2)|\\mu|^2 dA$.\n\\end{center} \nIts upper bound, in terms of the genus, is known, proved by Wolpert:\n\\begin{lem}(\\cite {Wp4})\n$K_h < -{\\frac{1}{2\\pi(g-1)}}$.\n\\end{lem}\n\n\n\\begin{rem}\nThis upper bound holds without restriction on the systole. Since all \n{sectional curvature}s are negative, so the {Ricci curvature} and scalar curvature are bounded from \nabove by $-{\\frac{1}{2\\pi(g-1)}}$, and $-{\\frac{3(3g-2)}{4\\pi}}$, respectively (\\cite {Wp4}). \n\\end{rem}\n\n\nWe now show the following pointwise estimate on $|\\mu(z)|$, where the tangent \nvector $\\mu(z){\\frac{d\\bar{z}}{dz}}$ is normalized to have unit {Weil-Petersson} \nnorm, and we shall apply this estimate to prove our main theorems.\n\\begin{theorem}\nFor $\\mu(z){\\frac{d\\bar{z}}{dz}} \\in HB(\\Sigma)$ with $||\\mu||_{WP} =1$, there \nexists a positive constant $h_0$, independent of $g$, such that \n$|\\mu (z)| \\le h_0$, for all $z \\in \\Sigma$, where the surface $\\Sigma$ is in \nthe thick part of the {moduli space}.\n\\end{theorem}\n\\begin{proof}\nRecall that $\\mu(z){\\frac{d\\bar{z}}{dz}}$ is a harmonic {Beltrami differential}, hence is a symmetric \ntensor given as $\\bar{\\phi}(ds^2)^{-1}$ for $\\phi$ a {holomorphic quadratic differential} with at most simpole poles \nat the cusps and $ds^2$ the {hyperbolic metric} tensor (\\cite{Wp2}). Since the surface $\\Sigma$ lies in the \nthick part of {moduli space}, hence no cusps and this {holomorphic quadratic differential} $\\phi = \\phi(z)dz^{2}$ has no poles.\n\n\nNote that $\\phi \\in QD(\\Sigma)$, as stated in the previous section, by a theorem of \nWolf (\\cite{Wf1}), there exists a {hyperbolic metric} $\\rho$ on surface $\\Sigma$, and a unique \n{harmonic map} $w: (\\Sigma,\\sigma) \\rightarrow (\\Sigma,\\rho)$, such that $\\phi$ is the \nHopf differential associated to this {harmonic map} $w$, i.e., \n$\\phi(z)dz^{2} = \\rho(w(z)) w_z {\\bar{w}}_zdz^{2}$.\n\n\nMuch of our study will be analyzing this {harmonic map} $w$. Note that even though \n$inj_{\\sigma}(\\Sigma) > r_{0} >0$, the metric $\\rho$ might not lie in the thick part \nof the {moduli space}. We recall that the holomorphic and anti-holomorphic energy density \nfunctions of $w$ are defined as \n${\\mathcal{H}}(z) = {\\frac{\\rho(w(z))}{\\sigma(z)}}|w_z|^2$, and ${\\mathcal{L}}(z) = \n{\\frac{\\rho(w(z))}{\\sigma(z)}}|w_{\\bar{z}}|^2$, respectively. \n\n\nThe energy density function of $w$ is $e(w)= {\\mathcal{H}} + {\\mathcal{L}}$, while the {Jacobian determinant} \nbetween {hyperbolic metric}s $\\sigma$ and $\\rho$ is therefore \n$J(z) = {\\mathcal{H}}(z) - {\\mathcal{L}}(z)$. Since the map $w$ is a diffeomorphism \nwith positive {Jacobian determinant}, we have ${\\mathcal{H}}(z) > {\\mathcal{L}}(z) \\ge 0$.\n\n\nWe also find that \n${\\mathcal{H}}{\\mathcal{L}} = {\\frac{|\\phi|^{2}}{\\sigma^{2}}} = |\\mu|^{2}$, so \nthe zeros of ${\\mathcal{L}}$ are the zeros of $|\\mu|$, or equivalently, the zeros of $\\phi$.\n\n\nLet $\\nu$ be the {Beltrami differential} of the map $w$, defined by \n$\\nu = {\\frac{w_{\\bar z}d\\bar z}{w_{z}dz}}$. 
It measures the failure of $w$ to be \nconformal, and since $J > 0$, we have $|\\nu|<1$. \n\n\nOne easily finds that\n $|\\nu|^{2} = {\\frac{\\mathcal{L}} {\\mathcal{H}}}$, therefore,\n \\begin{center}\n $|\\mu|= {\\sqrt{{\\mathcal{H}}{\\mathcal{L}}}} = {\\mathcal{H}}|\\nu| < {\\mathcal{H}}$.\n \\end{center}\nThus it suffices to estimate ${\\mathcal{H}}(z)$ to bound $|\\mu|$ pointwisely.\n\n\nLet $z_{0} \\in \\Sigma$ such that ${\\mathcal{H}}(z_{0}) = max_{z \\in \\Sigma}{\\mathcal{H}}(z)$. \nWe follow a calculation of Schoen-Yau to define a local one-form $\\theta = \\sqrt{\\sigma(z)}dz$, \nand find (\\cite {SY}):\n \\begin{center}\n $|w_{\\theta}|^{2}= {\\frac{\\rho}{\\sigma}}|w_{z}|^{2} = {\\mathcal{H}}(z)$,\n \\end{center}\n and \n \\begin{eqnarray}\n \\Delta |w_{\\theta}|^{2}= 4|w_{\\theta \\theta}|^{2}+2J|w_{\\theta}|^{2}-2|w_{\\theta}|^{2}.\n \\end{eqnarray}\n We rewrite this as \n \\begin{eqnarray}\n \\Delta{\\mathcal{H}}= 4|w_{\\theta \\theta}|^{2}+2J{\\mathcal{H}}-2{\\mathcal{H}} > -2{\\mathcal{H}}.\n \\end{eqnarray}\n \n \n Therefore ${\\mathcal{H}}$ is a subsolution to an elliptic equation $(\\Delta+2)f=0$.\n \n \n Recalling that $inj_{\\sigma}(\\Sigma)>r_{0}>0$, we embed a hyperbolic ball $B_{z_{0}}({\\frac{r_{0}}{2}})$ \n into $\\Sigma$, centered at $z_{0}$ with radius ${\\frac{r_{0}}{2}}$. Morrey's theorem (\\cite{Mo}, theorem 5.3.1) \n on subsolutions of elliptic differential equations guarantees that there is a constant $C(r_{0})$, such that, \n \\begin{center}\n ${\\mathcal{H}}(z_{0}) = sup_{B_{z_{0}}({\\frac{r_{0}}{4}})}{\\mathcal{H}}(z) \n \\le C(r_{0})\\int_{B_{z_{0}}({\\frac{r_{0}}{2}})}{\\mathcal{H}}(z)\\sigma dzd\\bar{z}$.\n \\end{center}\n \n \n Another consequence of formula $(3)$ is the Bochner identity (see \\cite {SY}), as now $log{\\mathcal{H}}$ \n is well defined:\n \\begin{eqnarray}\n \\Delta log{\\mathcal{H}}= 2{\\mathcal{H}}-2{\\mathcal{L}} -2.\n \\end{eqnarray} \n The minimal principle implies ${\\mathcal{H}}(z) \\ge 1$ for all $z \\in \\Sigma$, and we find\n \\begin{center}\n $\\int_{\\Sigma}{\\mathcal{L}}dA \\le \\int_{\\Sigma}{\\mathcal{H}}{\\mathcal{L}}dA = \n \\int_{\\Sigma}|\\mu|^{2}dA = ||\\mu||_{WP}= 1$. \n \\end{center}\n \n \nIt is not hard to see that we can actually bound the total energy of this {harmonic map} $w$ from above, \nin terms of the genus. More precisely, we recall that $w$ is a diffeomorphism, so\n\\begin{center}\n$ \\int_{\\Sigma}({\\mathcal{H}}-{\\mathcal{L}})dA = \\int_{\\Sigma}JdA = Area(w(\\Sigma)) = 4\\pi (g-1)$,\n\\end{center}\ntherefore the total energy satisfies\n\\begin{eqnarray*}\nE(w) & = & \\int_{\\Sigma}e(w)dA = \\int_{\\Sigma}({\\mathcal{H}}+{\\mathcal{L}})dA \\\\\n& = & \\int_{\\Sigma}({\\mathcal{H}}-{\\mathcal{L}})dA + 2\\int_{\\Sigma}{\\mathcal{L}}dA \\\\\n&\\le& Area(\\Sigma,\\sigma) + 2 \\\\\n&= & 4\\pi(g-1) +2.\n\\end{eqnarray*}\nTherefore\n \\begin{eqnarray*}\n\\int_{B_{z_{0}}({\\frac{r_{0}}{2}})}{\\mathcal{H}}(z)\\sigma dzd\\bar{z} \n& < & \\int_{B_{z_{0}}({\\frac{r_{0}}{2}})}({\\mathcal{H}}(z) + {\\mathcal{L}}(z))\\sigma dzd\\bar{z} \\\\\n& = & E(w) - \\int_{\\Sigma \\backslash B_{z_{0}}({\\frac{r_{0}}{2}})}e(z) \\sigma dzd\\bar{z} \\\\\n& \\le & 4\\pi(g-1) +2 - (4\\pi(g-1)-A_{1}({B_{z_{0}}}({\\frac{r_{0}}{2}}))) \\\\\n&=& A_{1}({B_{z_{0}}}({\\frac{r_{0}}{2}})) + 2.\n\\end{eqnarray*}\nwhere $A_{1}({B_{z_{0}}}({\\frac{r_{0}}{2}}))$ is the hyperbolic area of the ball \n$B_{z_{0}}({\\frac{r_{0}}{2}})$. 
\n\n\nWe set $h_0 = C(r_0)(A_{1}({B_{z_{0}}}({\\frac{r_{0}}{2}})) + 2)$, then $h_0$ \nis independent of the genus $g$, as it is obtained from a local estimate in a \ngeodesic ball. Therefore, \n \\begin{center}\n $|\\mu(z)| < {\\mathcal{H}}(z) \\le {\\mathcal{H}}(z_0) < h_0$,\n \\end{center}\nfor all $z \\in \\Sigma$.\n\\end{proof}\n\\begin{rem}\nThe assumption of the surface lying in the thick part of the {moduli space} is essential to this \nargument, since we used an estimate in an embedded geodesic ball.\n\\end{rem}\n\\noindent\nAs an application of this estimate, we can prove theorem 1.1 easily. \n\n\\begin{proof}(of theorem 1.1) Recall that the operator $D = -2(\\Delta -2)^{-1}$ is self-adjoint and satisfies $D(1)=1$; since $||\\mu||_{WP} = 1$, it follows that $\\int_{\\Sigma}D(|\\mu|^2)dA = \\int_{\\Sigma}|\\mu|^2 D(1)dA = 1$. Hence\n\\begin{center}\n$|K_h| = 2\\int_{\\Sigma}D(|\\mu|^2)|\\mu|^2 dA < 2h_0^2 \\int_{\\Sigma}D(|\\mu|^2)dA = 2h_0^2$.\n\\end{center}\n\\end{proof}\n\n\nWe now shift our attention to general {Weil-Petersson} {sectional curvature}s and prove theorem 1.2.\n\\begin{proof}(of theorem 1.2)\nWe recall from the previous section that the Riemannian curvature tensor of the {Weil-Petersson} metric \nis given by (\\cite {Tr}, \\cite{Wp3}):\n\\begin{center} \n$R_{\\alpha\\bar{\\beta}\\gamma\\bar{\\delta}} = \n\\int_{\\Sigma}D(\\mu_{\\alpha}\\bar\\mu_{\\beta})\\mu_{\\gamma}\\bar\\mu_{\\delta}dA + \n\\int_{\\Sigma}D(\\mu_{\\alpha}\\bar\\mu_{\\delta})\\mu_{\\gamma}\\bar\\mu_{\\beta}dA$,\n\\end{center}\nwhere the $\\mu$'s in the formula are harmonic {Beltrami differential}s, and $D$ again is the operator \n$-2(\\Delta -2)^{-1}$.\n\n\nTo calculate the {sectional curvature}, we choose two arbitrary orthonormal harmonic {Beltrami differential}s \n$\\mu_0$ and $\\mu_1$. In other words, we have \n\\begin{center}\n$\\int_{\\Sigma}|\\mu_0|^2 dA = \\int_{\\Sigma}|\\mu_1|^2 dA = 1$ and \n$\\int_{\\Sigma}\\mu_0 {\\bar{\\mu}}_1 dA = 0$.\n\\end{center}\nThen the Gaussian curvature of the plane spanned by $\\mu_0$ and $\\mu_1$ is \n(\\cite {Wp3})\n\\begin{eqnarray*}\nK(\\mu_0,\\mu_1) &=& {\\frac{1}{4}}(R_{0\\bar{1}0\\bar{1}} - R_{0\\bar{1}1\\bar{0}} - \nR_{1\\bar{0}0\\bar{1}}+R_{1\\bar{0}1\\bar{0}}) \\nonumber \\\\\n&=& Re(\\int_{\\Sigma}D(\\mu_{0}\\bar\\mu_{1})\\mu_{0}\\bar\\mu_{1}dA) - \n{\\frac{1}{2}}Re(\\int_{\\Sigma}D(\\mu_{0}\\bar\\mu_{1})\\mu_{1}\\bar\\mu_{0}dA) \\nonumber \\\\\n&-& {\\frac{1}{2}}\\int_{\\Sigma}D(|\\mu_{1}|^2)|\\mu_{0}|^2dA. \n\\end{eqnarray*}\nFrom (\\cite {Wp3}, lemma 4.3) and the H\\\"{o}lder inequality, we have\n\\begin{eqnarray*}\n|Re(\\int_{\\Sigma}D(\\mu_{0}\\bar\\mu_{1})\\mu_{0}\\bar\\mu_{1}dA)| &\\le& \n\\int_{\\Sigma}|D(\\mu_{0}\\bar\\mu_{1})||\\mu_{0}\\bar\\mu_{1}|dA \\nonumber \\\\\n&\\le& \\int_{\\Sigma}\\sqrt{D(|\\mu_{0}|^{2})}\\sqrt{D(|\\mu_{1}|^{2})}|\\mu_{0}\\bar\\mu_{1}|dA \\nonumber \\\\\n&\\le& \\int_{\\Sigma}D(|\\mu_{0}|^2)|\\mu_{1}|^2dA = \\int_{\\Sigma}D(|\\mu_{1}|^2)|\\mu_{0}|^2dA, \n\\end{eqnarray*}\nand similarly\n\\begin{center}\n$|\\int_{\\Sigma}D(\\mu_{0}\\bar\\mu_{1})\\mu_{1}\\bar\\mu_{0}dA| \\le \\int_{\\Sigma}D(|\\mu_{1}|^2)|\\mu_{0}|^2dA$. \n\\end{center}\nTherefore \n\\begin{center}\n$|K(\\mu_0,\\mu_1)| \\le 2\\int_{\\Sigma}D(|\\mu_{1}|^2)|\\mu_{0}|^2dA$. \n\\end{center}\nWe apply theorem 3.3 to find that there is an $h_0 > 0$, independent of $g$, such that $|\\mu_{0}| < h_0$. \nSo $|K| < 2h_0^2 \\int_{\\Sigma}D(|\\mu_{1}|^2)dA = 2h_0^2$.\n\\end{proof}\n\n\nIt is now straightforward to see that theorem 1.3 holds since all {sectional curvature}s are negative. 
\n \n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} diff --git a/data_all_eng_slimpj/shuffled/split2/finalzzccyc b/data_all_eng_slimpj/shuffled/split2/finalzzccyc new file mode 100644 index 0000000000000000000000000000000000000000..3037ecfa033b054326ae08f1db13cb7250f4f30a --- /dev/null +++ b/data_all_eng_slimpj/shuffled/split2/finalzzccyc @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}\n\nIn this note we consider non-spurious solutions by using a critical point\ntheory to the following Dirichlet problem \n\\begin{equation}\n\\begin{array}{l}\n\\ddot{x}\\left( t\\right) =f\\left( t,x\\left( t\\right) \\right) \\\\ \nx\\left( 0\\right) =x\\left( 1\\right) =\n\\end{array}\n\\label{par}\n\\end{equation}\nwhere $f:\\left[ 0,1\\right] \\times \\mathbb{R} \\rightarrow \\mathbb{R}$ is a\njointly continuous function. Further we will make precise what is meant by\nthe solutions to (\\ref{par}).\n\nThe existence of non-spurious solutions is very important for the\napplications since in such a case one can approximate solutions to (\\ref{par\n) with a sequence of solutions to a suitably chosen family of discrete\nproblems and one is sure that this approximation converges to the solution\nof the original problem, see \\cite{kelly}. There are many ways in which a\nboundary value problem can be discretized and the existence and multiplicity\ntheory on difference equations is very vast, see for example \\cite{agrawal}, \n\\cite{candito1}, \\cite{guojde}, \\cite{MRT}. However, as underlined by\nAgarwal, \\cite{agarvalpaper}, there are no clear relations between\ncontinuous problems and their discretization which means that both problems\ncan be solvable, but the approximation approaches nothing but the solution\nto the continuous problem or else, the discrete problem is solvable and the\ncontinuous one is not or the other way round. Let us recall his examples:\n\n\\begin{example}\nThe continuous problem $\\ddot{x}(t) + \\frac{\\pi^2}{n^2}x(t)=0$, $x(0)=x(n)=0$\nhas an infinite number of solutions $x(t)= c \\sin \\frac{\\pi t}{n}$ ($c$ is\narbitrary) whereas its discrete analogue $\\Delta^2x(k)+\\frac{\\pi^2}{n^2\nx(k)=0$, $x(0)=x(n)=0$ has only one solution $x(k)\\equiv 0$. The problem \n\\ddot{x}(t)+\\frac{\\pi^2}{4n^2}x(t)=0$, $x(0)=0$, $x(n)=1$ has only one\nsolution $x(t)=\\sin\\frac{\\pi t}{2n}$, and its discrete analogue $\\Delta^2\nx(k)+ \\frac{\\pi^2}{4n^2}x(k)=0$, $x(0)=0$, $x(n)=1$ also has one solution.\nThe continuous problem $\\ddot{x}(t)+4\\sin^2\\frac{\\pi}{2n}x(t)=0$, $x(0)=0$, \nx(n)=\\varepsilon \\neq 0$ has only one solution $x(t)= \\varepsilon \\frac{\\sin\n(2 \\sin\\frac{\\pi}{2n})t]}{\\sin[(2 \\sin\\frac{\\pi}{2n})n]}$, whereas its\ndiscrete analogue $\\Delta^2x(k)+4\\sin^2\\frac{\\pi}{2n}x(k)=0$, $x(0)=0$, \nx(n)=\\varepsilon \\neq 0$ has no solution.\n\\end{example}\n\nThus, the nature of the solution changes when a continuous boundary value\nproblem is being discretized. 
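The contrast in the first of these examples can also be verified numerically. The sketch below (in Python; it is added here purely as an illustration and is not taken from the cited sources) writes the discrete analogue $\\Delta^2x(k)+\\frac{\\pi^2}{n^2}x(k)=0$, $x(0)=x(n)=0$ as a linear system for the interior values $x(1),\\ldots,x(n-1)$ and checks that the system matrix is nonsingular, so only the trivial solution exists, whereas the continuous problem has the nontrivial solutions $c\\sin \\frac{\\pi t}{n}$.\n\\begin{verbatim}\nimport numpy as np\n\ndef agarwal_matrix(n):\n    # k-th equation: x(k+2) - 2 x(k+1) + (1 + pi^2/n^2) x(k) = 0,\n    # k = 0,...,n-2, for the unknowns x(1),...,x(n-1), x(0) = x(n) = 0\n    A = np.zeros((n - 1, n - 1))\n    for k in range(n - 1):\n        A[k, k] = -2.0                                  # coefficient of x(k+1)\n        if k >= 1:\n            A[k, k - 1] = 1.0 + np.pi ** 2 / n ** 2     # coefficient of x(k)\n        if k + 1 <= n - 2:\n            A[k, k + 1] = 1.0                           # coefficient of x(k+2)\n    return A\n\nfor n in [5, 10, 50, 100]:\n    s = np.linalg.svd(agarwal_matrix(n), compute_uv=False)\n    # the smallest singular value is strictly positive, so the only\n    # solution of the discrete problem is x = 0\n    print(n, s.min())\n\\end{verbatim}\n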
Moreover, two-point boundary value problems involving derivatives lead to multipoint problems in the discrete case.\n\nThe above remarks and examples show that it is still important to consider both continuous and discrete problems simultaneously and to investigate the relation between their solutions, which is the key issue especially when the existence part follows by standard techniques.\n\nThere has been some research in this direction, addressing mainly problems whose solutions were obtained by fixed point theorems and the method of lower and upper solutions, \\cite{rech1}, \\cite{rachunkowa2}, \\cite{thomsontisdell}. In this submission we aim at using a critical point theory method, namely the direct method of the calculus of variations (see for example \\cite{Ma} for a nice introduction to this topic), in order to show that in this setting one can also obtain suitable convergence results. The advance over the works mentioned is that we can allow for better growth conditions imposed on $f$, at the expense of not putting the derivative of $x$ into $f$. As expected, we will have to obtain the uniqueness of solutions for the associated discrete problem, which is not always easy to get, see \\cite{uni2}.\n\nIn \\cite{kelly}, following \\cite{gaines}, it is suggested which family of difference equations for $n\\in \\mathbb{N}$ is to be chosen when approximating problem (\\ref{par}) (stated there for an interval with endpoints $a$, $b$ such that $a<b$). Suppose that the solutions $x^{n}=\\left( x^{n}(k)\\right) $ of these discrete problems exist together with constants $N$, $Q>0$ independent of $n$ and such that \n\\begin{equation}\nn|\\Delta x^{n}(k-1)|\\leq Q\\text{ and }|x^{n}(k)|\\leq N \\label{ewa1}\n\\end{equation}\nfor all $k\\in \\mathbb{N}(0,n)$ and all $n\\geq n_{0}$, where $n_{0}$ is fixed (and arbitrarily large). Lemma 9.2. from \\cite{kelly} says that for some subsequence $x^{n_{m}}=\\left( x^{n_{m}}(k)\\right) $ of $x^{n}$ it holds \n\\begin{equation}\n\\lim_{m\\rightarrow \\infty }\\max_{0\\leq k\\leq n_{m}}\\left\\vert x^{n_{m}}\\left( k\\right) -x\\left( \\frac{k}{n_{m}}\\right) \\right\\vert =0.\n\\label{ewa2}\n\\end{equation}\nIn other words, a suitably chosen discretization approaches the given continuous boundary value problem. Such solutions to discrete BVPs are called non-spurious, in contrast to spurious ones which either diverge or else converge to anything but the solution of the given continuous Dirichlet problem.\n\n\\section{Non spurious solutions for (\\protect\\ref{par})}\n\n\\subsection{The continuous problem}\n\nIn the existence part we apply variational methods. This means that with the problem under consideration we must associate the Euler action functional, prove that this functional is weakly lower semicontinuous in a suitable function space, coercive and at least G\\^{a}teaux differentiable. Given these three conditions, one knows that at least a weak solution to the problem under consideration exists, whose regularity can further be improved with known tools. Such a scheme, commonly used within critical point theory, is well described in the first chapters of \\cite{Ma}.\n\nThe solutions to (\\ref{par}) will be investigated in the space $H_{0}^{1}\\left( 0,1\\right) $ consisting of absolutely continuous functions satisfying the boundary conditions and with square integrable a.e. derivative. Such a solution is called a weak one, i.e. 
a function $x\\in H_{0}^{1}\\left( 0,1\\right) $ is a weak $H_{0}^{1}\\left( 0,1\\right) $ solution to (\\ref{par}), if \n\\begin{equation*}\n\\int_{0}^{1}\\dot{x}\\left( t\\right) \\dot{v}\\left( t\\right) dt+\\int_{0}^{1}f\\left( t,x\\left( t\\right) \\right) v\\left( t\\right) dt=0\n\\end{equation*}\nfor all $v\\in H_{0}^{1}\\left( 0,1\\right) $. The classical solution to (\\ref{par}) is then defined as a function $x:\\left[ 0,1\\right] \\rightarrow \\mathbb{R}$ belonging to $H_{0}^{1}\\left( 0,1\\right) $ such that $\\ddot{x}$ exists a.e. and $\\ddot{x}\\in L^{1}\\left( 0,1 \\right) $. Since $f$ is jointly continuous, it is known from the Fundamental Theorem of the Calculus of Variations, see \\cite{Ma}, that $x$ is in fact twice differentiable with a classical continuous second derivative. Thus $x\\in H_{0}^{1}\\left( 0,1\\right) \\cap C^{2}\\left( 0,1\\right) $.\n\nLet $F\\left( t,x\\right) =\\int_{0}^{x}f\\left( t,s\\right) ds$ for $\\left( t,x\\right) \\in \\left[ 0,1\\right] \\times \\mathbb{R}$. We link solutions to (\\ref{par}) with critical points of a $C^{1}$ functional $J:H_{0}^{1}\\left( 0,1\\right) \\rightarrow \\mathbb{R}$ given by \n\\begin{equation*}\nJ\\left( x\\right) =\\frac{1}{2}\\int_{0}^{1}\\dot{x}^{2}\\left( t\\right) dt+\\int_{0}^{1}F\\left( t,x\\left( t\\right) \\right) dt.\n\\end{equation*}\nLet us examine $J$ for a while. Due to the continuity of $f$, the functional $J$ is well defined. Recall that the norm in $H_{0}^{1}\\left( 0,1\\right) $ reads \n\\begin{equation*}\n\\left\\Vert x\\right\\Vert =\\sqrt{\\int_{0}^{1}\\dot{x}^{2}\\left( t\\right) dt}.\n\\end{equation*}\nThen we see that $\\frac{1}{2}\\int_{0}^{1}\\dot{x}^{2}\\left( t\\right) dt=\\frac{1}{2}\\left\\Vert x\\right\\Vert ^{2}$ is a $C^{1}$ functional by standard facts. Its derivative is a functional on $H_{0}^{1}\\left( 0,1\\right) $ which reads \n\\begin{equation*}\nv\\rightarrow \\int_{0}^{1}\\dot{x}\\left( t\\right) \\dot{v}\\left( t\\right) dt.\n\\end{equation*}\nConcerning the nonlinear part, we see that for any fixed $v\\in H_{0}^{1}\\left( 0,1\\right) $ (which is continuous of course) the function $\\varepsilon \\rightarrow \\int_{0}^{1}F\\left( t,x\\left( t\\right) +\\varepsilon v\\left( t\\right) \\right) dt$ (where we can treat the integral as a Riemann one), due to the Leibniz formula for differentiation under the integral sign, is $C^{1}$, and the derivative of $\\int_{0}^{1}F\\left( t,x\\left( t\\right) \\right) dt$ is a functional on $H_{0}^{1}\\left( 0,1\\right) $ which reads \n\\begin{equation*}\nv\\rightarrow \\int_{0}^{1}f\\left( t,x\\left( t\\right) \\right) v\\left( t\\right) dt\n\\end{equation*}\nif we recall that $F\\left( t,x\\right) =\\int_{0}^{x}f\\left( t,s\\right) ds$. Since the above is obviously continuous in $x$, uniformly in $v$ from the unit sphere, we see that $J$ is in fact $C^{1}.$\n\nRecall also the Poincar\\'{e} inequality $\\int_{0}^{1}x^{2}\\left( t\\right) dt\\leq \\frac{1}{\\pi ^{2}}\\int_{0}^{1}\\dot{x}^{2}\\left( t\\right) dt$ and the Sobolev one $\\max_{t\\in \\left[ 0,1\\right] }\\left\\vert x\\left( t\\right) \\right\\vert \\leq \\sqrt{\\int_{0}^{1}\\dot{x}^{2}\\left( t\\right) dt}$.\n\nWe now sum up the assumptions on the nonlinear term in (\\ref{par}); note that for the above mentioned observations the continuity of $f$ alone is sufficient. 
We assume that\\newline\n\\textit{\\textbf{H1}} $f:\\left[ 0,1\\right] \\times \\mathbb{R} \\rightarrow \\mathbb{R}$ is a continuous function such that $f\\left( t,0\\right) \\neq 0$ for $t\\in \\left[ 0,1\\right]$;\\newline\n\\textit{\\textbf{H2}} $f$ is nondecreasing in $x$ for all $t\\in \\left[ 0,1\\right] $.\n\n\\begin{proposition}\nAssume that \\textbf{H1} and \\textbf{H2} are satisfied. Then problem (\\ref{par}) has exactly one nontrivial solution.\n\\end{proposition}\n\n\\begin{proof}\nFirstly, we consider the existence part. Note that by the Weierstrass Theorem there exists $c>0$ such that \n\\begin{equation*}\n\\left\\vert f\\left( t,0\\right) \\right\\vert \\leq c\\text{ for all }t\\in \\left[ 0,1\\right] .\n\\end{equation*}\nSince $f$ is nondecreasing in $x$ (\\textit{\\textbf{H2}}), it follows that $F$ is convex in $x$. Since $F\\left( t,0\\right) =0$ for all $t\\in \\left[ 0,1\\right] $, convexity yields the well known inequality \n\\begin{equation}\nF(t,x)=F(t,x)-F(t,0)\\geq f\\left( t,0\\right) x\\geq -\\left\\vert f\\left( t,0\\right) x\\right\\vert \\label{aaa}\n\\end{equation}\nvalid for any $x\\in \\mathbb{R}$ and all $t\\in \\left[ 0,1\\right] $. We observe that from (\\ref{aaa}) we get \n\\begin{equation}\nF\\left( t,x\\right) \\geq -c\\left\\vert x\\right\\vert \\text{ for all }t\\in \\left[ 0,1\\right] \\text{ and all }x\\in \\mathbb{R}. \\label{estimF}\n\\end{equation}\nHence for any $x\\in H_{0}^{1}\\left( 0,1\\right) $ we see by the Schwarz and Poincar\\'{e} inequalities \n\\begin{equation*}\n\\int_{0}^{1}F\\left( t,x\\left( t\\right) \\right) dt\\geq -c\\int_{0}^{1}\\left\\vert x\\left( t\\right) \\right\\vert dt\\geq -\\frac{c}{\\pi }\\left\\Vert x\\right\\Vert .\n\\end{equation*}\nTherefore\n\\begin{equation}\n\\begin{array}{l}\nJ\\left( x\\right) \\geq \\frac{1}{2}\\left\\Vert x\\right\\Vert ^{2}-\\left\\vert c\\right\\vert \\left\\Vert x\\right\\Vert .\n\\end{array}\n\\label{Ju_}\n\\end{equation}\nHence from (\\ref{Ju_}) we obtain that $J$ is coercive. Note that $\\frac{1}{2}\\left\\Vert x\\right\\Vert ^{2}$ is obviously w.l.s.c. on $H_{0}^{1}\\left( 0,1\\right) $. Next, by the Arzela-Ascoli Theorem and the Lebesgue Dominated Convergence Theorem (see these arguments in full detail in \\cite{Ma}, in the proof of Theorem 1.1) we see that $x\\rightarrow \\int_{0}^{1}F\\left( t,x\\left( t\\right) \\right) dt$ is weakly continuous. Thus $J$ is weakly l.s.c. as a sum of a w.l.s.c. functional and a weakly continuous one. Since $J$ is a $C^{1}$, coercive and strictly convex functional (strict convexity follows because $\\frac{1}{2}\\left\\Vert x\\right\\Vert ^{2}$ is strictly convex and $x\\rightarrow \\int_{0}^{1}F\\left( t,x\\left( t\\right) \\right) dt$ is convex), it has exactly one argument of a minimum, which is necessarily a critical point and thus a solution to (\\ref{par}). Putting $x=0$ into (\\ref{par}) one sees that $f\\left( t,0\\right) =0$ would have to hold, which contradicts \\textbf{H1}, so the solution is nontrivial.\n\\end{proof}\n\nIn order to get the existence of a nontrivial solution to (\\ref{par}) it would suffice to assume that $f\\left( t_{0},0\\right) \\neq 0$ for some $t_{0}\\in \\left[ 0,1\\right] $, but since we need to impose the same conditions on the discrete problem it is apparent that our assumption is more reasonable. Moreover, there is another way to prove the weak lower semicontinuity of $J$, namely to show that $J$ is continuous. Then it is weakly l.s.c. since it is convex. However, in proving the continuity of $J$ on $H_{0}^{1}\\left( 0,1\\right) $ one uses the same arguments.\n\n\\subsection{The discrete problem}\n\nNow we turn to the discretization of (\\ref{par}), i.e. 
to problem (\\ref{diffequ}), considered in the $n$-dimensional Hilbert space $E$ consisting of functions $x:\\mathbb{N}(0,n)\\rightarrow \\mathbb{R}$ such that $x(0)=x(n)=0$. Space $E$ is considered with the following norm\n\\begin{equation}\n\\left\\Vert x\\right\\Vert =\\left( \\sum\\limits_{k=1}^{n}|\\Delta x(k-1)|^{2}\\right) ^{\\frac{1}{2}}. \\label{norm_operator}\n\\end{equation}\nWe can also consider $E$ with the following norm \n\\begin{equation*}\n\\left\\Vert u\\right\\Vert _{0}=\\left( \\sum\\limits_{k=1}^{n}|u(k)|^{2}\\right) ^{\\frac{1}{2}}.\n\\end{equation*}\nSince $E$ is finite dimensional, there exist constants $c_{b}=\\frac{1}{2}$ and $c_{a}=\\left( \\left( n-1\\right) n\\right) ^{1\/2}$ such that \n\\begin{equation}\nc_{b}\\left\\Vert u\\right\\Vert \\leq \\left\\Vert u\\right\\Vert _{0}\\leq c_{a}\\left\\Vert u\\right\\Vert \\text{ for all }u\\in E. \\label{c_a_c_b}\n\\end{equation}\nSolutions to (\\ref{diffequ}) correspond in a $1-1$ manner to the critical points of the following $C^{1}$ functional $\\mathcal{I}:E\\rightarrow \\mathbb{R}$ \n\\begin{equation*}\n\\mathcal{I}(x)=\\sum\\limits_{k=1}^{n}\\tfrac{1}{2}|\\Delta x(k-1)|^{2}+\\frac{1}{n^{2}}\\sum\\limits_{k=1}^{n-1}F(\\frac{k}{n},x(k))\n\\end{equation*}\nwith $F$ defined as before. This means that \n\\begin{equation*}\n\\frac{d}{dx}\\mathcal{I}(x)=0\\text{ if and only if }x\\text{ satisfies (\\ref{diffequ}).}\n\\end{equation*}\nNow we do not need to introduce the notion of a weak solution, which is why we have only one type of variational solution. By the discrete Schwarz inequality, by (\\ref{estimF}) and by (\\ref{c_a_c_b}) we know that \n\\begin{equation}\n\\begin{array}{l}\n\\mathcal{I}(x)\\geq \\frac{1}{2}\\Vert x\\Vert ^{2}-\\frac{1}{n^{2}}\\left\\vert c\\right\\vert \\sqrt{n}\\left( \\sum\\limits_{k=1}^{n-1}\\left\\vert x\\left( k\\right) \\right\\vert ^{2}\\right) ^{1\/2} \\\\ \n\\\\ \n\\geq \\frac{1}{2}\\Vert x\\Vert ^{2}-\\left\\vert c\\right\\vert \\frac{\\sqrt{n-1}}{n}\\left\\Vert x\\right\\Vert \\geq \\frac{1}{2}\\Vert x\\Vert ^{2}-\\left\\vert c\\right\\vert \\left\\Vert x\\right\\Vert .\n\\end{array}\n\\label{relcoer}\n\\end{equation}\nHence $\\mathcal{I}(x)\\rightarrow +\\infty $ as $\\Vert x\\Vert \\rightarrow +\\infty $ and we are in a position to formulate the following\n\n\\begin{proposition}\n\\label{solvability_diff_equ}Assume that \\textbf{H1}, \\textbf{H2} hold. Then problem (\\ref{diffequ}) has exactly one nontrivial solution.\n\\end{proposition}\n\n\\subsection{Main result}\n\n\\begin{theorem}\n\\label{first convergence theorem}Assume that conditions \\textbf{H1}, \\textbf{H2} are satisfied. Then there exists $x\\in H_{0}^{1}\\left( 0,1\\right) \\cap C^{2}\\left( 0,1\\right) $ which solves (\\ref{par}) uniquely and for each $n\\in \\mathbb{N}$ there exists $x^{n}$ which solves (\\ref{diffequ}) uniquely. Moreover, there exists a subsequence $x^{n_{m}}$ of $x^{n}$ for which the convergence (\\ref{ewa2}) holds.\n\\end{theorem}\n\n\\begin{proof}\nWe need to show that there exist two constants independent of $n$ such that inequalities (\\ref{ewa1}) hold for all $n\\geq n_{0}$, where $n_{0}$ is fixed. Then Lemma 9.2. from \\cite{kelly} provides the assertion of the theorem. In our argument we use some observations applied in the investigation of continuous dependence on parameters for ODEs, see \\cite{LedzewiczWalczak}. Fix $n$. 
By Proposition \\ref{solvability_diff_equ}, there exists a unique $x^{n}$ solving (\\ref{diffequ}); it is the argument of the minimum of $\\mathcal{I}$, and hence $\\mathcal{I}(x^{n})\\leq \\mathcal{I}(0)=0$. Thus relation (\\ref{relcoer}) leads to the inequality \n\\begin{equation*}\n\\frac{1}{2}\\Vert x^{n}\\Vert\\leq \\left\\vert c\\right\\vert \\frac{\\sqrt{n-1}}{n}.\n\\end{equation*}\nSince $\\max_{k\\in \\mathbb{N}(0,n)}\\left\\vert x^{n}\\left( k\\right) \\right\\vert \\leq \\frac{\\sqrt{n+1}}{2}\\left\\Vert x^{n}\\right\\Vert$ we get that for all $k\\in \\mathbb{N}(0,n)$ \n\\begin{equation*}\n\\left\\vert x^{n}\\left( k\\right) \\right\\vert \\leq 2\\left\\vert c\\right\\vert \\frac{\\sqrt{n-1}}{n}\\frac{\\sqrt{n+1}}{2}\\leq \\left\\vert c\\right\\vert =N.\n\\end{equation*}\nBy Lemma 9.3 in \\cite{kelly} we now obtain that there is a constant $Q$ such that the condition \n\\begin{equation*}\nn\\vert \\Delta x^{n}(k-1) \\vert \\leq Q \\text{ and }\\vert x^{n}(k) \\vert \\leq N\n\\end{equation*}\nfor all $k\\in \\mathbb{N}(0,n)$ and all $n\\in \\mathbb{N}$ is satisfied. This means that the application of Lemma 9.2 from \\cite{kelly} finishes the proof.\n\\end{proof}\n\n\\section{Final comments and examples}\n\nIn this section we provide examples of nonlinear terms satisfying our assumptions, we investigate the possibility of replacing the convexity assumption imposed on $F$ with some weaker requirement, and we comment on existing results in the literature.\n\nConcerning the examples of nonlinear terms, any $f$ which is nondecreasing in $x$ will do, whether bounded or unbounded; see for instance \n\n\\begin{enumerate}\n\\item[a)] $f\\left( t,x\\right) =g\\left( t\\right) \\exp \\left( x-t^{2}\\right)$;\n\n\\item[b)] $f\\left( t,x\\right) =g\\left( t\\right) \\arctan \\left( x\\right)$;\n\n\\item[c)] $f\\left( t,x\\right) =g\\left( t\\right) x^{3}+\\exp \\left( x-t^{2}\\right)$,\n\\end{enumerate}\n\nwhere $g$ is any lower bounded continuous function with positive values. \n\nIn view of remarks contained in \\cite{Ma}, the functional $J$ can be written as \n\\begin{equation*}\nJ\\left( x\\right) =\\left( \\frac{1}{2}\\int_{0}^{1}\\dot{x}^{2}\\left( t\\right) dt-\\frac{a}{2\\pi }\\int_{0}^{1}x^{2}\\left( t\\right) dt\\right) \n\\end{equation*}\n\\begin{equation*}\n+\\left( \\int_{0}^{1}F\\left( t,x\\left( t\\right) \\right) dt+\\frac{a}{2\\pi }\\int_{0}^{1}x^{2}\\left( t\\right) dt\\right) .\n\\end{equation*}\nThen the functional \n\\begin{equation*}\nx\\rightarrow \\left( \\frac{1}{2}\\int_{0}^{1}\\dot{x}^{2}\\left( t\\right) dt-\\frac{a}{2\\pi }\\int_{0}^{1}x^{2}\\left( t\\right) dt\\right) \n\\end{equation*}\nis strictly convex as long as $a\\in \\left( 0,1\\right) $. Note that the first eigenvalue of the differential operator $-\\frac{d^{2}}{dt^{2}}$ with Dirichlet boundary conditions on $\\left[ 0,1\\right] $ is $\\pi ^{2}$, whose reciprocal $\\frac{1}{\\pi ^{2}}$ is the best constant in the Poincar\\'{e} inequality. Hence we can relax the convexity assumption on $F$ by assuming that \n\\begin{equation*}\nx\\rightarrow F\\left( t,x\\right) +\\frac{a}{2\\pi }x^{2}\n\\end{equation*}\nis convex for any $t\\in \\left[ 0,1\\right] $. Then $F_{1}\\left( t,x\\right) =F\\left( t,x\\right) +\\frac{a}{2\\pi }x^{2}$ satisfies (\\ref{estimF}). \n\nThe natural question arises whether a similar procedure is possible as far as the discrete problem (\\ref{diffequ}) is concerned. 
However, there is one big problem here since the first eigenvalue for $-\\Delta ^{2}$ reads $\\lambda _{1}=2-2\\cos \\left( \\frac{\\pi }{n+1}\\right) $ and of course $\\lambda _{1}\\rightarrow 0$ as $n\\rightarrow \\infty $. This means that the above idea would not work, since we cannot find an $a$ valid for all $n$ and independent of $n$ (for each $n$ such an $a=a\\left( n\\right) $ exists). \n\nA comparison with existing results is also in order. The only papers concerning the existence of non-spurious solutions are \\cite{rech1}, \\cite{rachunkowa2}, \\cite{thomsontisdell}, which follow ideas developed in \\cite{gaines} and which were mentioned already in the Introduction. We not only use different methods, namely critical point theory, but also we are not limited as far as the growth is concerned, since in the sources mentioned $f$ is sublinear. However, we could not incorporate the derivative of $x$ into the nonlinear term. This is not possible by a variational approach, but could be made possible by connecting variational methods with the Banach contraction principle, and it shows that the research concerning the existence of non-spurious solutions with a critical point approach can be further developed.\n\nWe cannot use sublinear growth as in the sources mentioned since it does not provide the inequality \n\\begin{equation}\nF\\left( t,x\\right) -F\\left( t,0\\right) \\geq f\\left( t,0\\right) x\\text{ for all }t\\in \\left[ 0,1\\right] \\text{ and all }x\\in \\mathbb{R}. \\label{ccccc}\n\\end{equation}\nWith our approach, inequality (\\ref{ccccc}) is essential in proving the required estimates which lead to the existence of non-spurious solutions. This is shown by the remarks below, where direct calculations are performed.\n\nThe relevant growth condition reads\\newline\n\\textit{\\textbf{H2a} There exist constants} $a,b>0$ and $\\gamma \\in \\left[ 0,1\\right) $ such that \n\\begin{equation}\nf\\left( t,x\\right) \\leq a+b\\left\\vert x\\right\\vert ^{\\gamma }\\text{ for all }t\\in \\left[ 0,1\\right] \\text{ and all }x\\in \\mathbb{R}.\n\\label{cond_unboud_below}\n\\end{equation}\nBy (\\ref{cond_unboud_below}), for all $t\\in \\left[ 0,1\\right] $ and all $x\\in \\mathbb{R}$ it holds \n\\begin{equation*}\nF\\left( t,x\\right) \\leq a\\left\\vert x\\right\\vert +\\frac{b}{\\gamma +1}\\left\\vert x\\right\\vert ^{\\gamma +1}.\n\\end{equation*}\nSince $F\\left( t,x\\right) \\geq -\\left\\vert F\\left( t,x\\right) \\right\\vert $, we see by the Schwarz, H\\\"{o}lder and Poincar\\'{e} inequalities that for any $x\\in H_{0}^{1}\\left( 0,1\\right) $ \n\\begin{equation*}\n\\int_{0}^{1}F\\left( t,x\\left( t\\right) \\right) dt\\geq -c_{1}\\left\\Vert x\\right\\Vert -c_{2}\\left\\Vert x\\right\\Vert ^{\\gamma +1},\n\\end{equation*}\nwhere $c_{1}=a$ and $c_{2}>0$ (the exact value of $c_{2}$ is not important since $\\gamma +1<2$ and the functional $J$ is coercive regardless of the value of $c_{2}$). Then problem (\\ref{par}) has at least one solution by the direct method of the calculus of variations.\n\nIn order to consider problem (\\ref{diffequ}) we need to perform exact calculations since in this case, in view of the convergence Theorem \\ref{first convergence theorem}, the precise values of the constants are of utmost importance. 
In the case of \\textit{\\textbf{H2a}}, from H\\\"{o}lder's inequality and (\\ref{c_a_c_b}) we get \n\\begin{equation*}\n\\begin{array}{ll}\n\\sum\\limits_{k=1}^{n-1}\\vert u(k)\\vert^{\\gamma +1} & =\\sum\\limits_{k=1}^{n-1}\\vert u(k)\\vert^{\\gamma +1} \\cdot 1 \\\\ \n& \\\\ \n& \\leq \\left( \\sum\\limits_{k=1}^{n-1}\\left( \\vert u(k)\\vert^{\\gamma +1}\\right)^{\\frac{2}{\\gamma +1}}\\right) ^{\\frac{\\gamma +1}{2}}\\left( \\sum\\limits_{k=1}^{n-1}1\\right) ^{1-\\frac{\\gamma +1}{2}} \\\\ \n& \\\\ \n& =\\left( n-1\\right) ^{\\frac{1-\\gamma }{2}}\\left\\Vert u\\right\\Vert _{0}^{\\gamma +1}\\leq \\left( \\left( n-1\\right) n\\right) ^{\\frac{\\gamma +1}{2}}\\left( n-1\\right) ^{\\frac{1-\\gamma }{2}}\\left\\Vert u\\right\\Vert ^{\\gamma +1} \\\\ \n& \\\\ \n& =\\left( n-1\\right) n^{\\frac{\\gamma +1}{2}}\\left\\Vert u\\right\\Vert ^{\\gamma +1}\\leq \\left( n-1\\right) n\\left\\Vert u\\right\\Vert ^{\\gamma +1}.\n\\end{array}\n\\end{equation*}\nThus \n\\begin{equation*}\n\\frac{1}{n^{2}}\\frac{b}{\\gamma +1}\\sum\\limits_{k=1}^{n-1}\\vert u(k)\\vert^{\\gamma +1}\\leq \\frac{b}{\\gamma +1}n^{\\frac{\\gamma -1}{2}}\\left\\Vert u\\right\\Vert ^{\\gamma +1}.\n\\end{equation*}\nHence by the above calculations and (\\ref{relcoer}) we get for any $x\\in E$ \n\\begin{equation}\n\\mathcal{I}(x)\\geq \\frac{1}{2}\\Vert x\\Vert ^{2}-\\left\\vert a\\right\\vert \\frac{\\sqrt{n-1}}{n}\\left\\Vert x\\right\\Vert -\\frac{b}{\\gamma +1}n^{\\frac{\\gamma -1}{2}}\\left\\Vert x\\right\\Vert ^{\\gamma +1}. \\label{rel_add_coer}\n\\end{equation}\nThus $\\mathcal{I}(x)\\rightarrow +\\infty $ as $\\Vert x\\Vert \\rightarrow +\\infty .$ In view of Lemma 9.2. from \\cite{kelly}, we need to show that (\\ref{ewa1}) holds. Fix $n$. Since $\\mathcal{I}(x^{n})\\leq \\mathcal{I}(0)=0$, relation (\\ref{rel_add_coer}) leads to the inequality \n\\begin{equation}\n\\frac{1}{2}\\Vert x^{n}\\Vert \\leq \\left\\vert a\\right\\vert \\frac{\\sqrt{n-1}}{n}+\\frac{b}{\\gamma +1}n^{\\frac{\\gamma -1}{2}}\\left\\Vert x^{n}\\right\\Vert ^{\\gamma }. \\label{ineq}\n\\end{equation}\nSince $\\gamma <1$ we see that $n^{\\frac{\\gamma -1}{2}}\\rightarrow 0$. Thus there is some $n_{0}$ such that for all $n\\geq n_{0}$ it holds that $\\frac{b}{\\gamma +1}n^{\\frac{\\gamma -1}{2}}<\\frac{1}{4}$. Take $n\\geq n_{0}$. Let us consider two cases, namely $\\left\\Vert x^{n}\\right\\Vert \\leq 1$ and $\\left\\Vert x^{n}\\right\\Vert >1$. In the case $\\left\\Vert x^{n}\\right\\Vert >1$ we get from (\\ref{ineq}) that \n\\begin{equation*}\n\\frac{1}{2}\\Vert x^{n}\\Vert \\leq \\left\\vert a\\right\\vert \\frac{\\sqrt{n-1}}{n}+\\frac{1}{4}\\left\\Vert x^{n}\\right\\Vert ,\n\\end{equation*}\nand therefore $\\left\\Vert x^{n}\\right\\Vert \\leq 4\\left\\vert a\\right\\vert \\frac{\\sqrt{n-1}}{n}$. Recalling that $\\max_{k\\in \\mathbb{N}(0,n)}\\left\\vert x^{n}\\left( k\\right) \\right\\vert \\leq \\frac{\\sqrt{n+1}}{2}\\left\\Vert x^{n}\\right\\Vert $, we get that for all $k\\in \\mathbb{N}(0,n)$ \n\\begin{equation*}\n\\left\\vert x^{n}\\left( k\\right) \\right\\vert \\leq 4\\left\\vert a\\right\\vert \\frac{\\sqrt{n-1}}{n}\\frac{\\sqrt{n+1}}{2}\\leq 2\\left\\vert a\\right\\vert =N.\n\\end{equation*}\nIn the case $\\left\\Vert x^{n}\\right\\Vert \\leq 1$, however, we cannot proceed without (\\ref{ccccc}). 
The reason is what while on space $E$\ndisregarding of $n$ the sequence is norm bounded by $1$ (uniformely in $n$)\nin norm given by (\\ref{norm_operator}), this is not the case with the\nmax-norm where it is unbounded as $n\\rightarrow \\infty $.\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\\label{sec:intro}\nThis paper considers estimation of parameters of distributions whose domain is a particular non-Euclidean geometry: a topological space divided into $M$ equivalence classes by actions of a finite spherical symmetry group. A well known example of a finite spherical symmetry group is the point group in 3 dimensions describing the soccer ball, or football, with truncated icosahedral symmetry that also corresponds to the symmetry of the Carbon-60 molecule. This paper formulates a general approach to parameter estimation in distributions defined over such domains. We use a restricted finite mixture representation introduced in~\\cite{chen_parameter_2015} for probability distributions that are invariant to actions of any topological group. This representation has the property that the number of mixture components is equal to the order of the group, the distributions in the mixture are all parameterized by the same parameters, and the mixture coefficients are all equal. This is practically significant since many reliable algorithms have been developed for parameter estimation when samples come from finite mixture distributions~\\cite{dempster_maximum_1977,sohn_efficient_2011}.\n\nWe apply the representation to an important problem in materials science: analysis of mean orientation in polycrystals. \nCrystal orientation characterizes properties of materials including electrical conductivity and thermal conductivity. Polycrystalline materials are composed of grains of varying size and orientation, where each grain contains crystal forms with similar orientations. The quality of the material is mainly determined by the grain structure i.e. the arrangement of the grains, their orientations, as well as the distribution of the precipitates. Thus accurate estimation of crystal orientation of the grains is useful for predicting how materials fail and what modes of failure are more likely to occur \\cite{de_graef_structure_2007}.\n\nThe mean orientation of the grain, characterized for example by its Euler angles, can only be specified modulo a set of angular rotations determined by the symmetry group associated with the specific type of crystal, e.g. hexagonal, cubic. This multiplicity of equivalent Euler angles complicates the development of reliable mean orientation estimators. The problem becomes even harder when the orientations are sampled from a region encompassing more than one grain such that the orientations cluster over different mean directions. In such a case, we would like to identify whether the orientations are multi-modally distributed and also estimate the mean direction for each cluster. \n\nIn our previous work~\\cite{chen_parameter_2015}, we introduced the finite mixture of Von Mises-Fisher (VMF) distribution for observations that are invariant to actions of a spherical symmetry group. We applied the expectation maximization (EM) maximum likelihood (ML) algorithm, called EM-VMF, to estimate the group-invariant parameters of this distribution. In this paper, we develop a hyperbolic representation simplification of the EM-VMF algorithm that reduces the computation time by a factor of $2$. 
We also introduce a new group invariant distribution for spherical symmetry groups, called the $\\mathcal{G}$-invariant Watson distribution, which like VMF is a density parameterized by location (angle mean) and scale (angle concentration) over the $p$-dimensional sphere. An EM algorithm is presented for estimation of the parameters, called the EM-Watson algorithm. Furthermore, mixture-of-$\\mathcal{G}$-invariant Watson (mGIW) and von Mises-Fisher (mGIV) distributions are introduced to perform clustering on the $\\mathcal{G}$-invariant sphere. An EM algorithm is presented for estimation of the parameters of the mGIW and mGIV distributions. We illustrate how the Generalized Likelihood Ratio Test (GLRT) can be used to detect the presence of multiple modes in a sample and how it can be combined with the EM algorithm for mGIW and mGIV distributions to cluster multiple orientations on the sphere. \n\nThe performance of the proposed EM orientation estimators is evaluated by simulation and compared to other estimators. The EM orientation estimators are then illustrated on Electron Backscatter Diffraction EBSD data collected from a Nickel alloy whose crystal form induces the $m\\overline{3}m$~\\cite{newnham_properties_2004} cubic point symmetry group. We establish that the EM orientation estimators result in significantly improved estimates of the mean direction in addition to providing an accurate estimate of concentration about the mean. Furthermore, with the extended mixture models, we are able to identify and cluster multi-modally distributed samples more accurately than the K-means algorithm.\n\nThe paper is organized as follows. Section~\\ref{sec:group-invariant} describes group invariant random variables and gives the mixture representation for their densities. Section \\ref{sec:spherical_symmetry_group} specializes to random variables invariant relative to actions of the spherical symmetry group and develops the $\\mathcal G$-invariant VMF and Watson distributions along with EM-ML parameter estimator. The clustering methods based on the $\\mathcal{G}$-invariant distributions along with the GLRT are elaborated in Section \\ref{sec:clustering_spherical_symmetry_group}. The crystallography application and data simulation are presented in Section~\\ref{sec:app_crystal_orientation_estimation} and the experiment results are shown in Section \\ref{sec:experiment}. Section~\\ref{sec:conclusion} has concluding remarks.\n\n\\section{Group-invariant random variables}\n\\label{sec:group-invariant}\n\\def\\bfx{{\\mathbf x}}\nConsider a finite topological group $\\mathcal G=\\{G_1, \\ldots, G_M\\}$ of $M$ distinct actions on a topological space $\\mathcal X$, $G_i: \\mathcal X \\rightarrow \\mathcal X$ and a binary operation \"*\" defining the action composition $G_i * G_j$, denoted $G_i G_j$. $\\mathcal G$ has the properties that composition of multiple actions is associative, for every action there exists an inverse action, and there exists an identity action \\cite{birkhoff_brief_1963}. A real valued function $f(\\bx)$ on $\\mathcal X$ is said to be invariant under $\\mathcal G$ if: $f(G\\bx)=f(\\bx)$ for $G\\in \\mathcal G$. Let $\\bX$ be a random variable defined on $\\mathcal X$. 
We have the following theorem for the probability density $f(\\bx)$ of $\\bX$.\n\\begin{theorem}\n\\label{thm:1}\nThe density function $f: \\mathcal X\\rightarrow \\Reals$ is invariant under $\\mathcal G$ if and only if\n\\begin{eqnarray}\n\\label{eq:thm1_representation}\n\\!\\begin{aligned}\n\\exists\\ &h: \\mathcal X\\rightarrow \\Reals\\ s.t. \\\\\n&f(\\bx)= \\frac{1}{M} \\sum_{i=1}^M h(G_i\\bx).\n\\end{aligned}\n\\end{eqnarray}\n\\end{theorem}\nThis theorem is a slight generalization of \\cite[Thm. 2.1]{chen_parameter_2015} in that the density $h(.)$ is not necessarily the same as $f(.)$. The proof is analogous to that of \\cite[Thm. 2.1]{chen_parameter_2015}.\n\nTheorem \\ref{thm:1} says that any density $f(\\bx)$ that is invariant under group $\\mathcal G$ can be represented as a finite mixture of a function and its translates $h(G_i\\bx)$ under the group's actions $G_i \\in \\mathcal G$. As pointed out in~\\cite{chen_parameter_2015}, Thm.~\\ref{thm:1} has important implications on $\\mathcal G$-invariant density estimation and parameter estimation. In particular it can be used to construct maximum likelihood estimators for parametric densities. Let $h(\\bx;\\btheta)$ be a density on $\\mathcal X$ that is parameterized by a parameter $\\btheta$ in a parameter space $\\Theta$. We extend $h(\\bx;\\btheta)$ to a $\\mathcal G$-invariant density $f$ by using Thm. \\ref{thm:1}, obtaining:\n\\begin{eqnarray}\nf(\\bx;\\btheta)=\\frac{1}{M} \\sum_{i=1}^M h_i(\\bx;\\btheta),\n\\label{eq:SSG}\n\\end{eqnarray}\nwhere $h_i(\\bx;\\btheta)=h(G_i\\bx;\\btheta)$. This density is of the form of a finite mixture of densities $h_i(\\bx;\\btheta)$ of known parametric form where the mixture coefficients are all identical and equal to $1\/M$. Maximum likelihood (ML) estimation of the parameter $\\btheta$ from an i.i.d. sample $\\{\\bx_i\\}_{i=1}^n$ from any $\\mathcal G$-invariant density $f$ can now be performed using finite mixture model methods \\cite{mclachlan_finite_2004} such as the Expectation-Maximization (EM) algorithm~\\cite{dempster_maximum_1977} or the restricted Boltzman machine (RBM) \\cite{sohn_efficient_2011}.\n\n\\section{ML within a Spherical Symmetry Group}\n\\label{sec:spherical_symmetry_group}\nAs in~\\cite{chen_parameter_2015} we specialize Thm.~\\ref{thm:1} to estimation of parameters for the case that the probability density is on a sphere and is invariant to actions in a spherical symmetry group. In Section~\\ref{sec:app_crystal_orientation_estimation} this will be applied to a crystallography example under spherical distribution likelihood models for the mean crystal orientation. In general, the measured and mean orientations can be represented by Euler angles~\\cite{eberly_euler_2008}, Rodrigues Vectors~\\cite{rodrigues_lois_1840}, or Quaternions~\\cite{altmann_rotations_2005}. As in~\\cite{chen_parameter_2015}, we use the quaternion representation to enable orientations to be modeled by spherical distributions since the quaternion representation is a $4$D vector on the $3$-sphere $S^3$, i.e. $\\bq = (q_1, q_2, q_3, q_4)$ such that $\\|\\bq\\| = 1$.\n\nAny of the aforementioned orientation representations have inherent ambiguity due to crystal symmetries. For example, if the crystal has cubic symmetry, its orientation is only uniquely defined up to a 24-fold set of proper rotations of the cube about its symmetry axes.\nThese actions form a point symmetry group, called $432$, a sub-group of $m\\overline{3}m$. 
In quaternion space, since each orientation corresponds to two quaternions with different sign $\\{\\bq,-\\bq\\}$, these rotations reflections, and inversions can be represented as a spherical symmetry group $\\mathcal{G}$ of quaternionic matrices $\\{\\bP_1,\\ldots,\\bP_{M}\\}$, with sign symmetry such that $\\bP_i=-\\bP_{i-M\/2}\\ \\forall M\/2 0, \\\\\n\\hat\\bmu &= \\bt_p, \\hat{\\kappa} < 0.\n\\end{split}\n\\end{equation}\n\nSimilarly by fixing $\\bmu$ and setting to zero the derivative of (\\ref{eq:Watson_Mstep_Dev}) with respect to $\\kappa$, we have:\n\\begin{equation}\n\\label{eq:Watson_Tp_func}\n\\begin{split}\n&Y_p(\\kappa) = \\frac{\\bbM'(\\frac{1}{2},\\frac{p}{2},\\kappa)}{\\bbM(\\frac{1}{2},\\frac{p}{2},\\kappa)} = \\frac{\\sum_{i=1}^n\\sum_{m=1}^{M'}r_{i,m}(\\bmu^T\\bP_m^T\\bx_i)^2}{n} \\\\\n\\Rightarrow& \\hat{\\kappa} = Y_p^{-1}\\left( \\frac{\\sum_{i=1}^n\\sum_{m=1}^{M'}r_{i,m}(\\bmu^T\\bP_m^T\\bx_i)^2}{n}\\right),\n\\end{split}\n\\end{equation}\nThe final estimates of $\\bmu$ and $\\kappa$ are obtained by checking both cases ($\\hat{\\kappa}>0$, $\\hat{\\kappa}<0$) and choosing the one which is consistent for (\\ref{eq:Watson_mu_est})(\\ref{eq:Watson_Tp_func}).\n\n\\section{Clustering with a Spherical Symmetry Group}\n\\label{sec:clustering_spherical_symmetry_group}\nIn this section we extend the parameter estimation problem to the situation where there are multiple group-invariant distributions with different parameters that govern the samples. This problem arises, for example, in poly-crystaline materials when estimating the mean crystal orientation over a region containing more than one grain (perhaps undetected). This problem can be solved by first applying some standard clustering methods, e.g. K-means\\cite{hartigan_algorithm_1979}, and then estimating the parameters for each cluster. However, clustering methods based on the distance relation between the samples are complicated by the presence of spherical symmetry because it is necessary to distinguish modes that are due only to symmetry from those that distinguish different clusters. Therefore, we propose a model-based clustering algorithm which accommodates symmetry to handle this problem.\n\nConsider the situation where the samples $\\{\\bx_i\\}_{i=1}^n$ follow a mixture of $\\mathcal{G}$-invariant density functions. For the VMF distribution, the mixture density has the following form:\n\\begin{equation}\n\\label{eq:mixture_of_Ginv_VMF}\ng_v(\\bx;\\{\\bmu_c,\\kappa_c,\\alpha_c\\}) = \\sum_{c=1}^C\\alpha_c\\left(\\sum_{m=1}^M \\frac{1}{M} \\phi(\\bx;\\bP_m\\bmu_c,\\kappa_c)\\right),\n\\end{equation}\nwhere $C$ is the number of clusters assumed to be fixed a priori, $\\bmu_c,\\kappa_c$ are the parameters for the $c$-th cluster and $\\alpha_c$ are the mixing coefficients where $\\sum_{c=1}^C\\alpha_c=1$ and $\\alpha_c>0$ for all $c$. 
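As a concrete illustration of how the density (\\ref{eq:mixture_of_Ginv_VMF}) is evaluated for a given list of symmetry operators $\\bP_m$, a minimal sketch is given below (in Python). It is illustrative only: it assumes the standard von Mises-Fisher form and normalizing constant for the component density $\\phi$, and the names used are not part of any released implementation.\n\\begin{verbatim}\nimport numpy as np\nfrom scipy.special import iv   # modified Bessel function of the first kind\n\ndef vmf_logpdf(x, mu, kappa):\n    # standard VMF log-density on the unit sphere in R^p (assumed form of phi)\n    p = mu.shape[0]\n    log_c = ((p / 2 - 1) * np.log(kappa) - (p / 2) * np.log(2 * np.pi)\n             - np.log(iv(p / 2 - 1, kappa)))\n    return log_c + kappa * float(mu @ x)\n\ndef mixture_g_inv_vmf(x, P_list, mus, kappas, alphas):\n    # evaluates g_v(x) of eq. (mixture_of_Ginv_VMF): an outer mixture over\n    # clusters c and an inner, equally weighted mixture over the symmetry\n    # operators P_m applied to each cluster mean\n    val = 0.0\n    for alpha, mu, kappa in zip(alphas, mus, kappas):\n        inner = np.mean([np.exp(vmf_logpdf(x, P @ mu, kappa)) for P in P_list])\n        val += alpha * inner\n    return val\n\\end{verbatim}\nThe Watson analogue (\\ref{eq:mixture_of_Ginv_Watson}) is obtained by replacing the exponent $\\kappa \\bmu^T\\bx$ with $\\kappa (\\bmu^T\\bx)^2$ and using the corresponding normalizing constant, cf. the Kummer function appearing in (\\ref{eq:Watson_Tp_func}).\n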
The parameters of (\\ref{eq:mixture_of_Ginv_VMF}) can be estimated by the EM algorithm:\n\nE-step:\n\\begin{equation}\n\\label{eq:mVMF_mClusters_Estep}\nr_{i,c,m}=\\frac{\\alpha_c\\phi(\\bx_i; \\bP_m\\bmu_c,\\kappa_c)}{\\sum_{h=1}^C\\alpha_{h}\\sum_{l=1}^M\\phi(\\bx_i;\\bP_{l}\\mu_{h}, \\kappa_{h})}\n\\end{equation} \n\nM-step:\n\\begin{align}\n\\alpha_c&=\\sum_{i=1}^n\\sum_{m=1}^Mr_{i,c,m}, \\hat{\\bmu}_c=\\frac{\\bgamma_c}{\\|\\bgamma_c\\|}, \\hat{\\kappa}_c=A_p^{-1}\\left(\\frac{\\|\\bgamma_c\\|}{n\\alpha_c}\\right),\\\\\n\\label{eq:mVMF_mClusters_Mstep}\n\\bgamma_c&=\\sum_{i=1}^n\\sum_{m=1}^M r_{i,c,m}\\bP_m^T\\bx_i,\n\\end{align}\nwhere $r_{i,c,m}$ is the probability of sample $\\bx_i$ belonging to the $c$-th cluster and the $m$-th symmetric component.\n\nFor the Watson distribution, the mixture of $\\mathcal{G}$-invariant Watson density is\n\\begin{equation}\n\\label{eq:mixture_of_Ginv_Watson}\ng_w(\\bx;\\{\\bmu_c,\\kappa_c, \\alpha_c\\}) = \\sum_{c=1}^C\\alpha_c\\left(\\sum_{m=1}^{M'} \\frac{1}{M'}W_p(\\bx; \\bP_m\\bmu_c,\\kappa_c)\\right)\n\\end{equation}\n\nThe E-step is similar to (\\ref{eq:mVMF_mClusters_Estep}) with $\\phi$ replaced by $W_p$ function. The M-step can be computed with a similar approach as in Section~\\ref{sec:ginv_Watson_dist} with the following modifications:\n\\begin{align}\n\\label{eq:Watson_mClusters_Mstep_hatT}\n\\tilde{T_c}&=\\frac{1}{n\\alpha_c}\\sum_{i=1}^n\\sum_{m=1}^{M'} r_{i,c,m}(\\bP_m^T\\bx_i\\bx_i^T\\bP_m), \\\\\n\\label{eq:Watson_mClusters_Mstep_kappa}\n\\hat{\\kappa}_c&=Y_p^{-1}\\left(\\frac{\\sum_{i=1}^n\\sum_{m=1}^{M'}r_{i,c,m}(\\bmu_c^T\\bP_m^T\\bx_i)^2}{n\\alpha_c}\\right),\n\\end{align}\nwhere $\\alpha_c=\\sum_{i=1}^n\\sum_{m=1}^Mr_{i,c,m}$.\n\\subsection{Multi-modality Tests on $\\mathcal{G}$-invariant Spherical Distributions}\nGiven sample set $\\{\\bx_i\\}_{i=1}^n$ on $S^{p-1}$, the objective is to determine whether the $n$ samples are drawn from one single distribution or a mixture of $C$ distributions. For polycrystalline materials, the result of this determination can be used to discover undetected grains within a region. We propose to use a multi-modal hypothesis test based on the $\\mathcal{G}$-invariant distributions to solve this problem. The two hypotheses are $H_0$: The samples are from a single $\\mathcal{G}$-invariant distribution $f(\\bx;\\{\\bmu,\\kappa\\})$; and $H_1$: The samples are from a mixture of $C$ distributions $g(\\bx;\\{\\bmu_c,\\kappa_c,\\alpha_c\\}_{c=1}^C)$. The Generalized Likelihood Ratio Test (GLRT)~\\cite{hero_statistical_2000} has the following form:\n\\begin{equation}\n\\label{eq:multi_sample_GLRT}\n\\begin{aligned}\n\\Lambda_{GLR} &= \\frac{\\max_{\\{\\bmu_c,\\kappa_c,\\alpha_c\\}_{c=1}^C\\in\\Theta_1}g(\\{\\bx_i\\}_{i=1}^n;\\{\\bmu_c,\\kappa_c,\\alpha_c\\}_{c=1}^C)}{\\max_{\\{\\bmu,\\kappa\\}\\in\\Theta_0}f(\\{\\bx_i\\}_{i=1}^n;\\{\\bmu,\\kappa\\})} \\\\\n&\\gtrless^{H_1}_{H_0} \\eta\n\\end{aligned}\n\\end{equation}\nwhere $\\Theta_0,\\Theta_1$ are the parameter spaces for the two hypotheses. The $f$ and $g$ functions for VMF and Watson distributions are defined in (\\ref{eq:mixture_density}), (\\ref{eq:Watson_mixture_density}) and (\\ref{eq:mixture_of_Ginv_VMF}), (\\ref{eq:mixture_of_Ginv_Watson}) respectively and the test statistic $\\Lambda_{GLR}$ can be calculated by the proposed EM algorithm. 
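To make the last statement concrete, the sketch below (Python, illustrative only) shows one EM iteration implementing the E- and M-steps given above, together with the statistic formed from the maximized log-likelihoods of the $H_0$ (single component) and $H_1$ ($C$-cluster) fits; vmf_logpdf is the helper from the previous sketch, and the $A_p^{-1}$ update is replaced by the usual approximation $A_p^{-1}(r)\\approx r(p-r^{2})\/(1-r^{2})$, which is an assumption and not taken from the text.\n\\begin{verbatim}\nimport numpy as np\n\ndef em_iteration(X, P_list, mus, kappas, alphas):\n    # one iteration of eqs. (mVMF_mClusters_Estep)-(mVMF_mClusters_Mstep)\n    n, p, C, M = X.shape[0], X.shape[1], len(mus), len(P_list)\n    r = np.zeros((n, C, M))\n    for i in range(n):\n        for c in range(C):\n            for m in range(M):\n                r[i, c, m] = alphas[c] * np.exp(\n                    vmf_logpdf(X[i], P_list[m] @ mus[c], kappas[c]))\n        r[i] /= r[i].sum()                    # E-step responsibilities\n    for c in range(C):\n        alphas[c] = r[:, c, :].sum() / n      # normalized so sum_c alpha_c = 1\n        gamma = sum(r[i, c, m] * (P_list[m].T @ X[i])\n                    for i in range(n) for m in range(M))\n        mus[c] = gamma / np.linalg.norm(gamma)\n        rho = np.linalg.norm(gamma) / (n * alphas[c])\n        kappas[c] = rho * (p - rho ** 2) / (1.0 - rho ** 2)  # approx. A_p^{-1}\n    return mus, kappas, alphas\n\ndef glrt_statistic(loglik_H1, loglik_H0):\n    # 2 log Lambda_GLR from the maximized log-likelihoods of the two fits\n    return 2.0 * (loglik_H1 - loglik_H0)\n\\end{verbatim}\nThe threshold against which this statistic is compared follows from its asymptotic null distribution, discussed next.\n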
According to Wilks's theorem~\\cite{wilks_large-sample_1938} as $n$ approaches $\\infty$, the test statistic $2\\log{\\Lambda_{GLR}}$ will be asymptotically $\\chi^2$-distributed with degrees of freedom equal to $(p+1)(C-1)$, which is the difference in dimensionality of $\\Theta_0$ and $\\Theta_1$. Therefore, the threshold $\\eta$ in (\\ref{eq:multi_sample_GLRT}) can be determined by a given significance level $\\alpha$.\n\n\\section{Application to Crystallographic Orientation}\n\\label{sec:app_crystal_orientation_estimation}\nCrystal orientation and the grain distribution in polycrystalline materials determine the mechanical properties of the material, such as, stiffness, elasticity, and deformability. Locating the grain regions and estimating their orientation and dispersion play an essential role in detecting anomalies and vulnerable parts of materials.\n\nElectron backscatter diffraction (EBSD) microscopy acquires crystal orientation at multiple locations within a grain by capturing the Kikuchi diffraction patterns of the backscatter electrons ~\\cite{saruwatari_crystal_2007}. A Kikuchi pattern can be translated to crystal orientation through Hough Transformation analysis~\\cite{lassen_automated_1994} or Dictionary-Based indexing~\\cite{park_ebsd_2013}. The process of assigning mean orientation values to each grain is known as indexing. Crystal forms possess point symmetries, e.g. triclinic, tetragonal, or cubic, leading to a probability density of measured orientations that is invariant over an associated spherical symmetry group $\\mathcal{G}$. Therefore, when the type of material has known symmetries, e.g., cubic-type symmetry for nickel or gold, the $\\mathcal{G}$-invariant VMF and Watson models introduced in Section~\\ref{sec:spherical_symmetry_group} can be applied to estimate the mean orientation $\\bmu_g$ and the concentration $\\kappa_g$ associated with each grain. Furthermore, the clustering method along with the multi-sample hypothesis test in Section~\\ref{sec:clustering_spherical_symmetry_group} can be used to detect the underlying grains within a region.\n\n\\subsection{Simulation of Crystallographic Orientation}\n\\label{sec:simulation_orientations}\nTo simulate the crystallographic orientations, we first draw random samples from VMF and Watson distributions with $p=4$. The random variable $\\bx$ in a spherical distribution can be decomposed~\\cite{mardia_directional_1999}:\n\\begin{equation}\n\\label{eq:normal_tangent_decompose}\n\\bx=t\\bmu+\\sqrt{1-t^2}S_\\bmu(\\bx),\n\\end{equation}\nwhere $t=\\bmu^T\\bx$ and $S_\\bmu(\\bx)=(I_p-\\bmu\\bmu^T)\\bx\/\\|(I_p-\\bmu\\bmu^T)\\bx\\|$. Let $f(\\bx;\\bmu)$ be the p.d.f. of the distribution where $\\bmu$ is the mean direction. 
According to the normal-tangent decomposition property, for any rotationally symmetric distribution, $S_\\bmu(\\bx)$ is uniformly distributed on $S_{\\bmu^\\bot}^{p-2}$, the $(p-2)$-dimensional sphere normal to $\\bmu$, and the density of $t=\\bx^T\\bmu$ is given by:\n\\begin{equation}\n\\label{eq:tangent_density}\nt\\mapsto cf(t)(1-t^2)^{(p-3)\/2}.\n\\end{equation}\n\nFor VMF distribution, substituting (\\ref{eq:normal_tangent_decompose}) into (\\ref{eq:VMF_pdf}) and combining with (\\ref{eq:tangent_density}), we have the density of the tangent component $t$ as:\n\\begin{equation}\n\\label{eq:VMF_tangent_density}\n\\begin{split}\nf_v(t)&= C_v\\exp{\\{\\kappa t\\}}(1-t^2)^{(p-3)\/2} \\\\\nC_v&=\\left(\\frac{\\kappa}{2}\\right)^{(p\/2-1)}\\left(I_{p\/2-1}(\\kappa)\\Gamma\\left(\\frac{p-1}{2}\\right)\\Gamma\\left(\\frac{1}{2}\\right)\\right)^{-1}.\n\\end{split}\n\\end{equation}\n\nSimilarly, the density of the tangent component of Watson distribution is:\n\\begin{equation}\n\\label{eq:Watson_tangetn_density}\n\\begin{split}\nf_w(t)&= C_w\\exp{\\{\\kappa t^2\\}}(1-t^2)^{(p-3)\/2} \\\\\nC_w&=\\frac{\\Gamma(\\frac{p}{2})}{\\Gamma(\\frac{p-1}{2})\\Gamma(\\frac{1}{2})}\\frac{1}{\\bbM(\\frac{1}{2},\\frac{p}{2},\\kappa)}.\n\\end{split}\n\\end{equation}\nRandom samples from the density functions (\\ref{eq:VMF_tangent_density}) and (\\ref{eq:Watson_tangetn_density}) can be easily generated by rejection sampling. \n\nThe generated quaternions from VMF and Watson distributions are then mapped into the Fundamental Zone (FZ) with the symmetric group actions to simulate the wrap-around problem we observe in real data, i.e. observations are restricted to a single FZ. For cubic symmetry, the FZ in quaternion space is defined in the following set of equations: \\\\\n\\begin{minipage}{0.48\\linewidth}\n\\begin{equation}\n\\begin{cases}\n|q_2\/q_1|\\le\\sqrt{2}-1 \\\\\n|q_3\/q_1|\\le\\sqrt{2}-1 \\\\\n|q_4\/q_1|\\le\\sqrt{2}-1 \\\\\n|q_2\/q_1 + q_3\/q_1 + q_4\/q_1|\\le 1 \\\\\n|q_2\/q_1 - q_3\/q_1 + q_4\/q_1|\\le 1 \\\\\n|q_2\/q_1 + q_3\/q_1 - q_4\/q_1|\\le 1 \\\\\n|q_2\/q_1 - q_3\/q_1 - q_4\/q_1|\\le 1 \\nonumber\n\\end{cases}\n\\end{equation}\n\\end{minipage}\n\\begin{minipage}{0.48\\linewidth}\n\\begin{equation}\n\\label{eq:FZ_equations}\n\\begin{cases}\n|q_2\/q_1 - q_3\/q_1|\\le\\sqrt{2} \\\\\n|q_2\/q_1 + q_3\/q_1|\\le\\sqrt{2} \\\\\n|q_2\/q_1 - q_4\/q_1|\\le\\sqrt{2} \\\\\n|q_2\/q_1 + q_4\/q_1|\\le\\sqrt{2} \\\\\n|q_3\/q_1 - q_4\/q_1|\\le\\sqrt{2} \\\\\n|q_3\/q_1 + q_4\/q_1|\\le\\sqrt{2} \\\\\n\\end{cases}\n\\end{equation}\n\\end{minipage}\nwhere $q_i$ is the $i$-th component of quaternion $\\bq$.\n\n\\section{Experimental Results}\n\\label{sec:experiment}\n\\subsection{$\\mathcal G$-invariant EM-ML Parameter Estimation on Simulated Data}\nSets of $n$ i.i.d. samples were simulated from the VMF or Watson distributions using the method described in Sec.\\ref{sec:simulation_orientations} with given $\\bmu=\\bmu_o,\\kappa=\\kappa_o$ for the $m\\overline{3}m$ point symmetry group associated with the symmetries of cubic crystal lattice planes. The number of samples for each simulation was set to $n=1000$ and $\\kappa_o$ was swept from $1$ to $100$ while, for each simulation run, $\\bmu_o$ was selected uniformly at random. The experiment was repeated $100$ times and the average values of $\\hat{\\kappa}$ and the inner product $\\hat{\\bmu}^T \\bmu_o$ are shown in Fig.~\\ref{fig:mu_est} and \\ref{fig:kappa_est}. 
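The data sets just described are drawn with the rejection scheme of Section~\\ref{sec:simulation_orientations}; a minimal sketch of one such draw is given below (Python, illustrative only, not the code used for the experiments). The tangent component $t$ is sampled by rejection from the unnormalized density $e^{\\kappa t}(1-t^2)^{(p-3)\/2}$ of (\\ref{eq:VMF_tangent_density}) and combined with a uniformly distributed tangential direction according to (\\ref{eq:normal_tangent_decompose}); the subsequent mapping into the fundamental zone (\\ref{eq:FZ_equations}) is omitted here.\n\\begin{verbatim}\nimport numpy as np\n\ndef sample_vmf(mu, kappa, p=4):\n    # rejection sampling of the tangent component t, using a uniform\n    # proposal on [-1, 1] and a grid-based bound on the target density\n    def target(t):\n        return np.exp(kappa * t) * (1.0 - t * t) ** ((p - 3) / 2.0)\n    grid = np.linspace(-1.0, 1.0, 4001)\n    bound = 1.01 * target(grid).max()          # small safety margin\n    while True:\n        t = np.random.uniform(-1.0, 1.0)\n        if np.random.uniform(0.0, bound) <= target(t):\n            break\n    # uniform direction on the sphere orthogonal to mu\n    v = np.random.randn(p)\n    v -= (v @ mu) * mu\n    v /= np.linalg.norm(v)\n    return t * mu + np.sqrt(1.0 - t * t) * v   # eq. (normal_tangent_decompose)\n\\end{verbatim}\nFor the Watson case one replaces $e^{\\kappa t}$ by $e^{\\kappa t^2}$ in the target, cf. (\\ref{eq:Watson_tangetn_density}).\n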
In the figures we compare performance for the following methods: (1) the naive ML estimator for the standard VMF or Watson model that does not account for the point group structure (labeled \"ML Estimator\"). (2) Mapping each of the samples $\\bx_i$ toward a reference direction $\\bx_{r}$ (randomly selected from $\\{\\bx_i\\}_{i=1}^n$), i.e. $\\bx_i\\mapsto\\bP_m\\bx_i$, where $\\bP_m=\\arg\\min_{\\bP\\in\\mathcal{G}} \\arccos{(\\bx_{r}^T\\bP\\bx)}$, to remove possible ambiguity. Then performing ML for the standard VMF or Watson distribution (labeled \"Modified ML\"). (3) Applying our proposed EM algorithm directly to the $n$ samples using the mixture of VMF distribution (\\ref{eq:VMF_Estep})-(\\ref{eq:VMF_Mstep_gamma}) (labeled \"EM-VMF\") (4) Applying our proposed EM algorithm to the mixture of Watson distribution (\\ref{eq:Watson_EM_Estep})-(\\ref{eq:Watson_Tp_func}) (labeled \"EM-Watson\").\n\n\nFigure \\ref{fig:mu_est} shows the inner product values $\\bmu_o^T\\hat{\\bmu}$. The proposed EM-VMF and EM-Watson estimators have similar performance in that they achieve perfect recovery of the mean orientation ($\\bmu_o^T\\hat{\\bmu}=1$) much faster than the other methods as the concentration parameter $\\kappa_o$ increases (lower dispersion of the samples about the mean) no matter whether the data is generated from VMF (Fig.~\\ref{fig:mu_est_VMFdata}) or Watson distribution (Fig.~\\ref{fig:mu_est_Watsondata}), indicating the robustness of the proposed approaches under model mismatch. Notice that when $\\kappa_o$ is small ($\\kappa_o<20$ for VMF data and $\\kappa_o<10$ for Watson data), none of the methods can accurately estimate the mean orientation. The reason is that when $\\kappa_o$ is small the samples become nearly uniformly distributed over the sphere. The threshold $\\kappa_o$ value at which performance starts to degrade depends on the choice of point symmetry group and the distribution used to simulate the data. In Fig.~\\ref{fig:kappa_est} it is seen that the biases of the proposed EM-VMF~\\cite{chen_parameter_2015} and EM-Watson $\\kappa$ estimators are significantly lower than that of the other methods compared. While the modified ML performs better than the naive ML estimator, its bias is significantly worse than the proposed EM-VMF and EM-Watson approaches. \n\n\\begin{figure}\n \\centering\n \\subfigure[VMF Simulated Data]{\n \\label{fig:mu_est_VMFdata}\n \\includegraphics[width=4.25cm]{figures\/Mu_est_VMF}}\n \\subfigure[Watson Simulated Data]{\n \\label{fig:mu_est_Watsondata}\n \\includegraphics[width=4.25cm]{figures\/Mu_est_Watson}}\n \\caption{Mean orientation estimator comparisons for $\\mathcal G$-invariant densities. Shown is the average inner product $\\bmu_o^T\\hat{\\bmu}$ of four estimators $\\hat{\\bmu}$ when $\\bmu_o$ is the true mean orientation as a function of the true concentration parameter $\\kappa_o$ for the data simulated from VMF (Fig.~\\ref{fig:mu_est_VMFdata}) and from Watson (Fig.~\\ref{fig:mu_est_Watsondata}) distribution. The naive estimator (\"ML Estimator\" in blue line) does not attain perfect estimation (inner product $=1$) for any $\\kappa_o$ since it does not account for the spherical symmetry group structure. The modified ML (green dashed line) achieves perfect estimation as $\\kappa_o$ becomes large. 
The proposed EM-ML methods (\"EM-VMF\", \"EM-Watson\") achieve perfect estimation much faster than the other methods even under model mismatch (EM-VMF for Watson simulated data and vice versa).}\n \\label{fig:mu_est}\n\\end{figure}\n\n\\begin{figure}\n \\centering\n \\subfigure[VMF Simulated Data]{\n \\label{fig:kappa_est_VMFdata}\n \\includegraphics[width=4.25cm]{figures\/Kappa_est_VMF}}\n \\subfigure[Watson Simulated Data]{\n \\label{fig:kappa_est_Watsondata}\n \\includegraphics[width=4.25cm]{figures\/Kappa_est_Watson}}\n \\caption{Concentration parameter estimator bias as a function of the true concentration $\\kappa_o$ for data simulated from VMF (Fig.~\\ref{fig:kappa_est_VMFdata})\\cite{chen_parameter_2015} and from Watson (Fig.~\\ref{fig:kappa_est_Watsondata}) distributions. The bias of the naive ML (blue solid line) is large over the full range of $\\kappa_o$. The modified ML (green dashed line) estimates $\\kappa$ more accurately when $\\kappa_o$ is small. Our proposed EM-VMF and EM-Watson estimators (black dotted line and magenta dashed line) have lower bias than the other estimators.}\n \\label{fig:kappa_est}\n\\end{figure}\n\nFigure \\ref{fig:computation_time} shows the computation time of the estimation algorithms presented in Fig.~\\ref{fig:mu_est} and Fig.~\\ref{fig:kappa_est}. The computation time for all methods decreases as $\\kappa_o$ becomes larger. When $\\kappa_o$ is small ($\\kappa_o<20$ for VMF data and $\\kappa_o<10$ for Watson data), because the samples are almost uniformly distributed around the sphere, it is difficult for the EM algorithms to converge to the optimal solution and they therefore require maximum number of iterations to stop, forming the plateaus in Fig.~\\ref{fig:computation_time}. Notice that EM-Watson requires less time than EM-VMF even though it has more complicated E and M-steps. The reason is that EM-Watson uses only half of the symmetry operators, which corresponds to the size of the quotient group $\\mathcal{G}\/\\mathcal{I}$ as described in Section~\\ref{sec:ginv_Watson_dist}. By applying the hyperbolic sinusoidal simplification in Section~\\ref{sec:ginv_VMF_dist} (labeled \"EM-VMF-Hyper\"), we can further reduce the computation time by more than a factor of $2$ compared to the original EM-VMF. \n\n\\begin{figure}\n \\centering\n \\subfigure[VMF Simulated Data]{\n \\label{fig:computation_time_VMFdata}\n \\includegraphics[width=4cm]{figures\/ComputationTime_VMF}}\n \\subfigure[Watson Simulated Data]{\n \\label{fig:computation_time_Watsondata}\n \\includegraphics[width=4cm]{figures\/ComputationTime_Watson}}\n \\caption{Computation time for calculating the result in Fig.~\\ref{fig:mu_est} and Fig.~\\ref{fig:kappa_est}. EM-Watson (magenta dashed line) has less computation time than EM-VMF (black dotted line) because it uses only half of the symmetry operators. EM-VMF-Hyper (cyan circle line) which uses the hyperbolic sinusoidal simplification of EM-VMF reduces the computation time by more than a factor of $2$.}\n \\label{fig:computation_time} \n\\end{figure}\n\n\\subsection{$\\mathcal G$-invariant Clustering on Simulated Data}\nIn this section, we demonstrate the performance of our proposed EM approaches for clustering. Sets of $n$ i.i.d. samples were simulated from the VMF or Watson distributions with $\\kappa=\\kappa_o$ and one of two mean directions ($\\bmu_1,\\bmu_2$) to generate two clusters of samples. The spherical symmetry group is $m\\overline{3}m$ as before. 
The number of samples for each set was set to $n=1000$ and $\\kappa_o$ was swept from $1$ to $100$ while, for each set, $\\bmu_1,\\bmu_2$ was selected uniformly at random. The experiment was repeated $100$ times and the average values of the inner product $(\\hat{\\bmu}_1^T\\bmu_1+\\hat{\\bmu}_2^T\\bmu_2)\/2$ are shown in Fig.~\\ref{fig:mu_est_2clusters}. In the figure we compare performances of the following methods: (1) Cluster the samples by standard K-means algorithm with the distance defined by the arc-cosine of the inner product and then use the naive ML within each cluster to estimate the mean directions (labeled \"K-means\"). (2) \nCluster the samples by K-means with the distance defined as (\\ref{eq:sym_dist}) and then use the aforementioned modified ML estimator (labeled \"Modified K-means\"). (3) Apply our proposed multi-cluster EM-VMF algorithm to the $n$ samples directly (\\ref{eq:mVMF_mClusters_Estep})-(\\ref{eq:mVMF_mClusters_Mstep}) (labeled \"EM-VMF\") (4) Apply our multi-cluster EM-Watson algorithm to the $n$ samples directly (\\ref{eq:Watson_mClusters_Mstep_hatT})-(\\ref{eq:Watson_mClusters_Mstep_kappa}) (labeled \"EM-Watson\").\n\nFigure \\ref{fig:mu_est_2clusters} shows the average inner product values $(\\hat{\\bmu}_1^T\\bmu_1+\\hat{\\bmu}_2^T\\bmu_2)\/2$ from the mean direction estimation. The proposed EM-VMF and EM-Watson are able to correctly cluster the samples and achieve perfect recovery of the two mean orientations much faster than the other K-means approaches. Notice that the region where all the methods fail is larger than the single cluster case since multiple clusters increase the difficulty of parameter estimation. Again, no matter whether the samples are simulated from VMF or Watson distribution, our proposed approaches perform equally well under both cases.\n\nTo further test the ability to detect multiple clusters given a set of samples, we generate $1000$ sets of samples. Each set has $1000$ samples and is assigned randomly to label $0$ or $1$. If the set is labeled $0$, the samples are generated from a single distribution; If the set is labeled $1$, then the samples in the set are randomly generated from two distributions with different means. The GLRT is used with the four aforementioned clustering methods to test whether the samples in each set are uni-modal or multi-modal. The Receiver Operating Characteristic (ROC) curves of the test results are shown in Fig.~\\ref{fig:ROC}. The naive K-means with ML estimator which does not consider the symmetry group actions fails to distinguish whether the multiple modes are from actual multiple distributions or due to the wrap-around effect from the fundamental zone mapping. Therefore, this approach tends to over-estimate the goodness of fit of the $H_1$ model for true negative cases and under-estimate it for true positive cases, resulting in a result that is even worse than random guessing. The modified K-means performs better than K-means but worse than our proposed EM-VMF and EM-Watson algorithms. \n\n\\begin{figure}\n \\centering\n \\subfigure[VMF Simulated Data]{\n \\label{fig:mu_est_2clusters_VMFdata}\n \\includegraphics[width=4.25cm]{figures\/Mu_est_2clusters_VMF}}\n \\subfigure[Watson Simulated Data]{\n \\label{fig:mu_est_2clusters_Watsondata}\n \\includegraphics[width=4.25cm]{figures\/Mu_est_2clusters_Watson}}\n \\caption{Mean orientation estimator comparisons for samples generated from two different means. 
Shown is the average inner product $(\\hat{\\bmu}_1^T\\bmu_1+\\hat{\\bmu}_2^T\\bmu_2)\/2$ of four methods when $\\bmu_1,\\bmu_2$ are the true mean orientations as a function of the true concentration parameter $\\kappa_o$ for the data simulated from VMF (Fig.~\\ref{fig:mu_est_2clusters_VMFdata}) and from Watson (Fig.~\\ref{fig:mu_est_2clusters_Watsondata}) distributions. The K-means with naive estimator (\"K-means\" in blue line) does not attain perfect estimation for any $\\kappa_o$. A modified K-means with ML estimator (\"modified K-means\" in green dashed line) achieve perfect estimation as $\\kappa_o$ becomes large. The proposed EM-VMF and EM-Watson methods (\"EM-VMF\", \"EM-Watson\") achieves perfect estimation much faster than the other methods.}\n \\label{fig:mu_est_2clusters} \n\\end{figure}\n\n\n\\begin{figure}\n \\centering\n \\subfigure[VMF Simulated Data]{\n \\label{fig:ROC_VMFdata}\n \\includegraphics[width=4cm]{figures\/Hypotest_VMF_kappa50}}\n \\subfigure[Watson Simulated Data]{\n \\label{fig:ROC_Watsondata}\n \\includegraphics[width=4cm]{figures\/Hypotest_Watson_kappa50}}\n \\caption{ROC curve for detecting bi-modally distributed samples. The samples are uni-modal or bi-modal distributed from VMF (Fig.~\\ref{fig:ROC_VMFdata}) or Watson (\\ref{fig:ROC_Watsondata}) distributions with $\\kappa_o=50$. The naive K-means with ML estimator cannot cluster the samples well and estimate the mean directions accurately, resulting in poor detection which is even worse than random guessing. The modified K-means (green dashed line) performs better than K-means but is still unsatisfactory. Our proposed EM-VMF (black dots) and EM-Watson (magenta dashed line) methods have very good performance in this detection task.}\n \\label{fig:ROC} \n\\end{figure}\n\n\\subsection{EM-ML orientation estimation for IN100 Nickel Sample}\nWe next illustrate the proposed EM-VMF and EM-Watson orientation estimators on a real IN100 sample acquired from US Air Force Research Laboratory (AFRL) \\cite{park_ebsd_2013}. The IN100 sample is a polycrystalline Ni superalloy which has cubic symmetry in the $m\\overline{3}m$ point symmetry group. EBSD orientation measurements were acquired on a $512\\times 384$ pixel grid, corresponding to spatial resolution of $297.7$ nm. The Kikuchi diffraction patterns were recorded on a $80\\times 60$ photosensitive detector for each of the pixels. \n\nFigure \\ref{fig:IN100} (a) shows a $200\\times 200$ sub-region of the full EBSD sample where the orientations are shown in the inverse pole figure (IPF) coloring obtained from the OEM EBSD imaging software and (b) is the back-scattered electron (BSE) image. Note that the OEM-estimated orientations in some grain regions of the IPF image are very inhomogeneous, having a mottled appearance, which is likely due to a fundamental zone wrap-around problem. As an alternative, we apply a combination of the proposed EM estimators (EM-VMF or EM-Watson) and the GLRT~(\\ref{eq:multi_sample_GLRT}) with $C=2$ and significance level $\\alpha=0.05$ to detect multi-modal distributions within each OEM-segmented region. Figure \\ref{fig:IN100} (c)(e) show the estimates of the mean orientations of the regions\/sub-regions, where the sub-regions surrounded by white boundaries indicate those that have been detected as deviating from the distribution of the majority of samples from the same region. The multi-modally distributed regions may be due to undetected grains, inaccurate segmentation, or noisy orientation observations. 
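\n\nThe detection step used here can be summarized schematically. In the fragment below the two fitting routines are assumed to return the maximized log-likelihoods of the one-component and two-component $\\mathcal{G}$-invariant mixture fits (e.g. from the EM algorithms described earlier); the number of extra degrees of freedom is only a placeholder, and the actual statistic and threshold follow the GLRT~(\\ref{eq:multi_sample_GLRT}) rather than the simple chi-square calibration shown here:\n\\begin{verbatim}\nfrom scipy.stats import chi2\n\ndef is_multimodal(samples, fit_one, fit_two, dof=3, alpha=0.05):\n    # Schematic GLRT: accept the two-component model when twice the\n    # log-likelihood ratio exceeds a chi-square quantile.\n    logL0 = fit_one(samples)   # maximized log-likelihood, one cluster\n    logL1 = fit_two(samples)   # maximized log-likelihood, two clusters\n    return 2.0*(logL1 - logL0) > chi2.ppf(1.0 - alpha, dof)\n\\end{verbatim}\n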
To distinguish the latter situations from the first in which the region really consists of two grains, the misalignment\/noise test introduced in~\\cite{chen_coercive_2015} can be used. Figures \\ref{fig:IN100} (d)(f) show the estimated concentration parameter $\\kappa$ for the regions\/sub-regions. Note that the estimated $\\kappa$ are large for most of the regions\/sub-regions because those regions which have multi-modally distributed samples are detected and their concentration parameters are estimated separately for each sub-region. \n\n\n\\begin{figure}[htb]\n \\centering\n \\subfigure[IPF from OEM]{\n \t\\includegraphics[width=3.2cm]{figures\/EA}}\n \\subfigure[BSE from OEM]{\n \t\\includegraphics[width=3.8cm]{figures\/BSE}}\n \n \n \\subfigure[EM-VMF $\\hat{\\bmu}$]{\n \t\\includegraphics[width=3.2cm]{figures\/Detected_Grains_VMF}}\t\n \\subfigure[EM-VMF $\\hat{\\kappa}$]{\n \t\\includegraphics[width=3.8cm]{figures\/Detected_Kappa_VMF}}\n \n \n \\subfigure[EM-Watson $\\hat{\\bmu}$]{\n \t\\includegraphics[width=3.2cm]{figures\/Detected_Grains_Watson}}\n \\subfigure[EM-Watson $\\hat{\\kappa}$]{\n \t\\includegraphics[width=3.8cm]{figures\/Detected_Kappa_Watson}}\n \\caption{A $200\\times 200$ sub-region of the IN100 sample. (a) is the IPF image for the Euler angles extracted from EBSD by OEM imaging software. IPF coloring in some grains is not homogeneous, likely due to symmetry ambiguity. (b) is the BSE image of the sample. (c)(e) show the estimates of the mean orientations of the regions\/sub-regions using a combination of the proposed EM estimators, EM-VMF and EM-Watson respectively, and the GLRT~(\\ref{eq:multi_sample_GLRT}) to detect multi-modal distributions within each OEM-segmented region. The sub-regions surrounded by white boundaries indicate those that have been detected as deviating from the distribution of the majority of samples from the same region. (d)(f) show the estimated concentration parameter $\\kappa$ for the regions\/sub-regions. Note that the estimated $\\kappa$ are large for most of the regions\/sub-regions because those regions which have multi-modally distributed samples are detected and their concentration parameters are estimated separately for each sub-region.}\n\\label{fig:IN100}\n\\end{figure}\n\\section{Conclusion}\n\\label{sec:conclusion}\nA hyperbolic $\\mathcal G$-invariant von Mises-Fisher distribution was shown to be equivalent to the distribution proposed in~\\cite{chen_parameter_2015}. The advantage of the hyperbolic form is parameter estimation can be performed with substantially fewer computations. A different group invariant orientation distribution was introduced, called the $\\mathcal{G}$-invariant Watson distribution, and an EM algorithm was presented that iteratively estimates its orientation and concentration parameters. We introduced multi-modal generalizations of these $\\mathcal G$-invariant distributions using mixture models and showed that these can be used to effectively cluster populations of orientations that have spherical symmetry group invariances. The mixture of VMF and Watson models were applied to the problem of estimation of mean grain orientation parameters in polycrystalline materials whose orientations lie in the $m\\overline{3}m$ point symmetry group. Application of the finite mixture representation to other types of groups would be worthwhile future work.\n\n\n\\section*{Acknowledgment}\nThe authors are grateful for inputs from Megna Shah, Mike Jackson and Mike Groeber. 
AOH would like to acknowledge financial support from USAF\/AFMC grant FA8650-9-D-5037\/04 and AFOSR grant FA9550-13-1-0043. MDG would like to acknowledge financial support from AFOSR MURI grant FA9550-12-1-0458.\n\n\n\n\n\n\n\n\n\\bibliographystyle{IEEEtran}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\n$B$-meson decays are an excellent source of information about the CKM mechanism and allow us to test our\n understanding of the CP violation. In nonleptonic $B$\ndecays we must deal with final states interactions (FSI) as well, since they may modify the values of the extracted\n parameters. It is hard to take FSI into consideration properly since\n there are a lot of possible decay channels.\n\\\\\n\nDuring the recent years several authors have investigated various possible corrections\n due to FSI. Most of the analyses take into account the elastic and inelastic effects\n arising from intermediate states containing light quarks ($u$, $d$, $s$) [1-6] and apply\n symmetries of strong interactions (isospin, SU(3)) to reduce the number of parameters.\n Some authors argued that intermediate states containing charmed quarks ($c$) may also play an important role [7-13].\n\\\\\n\nIn the present paper we analyse $B$ decays into two light noncharmed pseudoscalar mesons.\n We consider FSI originating only from the intermediate states containing c quarks.\nIn sections 2 and 3 we introduce our parametrisation and relations between the amplitudes.\n Section 4 contains the description of the fit procedure and our results together with the CP\n asymmetry predictions. Finally, a short summary is given in section 5.\n\n\\section{Short-distance amplitudes}\n\n The decays of $B$ meson into two noncharmed pseudoscalar mesons are characterised by 10 $SU(3)_{f}$ invariant\n amplitudes corresponding to the specific quark-line diagrams. As in \\cite{ZL,ZL2} we use four\n dominant amplitudes: tree $T(T^{'})$, colour-suppressed $C(C^{'})$, penguin\n $P(P^{'})$ and singlet penguin $S^{'}$. Unprimed (primed) amplitudes denote\n strangeness conserving (violating) processes and are related to each other.\n Topological decompositions of decay amplitudes can be found in \\cite{ZL}.\n\\\\\n\n We use the Wolfenstein parameters: $\\lambda=0.222$,\n $A=0.832$, $\\bar{\\rho}=0.224$ and $\\bar{\\eta}=0.317$ \\cite{fl}.\n All relations bellow are calculated up to $O(\\lambda^4)$ unless explicitly written\n otherwise. Terms proportional to $\\lambda^4$ are kept on account of complex factor in $P^{'}$, which may interfere with FSI correction. We assume that all short-distance (SD) strong phases\n are negligible. For the tree amplitude we have\n\\begin{equation}\nT^{'} = \\frac{V_{us}}{V_{ud}} \\frac{f_{K}}{f_{\\pi}} T =0.278 T,\n\\end{equation}\nwhere $\\frac{f_{K}}{f_{\\pi}}$ is the SU(3) breaking factor. Both $T$ and $T^{'}$\namplitudes have a weak phase equal $\\gamma$.\nWe assume that SD penguin amplitudes are dominated by $t$ quark contribution.\nWhen terms of order $\\lambda^{4}$ are included, the strangeness violating penguin amplitude $P^{'}$ acquires a small weak\nphase. Thus, $P^{'}$ can be represented as a sum of two terms, the second one due to the $O(\\lambda^4)$ correction:\n\\begin{equation}\nP^{'} = \\frac{V_{ts}}{V_{td}} P = -(5.241+0.105 e^{i\\gamma} )|P|\n\\end{equation}\nPenguin amplitude $P$ has weak phase -$\\beta$. 
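\n\nThe numerical factors quoted above follow directly from the Wolfenstein parameters. A short check is given below; the SU(3)-breaking ratio $f_{K}\/f_{\\pi}\\simeq 1.22$ is our own input rather than a value taken from the text, and the last line evaluates the ratio of $V_{cs}$ to $V_{cd}$ that enters the charming-penguin relation introduced below:\n\\begin{verbatim}\nlam, A = 0.222, 0.832           # Wolfenstein parameters used in the fits\nfK_over_fpi = 1.22              # assumed SU(3)-breaking ratio\n\nV_us = lam\nV_ud = 1 - lam**2\/2 - lam**4\/8\nprint(V_us\/V_ud*fK_over_fpi)    # ~0.278, the factor relating T' to T\n\nV_cs = 1 - lam**2\/2 - lam**4*(1 + 4*A**2)\/8\nV_cd = -lam\nprint(V_cs\/V_cd)                # ~ -4.388, the CKM ratio used below\n\\end{verbatim}\n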
We used in our fits the value of $\\beta=24^{o}$ consistent with the world average.\nThe singlet penguin has the same phase as penguin $P'$:\n\\begin{equation}\nS^{'}=e^{i arg(P')}|S'|\n\\end{equation}\n\\\\\n\n\nFinally, we accept relations between the tree and colour-suppressed amplitudes:\n\\begin{equation}\nC=\\xi T,\n\\end{equation}\n\n\\begin{equation}\nC^{'}=(\\xi -(1+\\xi )\\delta_{EW} e^{-i\\gamma})T^{'}\n\\end{equation}\nwhere $\\xi=0.17$ and $\\delta_{EW}=0.65$. The last equation includes electroweak penguin\n$P_{EW}^{'}$. The EW penguin contribution $\\sim \\delta_{EW} e^{-i\\gamma}$ was calculated (see e.g. \\cite{Neu2,Neu:98})\nwithout $\\lambda^{4}$ corrections. This fact should not affect the fits much since $P_{EW}^{'} \\sim S^{'}$ \\cite{h} and the small correction\n in $S^{'}$ is\npractically invisible in the fits (the only changes we\nobserved were in the asymmetry for the $B^{+} \\to \\eta^{'} K^{+}$ decay channel).\n\n\n\n\n\\section{Long-distance charming penguins }\nIt was argued [7-13] that the intermediate states composed of charmed mesons ($D\\bar{D}$, etc.), generated from the\n$b \\rightarrow c \\bar{c} d(s) $ tree amplitudes $T_{c}^{(')}$, may lead via rescattering to amplitudes of \npenguin topology with an internal $c$ quark (the \"charming penguin\"). Our calculations are similar as in\nthe case of long-distance $u$-type penguins \\cite{ZL2}. Assuming SU(3) symmetry, we can redefine\npenguins:\n\\begin{equation}\nP^{(')} \\rightarrow P^{(')}+id_{c}T_{c}^{(')}\n\\end{equation}\nwhere $d_{c}$ is related to the size of the LD charming penguin and is a complex number in general.\nBecause we do not have information about $d_{c}$ (or $T_{c}^{(')}$), it is convenient to\nintroduce the following parametrisation:\n\\begin{equation}\nid_{c}T_{c}^{(')}=P^{(')}_{cLD}e^{i \\delta_{c}}\n\\end{equation}\nStrong phase $\\delta_{c}$ and size $P^{(')}_{cLD}$ of the charming penguin are additional free\nparameters\nin our fits. The weak phases are determined by the tree amplitudes $T^{(')}_{c}$ and are either $\\pi$ or 0.\nWe can eliminate $P^{'}_{cLD}$ using the relation\n\\begin{equation}\n\\frac{P^{'}_{cLD}}{P_{cLD}}=\\frac{T^{'}_{c}}{T_{c}}=\\frac{V_{cs}}{V_{cd}}=-4.388\n\\end{equation}\n\\\\\n\nShort-distance charming penguin $P_{c}^{'}$ has the same weak phase as $P_{cLD}^{'}$.\nIt can be included in a new redefined charming penguin\n\\begin{equation}\nP^{(')}_{cef}e^{i \\delta} = P^{(')}_{c}+P^{(')}_{cLD}e^{i \\delta_{c}}\n\\end{equation}\nwith new effective size and strong phase.\n\n\n\n\n\\section{Results of fits }\nWe minimise function f defined as:\n\\begin{equation}\nf =\\sum_i{\\frac{(B_i^{\\rm theor}-B_i^{\\rm exp})^2}{(\\Delta B_i^{\\rm\nexp})^2}}\n\\end{equation}\nwhere $B_i^{\\rm theor(exp)}$ denote theoretical (experimental) CP-averaged\n branching fractions\nand $\\Delta B_{i}^{\\rm exp}$ is an experimental error for i-th decay channel.\nThe sum is over all 16 decay channels as in \\cite{ZL2,chpZ}.\nExperimental branching ratios and their errors are listed in Tables 1 and 2.\nThe connection between the amplitudes and branching ratios was corrected in our calculations for the lifetime\ndifference between $B^{+}$ and $B^{0}$:\n\n\\begin{equation}\n\\frac{\\tau_{B^{+}}}{\\tau_{B^{0}}}=1.068\n\\end{equation}\n\\\\\nWe considered two sets of data. The first one was the same as in \\cite{ZL2}. The second one was used\nin \\cite{chpZ}. 
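\n\nThe minimization itself is standard; a schematic of the objective (10) is shown below. The routine mapping the amplitude parameters ($|T|$, $|P|$, $|S'|$, $P_{cef}$, $\\gamma$, $\\delta$) to the 16 theoretical CP-averaged branching fractions is not reproduced here and is assumed to be available, and the choice of optimizer is ours and purely illustrative:\n\\begin{verbatim}\nimport numpy as np\nfrom scipy.optimize import minimize\n\ndef f_objective(params, B_exp, dB_exp, branching_model):\n    # Eq. (10): sum over the 16 decay channels of\n    # (B_theor - B_exp)^2 \/ (dB_exp)^2\n    B_theor = branching_model(params)\n    return np.sum((B_theor - B_exp)**2 \/ dB_exp**2)\n\n# params = (|T|, |P|, |S'|, P_cef, gamma, delta); a local minimizer can\n# then be started from several initial guesses, for example\n# res = minimize(f_objective, x0,\n#                args=(B_exp, dB_exp, branching_model),\n#                method='Nelder-Mead')\n\\end{verbatim}\n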
Data in Table 2 are more recent and differ from the previous ones in a couple of entries.\nWe performed fits in three general cases:\n\\begin{enumerate}\n\\item {without long-distance charming penguin contributions and with $|T|$, $|P|$, $|S'|$, $\\gamma$\ntreated as free parameters}\n\\item {with long-distance charming penguins described by real $P_{cef}$ as an additional parameter and\n$\\delta=0$, which is consistent with\n calculations done in \\cite{chpZ} but without any assumed connection between $P_{c}$ and $P_{t}$}\n\\item {with long-distance charming penguins described by two additional free parameters: $\\delta$, $P_{cef}$.}\n\\end{enumerate}\n\n\n\\begin{table}[h]\n\\caption{Fits to the first set of data (in units of $10^{-6}$)} \\label{tab:fit1}\n\\begin{center}\n{\\footnotesize\n\\begin{tabular}{|l|l|c|c|c|c|}\n\\hline\nDecay channel & Exp &SD amplitudes &\\multicolumn{3}{|c|}{Charming penguin} \\\\\n\n\n & &only &(case 2) & \\multicolumn{2}{|c|}{(case3)}\\\\\n\n & & (case 1) &$\\delta=0^{o}$ &$\\gamma$ free &$\\gamma=64.5^o$ \\\\\n\n\n\n\\hline \\hline\n$(B^+ \\to \\pi ^+ \\pi ^0)$ &$5.8\\pm 1.0$ &$5.01$ &5.65\t &$5.73$ &$5.85$\t\t \\\\\n$(B^+ \\to K ^+ \\bar{K}^0)$ &$0.0\\pm 2.0$ &$0.68$ &0.71\t &$2.10$ &$1.81$\t\t \\\\\n$(B^+ \\to\\pi ^+ \\eta)$ &$2.9\\pm 1.1$ &$2.15$ &1.76\t &$2.47$ &$2.24$\t\t \\\\\n$(B^+ \\to\\pi ^+ \\eta ')$ &$0.0\\pm 7.0$ &$1.07$ &0.88\t &$1.24$ &$1.12$\t\t \\\\\n\\hline\n$(B^0_d \\to \\pi ^+ \\pi ^- )$ &$4.7\\pm 0.5$ &$4.90$ &4.78\t &$4.76$ &$4.75$\t\t \\\\\n$(B^0_d \\to\\pi ^0 \\pi ^0)$ &$1.9\\pm 0.7$ &$0.62$ &0.73\t &$1.50$ &$1.36$\t\t \\\\\n$(B^0_d \\to K^+ K^-)$ &$0.0\\pm 0.6$ &$0.00$ &0 \t &$0.00$ &$0.00$\t\t \\\\ \t\t\t \n$(B^0_d \\to K^0 \\bar{K}^0)$ &$0.0\\pm 4.1$ &$0.62$ &0.66\t &$1.94$ &$1.67$\t\t \\\\ \t \t \n\n\\hline\n$(B^+ \\to \\pi ^+ K ^0 )$ &$18.1\\pm 1.7$ &$18.40$ &19.21\t &$18.67$ &$20.41$\t\t \\\\ \t \n$(B^+ \\to \\pi ^0 K ^+ )$ &$12.7\\pm 1.2$ &$13.11$ &13.10\t &$11.61$ &$10.63$\t\t \\\\ \t \n$(B^+ \\to\\eta K^+)$ &$4.1\\pm 1.1$ &$2.46 $ &2.30\t &$4.30 $ &$3.96$\t\t \\\\ \t \n$(B^+ \\to\\eta ' K^+)$ &$75\\pm 7.0$ &$73.00$ &73.37\t &$68.91$ &$69.69$\t\t \\\\ \t \n\\hline\n$(B^0_d \\to\\pi ^- K^+)$ &$18.5\\pm 1.0$ &$18.76$ &18.60\t &$18.38$ &$18.60$\t\t \\\\ \t \n$(B^0_d \\to\\pi ^0 K^0)$ &$10.2\\pm 1.2$ &$6.20$ &6.57\t &$7.76$ &$9.12$\t\t \\\\ \t \n$(B^0_d \\to\\eta K^0)$ &$0.0\\pm 9.3$ &$1.81$ &1.79\t &$3.19$ &$4.22$\t\t \\\\ \t \n$(B^0_d \\to\\eta ' K^0)$ &$56\\pm 9.0$ &$66.28$ &67.36\t &$62.35$ &$66.12$\t\t \\\\ \t \n\\hline\n\n$|T|$& &$2.60$ &2.76\t &$2.78$ &$2.81$ \\\\\n\n$|P|$& &$0.79$ &1.45\t &$2.59$ &$1.92$ \\\\\n\n$|S'|$& &$1.75$ &1.72\t &$2.46$ &$3.02$ \\\\\n\n$P_{cef}$& & &-0.77\t &$-2.81$ &$-2.32$ \\\\\n\n$\\gamma$& &$103^o$ &$94^o$\t &$110^o$ &$64.5^o$ \\\\\n\n$\\delta$& & &$0^o$\t &$\\pm18^o$ &$\\pm26^o$ \\\\\n\n$f_{m}$& &$15.36$ &14.79\t &$6.37$ &$9.39$ \\\\\n\\hline\n\\end{tabular}\n}\n\\end{center}\n\\end{table}\n\n\n\n\n\\begin{table}[h]\n\\caption{Fits to the second set of data (in units of $10^{-6}$)} \\label{tab:fit2}\n\\begin{center}\n{\\footnotesize\n\\begin{tabular}{|l|l|c|c|c|c|}\n\\hline\nDecay channel & Exp &SD amplitudes &\\multicolumn{3}{|c|}{Charming penguin} \\\\\n\n\n & &only &(case 2) & \\multicolumn{2}{|c|}{(case3)}\\\\\n\n & & (case 1) &$\\delta=0^{o}$ &$\\gamma$ free &$\\gamma=64.5^o$ \\\\\n\n\n\\hline \\hline\n$(B^+ \\to \\pi ^+ \\pi ^0)$ &$5.3\\pm 0.8$ &$4.27$ &5.32\t &$5.05$ &$5.40$\t\t \\\\\n$(B^+ \\to K ^+ \\bar{K}^0)$ &$0.0\\pm 2.4$ &$0.69$ &0.96\t &$2.55$ &$1.58$\t\t \\\\\n$(B^+ \\to\\pi 
^+ \\eta)$ &$4.2\\pm 0.9$ &$2.66$ &2.04\t &$3.04$ &$2.29$\t\t \\\\\n$(B^+ \\to\\pi ^+ \\eta ')$ &$0.0\\pm 4.5$ &$1.33$ &1.02\t &$1.52$ &$1.14$\t\t \\\\\n\\hline\t\t\t\t\t\t \t\t \t \n$(B^0_d \\to \\pi ^+ \\pi ^- )$ &$4.6\\pm 0.4$ &$5.09$ &4.76\t &$4.75$ &$4.70$\t\t \\\\\n$(B^0_d \\to\\pi ^0 \\pi ^0)$ &$1.9\\pm 0.5$ &$0.51$ &0.83\t &$1.65$ &$1.18$\t\t \\\\\n$(B^0_d \\to K^+ K^-)$ &$0.0\\pm 0.6$ &$0.00$ &0 \t &$0.00$ &$0.00$\t\t \\\\\n$(B^0_d \\to K^0 \\bar{K}^0)$ &$0.0\\pm 1.8$ &$0.64$ &0.89\t &$2.35$ &$1.46$\t\t \\\\\n\t\t\t\t\t\t \t\t \t \n\\hline\n$(B^+ \\to \\pi ^+ K ^0 )$ &$21.8\\pm 1.4$ &$19.10$ &22.11\t &$22.44$ &$21.57$ \t \\\\\n$(B^+ \\to \\pi ^0 K ^+ )$ &$12.8\\pm 1.1$ &$11.97$ &12.45\t &$10.92$ &$11.39$ \t \\\\\n$(B^+ \\to\\eta K^+)$ &$3.2\\pm 0.7$ &$2.03 $ &1.57\t &$2.71$ &$3.04$\t\t \\\\\n$(B^+ \\to\\eta ' K^+)$ &$77.6\\pm 4.6$ &$74.02$ &76.18\t &$75.27$ &$74.64$ \t \\\\\n\\hline\t\t\t\t\t\t \t\t \t \n$(B^0_d \\to\\pi ^- K^+)$ &$18.2\\pm 0.8$ &$17.57$ &18.20\t &$19.33$ &$19.01$ \t \\\\\n$(B^0_d \\to\\pi ^0 K^0)$ &$11.9\\pm 1.5$ &$6.86$ &8.03\t &$9.86$ &$9.14$\t\t \\\\\n$(B^0_d \\to\\eta K^0)$ &$0.0\\pm 4.6$ &$1.76$ &1.63\t &$3.85$ &$3.26$\t\t \\\\\n$(B^0_d \\to\\eta ' K^0)$ &$65.2\\pm 6.0$ &$68.66$ &72.32\t &$73.14$ &$70.76$ \t \\\\\n\\hline\n\n$|T|$& &$2.36$ &2.68\t &$2.61$ &$2.7$\t\t \\\\\n\t\t\t\t\t\t \t\t \t \n$|P|$& &$0.83$ &2.06\t &$2.63$ &$1.9$\t\t \\\\\n\t\t\t\t\t\t \t\t \t \n$|S'|$& &$1.77$ &1.69\t &$2.96$ &$2.61$\t\t \\\\\n\t\t\t\t\t\t \t\t \t \n$P_{cef}$& &\t &-1.45\t &$-3.05$ &$-2.07$ \t \\\\\n\t\t\t\t\t\t \t\t \t \n$\\gamma$& &$85^o$ &$68^o$\t &$22^o$ &$64.5^o$ \t \\\\\n$\\delta$& &\t &$0^o$\t &$\\pm19^o$ &$\\pm26^o$\t \\\\\n\n$f_{m}$& &$27.98$ &24.73\t &$14.97$ &$15.71$ \t \\\\\n\\hline\n\\end{tabular}\n}\n\\end{center}\n\\end{table}\n\n\n\n\\begin{figure}[h]\n\n\\includegraphics[angle=-90,width=0.5\\textwidth]{cv.eps}\n\\includegraphics[angle=-90,width=0.5\\textwidth]{cv2.eps}\n\n\\caption{\\small Dependence of $f_{m}(\\gamma)$ on $\\gamma$ for the first (left) and second (right) set of data.\nSolid lines denote case without charming penguin, dashed lines - case with charming penguin and $\\delta =0$, \ndotted lines - case with charming penguin and $\\delta$ let free.}\n\n\\end{figure}\n\n\n\n\nResults of the fits are contained in Tables 1, 2. The branching fractions were calculated for the best\nfits and for the fit with fixed $\\gamma=64.5^{o}$. The minimums $f_{m}$ obtained by minimising $f$ of (10) are showed in the last rows of the tables.\nIn general, the fitted values of $\\gamma$ are far from the standard model prediction.\nTo find out what happens one should study the dependence of the fitted function on $\\gamma$.\nLet us denote by $f_{m}(\\gamma)$ the minimum values of $f$ obtained when keeping $\\gamma$ fixed.\nThe function $f_{m}(\\gamma)$ is obtained either by setting $P_{cef}$=0, or by assuming $\\delta=0$ while letting\n $P_{cef}$ free, or by letting both $P_{cef}$ and $\\delta$ free.\nFigure 1 shows $f_{m}(\\gamma)$ for the first (left) and second (right) set of data. The worst fits are those\nwithout charming penguins (solid lines). The minimal values were achieved for $\\gamma=103^o$ and\n$\\gamma=85^o$ respectively. For both fits with charming penguin and the strong phase $\\delta_{c}=0$ (dashed lines), the\nbest fit corresponds to $\\gamma$\nshifted down by $9^{o}$($17^{o}$) and a slightly lower value of $f_{m}$. In the third case shown (dotted\n lines) $\\delta$ was let free. 
For the first set of data, the $f_{m}(\\gamma)$ is fairly small over the whole\n region shown $(\\gamma \\in (0,120^{o}))$.\n For the second, more recent set of data this region is restricted to about $10^{o}-80^{o}$. Since the values of\n $f_{m}$ differ a little in the above-mentioned region we should rather think of an allowed range of $\\gamma$.\n\\\\\n\nThe values of fitted parameters $|P|$,$|S^{'}|$,$|P_{cef}|$ vary for different values of $\\gamma$.\nThe most stable are the ratio $\\frac{P_{cef}}{P}$ and the strong phase $\\delta$. $|\\frac{P_{cef}}{P}|$\nchanges from 1.1 to 1.3(1.2) only. The function $f_{m}(\\delta ,\\gamma)$ has a deep minimum around\n$ \\delta \\approx \\pm 20^{o}$ (Fig.2) for a wide range of fits with fixed $\\gamma$. Both positive and negative signs of $\\delta$ are allowed\nas the fitted function is\nsymmetric under $\\delta \\leftrightarrow -\\delta$. The fact that the ratio $|\\frac{P_{cef}}{P}|$ is close to unity is in agreement with the calculation in \\cite{buras}. On the other hand, for the\n best fits with $\\delta=0$ the ratio $|\\frac{P_{cef}}{P}|$ is about 0.53(0.7). This value for the second set of data is higher than that assumed in \\cite{chpZ}.\n\\begin{figure}[h]\n\\begin{center}\n\n\\includegraphics[width=0.9\\textwidth]{delta_23.eps}\n\n\\caption{\\small Dependence of $f_{m}(\\gamma ,\\delta)$ on $\\delta$ for selected\nvalues of $\\gamma$, dashed (solid) lines denote the first (second) set of data.}\n\\end{center}\n\n\\end{figure}\n\n\\begin{table}[h]\n\\caption{Asymmetries generated by charming penguin for $\\delta>0$ (for $\\delta<0$ asymmetries are of opposite\nsign)}\n\\begin{center}\n{\\footnotesize\n\\begin{tabular}{|l|c|c|c|c|c|}\n\\hline\nDecay channel & \\multicolumn{2}{|c|}{First set of data}&\\multicolumn{2}{|c|}{Second set of data} &Experiment\\\\\n & $\\gamma$ fitted&$\\gamma=64.5^o$ &$\\gamma$ fitted&$\\gamma=64.5^o$ & \\\\\n\\hline \\hline\n$(B^+ \\to \\pi ^+ \\pi ^0)$ &$0$ &0 &$0$ &0&$-0.07 \\pm 0.14$\\\\\t \t\t\t\t\t\t\t\t \n$(B^+ \\to K ^+ \\bar{K}^0)$ &-0.93&-0.90 &-0.90&-0.98&-\\\\ \t\t \t\t\t\t\t\t\t\t \n$(B^+ \\to\\pi ^+ \\eta)$ &0.48&0.87 &0.47&0.76&$-0.44 \\pm 0.18 \\pm 0.01$\\\\\t\t\t\t\t\t\t\t \n$(B^+ \\to\\pi ^+ \\eta ')$&0.48&0.87 &0.47&0.76&-\\\\\t\t\t \t\t\t\t\t\t\t\t \n\\hline\n$(B^+ \\to \\pi ^+ K ^0 )$&0.11&0.08 &0.04&0.07&$0.02 \\pm 0.06$\\\\\t \t\t\t\t\t\t\t\t \n$(B^+ \\to \\pi ^0 K ^+ )$ &-0.21&-0.28 &-0.09&-0.23&$0.00 \\pm 0.12$\\\\\t \t\t\t\t\t\t\t\t \n$(B^+ \\to\\eta K^+)$ &0&0 &0&0&$-0.52 \\pm 0.24 \\pm 0.01$\\\\ \t\t\t\t\t\t\t\t \n$(B^+ \\to\\eta ' K^+)$ &0.006&-0.004 &0.005&-0.004&$0.02 \\pm 0.042$\\\\ \t\t\t\t\t\t\t\t \n\\hline\n$(B^0_d \\to\\pi ^- K^+)$ & -0.19&-0.24 & -0.075&-0.21&$-0.09 \\pm0.03$\\\\ \t\t\t\t\t\t\t\t \n\n\\hline\n\\end{tabular}\n}\n\\end{center}\n\\end{table}\n\n\nCharming penguins with a nonvanishing strong phase may be a source of direct CP asymmetries.\nThe predicted values were calculated for the same points as in Tables 1,2. The results are\ngiven in Table 3 together with the averages from Belle, BABAR and CLEO experiments \\cite{Rosner}.\nThe main features are large asymmetries in the $\\Delta S=0$ sector with relatively small asymmetries\nfor the $\\Delta S=1$ decays channels. We are not able to predict the absolute signs of the asymmetries\nsince we have two allowed signs of $\\delta$. 
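\n\nFor orientation, the way a non-vanishing strong phase generates direct CP asymmetries can be recalled with a generic two-amplitude toy formula; this is a textbook relation rather than the amplitude decomposition used in our fits, and the overall sign depends on conventions:\n\\begin{verbatim}\nimport numpy as np\n\ndef direct_acp(A1, phi1, A2, phi2, delta):\n    # Toy direct CP asymmetry for two interfering amplitudes;\n    # the weak phases phi1, phi2 change sign under CP, the strong\n    # phase delta does not.\n    amp    = A1*np.exp(1j*phi1) + A2*np.exp(1j*(phi2 + delta))\n    amp_cp = A1*np.exp(-1j*phi1) + A2*np.exp(1j*(delta - phi2))\n    G, Gbar = abs(amp)**2, abs(amp_cp)**2\n    return (Gbar - G)\/(Gbar + G)\n\\end{verbatim}\nA non-zero result requires both a weak-phase difference and a non-zero strong phase, consistent with the role played by $\\delta$ in Table 3.\n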
The asymmetry for $(B^+ \\to \\pi ^+ K ^0 )$\nis a pure $\\lambda^{4}$ effect and shows a potential influence of this correction.\n\n\n\n\\section{Conclusions}\nOur results permit to draw the following conclusions:\n\\begin{enumerate}\n\\item{Even without the charming penguins the value of angle $\\gamma$ extracted from the fit depends on the details of data. More recent data prefer the value of $\\gamma$ \nmore in accordance with the expectations of the standard model. }\n\\item{If we admit the non-zero value of the charming penguin (with strong phase equal zero), the fitted values of $\\gamma$ may move toward the SM value by $10^{o}-15^{o}$.}\n\\item{Admitting strong phase of the charming penguin as a free parameter leads to a relatively flat function $f_{m}(\\gamma)$ i.e. it allows a wide range of $\\gamma$.\n This means that there is probably too much freedom in the fits. However, the fitted strong phase $\\delta$ is\n relatively stable and close to $\\pm 20^{o}$.}\n\\end{enumerate}\n\n\n{\\it Acknowledgements}. I would like to thank P. \\.Zenczykowski for helpful discussions and comments.\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nQuantum states can be well described in terms of Wigner functions.\nThe states with Gaussian Wigner function have been of particular\ninterests in context of quantum information processing. Entanglement\nin such states in terms of quadratures are also well studied. On the\nother hand, the quantum states with non-Gaussian Wigner function are\nalso quite important. For example, a single-photon state, which\nfinds many applications in quantum information processing, shows\nnon-Gaussian behavior in phase-space. In a recent experiment, a\nnon-Gaussian state has been produced by homodyne detection technique\nfrom a single-mode squeezed state of light \\cite{wenger,kim}.\nFor certain non-Gaussian states, Wigner functions can take negative\nvalues. Such negativity refers to nonclassicality of these states.\nThese states are useful in entanglement distillation\n\\cite{eisert,cirac}, loophole-free tests of Bell's inequality\n\\cite{bell}, and quantum computing \\cite{sanders}. A specific class\nof such nonclassical states has been shown to be similar to the\nSchrodinger kitten state, in the sense that their Wigner functions\nshow negativity at the origin of phase space \\cite{science,polzik}.\nIt is well known that the Schrodinger cat states \\cite{knight},\nwhich are quantum superpositions of coherent states, are\nnon-classical in nature and are very important to study the interface\nof quantum and classical worlds. Superposition of coherent states\nwith low amplitudes creates Schrodinger kitten states. Most of the\nexperiments to prepare the Schrodinger cat states have been\nperformed in cavities or bound systems. Thus they are not much\nuseful in quantum information networks though they have the\nnon-Gaussian nature which is required in certain quantum\ncommunication protocols. In \\cite{science,polzik}, it has been shown\nhow to prepare an Schrodinger kitten state in an optical system, by\nsubtracting a single photon from a squeezed vacuum state. This\noptical kitten state would overcome the limitations of bound\nsystems. Repeated photon-subtractions can lead to conditional\ngeneration of arbitrary single-mode state \\cite{cerf}. 
We note that\nsimilar non-Gaussian states could be prepared by adding a single\nphoton to a squeezed vacuum state (see \\cite{tara,bellini} for\ndetails of photon-added coherent states, which are also non-Gaussian\nstates). These states are equivalent to single-photon subtracted\nsqueezed vacuum state and exhibit similar behavior in phase space.\nIt is worth to mention that non-Gaussian two-mode entangled states\ncan be prepared by subtracting a photon from a two-mode squeezed\nstate \\cite{sasaki,paris}.\n\n\\begin{figure*}\n\\begin{center}\n\\caption{\\label{sq_figs}Plots of Wigner functions of single-photon\nsubtracted squeezed states for (a) $r=0.31$ and (b) $r=0.8$ with\n$\\theta=0$.}\n\\end{center}\n\\end{figure*}\n\\begin{figure}\n\\caption{\\label{cond}Variation of $C$ in phase space for $r=0.31$\nand $\\theta=0$.}\n\\end{figure}\n\\begin{figure}\n\\begin{center}\n\\caption{\\label{ellipse}Contour plot for $C=$ constant in phase space for\n(a) $r=0.31$ and (b) $r=0.8$.}\n\\end{center}\n\\end{figure}\n\nIn this paper, we focus our study on the nonclassical properties and\ndecoherence of single-photon subtracted squeezed vacuum states which\nare optically produced single-mode non-Gaussian states. The\nstructure of the paper is as follows. In Sec. II, we introduce the\nphoton-subtracted squeezed states and discuss its nonclassical\nproperties in terms of the sub-Poissonian statistics and the\nnegativity of its Wigner function. We derive a compact expression of\nthe Wigner function and find the region in phase space where it\nbecomes negative. We show that there is an upper bound of the\nsqueezing parameter for this state to exhibit sub-Poissonian\nstatistics. In Sec. III, we study the effects of two different model\nof decoherence: photon-number decay and phase damping. In both\ncases, we derive analytical expressions for the time-evolution of\nthe state and its Wigner function. We discuss the loss of\nnonclassicality due to decoherence. We show through the study of\nevolution of the Wigner function how the state decays to vacuum as a\nresult of photon-number decay. We further show that phase damping\nleads to much slower decoherence than the photon-number decay.\n\n\\begin{figure}\n\\begin{center}\n\\scalebox{0.6}{\\includegraphics{fig5.eps}}\n\\end{center}\n\\caption{\\label{Qdecay}Variation of the $Q$-parameter\nwith time in presence of decoherence due to decay of\nphoton for squeezing parameters $r=0.31$ (red line) and $r=0.8$\n(green line).}\n\\end{figure}\n\n\\begin{figure}\n\\caption{\\label{negative_time}Variation of $P\/(a^2-4|c|^2)^{5\/2}$ in\nphase space for $r=0.31$, $\\theta=0$, and $\\kappa t=0.1$.}\n\\end{figure}\n\n\\section{Photon-subtracted squeezed states}\nAn unnormalized single-mode squeezed vacuum state is given by\n\\begin{equation}\n|\\psi\\rangle\\equiv\nS(\\zeta)|0\\rangle\\;,\\;S(\\zeta)=\\exp\\left[\\frac{\\zeta}{2} a^{\\dag\n2}-\\frac{\\zeta^*}{2} a^2\\right]\\;,\n\\end{equation}\nwhere $S(\\zeta)$ is the squeezing operator, $\\zeta=re^{i\\theta}$ is\nthe complex squeezing parameter, and $a$ is the annihilation\noperator. The Wigner function of this state is Gaussian and positive\nin phase space. When $p~(>0)$ number of photons are subtracted from\nsuch states, the state can be written as\n\\begin{equation}\n|\\psi\\rangle_p \\equiv a^p S(\\zeta)|0\\rangle \\equiv\na^p\\exp\\left[\\frac{\\xi}{2}a^{\\dag 2}\\right]|0\\rangle\\;,\n\\end{equation}\nwhere $\\xi=\\tanh(\\zeta)$. 
For odd $p=2m+1$, the normalized form of\nthis state can be written as\n\\begin{equation}\n\\label{odd}|\\psi\\rangle_p =\\frac{1}{N_o}\\sum_{s=0}^\\infty\n\\frac{(\\xi\/2)^{s+m+1}}{(s+m+1)!}\\frac{(2s+2m+2)!}{\\sqrt{2s+1}}|2s+1\\rangle\\;,\n\\end{equation}\nwhile for even $p=2m$, the state becomes\n\\begin{equation}\n\\label{even}|\\psi\\rangle_p=\\frac{1}{N_e}\\sum_{s=0}^\\infty\n\\frac{(\\xi\/2)^{s+m}}{(s+m)!}\\frac{(2s+2m)!}{\\sqrt{2s}}|2s\\rangle\\;,\n\\end{equation}\nwhere $N_o$ and $N_e$ are the normalization constants. In this\npaper, we focus on the case when a single photon is subtracted from\nthe squeezed vacuum, i.e., for $m=0$ in (\\ref{odd}).\n\\begin{figure}\n\\begin{center}\n\\caption{\\label{ellipse1}Plot of $16C'\/(1-e^{-2\\kappa t})$ in phase\nspace for (a) $r=0.31$ and (b) $r=0.8$ at times (i) $\\kappa t=0.05$,\n(ii) $\\kappa t=0.1$, (iii) $\\kappa t=0.3$, and (iv) $\\kappa t=0.5$.}\n\\end{center}\n\\end{figure}\n\n\\begin{figure*}\n\\begin{center}\n\\caption{\\label{small_sq}Wigner function of single-photon\nsubtracted squeezed states for $\\theta=0$ and $r=0.31$ at (a)\n$\\kappa t=0.05$, (b) $\\kappa t=0.1$, (c) $\\kappa t=0.3$, and (d)\n$\\kappa t=0.5$.}\n\\end{center}\n\\end{figure*}\n\n\\begin{figure*}\n\\begin{center}\n\\caption{\\label{large_sq}Wigner function of single-photon\nsqueezed states for $\\theta=0$ and $r=0.8$ at (a) $\\kappa t=0.1$,\n(b) $\\kappa t=0.3$, (c) $\\kappa t=0.5$, and (d) $\\kappa t=0.7$.}\n\\end{center}\n\\end{figure*}\n\\subsection{Negativity of the Wigner function}\nThe Wigner function of the squeezed vacuum state $|\\psi\\rangle$ is\ngiven by\n\\begin{equation}\n\\label{w_sq}W_{\\rm\nsq}(\\alpha,\\alpha^*)=\\frac{2}{\\pi}\\exp(-2|\\tilde{\\alpha}|^2)\\;,\n\\end{equation}\nwhere $\\tilde{\\alpha}=\\alpha\\cosh(r)-\\alpha^*e^{i\\theta}\\sinh(r)$.\nThe function is Gaussian in phase space, as shown in Fig.\n\\ref{vacuum}. We now calculate the Wigner function of the state\n$|\\psi\\rangle_p$. This state can be rewritten as\n\\begin{equation}\n\\label{rel}|\\psi\\rangle_p =a^p S(\\zeta)|0\\rangle=S(\\zeta)S^\\dag\n(\\zeta)a^pS(\\zeta)|0\\rangle\\;.\n\\end{equation}\nUsing the relation\n\\begin{equation}\nS^\\dag(\\zeta)a^pS(\\zeta)=[\\cosh(r)a+e^{i\\theta}\\sinh(r)a^{\\dag}]^p\\;,\n\\end{equation}\nwe get the following from (\\ref{rel})\n\\begin{equation}\n\\label{state}|\\psi\\rangle_1\\equiv S(\\zeta)|1\\rangle\\;,\n\\end{equation}\nfor $p=1$. For a density matrix $\\tilde{\\rho}(a,a^\\dag)$, we can\nwrite the following:\n\\begin{equation}\n\\label{relation}S(\\zeta)\\tilde{\\rho}(a,a^\\dag)S^\\dag\n(\\zeta)=\\tilde{\\rho}[S(\\zeta)aS^\\dag (\\zeta),S(\\zeta)a^\\dag\nS^\\dag(\\zeta)]\\;.\n\\end{equation}\nWe write the Wigner function of $\\tilde{\\rho}(a,a^\\dag)$ as\n$W_{\\tilde{\\rho}}(\\alpha,\\alpha^*)$. 
Using the identities\n\\begin{eqnarray}\nS(\\zeta)aS^\\dag (\\zeta) &=&a\\cosh(r)-a^\\dag e^{i\\theta}\\sinh(r)\\;,\\\\\nS(\\zeta)a^\\dag S^\\dag(\\zeta) &=& a^\\dag\n\\cosh(r)-ae^{-i\\theta}\\sinh(r)\\;,\n\\end{eqnarray}\nin (\\ref{relation}), we can thus write the Wigner function of the\ndensity matrix $\\rho(a,a^\\dag)=S(\\zeta)\\tilde{\\rho}(a,a^\\dag)S^\\dag\n(\\zeta)$ as\n\\begin{equation}\n\\label{formula}W_{\\rho}(\\alpha,\\alpha^*)=W_{\\tilde{\\rho}}(\\tilde{\\alpha},\\tilde{\\alpha}^*)\\;,\n\\end{equation}\nwhere we have used the linearity property of the Wigner function.\nFor the state (\\ref{state}), $\\tilde{\\rho}=|1\\rangle\\langle 1|$ and\nits Wigner function is given by\n\\begin{equation}\n\\label{wsq}W_{\\tilde{\\rho}}(\\alpha,\\alpha^*)=\\frac{2}{\\pi}(4|\\alpha|^2-1)e^{-2|\\alpha|^2}\\;.\n\\end{equation}\nThus, the Wigner function of the single-photon subtracted squeezed\nvacuum state becomes\n\\begin{equation}\n\\label{wig1}W_\\rho(\\alpha,\\alpha^*)=\\frac{2}{\\pi}(4|\\tilde{\\alpha}|^2-1)e^{-2|\\tilde{\\alpha}|^2}\\;,\n\\end{equation}\nwhere we have used (\\ref{formula}) and (\\ref{wsq}). Clearly, the\nWigner function (\\ref{wig1}) is non-Gaussian in phase-space. We show\nthe plot of this Wigner function in the phase-space in Figs.\n\\ref{sq_figs} for different squeezing parameters. As an evidence of\nnon-classicality of the state, squeezing in one of the quadratures\nis clear in the plots. Also there is some negative region of the\nWigner function in the phase-space which is another evidence of the\nnon-classicality of the state. The function becomes negative in\nphase space, when\n\\begin{equation}\n\\label{neg_cond}|\\tilde{\\alpha}|^2<\\frac{1}{4}\\;.\n\\end{equation}\nWe show in Fig. \\ref{cond} the variation of\n$C=|\\tilde{\\alpha}|^2-\\frac{1}{4}$ in phase space. The negative\nregion of $C$ corresponds to the negativity of the Wigner function.\nNote that $C=$ constant corresponds to ellipse in phase space, as\nshown in Fig. \\ref{ellipse}.\n\nNote that the photon-subtracted squeezed states are similar to\nSchrodinger kitten states \\cite{science,polzik} because Wigner functions\nexhibit the same characteristics in phase-space especially for the\nlarge values of the squeezing parameters. Moreover, in both cases,\nWigner function becomes negative in the center of phase space.\n\n\\subsection{Sub-Poissonian nature of the photon-subtracted state}\nThe nonclassicality of the state $|\\psi\\rangle_1$ can also be\nanalyzed by studying its sub-Poissonian character in terms of the\nMandel's $Q$-parameter \\cite{mandel} which is defined by\n\\begin{equation}\n\\label{Qpara}Q=\\frac{\\langle a^{\\dag 2}a^2\\rangle-\\langle a^\\dag\na\\rangle^2}{\\langle a^\\dag a\\rangle}\\;.\n\\end{equation}\nThe negativity of the $Q$-parameter refers to sub-Poissonian\nstatistics of the state. However in \\cite{tara_Q}, it has been shown\nthat a state can be nonclassical even if $Q$ is positive. A similar\nsituation occurs in the present case. 
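\n\nIndeed, while $Q$ turns out to be positive above a certain squeezing threshold (see below), the Wigner function (\\ref{wig1}) keeps a negative region around the origin for every $r$, which is easy to confirm numerically. A minimal check (our own illustration) is:\n\\begin{verbatim}\nimport numpy as np\n\ndef wigner_sps(alpha, r, theta=0.0):\n    # Wigner function of the single-photon subtracted squeezed vacuum\n    # derived above: W = (2\/pi)(4|at|^2 - 1) exp(-2|at|^2), with\n    # at = alpha cosh(r) - conj(alpha) exp(i theta) sinh(r).\n    at = alpha*np.cosh(r) - np.conj(alpha)*np.exp(1j*theta)*np.sinh(r)\n    m2 = np.abs(at)**2\n    return 2.0\/np.pi*(4.0*m2 - 1.0)*np.exp(-2.0*m2)\n\nx, p = np.meshgrid(np.linspace(-3, 3, 201), np.linspace(-3, 3, 201))\nfor r in (0.31, 0.8):\n    W = wigner_sps(x + 1j*p, r)\n    print(W.min())   # about -2\/pi for either r: negative region at the origin\n\\end{verbatim}\n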
For the state (\\ref{odd}), we\nfind\n\\begin{eqnarray}\n\\langle a^{\\dag\n2}a^2\\rangle &=&\\sum_{n=0}^\\infty n(n-1)\\rho_{n,n}=\\frac{3|\\xi|^4(3+2|\\xi|^2)}{N_o^2(1-|\\xi|^2)^{7\/2}}\\;,\\nonumber\\\\\n\\label{avg1}\\langle a^\\dag a\\rangle &=&\\sum_{n=0}^\\infty\nn\\rho_{n,n}=\\frac{|\\xi|^2(1+2|\\xi|^2)}{N_o^2(1-|\\xi|^2)^{5\/2}}\\;,\n\\end{eqnarray}\nwhere the normalization constant is given by\n\\begin{equation}\nN_o^2=\\frac{|\\xi|^2}{(1-|\\xi|^2)^{3\/2}}\\;.\n\\end{equation}\nFrom (\\ref{avg1}), we find that $Q$ becomes negative for\n$|\\xi|\\lesssim 0.43$, which is satisfied for $r\\lesssim 0.46$. We\nemphasize that the Wigner function has negative region for all\nvalues of $r$, and thus the photon-subtracted squeezed state is\nnonclassical for all $r$, though it does not exhibit sub-Poissonian\nphoton statistics above certain squeezing threshold.\n\n\\section{Models of decoherence}\nWe next consider how this state evolves under decoherence. The\ndecoherence of the single-mode state (\\ref{odd}) can be due to decay\nof photons to the reservoir or due to phase damping.\n\n\\subsection{Amplitude decay model}\nWhen the photons decay to reservoir, the corresponding Markovian\ndynamics of the state is well described by the following equation:\n\\begin{equation}\n\\frac{d}{dt}\\rho=-\\kappa(a^\\dag a\\rho-2a\\rho a^\\dag+\\rho a^\\dag a)\\;,\n\\end{equation}\nwhere $\\kappa$ is the rate of decay. The solution of this equation\ncan be written as\n\\begin{equation}\n\\label{sol}\\rho(t)=\\sum_{n,n'}\\rho_{n,n'}(t)|n\\rangle\\langle n'|\\;,\n\\end{equation}\nwhere the density matrix element $\\rho_{n,n'}(t)$ can be found by\nusing the Laplace transformation and the iteration methods\n\\cite{gsa_po}. To see this, let us start with the time-dependent\nequation for $\\rho_{n,n'}$:\n\\begin{equation}\n\\dot{\\rho}_{n,n'}=-\\kappa(n+n')\\rho_{n,n'}+2\\kappa\\sqrt{(n+1)(n'+1)}\\rho_{n+1,n'+1}\\;.\n\\end{equation}\nUsing the new subscripts $q=n-n'$ and $p=(n+n')\/2$, the above\nequation transforms into\n\\begin{equation}\n\\label{pq}\\dot{\\rho}_{p,q}=-2\\kappa p\\rho_{p,q}+2\\kappa\n\\sqrt{(p+1)^2-(q\/2)^2}\\rho_{p+1,q}\\;.\n\\end{equation}\nTaking Laplace transformation of (\\ref{pq}) and using the original\nsubscript $n$ and $n'$, we can write the time-dependent solution for\nthe density matrix elements as\n\\begin{eqnarray}\n\\rho_{n,n'}(t)&=&e^{-\\kappa\nt(n+n')}\\sum_{r=0}^\\infty\\sqrt{\\left(^{n+r}C_r\\right)\\left(^{n'+r}C_r\\right)}\\nonumber\\\\\n\\label{rhonn}&&\\times (1-e^{-2\\kappa t})^r\\rho_{n+r,n'+r}(t=0)\\;,\n\\end{eqnarray}\nwhere for the single-photon subtracted squeezed vacuum\n\\begin{eqnarray}\n\\rho_{n+r,n'+r}(t=0)&=&\\frac{1}{N_o^2}\\frac{(\\xi\/2)^{(n+r+1)\/2}(\\xi^*\/2)^{(n'+r+1)\/2}}{(\\frac{n+r+1}{2})!(\\frac{n'+r+1}{2})!}\\nonumber\\\\\n&&\\label{init}\\times\\frac{(n+r+1)!(n'+r+1)!}{\\sqrt{(n+r)!(n'+r)!}}\\;.\n\\end{eqnarray}\n\nWe next calculate the parameter $Q$ [Eq. (\\ref{Qpara})] for the\nstate (\\ref{sol}). We have found that\n\\begin{eqnarray}\n\\langle a^{\\dag 2}a^2\\rangle &=&\\sum_{n=0}^\\infty\nn(n-1)\\rho_{n,n}(t)\\;,\\nonumber\\\\\n\\label{avg2}\\langle a^\\dag a\\rangle &=&\\sum_{n=0}^\\infty n\\rho_{n,n}(t)\\;,\n\\end{eqnarray}\nwhere $\\rho_{n,n}(t)$ is given by (\\ref{rhonn}) and (\\ref{init}) for\n$n=n'$ in case of state $|\\psi\\rangle_1$. Using Eqs. (\\ref{avg2}), we\nplot $Q$ with time in Fig. \\ref{Qdecay}. It is easy to see that\nat long times ($\\kappa t\\rightarrow \\infty$), $Q$ vanishes. 
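\n\nThis behaviour is straightforward to reproduce numerically. The sketch below (our own illustration, with an ad hoc truncation of the Fock basis) builds the initial state $S(\\zeta)|1\\rangle$ by exponentiating the squeezing generator, propagates the populations with Eq. (\\ref{rhonn}), and evaluates $Q$ from Eqs. (\\ref{avg2}):\n\\begin{verbatim}\nimport numpy as np\nfrom scipy.linalg import expm\nfrom scipy.special import comb\n\nN = 60                                    # Fock-space truncation (ad hoc)\na = np.diag(np.sqrt(np.arange(1, N)), 1)  # annihilation operator\nr = 0.31\nS = expm(0.5*r*(a.T @ a.T - a @ a))       # squeeze operator, theta = 0\np0 = np.abs(S[:, 1])**2                   # populations of S(zeta)|1)\n\ndef q_mandel(kt):\n    # Mandel Q at scaled time kappa*t under photon-number decay;\n    # rho_nn(t) = sum_r C(n+r,r) exp(-2*kt*n) (1 - exp(-2*kt))^r rho_{n+r,n+r}(0)\n    e = np.exp(-2.0*kt)\n    n = np.arange(N)\n    pt = np.array([sum(comb(m, m - k)*e**k*(1 - e)**(m - k)*p0[m]\n                       for m in range(k, N)) for k in n])\n    nbar = np.sum(n*pt)\n    return (np.sum(n*(n - 1)*pt) - nbar**2)\/nbar\n\nprint(q_mandel(0.0), q_mandel(3.0))   # sub-Poissonian at t = 0, tends to 0 later\n\\end{verbatim}\n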
This is\nbecause at this limit, $\\rho_{n,n}(t)$ vanishes for all non-zero $n$\nand $\\rho_{0,0}(t\\rightarrow \\infty)=1$, i.e., the state decays to\nvacuum. Thus the averages (\\ref{avg2}) vanish and $Q$ also vanishes.\n\n\\subsubsection{Evolution of Wigner function}\nThe evolution of the Wigner function is governed by the following\nequation:\n\\begin{equation}\n\\label{wignereq}\\frac{\\partial W}{\\partial\nt}=\\kappa\\left[\\frac{\\partial}{\\partial\n\\alpha}\\alpha+\\frac{\\partial}{\\partial\n\\alpha^*}\\alpha^*+\\frac{\\partial^2}{\\partial\n\\alpha\\partial\\alpha^*}\\right]W(\\alpha,\\alpha^*)\\;.\n\\end{equation}\nThe solution can be written as\n\\begin{eqnarray}\n\\label{wignersol}W(\\alpha,\\alpha^*,t)&=&\\frac{2}{\\pi(1-e^{-2\\kappa\nt})}\\int\nd^2\\alpha_0W(\\alpha_0,\\alpha_0^*,0)\\nonumber\\\\\n&&\\exp\\left\\{-2\\frac{|\\alpha-\\alpha_0e^{-\\kappa\nt}|^2}{(1-e^{-2\\kappa t})}\\right\\}\\;,\n\\end{eqnarray}\nwhere $W(\\alpha_0,\\alpha_0^*,0)$ is the Wigner function of the\ninitial state. It is easy to verify this solution putting\n(\\ref{wignersol}) in (\\ref{wignereq}). The time-evolution of the\nWigner function of the squeezed vacuum state $|\\psi\\rangle$ can be\neasily calculated analytically using the following integral\nidentity \\cite{gsa_prdeq,puri}:\n\\begin{eqnarray}\n\\label{ident}&&\\int\nd^2\\alpha\\exp[-|\\alpha|^2]\\exp\\left(-\\frac{\\mu}{\\tau}\\alpha^2-\\frac{\\nu}{\\tau}\\alpha^{*2}-\\frac{z^*\\alpha}{\\sqrt{\\tau}}+\\frac{z\\alpha^*}{\\sqrt{\\tau}}\\right)\\nonumber\\\\\n&&=\\frac{\\pi\\tau}{\\sqrt{\\tau^2-4\\mu\\nu}}\\exp\\left(-\\frac{\\mu\nz^2+\\nu z^{*2}+\\tau|z|^2}{\\tau^2-4\\mu\\nu}\\right)\\;.\n\\end{eqnarray}\nUsing Eq. (\\ref{wsq}) and the above identity in Eq.\n(\\ref{wignersol}), we get the following:\n\\begin{eqnarray}\nW(\\alpha,\\alpha^*,t)&=&\\frac{4}{\\pi(1-e^{-2\\kappa t})}\n\\frac{\\exp\\left[-\\frac{2|\\alpha|^2}{1-e^{-2\\kappa t}}\\right]}{\\sqrt{a^2-4|c|^2}}\\nonumber\\\\\n&&\\times\\exp\\left[\\frac{b^2c^*+b^{*2}c+a|b|^2}{a^2-4|c|^2}\\right]\\;,\n\\end{eqnarray}\nwhere\n\\begin{eqnarray}\na&=&2\\cosh(r)+2\\frac{e^{-2\\kappa t}}{1-e^{-2\\kappa t}}\\;,\\nonumber\\\\\nb&=&\\frac{2\\alpha^*e^{-\\kappa t}}{1-e^{-2\\kappa\nt}}\\;,\\;c=e^{-i\\theta}\\sinh(r)\\;.\n\\end{eqnarray}\nClearly the Wigner function of the squeezed vacuum state is Gaussian\nat all times.\n\nWe now calculate the time-dependence of the Wigner function of the\nstate $|\\psi\\rangle_1$. The initial Wigner function, as given by\n(\\ref{wig1}), can be rewritten as\n\\begin{equation}\n\\label{alter_W}W(\\alpha,\\alpha^*,0)=\\frac{2}{\\pi}D\\left.\\left[e^{-\\lambda|\\tilde{\\alpha}|^2}\\right]\\right|_{\\lambda=2}\\;,\\;D=-4\\frac{d}{d\\lambda}-1\\;.\n\\end{equation}\nUsing (\\ref{alter_W}) and (\\ref{wignersol}), we can find the\nfollowing expression for the Wigner function:\n\\begin{eqnarray}\nW(\\alpha,\\alpha^*,t)&=&\\left(\\frac{2}{\\pi}\\right)^2\\frac{1}{1-e^{-2\\kappa\nt}}D\\int\nd^2\\alpha_0\\exp[-\\lambda|\\tilde{\\alpha}_0|^2]\\nonumber\\\\\n&&\\times \\left.\\exp\\left[-2\\frac{|\\alpha-\\alpha_0e^{-\\kappa\nt}|^2}{1-e^{-2\\kappa t}}\\right]\\right|_{\\lambda=2}\\;,\n\\end{eqnarray}\nwhere\n$\\tilde{\\alpha}_0=\\alpha_0\\cosh(r)-\\alpha_0^*e^{i\\theta}\\sinh(r)$.\nSimplifying the above expression using Eq. 
(\\ref{ident}), we get\n\\begin{eqnarray}\nW(\\alpha,\\alpha^*,t)&=&\\frac{32P}{\\pi(1-e^{-2\\kappa\nt})}\\frac{\\exp\\left[-\\frac{2|\\alpha|^2}{1-e^{-2\\kappa\nt}}\\right]}{(a^2-4|c|^2)^{5\/2}}\\nonumber\\\\\n&&\\label{timeW}\\times\\exp\\left[\\frac{b^2c^*+b^{*2}c+a|b|^2}{a^2-4|c|^2}\\right]\\;,\n\\end{eqnarray}\nwhere\n\\begin{eqnarray}\nP&=&(1-x^2)\\{\\sinh(2r)(e^{i\\theta}b^2+e^{-i\\theta}b^{*2})\\nonumber\\\\\n&&+2[(x+1)^2+4x\\sinh^2(r)]\\}\\nonumber\\\\\n\\label{P}&&+2\\{(1+x^2)\\cosh(2r)+2x\\}|b|^2\n\\end{eqnarray}\nand\n\\begin{equation}\nx=\\frac{e^{-2\\kappa t}}{1-e^{-2\\kappa t}}\\;.\n\\end{equation}\nClearly, the Wigner function is non-Gaussian due to the presence of\nthe polynomial $P$. This becomes negative when the polynomial $P$\nbecomes negative. In Fig. \\ref{negative_time}, we have plotted\n$C'(\\alpha,\\alpha^*,t)=P\/(a^2-4|c|^2)^{5\/2}$ in phase space to show\nthe negative region for Wigner function.\n\nNote that at the center of the phase space ($\\alpha= \\alpha^*=0$),\nthe Wigner function is maximally negative. At the center,\n\\begin{equation}\nC'(0,0,t)=\\frac{2(1-x^2)\\{(x+1)^2+4x\\sinh^2(r)\\}}{(a^2-4|c|^2)^{5\/2}}\\;,\n\\end{equation}\nwhich becomes negative when $(1-x^2)$ becomes negative. This leads\nto the following condition:\n\\begin{equation}\n\\kappa t<\\kappa t_0=\\frac{1}{2}\\ln(2)\\;,\n\\end{equation}\nwhich is independent of the squeezing parameter $r$. Thus the Wigner\nfunction has certain negative region for the time\n$tt_0$, the ellipse\n$16C'\/(1-e^{-2\\kappa t})=$ constant interchanges its minor and major\naxes. We show this behavior in Figs. \\ref{ellipse1} for different\nvalues of $r$. Note that at times much larger than decoherence\ntime-scale $1\/\\kappa$ (i.e., for $\\kappa t\\rightarrow \\infty$),\n$P\\rightarrow 2$ and thus becomes constant throughout the phase\nspace.\n\nUsing Eq. (\\ref{timeW}), we show the variation of Wigner function at\ndifferent time-scales in Figs. \\ref{small_sq}. It is easy to see how\nthe negative region of the Wigner function gradually diminishes. At\nlong times $\\kappa t\\rightarrow \\infty$, the Wigner function becomes\n\\begin{equation}\nW(\\alpha,\\alpha^*,\\infty)= \\frac{2}{\\pi}e^{-2|\\alpha|^2}\\;,\n\\end{equation}\nwhich corresponds to vacuum state. We have shown this in Fig.\n\\ref{vacuum}(b). This can also be understood from Eq. (\\ref{rhonn}).\nFor $\\kappa t\\rightarrow \\infty$, $\\rho_{0,0}$ approaches unity,\nwhereas all other density matrix elements vanish. This means that at\nlong times, the state decays to vacuum, as we have discussed\nearlier.\n\nWe next study the time-evolution of the Wigner function for the case\nof large squeezing, i.e., large values of $\\zeta$. In this case the\nsingle photon subtracted squeezed state becomes similar to a\nSchrodinger cat state. For large times, such an optical cat state\ndecays to vacuum. Thus the Wigner function becomes Gaussian, as\ndiscussed above. We show this evolution for large squeezing in Figs.\n\\ref{large_sq}.\n\n\\subsection{Effect of phase damping}\nWe now study the effect of phase-damping on the state\n$|\\psi\\rangle_1$. Such damping can be described by the following\nmaster equation:\n\\begin{equation}\n\\dot{\\rho}=-\\kappa_p(A^{\\dag}A\\rho-2A\\rho A^\\dag+\\rho A^\\dag A)\\;,\n\\end{equation}\nwhere $A=a^\\dag a$ is the number operator and $\\kappa_p$ is the\ncorresponding rate of decoherence. 
The solution of this equation can\nbe easily found as (\\ref{sol}) where\n\\begin{equation}\n\\label{sol_phase}\\rho_{n,n'}(t)=\\exp[-(n-n')^2\\kappa_p\nt]\\rho_{n,n'}(0)\\;.\n\\end{equation}\nIt is easy to see that only the diagonal elements $\\rho_{n,n}$ do\nnot decay due to dephasing. Thus at long times, we can write\n\\begin{equation}\n\\rho(t\\rightarrow \\infty)=\\sum_{n=0}^\\infty\n\\rho_{n,n}(0)|n\\rangle\\langle n|\\;,\n\\end{equation}\nwhich refers to a mixed state.\n\nUsing Eqs. (\\ref{avg2}) and (\\ref{sol_phase}), we next calculate the\nparameter $Q$. We find that the averages\n$\\langle a^{\\dag 2}a^2\\rangle$ and $\\langle a^\\dag a\\rangle$ do not\ndepend upon time, because in case of phase damping\n$\\rho_{n,n}(t)=\\rho_{n,n}(0)$. Thus $Q$ remains the same for all\ntimes.\n\\begin{figure*}\n\\begin{center}\n\\caption{\\label{wig_phase_fig}Wigner function in phase\nspace at long times in presence of phase damping for (a) $r=0.31$\nand (b) $r=0.8$.}\n\\end{center}\n\\end{figure*}\nHowever, the corresponding Wigner function has certain\ntime-dependence. We find that at long times, the Wigner function\nbecomes\n\\begin{equation}\n\\label{wigner_phase}W(\\alpha,\\alpha^*,\\infty)=\\sum_{n=0}^\\infty\n\\rho_{n,n}(0)W_{|n\\rangle\\langle n|}(\\alpha,\\alpha^*)\\;,\n\\end{equation}\nwhere $W_{|n\\rangle\\langle n|}(\\alpha,\\alpha^*)$ is the Wigner\nfunction of a Fock state $|n\\rangle$ as given by\n\\begin{equation}\nW_{|n\\rangle\\langle\nn|}(\\alpha,\\alpha^*)=(-1)^n\\frac{2}{\\pi}e^{-2|\\alpha|^2}L_n(4|\\alpha|^2)\\;.\n\\end{equation}\nThe function (\\ref{wigner_phase}) refers to a highly nonclassical\nstate. It is interesting to note that all the Fock states have\nindependent contributions to the Wigner function at long times,\nweighted by their initial population $\\rho_{n,n}(0)$. On the other\nhand, in case of decoherence due to photon-number decay, only the\nvacuum state survives. In Fig. \\ref{wig_phase_fig}, we plot the\nWigner function (\\ref{wigner_phase}) in phase space for different\nsqueezing. Note that the Wigner function has negative region at long\ntimes representing nonclassicality for all $r$, even if the state\ndoes not exhibit sub-Poissonian statistics for $r\\gtrsim 0.46$\n(because $Q$ is positive). In fact, if $Q$ is positive, it does not\nmean that the the state is classical. In such cases, we have to use\nother parameters to test the nonclassicality. Several parameters\nhave been introduced in this context \\cite{tara_Q,nonclass}. We can use\nhierarchy of these parameters which have been shown to be especially\nuseful in context of cat states. Here we illustrate the utility of one such\nparameter, e.g., the $A_3$ parameter as defined by \\cite{tara_Q}\n\\begin{equation}\n\\label{a3}A_3=\\frac{{\\rm det}[m^{(3)}]}{{\\rm det}[\\mu^{(3)}]-{\\rm\ndet}[m^{(3)}]}\\;,\n\\end{equation}\nwhere\n\\begin{equation}\nm^{(3)}=\\left(\\begin{array}{ccc} 1&m_1&m_2\\\\\nm_1&m_2&m_3\\\\\nm_2&m_3&m_4\n\\end{array}\\right)\\;,\\;\\mu^{(3)}=\\left(\\begin{array}{ccc} 1&\\mu_1&\\mu_2\\\\\n\\mu_1&\\mu_2&\\mu_3\\\\\n\\mu_2&\\mu_3&\\mu_4\n\\end{array}\\right)\\;,\n\\end{equation}\n$m_s=\\langle a^{\\dag s}a^s\\rangle$, $\\mu_s=\\langle (a^\\dag\na)^s\\rangle$ and det indicates determinant of the matrix. The state exhibits phase-insensitive nonclassical\nproperties if $A_3$ lies between 0 and -1 \\cite{tara_Q}. For the\nstate $|\\psi\\rangle_1$ we have found that $A_3$ remains negative for\n$|\\xi|\\lesssim 0.6$ which corresponds to $r\\lesssim 0.7$. 
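\n\nFor completeness, evaluating (\\ref{a3}) for a state specified by its Fock populations is straightforward, since $m_s$ and $\\mu_s$ involve only diagonal matrix elements; a minimal sketch (our own illustration) is:\n\\begin{verbatim}\nimport numpy as np\nfrom math import factorial\n\ndef a3_parameter(p):\n    # A_3 computed from Fock populations p[n]: m_s is the factorial\n    # moment sum_n n!\/(n-s)! p[n] and mu_s = sum_n n**s p[n].\n    p = np.asarray(p, dtype=float)\n    n = np.arange(len(p))\n    m = [1.0] + [sum(factorial(k)\/\/factorial(k - s)*p[k]\n                     for k in range(s, len(p))) for s in range(1, 5)]\n    mu = [1.0] + [float(np.sum(n**s*p)) for s in range(1, 5)]\n    M = np.array([[1.0, m[1], m[2]], [m[1], m[2], m[3]], [m[2], m[3], m[4]]])\n    Mu = np.array([[1.0, mu[1], mu[2]], [mu[1], mu[2], mu[3]],\n                   [mu[2], mu[3], mu[4]]])\n    return np.linalg.det(M)\/(np.linalg.det(Mu) - np.linalg.det(M))\n\\end{verbatim}\nBecause only the populations enter, the same value of $A_3$ applies to the dephased state at long times, for which $\\rho_{n,n}(t)=\\rho_{n,n}(0)$.\n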
Clearly\n$A_3$ is a stronger measure of nonclassicality than $Q$ because it\nleads to a larger upper bound of $r$ to exhibit nonclassicality.\nFurther, comparing the Wigner functions in Figs. \\ref{wig_phase_fig} with those at $t=0$ [see\nFigs. \\ref{sq_figs}], we find that the Wigner function varies very\nslowly with time for small squeezing. But for large squeezing, the\nvariation is faster. Although we can conclude that phase damping\nleads to much slower decoherence than amplitude damping.\n\n\\section{Conclusions}\nIn conclusion, we have studied how a class of non-Gaussian states\nevolves in presence of decoherence. We have considered a\nsingle-photon subtracted squeezed vacuum state, the Wigner function\nof which is similar to that of a Schrodinger kitten state. We have\nfound an upper bound for squeezing parameter for which this state\nexhibits sub-Poissonian photon statistics. However, the state\nremains nonclassical for all values of the squeezing parameter\nbecause the Wigner function becomes negative around central region\nin phase space. Next, we have studied how the state evolves in\npresence of two different kinds of decoherence, viz., amplitude\ndecay and phase damping. We have found analytical expressions for\nthe time-evolution of the state and the Wigner function in both\ncases. In case of amplitude decay, the Wigner function loses its\nnon-Gaussian nature and becomes Gaussian at long times,\ncorresponding to vacuum. On the other hand, phase damping leads to\nmuch slower decoherence than amplitude damping. The state remains\nnonclassical at long times.\n\n\\begin{acknowledgments}\nA.B. gratefully acknowledges the partial support from the Women in\nScience and Engineering program in University of Southern\nCalifornia, Los Angeles, USA. G.S.A. kindly acknowledges support\nfrom NSF grant no. CCF0524673.\n\\end{acknowledgments}\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\\label{sec:intro}\n\nThis paper continues the series on spectroscopic orbits of stars\nbelonging to hierarchical systems \\citep{paper1,paper2,paper3}. It is\nmotivated by the need to improve statistics of orbital elements in\nstellar hierarchies. Statistics will inform us on the processes of\ntheir formation and dynamical evolution, as outlined in the previous\npapers of this series. 
This work augments the collection of\nobservational data on stellar hierarchies assembled in the multiple\nstar catalog \\citep[MSC;][]{MSC}.\n\n\n\\begin{deluxetable*}{c c rr l cc rr r c }\n\\tabletypesize{\\scriptsize} \n\\tablecaption{Basic parameters of observed multiple systems\n\\label{tab:objects} } \n\\tablewidth{0pt} \n\\tablehead{ \n\\colhead{WDS} & \n\\colhead{Comp.} &\n\\colhead{HIP} & \n\\colhead{HD} & \n\\colhead{Spectral} & \n\\colhead{$V$} & \n\\colhead{$V-K$} & \n\\colhead{$\\mu^*_\\alpha$} & \n\\colhead{$\\mu_\\delta$} & \n\\colhead{RV} & \n\\colhead{$\\overline{\\omega}$\\tablenotemark{a}} \\\\\n\\colhead{(J2000)} & \n & & & \n\\colhead{Type} & \n\\colhead{(mag)} &\n\\colhead{(mag)} &\n\\multicolumn{2}{c}{ (mas yr$^{-1}$)} &\n\\colhead{(km s$^{-1}$)} &\n\\colhead{(mas)} \n}\n\\startdata\n04125$-$3609 &A & 19639 & 26758 & F3V & 7.12 & 1.12 & 61 & 24 & 35.40 & 7.94 \\\\\n &B & 19646 & 26773 & F2IV & 7.91 & 0.92 & 61 & 24 & 35.98 & 8.03 \\\\\n12059$-$4951 &A & \\ldots & 105080 & G3V & 9.13 & 1.43 & 25 &$-$15 & 50.19 & 9.99 \\\\\n &B & \\ldots & 105081 & G0V & 9.18 & 1.42 & 31 &$-$20 & 50.09 & 7.20 \\\\\n12283$-$6146 &A & 60845 & 108500 & G3V & 6.82 & 1.64 & 71 &$-$160 & 40.02 & 19.93 \\\\\n &D & \\ldots & \\ldots & \\ldots& 13.70 & 4.54& 73 &$-$169 & \\ldots & 19.91 \\\\\n12404$-$4924 &A & 61840 & 110143 & G0V & 7.60 & 2.00 & $-$28 & $-$112 & 6.76 & 18.59 \\\\ \n15275$-$1058 &A & 75663 & 137631 & G0 & 8.14 & 1.35&$-$65 &$-$35 & $-$56.0 v & 9.29 \\\\\n &B & \\ldots & \\ldots & G0 & 9.21 & 1.50&$-$61 &$-$35 & $-$56.82 & 7.69 \\\\\n15410$-$1449 &A & 76816 & 139864 & F8V & 9.47 & 1.62&$-$26 &$-$1 & $-$38.94 & 3.23 \\\\\n &B & \\ldots & \\ldots & \\ldots& 9.74 & 2.51&$-$25 &$-$2 & $-$50.9 v & 3.15 \\\\ \n15577$-$3915 &A & 78163 & 142728 & G3V & 9.04 & 1.54& 17 & 6 & 9.41 & 10.42 \\\\\n &B & \\ldots & \\ldots & \\ldots&10.30 & \\ldots& 31 & 4 & 6.78 v & 13.57 \\\\\n16005$-$3605 &A & 78416 & 143215 & G1V &8.65 & 1.32 &$-$26 &$-$41 & 1.60 & 9.33 \\\\\n &B & \\ldots & \\ldots & \\ldots&9.32 & 1.31 &$-$28 &$-$41 & 1.43 & 9.31 \\\\\n16253$-$4909 &AB& 80448 & 147633 & G2V & 7.5? &2.3? &$-$95 &$-$94 & $-$2.08 & 19.66 \\\\\n17199$-$1121 &A & 84789 & 156769 & F2 & 9.11 & 1.37& 6 & 13 & 5.62 & 5.33 \\\\\n &B & \\ldots & \\ldots & \\ldots& 9.89 & 1.37& 5 & 12 & 5.97 & 5.34 \n\\enddata\n\\tablenotetext{a}{Proper motions and parallaxes are \n from the {\\it Gaia} DR2 \\citep{Gaia,Gaia1}.}\n\\end{deluxetable*}\n\nThe systems studied here are presented in Table~\\ref{tab:objects}.\nOnly one of them (HIP 61840) is a simple binary belonging to the 67 pc\nsample of solar-type stars; others contain from three to five components and\nare also relatively close to the Sun. Their principal components are\nmain sequence stars with spectral types from F2V to G3V. The data in\nTable~\\ref{tab:objects} are collected from Simbad and {\\it Gaia} DR2 \\citep{Gaia},\nthe radial velocities (RVs) are determined here (variable RVs are\nmarked by 'v').\n\nThe structure of this paper is similar to the previous ones. The data\nand methods are briefly recalled in Section~\\ref{sec:obs}, where the\nnew orbital elements are also given. Then in Section~\\ref{sec:obj}\neach system is discussed individually. 
The paper closes with a short\nsummary in Section~\\ref{sec:sum}.\n\n\n\\section{Observations and data analysis}\n\\label{sec:obs}\n\n\n\\subsection{Spectroscopic observations}\n\nThe spectra used here were taken with the 1.5 m telescope sited at the\nCerro Tololo Inter-American Observatory (CTIO) in Chile and operated\nby the SMARTS Consortium.\\footnote{\n \\url{http:\/\/www.astro.yale.edu\/smarts\/}} The observing time was\nallocated through NOAO. Observations were made with the CHIRON\noptical echelle spectrograph \\citep{CHIRON} by the telescope operators\nin service mode. In two runs, 2017 August and 2018 March, the author\nalso observed in classical mode. All spectra are taken in the slicer\nmode with a resolution of $R=80,000$ and a signal to noise ratio of at\nleast 20. Thorium-Argon calibrations were recorded for each target.\n\nRadial velocities are determined from the cross-correlation\nfunction (CCF) of echelle orders with the binary mask based on the solar\nspectrum, as detailed in \\citep{paper1}. The RVs derived by this\nmethod should be on the absolute scale if the wavelength calibration\nis accurate. The CHIRON RVs were checked against standards from\n\\citep{Udry1998}, and a small offset of $+0.15$ km~s$^{-1}$ was found\nin \\cite{paper3}. \n\nThe CCF contains two dips in the case of double-lined systems studied\nhere. The dip width is related to the projected rotation speed $V\n\\sin i$, while its area depends on the spectral type, metallicity, and\nrelative flux. Table~\\ref{tab:dip} lists average parameters of the\nGaussian curves fitted to the CCF dips. It gives the number of\naveraged measurements $N$ (blended CCFs were not used), the dip\namplitude $a$, its dispersion $\\sigma$, the product $a \\sigma$\nproportional to the dip area (hence to the relative flux), and the\nprojected rotation velocity $V \\sin i$, estimated from $\\sigma$ by the\napproximate formula given in \\citep{paper1}. 
The last column\nindicates the presence or absence of the lithium 6708\\AA ~line in\nindividual components.\n\n\\begin{deluxetable*}{l l c cccc c} \n\\tabletypesize{\\scriptsize} \n\\tablecaption{CCF parameters\n\\label{tab:dip} }\n\\tablewidth{0pt} \n\\tablehead{ \n\\colhead{HIP\/HD} & \n\\colhead{Comp.} & \n\\colhead{$N$} & \n\\colhead{$a$} & \n\\colhead{$\\sigma$} & \n\\colhead{$a \\sigma$} & \n\\colhead{$V \\sin i$ } & \n\\colhead{Li}\n\\\\\n & & & &\n\\colhead{(km~s$^{-1}$)} &\n\\colhead{(km~s$^{-1}$)} &\n\\colhead{(km~s$^{-1}$)} &\n\\colhead{ 6708\\AA}\n}\n\\startdata\nHIP 19639 & Aa & 7 & 0.061 & 12.53 & 0.765 & 21.7 &N\\\\\nHIP 19639 & Ab & 7 & 0.033 & 5.90 & 0.193 & 8.7 &N\\\\\nHIP 19646 & B & 2 & 0.042 & 18.87 & 0.787 & 33: &N\\\\ \nHD 105080 & A & 3 & 0.389 & 3.68 & 1.429 & 2.5 &N\\\\ \nHD 105081 & Ba & 11 & 0.179 & 3.82 & 0.684 & 3.1 &N\\\\\nHD 105081 & Bb & 11 & 0.167 & 3.78 & 0.630 & 3.0 &N\\\\\nHIP 60845 & Aa & 2 & 0.183 & 3.83 & 0.702 & 3.2 &N\\\\\nHIP 60845 & Ab & 2 & 0.124 & 4.01 & 0.497 & 3.8 &N\\\\\nHIP 60845 & BC & 2 & 0.430 & 3.57 & 1.535 & 2.0 &N\\\\ \nHIP 61840 & Aa & 9 & 0.189 & 4.51 & 0.853 & 5.3 &Y\\\\\nHIP 61840 & Ab & 9 & 0.124 & 4.13 & 0.511 & 4.2 &Y\\\\\nHIP 75663 & A & 5 & 0.223 & 6.85 & 1.528 & 10.7 &Y\\\\\nHIP 75663 & Ba & 12 & 0.161 & 4.88 & 0.789 & 6.3 &Y\\\\ \nHIP 75663 & Bb & 12 & 0.155 & 4.87 & 0.754 & 6.3 &Y\\\\\nHIP 76816 & Aa & 6 & 0.110 & 8.14 & 0.899 & 13.3 &Y\\\\ \nHIP 76816 & Ab & 6 & 0.030 & 4.49 & 0.137 & 5.3 &Y\\\\\nHIP 76816 & B & 5 & 0.506 & 3.78 & 1.913 & 3.0 &Y\\\\\nHIP 78163 & Aa & 9 & 0.197 & 4.08 & 0.803 & 4.0 &Y\\\\\nHIP 78163 & Ab & 9 & 0.192 & 4.05 & 0.778 & 4.0 &Y\\\\\nHIP 78163 & B & 3 & 0.371 & 4.67 & 1.732 & 5.8 &N\\\\ \nHIP 78416 & Aa & 9 & 0.047 & 15.43 & 0.725 & 27: &Y\\\\\nHIP 78416 & Ab & 9 & 0.041 & 13.18 & 0.536 & 23: &Y\\\\\nHIP 78416 & B & 4 & 0.058 & 22.68 & 1.324 & 40: &Y\\\\ \nHIP 80448 & Aa & 2 & 0.101 & 12.30 & 1.247 & 21.3 &Y\\\\ \nHIP 80448 & Ab & 2 & 0.023 & 9.78 & 0.221 & 16.5 &Y\\\\\nHIP 80448 & B & 3 & 0.171 & 8.62 & 1.469 & 14.3 &Y\\\\ \nHIP 84789 & Aa & 6 & 0.043 & 12.41 & 0.536 & 21.5 &N\\\\\nHIP 84789 & Ab & 6 & 0.048 & 10.88 & 0.522 & 18.6 &N\\\\\nHIP 84789 & B & 2 & 0.036 & 27.49 & 0.996 & 49: & N\n\\enddata \n\\end{deluxetable*}\n\n\n\n\\subsection{Speckle interferometry}\n\nInformation on the resolved subsystems is retrieved from the\nWashington Double Star Catalog \\citep[WDS;][]{WDS}. It is complemented\nby recent speckle interferometry at the Southern Astrophysical\nResearch (SOAR) telescope. The latest publication \\citep{SAM18}\ncontains references to previous papers.\n\n\\subsection{Orbit calculation}\n\nAs in Paper 3 \\citep{paper3}, orbital elements and their errors are\ndetermined by the least-squares fits with weights inversely\nproportional to the adopted errors. The IDL code {\\tt\n orbit} \\citep{ORBIT}\\footnote{Codebase: \\url{http:\/\/www.ctio.noao.edu\/~atokovin\/orbit\/} and \n\\url{http:\/\/dx.doi.org\/10.5281\/zenodo.61119} }\nis used. It can fit spectroscopic, visual, or combined\nvisual\/spectroscopic orbits. Formal errors of orbital elements are\ndetermined from these fits. 
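The minimum masses $M_{1,2} \\sin^3 i$ quoted below with each orbit follow from the standard relation between the spectroscopic elements, $M_{1,2} \\sin^3 i = 1.036\\times 10^{-7} (1-e^2)^{3\/2} (K_1+K_2)^2 K_{2,1} P$, with velocities in km~s$^{-1}$, $P$ in days, and masses in ${\\cal M}_\\odot$. As an illustration only (this sketch is not part of the {\\tt orbit} code), the tabulated values for HIP 61840 are reproduced by\n\\begin{verbatim}\n# minimum masses from the spectroscopic elements (illustrative sketch)\ndef msin3i(P, e, K1, K2):\n    f = 1.036149e-7 * (1.0 - e**2)**1.5 * (K1 + K2)**2 * P\n    return f * K2, f * K1   # M1 sin^3 i, M2 sin^3 i in solar masses\n\nprint(msin3i(9.6717, 0.007, 56.665, 60.744))  # ~ (0.84, 0.78)\n\\end{verbatim}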
The elements of spectroscopic orbits are\ngiven in Table~\\ref{tab:sborb} in common notation.\n\n\n\n\\begin{deluxetable*}{l l cccc ccc c c} \n\\tabletypesize{\\scriptsize} \n\\tablecaption{Spectroscopic orbits\n\\label{tab:sborb} }\n\\tablewidth{0pt} \n\\tablehead{ \n\\colhead{HIP\/HD} & \n\\colhead{System} & \n\\colhead{$P$} & \n\\colhead{$T$} & \n\\colhead{$e$} & \n\\colhead{$\\omega_{\\rm A}$ } & \n\\colhead{$K_1$} & \n\\colhead{$K_2$} & \n\\colhead{$\\gamma$} & \n\\colhead{rms$_{1,2}$} &\n\\colhead{$M_{1,2} \\sin^3 i$} \n\\\\\n& & \\colhead{(days)} &\n\\colhead{(+24,00000)} & &\n\\colhead{(degree)} & \n\\colhead{(km~s$^{-1}$)} &\n\\colhead{(km~s$^{-1}$)} &\n\\colhead{(km~s$^{-1}$)} &\n\\colhead{(km~s$^{-1}$)} &\n\\colhead{ (${\\cal M}_\\odot$) } \n}\n\\startdata\nHIP 19639 & Aa,Ab & 2.35254 & 58002.5732 & 0.0 & 0.0 & 39.474 & 49.194 & 35.400 & 0.36 & 0.094 \\\\\n && $\\pm$0.00005 & $\\pm$0.0019 & fixed & fixed & $\\pm$0.160 & $\\pm$0.298 & $\\pm$0.100 & 0.87 & 0.076 \\\\\nHD 105081 & Ba,Bb & 30.427 & 58214.038 & 0.418 & 82.1 & 46.856 & 47.493 & 50.099 & 0.07 & 1.00 \\\\\n && $\\pm$0.006 & $\\pm$0.023 & $\\pm$0.001 & $\\pm$0.3 & $\\pm$0.117 & $\\pm$0.118 & $\\pm$0.041 & 0.05 & 0.98 \\\\\nHIP 60845 & Aa,Ab & 6.3035 & 58195.6279 & 0.0 & 0.0 & 31.361 & 32.128 & 40.018 & 0.10 & 0.084 \\\\\n && $\\pm$0.0001 & $\\pm$0.0017 & fixed & fixed & $\\pm$0.063 & $\\pm$0.114 & $\\pm$0.036 & 0.27 & 0.082\\\\\nHIP 61840 & Aa,Ab& 9.6717 & 58194.383 & 0.007 & 357.0 & 56.665 & 60.744 & 6.745 & 0.05 & 0.84 \\\\\n && $\\pm$0.0008 & $\\pm$0.345 & $\\pm$0.001 & $\\pm$13.0 & $\\pm$0.525 & $\\pm$0.562 & $\\pm$0.048 & 0.05 & 0.78\\\\\nHIP 75663 & Ba,Bb & 22.8704 & 58204.4963 & 0.613 & 260.1 & 49.001 & 49.622 & $-$56.847 & 0.29 & 0.56 \\\\\n && $\\pm$0.0047 & $\\pm$0.020 & $\\pm$0.001 & $\\pm$0.3 & $\\pm$0.126 & $\\pm$0.134 & $\\pm$0.045 & 0.27 & 0.56\\\\\nHIP 76816 & Aa,Ab & 6.95176 & 58197.3528 & 0.0 & 0.0 & 45.790 & 62.448 & $-$39.145 & 0.29 & 0.53 \\\\\n && $\\pm$0.00002 & $\\pm$0.0063 & fixed & fixed & $\\pm$0.306 & $\\pm$0.570 & $\\pm$0.186 & 1.13 & 0.39 \\\\\nHIP 78163 & Aa,Ab & 21.8186 & 58202.091 & 0.577 & 301.0 & 45.429 & 45.671 & 9.411 & 0.04 & 0.47\\\\\n && $\\pm$0.0015 & $\\pm$0.014 & $\\pm$0.002 & $\\pm$0.3 & $\\pm$0.161 & $\\pm$0.186 & $\\pm$0.045 & 0.04 & 0.46\\\\\nHIP 78416 & Aa,Ab & 21.0802 & 58197.2344 & 0.708 & 99.1 & 64.636 & 71.737 & 1.636 & 0.16 & 1.03 \\\\\n && $\\pm$0.0030 & $\\pm$0.0072 & $\\pm$0.003 & $\\pm$0.3 & $\\pm$0.372 & $\\pm$0.509 & $\\pm$0.112 & 0.61 & 0.93\\\\\nHIP 80448 &Aa,Ab & 2.2699 & 58195.5948 & 0.0 & 0.0 & 73.124 & 108.452 & $-$1.848 & 0.64 & 0.84 \\\\\n && $\\pm$0.0002 & $\\pm$0.0033 & fixed & fixed & $\\pm$0.592 & $\\pm$0.698 & $\\pm$0.270 & 0.51 & 0.57\\\\\nHIP 84789 &Aa,Ab & 2.2758 & 58196.7968 & 0.0 & 0.0 & 78.044 & 79.004 & 5.624 & 0.40 & 0.46 \\\\\n && $\\pm$0.0001 & $\\pm$0.0020 & fixed & fixed & $\\pm$0.181 & $\\pm$0.182 & $\\pm$0.073 & 0.21 & 0.45\n\\enddata \n\\end{deluxetable*}\n\n\\begin{deluxetable*}{l l cccc ccc} \n\\tabletypesize{\\scriptsize} \n\\tablecaption{Visual orbits\n\\label{tab:vborb} }\n\\tablewidth{0pt} \n\\tablehead{ \n\\colhead{HIP\/HD} & \n\\colhead{System} & \n\\colhead{$P$} & \n\\colhead{$T$} & \n\\colhead{$e$} & \n\\colhead{$a$} & \n\\colhead{$\\Omega_{\\rm A}$ } & \n\\colhead{$\\omega_{\\rm A}$ } & \n\\colhead{$i$ } \\\\\n& & \\colhead{(years)} &\n\\colhead{(years)} & &\n\\colhead{(arcsec)} & \n\\colhead{(degree)} & \n\\colhead{(degree)} & \n\\colhead{(degree)} \n}\n\\startdata\nHD 105080 & Aa,Ab & 91.6 & 1999.15 & 0.40 & 0.176 & 5.4 & 
224.1 & 40.4 \\\\\nHIP 60845 & A,BC & 690 & 1826.7 & 0.20 & 2.485 & 104.4 & 156.6 & 141.4 \\\\\nHIP 60845 & B,C & 28.2 & 1990.2 & 0.166 & 0.221 & 162.8 & 77.3 & 156.1 \\\\\n & & $\\pm$0.2 & $\\pm$0.3 & $\\pm$0.009 & $\\pm$0.003 & $\\pm$6.5 & $\\pm$5.9 & $\\pm$2.5 \\\\\nHIP 80448 & A,B & 1950 & 1926.58 & 0.63 & 4.644 & 75.5 & 267.9 & 152.1 \\\\\nHIP 80448 & Ba,Bb & 20.0 & 2018.08 & 0.42 & 0.176 & 6.1 & 150.9 & 109.1 \\\\\n & & $\\pm$0.3 & $\\pm$0.24 & $\\pm$0.02 & $\\pm$0.003 & $\\pm$1.2 & $\\pm$4.9 & $\\pm$0.8 \n\\enddata \n\\end{deluxetable*}\n\n\n\nFor two multiple systems, the resolved measurements of inner and outer\npairs are represented by visual orbits. Simultaneous fitting of inner\nand outer orbits is done using the code {\\tt orbit3.pro} described by\n\\citet{TL2017}; the code is available in \\citep{ORBIT3}.\\footnote{Codebase: \\url{http:\/\/dx.doi.org\/10.5281\/zenodo.321854}}\nIt accounts for the wobble in the trajectory of the outer pair caused\nby the subsystem. The wobble amplitude is $f$ times smaller than the\ninner semimajor axis, where the wobble factor $f = q_{\\rm\n in}\/(1+q_{\\rm in})$ depends on the inner mass ratio $q_{\\rm in}$.\nThe elements of visual orbits are given in Table~\\ref{tab:vborb}. As\nouter orbits are poorly constrained, I do not list their errors. The\nouter orbits serve primarily to model the observed part of the\ntrajectory for the determination of $f$. In the figures illustrating\nthese orbits, the observed trajectories are plotted relative to the\nprimary component of each system, on the same scale.\n\nIndividual RVs of spectroscopic binaries and their residuals to the\norbits are presented in Table~\\ref{tab:rv}. The HIP or HD number and\nthe system identifier (components joined by comma) in the first two\ncolumns define the binary. Then follow the Julian date, the RV of the\nprimary component $V_1$, its adopted error $\\sigma_1$ (blended CCF\ndips are assigned large errors), and its residual (O$-$C)$_1$. The\nlast three columns give velocities, errors, and residuals of the\nsecondary component. Table~\\ref{tab:rvconst} contains RVs of other\ncomponents, both constant and variable. Finally,\nTable~\\ref{tab:speckle} lists position measurements used for the\ncalculation of visual orbits. It contains the HIP or HD number, system\nidentification, date of observation, position angle $\\theta$,\nseparation $\\rho$, adopted error $\\sigma_\\rho$ (errors in radial and\ntangential directions are considered to be equal), and the residuals\nto the orbits in $\\theta$ and $\\rho$. The last column indicates the\nmeasurement technique. Measurements of the outer systems are of two\nkinds: when the inner pair is unresolved (e.g. HIP 60845 A,BC), they\nrefer to the photo-center of the inner pair, while resolved\nmeasurements refer to the individual components (e.g. HIP 60845\nA,B). 
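The wobble factor also gives the inner mass ratio directly, since the relation above inverts to $q_{\\rm in} = f\/(1-f)$ (a twin pair corresponds to the maximal value $f=0.5$); the fitted wobble amplitudes are converted into mass ratios below in this way.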
The orbit-fitting code accounts for reduced wobble amplitude of\nunresolved (photo-center) measurements compared to resolved ones.\n\n\n\n\\begin{deluxetable*}{r l c rrr rrr} \n\\tabletypesize{\\scriptsize} \n\\tablecaption{Radial velocities and residuals (fragment)\n\\label{tab:rv} }\n\\tablewidth{0pt} \n\\tablehead{ \n\\colhead{HIP\/HD} & \n\\colhead{System} & \n\\colhead{Date} & \n\\colhead{$V_1$} & \n\\colhead{$\\sigma_1$} & \n\\colhead{(O$-$C)$_1$ } &\n\\colhead{$V_2$} & \n\\colhead{$\\sigma_2$} & \n\\colhead{(O$-$C)$_2$ } \\\\\n & & \n\\colhead{(JD $+$24,00000)} &\n\\multicolumn{3}{c}{(km s$^{-1}$)} &\n\\multicolumn{3}{c}{(km s$^{-1}$)} \n}\n\\startdata\n 19639 & Aa,Ab & 57985.8910 & 68.74 & 0.30 & 0.16 & $-$6.37 & 0.60 & $-$0.42 \\\\\n 19639 & Aa,Ab & 57986.8980 & 15.11 & 0.30 & 0.18 & 59.76 & 0.60 & $-$1.15 \\\\\n 19639 & Aa,Ab & 58052.6260 & 31.22 & 10.00 & 2.23 & \\ldots &\\ldots & \\ldots \\\\\n 19639 & Aa,Ab & 58053.6080 & 22.97 & 0.50 & 1.26 & 54.35 & 1.00 & 1.89 \n\\enddata \n\\end{deluxetable*}\n\n\\begin{deluxetable}{r l r r } \n\\tabletypesize{\\scriptsize} \n\\tablecaption{Radial velocities of other components\n\\label{tab:rvconst} }\n\\tablewidth{0pt} \n\\tablehead{ \n\\colhead{HIP\/HD} & \n\\colhead{Comp.} & \n\\colhead{Date} & \n\\colhead{RV} \\\\ \n & & \n\\colhead{(JD $+$24,00000)} &\n\\colhead {(km s$^{-1}$)} \n}\n\\startdata\n19646 & B & 57985.8932& 36.005 \\\\\n19646 & B & 58193.5386& 35.944 \\\\ \n105080 & A & 58193.7546& 50.182 \\\\ \n105080 & A & 58194.6308& 50.198 \\\\\n105080 & A & 58195.6489& 50.179 \\\\\n60845 & BC & 57985.4627& 42.462 \\\\\n60845 & BC & 58193.7521& 42.524 \\\\ \n60845 & BC & 58194.6447& 42.514 \\\\\n60845 & BC & 58195.6586& 42.523 \\\\\n60845 & BC & 58177.7617& 42.513 \\\\\n60845 & BC & 58232.5972& 42.540 \\\\\n60845 & BC & 58242.5361& 42.513 \\\\\n75663 & A & 57986.4885& $-$50.737 \\\\ \n75663 & A & 58177.8100& $-$56.446 \\\\\n75663 & A & 58193.8257& $-$57.181 \\\\\n75663 & A & 58195.8323& $-$57.305 \\\\\n76816 & B & 57986.4996& $-$50.912 \\\\\n76816 & B & 58193.8413& $-$50.916 \\\\ \n76816 & B & 58194.8267& $-$50.921 \\\\\n76816 & B & 58195.8509& $-$50.912 \\\\ \n78163 & B & 57986.5130& 6.116 \\\\ \n78163 & B & 58194.8533& 7.152 \\\\ \n78163 & B & 58195.7872& 7.082 \\\\ \n78416 & B & 57986.5221& 0.752 \\\\\n78416 & B & 58193.8595& 1.568 \\\\ \n78416 & B & 58194.8446& 1.816 \\\\ \n78416 & B & 58195.7774& 1.581 \\\\ \n80448 & B & 58193.8678& 7.509 \\\\\n80448 & B & 58195.8008& 7.752 \\\\\n80448 & B & 58194.8621& 7.679 \\\\\n80448 & B & 58228.8113& 7.515 \\\\ \n80448 & B & 58233.8463& 7.353 \\\\\n80448 & B & 58246.6693& 8.395 \\\\\n80448 & B & 58248.8529& 8.176 \\\\\n80448 & B & 58256.7395& 7.532 \\\\\n80448 & B & 58257.8020& 7.916 \\\\\n84789 & B & 57986.5346& 6.669 \\\\\n84789 & B & 58195.8671& 5.260 \n\\enddata \n\\end{deluxetable}\n\n\\begin{deluxetable*}{r l l rrr rr l} \n\\tabletypesize{\\scriptsize} \n\\tablecaption{Position measurements and residuals (fragment)\n\\label{tab:speckle} }\n\\tablewidth{0pt} \n\\tablehead{ \n\\colhead{HIP\/HD} & \n\\colhead{System} & \n\\colhead{Date} & \n\\colhead{$\\theta$} & \n\\colhead{$\\rho$} & \n\\colhead{$\\sigma_\\rho$} & \n\\colhead{(O$-$C)$_\\theta$ } & \n\\colhead{(O$-$C)$_\\rho$ } &\n\\colhead{Ref.\\tablenotemark{a}} \\\\\n & & \n\\colhead{(yr)} &\n\\colhead{(\\degr)} &\n\\colhead{(\\arcsec)} &\n\\colhead{(\\arcsec)} &\n\\colhead{(\\degr)} &\n\\colhead{(\\arcsec)} &\n}\n\\startdata\n 60845 & B,C & 1939.4600 & 357.8 & 0.3500 & 0.1500 & 4.2 & 0.1351 & M \\\\\n 60845 & B,C & 1956.3800 & 184.1 & 
0.3100 & 0.0500 & 8.8 & 0.0956 & M \\\\\n 60845 & B,C & 2018.1639 & 92.6 & 0.1769 & 0.0050 & 1.9 & 0.0070 & S \\\\\n 60845 & A,BC & 1880.3800 & 270.5 & 2.4300 & 0.2500 & 2.2 & 0.3463 & M \\\\\n 60845 & A,BC & 1991.2500 & 201.1 & 2.0510 & 0.0100 & $-$1.2 & $-$0.0042 & H \\\\\n 60845 & A,B & 2018.1639 & 187.1 & 2.0965 & 0.0050 & 0.2 & $-$0.0018 & S\n\\enddata \n\\tablenotetext{a}{\nH: Hipparcos;\nS: speckle interferometry at SOAR;\ns: speckle interferometry at other telescopes;\nM: visual micrometer measures;\nG: Gaia DR2.\n}\n\\end{deluxetable*}\n\n\n\\section{Individual objects}\n\\label{sec:obj}\n\nFor each observed system, the corresponding Figure shows a typical\ndouble-lined CCF (the Julian date and components' designatios are marked\non the plot) together with the RV curve representing the orbit. In the\nRV curves, squares denote the primary component, triangles denote the\nsecondary component, while the full and dashed lines plot the orbit. \n\n\\subsection{HIP 19639 and 19646 (triple)}\n\n\\begin{figure}\n\\epsscale{1.1}\n\\plotone{fig1.eps}\n\\caption{CCF (left) and RV curve (right) of HIP 19639 Aa,Ab.\n\\label{fig:19639}\n}\n\\end{figure}\n\nThe 50\\arcsec ~pair of bright stars HIP~19639 and 19646 was identified\nas a visual binary by J.F.W.~Hershel in 1838. The {\\it Gaia} DR2\nastrometry leaves no doubt that this pair is physical: the components\nhave common proper motion (PM), distance, and RV. The outer orbital\nperiod is of the order of 240 kyr. \\citet{N04} found that the\ncomponent A is a double-lined binary; its 2.35-day circular orbit is\ndetermined here for the first time (Figure~\\ref{fig:19639}). Two\nspectra (JD 2458052 and 2458053) were taken with the NRES spectrograph,\nas described in \\citep{paper3}.\n\nThe CCF of the stronger component Aa is wide and asymmetric, while the\nCCF of Ab is narrower; their widths correspond to approximate\nprojected rotation velocities $ V \\sin i$ of 21.7 and 8.7 km~s$^{-1}$,\nrespectively, while the ratio of the CCF areas implies $\\Delta V_{\\rm\n Aa,Ab} = 1.50$ mag. Wide and shallow dips lead to large RV errors\nand large residuals to the orbit. The mass ratio in the inner pair is\n$q_{\\rm Aa,Ab} = 0.82$. The RV of the component B its close to the\ncenter-of-mass velocity of A. The component B also has a wide CCF\ncorresponding to $ V \\sin i$ of $\\sim$33 km~s$^{-1}$. \n\n\\begin{figure}\n\\epsscale{1.1}\n\\plotone{fig2.eps}\n\\caption{Location of three components of the HIP~19639 system on the\n color-magnitude diagram (squares). The full line is a 2 Gyr\n isochrone for solar metallicity \\citep{Dotter2008}, where asterisks\n and numbers mark masses.\n\\label{fig:iso}\n}\n\\end{figure}\n\nComponents of the triple system HIP~19639 are placed on the\ncolor-magnitude diagram (CMD) in Figure~\\ref{fig:iso}, using the\ndistance modulus of 5.47 mag. The $V$ magnitudes of Aa and Ab are\nestimated from the combined magnitude of A and the spectroscopic\nmagnitude difference of 1.5 mag. The $V-K$ color of Ab, not measured\ndirectly, is assumed to place it on the main sequence. It is clear\nthat both Aa and B are located above the main sequence, near the\nturn-off. Their positions match reasonably well the 2 Gyr isochrone\nand correspond to the masses of 1.6 and 1.5 ${\\cal M}_\\odot$. The mass\nof Ab deduced from the isochrone is 1.28 ${\\cal M}_\\odot$, matching\nthe spectroscopic mass ratio, while the radii of Aa and Ab are 2.7 and\n1.1 $R_\\odot$. The orbital axis $a_1 + a_2 = 10.6 R_\\odot$ means that\nthe binary is detached. 
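For reference, Kepler's third law with the isochrone masses of Aa and Ab gives the same value: $a_1+a_2 = [2.88\\,(2.35254\/365.25)^2]^{1\/3} = 0.049$ AU $\\approx 10.6\\,R_\\odot$.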
However, contact and mass transfer are\nimminent when Aa expands further.\n\nThe spectroscopic mass sum of the inner binary is only 0.18 ${\\cal\n M}_\\odot$. The mass sum estimated above, 2.88 ${\\cal M}_\\odot$,\nimplies an inclination $i_{\\rm Aa,Ab} = 23\\fdg4$, or $\\sin i_{\\rm\n Aa,Ab} = 0.40$, hence the synchronous rotation velocities of Aa and\nAb are 23.8 and 9.7 km~s$^{-1}$, respectively, in agreement with the\nmeasured CCF width. Summarizing, this is an interesting triple system\nwhere the inner close binary is caught at an evolutionary phase preceding\nthe mass transfer.\n\n\n\\subsection{HD 105080 and 105081 (2+2 quadruple) }\n\n\\begin{figure}\n\\epsscale{1.1}\n\\plotone{fig3.eps}\n\\caption{CCF (left) and RV curve (right) of HD 105081 Ba,Bb.\n\\label{fig:105080}\n}\n\\end{figure}\n\n\nTwo nearly equal stars HD~105080 and 105081 form a 12\\farcs9 physical\nbinary, first measured by J.~Herschel in 1835. Each of these stars is\na close pair, making the whole system quadruple. The pair Aa,Ab is a\nknown visual binary RST~4958 with a small magnitude difference. Since\nits discovery in 1942 by R. A. Rossiter, it has been observed only episodically.\nBy adding three speckle measures made at SOAR in 2017 and 2018, a\npreliminary orbit with $P = 91.6$ years can be fitted to the\nobservations (Table~\\ref{tab:vborb}). Double lines were reported for\nthis star in the literature, although there could be confusion with\nthe double-lined component B. The RV of A is practically coincident\nwith the center-of-mass velocity of B. The {\\it Gaia} DR2 parallax of\nA has a large error of 1\\,mas, being affected by the Aa,Ab pair. I\nadopt the parallax of B as the distance to the system. Both\ncomponents are then located on the CMD very close to each other, above\nthe main sequence. This distance and the visual orbit of Aa,Ab\ncorrespond to the mass sum of 1.7 ${\\cal M}_\\odot$; however, the orbit\nis poorly constrained.\n\nThe component B (HD~105081), which is only slightly fainter than A in\nthe $V$ and $G$ bands, is revealed here to be a twin double-lined pair\nwith $P=30.4$ days and eccentricity $e=0.42$\n(Figure~\\ref{fig:105080}). The spectroscopic mass sum of\nBa and Bb, 1.98 ${\\cal M}_\\odot$, and the mass sum inferred from the\nabsolute magnitudes, 2.26 ${\\cal M}_\\odot$, imply an inclination\n$i_{\\rm Ba,Bb}= 73\\degr$. The CCF dips of Ba and Bb are narrow and\ndeep, hence the residuals to the orbit are small, only\n0.07~km~s$^{-1}$.\n\n\n\n\\subsection{HIP 60845 (quintuple)}\n\n\n\\begin{figure}\n\\plotone{fig4.eps}\n\\caption{Quintuple system HIP 60845 (WDS J12283$-$6146). The positions\n of three components A, BC, and D on the sky and their motions are\n illustrated in the upper panel. Periods and masses are indicated.\n The lower panel shows the observed motion of the subsystems A,BC and\n B,C and their orbits. In this plot, the coordinate origin coincides\n with the main star A; the wavy line shows the motion of the\n component B around A according to the orbit. Small crosses depict\n measurements of A,BC where the pair BC was unresolved, while asterisks\n depict the resolved measurements of A,B. The orbit of B,C is plotted\n on the same scale around the coordinate origin by the dashed\n line and triangles. \n\\label{fig:HIP60845}\n}\n\\end{figure}\n\n\n\\begin{figure}\n\\epsscale{1.1}\n\\plotone{fig5.eps}\n\\caption{CCF (left) and RV curve (right) of HIP 60845 Aa,Ab.
\n\\label{fig:60845}\n}\n\\end{figure}\n\n\nThis multiple system is located within 50\\,pc from the Sun and\ncontains at least five stars arranged in a rare 3-tier hierarchy\nillustrated in Figure~\\ref{fig:HIP60845}. The widest 7\\farcs9 pair\nRST~4499 AB,D is physical, based on its stable relative position and\nthe {\\it Gaia} DR2 parallaxes and PMs of the components. The fast\n(175 mas~yr$^{-1}$) PM facilitates discrimination between physical and\noptical companions, despite the high stellar density in this field and\nthe faintness of D ($V = 13.4$ mag). The period of AB,D estimated\nfrom its projected separation $\\rho_{\\rm AB,D} = 7\\farcs88$ is about 4\nkyr and corresponds to the chracteristic orbital velocity $\\mu^* =\n2 \\pi \\rho_{\\rm AB,D}\/P_{\\rm AB,D} = 13$ mas~yr$^{-1}$. The relative\nPM between A and D, measured by {\\it Gaia} and corrected for the\norbital motion of A, is 11 mas~yr$^{-1}$; it is directed almost\nexactly toward A (position angle 240\\degr), as indicated by the arrow\nin Figure~\\ref{fig:HIP60845}. If D moves on an eccentric orbit, it\nwill come close to A,BC in $\\sim$700 years, disrupting the system.\nAlternatively, the observed motion might correspond to a highly\ninclined and not very eccentric outer orbit, in which case the system\ncould be dynamically stable. If the pair AB,D is bound, the true\nseparation between A and D cannot exceed its projected separation by\nmore than $\\sim$2 times, given their relative speed of 11\nmas~yr$^{-1}$ and the total mass sum of 4.5 ${\\cal M}_\\odot$.\n\n\nThe visual binary A,BC (CPO~12), for which a crude orbit with\n$P=2520$\\,years and semimajor axis of 5\\farcs4 has been published by\n\\citet{USN2002}, occupies the intermediate hierarchical level. This\norbit is poorly constrained by the century-long observed arc. I\ncomputed an alternative orbit with $P_{\\rm A,BC} = 690$ years, with \nsmaller eccentricity and semimajor axis (Table~\\ref{tab:vborb}). This\norbit makes more sense, given the threat to dynamical stability posed\nby the outer component D. Even then, the period ratio $P_{\\rm AB,D}\/\nP_{\\rm A,BC} \\sim 5$ is comparable to the dynamical stability\nlimit. On the other hand, the nearly circular orbit of the inner pair\nB,C (RST~4499) with $P=28.2$\\,years is definitive. The visual orbits and\nthe estimated mass sums match the {\\it Gaia} DR2 parallax reasonably\nwell. Both orbits are retrograde and have small inclinations.\n\nThe visual primary star A was identified as a spectroscopic binary by\n\\citet{N04}. Now its 6.3-day double-lined orbit is determined\n(Figure~\\ref{fig:60845}). The eccentricity does not differ from zero\nsignificantly, hence the circular orbit is imposed. The masses of Aa\nand Ab are almost equal, as are their CCF dips. Given the small\nangular distance between A and BC, 2\\farcs06, the light is mixed in\nthe fiber, so the CCF often has 3 dips; the CCF shown in\nFigure~\\ref{fig:60845} is an exception recorded with good seeing and\ncareful guiding. The magnitude difference between Ab and Aa is 0.37\nmag, hence their individual $V$ magnitudes are 7.79 and 8.16 mag. By\ncomparing the mass sum of Aa and Ab estimated from their absolute\nmagnitudes, 2.2 ${\\cal M}_\\odot$, with the spectroscopic mass sum of\n0.167 ${\\cal M}_\\odot$, I find that the orbit of Aa,Ab has a small\ninclination of $i_{\\rm Aa,Ab} \\approx 25\\degr$. 
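Explicitly, the ratio of the two mass sums gives $\\sin^3 i_{\\rm Aa,Ab} = 0.167\/2.2 = 0.076$, i.e. $\\sin i_{\\rm Aa,Ab} = 0.42$ and $i_{\\rm Aa,Ab} \\approx 25\\degr$.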
The synchronous\nrotation of the component Aa, of one solar radius, implies the\nprojected rotation of $V \\sin i = 3.3$ km~s$^{-1}$, close to the\nmeasured value. The three inner orbits could be close to coplanarity,\ngiven their small inclinations. \n\nThe CCF dip corresponding to the combined light of BC is narrow and\nhas a constant RV of 42.5 km~s$^{-1}$, close to the center-of-mass\nvelocity of A. Slow axial rotation, location of components on the\nmain sequence in the CMD, and non-detection of the lithium line\nsuggest that this quintuple system is relatively old.\n\n\n\\subsection{HIP 61840 (binary)}\n\n\\begin{figure}\n\\epsscale{1.1}\n\\plotone{fig6.eps}\n\\caption{CCF (left) and RV curve (right) of HIP 61840 Aa,Ab.\n\\label{fig:61840}\n}\n\\end{figure}\n\nUnlike the rest of the objects in this paper, this star is a simple\nspectroscopic binary without additional components. It belongs to the\n67 pc sample of solar-type stars \\citep{FG67a} and is young, as\ninferred from the chromoshperic activity, X-ray flux, and the presence\nof lithium in its atmosphere. The double-lined nature was announced\nby \\citet{Wichman2003} and \\citet{N04}, but the orbital period was, so\nfar, unknown. The object has been observed by speckle interferometry\nat SOAR in 2011 and 2016 and found unresolved.\n\nThe orbit with $P=9.67$ days has a small, but significantly non-zero\neccentricity $e= 0.007 \\pm 0.001$ (Figure~\\ref{fig:61840}). The\nresiduals to the circular orbit are 0.3 km~s$^{-1}$, 6$\\times$\nlarger than to the eccentric orbit. The masses of Aa and Ab estimated\nfrom absolute magnitudes are 1.24 and 1.13 ${\\cal M}_\\odot$, the\nspectroscopic mass sum is 1.62 ${\\cal M}_\\odot$, hence the inclination\nis $i = 62\\degr$. The measured projected rotation speed matches the\nsynchronous speed.\n\n\\subsection{HIP 75663 (2+2 quadruple)}\n\n\\begin{figure}\n\\epsscale{1.1}\n\\plotone{fig7.eps}\n\\caption{CCF (left) and RV curve (right) of HIP 75663 Ba,Bb.\n\\label{fig:75663}\n}\n\\end{figure}\n\n\nThis system is quadruple. The outer 9\\farcs4 pair was discovered in\n1825 by W.~Struve. Its estimated period is 17\\,kyr. \\citet{N04} noted\ndouble lines in the visual secondary B. Its orbital period is 22.9\ndays (Figure~\\ref{fig:75663}), with a large eccentricity of $e=0.61$\nand the mass ratio $q_{\\rm Ba,Bb} = 0.997$ (a twin). The areas of the\nCCF dips of Ba and Bb are equal to within 2\\%. The masses estimated\nfrom absolute magnitudes are 1.02 ${\\cal M}_\\odot$ each, leading to\nthe orbital inclination of $i_{\\rm Ba,Bb}=55\\degr$. The axial\nrotation of Ba and Bb is faster than synchronous, as expected for such\neccentric orbit.\n\n\nThe RV of the main component A is variable according to the CHIRON\ndata (range from $-$50.7 to $-$57.3 km~s$^{-1}$) and the literature.\n\\citet{N04} made two measurements averaging at $-58.2$ km~s$^{-1}$ and\nsuspected RV variability, while {\\it Gaia} measured $-55.4\n\\pm 2.$ km~s$^{-1}$. The photo-center motion of A caused by the subsystem\nAa,Ab could explain the discrepancy between the {\\it Gaia} DR2\nparallaxes of A and B {\\bf (9.29$\\pm$0.16 and 7.69$\\pm$0.07 mas,\n respectively). A similar discrepancy of parallaxes exists in the HD\n 105080\/81 system, where A is a visual binary. } The period of Aa,Ab is not known; presumably it\nis longer than a year. \\citet{Isaacson2010} classified this star as\nchromospherically active and found the RV jiter of\n4.2\\,m~s$^{-1}$. 
The location of the component A on the CMD indicates\nthat it is slightly evolved and matches approximately the 4-Gyr\nisochrone. Lithium is detectable in the spectra of A and B.\n\n\n\\subsection{HIP 76816 (2+2 quadruple)}\n\n\\begin{figure}\n\\epsscale{1.1}\n\\plotone{fig8.eps}\n\\caption{CCF (left) and RV curve (right) of HIP 76816 Aa,Ab.\n\\label{fig:76816}\n}\n\\end{figure}\n\n\nThis is a quadruple system located at 309\\,pc from the Sun. The\n5\\farcs4 visual binary HWE~37 has been known since 1876; its estimated\nperiod is 33\\,kyr. Double lines in the component A were discovered by\n\\citet{Desidera2006}. I used their measurement to refine the period of\nthe circular orbit of Aa,Ab with $P=6.95$ days, determined here\n(Figure~\\ref{fig:76816}). The eccentric orbit has similar residuals,\nhence the circular solution is retained. The components Aa and Ab are\nunequal in the amplitudes of their CCF dips (area ratio 0.152, or 2.0\nmag difference) and of the RV variation (mass ratio 0.735). The RV of\nthe visual component B is also variable with a long, still unknown\nperiod. I measured its RV at $-$50.9 km~s$^{-1}$ (constant), while\n{\\it Gaia} measured $-41.4$ km~s$^{-1}$ and \\citet{Desidera2006}\nmeasured $-37.5$ km~s$^{-1}$; these RVs differ from the center-of-mass\nvelocity of A, $-39.94$ km~s$^{-1}$.\n\nThe matching {\\it Gaia} DR2 parallaxes place both A and B above the\nmain sequence. The component B is more evolved: it is brighter than A\nin the $K$ band (unless its $K$ magnitude measured by 2MASS is\ndistorted by the proximity of A, as happens with other close\npairs). The mass sum of Aa and Ab, estimated crudely from the absolute\nmagnitudes, is almost 3 ${\\cal M}_\\odot$, leading to the\norbital inclination of 42\\fdg5. The stars\nAa and Ab apparently rotate synchronously with the orbit.\n\n\n\\subsection{HIP 78163 (2+2 quadruple)}\n\n\\begin{figure}\n\\epsscale{1.1}\n\\plotone{fig9.eps}\n\\caption{CCF (left) and RV curve (right) of HIP 78163 Aa,Ab.\n\\label{fig:78163}\n}\n\\end{figure}\n\n\nThis multiple system is composed of the outer 7\\farcs9 pair WG~185\n(estimated period 8\\,kyr) and the inner subsystem Aa,Ab discovered by\n\\citet{N04}. For the latter, I determined here the orbit with $P=\n21.8$ days (Figure~\\ref{fig:78163}), $e=0.58$, and mass ratio $q_{\\rm\n Aa,Ab} = 0.996$ (a twin). The RV of the component B measured with\nCHIRON ranges from 6.1 to 7.1 km~s$^{-1}$ and differs from the\ncenter-of-mass RV of the component A, 9.4 km~s$^{-1}$. Considering\nalso the {\\it Gaia} RV(B)=16.3 km~s$^{-1}$, I infer that B is a\nsingle-lined binary, possibly with a long period and a small RV\namplitude. Its photo-center motion could explain the slight\ndiscrepancy between {\\it Gaia} parallaxes and PMs of A\nand B. Therefore, the parallax of A, 10.42\\,mas, is likely the correct\none.\n\nThe twin components Aa and Ab have masses of one solar mass each.\nComparing them to $M \\sin^3 i$, an inclination of 50\\degr ~is\nderived. The stars A and B are located on the main\nsequence. Interestingly, lithium is detectable in the spectra of Aa\nand Ab, but not in B, which is a similar solar-mass star.\n\n\n\n\\subsection{HIP 78416 (triple or quadruple)}\n\n\\begin{figure}\n\\epsscale{1.1}\n\\plotone{fig10.eps}\n\\caption{CCF (left) and RV curve (right) of HIP 78416 Aa,Ab.\n\\label{fig:78416}\n}\n\\end{figure}\n\n\nThe outer 6\\farcs5 pair HWE~81, known since 1876, has an estimated\nperiod of 10\\,kyr.
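These order-of-magnitude periods of the wide pairs are consistent with Kepler's third law applied to the projected separations: for HWE~81, 6\\farcs5 at the 9.3\\,mas parallax corresponds to $\\sim$700 AU, and a total mass of $\\sim$3 ${\\cal M}_\\odot$ then gives $P \\sim (700^3\/3)^{1\/2} \\approx 10^4$ yr.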
\\citet{N04} detected RV variability of the\ncomponent A, later found to be a double-lined binary by\n\\citet{Desidera2006}. The orbital period is 21 days and the\neccentricity $e=0.766$ is unusually high for such a short period\n(Figure~\\ref{fig:78416}). The wide and shallow CCF dips imply fast\naxial rotation. For this reason, the RVs are not measured very\naccurately and the residuals to the orbit are large, 0.2 and 0.6\nkm~s$^{-1}$. By comparing the estimated masses of Aa and Ab, 1.15 and\n1.03 ${\\cal M}_\\odot$ respectively, with $M \\sin^3 i$, I estimate the\norbital inclination of 74\\degr.\n\nThe visual component B also has a fast axial rotation of $V \\sin i \\sim\n40$ km~s$^{-1}$, degrading the accuracy of its RV measurement. The\nRVs of B measured with CHIRON, by {\\it Gaia}, and by \\citet{Desidera2006}\n(1.4, $-$0.7, and 0.5 km~s$^{-1}$ respectively) are reasonably close\nto the center-of-mass RV of A, 1.7 km~s$^{-1}$. Therefore, B is\nlikely a single star. All three stars Aa, Ab, and B have comparable\nmasses and similar colors. The component A, being a close pair, is\nlocated on the CMD just above B, as expected.\n\nThe RVs of Aa and Ab measured by \\citet{Desidera2006}, 58.9 and $-1.8$\nkm~s$^{-1}$, correspond to the center-of-mass velocity of 30.2\nkm~s$^{-1}$ and do not fit the present orbit with $\\gamma = 1.7$\nkm~s$^{-1}$. This discrepancy suggests that Aa,Ab is orbited by\nanother close companion. Further monitoring is needed, however, to\nprove this hypothesis. \n\nAccording to \\citet{Rizzuto2011}, this system belongs to the Sco OB2\nassociation with a probability of 74\\%. Fast axial rotation and the\npresence of lithium indicate a young age.\n\n\n\n\\subsection{HIP 80448 (2+2 quadruple)}\n\n\nThis young multiple system is located within 50\\,pc from the Sun. It\ncontains four components in a small volume. The outer pair A,B\n(COO~197) has an uncertain visual orbit with a millenium-long period\n\\citep{Ary2015b}. Its secondary component was resolved in 2004 into a\n0\\farcs13 pair CVN~27 Ba,Bb by \\citet{Chauvin2010}, using adaptive\noptics. Independently, a subsystem TOK~50 Aa,Ab with similar\nseparation was discovered in 2009 by \\citet{TMH10} using speckle\ninterferometry. In fact, the same subsystem Ba,Bb was wrongly\nattributed to the primary component; its published measures at SOAR\nwith angle inverted by 180\\degr ~match the preliminary orbit with\n$P_{\\rm Ba,Bb}=20$ years. The pair TOK~50 Aa,Ab does not exist. The\ncomponent Bb is fainter than Ba by 3.5 mag in the $V$ band and by 1.1\nmag in the $K$ band; its lines are not detected in the combined\nspectrum of all stars.\n\n\n\n\\begin{figure}\n\\plotone{fig11.eps}\n\\caption{Visual orbits of HIP~80448 A,B and Ba,Bb (WDS J16253$-$4909,\n COO~197 and CVN~27).\n\\label{fig:COO197}\n}\n\\end{figure}\n\n\\begin{figure}\n\\epsscale{1.1}\n\\plotone{fig12.eps}\n\\caption{CCF (left) and RV curve (right) of HIP 80448 Aa,Ab. \n\\label{fig:80448}\n}\n\\end{figure}\n\n\n\nFigure~\\ref{fig:COO197} shows the positions of resolved components on\nthe sky and the fitted orbits. Considering that the outer orbit is not\nconstrained by the data, I fixed its period to $P_{\\rm A,B} = 1950$\nyears and its axis to $a_{\\rm A,B} = 4\\farcs64$ to adjust the\ndynamical mass sum to its estimated value, 3.5 ${\\cal M}_\\odot$. The\norbit of Ba,Bb with $P_{\\rm Ba,Bb} = 20$ years yields the mass sum of\n1.8 ${\\cal M}_\\odot$, close to the estimated one. 
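Indeed, at the 19.66\\,mas parallax the semimajor axis $a_{\\rm Ba,Bb} = 0\\farcs176$ corresponds to 8.95 AU, and Kepler's third law gives $8.95^3\/20.0^2 = 1.8$ ${\\cal M}_\\odot$.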
The ``wobble'' in\nthe positions of A,Ba caused by the subsystem is clearly seen. Its\nrelative amplitude $f = 0.40$ corresponds to the mass ratio $q_{\\rm\n Ba,Bb} = f\/(1-f) = 0.67$ that argees with the magnitude difference.\n\nThe spectrum of the main component A (in fact, blended light of Aa,\nAb, and Ba) shows stationary lines of Ba and double lines of Aa and Ab\nin rapid motion; the subsystem Aa,Ab was discovered with CHIRON\n\\citep{survey}. It is found here that the orbital period is\n$P_{\\rm Aa,Ab} = 2.3$ days and the orbit is circular\n(Figure~\\ref{fig:80448}). The mass ratio $q_{\\rm Aa,Ab} = 0.67$ is\nsimilar to the mass ratio $q_{\\rm Ba,Bb}$, while the ratio of dip\nareas corresponds to $\\Delta V_{\\rm Aa,Ab} = 1.9$ mag. Comparison of\nestimated and spectroscopic mass sums leads to the orbital inclination\n$i_{\\rm Aa,Ab} = 66\\degr$ or $i_{\\rm Aa,Ab} = 114\\degr$. It is not\ndissimilar to the inclination of other inner pair, $i_{\\rm Ba,Bb}\n= 109\\degr$, but it is difficult to believe that these two subsystems\nhave coplanar orbits, given the huge difference of their periods.\n\nThe components Aa and Ab rotate synchronously with the orbit. The\nprojected rotation of Ba, $V \\sin i = 14.3$ km~s$^{-1}$, is relatively\nfast, supporting the thesis that this system is young. The presence of\nlithium also suggests youth. The four components are located in the\nCMD at about 0.7 mag above the main sequence.\n\n\nThe pair Ba,Bb is presently near the periastron of its 20 year\norbit. I measured the RV(Ba) from 7.51 to 8.18 km~s$^{-1}$, quite\ndifferent from the center-of-mass velocity of A, $-2.08$ km~s$^{-1}$.\nThis positive difference and the positive trend actually match the orbit\nof Ba,Bb; I predict that RV(Ba) will soon start to decrease. The\norbits of A,B and Ba,Bb are not coplanar, although both are\nretrograde. \n\n\n\n\\subsection{HIP 84789 (triple)}\n\n\\begin{figure}\n\\epsscale{1.1}\n\\plotone{fig13.eps}\n\\caption{CCF (left) and RV curve (right) of HIP 84789 Aa,Ab. Note that\n the secondary component Ab has a dip with larger amplitude.\n\\label{fig:84789}\n}\n\\end{figure}\n\n\nThis 5\\farcs6 visual binary STF~2148, discovered in 1832 by W.~Struve,\nhas an estimated period of 17\\,kyr. Double lines in its primary\ncomponent A were noted by \\citet{N04}. The orbital period of the\nsubsystem Aa,Ab determined here is $P_{\\rm Aa,Ab} = 2.3$ days; it is a\ntwin pair with $q_{\\rm Aa,Ab} = 0.988$ (Figure~\\ref{fig:84789}). The deeper CCF\ndip belongs to the less massive component Ab; the more massive star Aa\nrotates a little faster and has the dip area 3\\% larger than Ab, as\nwell as the smaller RV amplitude. The RVs of both components are\nmeasured with large errors owing to the wide and low-contrast CCF\ndips; the residuals to the orbits are also large. An attempt to fit\nthe orbit with non-zero eccentricity does not result in smaller\nresiduals, hence the orbit is circular.\n\nThe estimated masses of Aa and Ab are 1.30 ${\\cal M}_\\odot$ each,\nleading to the orbital inclination of $i_{\\rm Aa,Ab} =\n45\\degr$. Assuming the radii of 1.3 $R_\\odot$, the synchronous\nrotation velocity is $V \\sin i = 20.2$ km~s$^{-1}$. The width of the\ncorrelation dips matches this estimate and suggests that Aa rotates\nslightly faster and Ab slightly slower than synchronous.\n \nThe two components A and B are located on the CMD above each other\n(they have the same color) because A contains two equal stars; the\nmass of B is very similar to the masses of Aa and Ab, 1.3 solar. 
The\ncomponent B is single, as inferred from the equality of its RV to the\ncenter-of-mass velocity of A. However, it rotates much faster, at $V\n\\sin i \\sim 49$ km~s$^{-1}$. Very likely, the rotation of Aa and Ab has\nbeen slowed down by tidal synchronization with the orbit.\n\n\n\n\\section{Summary}\n\\label{sec:sum}\n\n\n\\begin{figure}\n\\plotone{fig14.eps}\n\\caption{Eccentricity vs. period for members of hierarchical systems\n studied here (large triangles) and for 467 spectroscopic binaries\n from the MSC with primary masses from 0.5 to 1.5 solar\n (crosses). The dashed line shows the locus of HIP~78416 Aa,Ab for\n evolution with constant angular momentum, $P(1 - e^2)^{3\/2} = {\\rm\n const}$.\n\\label{fig:pe}\n}\n\\end{figure}\n\nProbably by accident, the periods of 9 spectroscopic systems within\nhierarchical multiples are equally divided between three distinct\ngroups: (i) circular orbits with $P \\approx 2.3$ days, (ii)\nintermediate periods between 6 and 9 days, circular or nearly\ncircular, and (iii) eccentric orbits with periods from 21 to 30 days.\nFigure~\\ref{fig:pe} places these orbits on the period-eccentricity\nplot. The plus signs are 467 spectroscopic binaries with primary\nmasses from 0.5 to 1.5 ${\\cal M}_\\odot$ from the MSC \\citep{MSC}. When\nthe eccentric orbits of the group (iii) are tidally circularized,\ntheir periods will match those of group (ii), suggesting that these\nsubsystems could be formed by a common mechanism, such as Kozai-Lidov\ncycles with dynamical tides \\citep{Moe2018}. The periods of group (i)\nare substantially shorter, so their formation history could be\ndifferent.\n\nSix out of the 10 double-lined binaries studied here are twins with\nmass ratio $q > 0.95$, while the lowest measured mass ratio is\n0.67. If the mass ratios were uniformly distributed in the interval\n(0.7, 1.0), where double lines are detectable, the fraction of twins\nwould be only 0.15, whereas in fact it is 0.6. It is established that\ntwins correspond to a well-defined peak in the mass ratio distribution\nof solar-type spectroscopic binaries \\citep{twins}. They are believed\nto be formed when a low-mass binary accretes a major part of its mass.\nThe mass influx also creates conditions for formation of additional\ncomponents, building stellar hierarchies ``from inside out''. Thus,\ntwins are naturally produced as inner components of multiple systems\nin the process of mass assembly.\n\nThe goal of this study was to determine unknown periods of\nspectroscopic subsystems in several multiple stars. Although this goal\nis reached, I discovered RV variability of other visual components\n(HIP 75663A, 76816B, and 78163B), converting these triples into 2+2\nquadruples. The periods of new subsystems, presumably long, remain\nunknown so far.\n\n\n\n\n\n\\acknowledgements\n\nI thank the operator of the 1.5-m telescope R.~Hinohosa for executing\nobservations of this program and L.~Paredes for scheduling and\npipeline processing. 
Re-opening of CHIRON in 2017 was largely due to\nthe enthusiasm and energy of T.~Henry.\n\nThis work used the SIMBAD service operated by Centre des Donn\\'ees\nStellaires (Strasbourg, France), bibliographic references from the\nAstrophysics Data System maintained by SAO\/NASA, and the Washington\nDouble Star Catalog maintained at USNO.\n\nThis work has made use of data from the European Space Agency (ESA) mission\n{\\it Gaia} (\\url{https:\/\/www.cosmos.esa.int\/gaia}), processed by the {\\it Gaia}\nData Processing and Analysis Consortium (DPAC,\n\\url{https:\/\/www.cosmos.esa.int\/web\/gaia\/dpac\/consortium}). Funding for the DPAC\nhas been provided by national institutions, in particular the institutions\nparticipating in the {\\it Gaia} Multilateral Agreement.\n\n\\facilities{CTIO:1.5m, SOAR}\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} diff --git a/data_all_eng_slimpj/shuffled/split2/finalzzdgks b/data_all_eng_slimpj/shuffled/split2/finalzzdgks new file mode 100644 index 0000000000000000000000000000000000000000..835dfc1de1ad38f037c4e6011f4f78e64a60948e --- /dev/null +++ b/data_all_eng_slimpj/shuffled/split2/finalzzdgks @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}\nLet $M$ be an $n$-dimensional Riemannian manifold, and let\n$\\Sigma$ be a compact surface with boundary which\ncarries a conformal structure; we do {\\bf not} assume\n$\\Sigma$ is orientable. Many existence theorems for minimal\nsurfaces in the literature \\cite{D31}, \\cite{R}, \\cite{C} find\nsolutions by minimizing the {\\bf energy} of a mapping \n$f:\\Sigma \\to M$, where both $f$ and the conformal structure of\n$\\Sigma$ are allowed to vary. The energy may be written as\n\\begin{equation} \\label{energy}\n E(f) := \\frac12 \\int_\\Sigma (|f_x|^2 + |f_y|^2)\\, dx\\, dy \n\\end{equation}\nwhere $(x,y)$ are local conformal coordinates for $\\Sigma$, and\nsubscripts are used to denote partial derivatives. Write\n$\\frac{D}{\\partial x}$, etc., for\ncovariant partial derivatives in the Riemannian manifold $M$.\nIf the mapping $f$ and the conformal structure on $\\Sigma$ are\nstationary for $E$, then the resulting mapping is {\\bf harmonic}: \n\\begin{equation} \\label{harm}\n\\Delta f :=\n\\frac{D}{\\partial x}\\frac{\\partial f}{\\partial x} +\n\\frac{D}{\\partial y}\\frac{\\partial f}{\\partial y} =0\n\\end{equation}\nand {\\bf conformal}:\n\\begin{equation} \\label{conf}\n |f_x| \\equiv |f_y|, \\langle f_x, f_y \\rangle \\equiv 0. \n\\end{equation}\n\nWe shall refer to a conformally parameterized harmonic mapping as\na {\\em conformally parameterized minimal surface} (CMS). Observe that\nfor any $W^{1,2}$ mapping, $E(f)$ is bounded below by the area \n\\begin{equation} \\label{area}\n A(f) := \\int_\\Sigma |f_x \\wedge f_y |\\, dx\\, dy,\n\\end{equation}\nwith equality if and only if $f$ is a conformal mapping almost\neverywhere. In particular, a mapping which minimizes $E(f)$ in a\ngeometrically defined class of mappings has minimum area among\nmappings of the admissible class (see \\cite{D39}, p. 232 or Remark\n\\ref{serrin} below). Moreover, a conformal mapping which is\nharmonic, that is, stationary for $E$, is minimal, that is,\nstationary for area.\n\nA CMS is an immersion except at a discrete set of {\\bf branch\npoints.} Let a point of $\\Sigma$ be given by $(0,0)$ in some local\nconformal coordinates $(x,y)$ for $\\Sigma$. Write $z=x+iy$. 
Then\n$(0,0)$ is a {\\em branch point} of $f$ of {\\em order} $m-1$ if for some\nsystem of coordinates $u^1, \\dots, u^n$ for $M$ and for some\ncomplex vector $c$, $f(x,y)$ satisfies the asymptotic description \n$$f^1(x,y) + i f^2(x,y) = c z^m + O(z^{m+1})$$ and \n$$ f^k(x,y) = O(z^{m+1}),$$ \n$k=3,\\dots, n$, as $(x,y)\\to (0,0)$. Here we have written \n$f^k(x,y)$ for the value of the $k^{th}$ coordinate $u^k$ at \n$f(x,y)$, $k=1, \\dots , n$, and $O(z^{m+1})$ denotes any\n``remainder\"\nfunction bounded by a constant times $|z^{m+1}|$. We shall\nrefer to a mapping which is an immersion except at a discrete set\nof branch points as a {\\bf branched immersion} (see \\cite{GOR}).\n\nOur main theorem is \n\\begin{thm} \\label{main}\nSuppose $\\Sigma$ is of the topological type of the real projective\nplane $\\R P^2$. Let a CMS $f:\\Sigma^2 \\to M^3$ have minimum area\namong all $h:\\Sigma \\to M$ which are not homotopic to a constant\nmapping. Then $f$ is an immersion.\n\\end{thm}\nIn order to prove this theorem, we will distinguish between two\ntypes of branch points: see \\cite{O}, \\cite{GOR}, \\cite{Alt72},\n\\cite{Alt73}, and \\cite{G73}. A {\\bf false branch point} of a\nbranched immersion $f:\\Sigma \\to M$ is a branch point $z_0$\nsuch that the image set $f(U)$ is an embedded surface, under\nanother parameterization, for some neighborhood $U$ of $z_0$ in\n$\\Sigma$. Otherwise, we call it a {\\bf true branch point}. A\nbranched immersion $f:\\Sigma \\to M$ is said to be {\\bf ramified}\nif there are two disjoint open sets $V,W \\subset \\Sigma$ with\n$f(V) = f(W)$. If $f$ is ramified in every neighborhood of a point\n$z_0$, we say that $z_0$ is a {\\bf ramified branch point}. Note\nthat any false branch point of a branched immersion must be\nramified. Osserman showed that in codimension one, a branched\nimmersion $f:\\Sigma^2 \\to M^3$ with a true branch point cannot\nminimize area, see \\cite{O} and Theorem \\ref{oss} below, in\ncontradiction to assertions of Douglas (p. 239 of \\cite{D32}) and\nof Courant (footnote p. 46 of \\cite{C41}). \nOn the other hand, regarding false branch points, we\nshall extend to nonorientable surfaces the fundamental theorem of\nbranched immersions in \\cite{G75}, and show that if a branched CMS\n$f:\\R P^2 \\to M^n$ is ramified, with any codimension, then there\nis another CMS $\\widetilde{f}:\\R P^2 \\to M$ with at most half the\narea of $f$. \n\nWe would like to acknowledge the interest of Simon Brendle in this\nproblem, whose questions, not used in \\cite{BBEN}, stimulated us\nto investigate this research topic. We are also indebted to the\nlate Jim Serrin for pointing us toward Remark \\ref{serrin}.\n\n\\section{Analysis of branch points}\\label{anal}\n\nThis section reports on material that has appeared in the\nliterature, see especially \\cite{G73}. In this paper, we shall\ndiscuss certain steps in the interest of clarity and completeness. \n\nLet $\\Sigma^2$ be a compact surface with a conformal structure, $M^n$ a\nRiemannian manifold, and let $f: \\Sigma \\to M$ be a CMS. Consider a\nbranch point $z_0 \\in \\Sigma$ for $f$. Write $D$ for the\nRiemannian connection on $M$. Let local conformal\ncoordinates $(x,y)$ for $\\Sigma$ and local coordinates \n$(q_1, \\dots, q_n)$ for $M$ be introduced with $z_0 = (0,0)$ and\n$f(z_0) = (0,\\dots,0)$ in these coordinates. 
Then equation\n\\eqref{harm} may be rewritten\n$$\\frac{D}{\\partial\\overline{z}}\\frac{\\partial f}{\\partial z}=0,$$\nwhere we write the complex coordinate $z=x+iy$, \n$\\frac{\\partial f}{\\partial z}= \n\\frac12[\\frac{\\partial f}{\\partial x}-i\\frac{\\partial f}{\\partial\ny}]$, and \n$\\frac{D}{\\partial \\overline z}= \n\\frac12[\\frac{D}{\\partial x} +i \\frac{D}{\\partial y}].$\nIn this form,\nwe see that harmonicity implies that the complex tangent vector\n$\\frac{\\partial f}{\\partial z}$ is holomorphic to first order. It\nis readily shown that for some positive integer $m$ and for some\ncomplex tangent vector $c=a+ib$ to $M$ at $f(z_0)$, \n\\begin{equation}\nf(z)=\\mathcal{R}\\{c z^m\\} +O_2(|z|^{m+1}).\n\\end{equation}\nHere we write $\\mathcal{R}\\{v\\}$ for the real part of a complex\nvector $v$, and we have used the big-O notation with the subscript\n$2$, meaning that as $z \\to 0$, the remainder term is bounded by a\nconstant times $|z|^{m+1}$, its first partial derivatives are\nbounded by a constant times $|z|^m$ and its second partial\nderivatives are bounded by a constant times $|z|^{m-1}$. It\nfollows from the conformality condition \\eqref{conf} that the\ncomplex-bilinear inner product \n$\\langle c,c \\rangle = |a|^2 - |b|^2 +2i \\langle a,b \\rangle =0.$ \nChoose a new system of coordinates $p_1, \\dots, p_n$ for $M$ near \n$f(z_0)$ with $\\frac{\\partial}{\\partial p_1}=a$ and \n$\\frac{\\partial}{\\partial p_2}=b$; and a new system of coordinates\n$(\\widetilde{x},\\widetilde{y})$ for $\\Sigma$ with\n$\\widetilde{z}=\\widetilde{x}+i\\widetilde{y}=|a|^{1\/m} z$.\nThen along the mapping $f$,\n$$p_1+ip_2=\\widetilde{z}^m +\\sigma(\\widetilde{z})$$\n and\n$$p_\\ell=\\psi_\\ell(\\widetilde{z}),$$\n$\\ell = 3,\\dots,n$, where \n$\\sigma(\\widetilde{z}), \\psi_\\ell(\\widetilde{z}) = O_2(\\widetilde{z}^{m+1})$. \nWe now define a non-conformal complex parameter $w=u_1+iu_2$ on a\nneighborhood of the branch point in $\\Sigma:$\n\\begin{equation}\\label{defw}\nw:=\\widetilde{z}\\,\n\\Big[1+\\widetilde{z}^{-m}\\sigma(\\widetilde{z})\\Big]^{1\/m}.\n\\end{equation}\nThen $w$ is a $C^{1,\\alpha}$ coordinate on $\\Sigma,$ for some\nH\\\"{o}lder exponent $\\alpha >0$, in terms of\nwhich the coordinate representation of $f$ is simplified:\n\\begin{equation}\\label{anonpar}\np_1+ip_2=w^m, \n\\end{equation} \nand\n$$p_\\ell=\\phi_\\ell(w)=O_2(w^{m+1}),$$\n$\\ell=3,\\dots,n.$\n\nWe now turn our attention to the case n=3 of {\\bf codimension\none}. The self-intersection of the surface is determined by the\nsingle real-valued function $\\phi(w)=\\phi_3(w).$ Define \n$\\overline\\phi(w)= \\phi(\\zeta_m w)$, where \n$\\zeta_m = e^{2\\pi i\/m}$ is a primitive $m^{\\it th}$ root of unity,\nand let $\\Phi(w)=\\phi(w)-\\overline\\phi(w).$ Then the zeroes of $\\Phi$\ncorrespond to curves of intersection of the surface with itself.\nBut both $\\phi$ and $\\overline\\phi$ satisfy the {\\it same}\nquasilinear minimal surface equation in $M$, with the {\\it same}\ncoefficients. Therefore, their difference \n$\\Phi:= \\phi-\\overline\\phi$ satisfies a linear homogeneous PDE:\n\\begin{equation}\\label{Phieq}\n\\sum_{i,j=1}^2 a_{ij}\\Phi_{u_i u_j}+\\sum_{i=1}^2 a_i \\Phi_{u_i} +\na \\Phi =0, \n\\end{equation}\nwhose coefficients, as functions of $w$, are obtained by \nintegrating from the PDE\nsatisfied by $\\phi$ to the PDE satisfied by $\\overline\\phi$ along\nconvex combinations. We have $a_{ij}(0,0) =\\delta_{ij}$. 
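In detail, writing $\\mathcal{Q}(\\psi)=0$ as a shorthand for the scalar quasilinear equation satisfied by both sheets and setting $\\psi_t := \\overline\\phi + t\\,(\\phi-\\overline\\phi)$, we have\n$$0 = \\mathcal{Q}(\\phi)-\\mathcal{Q}(\\overline\\phi) = \\int_0^1 \\frac{d}{dt}\\,\\mathcal{Q}(\\psi_t)\\, dt ,$$\nand differentiating under the integral produces \\eqref{Phieq}, with $a_{ij}$, $a_i$ and $a$ given by the corresponding partial derivatives of $\\mathcal{Q}$ averaged over $t \\in [0,1]$.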
\n\nIt follows that $\\Phi$ satisfies an asymptotic formula \n\\begin{equation}\\label{asymp}\n\\Phi(w) = {\\mathcal R}\\{A w^N\\} + O_2(w^{N+1}),\n\\end{equation}\nfor some integer $N >m$ and some complex constant $A\\neq 0$ \n(see \\cite{HW}). We shall call\n$N-1$ the {\\bf proper index} of the branch point.\nWe may sketch the proof of Hartman and Wintner in \\cite{HW}. We \nrewrite the PDE \\eqref{Phieq} in terms of the complex \ngradient $\\Phi_w= \\frac12(\\Phi_{u_1}- i \\Phi_{u_2})$. \nIf $\\nabla\\Phi(w)=o(w^{k-1})$, then we test against the function\n$g(w) = w^{-k} (w-\\zeta)^{-1}$, where $\\zeta \\neq 0$ is small, to\nshow that $\\Phi_\\zeta(\\zeta) = a \\zeta^k+o(\\zeta^{k+1})$ for some\n$a\\in \\C$. Proceeding by induction on $k$, one finds formula \n\\eqref{asymp} for some integer $N$ and some $A\\neq 0$, unless \n$\\nabla\\Phi(w) \\equiv 0$. Details are as in \\cite{HW}, pp. 455-458.\n\nThus, there are two alternatives: $\\Phi$ is either identically\nzero or satisfies the asymptotic formula \\eqref{asymp} where \nthe integer $N$ is $\\geq m+1$ and the complex number\n$A\\neq 0$. If $\\Phi \\equiv 0$, then $z_0$ is a false branch point:\nsee section \\ref{false} below.\n\nIf $\\Phi$ is not $\\equiv 0$, then we have a {\\bf true branch point}. \n\n\\begin{thm} \\label{oss}\n(\\cite{O}, \\cite{G73}.) Suppose $\\Sigma$ is a\nsurface with a conformal structure, $M^3$ a Riemannian manifold \nand $f:\\Sigma \\to M$ a\nmapping which has smallest area in a $C^0$ neighborhood of\n$f$. Then $f$ has no true branch points.\n\\end{thm}\n\\pf\nAs we have just seen, a true branch point $z_0$ has an order \n$m-1 \\geq 1$ and a coordinate neighborhood in $\\Sigma$ with a\n$C^{1,\\alpha}$ complex coordinate $w$, $w=0$ at $z_0$, such that\n$f$ has the representation \\eqref{anonpar} near $z_0$. Adjacent\nsheets $p_3=\\phi(w)$ and $p_3=\\overline\\phi(w)=\\phi(\\zeta_m w)$ \nintersect when $\\Phi(w)=0$, which, according to formula\n\\eqref{asymp}, occurs along $2N\\geq 6$ arcs in $\\Sigma$ forming\nequal angles $\\pi\/N$ when they leave $w=0$. Let one of these\narcs be parameterized as $\\gamma_1:[0,\\varepsilon]\\to \\Sigma$, and\nlet the corresponding arc be $\\gamma_2:[0,\\varepsilon]\\to \\Sigma$, \ndefined by $w(\\gamma_2(t)) =\\zeta_m w(\\gamma_1(t))$.\nThen for all $0\\leq t \\leq \\varepsilon$, \n$\\phi(\\gamma_1(t))=\\phi(\\gamma_2(t))$. Note from formula\n\\eqref{anonpar} that all three coordinates coincide: the mapping\n$f(\\gamma_1(t))= f(\\gamma_2(t)), 0\\leq t \\leq \\varepsilon$.\n\nWe may now construct a Lipschitz-continuous and piecewise smooth\nsurface $\\widetilde{f}$ which has the same area as $f$, but has\ndiscontinuous tangent planes, following Osserman \\cite{O}. The\nidea of the following construction is that the parameter domain\n$D$ may be cut along the arcs $\\gamma_1((0,\\varepsilon))$ and\n$\\gamma_2((0,\\varepsilon))$, opened up to form a lozenge, with two\npairs of adjacent sides originally identified, and then closed up\nalong the remaing two pairs of adjacent sides. \n\nIn detail: choose an open topological disk $D\\subset \\C$ on which the\ncoordinate $w$ is defined, $(0,0)\\in D$, and which is invariant \nunder the rotation taking $w$ to $\\zeta_m w$. Assume\n$\\gamma_1(\\varepsilon)$ and $\\gamma_2(\\varepsilon)$ are the first\npoints along $\\gamma_1$ resp. $\\gamma_2$ which lie on the boundary of\n$D$. 
We shall construct a discontinuous,\npiecewise $C^1$ mapping $Q:B_1 \\to D$, such that\n$\\widetilde{f}(\\zeta):=f(Q(\\zeta))$ is nonetheless continuous, and\n$Q$ is one-to-one and onto except for sets of measure $0$. Here,\n$B_1$ is the disk $\\{z\\in \\C: |z|<1\\}$. Choose\npoints $A_i=\\gamma_i(\\varepsilon\/2)$, $i=1,2$. Then $D$ is\nbroken along $\\gamma_1$ and $\\gamma_2$ into two curvilinear pentagons \nwith vertices $\\gamma_1(\\varepsilon), A_1, (0,0), A_2$ and\n$\\gamma_2(\\varepsilon).$ The edges of these pentagons are \n$\\gamma_1([\\varepsilon\/2,\\varepsilon])$, $\\gamma_1([0,\\varepsilon\/2])$,\n$\\gamma_2([0,\\varepsilon\/2])$, $\\gamma_2([\\varepsilon\/2,\\varepsilon])$ \nand one of the two arcs of $\\partial D$ with endpoints \n$\\gamma_1(\\varepsilon)$ and $\\gamma_2(\\varepsilon)$.\nSimilarly, break the unit disc $B_1$ \nalong the interval $(-1,1)$ of the $x$-axis and the interval\n$[-\\frac12, \\frac12]$ of the $y$-axis into two pentagons.\nEach pentagon in $B_1$ will be bounded by four line segments, an\ninterval along the $y$-axis being used twice, plus the upper or\nlower half-circle of $\\partial B_1$. Denote the points\n$a=(0,\\frac12),$ $e=(0,-\\frac12)$, $c_1=(1,0)$, $c_2=(-1,0)$ and\ngive the origin $(0,0)$ four different names: $b_1$ when approached\nfrom the first quadrant $\\{x>0,y>0\\}$, \n$b_2$ when approached from the second quadrant $\\{x<0, y>0\\}$,\n$d_2$ when approached from the third quadrant $\\{x<0, y<0\\}$, and\n$d_1$ when approached from the fourth quadrant $\\{x>0, y<0\\}$. $Q$\nwill map the pentagon in $B_1$ in the upper half-plane $y>0$ to the pentagon\nin $D$ lying counterclockwise from $\\gamma_1$ and clockwise from\n$\\gamma_2$, with $Q(c_1)=\\gamma_1(\\varepsilon)$, $Q(b_1)=A_1$,\n$Q(a)=(0,0)$, $Q(b_2)=A_2$, and $Q(c_2)=\\gamma_2(\\varepsilon)$.\nThis describes $Q$ on the boundary of one of the two pentagons;\nthe other pentagon is similar. The\ninterior of each pentagon may be made to correspond by a $C^1$\ndiffeomorphism. We require $Q$ to be continuous along the\n$x$-axis. Of course, $Q$ is discontinuous along the intervals\n$0N$ and some \n$A_2\\in \\C\\backslash \\{0\\}$. That is, the curves of intersection\nof non-successive sheets form a family of equally spaced\ndirections, which are presumably independent of the directions of\nthe curves of intersection of successive sheets. This philosophy\nis justified by the following explicit example with $N=6$ and\n$N_2=7$.\n\nChoose $a,b\\in\\C\\backslash \\{0\\}$. Using the Weierstra\\ss\\ representation\n(see \\cite{O69}, p. 63) for a minimal surface $f:\\C \\to \\R^3$ in\nEuclidean 3-space, based on the polynomials $4z^3$ and\n$2az^2+2bz^3$ (the latter representing the Gau\\ss\\ map in\nstereographic projection), we have the specific CMS with \n\\begin{eqnarray}\nf^1_z(z) = \\Big[1-(az^2+bz^3)^2\\Big]2z^3\\\\\nf^2_z(z) = -i\\Big[1+(az^2+bz^3)^2\\Big]2z^3\\\\\nf^3_z(z) = 4z^3(az^2+bz^3),\n\\end{eqnarray}\nwhich leads to \n$$w^4:=f^1+if^2 = z^4-\\frac{\\ol{a}^2\\ol{z}^8}{2}-\n\\frac{8}{9}\\ol{a}\\ol{b}\\ol{z}^9- \\frac{2}{5}\\ol{b}^2\\ol{z}^{10}$$\nand to\n$$z=w\\Big(1+\\frac{\\ol{a}^2\\ol{w}^8}{8 w^4}+\n\\frac{2\\ol{a}\\ol{b}\\ol{w}^9}{9 w^4}+O_2(|w|^6)\\Big),$$\nvia an extensive, but straightforward, computation. \nRecall that each component $f^k$ of\n$f$ is real and harmonic as a function of $z$. 
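In particular, since $f^3$ is real and harmonic, $f^3 = 2\\,\\mathcal{R}\\{F\\}$ where $F$ is the holomorphic primitive of $f^3_z$ with $F(0)=0$; integrating $4z^3(az^2+bz^3)$ gives $F(z) = \\frac{2a}{3} z^6 + \\frac{4b}{7} z^7$, and hence $f^3(z)=8\\,\\mathcal{R}\\{\\frac{a}{6} z^6 + \\frac{b}{7} z^7\\}$.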
Rewriting \n$$f^3(z)=8\\,\\mathcal{R} \\{ \\frac{a}{6} z^6 + \\frac{b}{7} z^7\\}$$\nas a (non-harmonic) function of $w$, we find \n$$ \\phi(w) = 8\\,\\mathcal{R}\\{\\frac{a}{6}w^6 + \\frac{b}{7}w^7+\n\\frac{\\ol{a}|a|^2}{8} \\ol{w}^8 w^2 +O_2(|w|^{11})\\}:$$\nthe difference of $\\phi$ on successive sheets is \n\\begin{equation}\\label{succ}\n\\Phi(w):=\\phi(w)-\\phi(iw)=\n8\\,\\mathcal{R}\\{\\frac{a}{3}w^6+\\frac{b}{7}(1+i)w^7\\}+ O_2(|w|^{10})\n\\end{equation}\nand on non-successive sheets is\n\\begin{equation}\\label{nonsucc}\n\\Phi_2(w):=\\phi(w)-\\phi(-w)=\n8\\,\\mathcal{R}\\{bw^7\\}+ O_2(|w|^{10}).\n\\end{equation}\n\nFrom the formula \\eqref{succ}, we see that $N=6$, $A=\\frac{8a}{3}$ \nand the curves of intersection of\nsuccessive sheets are curves in $\\R^3$ leaving the branch point\nalong the $(x_1,x_2)$-plane, which is the tangent plane to\n$\\Sigma$ at the branch point, in the $12$ directions\n$(\\cos(4\\theta),\\sin(4\\theta),0)$, where $6\\theta+\\arg(a)$ is an\ninteger multiple of $\\pi$. The $12$ directions are\npaired off to form $6$ curves in $\\R^3$ leaving the branch point \nat equal angles $\\frac{2\\pi}{3}$.\n\nSimilarly, from the formula \\eqref{nonsucc}, we see that $N_2=7$,\n$A_2=8b$ and the curves \nof intersection of nonsuccessive sheets are seven curves leaving\nthe branch point and making equal angles (images of 14 curves in\nthe $w$-plane, paired). The arguments of \n$a,b \\in \\C\\backslash\\{0\\}$ may be given arbitrary values, so that the\nangle between a representative of the family of six curves of\nself-intersection and a representative of the family of seven\ncurves of self-intersection may be chosen arbitrarily. For most\nchoices, these 13 curves in $\\R^3$ will {\\bf not} form equal\nangles at the branch point.\n\n\\end{obs}\n\n\\section{False branch points}\\label{false}\n\nThe elimination of false branch points from an area-minimizing CMS\n$f:\\Sigma \\to M$ is in general only possible by comparison with\nsurfaces $\\Sigma_0$ of {\\it reduced topological type} (see\n\\cite{D39}, \\cite{G77}): for orientable surfaces, $\\Sigma_0$ has \nsmaller genus or the same total genus and more connected components. \nAs an oriented example, we may\nchoose $\\Sigma$ to be a surface of genus $2$ and $\\Sigma_0$ to be\na torus. Then there is a branched covering $\\pi: \\Sigma \\to\n\\Sigma_0$ with two branch points of order one. (Think of $\\Sigma$\nas embedded in $\\R^3$ so that it is invariant under a rotation by\n$\\pi$ about the $z$-axis, and meets the $z$-axis only at two\npoints: the quotient under this rotation is a torus.) Now choose a\nminimizing CMS $f_0: \\Sigma_0 \\to M^n$, and let $f:\\Sigma \\to M^n$\nbe $f = f_0 \\circ \\pi$. Then $f$ has two false branch points. In\norder to be sure that $f$ minimizes area in its homotopy class, we\nmay choose $M^3$ to be a flat $3$-torus with two small periods and\none large period. As one sees from this example, in order to show \nthat false branch points do not\noccur, we must assume that $f$ minimizes area among mappings from\nsurfaces of the topological type of $\\Sigma$ {\\bf and} of lower\ntopological type. This hypothesis was used by J. Douglas (see\n\\cite{D39}), in a strict form, to find the existence of minimal\nsurfaces $f:\\Sigma \\to \\R^3$ with prescribed boundary. 
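(As a quick consistency check on the genus-two example above: the Riemann--Hurwitz formula, recalled as \\eqref{RH} below, gives\n$$ \\chi(\\Sigma) = d\\,\\chi(\\Sigma_0) - {\\mathcal O}(\\pi) = 2\\cdot 0 - 2 = -2, $$\nwhich is indeed the Euler characteristic of the closed orientable surface of genus two; here $d=2$ and the two branch points of order one contribute ${\\mathcal O}(\\pi)=2$.) 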
For \n$\\R P^2$, however, there are no nonorientable surfaces of lower type.\n\nResults in the literature for false branch points have until now\nassumed that $\\Sigma$ is {\\it oriented}, see \\cite{G73},\n\\cite{Alt73}, \\cite{GOR}, \\cite{G75}, \\cite{G77} and \\cite{T}.\nIn order to treat false branch points for nonorientable\nsurfaces, we will need to extend certain known results. In\nparticular, the following theorem appears in \\cite{G75} for \n{\\em orientable} surfaces, possibly with boundary, including\nsurfaces of prescribed mean curvature vector not necessarily zero.\n\n\\begin{thm}\\label{fund}\n(Fundamental theorem of branched immersions)\nLet $\\Sigma^2$ be a compact surface with boundary endowed with a \nconformal structure, $\\partial\\Sigma$ possibly empty \nand $\\Sigma$ not necessarily orientable. Let $M^n$ be a Riemannian\nmanifold and $f:\\Sigma \\to M$ a CMS. Assume that the restriction\nof $f$ to $\\partial\\Sigma$ is injective. Then there exists a compact\nRiemann surface with boundary $\\widetilde\\Sigma$, a branched covering \n$\\pi:\\Sigma \\to \\widetilde\\Sigma$ and a CMS \n$\\widetilde{f}: \\widetilde\\Sigma \\to M$ such that \n$f = \\widetilde{f} \\circ \\pi$. Moreover, the restriction of\n$\\widetilde{f}$ to $\\partial\\widetilde\\Sigma$ is injective.\nFurther, $\\widetilde\\Sigma$ is orientable if and only if $\\Sigma$\nis orientable.\n\\end{thm}\n\\pf\nIf $\\Sigma$ is orientable, then \nTheorem 4.5 of \\cite{G75} provides an orientable quotient surface\n$\\widetilde\\Sigma$, a branched covering \n$\\pi:\\Sigma \\to \\widetilde\\Sigma$ and an unramified CMS \n$\\widetilde f: \\widetilde\\Sigma \\to M$ such that \n$f=\\widetilde f \\circ \\pi$. \n\nThere remains the case where $\\Sigma$ is {\\bf not orientable}.\nAssume, without loss of generality, that $\\Sigma$ is connected. \n\nLet $p:\\widehat\\Sigma \\to \\Sigma$ be the oriented double cover of\n$\\Sigma$, with the induced conformal structure. Then \n$\\widehat\\Sigma$ is connected and orientable, and $p$ is two-to-one. \nThe composition \n$\\widehat{f}=f \\circ p: \\widehat\\Sigma \\to M$ is a CMS, defined on\nan orientable surface, and we may apply Theorem 4.5 of \\cite{G75}\nto find a compact orientable surface with boundary \n$\\widehat{\\widetilde\\Sigma}$, an unramified\nCMS $\\widehat{\\widetilde{f}}:\\widehat{\\widetilde\\Sigma}\\to M$ and\nan orientation-preserving branched covering \n$\\widehat{\\pi}:\\widehat\\Sigma \\to \\widehat{\\widetilde\\Sigma}$ so that\n$\\widehat{f}$ factors as $\\widehat{\\widetilde{f}}\\circ\\widehat{\\pi}$.\n\nNow let $\\widetilde\\Sigma$ be the quotient surface of\n$\\widehat{\\widetilde\\Sigma}$ under the identification of \n$\\widehat{\\pi}(x^+) \\in \\widehat{\\widetilde\\Sigma}$ with \n$\\widehat{\\pi}(x^-) \\in \\widehat{\\widetilde\\Sigma}$ whenever \n$x^\\pm \\in \\widehat{\\Sigma}$ and $p(x^+)= p(x^-)$ in $\\Sigma$.\nThen for each $x \\in \\Sigma$, $p^{-1}(x)$ consists of two points\n$x^+,\\ x^-\\in\\widehat{\\Sigma}$ and there are diffeomeorphic\nneighborhoods of $\\widehat{\\pi}(x^+)$ and of $\\widehat{\\pi}(x^-)$\nwhich are thereby identified in $\\widetilde\\Sigma$, with reversal\nof orientation. This implies \nthat $\\widetilde\\Sigma$ is a differentiable $2$-manifold. Write\n$\\widetilde{p}:\\widehat{\\widetilde\\Sigma}\\to\\widetilde\\Sigma$ for\nthe quotient mapping. 
Then $\\widetilde{f}:\\widetilde\\Sigma \\to M$\nis well defined such that \n$\\widehat{\\widetilde{f}} = \\widetilde{f}\\circ\\widetilde{p}$.\nAlso, for $x\\in \\Sigma$, the two pre-images $x^+, x^-\\in\n\\widehat{\\Sigma}$ have\n$\\widetilde{p}\\circ\\widehat{\\pi}(x^+)=\\widetilde{p}\\circ\\widehat{\\pi}(x^-),$ \nso that we may define\n$\\pi:\\Sigma \\to \\widetilde\\Sigma$ by \n$\\pi(x):= \\widetilde{p}\\circ \\widehat{\\pi}(x^\\pm)$.\n\nNote that the mappings $p, \\widehat{\\pi}$ and $\\widetilde p$ are\nsurjective, and therefore also $\\pi:\\Sigma\\to\\widetilde{\\Sigma}$.\n\nIn the event that $\\partial{\\Sigma}$ is nonempty, since\n$f=\\widetilde f \\circ \\pi$ restricted to $\\partial\\Sigma$ is\ninjective, it follows readily that the restriction of\n$\\widetilde{f}$ to $\\partial{\\widetilde{\\Sigma}}$ is injective. \n\nThen in the above construction, for each $x\\in \\Sigma$, \n$f$ defines the\nsame piece of surface, with opposite orientations, on\nneighborhoods of $x^+$ and of $x^-$. The branched covering\n$\\widehat\\pi: \\widehat\\Sigma \\to \\widehat{\\widetilde\\Sigma}$\npreserves orientation, implying that\n$\\widehat\\pi(x^+) \\neq \\widehat\\pi(x^-)$.\nSince $\\widehat{\\widetilde\\Sigma}$ is connected, there is a path\nfrom $\\widehat\\pi(x^+)$ to $\\widehat\\pi(x^-)$ whose image in\n$\\widetilde\\Sigma$ reverses orientation. Therefore\n$\\widetilde\\Sigma$ is not orientable. \n\\qed\n\n\\section{An immersion of $\\R P^2$}\n\nWe are now ready to give the proof of the main Theorem \\ref{main}.\nLet $f: \\R P^2 \\to M^3$ be a CMS into a three-dimensional\nRiemannian manifold, which has minimum area among all mappings\n$\\R P^2 \\to M^3$ not homotopic to a constant. Write\n$\\Sigma = \\R P^2$. From Theorem \\ref{oss}, we see that $f$ has no\ntrue branch points. (For this conclusion, it would suffice that $f$\nminimizes area in a $C^0$ neighborhood of each branch point.)\n\nThere remains the possibility of false branch points.\n\nWe first recall the computation of the Euler characteristic of a\nsurface. For a compact, connected surface which is either\norientable or nonorientable, the Euler characteristic\n$$ \\chi(\\Sigma) = 2 - r(\\Sigma),$$\nwhere $r(\\Sigma)$ is the topological characteristic of $\\Sigma$\n\\cite{D39}, also known as the nonorientable genus;\nwe\nshall adopt the term {\\bf demigenus}. If $\\Sigma$ is orientable,\nthen it has even demigenus and genus $\\frac12 r(\\Sigma)$. If \nit is non-orientable, then $\\Sigma$ may be constructed\nby adding $r(\\Sigma)$ cross-caps to the sphere.\nThe demigenus of the sphere equals zero, of $\\R P^2$ equals one, of\nthe torus and the Klein bottle equals two. For other compact\nsurfaces without boundary, the demigenus is $\\geq 3$. \n\nNow according to Theorem \\ref{fund}, there is a compact Riemann\nsurface $\\widetilde\\Sigma$, a branched covering\n$\\pi:\\Sigma \\to \\widetilde\\Sigma$ and an unramified CMS\n$\\widetilde f:\\widetilde\\Sigma \\to M$ such that\n$f = \\widetilde f \\circ \\pi$. We will apply the Riemann-Hurwitz\nformula to the branched covering $\\pi$:\n\\begin{equation}\\label{RH}\n \\chi(\\Sigma) = d \\, \\chi(\\widetilde\\Sigma) - {\\mathcal O}(\\pi),\n\\end{equation}\nwhere $d$ is the degree of $\\pi$, \n${\\mathcal O}(\\pi)$ is the total order of branching of $\\pi$, and\n$\\chi$ is the Euler number. Suppose that $f$ has a false\nbranch point, or more generally, a ramified branch point. 
Then\n${\\mathcal O}(\\pi)\\geq 1,$ and the branched covering $\\pi$ has\ndegree $d \\geq 2$.\n\nUsing the formula \\eqref{RH}, we can determine the\ntopological type of $\\widetilde\\Sigma$. Since $\\Sigma$ is\nhomeomorphic to $\\R P^2$, it has $\\chi(\\Sigma)=1$. We also know\nthat $d>0$\nand ${\\mathcal O}(\\pi) \\geq 1$. It follows that the demigenus\n$r(\\widetilde\\Sigma) \\leq 1$. Otherwise, the integer\n$r(\\widetilde\\Sigma)$ would be $\\geq 2$, which implies\n$\\chi(\\widetilde\\Sigma) \\leq 0$ and by the formula \\eqref{RH}, \n$1 \\leq -{\\mathcal O}(\\pi)\\leq -1$, a contradiction. That is,\n$\\widetilde\\Sigma$ is either the sphere or the projective plane.\nBut according to Theorem \\ref{fund}, since $\\Sigma$ is not\norientable, $\\widetilde\\Sigma$ is not orientable; therefore,\n$\\widetilde\\Sigma$ is homeomorphic to $\\R P^2$.\n\nNote that if $\\widetilde f:\\widetilde\\Sigma \\to M$ were homotopic\nto a constant mapping, then so would be $f = \\widetilde f \\circ\n\\pi$.\n\nOn the other hand,\nthe area of $f$ equals the area of $\\widetilde f$ times the degree\n$d$ of $\\pi$. But $d\\geq 2$, so the area of $\\widetilde f$ is at\nmost one-half the area of $f$. But this would mean that $f$ does\nnot have minimum area among maps $:\\R P^2 \\to M$ not homotopic to\na constant mapping, contradicting our hypothesis. This implies\nthat $f$ has no branch points, and is therefore an immersion. \n\\qed\n\n\\begin{rem}\nWe have treated conformally parameterized {\\em minimal} surfaces\nin this paper. However, the proofs go through with only minor\nchanges for projective planes of nonzero {\\em prescribed} mean\ncurvature $H:M^3 \\to \\R$, provided that $f(\\Sigma)$ has a\ntransverse orientation. This can occur only when $M$ is\nnon-orientable.\n\nIt also appears plausible that a version of Theorem \\ref{fund} can be\nextended to the more general case of mappings satisfying the {\\em\nunique continuation property}, see \\cite{GOR}. \n\\end{rem}\n\n\\begin{rem}\\label{serrin}\nAn alternative approach to branch points of minimal surfaces\nof the type of the disk appears in the recent book \\cite{T} by \nTromba. The second and higher variations of energy $E$ \n(see \\eqref{energy}) of a CMS $f$ are computed in a\nneighborhood of a branch point $z_0$, and the lowest nonvanishing\nvariation is shown to be negative if the branch point is\nnonexceptional. This is defined in terms of the {\\bf index} $i>m$ \nof $z_0$, where $i+1$ is the order of contact of the mapping\nwith the tangent plane at the branch point. Note that the proper\nindex $N$ is $\\geq i$ (recall the definition \\eqref{asymp}).\nIf $i+1$ is an integer multiple of $m$, where $m-1$ is the order\nof the branch point, then the branch point is called {\\em\nexceptional}. Tromba also shows that exceptional interior branch\npoints will not occur, provided that the mapping has minimum area\n$A$ among surfaces with the same boundary curve. \n\nIn fact, minimizing area and minimizing energy, under such Plateau\nboundary conditions, are equivalent properties of a Lipschitz\ncontinuous mapping $f:B \\to \\R^n$, where $B$ is the unit disk in\n$\\R^2$. Namely, if $E(f) \\leq E(g)$ for all Lipschitz-continuous\nmappings $g:B \\to \\R^n$ defining the same boundary curve, then\n$A(f) \\leq A(g)$ for all such $g$, as we now show. \n\nOtherwise, for some $g:B \\to \\R^n$ with the same Plateau boundary \nconditions as $f$, $A(f) > A(g)$. 
Write $\\eta = A(f)-A(g)>0$.\nApproximate $g$ with $g^\\delta(w):=(g(w),\\delta w) \\in \\R^{n+2}$.\nThen $g^\\delta$ is a Lipschitz immersion, so there are conformal\ncoordinates $\\widetilde w=:F^{-1}(w)$ for some bi-Lipschitz \nhomeomorphism $F:B \\to B$ which preserves $\\partial B$ (see\n\\cite{M}). Write\n$\\widetilde g^\\delta(\\widetilde w):= g^\\delta(F(\\widetilde w))$ for\nthe conformal mapping with the same image as $g^\\delta$. Define\n$\\widetilde g:B \\to \\R^n$ by composing $\\widetilde g^\\delta$ with\nthe projection from $\\R^{n+2} \\to \\R^n$. Then the energy \n$E(\\widetilde g^\\delta) = E(\\widetilde g) + \\delta^2 E(F)$.\nAlso, the area $A(g^\\delta) \\leq A(g) + C\\delta$ for some constant\n$C$. It follows that\n$$E(\\widetilde g)\\leq E(\\widetilde g^\\delta)=A(\\widetilde\ng^\\delta)=A(g^\\delta)\\leq A(g) + C\\delta 1$, for any of the models. \nThe intersection of the horizontal line \nwith a PC shows where that mode gives a $2\\sigma$ indication of difference \nfrom $\\Lambda$. For $|w_a|<0.4$, only 2 PCs meet this criterion. \n(Note breaks in the PC5 curve come from $\\alpha_5$ switching sign.) \n}\n\\label{fig:sigalf}\n\\end{center}\n\\end{figure}\n\n\\begin{figure}[!htb]\n\\begin{center}\n\\psfig{file=sigalfo.ps,width=3.4in}\n\\caption{As Fig.~\\ref{fig:sigalf}, but for $w(a)=-1+A\\,[1-\\cos(\\ln a)]$ \nwith non-monotonic behavior. Note $w$ would exceed 0 at some redshifts \nfor $A>0.5$. \n}\n\\label{fig:sigalf2}\n\\end{center}\n\\end{figure}\n\n\nTo exhibit robustness of these results against the \nspecific assumed model, we scan over model parameters for classes of \nqualitatively different dark energy physics, representing the thawing \nclass, freezing class, and a non-monotonic EOS. In the last case, we \nsee that the ability of PCA not to be locked into a particular \nfunctional form, e.g.\\ a monotonic parametrization, does not make \nmore degrees of freedom significant. \n\nAgain, what we really care about is the $S\/N$. In Table~\\ref{tab:sn2} \nwe list the fraction of the total $S\/N$ (Eq.~\\ref{eq:snr}) \ncontributed by the two best modes -- for the case \nfor each dark energy class where higher modes contribute \nthe {\\it most\\\/}. For the thawing class the higher modes add less \nthan 0.3\\% to the total, and for the freezing class less than 2.8\\% in \nthe most sensitive case, dropping to less than 0.5\\% for modes above \nthe third. And recall this was for a highly idealized experiment. In \nthe oscillating model, an ad hoc case designed to be especially \nPCA-friendly, the most extreme case with oscillations reaching $w=0$ \nallows higher modes to contribute up to 14\\% (8\\% for above the third \nmode). These are actually {\\it overestimates\\\/} of the importance of high \nmodes because as discussed in the next section the $S\/N$ of the higher \nmodes degrades when $w(z>9)$ is marginalized over rather than fixed. 
\n\n\n\\begin{table}[htbp]\n\\begin{center}\n\\begin{tabular*}{0.95\\columnwidth} \n{@{\\extracolsep{\\fill}} l c c }\n\\hline\nModel & $(S\/N)_2\/(S\/N)_{\\rm all}$ & \\ $(S\/N)_3\/(S\/N)_{\\rm all}$ \\\\ \n\\hline\nFreezing ($w_a=0.7$)& 0.972& \\ 0.995 \\\\ \nThawing ($w_a=-0.5$) & 0.997& \\ 0.9998 \\\\ \nOscillating ($A=0.5$) & 0.862& \\ 0.922 \\\\ \n\\hline \n\\end{tabular*}\n\\caption{Fraction of total signal-to-noise contributed by the first \ntwo, or three, principal components for the case in each dark energy \nclass {\\it most\\\/} favoring PCA high modes.} \n\\label{tab:sn2}\n\\end{center}\n\\end{table}\n\n\nInterestingly, fitting $w_0$-$w_a$ \nrather than using PCA provides distinction from the cosmological \nconstant at the $1\\sigma$ ($2\\sigma$) level for $|w_a|=0.13$ (0.24) -- \nvery comparable to the PCA approach. That is, the second parameter \nbecomes useful at almost the same values as in the top panel of \nFig.~\\ref{fig:sigalf}, for the PCA freezing case, and is more \nsensitive than in the bottom panel for the PCA thawing case. \nBoth methods demonstrate that significant physical constraints on \nthe dark energy EOS are described by of order two quantities. \n\nThe main point though is that the important information is not in \n$\\sigma_i$, but $\\sigma_i\/\\alpha_i$. Just because PCA may say \nuncertainties $\\sigma_i$ are small, \nthis does not mean that we know the physics answer. \n\n\n\\section{What Happens at High Redshifts, Stays at High Redshifts?} \n\nWhile the previous demonstration of PC uncertainties and \nsignal-to-noise is \nthe most important of this paper, we also note that assumptions about \nthe high redshift EOS behavior have significant effects. \nIn practice, the PCs are often computed assuming a cut off at some \nmaximum redshift to avoid complications in calculating \nthe cosmic microwave background (CMB) primordial power spectra and the \ninitial conditions for growth of matter perturbations. At higher \nredshifts one must therefore choose a particular form or value for \nthe EOS. FOMSWG fixes $w(z>9)=-1$. One justification \nis that for the cosmological constant the dark energy density \nfades away quickly into the past, so the exact value of $w$ there is \nunimportant. However, we have no guarantee that the cosmological \nconstant is the correct model, nor even essentially any current \ninformation on the behavior of the EOS at $z>1$ \\cite{kowalski}. \nAssumptions about the high redshift behavior can lead to significant \nbiases and improper conclusions about the nature \nof dark energy (see, e.g., \\cite{deplpca}). \n\nBias should not be as severe a problem for a high transition redshift, \n$z=9$, and with many redshift bins at $z\\gtrsim3$ the extra degrees of \nfreedom should ameliorate bias from \nthe prior at $z>9$. However, fixing $w(z>9)=-1$ does demonstrably \ninfluence the PCs: for example some of the uncertainties \n$\\sigma_i\/\\alpha_i$ that are apparently tightly determined \ncan degrade by a factor three when $w(z>9)$ is not fixed. \nThe modes themselves also change shape, as we discuss next. \n\nThus, what happens at high redshift \ndoes {\\it not\\\/} stay at high redshift, but can affect some important \naspects of the principal component analysis. \n\n\n\\section{Highest is Best?} \n\nDoes the location in redshift of the maximum of a PC, say the first one, \nsay something fundamental about the science reach of the survey or probe \nemployed? 
No -- as is clear from Eq.~(\\ref{eq:pc}) the EOS constraints \nfollow from the sum -- with both positive and negative contributions -- \nover all the PCs, not any single one. \n\nMoreover, Figure~\\ref{fig:pchiz} demonstrates that artificially fixing \nthe high redshift EOS behavior changes the PC shape and peak location. \nThis shift has nothing to do with the experimental design and so the \npeak location is not a signpost to experiment optimization. \nAssumptions on $w(z>9)$ can affect probes differently: e.g.\\ supernova \ndistances do not involve $w(z>9)$ while baryon acoustic oscillations, \nbeing tied to high redshift, do. \n\n\n\\begin{figure}[!htb]\n\\begin{center}\n\\psfig{file=PCA_w01wa7.ps,width=3.4in}\n\\caption{The location and shape of peaks in principal component modes \ndepend on the high redshift treatment of the EOS. The peak location \ntherefore does not in itself translate into the science impact of a given \nexperiment. Note the effects are more severe for a more realistic \nexperiment. \n}\n\\label{fig:pchiz}\n\\end{center}\n\\end{figure}\n\n\nFinally, even if a PC peak did mean that the experiment is most sensitive \nto the dark energy EOS at a higher redshift, say, that would not imply \nthat the experiment is most sensitive to the nature of dark energy. \nFor example, dark energy is most influential today, so perhaps one wants \nan experiment most sensitive to the low redshift behavior. \n(We emphasize that understanding dark energy at low redshift still \nrequires measuring expansion and growth to high redshift, to break \ndegeneracies.) At best, one could say that probes that weight the dark \nenergy differently in redshift have some complementarity. \n\nBut there is \nno justification for claiming that the probe with the highest peak, or \nwith the peak at the highest redshift, is the best probe. \n\n\n\n\n\\section{Conclusions} \n\nPrincipal component analysis is a valid technique, \nused appropriately. Oversimplifying PCA interpretation or \ninadequately appreciating the effect of assumptions \nemployed can lead to misunderstandings and false beliefs. We \npresent cautionary examples of three apparently plausible but \nunjustified extrapolations. While data \nshould be analyzed in every reasonable manner, for understanding the \ngeneric cosmology reach the more complicated PCA approach \ndemonstrates no extraordinary advantage over the well-tested and highly \ncalibrated phase space dynamics approach of $w_0$-$w_a$. \n\n\n\\acknowledgments \n\nWe thank Andy Albrecht and Dragan Huterer for detailed discussions of \nPCA issues, and Bob Cahn for useful suggestions. \nThis work has been supported in part by the Director, Office of Science, \nOffice of High Energy Physics, of the U.S.\\ Department of Energy under \nContract No.\\ DE-AC02-05CH11231. \n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nA well known item of the string\/gauge theory holographic dictionary states that closed strings are the duals of glueballs in the corresponding gauge theories. On the other hand, using the gravity\/gauge theory duality, glueball operators of the boundary field theory correspond to fields in the gravity bulk theory, in particular modes of the dilaton, the graviton, and the RR field. Using this latter correspondence a spectrum of holographic glueballs has been determined \\cite{Csaki:1998qr,Brower:2000rp,Elander:2013jqa}. 
However, from the same reasoning as in \\cite{Sonnenschein:2014jwa} and \\cite{Sonnenschein:2014bia}, it is probably the spectrum of the strings and not of the bulk fields that would really correspond to the experimental data when moving from large $N_c$ and large $\\lambda$ to the realistic values of $N_c=3$ and $\\lambda\\sim 1$. For mesons \\cite{Sonnenschein:2014jwa} and baryons \\cite{Sonnenschein:2014bia} it was argued that the (open) string configurations admit a modified Regge behavior that matches that of the observed hadrons whereas bulk modes do not admit this property. In this paper we argue that a similar correspondence exists for glueballs, and that the glueballs will probably have a better description in terms of closed strings rather than modes of bulk fields. The idea of glueballs as closed strings has previously been discussed in various terms in works such as \\cite{Bhanot:1980fx,Niemi:2003hb,Sharov:2007ag,Solovev:2000nb,Talalov:2001cp,LlanesEstrada:2000jw,Szczepaniak:2003mr,Pons:2004dk,Abreu:2005uw, Brau:2004xw,Mathieu:2005wc,Simonov:2006re,Mathieu:2006bp,BoschiFilho:2002vd}.\n\nIt is common lore that glueballs and flavorless mesons cannot be distinguished since they carry the same quantum numbers and that the corresponding resonances encountered in experiments are in fact generically linear combinations of the two kinds of states. If, however, we refer to the stringy description of hadrons then, since mesons and glueballs correspond to open and closed strings respectively, there are certain characterizing features with which one can distinguish between them.\n\nThe most important difference between the open string mesons and the closed string glueballs is the slope, or equivalently the (effective) tension. It is a basic property of strings (see sections \\ref{sec:lightcone} and \\ref{sec:flat_string}) that the effective tension of a closed string is twice that of an open string and hence there is a major difference between the two types of strings, as\n\\begin{equation} \\ensuremath{\\alpha^\\prime}_{closed}= \\frac12 \\ensuremath{\\alpha^\\prime}_{open}\\qquad \\rightarrow \\qquad \\ensuremath{\\alpha^\\prime}_{gb}= \\frac12 \\ensuremath{\\alpha^\\prime}_{meson} \\,. \\end{equation}\nThus the basic idea of this paper is that one should be able to distinguish between glueballs and flavorless mesons by assigning some of them to certain trajectories with a mesonic slope $\\ensuremath{\\alpha^\\prime}_{meson}\\sim 0.9$ GeV$^{-2}$ and others to trajectories with a glueball slope of $\\ensuremath{\\alpha^\\prime}_{gb}\\sim 0.45$ GeV$^{-2}$.\n\nThe slope is not the only thing that is different between open and closed strings. It follows trivially from the spectrum of closed strings (see section \\ref{sec:lightcone}) that in the critical dimension the closed string has an intercept that is twice that of the open string. However, we are interested in strings in four dimensions rather than the critical dimension, and there, as will be discussed in section \\ref{sec:non_critical_string}, the determination of the intercept is still not fully understood. Thus the intercept cannot currently serve as a tool for identifying glueballs.\n\nAnother important difference between open and closed string hadrons is in their decay mechanisms. 
Based on the holographic description of a meson as a string that connects to flavor branes at its two endpoints, it was determined in \\cite{Peeters:2005fq} that the width of decay of a meson of mass \\(M\\) behaves like\n\\begin{equation} \\Gamma \\sim \\left(\\frac{ 2 M}{\\pi T} - \\frac{m^1_{sep}+m^2_{sep}}{2T}\\right) e^{\\frac{-{m^q_{sep}}^2}{T}} \\,, \\label{eq:holo_width}\\end{equation}\nwhere $M$ is the mass of the meson, $m^1_{sep}$ and $m^2_{sep}$ are the masses of the string endpoint quarks in the initial state (assumed here to be small), $m^q_{sep}$ is the mass of the quark and antiquark pair generated by the split of the string and $T$ is the string tension. The factor preceding the exponent is in fact the string length.\n\nAs we discuss in section \\ref{sec:decays}, for the case of a closed string decaying into two open strings the width will be proportional to the string length squared, and the single exponential suppressing factor will be replaced by\n\\begin{equation} e^{\\frac{-m^q_{sep}{}^2}{T}}e^{\\frac{-m^{q^\\prime}_{sep}{}^2}{T}} \\,,\\end{equation}\nwhere $m^{q}_{sep}$ and $ m^{q^\\prime}_{sep}$ are the masses of each of the two quark-antiquark pairs that will have to be created in the process.\n\nThus it is clear that the width of a glueball should be narrower than that of the corresponding meson open string, particularly for decay channels involving heavier quarks like $s$, $c$ and $b$. This can serve as an additional tool of disentangling between mesons and glueballs. We list one distinguishing feature of a glueball decaying into two mesons in section \\ref{sec:decays}.\n\nThe main motivation of reviving the description of mesons and baryons in terms of open strings in \\cite{Sonnenschein:2014jwa} and \\cite{Sonnenschein:2014bia} has been the holographic string\/gauge duality. The same applies also to the closed string picture of glueballs. The spectra of closed strings in a class of holographic confining models was analyzed in \\cite{PandoZayas:2003yb}. The result was that the relation between the mass and angular momentum takes the following form:\n\\begin{equation} J = \\ensuremath{\\alpha^\\prime}_{gb} (E^2 - 2 m_0 E) + a \\,, \\end{equation}\nwhere $\\ensuremath{\\alpha^\\prime}_{gb}$ is the corresponding slope, $E$ is the mass of the glueball, $a$ is the intercept, and $m_0$ is a parameter that can be either positive or negative and is determined by the particular holographic model used. Note that this relation modifies the well known linear relation between $J$ and $E^2$. In section \\ref{sec:holo_fits} we discuss the phenomenological implications of this relation and analyze the possibility of grouping flavorless hadrons along such holographic trajectories.\n\nThe main goal of this paper is to perform an explicit comparison between observational data of flavorless hadrons and the resonance states predicted by the models of rotating open string with massive endpoints for the mesons and rotating folded closed strings for glueballs.\n\nUnfortunately there exists no unambiguous way to assign the known flavorless hadrons (the focus in this paper is on the \\(f_0\\) and \\(f_2\\) resonances) into trajectories of mesons and glueballs, but it is clear that \\textbf{one cannot consistently sort all the known resonances into meson trajectories alone}. 
One of the main problems in identifying glueball trajectories is simply the lack of experimental data, particularly in the mass region between \\(2.4\\) GeV and the \\(c\\bar{c}\\) threshold, the region where we expect the first excited states of the glueball to be found. It is because of this that we cannot find a glueball trajectory in the angular momentum plane.\n\nWe mostly focused then on the radial trajectories of the \\(f_0\\) (\\(J^{PC} = 0^{++}\\)) and \\(f_2\\) (\\(2^{++}\\)) resonances. For the \\(f_0\\) we examined the possibility of identifying one of the states \\(f_0(980)\\), \\(f_0(1370)\\), \\(f_0(1500)\\), or \\(f_0(1710)\\) as the glueball ground state and building the trajectories beginning from those states. This procedure did not show any significant preference for any one of the glueball candidates over the other. For the \\(f_2\\) there is less ambiguity, but still no positive identification. Between the different \\(2^{++}\\) state we find that the two very narrow resonances \\(f_2(1430)\\) and \\(f_J(2220)\\) (the latter being a popular candidate for the tensor glueball) do not belong on meson trajectories.\n\nThe paper is organized as follows. Section \\ref{sec:theory} is devoted to the theory of rotating closed strings. In section \\ref{sec:lightcone} we review the light-cone quantization of the basic bosonic string and describe its spectrum. Next we address the rotating folded string. We present the classical solution and the corresponding Regge trajectory, starting by discussing the case of flat spacetime. We introduce the Polchinski-Strominger term needed to assure two dimensional conformal invariance in non-critical dimension and discuss the problematic result for the intercept for a folded closed string in four dimensions. In section \\ref{sec:holo_string} we review the results of \\cite{PandoZayas:2003yb} for the rotating folded string in holographic backgrounds, and the semiclassical correction obtained there. Section \\ref{sec:decays} is devoted to the decay process of string decaying into two strings. We summarize the result for the decay of an open string into two open strings \\cite{Peeters:2005fq} and generalize it also to the case of a closed string decaying into two open strings. Section \\ref{sec:phenomenology} deals with the phenomenology of the rotating folded string models and the comparison between them and the observational data. We begin by spelling out the basic assumptions of the phenomenological models in section \\ref{sec:fitting_models}. We then present the key experimental players: the $f_0$ and $f_2$ resonances. In \\ref{sec:f0_fits} we propose several assignments of the $f_0$ resonances into radial $(n,M^2)$ trajectories, first into only various mesonic trajectories and then into various possible combinations when singling out some states as glueballs. In \\ref{sec:f2_fits} we describe possible assignments of the $f_2$, first into orbital $(J,M^2)$ trajectories, then into \\((n,M^2)\\) trajectories. Section \\ref{sec:holo_fits} expands on previous sections by using the non-linear trajectory that characterizes the glueballs of holographic models. In section \\ref{sec:lattice} we discuss the spectrum of glueballs that follows from lattice gauge theory models. We review the trajectories determined in lattice simulations and their corresponding slopes. Both types of trajectories, $(J,M^2)$ and $(n,M^2)$, are discussed. Section \\ref{sec:summary} is a summary and discussion of the results and states some open questions. 
In the appendix \\ref{sec:predictions} we list the predictions of our models for the yet unobserved excited partners of the glueball candidates, based on their Regge trajectories.\n\n\\section{The rotating closed string} \\label{sec:theory}\n\\subsection{Quantized closed string in light cone gauge} \\label{sec:lightcone}\nWe review here the derivation of the spectrum of the bosonic closed string in the light cone gauge. We simply present the derivation in chapter 1 of \\cite{Polchinski:Vol1}, omitting some of the details for brevity's sake. The following treatment is essentially true only for the critical dimension \\(D = 26\\), but we keep a general \\(D\\) in the formulae. We return to this point in section \\ref{sec:non_critical_string}.\n\nWe start from the Polyakov action\n\\begin{equation} S = -\\frac{1}{4\\pi\\ensuremath{\\alpha^\\prime}}\\int d\\tau d\\sigma \\sqrt{-\\gamma}\\gamma^{\\alpha\\beta}\\eta_{\\mu\\nu}\n\\partial_\\alpha X^\\mu \\partial_\\beta X^\\nu \\,.\\end{equation}\nWe define the light cone coordinates \\(x^\\pm = \\frac{1}{\\sqrt{2}}(x^0\\pm ix^1)\\), and set the gauge by making the three requirements\n\\begin{equation} X^+ = \\tau\\,, \\qquad \\partial_\\sigma \\gamma_{\\sigma\\sigma} = 0\\,, \\qquad \\sqrt{-\\gamma} = 1 \\label{eq:lightcone} \\,.\\end{equation}\nThe equations of motion for the transverse coordinates are then simple wave equations and they are generally solved (with closed string boundary conditions, for \\(\\sigma \\in (-\\ell,\\ell)\\)) by\n\\begin{equation} X^i(\\sigma,\\tau) = x^i + \\frac{p^i}{p^+}\\tau + i\\left(\\frac{\\ensuremath{\\alpha^\\prime}}{2}\\right)^{1\/2}\n\\sum_{n\\neq0}\\left[\\frac{\\alpha^i_n}{n}\\exp\\left(-i\\frac{2\\pi n (\\sigma+c\\tau)}{2\\ell}\\right)\n + \\frac{\\beta^i_n}{n}\\exp\\left(i\\frac{2\\pi n (\\sigma-c\\tau)}{2\\ell}\\right)\\right] \\,.\\end{equation}\nThe constant \\(c\\) is related to the coordinate length \\(\\ell\\) and the conserved quantity \\(p^+\\) via \\(c = \\ell\/(\\pi\\ensuremath{\\alpha^\\prime} p^+)\\). Aside from \\(\\ell\\), which is proportional to the physical string length, these constants do not have any significance on their own except in keeping track of units.\n\nThe left and right moving modes, \\(\\alpha^i_n\\) and \\(\\beta^i_n\\), are independent of each other (and hence, commute) and are normalized in such a way that\n\\begin{equation} [\\alpha^i_m,\\alpha^j_n] = [\\beta^i_m,\\beta^j_n] = m\\delta^{ij}\\delta_{m,-n} \\,.\\end{equation}\nThe Hamiltonian has the mode expansion\n\\begin{equation} H = \\frac{p^i p^i}{2p^+} + \\frac{1}{\\pi\\ensuremath{\\alpha^\\prime}}\\left[\\sum_{n>0}\\left(\\alpha^i_{-n}\\alpha^i_n+\\beta^i_{-n}\\beta^i_n\\right)+ A + \\tilde{A}\\right] \\,, \\end{equation}\nnoting that \\((\\alpha^i_n)^\\dagger = \\alpha^i_{-n}\\). \\(A\\) and \\(\\tilde{A}\\) are the c-numbers one gets when normal-ordering the sums. 
After regularizing the appropriate infinite sums, identical for the left and the right moving modes, and taking contributions from \\(D-2\\) transverse modes, we get the result\n\\begin{equation} A = \\tilde{A} = \\frac{2-D}{24} \\,.\\end{equation}\n\nFrom here we get the spectrum using the mass shell condition \\(M^2 = -p^2 = 2p^+ H - p^i p^i\\), which translates to\n\\begin{equation} M^2 = \\frac{2}{\\ensuremath{\\alpha^\\prime}}\\left(\\sum_{n>0}\\left(\\alpha^i_{-n}\\alpha^i_n+\\beta^i_{-n}\\beta^i_n\\right)+ A + \\tilde{A}\\right) \\,,\\end{equation}\nor,\n\\begin{equation} M^2 = \\frac{2}{\\ensuremath{\\alpha^\\prime}}\\left(N+\\tilde{N}+A+\\tilde{A}\\right) \\,,\\end{equation}\nwhere \\(N\\) and \\(\\tilde{N}\\) are the total population numbers of the left and right moving modes.\n\nFor comparison, the same treatment of the open string leads to the result\n\\begin{equation} M^2_{open} = \\frac{1}{\\ensuremath{\\alpha^\\prime}}\\left(N+A\\right) \\,.\\end{equation}\nHere we have neither the constant prefactor of two which halves the slope of the closed string, nor do we have two different kinds of modes on the string and the resulting doubling of the intercept.\n\n\\subsubsection{Quantized closed string: The spectrum}\nWhile the left and right moving modes on the closed string are independent, there is one constraint that relates them, affecting the spectrum. After making the gauge choice by imposing the three conditions of eq. \\ref{eq:lightcone} we still have a residual symmetry of \\(\\tau\\)-independent translations of \\(\\sigma\\). This results in the additional constraint\n\\begin{equation} N = \\tilde{N} \\,.\\end{equation}\nThe total number of excitations has to be equal for the left and right moving modes.\n\nThe vacuum state of the closed string is defined as the state annihilated by all \\(\\alpha^i_n\\) and \\(\\beta^i_n\\), for positive \\(n\\). It has \\(N = \\tilde{N} = 0\\), we denote it simply \\(|0\\rangle\\),\\footnote{The vacuum state may also have some center of mass momentum \\(p\\), but we suppress it in this notation.} and its mass is determined by the intercepts:\n\\begin{equation} M^2 = \\frac{2}{\\ensuremath{\\alpha^\\prime}}(A+\\tilde{A}) = \\frac{2-D}{6\\ensuremath{\\alpha^\\prime}} \\,.\\end{equation}\nFor \\(D = 26\\) this state is a tachyon, with \\(M^2 = -4\/\\ensuremath{\\alpha^\\prime}\\). 
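(Note that because of the constraint \\(N = \\tilde{N}\\) there is no state with \\(N+\\tilde{N}=1\\): a single \\(\\alpha^i_{-1}\\) or \\(\\beta^i_{-1}\\) excitation on its own is not level-matched, so the first allowed excited level of the closed string is \\(N=\\tilde{N}=1\\).) 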
The first excited state has \\(N = \\tilde{N} = 1\\), and so is of the form\n\\begin{equation} \\alpha^{i}_{-1}\\beta^j_{-1}|0\\rangle \\end{equation}\nand its mass is\n\\begin{equation} M^2 = \\frac{2}{\\ensuremath{\\alpha^\\prime}}(2+A+\\tilde{A}) = \\frac{26-D}{6\\ensuremath{\\alpha^\\prime}} \\,.\\end{equation}\nIn the critical dimension we have here a massless tensor and a massless scalar.\n\nThe most important feature of the spectrum for our uses is that it forms an infinite tower of states, with the difference between each pair of consecutive states being\n\\begin{equation} \\Delta M^2 = \\frac4\\ensuremath{\\alpha^\\prime} \\,, \\end{equation}\nwith one factor of two coming from the halving of the slope, and the other from the fact that \\(N+\\tilde{N}\\) takes only even values: \\(0,2,4,6,\\ldots\\).\n\n\\subsection{The rotating closed string solution} \\label{sec:rotating_string}\n\\subsubsection{Classical rotating folded string} \\label{sec:flat_string}\n\nHere we use the Nambu-Goto action for the string\n\\begin{equation} S = -\\frac{1}{2\\pi\\ensuremath{\\alpha^\\prime}}\\int d\\tau d\\sigma \\sqrt{-h} \\,,\\end{equation}\nwith\n\\begin{equation} h = \\det h_{\\alpha\\beta}\\,, \\qquad h_{\\alpha\\beta} = \\eta_{\\mu\\nu}\\partial_\\alpha X^\\mu \\partial_\\beta X^\\nu \\,,\\end{equation}\nand \\begin{equation} \\ensuremath{\\alpha^\\prime} = \\frac{1}{2\\pi T} \\,. \\end{equation}\n\nThe rotating folded string is the solution\n\\begin{equation} X^0 = \\tau \\qquad X^1 = \\frac{1}{\\omega}\\sin(\\omega\\sigma)\\cos(\\omega\\tau) \\qquad X^2 = \\frac{1}{\\omega}\\sin(\\omega\\sigma)\\sin(\\omega\\tau) \\,.\\label{eq:rotsol}\\end{equation}\nWe take \\(\\sigma \\in (-\\ell,\\ell)\\) and correspondingly \\(\\omega\\) takes the value \\(\\omega = \\pi\/\\ell\\). The energy of this configuration is\n\\begin{equation} E = T \\int_{-\\ell}^\\ell d\\sigma \\partial_\\tau X^0 = 2T\\ell \\,.\\end{equation}\nThe angular momentum we can get by going to polar coordinates (\\(X^1 = \\rho\\cos\\theta, X^2 = \\rho\\sin\\theta\\)), then\n\\begin{equation} J = T \\int_{-\\ell}^\\ell d\\sigma \\rho^2 \\partial_\\tau \\theta =\n\\frac{T}{\\omega} \\int_{-\\ell}^\\ell d\\sigma \\sin^2(\\omega\\sigma) = \\frac{\\pi T}{\\omega^2} = \\frac{T\\ell^2}{\\pi}\\,.\\end{equation}\nFrom the last two equations we can easily see that for the classical rotating folded string\n\\begin{equation} J = \\frac{1}{4\\pi T}E^2 = \\frac{1}{2}\\ensuremath{\\alpha^\\prime} E^2 \\,.\\end{equation}\n\n\\subsubsection{Quantization of the rotating folded string} \\label{sec:non_critical_string}\nIn a previous section we reviewed the quantization of the bosonic closed string in the critical dimension, \\(D = 26\\). There we have the result\n\\begin{equation} \\frac{1}{2}\\ensuremath{\\alpha^\\prime} M^2 = N + \\tilde{N} - \\frac{D-2}{12} \\,. \\end{equation}\nWe would like to obtain a correction to the classical trajectory of a similar form when quantizing the rotating folded string in \\(D = 4\\) dimensions. In \\cite{Hellerman:2013kba} the intercept was computed in the context of effective string theory where the Polchinski-Strominger (PS) term \\cite{Polchinski:1991ax},\n\\begin{equation} \\mathcal{L}_{PS} = \\frac{26-D}{24\\pi}\\frac{(\\partial_+^2X\\cdot\\partial_-X)(\\partial_-^2X\\cdot\\partial_+X)}{(\\partial_+X\\cdot\\partial_-X)^2} \\label{eq:psterm} \\,, \\end{equation}\ncompensates for the conformal anomaly when working outside the critical dimension. 
The derivatives are with respect to the variables \\(\\sigma^\\pm \\equiv \\tau\\pm\\sigma\\).\n\nAs was described in the introduction and will be further discussed in section \\ref{sec:holo_string}, a major candidate for describing the glueball is a rotating closed string in a holographic background which lives, by definition, in the critical dimension. One may conclude that in this case the PS term is not needed. However, as was argued in \\cite{Aharony:2009gg}, upon integrating out the massive degrees of freedom of the closed string that resides in the critical holographic dimension one gets the PS action as part of the effective string action in the non-critical $D$ dimensions. \n\nThe calculation in \\cite{Hellerman:2013kba} is for a general dimension \\(D\\), with, as already mentioned, the PS term included. In dimensions larger than four the string will rotate in two planes and the angular momentum is characterized by two quantum numbers \\(J_1\\) and \\(J_2\\). The result obtained there for the Regge trajectory of the closed string is\n\\begin{equation} \\frac{\\ensuremath{\\alpha^\\prime}}{2}M^2 = (J_1+J_2) - \\frac{D-2}{12} + \\frac{26-D}{24}\n\\left((\\frac{J_1}{J_2})^\\frac{1}{4}-(\\frac{J_2}{J_1})^\\frac{1}{4}\\right)^2 \\,. \\end{equation}\nThis expression is singular when \\(J_2 = 0\\), which is necessarily the case when \\(D = 4\\), since in four dimensions the rotation is in a single plane. Therefore the expression is not usable precisely in the context in which we would like to use it.\n\nWe can see where this originates by inserting the 4D rotating solution from eq. \\ref{eq:rotsol} into the expression for the PS term, eq. \\ref{eq:psterm}. The expression obtained,\n\\begin{equation} \\mathcal{L}_{PS} = -\\frac{D-26}{24\\pi}\\omega^2\\tan^2(\\omega\\sigma) \\,,\\end{equation}\nis singular when \\(\\omega\\sigma = \\pm\\frac{\\pi}{2}\\), i.e. at the two points \\(\\sigma = \\pm\\frac{\\ell}{2}\\), which are the ``endpoints'', or folding-points, of the rotating folded string, and the integral on \\(\\mathcal{L}_{PS}\\) giving the correction diverges:\n\\begin{equation} \\int_{-\\ell}^\\ell d\\sigma \\mathcal{L}_{PS} = -\\frac{D-26}{12\\pi}\\omega^2\\int_{-\\ell\/2}^{\\ell\/2}d\\sigma\\tan^2(\\omega\\sigma) = -\\frac{D-26}{12\\pi}\\omega\\left(\\tan x-x\\right)|_{x=-\\pi\/2}^{\\pi\/2} \\,.\\end{equation}\nWe see that beneath the divergent \\(\\tan x\\) there is also the finite part\n\\begin{equation} \\frac{D-26}{12}\\frac{\\pi}{\\ell} \\,.\\end{equation}\n\nThe denominator in the PS term is simply \\((\\dot{X}^2)^2\\), so the problem emerges because the endpoints move at the speed of light. The same problem is encountered in the treatment of the open string, but as was shown in \\cite{Hellerman:2013kba} in that case one can introduce a counterterm at the string boundaries that renders the action and correspondingly the intercept finite. In fact it was found out that summing up the contributions to the latter from the PS and from the Casimir term, the \\(D\\) dependence is canceled out between the two terms, and the intercept is given simply by $a= 1$, for all \\(D\\). Another possible approach for regularizing the rotating open string is to add masses to its endpoints. However, the quantization of the system of a rotating string with massive particles on its ends is still not fully understood \\cite{ASY}.\n\nFor the closed string it is not clear how to regularize the system. One potential way to do it might be to add two masses at the two endpoints of the folded string. 
The resulting system looks like two open strings connected at their boundaries by these masses, but not interacting in any other way. In the rotating solution the two strings lie on top of one another. The boundary condition, which is the equation of motion of the massive endpoint is modified: it is the same as for the open string, but with an effective double tension \\(T \\rightarrow 2T\\), in accordance with the ratio of the slopes of the open and closed strings discussed above. In fact everything else is doubled too. If this process of adding masses on the closed string and taking then the limit of zero mass is a legitimate way to regularize, then it is probable that the result is also simply double that of the open string, as it is for the critical dimension. Obviously, though, even in that case we cannot perform the quantization of the folded closed string since, as mentioned above, we do not fully control the quantization of an open string with massive endpoints. \n\n\\subsubsection{The closed string in a curved background} \\label{sec:holo_string}\nThe full analysis of rotating closed string in holographic curved backgrounds was performed in \\cite{PandoZayas:2003yb}. We present here the key points in short form.\n\nIf we look at a curved background metric of the form\n\\begin{equation} ds^2 = h(r)^{-1\/2}(-dX^0dX^0+dX^idX^i) + h(r)^{1\/2}dr^2 + \\ldots \\,,\\end{equation}\nwith \\(i = 1,2,3\\) and the ellipsis denoting additional transverse coordinates, the rotating folded string, namely the configuration,\n\\begin{equation} X^0 = l\\tau \\qquad X^1 = l\\sin\\sigma\\cos\\tau \\qquad X^2 = l\\sin\\sigma\\sin\\tau \\,,\\end{equation}\nis still\\footnote{We follow a somewhat different normalization here, taking \\(\\omega = \\pi\/\\ell\\) from the previous section to be \\(1\\), and introducing a common prefactor \\(l\\), but the solution is essentially the same as the flat space solution of section \\ref{sec:flat_string}.} a solution to the string equations of motion provided we take\n\\begin{equation} r(\\sigma,\\tau) = r_0 = Const. \\end{equation}\nwhere \\(r_0\\) is a point where the metric satisfies the condition\n\\begin{equation} \\partial_r g_{00}(r)|_{r=r_0} = 0, \\qquad g_{00}(r)|_{r=r_0} \\neq 0 \\,.\\end{equation}\nThe existence of such a point is also one of the sufficient conditions for the dual gauge theory to be confining \\cite{Kinar:1998vq}. Compared to the folded string in flat spacetime, the energy and angular momentum take each an additional factor in the form of \\(g_{00}(r_0)\\):\n\\begin{equation} E = \\frac{1}{2\\pi\\ensuremath{\\alpha^\\prime}}\\int_{-\\pi}^\\pi g_{00}(r_0)d\\sigma = g_{00}(r_0) \\frac{l}{\\ensuremath{\\alpha^\\prime}} \\,,\\end{equation}\n\\begin{equation} J = T\\int_{-\\pi}^\\pi g_{00}(r_0)\\sin^2\\sigma d\\sigma = g_{00}(r_0) \\frac{l^2}{2\\ensuremath{\\alpha^\\prime}} \\,.\\end{equation}\nDefining an effective string tension \\(T_{eff} = g_{00}(r_0)T\\) and slope \\(\\ensuremath{\\alpha^\\prime}_{eff} = (2\\pi T_{eff})^{-1}\\), we can write the relation\n\\begin{equation} J = \\frac{1}{2}\\ensuremath{\\alpha^\\prime}_{eff} E^2 \\,.\\end{equation}\nThe same factor of \\(g_{00}(r_0)\\) multiplies the effective tension in the open string case, and therefore the closed and open string slopes are still related by the factor of one half, although the open string trajectories have the additional modification which can be ascribed to the presence of endpoint masses \\cite{Kruczenski:2004me,Sonnenschein:2014jwa}. 
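\n\n(For completeness, the relation \\(J = \\frac{1}{2}\\ensuremath{\\alpha^\\prime}_{eff} E^2\\) indeed follows by eliminating \\(l\\) between the two expressions above:\n\\begin{equation} \\frac{J}{E^2} = \\frac{g_{00}(r_0)\\, l^2\/(2\\ensuremath{\\alpha^\\prime})}{\\left(g_{00}(r_0)\\, l\/\\ensuremath{\\alpha^\\prime}\\right)^2} = \\frac{\\ensuremath{\\alpha^\\prime}}{2\\, g_{00}(r_0)} = \\frac{1}{2}\\ensuremath{\\alpha^\\prime}_{eff} \\,,\\end{equation}\nindependently of the value of \\(l\\).)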
We draw the two types of strings in figure \\ref{fig:closed_open_map}.\n\n\\begin{figure}[t!] \\centering\n\t\\includegraphics[width=0.95\\textwidth]{closed_and_open.png} \\\\\n\t\\caption{\\label{fig:closed_open_map} Closed and open strings in holographic backgrounds (top), and their mappings into flat spacetime (bottom). For the open string, the mapping from curved to flat background adds endpoint masses to the strings \\cite{Kruczenski:2004me,Sonnenschein:2014jwa}, with the vertical segments mapped to the point-like masses in flat space. For the closed string, we look at the simple folded string in both cases. Note that classically the rotating folded string has zero width, and as such would look like an open string with no endpoint masses, and not like in the drawing.}\n\t\t\t\\end{figure}\n\n\nCalculations of the quantum corrected trajectory of the folded closed string in a curved background in different holographic backgrounds were performed in \\cite{PandoZayas:2003yb} and \\cite{Bigazzi:2004ze} using semiclassical methods. This was done by computing the spectrum of quadratic fluctuations, bosonic and fermionic, around the classical configuration of the folded string.\nIt was shown in \\cite{PandoZayas:2003yb} that the Noether charges of the energy $E$ and angular momentum $J$ that incorporate the quantum fluctuations, are related to the expectation value of the world-sheet Hamiltonian in the following manner:\n\\begin{equation}\nlE -J = \\int d\\sigma <{\\cal H}_{ws}> \\,.\n\\end{equation}\nThe contributions to the expectation value of the world-sheet Hamiltonian are from several massless bosonic modes, ``massive\" bosonic modes and massive fermionic modes. \n For the ``massive\" bosonic fluctuations around the rotating solution one gets a \\(\\sigma\\)-dependent mass term, with equations of motion of the form\n\\begin{equation} (\\partial_\\tau^2-\\partial_\\sigma^2+2m_0^2l^2\\cos^2\\sigma)\\delta X^i = 0 \\end{equation}\nappearing in both analyses, \\(m_0\\) being model dependent. A similar mass term, also with \\(\\cos\\sigma\\), appears in the equations of motion for some fermionic fluctuations as well, the factor of \\(\\cos^2\\sigma\\) in the mass squared coming in both cases from the induced metric calculated for the rotating string, which is \\(h_{\\alpha\\beta} \\sim \\eta_{\\alpha\\beta}\\cos^2\\sigma\\).\n\nThe result in both papers is that the Regge trajectories are of the form\n\\begin{equation} J = \\ensuremath{\\alpha^\\prime}_{closed}(E^2- 2m_0 E) +a \\,.\\end{equation}\nwhere $m_0$ is a mass parameter that characterizes the holographic model and $a$ is the intercept which generically takes the form $a= \\frac{\\pi}{24}(\\#\\text{bosonic massless modes} - \\#\\text{fermionic massless modes})$. The two papers \\cite{PandoZayas:2003yb} and \\cite{Bigazzi:2004ze} use different holographic models (Klebanov-Strassler and Maldacena-N\\'{u}\\~{n}ez backgrounds in the former and Witten background in the latter) and predict different signs for \\(m_0\\), which is given as a combination of the parameters specific to the background. In \\cite{PandoZayas:2003yb} \\(m_0\\) is positive, while in \\cite{Bigazzi:2004ze} it is negative. 
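\n\nTo get a rough feeling for what the sign and size of \\(m_0\\) mean for the spectrum, one may complete the square in the trajectory above, \\(E = m_0 + \\sqrt{(J-a)\/\\ensuremath{\\alpha^\\prime}_{closed}+m_0^2}\\), so that the whole trajectory is rigidly shifted in mass by approximately \\(m_0\\). For purely illustrative values \\(\\ensuremath{\\alpha^\\prime}_{closed}=0.45\\) GeV\\(^{-2}\\), \\(a=0\\) and \\(|m_0|=0.15\\) GeV, a \\(J=2\\) state would sit at\n\\begin{equation} E \\simeq 2.11\\,,\\quad 2.26\\,,\\quad 1.96\\ \\text{GeV} \\qquad \\text{for} \\qquad m_0 = 0\\,,\\ +0.15\\,,\\ -0.15\\ \\text{GeV}\\,,\\end{equation}\nrespectively, so the two signs lead to appreciably different glueball masses.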
According to \\cite{PandoZayas:2003yb} the slope of the closed string trajectory is left unchanged from the classical case\n\\begin{equation} \\ensuremath{\\alpha^\\prime}\\!_{closed} = \\frac{1}{2}\\ensuremath{\\alpha^\\prime}\\!_{open} \\,,\\end{equation}\nwhile the model used in \\cite{Bigazzi:2004ze} predicts an additional renormalization of the slope,\n\\begin{equation} \\ensuremath{\\alpha^\\prime}\\!_{closed} = \\frac{1}{2}\\left(1-\\frac{c}{\\lambda}\\right)\\ensuremath{\\alpha^\\prime}\\!_{open} \\,,\\end{equation}\nfor some small constant \\(c\\), which makes this a smaller effect than that caused by the addition of the \\(m_0\\) mass term.\n\n\\subsection{Other string models of the glueball and the Regge slope}\nIn previous sections we have shown that the expected Regge slope for the closed string is\n\n\\begin{equation} \\ensuremath{\\alpha^\\prime}_{closed} = \\frac12\\ensuremath{\\alpha^\\prime}_{open} \\,, \\end{equation}\nbut other string models of the glueball predict different values for the effective slope of the glueballs, \\(\\ensuremath{\\alpha^\\prime}_{gb}\\).\n\nOne such prediction is based on the potential between two static adjoint SU(N) charges, that, according to lattice calculations, is expected to be proportional to the quadratic Casimir operator. For small distances this added group theory factor can be obtained easily from perturbation theory, and calculations in \\cite{Bali:2000un} show that what is referred to as the ``Casimir scaling hypothesis'' holds in lattice QCD for large distances as well, and this means that the effective string tension also scales like the Casimir operator (as the potential at large distances is simply \\(V(\\ell) \\approx T_{eff}\\ell\\)). Therefore, a model of the glueball as two adjoint charges (or constituent gluons) joined by a flux tube predicts the ratio between the glueball and meson (two fundamental charges) slopes to be\n\\begin{equation} \\frac{\\ensuremath{\\alpha^\\prime}\\!_{gb}}{\\ensuremath{\\alpha^\\prime}_{meson}} = \\frac{C_2(\\text{Fundamental})}{C_2(\\text{Adjoint})} = \\frac{N^2-1}{2N^2} = \\frac{4}{9}\\,, \\end{equation}\nwhere for the last equation we take \\(N = 3\\). For \\(N \\rightarrow \\infty\\) we recover the ratio of \\(1\/2\\), as can be easily seen. An argument from field theory for the double tension of the adjoint string at large \\(N\\) is in \\cite{Armoni:2006ri}.\n\nOther models attempt to tie the closed string to the phenomenological pomeron. The pomeron slope is measured to be \\cite{Donnachie:1984xq}\n\\begin{equation} \\ensuremath{\\alpha^\\prime}\\!_{pom} = 0.25\\:\\text{GeV}^{-2} \\approx 0.28\\times\\ensuremath{\\alpha^\\prime}\\!_{meson}\\,, \\end{equation}\nand the pomeron trajectory is commonly associated with both glueballs and closed strings. One string model that predicts a pomeron-like slope was proposed in \\cite{Isgur:1984bm} and is presented in \\cite{Meyer:2004jc} or in more detail in \\cite{Meyer:2004gx}. It is simply the model of a rotating closed string, with a fixed circular shape. This string has two types of trajectories, a phononic trajectory (excitations propagating along the string) which has \\(\\ensuremath{\\alpha^\\prime}_{phonon} = \\frac{1}{4}\\ensuremath{\\alpha^\\prime}_{open}\\), and an orbital trajectory (the circular string rotating around an axis in the circle's plane), for which \\(\\ensuremath{\\alpha^\\prime}\\!_{orbital} = \\frac{3\\sqrt{3}}{16}\\ensuremath{\\alpha^\\prime}_{open} \\approx 0.32\\times\\ensuremath{\\alpha^\\prime}_{open}\\). 
If the rotating circular loop were allowed to deform, it would necessarily flow towards the flattened folded string configuration that we have been discussing, which always maximizes the angular momentum at a given energy.\n\nThere are also other possibilities of rigidly rotating closed strings of other shapes, as in \\cite{Burden:1982zb}, which may give yet another prediction of the ratio between open and closed string Regge slopes. Another related object is the ``\\(\\Delta\\)-shaped'' string, which we mentioned in \\cite{Sonnenschein:2014bia} as one of the stringy models of the baryon. The model is that of three masses with each pair of them connected by a string. This results in what is essentially a closed string with three quarks placed on it, which has led 't Hooft to remark that such a configuration could be related to a quark-gluon hybrid \\cite{'tHooft:2004he}, rather than a pure glueball.\n\n\\subsection{The decays of the holographic closed string} \\label{sec:decays}\n\\subsubsection{Open string decays}\nThe open string hadron decays when it tears at a point along the string and the two loose ends connect via quantum fluctuations to a flavor brane, creating a quark-antiquark pair. Another way to think of this process is that the string fluctuates before tearing, and when it reaches a flavor brane it connects to it, tears, and the pair is created. When thinking of the decay in this second way, with the fluctuation preceding the tear, it is clear that the quark and antiquark are of the same flavor, a result not a priori guaranteed when the string tears and then reconnects to the branes. This is illustrated in figure \\ref{fig:decay_open}.\n\n\\begin{figure}[t!] \\centering\n\t\\includegraphics[width=0.95\\textwidth]{decay_open.png}\n\t\\caption{\\label{fig:decay_open} A schematic look at the decay of a holographic open string, in this case a strange meson decaying into a strange meson and a light meson. \\textbf{Top}: the picture where the string tears first, then reconnects to the flavor branes. \\textbf{Bottom}: the string fluctuates up to the brane before tearing, then splits. We prefer the second picture since it assures that flavor is conserved, which is not a priori the case when the string tears at the bottom.}\n\t\t\t\\end{figure}\n\nThe probability that a fluctuation reaches the flavor brane of a quark of flavor \\(q\\) is \\cite{Peeters:2005fq}\n\\begin{equation} e^{-(m_{sep}^q)^2\/T} \\,, \\end{equation}\nwhere the quark mass \\(m_{sep}^q\\) in this context is equal to the string tension times the distance of the brane from the holographic wall.\\footnote{The fact that in this model the mass \\(m_{sep}^q\\) is proportional to \\(T\\) is especially important when considering the opposing limits \\(T \\rightarrow 0\\) and \\(T\\rightarrow\\infty\\).}\n\nSince the tear can occur at any point along the string, we expect the total probability (and hence the total decay width) to be proportional to the string length \\(L\\).\\footnote{In the holographic picture, it is the length of the horizontal segment of the string that is considered. When moving into flat space, it is the length between the two endpoint masses, and the relation \\(M \\propto TL\\) receives corrections from the endpoint masses, as already written in eq. \\ref{eq:holo_width} in the introduction.} We then expect that the total decay width behaves like\n\\begin{equation} \\Gamma \\propto Le^{-(m_{sep}^q)^2\/T}\\,, \\end{equation}\nwhere \\(m_{sep}^q\\) is the mass of the quark produced in the decay. 
In \\cite{Sonnenschein:2014jwa} we extracted some values of the quark masses as obtained from the Regge trajectories of mesons. For the light \\(u\/d\\) quarks the masses were small enough so the exponent is close to one, while the \\(s\\) quark showed a mass for which \\(m_s^2\/T \\sim 1\\). We would then say that decays where an \\(\\ensuremath{s\\bar{s}}\\) pair is created are suppressed by a factor of \\(e^{-1}\\) (before taking into account the smaller phase space).\\footnote{In an alternative description \\cite{Cotrone:2005fr,Bigazzi:2006jt}, the decay rate is power-like (rather than exponentially) suppressed with the mass of the quark-antiquark pair.}\n\n\\subsubsection{Rotating closed string}\nThe decay process of a closed string is less simple as the string has to tear twice.\\footnote{Another holographic approach to describe the decay of a glueball into two mesons, based on fields in the bulk and not closed strings was discussed in \\cite{Hashimoto:2007ze,Brunner:2015oqa,Brunner:2015yha}} A single tear in the closed would produce an open string, and it in turn will have to tear again, so at the end of the process we have two open strings. If the closed string is the glueball, then this is the process of a glueball decaying to two mesons. In the total decay width we will have then the string length squared, one factor of \\(L\\) for each time the string tears, as well as two exponents for the two pair creation events:\n\\begin{equation} \\Gamma \\propto L^2\\exp(-\\frac{m_q^2}{T})\\exp(-\\frac{m_{q^\\prime}^2}{T}) \\,. \\label{eq:decay_closed} \\end{equation}\nThis process is illustrated in figure \\ref{fig:decay_closed}.\n\nIf we want to identify a glueball from this basic prediction we have to look at the branching ratios of processes where the presence of the second exponent is significant, namely at processes where pairs of \\(s\\) and \\(\\bar{s}\\) are produced.\n\nThe glueball unlike the meson will have the possibility of decaying into either of the three options: decay into two light mesons with two pairs of light quarks created, into \\(K\\bar{K}\\) with one pair of \\(\\ensuremath{s\\bar{s}}\\) and the other light, or into \\(\\phi\\phi\\) when two pairs of \\(\\ensuremath{s\\bar{s}}\\) are created. The exponents predict the following hierarchy between the three modes:\n\\begin{equation} \\Gamma(Gb\\rightarrow \\text{2 light}) : \\Gamma(Gb\\rightarrow K\\bar{K}) : \\Gamma(Gb\\rightarrow \\phi\\phi) = 1\\,:e^{-1}\\,:e^{-2} \\,. \\end{equation}\nThis ratio will still need to be modified by phase space factors, which in any realistic scenario will be significant and will suppress the \\(\\ensuremath{s\\bar{s}}\\) modes even further. This is because the states we would measure are not too far from the \\(\\phi\\phi\\) threshold of approximately 2 GeV.\n\n\\begin{figure}[tp!] \\centering\n\t\\includegraphics[width=0.95\\textwidth]{decay_closed.png} \\\\\n\t\\includegraphics[width=0.67\\textwidth]{decay_closed_worldsheet.png}\n\t\\caption{\\label{fig:decay_closed} A schematic look at the decay of a holographic closed string to two mesons. (I) The string tears for the first time. (II) An \\(\\ensuremath{s\\bar{s}}\\) pair is produced and the string tears for the second time. (III) A second pair is created, this time of light quarks (i.e. \\(u\\bar{u}\\) or \\(d\\bar{d}\\)), and two open strings are formed. (IV) A different perspective showing more clearly the final product of this decay: a \\(K\\) meson and a \\(\\bar{K}\\). 
Note that the distances in this schematic between the flavor branes and the wall are not to scale. The bottom figure is the corresponding world sheet, of a closed string opening up at two points and forming two open strings.}
\end{figure}

\clearpage
\section{Phenomenology} \label{sec:phenomenology}
\subsection{Basic assumptions and fitting models} \label{sec:fitting_models}
We will be looking at unflavored isoscalar resonances below the \(c\bar{c}\) threshold. These states will be either mesons with the quark contents \(\frac{1}{\sqrt{2}}(u\bar{u}-d\bar{d})\) or \(s\bar{s}\), or glueballs.\footnote{Some states, such as the \(f_0(500)/\sigma\), may also be exotic multiquark states.} Correspondingly, we have several types of trajectories. For the light mesons we have the usual linear form,
\begin{equation} J + n = \ensuremath{\alpha^\prime} M^2 + a \,, \label{eq:traj_lin}\end{equation}
with \(\ensuremath{\alpha^\prime} = (2\pi T)^{-1}\). Note that whenever we use \(\ensuremath{\alpha^\prime}\) without a subscript in this paper, it refers to the slope of the linear meson trajectories.

For \(s\bar{s}\) states, we use the formula for the mass corrected trajectory (as was used in \cite{Sonnenschein:2014jwa}) defined by
\begin{equation} E = 2m_s\left(\frac{\beta\arcsin \beta+\sqrt{1-\beta^2}}{1-\beta^2}\right) \label{eq:massFitE} \,,\end{equation}
\begin{equation} J + n = a + 2\pi\ensuremath{\alpha^\prime} m_s^2\frac{\beta^2}{(1-\beta^2)^2}\left(\arcsin \beta+\beta\sqrt{1-\beta^2}\right) \,.\label{eq:traj_mass} \end{equation}
These are the trajectories of a rotating string with two masses \(m_s\) at its endpoints, with an added intercept and extrapolated \(n\) dependence. \(\beta\) is the velocity of the endpoint masses. The limit \(m_s \rightarrow 0\) (with \(\beta \rightarrow 1\)) takes us back to the linear trajectory of eq. \ref{eq:traj_lin}, with the first correction in the expression for \(J\) being proportional to \(\ensuremath{\alpha^\prime} m_s^{3/2} E^{1/2}\).

For the glueballs we assume linear trajectories of the form
\begin{equation} J + n = \ensuremath{\alpha^\prime}\!_{gb} M^2 + a \,,\end{equation}
and we take \(\ensuremath{\alpha^\prime}\!_{gb}\) to be \(\frac{1}{2}\ensuremath{\alpha^\prime}\), where \(\ensuremath{\alpha^\prime}\) is the slope of the mesons as obtained in our fits of the various meson trajectories. A typical value would be between 0.80 and 0.90 GeV\(^{-2}\).

In a later section we examine the possible application of the formula based on the holographic prediction,
\begin{equation} J + n = \ensuremath{\alpha^\prime}_{gb}E^2-2\ensuremath{\alpha^\prime}_{gb}m_0E + a\,.\end{equation}
When using this formula we will also take \(\ensuremath{\alpha^\prime}\!_{gb} = \frac{1}{2}\ensuremath{\alpha^\prime}\), ignoring the possible correction to the slope, which we assume to be small.

One assumption which we must state explicitly before continuing to the fits is that there is no mixing between light mesons, \(\ensuremath{s\bar{s}}\) mesons, and glueballs. It is an open question how strongly glueballs and mesons are mixed, with results varying greatly between different models, from almost maximal mixing to very weak (results from different models are collected in \cite{Crede:2008vw}).
In a stringy model, where glueballs are represented by closed strings and mesons by open strings, it seems more natural that they will not mix at all. We also assume that the mixing between the light quark states and the \(\ensuremath{s\bar{s}}\) is weak, placing states either on the linear trajectories of the light mesons or on the mass corrected trajectories of the \(\ensuremath{s\bar{s}}\); the same assumption was used in \cite{Sonnenschein:2014jwa} in fitting the \(\omega\) and \(\phi\) mesons. It is not obvious how a possible mixing between the two types of mesons would affect the trajectories.

\subsubsection{The two types of trajectories}
Along \emph{radial trajectories}, or trajectories in the \((n,M^2)\) plane, the states differ only by the radial\footnote{The term should not be confused with the radial coordinate of holography.} excitation number \(n\), with all other quantum numbers held constant. Since \(n\) is not actually measured, we have to assign values to the different states ourselves, and from there emerges a considerable ambiguity that we have to resolve.

Mesons belong on trajectories in the \((n,M^2)\) plane with a slope that seems to be slightly smaller than in the \((J,M^2)\) plane. The typical values are 0.80--0.85 GeV\(^{-2}\) for the former and \(0.90\) GeV\(^{-2}\) for the latter type of trajectories, as our fits in \cite{Sonnenschein:2014jwa} have shown. We implicitly assume in the following sections that for the glueballs there will be a similar difference between the slopes in the two planes. When we write that \(\ensuremath{\alpha^\prime}_{gb} = \frac12\ensuremath{\alpha^\prime}_{meson}\) we refer to \(\ensuremath{\alpha^\prime}_{meson}\) as obtained from the meson fits in the same plane, rather than taking fixed values of \(\ensuremath{\alpha^\prime}\). This also serves to restrict the number of parameters in a given fit: we always try to describe all the trajectories using a single value of \(\ensuremath{\alpha^\prime}\).

We should also note that while for the mesons \(n\) naturally takes the values \(n = 0,1,2,\ldots\) along the radial trajectories, this is not the case for glueballs. For the closed strings we noted in section \ref{sec:lightcone} that the number of left and right moving modes has to be equal, and so \(n\), which is really \(N+\tilde{N}\) in this case, should be even: \(n = 0,2,4,\ldots\).

For the \emph{orbital trajectories}, or trajectories in the \((J,M^2)\) plane, we expect to find, along the leading trajectory of the glueball, the ground state with \(J^{PC} = 0^{++}\) followed by the tensor glueball (\(2^{++}\)) as its first excited state, continuing to higher states with even \(J\) and \(PC = ++\).

The orbital trajectories of the mesons will be constructed as usual, using the known quark model relations \(P = (-1)^{L+1}\) and \(C = (-1)^{L+S}\). The relevant trajectories are then expected to have states with \(J^{PC} = 1^{--}, 2^{++}, 3^{--}, 4^{++}, \ldots\).
It is worth noting then that for mesons, a \(0^{++}\) state is an excited state with \(L = 1\) and \(S = 1\), and not part of what we usually take as the trajectory when we use states of increasing \(J\).

\subsection{The glueball candidates: The \texorpdfstring{$f_0$}{f0} and \texorpdfstring{$f_2$}{f2} resonances}
There is an abundance of isoscalar states with the quantum numbers \(J^{PC} = 0^{++}\) (the \(f_0\) resonances) or \(J^{PC} = 2^{++}\) (\(f_2\)). The Particle Data Group's (PDG) latest Review of Particle Physics \cite{PDG:2014}, which we use as the source of experimental data throughout this paper, lists 9 \(f_0\) states and 12 \(f_2\) states, with an additional 3 \(f_0\)'s and 5 \(f_2\)'s listed as unconfirmed ``further states''. These are listed in tables \ref{tab:allf0} and \ref{tab:allf2}. In the following we make a naive attempt to organize the known \(f_0\) and \(f_2\) states into trajectories, first in the plane of orbital excitations \((J,M^2)\), then in the radial excitations plane \((n,M^2)\).

The states classified as ``further states'' are generally not used unless they prove to be necessary to complete the trajectories formed by the other states. The ``further states'' will be marked with an asterisk below.\footnote{Note that the asterisk is not standard notation nor a part of the PDG given name of a state; we only use it to make clear the status of given states throughout the text.}

It is not the purpose of this paper to review all the information available on the \(f_0\) and \(f_2\) resonances, nor to present the different theories and speculations regarding their meson or glueball nature. We usually attempt to form Regge trajectories first, using just the masses and basic quantum numbers, and then verify whether the implications regarding the contents of a given state make sense in the light of additional experimental data, namely the different states' decay modes.

For a more complete picture regarding the spectrum and specifically the interpretation of the different resonances as glueballs, the reader is referred to reviews on glueball physics and their experimental status such as \cite{Klempt:2007cp,Mathieu:2008me,Crede:2008vw,Ochs:2013gi}, citations therein, and subsequent works citing these reviews.

\begin{table}[t!]
\\centering\n\t\t\\begin{tabular}{|l|l|l|l|l|} \\hline\n\t\t\\textbf{State} & \\textbf{Mass} [MeV] & \\textbf{Width} [MeV] & \\textbf{Width\/mass} & \\textbf{Decay modes} \\\\ \\hline\\hline\n\t\t\\(f_0(500)\/\\sigma\\) & 400--550 & 400--700 & 1.16\\plm0.36 & \\(\\pi\\pi\\) dominant\\\\ \\hline\n\t\t\\(f_0(980)\\) & \\(990\\pm20\\) & 40--100 & 0.07\\plm0.03 & \\(\\pi\\pi\\) dominant, \\(K\\overline{K}\\) seen \\\\ \\hline\n\t\t\\(f_0(1370)\\) & 1200--1500 & 200--500 & 0.26\\plm0.11 & \\(\\pi\\pi\\), \\(4\\pi\\), \\(\\eta\\eta\\), \\(K\\overline{K}\\) \\\\ \\hline\n\t\t\\(f_0(1500)\\) & \\(1505\\pm6\\) & \\(109\\pm7\\) & 0.072\\plm0.005 & \\(\\pi\\pi\\) \\([35\\%]\\), \\(4\\pi\\) \\([50\\%]\\), \\\\\n\t\t\t\t\t\t\t\t\t& & & & \\(\\eta\\eta\\)\/\\(\\eta\\eta\\prime\\) \\([7\\%]\\), \\(K\\overline{K}\\) \\([9\\%]\\) \\\\ \\hline\t\t\n\t\t\\(f_0(1710)\\) & \\(1720\\pm6\\) & \\(135\\pm8\\) & 0.078\\plm0.005 & \\(K\\overline{K}\\), \\(\\eta\\eta\\), \\(\\pi\\pi\\) \\\\ \\hline\n\t\t\\(f_0(2020)\\) & \\(1992\\pm16\\) & \\(442\\pm60\\) & 0.22\\plm0.03 & \\(\\rho\\pi\\pi\\), \\(\\pi\\pi\\), \\(\\rho\\rho\\), \\(\\omega\\omega\\), \\(\\eta\\eta\\) \\\\ \\hline\n\t\t\\(f_0(2100)\\) & \\(2103\\pm8\\) & \\(209\\pm19\\) & 0.10\\plm0.01 & \\\\ \\hline\n\t\t\\(f_0(2200)\\) & \\(2189\\pm13\\) & \\(238\\pm50\\) & 0.11\\plm0.02 & \\\\ \\hline\n\t\t\\(f_0(2330)\\) & \\(2325\\pm35\\) & \\(180\\pm70\\) & 0.08\\plm0.03 & \\\\ \\hline\n\t\t*\\(f_0\\)(1200--1600) & 1200--1600 & 200--1000 & 0.43\\plm0.29 & \\\\ \\hline\n\t\t*\\(f_0\\)(1800) & \\(1795\\pm25\\) & \\(95\\pm80\\) & 0.05\\plm0.04 & \\\\ \\hline\n\t\t*\\(f_0\\)(2060) & \\(\\sim2050\\) & \\(\\sim120\\) & \\(\\sim0.04\\)--\\(0.10\\) & \\\\ \\hline\n\t\t\\end{tabular} \\caption{\\label{tab:allf0} All the \\(f_0\\) states as listed by the PDG. The last few states, marked here by asterisk, are classified as ``further states''.}\n\t\\end{table}\n\n\t\t\\begin{table}[t!] \\centering\n\t\t\\begin{tabular}{|l|l|l|l|l|} \\hline\n\t\t\\textbf{State} & \\textbf{Mass} [MeV] & \\textbf{Width} [MeV] & Width\/mass & \\textbf{Decay modes} \\\\ \\hline\\hline\n\t\t\\(f_2(1270)\\) & 1275.1\\plm1.2 & 185.1\\plm2.9 & 0.15\\plm0.00 & \\(\\pi\\pi\\) \\([85\\%]\\), \\(4\\pi\\) \\([10\\%]\\), \\(KK\\), \\(\\eta\\eta\\), \\(\\gamma\\gamma\\), ... \\\\ \\hline\n\t\t\\(f_2(1430)\\) & 1453\\plm4 & 13\\plm5 & 0.009\\plm0.006 & \\(KK\\), \\(\\pi\\pi\\) \\\\ \\hline\n\t\t\\(f^\\prime_2(1525)\\) & 1525\\plm5 & 73\\plm6 & 0.048\\plm0.004 & \\(KK\\) \\([89\\%]\\), \\(\\eta\\eta\\) \\([10\\%]\\), \\(\\gamma\\gamma\\) [seen], ... \\\\ \\hline\n\t\t\\(f_2(1565)\\) & 1562\\plm13 & 134\\plm8 & 0.09\\plm0.01 & \\(\\pi\\pi\\), \\(\\rho\\rho\\), \\(4\\pi\\), \\(\\eta\\eta\\), ... \\\\ \\hline\n\t\t\\(f_2(1640)\\) & 1639\\plm6 & 99\\plm60 & 0.06\\plm0.04 & \\(\\omega\\omega\\), \\(4\\pi\\), \\(KK\\) \\\\ \\hline\n\t\t\\(f_2(1810)\\) & 1815\\plm12 & 197\\plm22 & 0.11\\plm0.01 & \\(\\pi\\pi\\), \\(\\eta\\eta\\), \\(4\\pi\\), \\(KK\\), \\(\\gamma\\gamma\\) [seen] \\\\ \\hline\n\t\t\\(f_2(1910)\\) & 1903\\plm9 & 196\\plm31 & 0.10\\plm0.02 & \\(\\pi\\pi\\), \\(KK\\), \\(\\eta\\eta\\), \\(\\omega\\omega\\), ... 
\\\\ \\hline\n\t\t\\(f_2(1950)\\) & 1944\\plm12 & 472\\plm18 & 0.24\\plm0.01 & \\(K^*K^*\\), \\(\\pi\\pi\\), \\(4\\pi\\), \\(\\eta\\eta\\), \\(KK\\), \\(\\gamma\\gamma\\), \\(pp\\) \\\\ \\hline\n\t\t\\(f_2(2010)\\) & 2011\\plm76 & 202\\plm67 & 0.10\\plm0.03 & \\(KK\\), \\(\\phi\\phi\\) \\\\ \\hline\n\t\t\\(f_2(2150)\\) & 2157\\plm12 & 152\\plm30 & 0.07\\plm0.01 & \\(\\pi\\pi\\), \\(\\eta\\eta\\), \\(KK\\), \\(f_2(1270)\\eta\\), \\(a_2\\pi\\), \\(pp\\) \\\\ \\hline\n\t\t\\(f_J(2220)\\) & 2231.1\\plm3.5 & 23\\plm8 & 0.010\\plm0.004 & \\(\\pi\\pi\\), \\(KK\\), \\(pp\\), \\(\\eta\\eta^\\prime\\) \\\\ \\hline\n\t\t\\(f_2(2300)\\) & 2297\\plm28 & 149\\plm41 & 0.07\\plm0.02 & \\(\\phi\\phi\\), \\(KK\\), \\(\\gamma\\gamma\\) [seen] \\\\ \\hline\n\t\t\\(f_2(2340)\\) & 2339\\plm55 & 319\\plm81 & 0.14\\plm0.04 & \\(\\phi\\phi\\), \\(\\eta\\eta\\) \\\\ \\hline\n\t\t*\\(f_2(1750)\\) & 1755\\plm10 & 67\\plm12 & 0.04\\plm0.01 & \\(KK\\), \\(\\gamma\\gamma\\), \\(\\pi\\pi\\), \\(\\eta\\eta\\) \\\\ \\hline\n\t\t*\\(f_2(2000)\\) & 2001\\plm10 & 312\\plm32 & 0.16\\plm0.02 & \\\\ \\hline\n\t\t*\\(f_2(2140)\\) & 2141\\plm12 & 49\\plm28 & 0.02\\plm0.01 & \\\\ \\hline\n\t\t*\\(f_2(2240)\\) & 2240\\plm15 & 241\\plm30 & 0.11\\plm0.01 & \\\\ \\hline\n\t\t*\\(f_2(2295)\\) & 2293\\plm13 & 216\\plm37 & 0.10\\plm0.02 & \\\\ \\hline\n\t\t\\end{tabular} \\caption{\\label{tab:allf2} All the \\(f_2\\) states as listed by the PDG. The last few states, marked here by asterisk, are classified as ``further states''.}\n\t\\end{table}\n\n\t\\subsection{Assignment of the \\texorpdfstring{$f_0$}{f0} into trajectories} \\label{sec:f0_fits}\n\tIn a given assignment, we generally attempt to include all the \\(f_0\\) states listed in table \\ref{tab:allf0}, sorting them into meson and, if possible, glueball trajectories.\n\t\n\tWe make an exception of the \\(f_0(500)\/\\sigma\\) resonance, which we do not use in any of the following sections. Its low mass and very large width are enough to make it stand out among the other \\(f_0\\) states listed in the table. There is no common consensus regarding the composition of the \\(\\sigma\\). We find that it does not belong on a meson Regge trajectory. If we assume it is a glueball then our model predicts the next state to be at around 2.2 GeV, and, since we assume its width to be proportional to its mass squared (as implied by eq. \\ref{eq:decay_closed}), it would have a width of at least 8 GeV. We hope, in that case, that there is no reason to make such an assumption.\\footnote{The authors of \\cite{Nebreda:2011cp} state that the interpretation of the \\(f_0(500)\/\\sigma\\) as a glueball is ``strongly disfavored'', from what they consider a model independent viewpoint. We found no references that suggest the opposite.} Therefore, we simply ``ignore'' the \\(f_0(500)\\) in the following sections.\n\n\t\\subsubsection{Assignment of all states as mesons}\n\t\tSorting the \\(f_0\\) states into trajectories with a meson-like slope leads to an assignment of the \\(f_0\\)'s into two groups of four:\n\t\\[\\mathrm{Light}:\\qquad980, 1500, 2020, 2200, \\]\n\t\\[\\ensuremath{s\\bar{s}}:\\qquad1370, 1710, 2100, 2330. \\]\nWhile this simple assignment includes all the confirmed \\(f_0\\) states (except the \\(f_0(500)\\)) on two parallel trajectories, it remains unsatisfactory. If there are no glueballs we expect the states in the lower trajectory to be (predominantly) composed of light quarks, while the higher states should be \\(\\ensuremath{s\\bar{s}}\\). 
This does not match what we know about the decay modes of the different states. For example, the \(f_0(1370)\) does not decay nearly as often to \(K\bar{K}\) as one would expect from an \(\ensuremath{s\bar{s}}\) state. In fact, this assignment of the \(f_0\)'s into meson trajectories was proposed in some other works \cite{Anisovich:2000kxa,Anisovich:2002us,Masjuan:2012gc}, and the mismatch with the decay modes was already addressed in greater detail in \cite{Bugg:2012yt}.

\subsubsection{Assignment with \texorpdfstring{$f_0$}{f0}(980) as glueball}
\begin{table}[tp!] \centering
	\includegraphics[width=0.60\textwidth]{glue_fit_980_f1n.png} \qquad
	\begin{tabular}{|c|c|c|c|c|c|c|} \hline
	\(n\) & \multicolumn{2}{|c|}{Light} & \multicolumn{2}{|c|}{\(s\bar{s}\)} & \multicolumn{2}{|c|}{Glueball} \\ \hline
	 & Exp. & Thry. & Exp. & Thry. & Exp. & Thry. \\ \hline
	0 & 1350\plm150 & 1317 & 1505\plm6 & 1505 & 990\plm20 & 990 \\
	1 & 1720\plm6 & 1738 & 1992\plm16 & 1984 & - & - \\
	2 & 2103\plm8 & 2075 & ? & 2340 & ? & 2470 \\
	3 & 2325\plm35 & 2365 & & & & \\
	4 & ? & 2620 & & & & \\ \hline
	\end{tabular} \caption{\label{tab:fit980} The results of the fit to the assignment with \(f_0(980)\) as the glueball ground state. The slope is \(\ensuremath{\alpha^\prime} = 0.788\) GeV\(^{-2}\) and the mass of the \(s\) quark \(m_s = 500\) MeV. This fit has \(\chi^2 = 3.78\). The intercepts obtained are (-1.35) for light mesons, (-0.52) for \(\ensuremath{s\bar{s}}\), and (-0.38) for glueballs. We also list the predicted mass of the next state in each trajectory.}
\end{table}

In this and the following sections we single out a state as the glueball ground state and try to build the meson trajectories without it.

First is the \(f_0(980)\). Assuming it is the glueball, the \(f_0(2330)\) is at the right mass to be its first excited (\(n = 2\)) partner. However, we find that the two meson trajectories given this assignment,
\[\mathrm{Light:}\qquad 1370, 1710, 2100,\]
\[\ensuremath{s\bar{s}}\mathrm{:}\qquad 1500, 2020,\]
also predict a state very near the mass of the \(f_0(2330)\), and according to this assignment, there should be two more \(f_0\) states near the \(f_0(2330)\), for a total of three. The \(f_0(2200)\) has to be excluded.

We again have to put some states on trajectories that are not quite right for them: the \(f_0(1710)\) has a significant branching ratio for its decay into \(K\overline{K}\), while the \(f_0(1500)\), which is taken as the head of the \(\ensuremath{s\bar{s}}\) trajectory, decays to \(K\overline{K}\) less than \(10\%\) of the time.

Note that the assignment above is the same as the one we would make if we excluded the \(f_0(980)\) on the grounds of it being an exotic (but non-glueball) state and assumed all the other states are mesons. The \(f_0(980)\) is commonly believed to be a multiquark state or a \(K\bar{K}\) ground state,\footnote{See the PDG's ``Note on scalar mesons below 2 GeV'' and references therein.} and in fact, we will find in the following sections that even if it is not a glueball, it is better to exclude it from the meson trajectories. The trajectories and masses obtained are in table \ref{tab:fit980}.

\subsubsection{Assignment with \texorpdfstring{$f_0$}{f0}(1370) as glueball}
\begin{table}[tp!]
\\centering\n\t\\includegraphics[width=0.60\\textwidth]{glue_fit_1370_f1n.png} \\qquad\n\t\t\t\t\\begin{tabular}{|c|c|c|c|c|c|c|} \\hline\n\t\t\t\t\t\\(n\\) & \\multicolumn{2}{|c|}{Light} & \\multicolumn{2}{|c|}{\\(s\\bar{s}\\)} & \\multicolumn{2}{|c|}{Glueball} \\\\ \\hline\n\t\t\t\t\t& Exp. & Thry. & Exp. & Thry. & Exp. & Thry. \\\\ \\hline\n\t\t\t0 &\t990\\plm20 & 1031 & 1720\\plm6 & 1733 & 1350\\plm150 & 1350 \\\\\n\t\t\t1 &\t1505\\plm6 & 1488 & 2189\\plm13 & 2103 & - & - \\\\\n\t\t\t2 &\t1795\\plm25 & 1835 & ? & 2400 & ? & 2530 \\\\\n\t\t\t3 &\t2103\\plm8 & 2123 & & & & \\\\\n\t\t\t4 & 2325\\plm35 & 2377 & & & & \\\\\n\t\t\t5 & ? & 2610 & & & & \\\\ \\hline\n\t\t\t\t\\end{tabular} \\caption{\\label{tab:fit1370} The results of the fit to the assignment with \\(f_0(1370)\\) as the glueball ground state. The slope is \\(\\ensuremath{\\alpha^\\prime} = 0.873\\) GeV\\(^{-2}\\) and the mass of the \\(s\\) quark \\(m_s = 500\\) MeV. This fit has \\(\\chi^2 = 10.01\\). The intercepts obtained are (-0.93) for light mesons, (-1.06) for \\(\\ensuremath{s\\bar{s}}\\), and (-0.80) for glueballs. We also list the predicted mass of the next state in each trajectory.}\n\t\t\t\\end{table}\n\nFrom here onwards the states singled out as glueballs are too high in mass for their excited states to be in the range of the \\(f_0\\) states listed in table \\ref{tab:allf0}, that is beneath 2.4 GeV.\n\nExcluding the \\(f_0(1370)\\), we have:\n\\[\\mathrm{Light:}\\qquad [980], 1500, *1800, 2100, 2330\\]\n\\[\\ensuremath{s\\bar{s}}\\mathrm{:}\\qquad 1710, 2200.\\]\nThe \\(f_0(980)\\) is put here in brackets to emphasize that it is optional. Including or excluding it can affect some of the fitting parameters but the trajectory is certainly not incomplete if we treat \\(f_0(980)\\) as a non-meson resonance and take \\(f_0(1500)\\) as the head of the trajectory.\n\nThe main issue here is that we have to use the state \\(*f_0(1800)\\) to fill in a hole in the meson trajectory, a state that is still considered unconfirmed by the PDG and whose nature is not entirely known. It was observed so far only as an enhancement in the radiative decay \\(J\/\\psi \\rightarrow \\gamma\\omega\\phi\\) and its observers at BESIII \\cite{Ablikim:2012ft} suggest it is an exotic state - a tetraquark, a hybrid, or itself a glueball. More experimental data is needed here.\n\nOther than that we have \\(f_0(2100)\\) as a light meson and \\(f_0(2200)\\) as \\(\\ensuremath{s\\bar{s}}\\). This is the option that is more consistent with the decays, as \\(f_0(2200)\\) is the one state of the two which is known to decay into \\(K\\overline{K}\\) (we again refer to the comments in \\cite{Bugg:2012yt} and references therein). However, in terms of the fit, we might do better to exchange them. It is possible that the proximity of these two resonances to each other affects their masses in such a way that our model can not predict, and this affects badly the goodness of our fit, as can be seen in table \\ref{tab:fit1370}.\n\n\\subsubsection{Assignment with \\texorpdfstring{$f_0$}{f0}(1500) as glueball}\n\n\\begin{table}[tp!] \\centering\n\t\\includegraphics[width=0.60\\textwidth]{glue_fit_1500_f1n.png} \\qquad\n\t\t\t\t\\begin{tabular}{|c|c|c|c|c|c|c|} \\hline\n\t\t\t\t\t\\(n\\) & \\multicolumn{2}{|c|}{Light} & \\multicolumn{2}{|c|}{\\(s\\bar{s}\\)} & \\multicolumn{2}{|c|}{Glueball} \\\\ \\hline\n\t\t\t\t\t& Exp. & Thry. & Exp. & Thry. & Exp. & Thry. 
\\\\ \\hline\n\t\t\t0 &\t1350\\plm150 & 1031 & 1720\\plm6 & 1723 & 1505\\plm6 & 1505 \\\\\n\t\t\t1 &\t1795\\plm25 & 1723 & 2103\\plm8 & 2097 & - & - \\\\\n\t\t\t2 &\t1992\\plm16 & 2029 & ? & 2400 & ? & 2620 \\\\\n\t\t\t3 &\t2325\\plm35 & 2295 & & & & \\\\\n\t\t\t4 & ? & 2530 & & & & \\\\ \\hline\n\t\t\t\t\\end{tabular} \\caption{\\label{tab:fit1500} The results of the fit to the assignment with \\(f_0(1500)\\) as the glueball ground state. The slope is \\(\\ensuremath{\\alpha^\\prime} = 0.870\\) GeV\\(^{-2}\\) and the mass of the \\(s\\) quark \\(m_s = 500\\) MeV. This fit has \\(\\chi^2 = 2.51\\). The intercepts obtained are (-1.58) for light mesons, (-1.03) for \\(\\ensuremath{s\\bar{s}}\\), and (-0.99) for glueballs. We also list the predicted mass of the next state in each trajectory.}\n\t\t\t\\end{table}\n\t\t\t\nTaking the \\(f_0(1500)\\) to be the glueball, then the light meson trajectory will start with \\(f_0(1370)\\), giving:\n\t\\[\\mathrm{Light:}\\qquad 1370, *1800, 2020, 2330,\\]\n\t\\[\\ensuremath{s\\bar{s}}\\mathrm{:}\\qquad 1710, 2100.\\]\nWith \\(f_0(1500)\\) identified as the glueball, this assignment includes all the states except \\(f_0(2200)\\). Incidentally though, the \\(f_0(2200)\\) would have belonged on the glueball trajectory if we had allowed odd values of \\(n\\) for the glueball. In other words, it matches the prediction for the \\(n = 1\\) state of the half slope trajectory beginning with \\(f_0(1500)\\). We could also use \\(f_0(2200)\\) as the \\(\\ensuremath{s\\bar{s}}\\) state and leave out \\(f_0(2100)\\) instead.\n\nThere is no glaring inconsistency in this assignment with the decay modes, but we are again confronted with the state \\(*f_0(1800)\\), which we need to complete the light meson trajectory. We can see from table \\ref{tab:allf0} that the \\(f_0(2020)\\) is wider than other states in its trajectory, whereas we maintain that the ratio between width and mass \\(\\Gamma\/M\\) should be roughly constant along a trajectory. In particular, the last state in the trajectory, \\(f_0(2330)\\), is much narrower than \\(f_0(2020)\\). We can assign the \\(f_0(2330)\\) to the \\(\\ensuremath{s\\bar{s}}\\) trajectory instead, but there is no other argument for that state being \\(\\ensuremath{s\\bar{s}}\\), considering it was observed only in its decays to \\(\\pi\\pi\\) and \\(\\eta\\eta\\). Perhaps the fact that \\(f_0(1370)\\) and \\(f_0(2020)\\) are both quite wide means that there should be two additional states, with masses comparable to those of \\(*f_0(1800)\\) and \\(f_0(2330)\\), that are also wide themselves, and those states will better complete this assignment. The results of the assignment are presented in table \\ref{tab:fit1500}.\n\t\t\t\n\\subsubsection{Assignment with \\texorpdfstring{$f_0$}{f0}(1710) as glueball}\n\n\\begin{table}[tp!] \\centering\n\t\\includegraphics[width=0.60\\textwidth]{glue_fit_1710_f1n.png} \\qquad\n\t\t\t\t\\begin{tabular}{|c|c|c|c|c|c|c|} \\hline\n\t\t\t\t\t\\(n\\) & \\multicolumn{2}{|c|}{Light} & \\multicolumn{2}{|c|}{\\(s\\bar{s}\\)} & \\multicolumn{2}{|c|}{Glueball} \\\\ \\hline\n\t\t\t\t\t& Exp. & Thry. & Exp. & Thry. & Exp. & Thry. \\\\ \\hline\n\t\t\t0&\t1350\\plm150 & 1378 & 1505\\plm6 & 1506 & 1720\\plm6 & 1720 \\\\\n\t\t\t1&\t1795\\plm25 & 1777 & 1992\\plm16 & 1977 & - & - \\\\\n\t\t\t2&\t2103\\plm8 & 2102 & ? & 2330 & ? & 2830 \\\\\n\t\t\t3&\t2325\\plm35 & 2383 & & & & \\\\\n\t\t\t4&\t? 
& 2640 & & & & \\ \hline
	\end{tabular} \caption{\label{tab:fit1710} The results of the fit to the assignment with \(f_0(1710)\) as the glueball ground state. The slope is \(\ensuremath{\alpha^\prime} = 0.793\) GeV\(^{-2}\) and the mass of the \(s\) quark \(m_s = 500\) MeV. This fit has \(\chi^2 = 0.71\). The intercepts obtained are (-1.51) for light mesons, (-0.53) for \(\ensuremath{s\bar{s}}\), and (-1.17) for glueballs.}
\end{table}

Excluding the \(f_0(1710)\) from the meson trajectories we can make an assignment that includes all states except the \(f_0(500)\) and \(f_0(980)\):
\[\mathrm{Light:}\qquad 1370, *1800, 2100, 2330\]
\[\ensuremath{s\bar{s}}\mathrm{:}\qquad 1500, 2020, 2200\]
\[\mathrm{Glue:}\qquad 1710\]
The disadvantage here is that we again have to use \(f_0(1500)\) as the head of the \(\ensuremath{s\bar{s}}\) trajectory despite knowing that its main decay modes are to \(4\pi\) and \(\pi\pi\), as well as the fact that we, once again, need the \(*f_0(1800)\) resonance to fill in a hole at \(n = 1\) in the resulting light meson trajectory. This assignment can be seen in table \ref{tab:fit1710}.

\subsubsection{Conclusions from the \texorpdfstring{$f_0$}{f0} fits}
It is not hard to see that the \(f_0\) resonances listed in the PDG's Review of Particle Physics all fit quite neatly on two parallel trajectories with a slope similar to that of other mesons. However, upon closer inspection, these trajectories (one for light quark mesons and one for \(\ensuremath{s\bar{s}}\)) are not consistent with experimental data, as detailed above. For us, the naive assignment is also inconsistent with what we have observed for the other \(\ensuremath{s\bar{s}}\) trajectories in \cite{Sonnenschein:2014jwa}, namely that the \(\ensuremath{s\bar{s}}\) trajectories are not purely linear and have to be corrected by adding a non-zero string endpoint mass for the \(s\) quark, usually of at least 200 MeV.

The other novelty that we hoped to introduce, the half slope trajectories of the glueball, proved to be impractical given the current experimental data, which only reach masses below \(2.4\) GeV for the relevant resonances.

One conclusion that can be drawn is that the state \(f_0(980)\) can be comfortably excluded from any of the meson trajectories, which is consistent with its being the \(K\overline{K}\) ground state.

The unconfirmed state \(*f_0(1800)\) turns up in the assignments with glueballs in them, usually to fill in a hole in the light meson trajectory. If the \(*f_0(1800)\) is not in itself a meson as mentioned before, then we would hope that there is another yet unobserved \(f_0\) state with a very similar mass, say 1800--1850 MeV.

There is no single assignment that seems to be the correct one, although the two assignments singling out either \(f_0(1370)\) or \(f_0(1500)\) as the glueball ground state seem more consistent than the other possibilities. The best way to determine which is better is, as always, by finding more experimental data. We list our predictions for higher resonances based on these assignments in section \ref{sec:predictions} of the appendix.

\subsection{Assignment of the \texorpdfstring{$f_2$}{f2} into trajectories} \label{sec:f2_fits}
We now turn to the \(f_2\) tensor resonances, which were listed at the beginning of the section in table \ref{tab:allf2}.
We will first examine trajectories in the \((J,M^2)\) plane, then move on to the attempt to assign all the \(f_2\) states to trajectories in the \((n,M^2)\) plane.
\subsubsection{Trajectories in the \texorpdfstring{$(J,M^2)$}{(J,M2)} plane} \label{sec:f2_orbital}
The only way to get a linear trajectory connecting a \(0^{++}\) and a \(2^{++}\) state with the slope \(\ensuremath{\alpha^\prime}\!_{gb} = \frac{1}{2}\ensuremath{\alpha^\prime}_{meson}\) is to take the lightest \(f_0\) glueball candidate and the heaviest known \(f_2\). Then we have the pair \(f_0(980)\) and \(f_2(2340)\), and the straight line between them has a slope of 0.45 GeV\(^{-2}\). There is no \(J = 1\) resonance near the line stretched between them. However, this example mostly serves to demonstrate once again the difficulty of forming the glueball trajectories in practice. The glueball states are predicted to be fewer and farther apart than the mesons in their respective Regge trajectories.

Therefore, it is a sounder strategy to look again for the meson trajectories, see which states are left out of them, and check for overall consistency of the results. In forming the meson trajectories, we know that we can expect the \(\omega\) mesons with \(J^{PC} = 1^{--}\) to be part of the trajectories, in addition to some states at higher spin, which will allow us to form trajectories with more points.

Moving on from \(J^{PC} = 0^{++}\) and \(2^{++}\) to higher spin states, we see two \(J^{PC} = 4^{++}\) states that could belong to a trajectory: \(f_4(2050)\) and \(f_4(2300)\). The first of those, \(f_4(2050)\), belongs to a well known meson trajectory in the \((J,M^2)\) plane, following \(\omega(782)\), \(f_2(1270)\), and \(\omega_3(1670)\). The slope of the fit to that trajectory is \(\ensuremath{\alpha^\prime} = 0.91\) GeV\(^{-2}\), and we can even include in it states of spin 5 and 6: \(*\omega_5(2250)\) and \(f_6(2510)\).

The mass of the \(f_4(2300)\) is too low for it to belong to a linear trajectory with a glueball slope. Taking it to be a meson, one can put it on a linear trajectory following \(\omega(1420)\) and \(f_2(1810)\). To complete this trajectory we need a \(J^{PC} = 3^{--}\) state with a mass near 2070 MeV. The PDG lists one unconfirmed state, \(X(2080)\), with the quantum numbers \(I(J^{PC}) = ?(3^{-?})\), which might be a match.

We also find another meson trajectory involving the second excited \(\omega\) meson, the \(\omega(1650)\). This trajectory would be composed of \(\omega(1650)\), \(*\omega_3(2255)\), and one of \(f_2(1950)\) or \(f_2(2010)\) between them.

We also have the meson trajectories of the \(\ensuremath{s\bar{s}}\). The first joins the ground state \(\phi(1020)\) with \(f_2^\prime(1525)\) and \(\phi_3(1850)\). We can form a daughter trajectory starting with the \(\phi(1680)\), going on to include \(f_2(1950)\) or \(f_2(2010)\), as well as the unconfirmed \(*\omega_3(2285)\). This trajectory is nearly identical to that of the \(\omega(1650)\) of the last paragraph.

\begin{figure}[t!] \centering
	\includegraphics[width=0.76\textwidth]{traj_j_mes.png}
	\caption{\label{fig:traj_j_mes} The trajectory of the \(\omega\) (blue) and \(\phi\) (red) mesons in the \((J,M^2)\) plane and their daughter trajectories.
The fits have the common slope \(\ensuremath{\alpha^\prime} = 0.903\) GeV\(^{-2}\), and the \(\ensuremath{s\bar{s}}\) trajectories are fitted using a mass of \(m_s = 250\) MeV for the \(s\) quark. The states forming the trajectories are as follows: With \(J^{PC} = 1^{--}\), \(\omega(782)\), \(\phi(1020)\), \(\omega(1420)\), \(\omega(1650)\), \(\phi(1680)\). With \(J^{PC} = 2^{++}\), \(f_2(1270)\), \(f_2^\prime(1525)\), \(f_2(1810)\), \(f_2(1950)\), and \(f_2(2010)\). With \(J^{PC} = 3^{--}\), \(\omega_3(1670)\), \(\phi_3(1850)\), \(*\omega_3(2255)\), and \(*\omega_3(2285)\). And with \(J^{PC} = 4^{++}\), \(f_4(2050)\) and \(f_4(2300)\). We also plot at \(J^{PC} = 0^{++}\) the \(f_0(980)\) and \(f_0(1370)\), which are found to lie near the fitted trajectories, but were not themselves included in the fits, as they are not theoretically expected to belong to them.}
\end{figure}

The meson trajectories described above are plotted in figure \ref{fig:traj_j_mes}.

To summarize, we have found several meson trajectories in the \((J,M^2)\) plane with at least three states each. As shown in the figure, these trajectories pass quite closely to the states \(f_0(980)\) and \(f_0(1370)\), but as meson trajectories they should begin with a \(J^{PC} = 1^{--}\) state (with orbital angular momentum \(L = 0\) and spin \(S = 1\)). A \(0^{++}\) meson state could only be included as an excited state with \(L = 1\) and \(S = 1\), but we found that for each trajectory we can use an existing \(f_2\) state in that place. The \(f_2\) states classified in this assignment as mesons are \(f_2(1270)\), \(f_2^\prime(1525)\), \(f_2(1810)\), \(f_2(1950)\), and \(f_2(2010)\). These can perhaps be partnered with existing \(f_0\) states as members of triplets of states with \(J = 0, 1, 2\) and \(PC = ++\) split by spin-orbit interactions. We do not know the exact magnitude of the splitting. There are some \(f_0\) states close (within 20--100 MeV) to the \(f_2\) states mentioned above, and the PDG lists some \(f_1\) (\(1^{++}\)) resonances that may be useful, but we do not find any trio of states with similar properties and masses that could be said to belong to such a spin-orbit triplet. Therefore, we limit our conclusions from these Regge trajectories to the \(f_2\) states which we found we could directly place on them.

\subsubsection{\texorpdfstring{Trajectories in the $(n,M^2)$}{(n,M2)} plane} \label{sec:f2_radial}
In sorting the \(f_2\) resonances into trajectories, the situation is somewhat simpler than with the \(f_0\) scalars, as here we have two states that belong on meson trajectories in the \((J,M^2)\) plane, as we found in previous sections. In particular, the \(f_2(1270)\) belongs to the trajectory of the \(\omega\) meson, and the \(f^\prime_2(1525)\) is an \(s\bar{s}\) and sits on the \(\phi\) trajectory. Their decay modes and other properties are also well known and there is no real doubt about their nature.

The linear trajectory beginning with the \(f_2(1270)\) meson includes the states \(f_2(1640)\) and \(f_2(1950)\). We can include one of the further states, \(*f_2(2240)\), as the fourth point in the trajectory.
We can also use the \(f_J(2220)\) in place of the \(*f_2(2240)\), but it seems an unnatural choice because of the widths of the states involved (the \(f_J(2220)\) is much narrower than the others).

The projected trajectory of the \(f^\prime_2(1525)\), using the same slope as the \(f_2(1270)\) trajectory and adding mass corrections for the \(s\) quark, includes the \(f_2(2010)\) and the \(f_2(2300)\).

This leaves out the states \(f_2(1430)\), \(f_2(1565)\), \(f_2(1810)\), \(f_2(1910)\), \(f_J(2220)\), and \(f_2(2340)\), as well as the five resonances classified as further states.

The next state we look at is \(f_2(1810)\), classified as a light meson in the \((J,M^2)\) fits of the previous section. Its mass is not right for it to belong to the trajectory of the \(f_2(1270)\), so we try to use it as the head of another light meson trajectory. If it belongs to a trajectory parallel to that of the \(f_2(1270)\) then the state that follows it is \(f_2(2150)\). The next state could be \(f_2(2340)\), except that it has been observed to decay to \(\phi\phi\), making it very unlikely to be a light quark meson.

The state \(f_2(1430)\) is intriguing, in part because of the very small width reported by most (but not all) experiments cited in the PDG, and in part because it is located in mass between the two lightest mesons of \(J^{PC} = 2^{++}\), that is, between \(f_2(1270)\) (light) and \(f_2^\prime(1525)\) (\(\ensuremath{s\bar{s}}\)). If we had to assign the \(f_2(1430)\) to a Regge trajectory, then it is best placed preceding the \(f_2(1810)\) and \(f_2(2150)\) in the linear meson trajectory discussed in the last paragraph.

The \(f_J(2220)\), previously known as \(\xi(2230)\), is also a narrow state. It is currently listed by the PDG as having either \(J^{PC} = 2^{++}\) or \(4^{++}\), but some of the experiments cited by the PDG tend towards \(J = 2\). It has been considered a candidate for the tensor glueball \cite{Bai:1996wm,Crede:2008vw}. It can be assigned to a linear meson trajectory, as already discussed, but it is clear already from its narrow width that it is not the best choice, even before addressing other experimental findings regarding it (for example, the fact that it was not observed in \(\gamma\gamma\) scattering \cite{Benslama:2002pa} and the resulting bounds on its decay into photons).

The \(f_2(1565)\) is also left out, but it could be paired with \(f_2(1910)\) to form another linear meson trajectory. To continue it we need another state with a mass of around 2200 MeV.

To summarize, we may organize the \(f_2\) resonances by picking first the resonances for the trajectories of the two known mesons,
\[\mathrm{Light:}\qquad 1270, 1640, 1950\]
\[\ensuremath{s\bar{s}}\mathrm{:}\qquad 1525, 2010, 2300 \]
and then finding the trajectories starting with the lightest states not yet included. This gives us another meson trajectory using the states
\[\mathrm{Light:}\qquad 1810, 2150\]

\begin{figure}[t!] \centering
	\includegraphics[width=0.76\textwidth]{traj_n_f2.png}
	\caption{\label{fig:traj_n_f2} Some radial trajectories of the \(f_2\), with blue lines for light mesons and red for \(\ensuremath{s\bar{s}}\). The fits have the common slope \(\ensuremath{\alpha^\prime} = 0.846\) GeV\(^{-2}\), and the \(\ensuremath{s\bar{s}}\) trajectories are fitted using a mass of \(m_s = 400\) MeV for the \(s\) quark.
The states forming the trajectories are as follows: The first light meson trajectory with \\(f_2(1270)\\), \\(f_2(1640)\\), and \\(f_2(1950)\\), and followed by the unconfirmed state \\(*f_2(2240)\\) which was not used in the fit. The \\(\\ensuremath{s\\bar{s}}\\) trajectory with \\(f_2^\\prime(1525)\\), \\(f_2(2010)\\), and \\(f_2(2300)\\). And the second light meson trajectory with \\(f_2(1810)\\) and \\(f_2(2150)\\).}\n\t\t\t\\end{figure}\n\nThe trajectories formed by these eight states are drawn in figure \\ref{fig:traj_n_f2}.\n\n\\subsubsection{Conclusions from the \\texorpdfstring{$f_2$}{f2} fits}\nThere are some simplifications in assigning the \\(f_2\\) to radial trajectories compared to assigning the \\(f_0\\) resonances, as we can look at both orbital and radial trajectories and it is easier to classify some states as mesons. The radial trajectories described in section \\ref{sec:f2_radial} are consistent with the orbital trajectories of section \\ref{sec:f2_orbital}: states classified as mesons in the latter are also classified as mesons in the former, and with the same quark contents.\n\nIn the previous sections we for the most part avoided using the five \\(f_2\\) states classified in the PDG as ``further states'', although some of them could have played a role in the radial trajectory assignments. Counting confirmed and unconfirmed states alike, the PDG lists a total of 11 states with masses between 1900 and 2340 MeV. Since the different states have been observed in different processes, and hence have different decay modes, it would be useful to clarify experimentally the status of all these states and then reattempt the assignments of the then confirmed states into trajectories. We also note that the fact that there are many resonances with identical quantum numbers near to each other can interfere with the naive mass predictions of the Regge trajectories. In any case, and like for the \\(f_0\\), further experimental data on resonances between 2.3--3.0 GeV will likely prove useful.\n\nWe have not addressed yet the issue of the decay modes of the different states and how consistent they are with the assignments of the previous sections. The \\(f_2(1270)\\) and \\(f_2^\\prime(1525)\\) are well established as a light quark meson and an \\(\\ensuremath{s\\bar{s}}\\) respectively, and they were the basis from which we built the different trajectories. As for their excited states, the data on their branching ratios cited by the PDG is very partial for higher states. However, we find an interesting case when looking at the trio of states \\(f_2(1910)\\), \\(f_2(1950)\\), and \\(f_2(2010)\\). We have classified \\(f_2(1950)\\) as a light meson and \\(f_2(2010)\\) as \\(\\ensuremath{s\\bar{s}}\\), which is what fits best with the Regge trajectories. Another option would be to use \\(f_2(1910)\\) as a light meson and \\(f_2(1950)\\) as \\(\\ensuremath{s\\bar{s}}\\), which is still consistent. Then the \\(f_2(2010)\\), which was observed to decay to \\(\\phi\\phi\\) (despite the very small phase space), could perhaps be classified as a \\(\\phi\\phi\\) bound state, in an analogous fashion to the \\(f_0(980)\\).\n\nThe most interesting states after that remain the \\(f_2(1430)\\) and \\(f_J(2220)\\). 
While the latter has been considered a candidate for the glueball and has been the object of some research (see papers citing \\cite{Bai:1996wm}), the former is rarely addressed, despite its curious placement in the spectrum between the lightest \\(2^{++}\\) light and \\(\\ensuremath{s\\bar{s}}\\) mesons. It seems a worthwhile experimental question to clarify its status - and its quantum numbers, as the most recent observation \\cite{Vladimirsky:2001ek} can not confirm whether it is a \\(0^{++}\\) or \\(2^{++}\\) state, a fact which led to at least one suggestion \\cite{Vijande:2004he} that the \\(f_2(1430)\\) could be itself the scalar glueball.\n\n\\subsection{Assignments with non-linear trajectories for the glueball} \\label{sec:holo_fits}\nIn this section we check the applicability of a glueball trajectory of the form\n\\begin{equation} J = \\ensuremath{\\alpha^\\prime}_{gb}E^2-2\\ensuremath{\\alpha^\\prime}_{gb}m_0E + a \\,, \\label{eq:holotraj} \\end{equation}\nwhich is the general form we expect from a semi-classical calculation of the corrections to the trajectory in a curved background, and as put forward in section \\ref{sec:holo_string}. The novelty here is a term linear in the mass \\(E\\), which makes the Regge trajectory \\(\\alpha(t)\\) non-linear in \\(t = E^2\\). The constant \\(m_0\\) can be either negative or positive, depending on the specific holographic background, and a priori we have to examine both possibilities. It was also noted in section \\ref{sec:holo_string} that there may be a correction to the slope, but we assume it is small compared to the uncertainty in the phenomenological value of the Regge slope, and we use\n\\begin{equation} \\ensuremath{\\alpha^\\prime}_{gb} = \\frac{1}{2}\\ensuremath{\\alpha^\\prime} \\end{equation}\nthroughout this section. We also substitute \\(J \\rightarrow J + n\\) as usual to apply the formula to radial trajectories.\n\nWith the \\(m_0\\) term we can write\n\\begin{equation} \\frac{\\partial J}{\\partial E^2} = \\frac{\\ensuremath{\\alpha^\\prime}}{2}\\left(1-\\frac{m_0}{E}\\right) \\,. \\label{eq:eff_holo_slope} \\end{equation}\nWe can look at this as an effective slope, and it is the easiest way to see that when \\(m_0\\) is negative, the effective slope is higher than that of the linear trajectory, and vice versa.\n\n\\subsubsection{Fits using the holographic formula}\nUsing the simple linear formula we could not, in most cases, find glueball trajectories among the observed \\(f_0\\) and \\(f_2\\) states. This is because the first excited state is expected to be too high in mass and outside the range of the states measured in experiment.\n\nAdding an appropriate \\(m_0\\) term can modify this behavior enough for us to find some pairs of states on what we would then call glueball trajectories, and by appropriate we mean a negative value that will make the effective slope of eq. \\ref{eq:eff_holo_slope} higher. The problem is then that we have only pairs of states, with two fitting parameters: \\(m_0\\) and \\(a\\) (and \\(\\ensuremath{\\alpha^\\prime}\\) which is fixed by the meson trajectory fits). We form these pairs by picking a state left out from the meson trajectories proposed in sections \\ref{sec:f0_fits} and \\ref{sec:f2_fits} and assigning it as the excited partner of the appropriate glueball candidate.\n\nThere is a solution for \\(m_0\\) and \\(a\\) for any pair of states which we can take, and the question then becomes whether there is a reason to prefer some values of the two parameters over others. 
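To make the procedure concrete, the sketch below solves the two linear equations for \(m_0\) and \(a\), given a candidate glueball ground state (with \(J+n = 0\)) and an assumed \(n = 2\) excited partner, taking \(\ensuremath{\alpha^\prime}_{gb} = \frac{1}{2}\ensuremath{\alpha^\prime}\) as above (masses in GeV). Applied, for instance, to the \(f_0(980)\) and \(f_0(2200)\) pair with \(\ensuremath{\alpha^\prime} = 0.79\) GeV\(^{-2}\), it gives values very close to those quoted in the first row of table \ref{tab:holo_fits}.
\begin{verbatim}
def pair_solution(E0, E1, alpha_meson, n_excited=2):
    # Solve J+n = a_gb*E^2 - 2*a_gb*m0*E + a for (m0, a), with a_gb = alpha_meson/2,
    # using the ground state (J+n = 0 at mass E0) and its excited partner
    # (J+n = n_excited at mass E1).
    a_gb = alpha_meson / 2.0
    m0 = (a_gb * (E1**2 - E0**2) - n_excited) / (2 * a_gb * (E1 - E0))
    a = 2 * a_gb * m0 * E0 - a_gb * E0**2
    return m0, a

print(pair_solution(0.990, 2.189, 0.79))   # approximately (-0.52, -0.80)
\end{verbatim}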
We list some other values obtained for \\(m_0\\) and \\(a\\) in table \\ref{tab:holo_fits}.\n\n\\begin{table}[t!] \\centering\n\t\\begin{tabular}{|c|c|c|c|c|} \\hline\n\tGround state & Excited state & \\(\\ensuremath{\\alpha^\\prime}\\) [GeV\\(^{-2}\\)] & $m_0$ [GeV] & \\(a\\) \\\\ \\hline\n\t\n\t\\(f_0(980)\\) & \\(f_0(2200)\\) & 0.79 & -0.52 & -0.78 \\\\\n\t\n\t\\(f_0(1370)\\) & \\(f_0(2020)\\) & 0.87 & -2.00 & -3.21 \\\\\n\t\n\t\\(f_0(1500)\\) & \\(f_0(2200)\\) & 0.87 & -1.51 & -2.97 \\\\\n\t\n\t\\(f_2(1430)\\) & \\(f_J(2220)\\) & 0.81 & -1.33 & -2.42 \\\\ \\hline\n\t\n\t\\end{tabular} \\caption{\\label{tab:holo_fits} Values obtained for the parameters \\(m_0\\) and \\(a\\) for some of the possible pairs of states on glueball trajectories. The states selected as the excited state of the glueball are those not included in the meson trajectories of the assignments of sections \\ref{sec:f0_fits} and \\ref{sec:f2_fits}, and the slopes are selected based on the results of the meson fits presented in the same sections.}\n\\end{table}\n\n\\subsubsection{Using the holographic formula with a constrained intercept}\n\\cite{Bigazzi:2004ze} implies that a universal form of the first semi-classical correction of the Regge trajectory of the rotating folded string is\n\\begin{equation} J + n = \\frac{1}{2}\\ensuremath{\\alpha^\\prime}(E-m_0)^2 \\,, \\end{equation}\nup to further (model dependent) modifications of the slope, which in the cases calculated are small. In other words, the intercept obtained then from the semi-classical calculation is\n\\begin{equation} a = \\frac{1}{2}\\ensuremath{\\alpha^\\prime} m_0^2 \\,. \\end{equation}\n\nThe intercept is always positive in this scenario. If we want to include the ground state with \\(J = n = 0\\) the only way to do it is to take a positive \\(m_0\\), specifically we should take \\(m_0 = M_{gs}\\), where \\(M_{gs}\\) is the mass of the ground state. There is no problem with the resulting expression theoretically, but it is not very useful in analyzing the observed spectrum. The trouble is that when using this expression the energy rises much too fast with \\(J\\) and we end up very quickly with masses outside the range of the glueball candidates. If we take, for instance, \\(f_0(980)\\) as the ground state then the first excited state is expected to have a mass of around 2500 MeV, and the heavier candidates naturally predict even heavier masses for the excited states.\n\nAnother way to use eq. \\ref{eq:holotraj} is to begin the trajectory with a \\(J = 2\\) state. Then \\(m_0\\) can be either positive or negative. We can then proceed as usual: we pick the head of a trajectory and see if there are any matches for its predicted excited states. We can see, for example, that we can again pair \\(f_2(1430)\\) with \\(f_J(2220)\\). Constraining \\(\\ensuremath{\\alpha^\\prime}\\) to be \\(0.90\\) GeV\\(^{-2}\\), the best fit has \\(m_0 = -0.72\\) GeV, and the masses calculated are 1390 and 2260 MeV for the experimental values of \\(1453\\pm4\\) and \\(2231\\pm4\\) MeV.\n\n\\subsection{Glueball Regge trajectories in lattice QCD} \\label{sec:lattice}\nThe glueball spectrum has been studied extensively in lattice QCD. Some works have compared results with different stringy models, e.g. \\cite{Athenodorou:2010cs,Bochicchio:2013aha,Bochicchio:2013sra,Caselle:2015tza}. However, the question whether or not the glueballs form linear Regge trajectories is not often addressed, due to the difficulty involved in computing highly excited states. 
When linear Regge trajectories are discussed, it is often in the context of identifying the glueball with the pomeron and searching for states along the given pomeron trajectory,
\begin{equation} \alpha(t) = \ensuremath{\alpha^\prime}_p t + 1 + \epsilon \,, \end{equation}
where the slope and the intercept are known from experiment to be \(\ensuremath{\alpha^\prime}_p = 0.25\) GeV\(^{-2}\) and \(1 + \epsilon \approx 1.08\) \cite{Donnachie:1984xq}.

The most extensive study of glueball Regge trajectories is that of Meyer and Teper \cite{Meyer:2004jc,Meyer:2004gx}, where a relatively large number of higher mass states is computed, including both high spin states and some highly excited states at low spin.

We quote in table \ref{tab:lat_masses} some lattice results for glueball masses from different calculations. The results are for \(SU(3)\) and \(D = 4\); more results are collected in \cite{Gregory:2012hu}. Most of these give only the masses of the lowest glueball states for different quantum numbers. These are low spin states with different combinations of parity and charge parity. While a spectrum is obtained, most states are isolated, in the sense that they cannot be grouped with other states to form Regge trajectories.

In table \ref{tab:lat_masses} we list the lattice results for the \(0^{++}\) ground state, the lowest \(2^{++}\) state, and the first excited \(0^{++}\) glueball, as well as for the \(0^{-+}\) and \(2^{-+}\). We may draw straight lines between the first spin-0 state and its excited partner to calculate the slope.

One thing we see at this first glance at the spectrum is that the spin-2 state is, in most studies, lower than we would expect it to be based on the Regge slope assumption.\footnote{The fact that the tensor glueball is close to the scalar seems to have been long known in lattice QCD, see e.g. \cite{Albanese:1987ds}.} The second spin-0 state, on the other hand, is about where we want it to be, assuming a closed string model, where the slope is half that of the meson trajectories, and the first excited state has the excitation number \(n = 2\) (for one left moving and one right moving mode excited). In the next section we fit some trajectories with more than two states, based on the results in \cite{Meyer:2004gx}.

\begin{table}[t!]
\centering
	\begin{tabular}{|c|c|c|c|c|c|} \hline
	& Meyer \cite{Meyer:2004gx} & M\&P \cite{Morningstar:1999rf} & Chen \cite{Chen:2005mg} & Bali \cite{Bali:1993fb} & Gregory \cite{Gregory:2012hu} \\ \hline\hline
	\(0^{++}\) & 1475\plm30\plm65 & 1730\plm50\plm80 & 1710\plm50\plm80 & 1550\plm50\plm80 & 1795\plm60 \\ \hline
	\(2^{++}\) & 2150\plm30\plm100 & 2400\plm25\plm120 & 2390\plm30\plm120 & 2270\plm100\plm110 & 2620\plm50 \\ \hline
	\(0^{++}\) & 2755\plm30\plm120 & 2670\plm180\plm130 & - & - & 3760\plm240 \\ \hline\hline
	\(0^{-+}\) & 2250\plm60\plm100 & 2590\plm40\plm130 & 2560\plm35\plm120 & 2330\plm260\plm120 & - \\ \hline
	\(2^{-+}\) & 2780\plm50\plm130 & 3100\plm30\plm150 & 3040\plm40\plm150 & 3010\plm130\plm150 & 3460\plm320 \\ \hline
	\(0^{-+}\) & 3370\plm150\plm150 & 3640\plm60\plm180 & - & - & 4490\plm590 \\ \hline\hline
	\(\ensuremath{\alpha^\prime}\!_{++}\) (in $J$) & 0.82\plm0.17 & 0.72\plm0.18 & 0.72\plm0.17 & 0.73\plm0.19 & 0.55\plm0.05 \\ \hline
	\(\ensuremath{\alpha^\prime}\!_{++}\) (in $n$) & 0.37\plm0.05 & 0.48\plm0.14 & - & - & 0.18\plm0.03 \\ \hline\hline
	\(\ensuremath{\alpha^\prime}\!_{-+}\) (in $J$) & 0.75\plm0.26 & 0.69\plm0.28 & 0.74\plm0.32 & 0.55\plm0.27 & - \\ \hline
	\(\ensuremath{\alpha^\prime}\!_{-+}\) (in $n$) & 0.32\plm0.08 & 0.31\plm0.07 & - & - & - \\ \hline
	\end{tabular}
	\caption{\label{tab:lat_masses} Lattice predictions from different studies for glueball masses [MeV] and the resulting Regge slopes [GeV\(^{-2}\)], in the \((J,M^2)\) plane or in the \((n,M^2)\) plane. The slope is calculated assuming the first excited state has \(n = 2\).}
	\end{table}

\subsubsection{Regge trajectory fits to results from the lattice}
Results in lattice computations are given as the dimensionless ratio between the mass of a state and the square root of the string tension: \(M/\sqrt{T}\). To get the masses \(M\) in MeV one has to fix the scale by setting the value of \(T\). This introduces an additional uncertainty in the obtained values. In table \ref{tab:lat_masses} we listed the masses in MeV and calculated the dimensionful slope, but for the purpose of identifying Regge trajectories we can work directly with dimensionless quantities, avoiding this extra error. Thus, for the following, our fitting model will be
\begin{equation} \frac{M^2}{T} = \frac{2\pi}{q} (N + a) \,. \end{equation}
In this notation the ratio \(q\), which is the primary fitting parameter (in addition to the intercept \(a\)), is expected to be 1 for open strings and \(1/2\) for closed strings. It is referred to below as the ``relative slope''. \(N\) will be either the spin \(J\) or the radial excitation number \(n\).

\paragraph{Trajectories in the \texorpdfstring{$(J,M^2)$}{(J,M2)} plane:} As mentioned above, \cite{Meyer:2004gx} offers the largest number of high spin states. The analysis there observes that the first \(2^{++}\) and \(4^{++}\) states can be connected by a line with the relative slope
\begin{equation} q = 0.28\pm0.02, \end{equation}
which, when taking a typical value of the string tension \(\sqrt{T} = 430\) MeV (\(\ensuremath{\alpha^\prime} = 0.84\) GeV\(^{-2}\)), gives a slope virtually identical to that expected for the pomeron, \(0.25\) GeV\(^{-2}\). This trajectory can be continued with the calculated \(6^{++}\) state.
A fit to the three-state trajectory gives the result\n\begin{equation} q = 0.29\pm0.15.\end{equation}\nThis trajectory leaves out the \(0^{++}\) ground state. In \cite{Meyer:2004gx} the lowest \(0^{++}\) is paired with the second, excited, \(2^{++}\) state, giving a trajectory with\n\begin{equation} q = 0.40\pm0.04.\end{equation}\nA possibility not explored in \cite{Meyer:2004gx} is to continue this trajectory, built on the first \(0^{++}\) and the excited \(2^{++}\), with the \(4^{++}\) and \(6^{++}\) states following. Then we obtain\n\begin{equation} q = 0.43\pm0.03.\end{equation}\nThis second option not only includes more points, it is also a better fit in terms of \(\chi^2\) per degree of freedom (0.37 instead of 1.24). This is a nice result from the closed string perspective, but the lowest \(2^{++}\) state is then left out. There is also a \(J = 3\) state in the \(PC = ++\) sector that lies very close to the trajectory of the \(0^{++}\) ground state. In our model it is not expected to belong to the trajectory, so that state is also left out of the fit. The trajectories of the \(PC = ++\) states are shown in the left panel of figure \ref{fig:lat_Meyer}.\n\n\begin{figure}[tp!] \centering\n\t\includegraphics[width=0.49\textwidth]{lattice_Meyer_pp.png}\n\t\includegraphics[width=0.49\textwidth]{lattice_Meyer_0pp.png}\n\t\caption{\label{fig:lat_Meyer} The trajectories of the \(PC = ++\) glueball states found in lattice calculations in \cite{Meyer:2004gx}. \textbf{Left:} Trajectories in the \((J,M^2)\) plane. The full line is the fit to a proposed trajectory using four states with \(J = 0, 2, 4, 6\), where the relative slope is \(0.43\) and the lightest tensor is excluded (\(\chi^2 = 0.37\)). The dotted line is the leading trajectory proposed in the analysis in \cite{Meyer:2004gx}, with a pomeron-like slope. It includes the \(J = 2, 4,\) and \(6\) states (\(\chi^2 = 1.24\)); in this fit the scalar is excluded. Also plotted is the \(3^{++}\) state, which was not used in the fit. \textbf{Right:} Trajectory of four states with \(J^{PC} = 0^{++}\). The relative slope is exactly \(0.50\) (\(\chi^2 = 1.48\)).}\n\end{figure}\n\n\paragraph{Trajectories in the \texorpdfstring{$(n,M^2)$}{(n,M2)} plane:}\nIn trajectories in the \((n,M^2)\) plane we assume \(n\) takes only even values, i.e. \(n = 0,2,4,\ldots\), as it does for the closed string. The results when taking \(n = 0,1,2,\ldots\) will be half those listed.\n\nIn this section, we again have to rely mostly on \cite{Meyer:2004gx}, as it offers calculations of several excited states with the same \(J^{PC}\). Most notably we see there four states listed with \(J^{PC} = 0^{++}\). We observe that those points are well fitted by a trajectory with the slope\n\begin{equation} q = 0.50\pm0.07, \end{equation}\nwhere \(\chi^2 = 1.48\) for the fit. It is interesting to compare this with the trajectory that can be drawn from the \(0^{++}\) ground state in the \((J,M^2)\) plane. The \((n,M^2)\) trajectory with \(n = 0,2,4,6\) is very similar to the trajectory beginning with the same state and continuing to \(J = 2, 4,\) and \(6\). This is what we see also for mesons and baryons in experiment: two analogous trajectories with similar slopes in the different planes.
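\nFor readers who wish to reproduce such fits, the following minimal Python sketch shows how the relative slope \(q\) of the fitting model above can be extracted by a linear least-squares fit of \(M^2\/T\) against \(N\). The mass values in the sketch are illustrative placeholders only, not the lattice data of \cite{Meyer:2004gx}.\n\begin{verbatim}
import numpy as np

# Fit M^2/T = (2*pi/q)*(N + a); the masses below are placeholders,
# to be replaced by the actual M/sqrt(T) values of the chosen trajectory.
N = np.array([0.0, 2.0, 4.0, 6.0])              # J or n along the trajectory
M_over_sqrtT = np.array([3.5, 5.6, 7.1, 8.4])   # placeholder masses in units of sqrt(T)

slope, intercept = np.polyfit(N, M_over_sqrtT**2, 1)  # M^2/T = slope*N + intercept
q = 2.0*np.pi/slope    # relative slope: ~1 for open strings, ~1/2 for closed strings
a = intercept/slope    # intercept parameter of the trajectory
print(q, a)
\end{verbatim}\n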
\n\nOther than the trajectory of the four \(0^{++}\) states (plotted in figure \ref{fig:lat_Meyer}), we list the slopes calculated for pairs of states that share other quantum numbers. These are collected in table \ref{tab:lat_n}.\n\begin{table} \centering\n\t\begin{tabular}{|c|c|c|c|c|c|} \hline\n\t\t\(J^{PC}\) & \(0^{++}\) & \(2^{++}\) & \(4^{++}\) & \(0^{-+}\) & \(2^{-+}\) \\ \hline\hline\n\t\tMeyer \cite{Meyer:2004gx} &\n\t\t\t\t0.50\plm0.07 & 0.67\plm0.10 & 0.30\plm0.06 & 0.39\plm0.07 & 0.56\plm0.13 \\ \hline\n\n\t\tM\&P \cite{Morningstar:1999rf} &\n\t\t\t\t0.51\plm0.12 & - & - & 0.32\plm0.02 & 0.38\plm0.03 \\ \hline\n\t\end{tabular}\n\t\caption{\label{tab:lat_n} Relative slopes \(q\) of trajectories in the \((n,M^2)\) plane. The first result (Meyer\/\(0^{++}\)) is that of a fit to the four point trajectory drawn in figure \ref{fig:lat_Meyer}. The other results are obtained when calculating the slopes between pairs of states, where the lowest state is assumed to have \(n = 0\), and the first excited state is taken to have \(n = 2\).}\n\end{table}\n\n\subsubsection{\texorpdfstring{$SU(N)$ vs. $SU(3)$}{SU(N) vs. SU(3)} and the quenched approximation}\nMost of the studies of glueballs on the lattice utilize the ``quenched'' approximation, which in this case amounts to calculating the spectrum of the pure SU(3) Yang-Mills gauge theory without matter. The degree to which the quenched results are modified when fermions are added to the theory is still unknown. However, if our purpose is to see whether or not glueballs form Regge trajectories, the spectrum of the pure gluon theory should be as useful as that of real QCD.\n\nThere have also been some calculations of the ``glueball'' spectrum of \(SU(N)\) Yang-Mills for other values of \(N\) \cite{Lucini:2014paa}. These results, taken from \cite{Lucini:2004my}, are fitted in \cite{Lucini:2014paa} to the formulae (the numbers in brackets are the errors in the last significant digits):\n\begin{equation} \frac{M_{0^{++}}}{\sqrt{T}} = 3.28(8)+\frac{2.1(1.1)}{N^2} \,,\end{equation}\n\[ \frac{M_{2^{++}}}{\sqrt{T}} = 4.78(14)+\frac{0.3(1.7)}{N^2} \,,\]\n\[ \frac{M_{0^{++*}}}{\sqrt{T}} = 5.93(17)-\frac{2.7(2.0)}{N^2} \,.\]\nUsing these values, we get for \(SU(3)\) the relative slopes (the prefactor of 2 in these formulae is the step in \(J\) or \(n\)):\n\begin{equation} 2\frac{2\pi T}{M^2_{2^{++}}-M^2_{0^{++}}} = 1.16\pm0.27, \qquad\n 2\frac{2\pi T}{M^2_{0^{++*}}-M^2_{0^{++}}} = 0.65\pm0.11 \,, \end{equation}\nwhile for the \(N\rightarrow\infty\) limit,\n\begin{equation} 2\frac{2\pi T}{M^2_{2^{++}}-M^2_{0^{++}}} = 1.04\pm0.13, \qquad\n 2\frac{2\pi T}{M^2_{0^{++*}}-M^2_{0^{++}}} = 0.52\pm0.05 \,.\end{equation}\n\nWhile this is too little data to be significant, we observe that the value approaches 1 as \(N\) grows for the excitation in \(J\), and it approaches \(\frac12\) (the closed string value) in \(n\). However, as was already seen from the results of \cite{Meyer:2004gx}, the first \(2^{++}\) does not seem to lie on the trajectory of the \(0^{++}\) ground state, and these results seem to confirm this further. The radial trajectory, on the other hand, is again perfectly consistent with the closed string picture, and more so when going to the limit of this large \(N\) computation.
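\nAs a quick cross-check of the central values quoted above, the short Python sketch below evaluates the relative slopes directly from the fitted mass formulae. Error propagation is omitted, and small differences with respect to the quoted numbers can arise from rounding.\n\begin{verbatim}
import numpy as np

def masses(N):
    m0  = 3.28 + 2.1/N**2   # M_{0++}/sqrt(T), central value
    m2  = 4.78 + 0.3/N**2   # M_{2++}/sqrt(T)
    m0s = 5.93 - 2.7/N**2   # M_{0++*}/sqrt(T)
    return m0, m2, m0s

for label, N in [('SU(3)', 3.0), ('large N', 1e9)]:
    m0, m2, m0s = masses(N)
    qJ = 2*2*np.pi/(m2**2 - m0**2)    # step of 2 in J between 0++ and 2++
    qn = 2*2*np.pi/(m0s**2 - m0**2)   # step of 2 in n between 0++ and 0++*
    print(label, round(qJ, 2), round(qn, 2))
\end{verbatim}\n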
\n\n\section{Summary} \label{sec:summary}\n\nFor many years the identification of glueballs, a basic prediction of QCD, in the experimental spectrum of flavorless isoscalar hadrons has been an open question. Moreover, the common lore is that there is no way to disentangle glueballs from flavorless mesons since there is no quantum number that distinguishes between them. \n\nIn this paper we have attempted to identify glueballs by turning to a well known feature of the hadron spectrum, its Regge trajectories. Stated differently, we use a stringy picture of rotating folded closed strings to describe the glueball in a similar way to the description of mesons and baryons in terms of open strings with massive endpoints of \cite{Sonnenschein:2014jwa} and \cite{Sonnenschein:2014bia}.\n\nThe great disadvantage in using trajectories is that they are a property not of single states, but of a spectrum of states. Thus, for positive identification, we need to have in our spectrum, to begin with, several glueballs which we would then assign to a trajectory. The fact that the closed string slope is exactly half the open string slope adds some ambiguity to the \((n,M^2)\) trajectories, where the value of \(n\) cannot be determined by experiment: two states whose mass difference is, for instance, \(\Delta M^2 = 4\/\ensuremath{\alpha^\prime}\) can be either open strings with \(\Delta n = 4\) between them, or closed strings with \(\Delta n = 2\). The difference between the open and closed string trajectories would be in the number of states between those two: there would be additional open string states at \(\Delta n = 1\), 2, and 3. Thus we have to rely on experiment to observe all the relevant states in the given mass range, so that the absence of a state from a Regge trajectory could reasonably be used as evidence.\n\nDue to this situation it is clearly advisable to use additional predictions pertaining to the properties of single states to identify them as open or closed string hadrons. We have presented, qualitatively, the decay mechanism of the closed string to two open strings, which would be the decay of a glueball into two mesons. We included one prediction of the branching ratios of glueballs when decaying into light mesons, kaons, or \(\phi\) (\(s\bar{s}\)) mesons. If there were measurements of a state which has those three decay modes with the hierarchy we predict between them, we could declare it a glueball, based on our model of holographic strings. One has to look more closely to find more ways in which open and closed strings differ.\n\nThere are obviously additional tasks and questions to further explore the closed string picture of glueballs. Here we list some of them:\n\begin{itemize}\n\item\nAs was emphasized in this note the most urgent issue is to gain additional data about flavorless hadrons. This calls for a further investigation of experiments that yield this kind of resonance and for proposing future experiments of potential glueball production, in particular in the range above 2.4 GeV. This can follow the predictions of the masses and widths of the resonances listed in appendix \ref{sec:predictions}.\n\item\nRelated to the exploration of experimental data is the investigation of efficient mechanisms of creating glueballs. This issue was not addressed in this paper. 
Among possible glueball formation mechanisms one finds radiative $J\/\psi$ decays, pomeron-pomeron collisions in hadron-hadron central production, and $p$-$\bar p$ annihilation. Naturally, we would like to understand possible glueball formation in LHC experiments. It is known that the latter involve gluon-gluon scattering processes, and hence they may serve as a device for glueball creation. \n\item\nAs was mentioned in section \ref{sec:non_critical_string}, the quantization of folded closed strings in D non-critical dimensions has not yet been deciphered. In \cite{Hellerman:2013kba} the expression derived for the intercept is singular in the case where there is only one rotation plane, as is naturally the case in $D=4$. We mentioned a potential avenue to resolve this issue by introducing massive particles on the folds, quantizing the system as that of a string with massive endpoints \cite{ASY}, and then taking the limit of zero mass. \n\item\nWe have mentioned that the rotating closed strings are in fact rotating folded closed strings. However, we did not make any attempt in this note to explore the role of the folds. In fact it seems that very little research has been devoted to the understanding of folded strings \cite{Ganor:1994rm}. It would be interesting to use the rotating closed string as an avenue to the more general exploration of strings with folds, which may be related to certain systems in nature.\n\item\nA mystery related to the closed string description of glueballs is the relation between the pomeron and the glueball. Supposedly both the glueball and the pomeron are described by a closed string. As we have emphasized in this note the slope of the closed string is half that of the open string and hence we advocated the search for trajectories with that slope. However, it was found from fitting the differential cross section of $p$-$p$ collisions that the slope of the pomeron is $\ensuremath{\alpha^\prime}_{pomeron}\approx 0.25$ GeV$^{-2}$. That is, a slope which is closer to a quarter of that of the meson open string than to half of it. Thus the stringy structure of the pomeron and its exact relation to the glueball is still an open question.\n\item\nThe closed string description of the glueball faces a very obvious question. In QCD one can form a glueball as a bound state of two, three, or in fact any number of gluons. The stringy picture seems to describe the composite of two gluons, and it is not clear how to realize those glueballs constructed in QCD from more than two gluons.\n\end{itemize} \n\n\acknowledgments{\nWe would like to thank Ofer Aharony and Abner Soffer for their comments on the manuscript and for insightful conversations, and Shmuel Nussinov and Shimon Yankielowicz for useful discussions. This work was supported in part by a centre of excellence supported by the Israel Science Foundation (grant number 1989\/14), and by the US-Israel bi-national fund (BSF) grant number 2012383 and the Germany Israel bi-national fund GIF grant number I-244-303.7-2013. 
\n}\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\\label{introduction}\n\nIn Ref.~\\cite{norma93} the representations in Grassmann and in Clifford space were discussed.\nIn Ref.~(\\cite{nh2018} and the references therein) the second quantization procedure in both\nspaces --- in Clifford space and in Grassmann space --- were discussed in order to try to understand \n\"why nature made a choice of Clifford rather than Grassmann space\" during the expansion \nof our universe, although in both spaces the creation operators $ \\hat{b}^{\\dagger}_{j} $ \nand the annihilation operators $ \\hat{b}_{j} $ exist fulfilling the anticommutation relations \nrequired for fermions~\\cite{nh2018}\n\\begin{eqnarray}\n\\{ \\hat{b}_i, \\hat{b}^{\\dagger}_{j} \\}_{+} |\\phi_{o}> &=&\\delta_{i j}\\; |\\psi_{o}>\\,,\n\\nonumber\\\\\n\\{\\hat{b}_i, \\hat{b}_{j} \\}_{+} |\\psi_{o}>&=& 0\\; |\\psi_{o}> \\,,\\nonumber\\\\\n\\{\\hat{b}^{\\dagger}_i,\\hat{b}^{\\dagger}_{j}\\}_{+} |\\psi_{o}>\n&=&0\\; |\\psi_{o}> \\,,\\nonumber\\\\\n\\hat{b}^{\\dagger}_{j} |\\psi_{o}>& =& \\, |\\psi_{j}>\\, \\nonumber\\\\\n\\hat{b}_{j} |\\psi_{o}>& =&0\\, |\\psi_{o}>\\,.\n\\label{ijthetaprod} \n\\end{eqnarray} \n$|\\psi_{o}>$ is the vacuum state. We use $|\\psi_o>= |1>$.\n\n\n Some observations to be included into introduction:\n However, even in the Grassmann case and with gravity only the scalar fields would appear in \n$d=(3+1)$.\n\nThe creation operators can be expressed in both spaces as products of eigenstates \nof the Cartan subalgebra, Eq.~(\\ref{choicecartan}), of the Lorentz algebra, \nEqs.~(\\ref{cartaneigengrass}, \\ref{signature}).\nStarting with one state (Ref.~\\cite{nh2018})\nall the other states\nof the same representation are reachable by the generators of the Lorentz transformations\n(which do not belong to the Cartan subalgebra), with ${\\cal {\\bf S^{ab}}}$ presented in \nEq.~(\\ref{Lorentztheta}) in Grassmann space and with either $S^{ab}$ or $\\tilde{S}^{ab}$, \nEq.~(\\ref{Lorentzgammatilde}), in Clifford space.\n\n But while there are in Clifford case two kinds of the generators of the Lorentz \ntransformations --- $S^{ab}$ and $\\tilde{S}^{ab}$, the first transforming members of one \nfamily among themselves, and the second transforming one member of a particular family into\nthe same member of other families --- there is in Grassmann space only one kind \nof the Lorentz generators --- ${\\cal {\\bf S^{ab}}}$. Correspondingly are all the states in \nClifford space, which can be second quantized as products of nilpotents and projectors~%\n\\cite{nh02,nh03,nh2018},\nreachable with one of the two kinds of the operators $S^{ab}$ and $\\tilde{S}^{ab}$, \nwhile different representations are in Grassmann space disconnected. \n\nOn the other hand the vacuum state is in Grassmann case simple --- $|\\psi_o>= |1>$ ---\nwhile in Clifford case is the sum of products of projectors, Eq.~(\\ref{vac1}). \n\nIn Grassmann space states are in the adjoint representations with respect to the Lorentz \ngroup, while states in Clifford space belong to the fundamental representations with respect \nto both generators, $S^{ab}$ and $\\tilde{S}^{ab}$, or they are singlets. 
Correspondingly, the properties of fermions described by the {\it spin-charge-family} theory~%\n\cite{IARD2016,n2014matterantimatter,JMP2013,normaJMP2015,nh2017,nd2017}, which \nuses the Clifford space to describe the fermion degrees of freedom, are in agreement with the \nobservations, offering an explanation for all the assumptions of the {\it standard model} \n(with families included) and also for other observed phenomena. \n\nIn the Grassmann case the spins \nmanifest, for example, in $SO(6)$ or $SO(5,1)$ decuplets or singlets --- triplets and \nsinglets in the Clifford case, \nTable~\ref{Table grassdecupletso51.} --- while with respect to the subgroups $SU(3)$ and \n$U(1)$ of $SO(6)$ the states belong either to singlets, triplets or sextets, \nTables~\ref{Table grassdecuplet.},~\ref{Table grasssextet.}\n --- triplets and singlets in the Clifford case.\n\n\nIn what follows we discuss representations, manifesting as charges and spins of fermions, of \nsubgroups of $SO(13,1)$, when internal degrees of freedom of fermions are described in \nGrassmann space, and compare properties of these representations with the properties of \nthe corresponding representations appearing in Clifford space. We assume, as in the \n{\it spin-charge-family} theory, that both spaces, the internal and the ordinary space, \nhave $d=2(2n+1)$ dimensions, with $n$ a positive integer and $d \ge 14$, and that all the degrees \nof freedom of fermions and bosons originate in $d=2(2n+1)$, in which fermions interact with \ngravity only. \n\nAfter the break of the starting symmetry $SO(13,1)$ into $SO(7,1) \times SU(3) \times U(1)$, and \nfurther to $SO(3,1) \times SU(2) \times SU(2) \times SU(3) \times U(1)$, fermions manifest \nin $d=(3+1)$ the spin and the corresponding charges and interact with the gauge fields, which \nare indeed the spin connections with the space index $m=(0,1,2,3)$, originating in \n$d=(13,1)$~\cite{nd2017}. Also scalar fields originate in gravity: those spin connections with \nthe space index $a =(5,6,7,8)$ determine masses of fermions, while those with the space index \n$a=(9,10,\dots,14)$ contribute to the particle\/antiparticle asymmetry in our \nuniverse~\cite{n2014matterantimatter}. \n\nWe pay attention in this paper mainly to fermion fields with spin $1$, the creation and \nannihilation operators of which fulfill the anticommutation relations of Eq.~(\ref{ijthetaprod})\nin Grassmann space.
\n\n\n\n\n\\subsection{Creation and annihilation operators in Grassmann space}\n\\label{grassmann}\n\nIn Grassmann $d=2(2n+1)$-dimensional space the creation and annihilation operators follow from\nthe starting two creation and annihilation operators, both with an odd Grassmann character, \nsince those with an even Grassmann character do not obey the anticommutation relations of \nEq.~(\\ref{ijthetaprod})~\\cite{nh2018} \n\\begin{eqnarray}\n\\hat{b}^{\\theta 1 \\dagger}_{1} &=& (\\frac{1}{\\sqrt{2}})^{\\frac{d}{2}} \\,\n(\\theta^0 - \\theta^3) (\\theta^1 + i \\theta^2) (\\theta^5 + i \\theta^6) \\cdots (\\theta^{d-1} +\n i \\theta^{d}) \\,,\\nonumber\\\\\n\\hat{b}^{\\theta 1}_{1} &=& (\\frac{1}{\\sqrt{2}})^{\\frac{d}{2}}\\,\n (\\frac{\\partial}{\\;\\partial \\theta^{d-1}} - i \\frac{\\partial}{\\;\\partial \\theta^{d}})\n\\cdots (\\frac{\\partial}{\\;\\partial \\theta^{0}}\n-\\frac{\\partial}{\\;\\partial \\theta^3})\\,,\\nonumber\\\\\n\\hat{b}^{\\theta 2 \\dagger }_{1} &=& (\\frac{1}{\\sqrt{2}})^{\\frac{d}{2}} \\,\n(\\theta^0 + \\theta^3) (\\theta^1 + i \\theta^2) (\\theta^5 + i \\theta^6) \\cdots (\\theta^{d-1} +\n i \\theta^{d}) \\,,\\nonumber\\\\\n\\hat{b}^{\\theta 2}_{1} &=& (\\frac{1}{\\sqrt{2}})^{\\frac{d}{2}}\\,\n (\\frac{\\partial}{\\;\\partial \\theta^{d-1}} - i \\frac{\\partial}{\\;\\partial \\theta^{d}})\n\\cdots (\\frac{\\partial}{\\;\\partial \\theta^{0}}\n+\\frac{\\partial}{\\;\\partial \\theta^3})\\,.\n\\label{start(2n+1)2theta}\n\\end{eqnarray}\nAll the creation operators are products of the eigenstates of the Cartan subalgebra operators,%\nEq.~(\\ref{choicecartan})\n\\begin{eqnarray}\n\\label{cartaneigengrass}\n{\\cal {\\bf S}}^{ab} (\\theta^a \\pm \\epsilon \\theta^b) &=& \\mp i \n\\frac{\\eta^{aa}}{\\epsilon} (\\theta^a \\pm \\epsilon \\theta^b)\\,, \\nonumber\\\\\n\\epsilon = 1\\,, \\;\\, {\\rm for}\\;\\, \\eta^{aa}=1\\,,&&\n\\epsilon = i \\,,\\;\\, {\\rm for}\\;\\, \\eta^{aa}= -1\\,, \\nonumber\\\\\n{\\cal {\\bf S}}^{ab}\\, (\\theta^a \\theta^b \\pm \\epsilon \\theta^c \\theta^d)=0\\,,&& \n {\\cal {\\bf S}}^{cd}\\, (\\theta^a \\theta^b \\pm \\epsilon \\theta^c \\theta^d)=0\\,.\n\\end{eqnarray}\n\n\nThe two creation operators, $\\hat{b}^{\\theta 1 \\dagger}_{1}$ and \n$\\hat{b}^{\\theta 2 \\dagger}_{1}$, if applied on the vacuum state, form the starting two states \n$\\phi^{1}_{1}$ and $\\phi^{2}_{1}$ of the two representations, respectively. The vacuum state \nis chosen to be the simplest one~\\cite{nh2018} --- $|\\phi_{0}> = |1>$. The rest of creation operators\nof each of the two groups, $\\hat{b}^{\\theta 1 \\dagger}_{i}$ and $\\hat{b}^{\\theta 2 \\dagger}_{i}$, \nfollow from the starting one by the application of the generators of the Lorentz transformations in \nGrassmann space ${\\cal {\\bf S}}^{ab}$, Eq.~(\\ref{Lorentztheta}), which do not belong to the \nCartan subalgebra, Eq.~(\\ref{choicecartan}), of the Lorentz algebra. 
They generate either \n$|\\phi^{1}_{j}>$ of the first group or $|\\phi^{2}_{j}>$ of the second group.\n\nAnnihilation operators $\\hat{b}^{\\theta 1}_{i}$ and $\\hat{b}^{\\theta 2}_{i}$ follow from the \ncreation ones by the Hermitian conjugation~\\cite{nh2018}, when taking into account the \nassumption\n\\begin{eqnarray}\n\\label{grassher}\n(\\theta^a)^{\\dagger} &=& \\frac{\\partial}{\\partial \\theta_{a}} \\eta^{aa}=\n-i \\,p^{\\theta a} \\eta^{aa}\\,, \n\\end{eqnarray}\nfrom where it follows\n\\begin{eqnarray}\n\\label{grassp}\n(\\frac{\\partial}{\\partial \\theta_{a}})^{\\dagger} &=& \\eta^{aa}\\, \\theta^a\\,,\\quad\n(p^{\\theta a})^{\\dagger} = -i \\eta^{aa} \\theta^a\\,.\n\\end{eqnarray}\n\n\nThe annihilation operators $\\hat{b}^{\\theta 1}_{i}$ and $\\hat{b}^{\\theta 2}_{i}$ annihilate \nstates $|\\phi^{1}_{i}>$ and $|\\phi^{2}_{i}>$, respectively. \n\nThe application of ${\\cal {\\bf S}}^{01}$ on $\\hat{b}^{\\theta 1 \\dagger}_{1}$, for example,\n transforms this creation operator into $\\hat{b}^{\\theta 1 \\dagger}_{2} = $ \n$(\\frac{1}{\\sqrt{2}})^{\\frac{d}{2} -1} \\,(\\theta^0 \\theta^3 +i \\theta^1 \\theta^2)$\n$ (\\theta^5 + i \\theta^6) \\cdots (\\theta^{d-1} - i \\theta^{d})$. Correspondingly its Hermitian \nconjugate annihilation operator is equal to\n$\\hat{b}^{\\theta 1}_{2} = (\\frac{1}{\\sqrt{2}})^{\\frac{d}{2}-1}\\,\n (\\frac{\\partial}{\\;\\partial \\theta^{d-1}} - i \\frac{\\partial}{\\;\\partial \\theta^{d}})\n\\cdots (\\frac{\\partial}{\\;\\partial \\theta^{3}} \\,\\frac{\\partial}{\\;\\partial \\theta^0} - i \n\\frac{\\partial}{\\;\\partial \\theta^{2}} \\,\\frac{\\partial}{\\;\\partial \\theta^1})$.\n\nAll the states are normalized with respect to the integral over the Grassmann coordinate \nspace~\\cite{norma93}\n\n\\begin{eqnarray}\n\\label{grassnorm}\n<\\phi^{a}_{i}|\\phi^{b}_{j} > &=& \\int d^{d-1} x d^d \\theta^a\\, \\,\\omega \n <\\phi^{a}_{i}|\\theta> <\\theta|\\phi^{b}_{j} > = \\delta^{ab}\\,\\delta_{ij} \\,, \\nonumber\\\\\n\\omega&=& \\Pi^{d}_{k=0}(\\frac{\\partial}{\\;\\,\\partial \\theta_k} + \\theta^{k})\\,,\n\\end{eqnarray}\nwhere $\\omega$ is a weight function, defining the scalar product $<\\phi^a_{i}|\\phi^b_{j} >$,\n and we require that~\\cite{norma93}\n\\begin{eqnarray}\n\\label{grassintegral}\n\\{ d\\theta^a, \\theta^b \\}_{+} &=&0, \\,\\;\\; \\int d\\theta^a =0\\,,\\,\\;\\; \n\\int d\\theta^a \\theta^a =1\\,,\\nonumber\\\\\n\\int d^d \\theta \\,\\,\\theta^0 \\theta^1 \\cdots \\theta^d &=&1\\,,\n\\nonumber\\\\\nd^d \\theta &=&d \\theta^d \\dots d\\theta^0\\,,\n\\end{eqnarray}\nwith $ \\frac{\\partial}{\\;\\,\\partial \\theta_a} \\theta^c = \\eta^{ac}$. 
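\nThe anticommutation relations above can be cross-checked explicitly in the smallest case. The sketch below is only a numerical illustration, not part of the formalism: it represents $\theta^a$ and $\frac{\partial}{\partial \theta^a}$ as matrices on the four-dimensional Grassmann algebra of $d=(1+1)$ and verifies Eq.~(\ref{ijthetaprod}) for the $d=(1+1)$ analogue of the operators of Eq.~(\ref{start(2n+1)2theta}), $\hat{b}^{\theta \dagger} = \frac{1}{\sqrt{2}}(\theta^0 - \theta^1)$ and $\hat{b}^{\theta} = \frac{1}{\sqrt{2}}(\frac{\partial}{\partial \theta^0} - \frac{\partial}{\partial \theta^1})$.\n\begin{verbatim}
import numpy as np

# Basis of the Grassmann algebra for d=(1+1): {1, th0, th1, th0*th1}.
# theta^a acts by left multiplication, d/dtheta^a by left differentiation.
T0 = np.zeros((4, 4)); T0[1, 0] = 1; T0[3, 2] = 1     # theta^0
T1 = np.zeros((4, 4)); T1[2, 0] = 1; T1[3, 1] = -1    # theta^1
D0 = np.zeros((4, 4)); D0[0, 1] = 1; D0[2, 3] = 1     # d/dtheta^0
D1 = np.zeros((4, 4)); D1[0, 2] = 1; D1[1, 3] = -1    # d/dtheta^1

acomm = lambda A, B: A @ B + B @ A
I = np.eye(4)

# Grassmann relations: {theta^a, theta^b} = 0, {d/dtheta^a, theta^b} = delta^{ab}
assert np.allclose(acomm(T0, T1), 0) and np.allclose(acomm(D0, T1), 0)
assert np.allclose(acomm(D0, T0), I) and np.allclose(acomm(D1, T1), I)

# The d=(1+1) creation/annihilation pair used in the text
bdag = (T0 - T1)/np.sqrt(2)
b    = (D0 - D1)/np.sqrt(2)
vac  = np.array([1.0, 0, 0, 0])               # the vacuum |1>

assert np.allclose(acomm(b, bdag) @ vac, vac) # {b, b^dagger}|1> = |1>
assert np.allclose(b @ vac, 0)                # b|1> = 0
assert np.allclose(bdag @ bdag, 0)            # (b^dagger)^2 = 0
print('d=(1+1) Grassmann checks passed')
\end{verbatim}\n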
\n\n\nThere are $\frac{1}{2}\, \frac{d!}{\frac{d}{2}!\, \frac{d}{2}!}$ creation operators of an odd Grassmann character in each of these two groups \nin $d=2(2n+1)$-dimensional space.\n\nThe rest of the creation operators (and the corresponding annihilation operators) would have the \nopposite, even, Grassmann character to the ones studied so far: like {\bf a.} $\theta^0 \theta^1$ \nfor the creation operator and \n[$\frac{\partial}{\partial \theta^1}\frac{\partial}{\partial \theta^0}$] for the \ncorresponding annihilation operator in $d=(1+1)$\n (since $\{\theta^0 \theta^1, \frac{\partial}{\partial \theta^{1}}$ \n $\frac{\partial}{\partial \theta^{0}} \}_{+}$ gives $(1+ (1+1) \theta^0 \theta^1 $\n $\frac{\partial}{\partial \theta^{1}} \frac{\partial}{\partial \theta^{0}}) $), and like {\bf b.}\n $(\theta^0 \mp \theta^3) (\theta^1\pm i \theta^2)$ for the creation operator and \n[$ (\frac{\partial}{\partial \theta^1}\mp i \frac{\partial}{\partial \theta^2}) \n(\frac{\partial}{\partial \theta^0} \mp \frac{\partial}{\partial \theta^3})$] for the\nannihilation operator, or $\theta^0 \theta^3\n \theta^1 \theta^2$ for the creation operator and [$\frac{\partial}{\partial \theta^2} \n\,\frac{\partial}{\partial \theta^1}\,\frac{\partial}{\partial \theta^3}\,\n\frac{\partial}{\partial \theta^0}$] for the annihilation operator in $d=(3+1)$ \n (since, let us say, $\{\frac{1}{2} (\theta^0 - \theta^3) (\theta^1 + i \theta^2),$ \n $\frac{1}{2} (\frac{\partial}{\partial \theta^1} - i \frac{\partial}{\partial \theta^2}) \n (\frac{\partial}{\partial \theta^0} - \frac{\partial}{\partial \theta^3})\}_{+}$ gives\n $(1 + \frac{1}{4} (1+1) (\theta^0 - \theta^3) (\theta^1 + i \theta^2) \n (\frac{\partial}{\partial \theta^1} - i \frac{\partial}{\partial \theta^2}) \n (\frac{\partial}{\partial \theta^0} - \frac{\partial}{\partial \theta^3}))$ and equivalently for \nother cases), but applied on the vacuum state some of them still fulfill some of the relations\n of Eq.~(\ref{ijthetaprod}), but not all (like $\{\frac{1}{2}(\theta^0 - \theta^3) \n(\theta^1 + i \theta^2),$ $\frac{1}{2} (\theta^0 + \theta^3) (\theta^1 - i \theta^2)\}_{+}\n=$ $i \theta^0 \theta^1\theta^2 \theta^3 $, while it should be zero). \n\nLet us add that, like in the Clifford case, one can simplify the scalar product in the Grassmann case \nby recognizing that the scalar product is equal to $\delta^{ab}\,\delta_{ij}$\n\begin{eqnarray}\n\label{grassscalar}\n<\phi^{a}_{i}|\theta> <\theta|\phi^{b}_{j} > &=& \delta^{ab}\,\delta_{ij}\,,\n\end{eqnarray}\nwithout integration over the Grassmann coordinates. Let us demonstrate this in the case\nof $d=(1+1)$: $<1|\frac{1}{\sqrt{2}}(\frac{\partial}{\partial \theta^0} - \n\frac{\partial}{\partial \theta^1})\n\frac{1}{\sqrt{2}}(\theta^0 - \theta^1)|1>=1 $, where $|1>$ is the normalized vacuum state,\n$<1|1>=1$. This is true \nin all dimensions, which can easily be understood for all the states, which are defined by \nthe creation operators $\hat{b}_{i}^{\dagger}$ on the vacuum state $|1>$, \n$|\phi^{b}_{i} >= \hat{b}_{i}^{\dagger}|1>$, fulfilling the anticommutation relations\nof Eq.~(\ref{ijthetaprod}).
therein),\n$\\gamma^a$ and $\\tilde{\\gamma}^a$, both fulfilling the anticommutation relations\n\\begin{eqnarray}\n\\label{tildecliffcomrel}\n \\{\\gamma^a, \\gamma^b \\}_{+} &=& 2 \\eta^{a b} = \n\\{\\tilde{\\gamma}^a, \\tilde{\\gamma}^b \\}_{+}\\,, \\nonumber\\\\ \n \\{\\gamma^a, \\tilde{\\gamma}^b \\}_{+}&=&0\\,. \n\\end{eqnarray}\nBoth Clifford algebra objects are expressible with $\\theta^a $ and \n$\\frac{\\partial}{\\,\\;\\partial \\theta^a}$~\\cite{norma93,nh2018}, \n (\\cite{IARD2016} and Refs. therein)\n\\begin{eqnarray}\n\\label{cliffthetarel}\n\\gamma^a &=& (\\theta^a + \\frac{\\partial}{\\;\\partial \\theta_a})\\,,\\nonumber\\\\\n\\tilde{\\gamma}^a &=& i \\, (\\theta^a - \\frac{\\partial}{\\;\\partial \\theta a})\\,, \\nonumber\\\\\n\\theta^a &=&\\frac{1}{2} (\\gamma^a - i \\tilde{\\gamma}^a)\\,,\\nonumber\\\\\n\\frac{\\,\\partial}{\\partial \\theta_a} &=&\\frac{1}{2} \\, (\\gamma^a + i \\tilde{\\gamma}^a)\\,,\n\\end{eqnarray}\nfrom where it follows: $(\\gamma^a)^{\\dagger} = \\gamma^a \\eta^{aa}$, \n$(\\tilde{\\gamma}^a)^{\\dagger} = \\tilde{\\gamma}^a \\eta^{aa}$,\n$\\gamma^a \\gamma^a = \\eta^{aa}$, $\\gamma^a (\\gamma^a)^{\\dagger} =1$,\n$ \\tilde{\\gamma}^a \\tilde{\\gamma}^a = \\eta^{aa}$, \n$ \\tilde{\\gamma}^a (\\tilde{\\gamma}^a)^{\\dagger} =1$.\n\n\nCorrespondingly we can use either $\\gamma^a$ or $\\tilde{\\gamma}^a$ instead of \n$\\theta^a$ to span the internal space of fermions. Since both, $\\gamma^a$ and \n$\\tilde{\\gamma}^a$, are expressible with $\\theta^a$ and the derivatives with respect to \n$\\theta^a$, the norm of vectors in Clifford space can be defined by the same integral as in \nGrassmann space, Eq.(\\ref{grassnorm}), or we can simplify the scalar product (as in the \nGrassmann case, Eq.~(\\ref{grassscalar}) by introducing\nthe Clifford vacuum state $|\\psi_{oc}>$, Eq.~(\\ref{vac1}), instead of $|1>$ in Grassmann\ncase. \n\nWe make use of $\\gamma^a$ to span the vector space. As in the case of Grassmann\nspace we require that the basic states are eigenstates of the Cartan subalgebra operators of \n$S^{ab}$ and $\\tilde{S}^{ab}$, Eq.~(\\ref{choicecartan}). \n\\begin{eqnarray}\n\\stackrel{ab}{(k)}:&=& \n\\frac{1}{2}(\\gamma^a + \\frac{\\eta^{aa}}{ik} \\gamma^b)\\,,\\quad \n\\stackrel{ab}{(k)}^{\\dagger} = \\eta^{aa}\\stackrel{ab}{(-k)}\\,,\\nonumber\\\\\n\\stackrel{ab}{[k]}:&=&\n\\frac{1}{2}(1+ \\frac{i}{k} \\gamma^a \\gamma^b)\\,,\\quad \\;\\,\n\\stackrel{ab}{[k]}^{\\dagger} = \\,\\stackrel{ab}{[k]}\\,,\\nonumber\\\\\nS^{ab}\\,\\stackrel{ab}{(k)} &=& \\frac{1}{2} k\\, \\stackrel{ab}{(k)}\\,,\\quad \\quad \\quad\nS^{ab}\\,\\stackrel{ab}{[k]} = \\frac{1}{2} k\\, \\stackrel{ab}{[k]}\\,, \\nonumber\\\\\n\\tilde{S}^{ab}\\,\\stackrel{ab}{(k)} &=& \\frac{1}{2} k\\, \\stackrel{ab}{(k)}\\,,\\quad \\quad \\quad\n\\tilde{S}^{ab}\\,\\stackrel{ab}{[k]} = -\\frac{1}{2} k\\, \\stackrel{ab}{[k]}\\,,\n\\label{signature}\n\\end{eqnarray}\nwith $k^2 = \\eta^{aa} \\eta^{bb}$. 
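\nEq.~(\ref{signature}) can be checked with explicit matrices. The sketch below is only a numerical illustration: it uses the standard Dirac representation of the $\gamma^a$ in $d=(3+1)$ (any matrices obeying Eq.~(\ref{tildecliffcomrel}) would do) and assumes the usual form $S^{ab} = \frac{i}{4}[\gamma^a,\gamma^b]$ for the generators of Eq.~(\ref{Lorentzgammatilde}), which is not reproduced in this excerpt.\n\begin{verbatim}
import numpy as np

# Dirac representation of gamma^a in d=(3+1), metric eta = diag(1,-1,-1,-1).
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]])
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
I2, Z2 = np.eye(2, dtype=complex), np.zeros((2, 2), dtype=complex)
g = [np.block([[I2, Z2], [Z2, -I2]])] + [np.block([[Z2, s], [-s, Z2]]) for s in (s1, s2, s3)]
eta = np.diag([1.0, -1.0, -1.0, -1.0])
I4 = np.eye(4, dtype=complex)

# {gamma^a, gamma^b} = 2 eta^{ab}
for a in range(4):
    for b in range(4):
        assert np.allclose(g[a] @ g[b] + g[b] @ g[a], 2*eta[a, b]*I4)

# For ab = 03 and k = +i, Eq. (signature) gives
# (03)(+i) = (1/2)(gamma^0 - gamma^3) and [03][+i] = (1/2)(1 + gamma^0 gamma^3).
nil  = 0.5*(g[0] - g[3])
proj = 0.5*(I4 + g[0] @ g[3])
S03  = 0.25j*(g[0] @ g[3] - g[3] @ g[0])   # assumed form S^{03} = (i/4)[gamma^0, gamma^3]

assert np.allclose(nil @ nil, 0)           # nilpotency of (k)
assert np.allclose(proj @ proj, proj)      # [k] is a projector
assert np.allclose(S03 @ nil, 0.5j*nil)    # S^{03} eigenvalue k/2 with k = +i
assert np.allclose(S03 @ proj, 0.5j*proj)  # same eigenvalue for the projector
print('Clifford nilpotent/projector checks passed')
\end{verbatim}\n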
To calculate $\\tilde{S}^{ab}\\,\\stackrel{ab}{(k)}$ and \n$\\tilde{S}^{ab}\\,\\stackrel{ab}{[k]}$ we define~\\cite{nh03,nh02} the application of \n$\\tilde{\\gamma}^a$ on any Clifford algebra object A as follows\n\\begin{eqnarray}\n(\\tilde{\\gamma^a} A = i (-)^{(A)} A \\gamma^a)|\\psi_{oc}>\\,,\n\\label{gammatildeA}\n\\end{eqnarray}\nwhere $A$ is any Clifford algebra object and $(-)^{(A)} = -1$, if $A$ is an odd Clifford algebra\n object and $(-)^{(A)} = 1$, if $A$ is an even Clifford algebra\n object, $|\\psi_{oc}>$ is the vacuum state, replacing the vacuum state $|\\psi_o>= |1>$,\n used in Grassmann case, with the \none of Eq.~(\\ref{vac1}), in accordance with the relation of Eqs.~({\\ref{cliffthetarel},\n\\ref{grassnorm}, \\ref{grassintegral}}), Ref.~\\cite{nh2018}.\n\n\nWe can define now the creation and annihilation operators in Clifford space so that they fulfill the\nrequirements of Eq.~(\\ref{ijthetaprod}).\nWe write the starting creation operator and its Hermitian conjugate one (in \naccordance with Eq.~(\\ref{signature}) and Eq.(\\ref{choicecartan})) in $2(2n+1)$-dimensional \nspace as follows~\\cite{nh2018}\n\\begin{eqnarray}\n\\hat{b}^{1 \\dagger}_1&=& \\stackrel{03}{(+i)} \\stackrel{12}{(+)} \\stackrel{56}{(+)}\\cdots\n\\stackrel{d-1\\;d}{(+)}\\,,\\nonumber\\\\\n\\hat{b}^{1}_1&=& \\stackrel{d-1\\;d}{(-)} \\cdots \\stackrel{56}{(-)} \\stackrel{12}{(-)}\n\\stackrel{03}{(-i)}\\,.\n\\label{bstart}\n\\end{eqnarray}\nThe starting creation operator $\\hat{b}^{1 \\dagger}_1$, when applied on the vacuum state\n$|\\psi_{oc}>$, defines the starting family member of the starting ''family\". The corresponding\nstarting annihilation operator is its Hermitian conjugated one, Eq.~(\\ref{signature}).\n\nAll the other creation operators of the same family can be obtained by the application of the \ngenerators of the Lorentz transformations $S^{ab}$, Eq.~(\\ref{Lorentzgammatilde}), which do \nnot belong to the Cartan subalgebra of $SO(2(2n+1) -1,1)$, Eq.~(\\ref{choicecartan}). \n\\begin{eqnarray}\n\\hat{b}^{1\\dagger}_i &\\propto & S^{ab} ..S^{ef} \\hat{b}^{1\\dagger}_1\\,,\\nonumber\\\\\n\\hat{b}^{1}_i&\\propto & \\hat{b}^{1}_1 S^{ef}..S^{ab}\\,,\n\\label{b1i}\n\\end{eqnarray}\nwith $S^{ab\\dagger} = \\eta^{aa} \\eta^{bb} S^{ab}$. The proportionality factors are chosen \nso, that the corresponding states $|\\psi^{1}_{1}>= \\hat{b}^{1\\dagger}_i |\\psi_{oc}>$ are \nnormalized, where $|\\psi_{oc}>$ is the normalized vacuum state, $<\\psi_{oc}|\\psi_{oc}> =1$. \n\nThe creation operators creating different \"families\" with respect to the starting \"family\",\n Eq.~(\\ref{bstart}), \ncan be obtained from the starting one by the application of $\\tilde{S}^{ab}$, \nEq.~(\\ref{Lorentzgammatilde}), which do not belong to the Cartan subalgebra of \n$\\widetilde{SO}(2(2n+1) -1,1)$, Eq.~(\\ref{choicecartan}). 
They all keep the \"family member\" \nquantum number unchanged.\n\\begin{eqnarray}\n\\hat{b}^{\\alpha \\dagger}_i &\\propto & \\tilde{S}^{ab} \\cdots \\tilde{S}^{ef}\\,\n\\hat{b}^{1\\dagger}_{i}\\,.\n\\label{balpha1}\n\\end{eqnarray}\nCorrespondingly we can define (up to the proportionality factor) any creation operator for any\n\"family\" and any \"family member\" with the application of $S^{ab}$ and $\\tilde{S}^{ab}$%\n~\\cite{nh2018}\n\\begin{eqnarray}\n\\hat{b}^{\\alpha \\dagger}_i&\\propto & \\tilde{S}^{ab} \\cdots \\tilde{S}^{ef} \n{S}^{mn}\\cdots {S}^{pr}\n\\hat{b}^{1\\dagger}_{1}\\nonumber\\\\\n&\\propto & {S}^{mn}\\cdots {S}^{pr} \\hat{b}^{1\\dagger}_{1} {S}^{ab} \\cdots {S}^{ef}\\,.\n\\label{anycreation}\n\\end{eqnarray}\nAll the corresponding annihilation operators follow from the creation ones by the Hermitian \nconjugation.\n\nThere are $2^{\\frac{d}{2}-1}$ $\\times \\;\\, 2^{\\frac{d}{2}-1}$ creation operators of an odd \nClifford character and the same number of annihilation operators, which fulfill the anticommutation\nrelations of Eq.~(\\ref{ijthetaprod}) on the vacuum state $|\\psi_{oc}>$ with\n$2^{\\frac{d}{2}-1}$ summands\n\\begin{eqnarray}\n|\\psi_{oc}>&=& \\alpha\\,( \\stackrel{03}{[-i]} \\stackrel{12}{[-]} \n\\stackrel{56}{[-]}\\cdots\n\\stackrel{d-1\\;d}{[-]} + \\stackrel{03}{[+i]} \\stackrel{12}{[+]} \\stackrel{56}{[-]} \\cdots\n\\stackrel{d-1\\;d}{[-]} + \\stackrel{03}{[+i]} \\stackrel{12}{[-]} \\stackrel{56}{[+]}\\cdots\n\\stackrel{d-1\\;d}{[-]} + \\cdots ) |0>\\,, \\quad \\nonumber\\\\\n&&\\alpha =\\frac{1}{\\sqrt{2^{\\frac{d}{2}-1}}}\\,, \\nonumber\\\\\n&&{\\rm for}\\; d=2(2n+1)\\,,\n\\label{vac1}\n\\end{eqnarray}\n$n$ is a positive integer. For a chosen $\\alpha =\\frac{1}{\\sqrt{2^{\\frac{d}{2}-1}}}$\nthe vacuum is normalized: $<\\psi_{oc}|\\psi_{oc}>=1$. \n\n\nIt is proven in Ref.~\\cite{nh2018} that the creation and annihilation operators fulfill the\nanticommutation relations required for fermions, Eq.~(\\ref{ijthetaprod}). \n\n\n\n\n\\section{Properties of representations of the Lorentz group $SO(2(2n+1))$ and of subgroups\nin Grassmann and in Clifford space}\n\\label{grassmanncliffordrepresentations}\n\nThe purpose of this contribution is to compare properties of the representations of the Lorentz\ngroup $SO(2(2n+1))$, $n \\ge 3$, when for the description of the internal degrees of freedom \nof fermions either {\\bf i.} Grassmann space or {\\bf ii.} Clifford space is used. The \n{\\it spin-charge-family} theory~(\\cite{JMP2013,normaJMP2015,IARD2016,%\nn2014matterantimatter,nh2017,nd2017,n2012scalars} and the references therein) namely\npredicts that all the properties of the observed either quarks and leptons or vector gauge fields \nor scalar gauge fields originate in $d \\ge (13+1)$, in which massless fermions interact with the \ngravitational field only --- with its spin connections and \nvielbeins. \n\nHowever, both --- Clifford space and Grassmann space --- allow second quantized states, the \ncreation and annihilation operators of which fulfill the anticommutation relations for fermions of \nEq.~(\\ref{ijthetaprod}). 
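\nTo keep the counting transparent, the short sketch below evaluates these numbers for $d=6$ and $d=14$: in the Clifford case $2^{\frac{d}{2}-1}$ \"family members\" times $2^{\frac{d}{2}-1}$ \"families\", and in the Grassmann case two groups of $\frac{1}{2}\,\frac{d!}{\frac{d}{2}!\,\frac{d}{2}!}$ states each (for $d=6$ each Grassmann group is a decuplet, as mentioned in the introduction).\n\begin{verbatim}
from math import comb

for d in (6, 14):
    clifford_members = 2**(d//2 - 1)        # family members (= number of families)
    clifford_total   = clifford_members**2  # odd-Clifford-character creation operators
    grassmann_group  = comb(d, d//2)//2     # (1/2) d!/((d/2)! (d/2)!) per group
    print(d, clifford_members, clifford_total, 2*grassmann_group)
# d=6 : 4, 16, 20    (two Grassmann decuplets)
# d=14: 64, 4096, 3432
\end{verbatim}\n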
\n\nBut while Clifford space offers the description of spins, charges and families of fermions in \n$d=(3+1)$, all in the fundamental representations of the Lorentz group $SO(13,1)$ and of the \nsubgroups of the Lorentz group, in agreement with the observations, the representations of the \nLorentz group are in Grassmann space the adjoint ones, in disagreement with what we \nobserve.\n\nWe compare properties of the representations in the Grassmann case with those in the Clifford case \nto be able to better understand \"the choice of nature in the expanding universe, making use of the \nClifford degrees of freedom\" rather than of the Grassmann degrees of freedom.\n\nIn the introduction we briefly reviewed properties of creation and annihilation operators in both spaces,\npresented in Ref.~\cite{nh2018} (and the references therein). \nWe pay attention to spaces with $d=2(2n+1)$ ordinary coordinates and $d=2(2n+1)$ internal \ncoordinates, either of Clifford or of Grassmann character. \n\n{\bf i.} $\;\;$ In the Clifford case there are $2^{\frac{d}{2} - 1}$ creation operators of an odd \nClifford character, creating \"family members\" when applied on the vacuum state. We choose \nthem to be eigenstates of the Cartan subalgebra operators, Eq.~(\ref{choicecartan}), of the Lorentz \nalgebra. All the members\ncan be reached from any of the creation operators by the application of $S^{ab}$, \nEq.~(\ref{Lorentzgammatilde}).\nEach \"family member\" appears in $2^{\frac{d}{2} - 1}$ \"families\", again of an odd Clifford \ncharacter, since the corresponding creation operators are reachable by $\tilde{S}^{ab}$, \nEq.~(\ref{Lorentzgammatilde}), which are Clifford even objects. \n\nThere are correspondingly $2^{\frac{d}{2} - 1} \cdot$ $2^{\frac{d}{2} - 1}$ creation and the \nsame number ($2^{\frac{d}{2} - 1} \cdot$ $2^{\frac{d}{2} - 1}$) of annihilation operators. Also \nthe annihilation operators, annihilating states of $2^{\frac{d}{2} - 1}$ \"family members\" in\n $2^{\frac{d}{2} - 1}$ \"families\", have an odd Clifford character, since they are Hermitian conjugate\n to the creation ones. \n\nThe rest of the $2 \cdot$ $2^{\frac{d}{2} - 1}\cdot$ $2^{\frac{d}{2} - 1}$ members\nof the Lorentz representations have an even Clifford character, which means that the corresponding \ncreation and annihilation operators cannot fulfill the anticommutation relations required for \nfermions, Eq.~(\ref{ijthetaprod}). \nAmong these, the $2^{\frac{d}{2} - 1}$ products of projectors determine the vacuum state, \nEq.~(\ref{vac1}).\n\n\n{\bf ii.} $\;\;$ In the Grassmann case there are $\frac{d!}{\frac{d}{2}!\,\frac{d}{2}! }$ \noperators of an odd\nGrassmann character, which form the creation operators, fulfilling with the corresponding \nannihilation operators the requirements of Eq.~(\ref{ijthetaprod}). All the creation operators \nare chosen to be products of the eigenstates of the Cartan subalgebra operators ${\cal {\bf S}}^{ab}$, \nEq.~(\ref{choicecartan}). The corresponding annihilation operators are the Hermitian conjugates \nof the creation operators, Eqs.~(\ref{grassher}, \ref{grassp}, \ref{start(2n+1)2theta}). \nThe creation operators form, when applied on the simple vacuum state $|\phi_{o}> = |1>$,\ntwo independent groups of states. The members of each of the two groups are \nreachable from any member of a group by the application of ${\cal {\bf S}}^{ab}$, \nEq.~(\ref{Lorentztheta}).
All the states of any of the two decuplets are orthonormalized.\n\nWe comment in what follows the representations in $d=(13+1)$ in Clifford and in Grassmann case.\nIn {\\it spin-charge family} theory there are breaks of the starting symmetry from $SO(13,1)$ to\n$SO(3,1)\\times SU(2) \\times SU(3) \\times U(1)$ in steps, which lead to the so far observed \nquarks and leptons, gauge and scalar fields and gravity. One \nof the authors (N.S.M.B.), together with H.B. Nielsen, defined the discrete symmetry operators for \nKaluza-Klein theories for spinors in Clifford space~\\cite{nhds}. In Ref.~\\cite{nh2018} the same \nauthors define the discrete \nsymmetry operators in the case that for the description of fermion degrees of \nfreedom Grassmann space is used. Here we comment symmetries in both spaces for some of\nsubgroups of the $SO(13,1)$ group, as well as the appearance of the Dirac sea. \n\n\n\\subsection{Currents in Grassmann space} \n\\label{currents}\n\n\n\n\\begin{eqnarray}\n\\{ \\theta^a p_a, \\frac{\\partial }{\\partial \\theta_b} p_b \\}_{+}&=&\n \\theta^a p_a \\frac{\\partial }{\\partial \\theta_b} p_b + \n \\frac{\\partial }{\\partial \\theta_b} p_b \\theta^a p_a = \\eta^{ab}p_a p_b \\,, \n\\label{KGgrass}\n\\end{eqnarray}\n\n\\begin{eqnarray}\n\\phi^{\\dagger}(\\theta^0 + \\frac{\\partial }{\\partial \\theta_0})\n(\\theta^a + \\frac{\\partial }{\\partial \\theta_a}) \\phi &=& j^{a} \\,, \n\\label{currentgrass}\n\\end{eqnarray}\n\n\\subsection{Equations of motion in Grassmann and Clifford space}\n\\label{equationinCandG}\n\nWe define~\\cite{nh2018} the action in Grassmann space, for which we require --- similarly as in \n Clifford case --- that the action for a free massless object \n\\begin{eqnarray}\n{\\cal A}\\, &=& \\frac{1}{2} \\int \\; d^dx \\;d^d\\theta\\; \\omega \\, \\{\\phi^{\\dagger} \n(1-2\\theta^0 \\frac{\\partial}{\\partial \\theta^0}) \\,\\frac{1}{2}\\,\n (\\theta^a p_{a}+ \\eta^{aa} \\theta^{a \\dagger}\n p_{a}) \\phi \\} \\,, \n\\label{actionWeylGrass}\n\\end{eqnarray}\nis Lorentz invariant. 
The corresponding equation of motion is \n\\begin{eqnarray}\n\\label{Weylgrass}\n\\frac{1}{2}[ (1-2\\theta^0 \\frac{\\partial}{\\partial \\theta^0}) \\,\\theta^a + \n((1-2\\theta^0 \\frac{\\partial}{\\partial \\theta^0}) \\,\\theta^a)^{\\dagger}]\\,\n\\, p_{a} \\,|\\phi^{\\theta}_{i}>\\,&= & 0\\,,\n\\end{eqnarray}\n$p_{a} = i\\, \\frac{\\partial}{\\partial x^a}$, leading to the Klein-Gordon equation\n\\begin{eqnarray}\n\\label{LtoKGgrass}\n\\{(1-2\\theta^0 \\frac{\\partial}{\\partial \\theta^0}) \\theta^a p_{a}\\}^{\\dagger}\\,\\theta^b p_{b}\n|\\phi^{\\theta}_{i}>&= & \np^a p_a |\\phi^{\\theta}_{i}>=0\\,.\n\\end{eqnarray}\nIn the Clifford case the action for massless fermions is well known\n\\begin{eqnarray}\n{\\cal A}\\, &=& \\int \\; d^dx \\; \\frac{1}{2}\\, (\\psi^{\\dagger}\\gamma^0 \\, \\gamma^a p_{a} \\psi) +\n h.c.\\,, \n\\label{actionWeyl}\n\\end{eqnarray}\n leading to the equations of motion \n\\begin{eqnarray}\n\\label{Weyl}\n\\gamma^a p_{a} |\\psi^{\\alpha}>&= & 0\\,, \n\\end{eqnarray}\nwhich fulfill also the Klein-Gordon equation\n\\begin{eqnarray}\n\\label{LtoKG}\n\\gamma^a p_{a} \\gamma^b p_b |\\psi^{\\alpha}_{i}>&= & \np^a p_a |\\psi^{\\alpha}_{i}>=0\\,.\n\\end{eqnarray}\n\n\n\n\n\\subsection{Discrete symmetries in Grassmann and Clifford space}\n\\label{CPT}\n\n\nWe follow also here Ref.~\\cite{nh2018} and the references therein.\nWe distinguish in $d$-dimensional space two kinds of dicsrete operators ${\\cal C}, {\\cal P}$ \nand ${\\cal T}$\n operators with respect to the internal space which we use.\n\nIn the Clifford case~\\cite{nhds}, when the whole $d$-space is treated equivalently, we have \n\\begin{eqnarray}\n\\label{calCPTH}\n{\\cal C}_{{\\cal H}}&=& \\prod_{\\gamma^a \\in \\Im} \\gamma^a \\,\\, K\\,,\\quad\n{\\cal T}_{{\\cal H}}= \\gamma^0 \\prod_{\\gamma^a \\in \\Re} \\gamma^a \\,\\, K\\, I_{x^0}\\,\\,\\,,\n \\quad {\\cal P}^{(d-1)}_{{\\cal H}} = \\gamma^0\\,I_{\\vec{x}}\\,,\\nonumber\\\\\nI_{x} x^a &=&- x^a\\,, \\quad I_{x^0} x^a = (-x^0,\\vec{x})\\,, \\quad I_{\\vec{x}} \\vec{x} =\n -\\vec{x}\\,, \\nonumber\\\\\nI_{\\vec{x}_{3}} x^a &=& (x^0, -x^1,-x^2,-x^3,x^5, x^6,\\dots, x^d)\\,.\n\\end{eqnarray}\nThe product $\\prod \\, \\gamma^a$ is meant in the ascending order in $\\gamma^a$.\n\nIn the Grassmann case we correspondingly define\n\\begin{eqnarray}\n\\label{calCPTG}\n{\\cal C}_{G}&=& \\prod_{\\gamma^a_{ G} \\in \\Im \\gamma^a} \\, \\gamma^a_{ G}\\, K\\,,\\quad\n{\\cal T}_{G} = \\gamma^0_{G} \\prod_{\\gamma^a_{G} \\in \\Re\n \\gamma^a} \\,\\gamma^a_{ G}\\, K \\, I_{x^0}\\,,\\quad\n{\\cal P}^{(d-1)}_{G} = \\gamma^0_{G} \\,I_{\\vec{x}}\\,\n\\end{eqnarray}\nwith $\\gamma^a_{G}$ defined as \n\\begin{eqnarray}\n\\label{gammaG}\n\\gamma^{a}_{G} &=& \n(1- 2 \\theta^a \\eta^{aa} \\frac{\\partial}{\\partial \\theta_{a}})\\,, \n\\end{eqnarray}\nwhile $I_{x}$,\n$I_{\\vec{x}_{3}}$ \nis defined in Eq.~(\\ref{calCPTH}).\nLet be noticed, that since $\\gamma^a_{G}$ ($= - i \\eta^{aa}\\, \\gamma^a \\tilde{\\gamma}^a$) is \nalways real as there is $ \\gamma^a i \\tilde{\\gamma}^a$, while $ \\gamma^a$ is either real or \nimaginary,\nwe use in Eq.~(\\ref{calCPTG}) $\\gamma^a$ to make a choice of appropriate $\\gamma^a_{G}$. 
\nIn what follows we shall use the notation as in Eq.~(\\ref{calCPTG}).\n\n\nWe define, according to Ref.~\\cite{nh2018} (and the references therein) in both cases --- Clifford \nGrassmann case --- the operator\n \"emptying\"~\\cite{JMP2013,normaJMP2015} (arxiv:1312.1541) the Dirac sea, so that operation of \n\"emptying$_{N}$\" after the charge conjugation ${\\cal C }_{{\\cal H}}$ in the Clifford case and \n\"emptying$_{G}$\" after the charge conjugation ${\\cal C }_{G}$\nin the Grassmann case (both transform the state put on the top of either the Clifford or the Grassmann\nDirac sea into the corresponding negative energy state) creates the anti-particle state to the starting \nparticle state, both put on the top of the Dirac sea and both solving the Weyl equation, either in the\nClifford case, Eq.~(\\ref{Weyl}), or in the Grassmann case, Eq.~(\\ref{Weylgrass}), for free massless \nfermions\n\\begin{eqnarray}\n\\label{empt}\n\"{\\rm emptying}_{N}\"&=& \\prod_{\\Re \\gamma^a}\\, \\gamma^a \\,K\\, \\quad {\\rm in} \\, \\;\n{\\rm Clifford}\\, {\\rm space}\\,, \n\\nonumber\\\\\n\"{\\rm emptying}_{G}\"&=& \\prod_{\\Re \\gamma^a}\\, \\gamma^a_{G} \\,K\\, \\quad {\\rm in}\\;\\, \n{\\rm Grassmann} \\, {\\rm space}\\,, \n\\end{eqnarray}\nalthough we must keep in mind that indeed the anti-particle state is a hole in the Dirac sea from the \nFock space point of view. The operator \"emptying\" is bringing the single particle operator \n${\\cal C }_{{\\cal H}}$ in the Clifford case and ${\\cal C }_{G}$ in the Grassmann case into the operator \non the Fock space in each of the two cases.\nThen the anti-particle state creation operator --- \n${\\underline {\\bf {\\Huge \\Psi}}}^{\\dagger}_{a}[\\Psi_{p}]$ --- to the corresponding particle state \ncreation operator --- can be obtained also as follows\n\\begin{eqnarray}\n\\label{makingantip}\n{\\underline {\\bf {\\Huge \\Psi}}}^{\\dagger}_{a}[\\Psi_{p}]\\, |vac> &=& \n{\\underline {\\bf \\mathbb{C}}}_{{{\\bf \\cal H}}}\\, \n{\\underline {\\bf {\\Huge \\Psi}}}^{\\dagger}_{p}[\\Psi_{p}]\\, |vac> = \n\\int \\, {\\mathbf{\\Psi}}^{\\dagger}_{a}(\\vec{x})\\, \n({\\bf \\mathbb{C}}_{\\cal H}\\,\\Psi_{p} (\\vec{x})) \\,d^{(d-1)} x \\, \\,|vac> \\,,\\nonumber\\\\\n{\\bf \\mathbb{C}}_{\\cal H} &=& \"{\\rm emptying}_{N}\"\\,\\cdot\\, {\\cal C}_{{\\cal H}} \\,\n\\end{eqnarray}\nin both cases.\n\nThe operators ${\\bf \\mathbb{C}}_{\\cal H}$ and ${\\bf \\mathbb{C}}_{G}$ \n\\begin{eqnarray}\n\\label{emptCHG}\n{\\bf \\mathbb{C}}_{\\cal H} &=& \"{\\rm emptying}_{N}\" \\,\\cdot\\, {\\cal C}_{{\\cal H}} \\,,\\nonumber\\\\\n{\\bf \\mathbb{C}}_{G} &=& \"{\\rm emptying}_{NG}\" \\,\\cdot\\, {\\cal C}_{G}\\,,\n\\end{eqnarray}\n operating on \n$\\Psi_{p} (\\vec{x})$ transforms the positive energy spinor state (which solves the corresponding \nWeyl equation for a massless free fermion) put on the top of the Dirac sea into the positive energy \nanti-fermion state, which again solves the corresponding Weyl equation for a massless free \nanti-fermion put on the top of the Dirac sea. Let us point out that either the operator \n$\"{\\rm emptying}_{N}\" $ \nor the operator $\"{\\rm emptying}_{NG}\"$ transforms the single particle operator either\n$ {\\cal C}_{\\cal H}$ or ${\\cal C}_{G}$ into the operator operating in the Fock space. 
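\nAs a small numerical cross-check of the properties used here, the sketch below builds the $d=(1+1)$ matrix representation of $\theta^a$ and $\frac{\partial}{\partial \theta^a}$ (the same construction as in the earlier sketch), forms $\gamma^a$ and $\tilde{\gamma}^a$ of Eq.~(\ref{cliffthetarel}) and $\gamma^a_{G}$ of Eq.~(\ref{gammaG}), and verifies that $\gamma^a_{G}$ is real, Hermitian, squares to one and equals $-i\eta^{aa}\gamma^a\tilde{\gamma}^a$, as stated above.\n\begin{verbatim}
import numpy as np

# d=(1+1): matrices of theta^a (T) and d/dtheta^a (D) on the basis {1, th0, th1, th0*th1}
T0 = np.zeros((4, 4)); T0[1, 0] = 1; T0[3, 2] = 1
T1 = np.zeros((4, 4)); T1[2, 0] = 1; T1[3, 1] = -1
D0 = np.zeros((4, 4)); D0[0, 1] = 1; D0[2, 3] = 1
D1 = np.zeros((4, 4)); D1[0, 2] = 1; D1[1, 3] = -1
I = np.eye(4)

# gamma^a = theta^a + d/dtheta_a, tilde-gamma^a = i(theta^a - d/dtheta_a);
# lowering the index with eta = diag(1,-1) gives d/dtheta_0 = +D0, d/dtheta_1 = -D1.
g   = [T0 + D0, T1 - D1]
tg  = [1j*(T0 - D0), 1j*(T1 + D1)]
eta = [1, -1]

for a in range(2):
    gG = I - 2*[T0, T1][a] @ [D0, D1][a]  # gamma^a_G = 1 - 2 theta^a eta^{aa} d/dtheta_a
    M  = -1j*eta[a]*g[a] @ tg[a]          # -i eta^{aa} gamma^a tilde-gamma^a
    assert np.allclose(M.imag, 0)         # gamma^a_G is real
    assert np.allclose(gG, gG.T)          # and Hermitian (real symmetric here)
    assert np.allclose(gG @ gG, I)        # and squares to one
    assert np.allclose(gG, M)             # and equals -i eta^{aa} gamma^a tilde-gamma^a
print('gamma^a_G checks passed')
\end{verbatim}\n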
\n\n\nWe use the Grassmann even, Hermitian and real operators $\\gamma^{a}_{G}$, \nEq.~(\\ref{gammaG}), to define discrete symmetry in Grassmann space, first we did in \n$((d+1)-1)$ space, Eq.~(\\ref{calCPTG}), now we do in $(3+1)$ space, Eq.~(\\ref{calCPTNG}),\n as it is done \nin~\\cite{nhds} in the Clifford case. \nIn the Grassmann case we do this in analogy with the operators in the Clifford case~\\cite{nhds}\n\\begin{eqnarray}\n\\label{calCPTNG}\n{\\cal C}_{NG}&=& \\prod_{\\gamma^m_{ G} \\in \\Re \\gamma^m} \\, \\gamma^m_{ G}\\, K \\,\n I_{x^6 x^8...x^d}\\,,\\nonumber\\\\\n{\\cal T}_{NG}&=& \\gamma^0_{G} \\prod_{\\gamma^m_{G} \\in \\Im\n \\gamma^m} \\, K \\, I_{x^0} I_{x^5 x^7...x^{d-1}}\\,,\\nonumber\\\\\n{\\cal P}^{(d-1)}_{NG} &=& \\gamma^0_{G} \\, \\prod_{s=5}^{d}\\, \\gamma^s_{ G} I_{\\vec{x}}\\,,\n\\nonumber\\\\\n{\\bf \\mathbb{C}}_{NG} &=&\\prod_{\\gamma^s_{ G} \\in \\Re \\gamma^s} \\,\\gamma^s_{ G}\\,, \n I_{x^6 x^8...x^d}\\,,\\quad\n{\\bf \\mathbb{C}}_{NG} {\\cal P}^{(d-1)}_{NG} = \\gamma^0_{G} \\, \\prod_{\n\\gamma^s_{ G} \\in \\Im \\gamma^s, s=5}^{d}\\, \\gamma^s_{ G}\\, I_{\\vec{x}_{3}}\\,\n I_{x^6 x^8...x^d}\\,,\n\\nonumber\\\\\n{\\bf \\mathbb{C}}_{NG} {\\cal T}_{NG} {\\cal P}^{(d-1)}_{NG} &=& \\prod_{\\gamma^s_{ G} \\in \n\\Im \\gamma^a} \\,\\gamma^a_{ G}\\,I_x K\\,.\n\\end{eqnarray}\n\n\n\\subsection{Representations in Grassmann and in Clifford space in $d=(13+1)$}\n\\label{so13+1}\nIn the {\\it spin-charge-family} theory the starting dimension of space must be $\\ge(13+1)$, in \norder that the theory manifests in $d=(3+1)$ all the observed properties of quarks and leptons,\n gauge and scalar fields (explaining the appearance of higgs and \nthe Yukawa couplings), offering as well the explanations for the observations in \ncosmology.\n\nLet us therefore comment properties of representations in both spaces when $d=(13+1)$,\n if we analyze one group of \"family members\" of one of families in Clifford space, and \none of the two representations of $\\frac{1}{2}\\, \\frac{d!}{\\frac{d}{2}! \\frac{d}{2}!}$.\n\n\n{\\bf a.} $\\;\\;$ Let us start \nwith Clifford space~\\cite{IARD2016,normaJMP2015,n2014matterantimatter,JMP2013,pikanorma,%\nportoroz03,norma93}. Each \"family\" representation has $2^{\\frac{d}{2} - 1}= 64$\n \"family members\". If we analyze this representation with respect to the subgroups $SO(3,1)$, \n$(SU(2)\\times SU(2))$ of $SO(4)$ and ($SU(3)\\times$ $U(1)$) of $SO(6)$ of the \nLorentz group $SO(13,1)$, we find that the representations have quantum numbers of all the so \nfar observed quarks and leptons and antiquarks and antileptons, all with spin up and spin down, \nas well as of the left and right handedness, with the right handed neutrino included as the\nmember of this representation.\n\nLet us make a choice of the \"family\", which follows by the application of $\\tilde{S}^{15}$ on the \n\"family\", for which the creation operator of the right-handed neutrino with spin $\\frac{1}{2}$ \nwould be $ \\stackrel{03}{(+i)}\\,\\stackrel{12}{(+)}|\\stackrel{56}{(+)}\\,\n\\stackrel{78}{(+)}||\\stackrel{9 \\;10}{(+)}\\;\\;\\stackrel{11\\;12}{(+)}\\;\\;\\stackrel{13\\;14}{(+)}$.\n(The corresponding annihilation operator of this creation operator is $\\stackrel{13 \\;14}{(-)}\\;\\;\n\\stackrel{11\\;12}{(-)}\\;\\;\\stackrel{9\\;10}{(-)}||\\stackrel{78}{(-)}\\,\\stackrel{56}{(-)}| \n\\stackrel{12}{(-)}\\,\\stackrel{03}{(-i)}$). 
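\nAs a consistency check of the quantum numbers listed in Table~\ref{Table so13+1.} below, they can be recomputed from the eigenvalues $\pm\frac{1}{2}$ of the Cartan operators $S^{56}$, $S^{78}$, $S^{9\,10}$, $S^{11\,12}$, $S^{13\,14}$, read off from the nilpotents and projectors according to Eq.~(\ref{signature}). The generator combinations used in the sketch below correspond to Eqs.~(\ref{so42}) and (\ref{so64}) of the full text, which are not reproduced in this excerpt, and should therefore be read as assumptions of this illustration.\n\begin{verbatim}
import numpy as np

# Charges in terms of the Cartan eigenvalues; the combinations are taken from
# Eqs. (so42) and (so64) of the full text (assumption of this sketch).
def charges(S56, S78, S910, S1112, S1314):
    tau13 = 0.5*(S56 - S78)
    tau23 = 0.5*(S56 + S78)
    tau33 = 0.5*(S910 - S1112)
    tau38 = (S910 + S1112 - 2*S1314)/(2*np.sqrt(3))
    tau4  = -(S910 + S1112 + S1314)/3
    Y = tau23 + tau4
    Q = tau13 + Y
    return tau13, tau23, tau33, tau38, tau4, Y, Q

# First row of Table so13+1 (u_R^{c1} with spin 1/2): the factors
# (+i)[+] | [+](+) || (+)[-][-] carry S^{56}, S^{78}, S^{9 10}, S^{11 12}, S^{13 14}
# eigenvalues +1/2, +1/2, +1/2, -1/2, -1/2, respectively.
print(charges(0.5, 0.5, 0.5, -0.5, -0.5))
# expected: (0, 1/2, 1/2, 1/(2*sqrt(3)), 1/6, 2/3, 2/3), matching the table entries
\end{verbatim}\n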
The creation operators for all the \"family members\" of this family, presented in Table~\ref{Table so13+1.}, follow \nby the application of $S^{ab}$ on \n$\tilde{S}^{15}$ $ \stackrel{03}{(+i)}\,\stackrel{12}{(+)}|\stackrel{56}{(+)}\,\n\stackrel{78}{(+)}||\stackrel{9 \;10}{(+)}\;\;\stackrel{11\;12}{(+)}\;\;\stackrel{13\;14}{(+)}$.\n(The annihilation operator of $\tilde{S}^{15}$ $ \stackrel{03}{(+i)}\,\n\stackrel{12}{(+)}|\stackrel{56}{(+)}\,\stackrel{78}{(+)}||\stackrel{9\;10}{(+)}\;\;\n\stackrel{11\;12}{(+)}\;\;\stackrel{13\;14}{(+)} $ is $\stackrel{13 \;14}{[-]}\;\;\n\stackrel{11\;12}{[-]}\;\;\stackrel{9\;10}{(-)}||\stackrel{78}{(-)}\,\stackrel{56}{[+]}| \n\stackrel{12}{[+]}\,\stackrel{03}{(-i)}$.) \n\nThis is the representation of \nTable~\ref{Table so13+1.}, in which all the \"family members\" of one \"family\" are classified with \nrespect to the subgroups $SO(3,1)\times SU(2) \times SU(2)\times SU(3) \times U(1)$.\nThe vacuum state on which the creation operators, represented in the third column, apply \nis defined in Eq.~(\ref{vac1}). All the creation operators of all the states are of an odd Clifford \ncharacter, fulfilling together with the annihilation operators (which have likewise an\nodd Clifford character, since the Hermitian conjugation does not change the \nClifford character) the requirements of Eq.~(\ref{ijthetaprod}). Since the Clifford even operators \n$S^{ab}$ and $\tilde{S}^{ab}$ do not change the Clifford character, all the creation and \nannihilation operators, obtained by products of $S^{ab}$ or $\tilde{S}^{ab}$ or both,\nfulfill the requirements of Eq.~(\ref{ijthetaprod}).\n\nWe recognize in Table~\ref{Table so13+1.} that quarks differ from leptons only in the \n$SO(6)$ part of the creation operators. Quarks belong to the colour ($SU(3)$) triplet carrying\nthe \"fermion\" $(U(1))$ quantum number $\tau^{4} =\frac{1}{6}$, antiquarks belong to the colour \nantitriplet, carrying the \"fermion\" quantum number $\tau^{4} = -\frac{1}{6}$. Leptons belong \nto the colour ($SU(3)$) singlet, carrying the \"fermion\" $(U(1))$ quantum number $\tau^{4} =\n -\frac{1}{2}$, while antileptons belong to the colour antisinglet, carrying the \"fermion\" quantum \nnumber $\tau^{4} = \frac{1}{2}$. \n \nLet us also comment that the oddness or evenness of parts of the states changes within the subgroups of the \n$SO(13,1)$ group: while quarks and leptons have an odd Clifford character in the $SO(6)$ part, \nantiquarks and antileptons have in this part an even Clifford character. 
\nCorrespondingly the Clifford character changes in the rest of subgroups \n\n\\bottomcaption{\\label{Table so13+1.}%\n\\tiny{\nThe left handed ($\\Gamma^{(13,1)} = -1$~\\cite{IARD2016})\nmultiplet of spinors --- the members of the fundamental representation of the $SO(13,1)$ group,\nmanifesting the subgroup $SO(7,1)$\n of the colour charged quarks and antiquarks and the colourless\nleptons and antileptons --- is presented in the massless basis using the technique presented in\nRefs.~\\cite{nh02,nh03,IARD2016,normaJMP2015}.\nIt contains the left handed ($\\Gamma^{(3,1)}=-1$)\n weak ($SU(2)_{I}$) charged ($\\tau^{13}=\\pm \\frac{1}{2}$, Eq.~(\\ref{so42})),\nand $SU(2)_{II}$ chargeless ($\\tau^{23}=0$, Eq.~(\\ref{so42}))\nquarks and leptons and the right handed ($\\Gamma^{(3,1)}=1$)\n weak ($SU(2)_{I}$) chargeless and $SU(2)_{II}$\ncharged ($\\tau^{23}=\\pm \\frac{1}{2}$) quarks and leptons, both with the spin $ S^{12}$ up and\ndown ($\\pm \\frac{1}{2}$, respectively). \nQuarks distinguish from leptons only in the $SU(3) \\times U(1)$ part: Quarks are triplets\nof three colours ($c^i$ $= (\\tau^{33}, \\tau^{38})$ $ = [(\\frac{1}{2},\\frac{1}{2\\sqrt{3}}),\n(-\\frac{1}{2},\\frac{1}{2\\sqrt{3}}), (0,-\\frac{1}{\\sqrt{3}}) $], Eq.~(\\ref{so64}))\ncarrying the \"fermion charge\" ($\\tau^{4}=\\frac{1}{6}$, Eq.~(\\ref{so64})).\nThe colourless leptons carry the \"fermion charge\" ($\\tau^{4}=-\\frac{1}{2}$).\nThe same multiplet contains also the left handed weak ($SU(2)_{I}$) chargeless and $SU(2)_{II}$\ncharged antiquarks and antileptons and the right handed weak ($SU(2)_{I}$) charged and\n$SU(2)_{II}$ chargeless antiquarks and antileptons.\nAntiquarks distinguish from antileptons again only in the $SU(3) \\times U(1)$ part: Antiquarks are\nantitriplets,\n carrying the \"fermion charge\" ($\\tau^{4}=-\\frac{1}{6}$).\nThe anticolourless antileptons carry the \"fermion charge\" ($\\tau^{4}=\\frac{1}{2}$).\n $Y=(\\tau^{23} + \\tau^{4})$ is the hyper charge, the electromagnetic charge\nis $Q=(\\tau^{13} + Y$).\nThe vacuum state,\non which the nilpotents and projectors operate, is presented in Eq.~(\\ref{vac1}).\nThe reader can find this Weyl representation also in\nRefs.~\\cite{n2014matterantimatter,pikanorma,portoroz03,normaJMP2015} and the references\ntherein. 
}\n}\n\\tablehead{\\hline\ni&$$&$|^a\\psi_i>$&$\\Gamma^{(3,1)}$&$ S^{12}$&\n$\\tau^{13}$&$\\tau^{23}$&$\\tau^{33}$&$\\tau^{38}$&$\\tau^{4}$&$Y$&$Q$\\\\\n\\hline\n&& ${\\rm (Anti)octet},\\,\\Gamma^{(7,1)} = (-1)\\,1\\,, \\,\\Gamma^{(6)} = (1)\\,-1$&&&&&&&&& \\\\\n&& ${\\rm of \\;(anti) quarks \\;and \\;(anti)leptons}$&&&&&&&&&\\\\\n\\hline\\hline}\n\\tabletail{\\hline \\multicolumn{12}{r}{\\emph{Continued on next page}}\\\\}\n\\tablelasttail{\\hline}\n\\begin{center}\n\\tiny{\n\\begin{supertabular}{|r|c||c||c|c||c|c||c|c|c||r|r|}\n1&$ u_{R}^{c1}$&$ \\stackrel{03}{(+i)}\\,\\stackrel{12}{[+]}|\n\\stackrel{56}{[+]}\\,\\stackrel{78}{(+)}\n||\\stackrel{9 \\;10}{(+)}\\;\\;\\stackrel{11\\;12}{[-]}\\;\\;\\stackrel{13\\;14}{[-]} $ &1&$\\frac{1}{2}$&0&\n$\\frac{1}{2}$&$\\frac{1}{2}$&$\\frac{1}{2\\,\\sqrt{3}}$&$\\frac{1}{6}$&$\\frac{2}{3}$&$\\frac{2}{3}$\\\\\n\\hline\n2&$u_{R}^{c1}$&$\\stackrel{03}{[-i]}\\,\\stackrel{12}{(-)}|\\stackrel{56}{[+]}\\,\\stackrel{78}{(+)}\n||\\stackrel{9 \\;10}{(+)}\\;\\;\\stackrel{11\\;12}{[-]}\\;\\;\\stackrel{13\\;14}{[-]}$&1&$-\\frac{1}{2}$&0&\n$\\frac{1}{2}$&$\\frac{1}{2}$&$\\frac{1}{2\\,\\sqrt{3}}$&$\\frac{1}{6}$&$\\frac{2}{3}$&$\\frac{2}{3}$\\\\\n\\hline\n3&$d_{R}^{c1}$&$\\stackrel{03}{(+i)}\\,\\stackrel{12}{[+]}|\\stackrel{56}{(-)}\\,\\stackrel{78}{[-]}\n||\\stackrel{9 \\;10}{(+)}\\;\\;\\stackrel{11\\;12}{[-]}\\;\\;\\stackrel{13\\;14}{[-]}$&1&$\\frac{1}{2}$&0&\n$-\\frac{1}{2}$&$\\frac{1}{2}$&$\\frac{1}{2\\,\\sqrt{3}}$&$\\frac{1}{6}$&$-\\frac{1}{3}$&$-\\frac{1}{3}$\\\\\n\\hline\n4&$ d_{R}^{c1} $&$\\stackrel{03}{[-i]}\\,\\stackrel{12}{(-)}|\n\\stackrel{56}{(-)}\\,\\stackrel{78}{[-]}\n||\\stackrel{9 \\;10}{(+)}\\;\\;\\stackrel{11\\;12}{[-]}\\;\\;\\stackrel{13\\;14}{[-]} $&1&$-\\frac{1}{2}$&0&\n$-\\frac{1}{2}$&$\\frac{1}{2}$&$\\frac{1}{2\\,\\sqrt{3}}$&$\\frac{1}{6}$&$-\\frac{1}{3}$&$-\\frac{1}{3}$\\\\\n\\hline\n5&$d_{L}^{c1}$&$\\stackrel{03}{[-i]}\\,\\stackrel{12}{[+]}|\\stackrel{56}{(-)}\\,\\stackrel{78}{(+)}\n||\\stackrel{9 \\;10}{(+)}\\;\\;\\stackrel{11\\;12}{[-]}\\;\\;\\stackrel{13\\;14}{[-]}$&-1&$\\frac{1}{2}$&\n$-\\frac{1}{2}$&0&$\\frac{1}{2}$&$\\frac{1}{2\\,\\sqrt{3}}$&$\\frac{1}{6}$&$\\frac{1}{6}$&$-\\frac{1}{3}$\\\\\n\\hline\n6&$d_{L}^{c1} $&$ - \\stackrel{03}{(+i)}\\,\\stackrel{12}{(-)}|\\stackrel{56}{(-)}\\,\\stackrel{78}{(+)}\n||\\stackrel{9 \\;10}{(+)}\\;\\;\\stackrel{11\\;12}{[-]}\\;\\;\\stackrel{13\\;14}{[-]} $&-1&$-\\frac{1}{2}$&\n$-\\frac{1}{2}$&0&$\\frac{1}{2}$&$\\frac{1}{2\\,\\sqrt{3}}$&$\\frac{1}{6}$&$\\frac{1}{6}$&$-\\frac{1}{3}$\\\\\n\\hline\n7&$ u_{L}^{c1}$&$ - \\stackrel{03}{[-i]}\\,\\stackrel{12}{[+]}|\\stackrel{56}{[+]}\\,\\stackrel{78}{[-]}\n||\\stackrel{9 \\;10}{(+)}\\;\\;\\stackrel{11\\;12}{[-]}\\;\\;\\stackrel{13\\;14}{[-]}$ &-1&$\\frac{1}{2}$&\n$\\frac{1}{2}$&0 &$\\frac{1}{2}$&$\\frac{1}{2\\,\\sqrt{3}}$&$\\frac{1}{6}$&$\\frac{1}{6}$&$\\frac{2}{3}$\\\\\n\\hline\n8&$u_{L}^{c1}$&$\\stackrel{03}{(+i)}\\,\\stackrel{12}{(-)}|\\stackrel{56}{[+]}\\,\\stackrel{78}{[-]}\n||\\stackrel{9 \\;10}{(+)}\\;\\;\\stackrel{11\\;12}{[-]}\\;\\;\\stackrel{13\\;14}{[-]}$&-1&$-\\frac{1}{2}$&\n$\\frac{1}{2}$&0&$\\frac{1}{2}$&$\\frac{1}{2\\,\\sqrt{3}}$&$\\frac{1}{6}$&$\\frac{1}{6}$&$\\frac{2}{3}$\\\\\n\\hline\\hline\n\\shrinkheight{0.3\\textheight}\n9&$ u_{R}^{c2}$&$ \\stackrel{03}{(+i)}\\,\\stackrel{12}{[+]}|\n\\stackrel{56}{[+]}\\,\\stackrel{78}{(+)}\n||\\stackrel{9 \\;10}{[-]}\\;\\;\\stackrel{11\\;12}{(+)}\\;\\;\\stackrel{13\\;14}{[-]} $ 
&1&$\\frac{1}{2}$&0&\n$\\frac{1}{2}$&$-\\frac{1}{2}$&$\\frac{1}{2\\,\\sqrt{3}}$&$\\frac{1}{6}$&$\\frac{2}{3}$&$\\frac{2}{3}$\\\\\n\\hline\n10&$u_{R}^{c2}$&$\\stackrel{03}{[-i]}\\,\\stackrel{12}{(-)}|\\stackrel{56}{[+]}\\,\\stackrel{78}{(+)}\n||\\stackrel{9 \\;10}{[-]}\\;\\;\\stackrel{11\\;12}{(+)}\\;\\;\\stackrel{13\\;14}{[-]}$&1&$-\\frac{1}{2}$&0&\n$\\frac{1}{2}$&$-\\frac{1}{2}$&$\\frac{1}{2\\,\\sqrt{3}}$&$\\frac{1}{6}$&$\\frac{2}{3}$&$\\frac{2}{3}$\\\\\n\\hline\n11&$d_{R}^{c2}$&$\\stackrel{03}{(+i)}\\,\\stackrel{12}{[+]}|\\stackrel{56}{(-)}\\,\\stackrel{78}{[-]}\n||\\stackrel{9 \\;10}{[-]}\\;\\;\\stackrel{11\\;12}{(+)}\\;\\;\\stackrel{13\\;14}{[-]}$\n&1&$\\frac{1}{2}$&0&\n$-\\frac{1}{2}$&$ - \\frac{1}{2}$&$\\frac{1}{2\\,\\sqrt{3}}$&$\\frac{1}{6}$&$-\\frac{1}{3}$&$-\\frac{1}{3}$\\\\\n\\hline\n12&$ d_{R}^{c2} $&$\\stackrel{03}{[-i]}\\,\\stackrel{12}{(-)}|\n\\stackrel{56}{(-)}\\,\\stackrel{78}{[-]}\n||\\stackrel{9 \\;10}{[-]}\\;\\;\\stackrel{11\\;12}{(+)}\\;\\;\\stackrel{13\\;14}{[-]} $\n&1&$-\\frac{1}{2}$&0&\n$-\\frac{1}{2}$&$-\\frac{1}{2}$&$\\frac{1}{2\\,\\sqrt{3}}$&$\\frac{1}{6}$&$-\\frac{1}{3}$&$-\\frac{1}{3}$\\\\\n\\hline\n13&$d_{L}^{c2}$&$\\stackrel{03}{[-i]}\\,\\stackrel{12}{[+]}|\\stackrel{56}{(-)}\\,\\stackrel{78}{(+)}\n||\\stackrel{9 \\;10}{[-]}\\;\\;\\stackrel{11\\;12}{(+)}\\;\\;\\stackrel{13\\;14}{[-]}$\n&-1&$\\frac{1}{2}$&\n$-\\frac{1}{2}$&0&$-\\frac{1}{2}$&$\\frac{1}{2\\,\\sqrt{3}}$&$\\frac{1}{6}$&$\\frac{1}{6}$&$-\\frac{1}{3}$\\\\\n\\hline\n14&$d_{L}^{c2} $&$ - \\stackrel{03}{(+i)}\\,\\stackrel{12}{(-)}|\\stackrel{56}{(-)}\\,\\stackrel{78}{(+)}\n||\\stackrel{9 \\;10}{[-]}\\;\\;\\stackrel{11\\;12}{(+)}\\;\\;\\stackrel{13\\;14}{[-]} $&-1&$-\\frac{1}{2}$&\n$-\\frac{1}{2}$&0&$-\\frac{1}{2}$&$\\frac{1}{2\\,\\sqrt{3}}$&$\\frac{1}{6}$&$\\frac{1}{6}$&$-\\frac{1}{3}$\\\\\n\\hline\n15&$ u_{L}^{c2}$&$ - \\stackrel{03}{[-i]}\\,\\stackrel{12}{[+]}|\\stackrel{56}{[+]}\\,\\stackrel{78}{[-]}\n||\\stackrel{9 \\;10}{[-]}\\;\\;\\stackrel{11\\;12}{(+)}\\;\\;\\stackrel{13\\;14}{[-]}$ &-1&$\\frac{1}{2}$&\n$\\frac{1}{2}$&0 &$-\\frac{1}{2}$&$\\frac{1}{2\\,\\sqrt{3}}$&$\\frac{1}{6}$&$\\frac{1}{6}$&$\\frac{2}{3}$\\\\\n\\hline\n16&$u_{L}^{c2}$&$\\stackrel{03}{(+i)}\\,\\stackrel{12}{(-)}|\\stackrel{56}{[+]}\\,\\stackrel{78}{[-]}\n||\\stackrel{9 \\;10}{[-]}\\;\\;\\stackrel{11\\;12}{(+)}\\;\\;\\stackrel{13\\;14}{[-]}$&-1&$-\\frac{1}{2}$&\n$\\frac{1}{2}$&0&$-\\frac{1}{2}$&$\\frac{1}{2\\,\\sqrt{3}}$&$\\frac{1}{6}$&$\\frac{1}{6}$&$\\frac{2}{3}$\\\\\n\\hline\\hline\n17&$ u_{R}^{c3}$&$ \\stackrel{03}{(+i)}\\,\\stackrel{12}{[+]}|\n\\stackrel{56}{[+]}\\,\\stackrel{78}{(+)}\n||\\stackrel{9 \\;10}{[-]}\\;\\;\\stackrel{11\\;12}{[-]}\\;\\;\\stackrel{13\\;14}{(+)} $ &1&$\\frac{1}{2}$&0&\n$\\frac{1}{2}$&$0$&$-\\frac{1}{\\sqrt{3}}$&$\\frac{1}{6}$&$\\frac{2}{3}$&$\\frac{2}{3}$\\\\\n\\hline\n18&$u_{R}^{c3}$&$\\stackrel{03}{[-i]}\\,\\stackrel{12}{(-)}|\\stackrel{56}{[+]}\\,\\stackrel{78}{(+)}\n||\\stackrel{9 \\;10}{[-]}\\;\\;\\stackrel{11\\;12}{[-]}\\;\\;\\stackrel{13\\;14}{(+)}$&1&$-\\frac{1}{2}$&0&\n$\\frac{1}{2}$&$0$&$-\\frac{1}{\\sqrt{3}}$&$\\frac{1}{6}$&$\\frac{2}{3}$&$\\frac{2}{3}$\\\\\n\\hline\n19&$d_{R}^{c3}$&$\\stackrel{03}{(+i)}\\,\\stackrel{12}{[+]}|\\stackrel{56}{(-)}\\,\\stackrel{78}{[-]}\n||\\stackrel{9 \\;10}{[-]}\\;\\;\\stackrel{11\\;12}{[-]}\\;\\;\\stackrel{13\\;14}{(+)}$&1&$\\frac{1}{2}$&0&\n$-\\frac{1}{2}$&$0$&$-\\frac{1}{\\sqrt{3}}$&$\\frac{1}{6}$&$-\\frac{1}{3}$&$-\\frac{1}{3}$\\\\\n\\hline\n20&$ d_{R}^{c3} 
$&$\\stackrel{03}{[-i]}\\,\\stackrel{12}{(-)}|\n\\stackrel{56}{(-)}\\,\\stackrel{78}{[-]}\n||\\stackrel{9 \\;10}{[-]}\\;\\;\\stackrel{11\\;12}{[-]}\\;\\;\\stackrel{13\\;14}{(+)} $&1&$-\\frac{1}{2}$&0&\n$-\\frac{1}{2}$&$0$&$-\\frac{1}{\\sqrt{3}}$&$\\frac{1}{6}$&$-\\frac{1}{3}$&$-\\frac{1}{3}$\\\\\n\\hline\n21&$d_{L}^{c3}$&$\\stackrel{03}{[-i]}\\,\\stackrel{12}{[+]}|\\stackrel{56}{(-)}\\,\\stackrel{78}{(+)}\n||\\stackrel{9 \\;10}{[-]}\\;\\;\\stackrel{11\\;12}{[-]}\\;\\;\\stackrel{13\\;14}{(+)}$&-1&$\\frac{1}{2}$&\n$-\\frac{1}{2}$&0&$0$&$-\\frac{1}{\\sqrt{3}}$&$\\frac{1}{6}$&$\\frac{1}{6}$&$-\\frac{1}{3}$\\\\\n\\hline\n22&$d_{L}^{c3} $&$ - \\stackrel{03}{(+i)}\\,\\stackrel{12}{(-)}|\\stackrel{56}{(-)}\\,\\stackrel{78}{(+)}\n||\\stackrel{9 \\;10}{[-]}\\;\\;\\stackrel{11\\;12}{[-]}\\;\\;\\stackrel{13\\;14}{(+)} $&-1&$-\\frac{1}{2}$&\n$-\\frac{1}{2}$&0&$0$&$-\\frac{1}{\\sqrt{3}}$&$\\frac{1}{6}$&$\\frac{1}{6}$&$-\\frac{1}{3}$\\\\\n\\hline\n23&$ u_{L}^{c3}$&$ - \\stackrel{03}{[-i]}\\,\\stackrel{12}{[+]}|\\stackrel{56}{[+]}\\,\\stackrel{78}{[-]}\n||\\stackrel{9 \\;10}{[-]}\\;\\;\\stackrel{11\\;12}{[-]}\\;\\;\\stackrel{13\\;14}{(+)}$ &-1&$\\frac{1}{2}$&\n$\\frac{1}{2}$&0 &$0$&$-\\frac{1}{\\sqrt{3}}$&$\\frac{1}{6}$&$\\frac{1}{6}$&$\\frac{2}{3}$\\\\\n\\hline\n24&$u_{L}^{c3}$&$\\stackrel{03}{(+i)}\\,\\stackrel{12}{(-)}|\\stackrel{56}{[+]}\\,\\stackrel{78}{[-]}\n||\\stackrel{9 \\;10}{[-]}\\;\\;\\stackrel{11\\;12}{[-]}\\;\\;\\stackrel{13\\;14}{(+)}$&-1&$-\\frac{1}{2}$&\n$\\frac{1}{2}$&0&$0$&$-\\frac{1}{\\sqrt{3}}$&$\\frac{1}{6}$&$\\frac{1}{6}$&$\\frac{2}{3}$\\\\\n\\hline\\hline\n25&$ \\nu_{R}$&$ \\stackrel{03}{(+i)}\\,\\stackrel{12}{[+]}|\n\\stackrel{56}{[+]}\\,\\stackrel{78}{(+)}\n||\\stackrel{9 \\;10}{(+)}\\;\\;\\stackrel{11\\;12}{(+)}\\;\\;\\stackrel{13\\;14}{(+)} $ &1&$\\frac{1}{2}$&0&\n$\\frac{1}{2}$&$0$&$0$&$-\\frac{1}{2}$&$0$&$0$\\\\\n\\hline\n26&$\\nu_{R}$&$\\stackrel{03}{[-i]}\\,\\stackrel{12}{(-)}|\\stackrel{56}{[+]}\\,\\stackrel{78}{(+)}\n||\\stackrel{9 \\;10}{(+)}\\;\\;\\stackrel{11\\;12}{(+)}\\;\\;\\stackrel{13\\;14}{(+)}$&1&$-\\frac{1}{2}$&0&\n$\\frac{1}{2}$ &$0$&$0$&$-\\frac{1}{2}$&$0$&$0$\\\\\n\\hline\n27&$e_{R}$&$\\stackrel{03}{(+i)}\\,\\stackrel{12}{[+]}|\\stackrel{56}{(-)}\\,\\stackrel{78}{[-]}\n||\\stackrel{9 \\;10}{(+)}\\;\\;\\stackrel{11\\;12}{(+)}\\;\\;\\stackrel{13\\;14}{(+)}$&1&$\\frac{1}{2}$&0&\n$-\\frac{1}{2}$&$0$&$0$&$-\\frac{1}{2}$&$-1$&$-1$\\\\\n\\hline\n28&$ e_{R} $&$\\stackrel{03}{[-i]}\\,\\stackrel{12}{(-)}|\n\\stackrel{56}{(-)}\\,\\stackrel{78}{[-]}\n||\\stackrel{9 \\;10}{(+)}\\;\\;\\stackrel{11\\;12}{(+)}\\;\\;\\stackrel{13\\;14}{(+)} $&1&$-\\frac{1}{2}$&0&\n$-\\frac{1}{2}$&$0$&$0$&$-\\frac{1}{2}$&$-1$&$-1$\\\\\n\\hline\n29&$e_{L}$&$\\stackrel{03}{[-i]}\\,\\stackrel{12}{[+]}|\\stackrel{56}{(-)}\\,\\stackrel{78}{(+)}\n||\\stackrel{9 \\;10}{(+)}\\;\\;\\stackrel{11\\;12}{(+)}\\;\\;\\stackrel{13\\;14}{(+)}$&-1&$\\frac{1}{2}$&\n$-\\frac{1}{2}$&0&$0$&$0$&$-\\frac{1}{2}$&$-\\frac{1}{2}$&$-1$\\\\\n\\hline\n30&$e_{L} $&$ - \\stackrel{03}{(+i)}\\,\\stackrel{12}{(-)}|\\stackrel{56}{(-)}\\,\\stackrel{78}{(+)}\n||\\stackrel{9 \\;10}{(+)}\\;\\;\\stackrel{11\\;12}{(+)}\\;\\;\\stackrel{13\\;14}{(+)} $&-1&$-\\frac{1}{2}$&\n$-\\frac{1}{2}$&0&$0$&$0$&$-\\frac{1}{2}$&$-\\frac{1}{2}$&$-1$\\\\\n\\hline\n31&$ \\nu_{L}$&$ - \\stackrel{03}{[-i]}\\,\\stackrel{12}{[+]}|\\stackrel{56}{[+]}\\,\\stackrel{78}{[-]}\n||\\stackrel{9 \\;10}{(+)}\\;\\;\\stackrel{11\\;12}{(+)}\\;\\;\\stackrel{13\\;14}{(+)}$ &-1&$\\frac{1}{2}$&\n$\\frac{1}{2}$&0 
&$0$&$0$&$-\\frac{1}{2}$&$-\\frac{1}{2}$&$0$\\\\\n\\hline\n32&$\\nu_{L}$&$\\stackrel{03}{(+i)}\\,\\stackrel{12}{(-)}|\\stackrel{56}{[+]}\\,\\stackrel{78}{[-]}\n||\\stackrel{9 \\;10}{(+)}\\;\\;\\stackrel{11\\;12}{(+)}\\;\\;\\stackrel{13\\;14}{(+)}$&-1&$-\\frac{1}{2}$&\n$\\frac{1}{2}$&0&$0$&$0$&$-\\frac{1}{2}$&$-\\frac{1}{2}$&$0$\\\\\n\\hline\\hline\n33&$ \\bar{d}_{L}^{\\bar{c1}}$&$ \\stackrel{03}{[-i]}\\,\\stackrel{12}{[+]}|\n\\stackrel{56}{[+]}\\,\\stackrel{78}{(+)}\n||\\stackrel{9 \\;10}{[-]}\\;\\;\\stackrel{11\\;12}{(+)}\\;\\;\\stackrel{13\\;14}{(+)} $ &-1&$\\frac{1}{2}$&0&\n$\\frac{1}{2}$&$-\\frac{1}{2}$&$-\\frac{1}{2\\,\\sqrt{3}}$&$-\\frac{1}{6}$&$\\frac{1}{3}$&$\\frac{1}{3}$\\\\\n\\hline\n34&$\\bar{d}_{L}^{\\bar{c1}}$&$\\stackrel{03}{(+i)}\\,\\stackrel{12}{(-)}|\\stackrel{56}{[+]}\\,\\stackrel{78}{(+)}\n||\\stackrel{9 \\;10}{[-]}\\;\\;\\stackrel{11\\;12}{(+)}\\;\\;\\stackrel{13\\;14}{(+)}$&-1&$-\\frac{1}{2}$&0&\n$\\frac{1}{2}$&$-\\frac{1}{2}$&$-\\frac{1}{2\\,\\sqrt{3}}$&$-\\frac{1}{6}$&$\\frac{1}{3}$&$\\frac{1}{3}$\\\\\n\\hline\n35&$\\bar{u}_{L}^{\\bar{c1}}$&$ - \\stackrel{03}{[-i]}\\,\\stackrel{12}{[+]}|\\stackrel{56}{(-)}\\,\\stackrel{78}{[-]}\n||\\stackrel{9 \\;10}{[-]}\\;\\;\\stackrel{11\\;12}{(+)}\\;\\;\\stackrel{13\\;14}{(+)}$&-1&$\\frac{1}{2}$&0&\n$-\\frac{1}{2}$&$-\\frac{1}{2}$&$-\\frac{1}{2\\,\\sqrt{3}}$&$-\\frac{1}{6}$&$-\\frac{2}{3}$&$-\\frac{2}{3}$\\\\\n\\hline\n36&$ \\bar{u}_{L}^{\\bar{c1}} $&$ - \\stackrel{03}{(+i)}\\,\\stackrel{12}{(-)}|\n\\stackrel{56}{(-)}\\,\\stackrel{78}{[-]}\n||\\stackrel{9 \\;10}{[-]}\\;\\;\\stackrel{11\\;12}{(+)}\\;\\;\\stackrel{13\\;14}{(+)} $&-1&$-\\frac{1}{2}$&0&\n$-\\frac{1}{2}$&$-\\frac{1}{2}$&$-\\frac{1}{2\\,\\sqrt{3}}$&$-\\frac{1}{6}$&$-\\frac{2}{3}$&$-\\frac{2}{3}$\\\\\n\\hline\n37&$\\bar{d}_{R}^{\\bar{c1}}$&$\\stackrel{03}{(+i)}\\,\\stackrel{12}{[+]}|\\stackrel{56}{[+]}\\,\\stackrel{78}{[-]}\n||\\stackrel{9 \\;10}{[-]}\\;\\;\\stackrel{11\\;12}{(+)}\\;\\;\\stackrel{13\\;14}{(+)}$&1&$\\frac{1}{2}$&\n$\\frac{1}{2}$&0&$-\\frac{1}{2}$&$-\\frac{1}{2\\,\\sqrt{3}}$&$-\\frac{1}{6}$&$-\\frac{1}{6}$&$\\frac{1}{3}$\\\\\n\\hline\n38&$\\bar{d}_{R}^{\\bar{c1}} $&$ - \\stackrel{03}{[-i]}\\,\\stackrel{12}{(-)}|\\stackrel{56}{[+]}\\,\\stackrel{78}{[-]}\n||\\stackrel{9 \\;10}{[-]}\\;\\;\\stackrel{11\\;12}{(+)}\\;\\;\\stackrel{13\\;14}{(+)} $&1&$-\\frac{1}{2}$&\n$\\frac{1}{2}$&0&$-\\frac{1}{2}$&$-\\frac{1}{2\\,\\sqrt{3}}$&$-\\frac{1}{6}$&$-\\frac{1}{6}$&$\\frac{1}{3}$\\\\\n\\hline\n39&$ \\bar{u}_{R}^{\\bar{c1}}$&$\\stackrel{03}{(+i)}\\,\\stackrel{12}{[+]}|\\stackrel{56}{(-)}\\,\\stackrel{78}{(+)}\n||\\stackrel{9 \\;10}{[-]}\\;\\;\\stackrel{11\\;12}{(+)}\\;\\;\\stackrel{13\\;14}{(+)}$ &1&$\\frac{1}{2}$&\n$-\\frac{1}{2}$&0 &$-\\frac{1}{2}$&$-\\frac{1}{2\\,\\sqrt{3}}$&$-\\frac{1}{6}$&$-\\frac{1}{6}$&$-\\frac{2}{3}$\\\\\n\\hline\n40&$\\bar{u}_{R}^{\\bar{c1}}$&$\\stackrel{03}{[-i]}\\,\\stackrel{12}{(-)}|\\stackrel{56}{(-)}\\,\\stackrel{78}{(+)}\n||\\stackrel{9 \\;10}{[-]}\\;\\;\\stackrel{11\\;12}{(+)}\\;\\;\\stackrel{13\\;14}{(+)}$\n&1&$-\\frac{1}{2}$&\n$-\\frac{1}{2}$&0&$-\\frac{1}{2}$&$-\\frac{1}{2\\,\\sqrt{3}}$&$-\\frac{1}{6}$&$-\\frac{1}{6}$&$-\\frac{2}{3}$\\\\\n\\hline\\hline\n41&$ \\bar{d}_{L}^{\\bar{c2}}$&$ \\stackrel{03}{[-i]}\\,\\stackrel{12}{[+]}|\n\\stackrel{56}{[+]}\\,\\stackrel{78}{(+)}\n||\\stackrel{9 \\;10}{(+)}\\;\\;\\stackrel{11\\;12}{[-]}\\;\\;\\stackrel{13\\;14}{(+)} 
$\n&-1&$\\frac{1}{2}$&0&\n$\\frac{1}{2}$&$\\frac{1}{2}$&$-\\frac{1}{2\\,\\sqrt{3}}$&$-\\frac{1}{6}$&$\\frac{1}{3}$&$\\frac{1}{3}$\\\\\n\\hline\n42&$\\bar{d}_{L}^{\\bar{c2}}$&$\\stackrel{03}{(+i)}\\,\\stackrel{12}{(-)}|\\stackrel{56}{[+]}\\,\\stackrel{78}{(+)}\n||\\stackrel{9 \\;10}{(+)}\\;\\;\\stackrel{11\\;12}{[-]}\\;\\;\\stackrel{13\\;14}{(+)}$\n&-1&$-\\frac{1}{2}$&0&\n$\\frac{1}{2}$&$\\frac{1}{2}$&$-\\frac{1}{2\\,\\sqrt{3}}$&$-\\frac{1}{6}$&$\\frac{1}{3}$&$\\frac{1}{3}$\\\\\n\\hline\n43&$\\bar{u}_{L}^{\\bar{c2}}$&$ - \\stackrel{03}{[-i]}\\,\\stackrel{12}{[+]}|\\stackrel{56}{(-)}\\,\\stackrel{78}{[-]}\n||\\stackrel{9 \\;10}{(+)}\\;\\;\\stackrel{11\\;12}{[-]}\\;\\;\\stackrel{13\\;14}{(+)}$\n&-1&$\\frac{1}{2}$&0&\n$-\\frac{1}{2}$&$\\frac{1}{2}$&$-\\frac{1}{2\\,\\sqrt{3}}$&$-\\frac{1}{6}$&$-\\frac{2}{3}$&$-\\frac{2}{3}$\\\\\n\\hline\n44&$ \\bar{u}_{L}^{\\bar{c2}} $&$ - \\stackrel{03}{(+i)}\\,\\stackrel{12}{(-)}|\n\\stackrel{56}{(-)}\\,\\stackrel{78}{[-]}\n||\\stackrel{9 \\;10}{(+)}\\;\\;\\stackrel{11\\;12}{[-]}\\;\\;\\stackrel{13\\;14}{(+)} $\n&-1&$-\\frac{1}{2}$&0&\n$-\\frac{1}{2}$&$\\frac{1}{2}$&$-\\frac{1}{2\\,\\sqrt{3}}$&$-\\frac{1}{6}$&$-\\frac{2}{3}$&$-\\frac{2}{3}$\\\\\n\\hline\n45&$\\bar{d}_{R}^{\\bar{c2}}$&$\\stackrel{03}{(+i)}\\,\\stackrel{12}{[+]}|\\stackrel{56}{[+]}\\,\\stackrel{78}{[-]}\n||\\stackrel{9 \\;10}{(+)}\\;\\;\\stackrel{11\\;12}{[-]}\\;\\;\\stackrel{13\\;14}{(+)}$\n&1&$\\frac{1}{2}$&\n$\\frac{1}{2}$&0&$\\frac{1}{2}$&$-\\frac{1}{2\\,\\sqrt{3}}$&$-\\frac{1}{6}$&$-\\frac{1}{6}$&$\\frac{1}{3}$\\\\\n\\hline\n46&$\\bar{d}_{R}^{\\bar{c2}} $&$ - \\stackrel{03}{[-i]}\\,\\stackrel{12}{(-)}|\\stackrel{56}{[+]}\\,\\stackrel{78}{[-]}\n||\\stackrel{9 \\;10}{(+)}\\;\\;\\stackrel{11\\;12}{[-]}\\;\\;\\stackrel{13\\;14}{(+)} $\n&1&$-\\frac{1}{2}$&\n$\\frac{1}{2}$&0&$\\frac{1}{2}$&$-\\frac{1}{2\\,\\sqrt{3}}$&$-\\frac{1}{6}$&$-\\frac{1}{6}$&$\\frac{1}{3}$\\\\\n\\hline\n47&$ \\bar{u}_{R}^{\\bar{c2}}$&$\\stackrel{03}{(+i)}\\,\\stackrel{12}{[+]}|\\stackrel{56}{(-)}\\,\\stackrel{78}{(+)}\n||\\stackrel{9 \\;10}{(+)}\\;\\;\\stackrel{11\\;12}{[-]}\\;\\;\\stackrel{13\\;14}{(+)}$\n &1&$\\frac{1}{2}$&\n$-\\frac{1}{2}$&0 &$\\frac{1}{2}$&$-\\frac{1}{2\\,\\sqrt{3}}$&$-\\frac{1}{6}$&$-\\frac{1}{6}$&$-\\frac{2}{3}$\\\\\n\\hline\n48&$\\bar{u}_{R}^{\\bar{c2}}$&$\\stackrel{03}{[-i]}\\,\\stackrel{12}{(-)}|\\stackrel{56}{(-)}\\,\\stackrel{78}{(+)}\n||\\stackrel{9 \\;10}{(+)}\\;\\;\\stackrel{11\\;12}{[-]}\\;\\;\\stackrel{13\\;14}{(+)}$\n&1&$-\\frac{1}{2}$&\n$-\\frac{1}{2}$&0&$\\frac{1}{2}$&$-\\frac{1}{2\\,\\sqrt{3}}$&$-\\frac{1}{6}$&$-\\frac{1}{6}$&$-\\frac{2}{3}$\\\\\n\\hline\\hline\n49&$ \\bar{d}_{L}^{\\bar{c3}}$&$ \\stackrel{03}{[-i]}\\,\\stackrel{12}{[+]}|\n\\stackrel{56}{[+]}\\,\\stackrel{78}{(+)}\n||\\stackrel{9 \\;10}{(+)}\\;\\;\\stackrel{11\\;12}{(+)}\\;\\;\\stackrel{13\\;14}{[-]} $ &-1&$\\frac{1}{2}$&0&\n$\\frac{1}{2}$&$0$&$\\frac{1}{\\sqrt{3}}$&$-\\frac{1}{6}$&$\\frac{1}{3}$&$\\frac{1}{3}$\\\\\n\\hline\n50&$\\bar{d}_{L}^{\\bar{c3}}$&$\\stackrel{03}{(+i)}\\,\\stackrel{12}{(-)}|\\stackrel{56}{[+]}\\,\\stackrel{78}{(+)}\n||\\stackrel{9 \\;10}{(+)}\\;\\;\\stackrel{11\\;12}{(+)}\\;\\;\\stackrel{13\\;14}{[-]} $&-1&$-\\frac{1}{2}$&0&\n$\\frac{1}{2}$&$0$&$\\frac{1}{\\sqrt{3}}$&$-\\frac{1}{6}$&$\\frac{1}{3}$&$\\frac{1}{3}$\\\\\n\\hline\n51&$\\bar{u}_{L}^{\\bar{c3}}$&$ - \\stackrel{03}{[-i]}\\,\\stackrel{12}{[+]}|\\stackrel{56}{(-)}\\,\\stackrel{78}{[-]}\n||\\stackrel{9 \\;10}{(+)}\\;\\;\\stackrel{11\\;12}{(+)}\\;\\;\\stackrel{13\\;14}{[-]} 
$&-1&$\\frac{1}{2}$&0&\n$-\\frac{1}{2}$&$0$&$\\frac{1}{\\sqrt{3}}$&$-\\frac{1}{6}$&$-\\frac{2}{3}$&$-\\frac{2}{3}$\\\\\n\\hline\n52&$ \\bar{u}_{L}^{\\bar{c3}} $&$ - \\stackrel{03}{(+i)}\\,\\stackrel{12}{(-)}|\n\\stackrel{56}{(-)}\\,\\stackrel{78}{[-]}\n||\\stackrel{9 \\;10}{(+)}\\;\\;\\stackrel{11\\;12}{(+)}\\;\\;\\stackrel{13\\;14}{[-]} $&-1&$-\\frac{1}{2}$&0&\n$-\\frac{1}{2}$&$0$&$\\frac{1}{\\sqrt{3}}$&$-\\frac{1}{6}$&$-\\frac{2}{3}$&$-\\frac{2}{3}$\\\\\n\\hline\n53&$\\bar{d}_{R}^{\\bar{c3}}$&$\\stackrel{03}{(+i)}\\,\\stackrel{12}{[+]}|\\stackrel{56}{[+]}\\,\\stackrel{78}{[-]}\n||\\stackrel{9 \\;10}{(+)}\\;\\;\\stackrel{11\\;12}{(+)}\\;\\;\\stackrel{13\\;14}{[-]} $&1&$\\frac{1}{2}$&\n$\\frac{1}{2}$&0&$0$&$\\frac{1}{\\sqrt{3}}$&$-\\frac{1}{6}$&$-\\frac{1}{6}$&$\\frac{1}{3}$\\\\\n\\hline\n54&$\\bar{d}_{R}^{\\bar{c3}} $&$ - \\stackrel{03}{[-i]}\\,\\stackrel{12}{(-)}|\\stackrel{56}{[+]}\\,\\stackrel{78}{[-]}\n||\\stackrel{9 \\;10}{(+)}\\;\\;\\stackrel{11\\;12}{(+)}\\;\\;\\stackrel{13\\;14}{[-]} $&1&$-\\frac{1}{2}$&\n$\\frac{1}{2}$&0&$0$&$\\frac{1}{\\sqrt{3}}$&$-\\frac{1}{6}$&$-\\frac{1}{6}$&$\\frac{1}{3}$\\\\\n\\hline\n55&$ \\bar{u}_{R}^{\\bar{c3}}$&$\\stackrel{03}{(+i)}\\,\\stackrel{12}{[+]}|\\stackrel{56}{(-)}\\,\\stackrel{78}{(+)}\n||\\stackrel{9 \\;10}{(+)}\\;\\;\\stackrel{11\\;12}{(+)}\\;\\;\\stackrel{13\\;14}{[-]} $ &1&$\\frac{1}{2}$&\n$-\\frac{1}{2}$&0 &$0$&$\\frac{1}{\\sqrt{3}}$&$-\\frac{1}{6}$&$-\\frac{1}{6}$&$-\\frac{2}{3}$\\\\\n\\hline\n56&$\\bar{u}_{R}^{\\bar{c3}}$&$\\stackrel{03}{[-i]}\\,\\stackrel{12}{(-)}|\\stackrel{56}{(-)}\\,\\stackrel{78}{(+)}\n||\\stackrel{9 \\;10}{(+)}\\;\\;\\stackrel{11\\;12}{(+)}\\;\\;\\stackrel{13\\;14}{[-]} $&1&$-\\frac{1}{2}$&\n$-\\frac{1}{2}$&0&$0$&$\\frac{1}{\\sqrt{3}}$&$-\\frac{1}{6}$&$-\\frac{1}{6}$&$-\\frac{2}{3}$\\\\\n\\hline\\hline\n57&$ \\bar{e}_{L}$&$ \\stackrel{03}{[-i]}\\,\\stackrel{12}{[+]}|\n\\stackrel{56}{[+]}\\,\\stackrel{78}{(+)}\n||\\stackrel{9 \\;10}{[-]}\\;\\;\\stackrel{11\\;12}{[-]}\\;\\;\\stackrel{13\\;14}{[-]} $ &-1&$\\frac{1}{2}$&0&\n$\\frac{1}{2}$&$0$&$0$&$\\frac{1}{2}$&$1$&$1$\\\\\n\\hline\n58&$\\bar{e}_{L}$&$\\stackrel{03}{(+i)}\\,\\stackrel{12}{(-)}|\\stackrel{56}{[+]}\\,\\stackrel{78}{(+)}\n||\\stackrel{9 \\;10}{[-]}\\;\\;\\stackrel{11\\;12}{[-]}\\;\\;\\stackrel{13\\;14}{[-]}$&-1&$-\\frac{1}{2}$&0&\n$\\frac{1}{2}$ &$0$&$0$&$\\frac{1}{2}$&$1$&$1$\\\\\n\\hline\n59&$\\bar{\\nu}_{L}$&$ - \\stackrel{03}{[-i]}\\,\\stackrel{12}{[+]}|\\stackrel{56}{(-)}\\,\\stackrel{78}{[-]}\n||\\stackrel{9 \\;10}{[-]}\\;\\;\\stackrel{11\\;12}{[-]}\\;\\;\\stackrel{13\\;14}{[-]}$&-1&$\\frac{1}{2}$&0&\n$-\\frac{1}{2}$&$0$&$0$&$\\frac{1}{2}$&$0$&$0$\\\\\n\\hline\n60&$ \\bar{\\nu}_{L} $&$ - \\stackrel{03}{(+i)}\\,\\stackrel{12}{(-)}|\n\\stackrel{56}{(-)}\\,\\stackrel{78}{[-]}\n||\\stackrel{9 \\;10}{[-]}\\;\\;\\stackrel{11\\;12}{[-]}\\;\\;\\stackrel{13\\;14}{[-]} $&-1&$-\\frac{1}{2}$&0&\n$-\\frac{1}{2}$&$0$&$0$&$\\frac{1}{2}$&$0$&$0$\\\\\n\\hline\n61&$\\bar{\\nu}_{R}$&$\\stackrel{03}{(+i)}\\,\\stackrel{12}{[+]}|\\stackrel{56}{(-)}\\,\\stackrel{78}{(+)}\n||\\stackrel{9 \\;10}{[-]}\\;\\;\\stackrel{11\\;12}{[-]}\\;\\;\\stackrel{13\\;14}{[-]}$&1&$\\frac{1}{2}$&\n$-\\frac{1}{2}$&0&$0$&$0$&$\\frac{1}{2}$&$\\frac{1}{2}$&$0$\\\\\n\\hline\n62&$\\bar{\\nu}_{R} $&$ - \\stackrel{03}{[-i]}\\,\\stackrel{12}{(-)}|\\stackrel{56}{(-)}\\,\\stackrel{78}{(+)}\n||\\stackrel{9 \\;10}{[-]}\\;\\;\\stackrel{11\\;12}{[-]}\\;\\;\\stackrel{13\\;14}{[-]} $&1&$-\\frac{1}{2}$&\n$-\\frac{1}{2}$&0&$0$&$0$&$\\frac{1}{2}$&$\\frac{1}{2}$&$0$\\\\\n\\hline\n63&$ 
\\bar{e}_{R}$&$\\stackrel{03}{(+i)}\\,\\stackrel{12}{[+]}|\\stackrel{56}{[+]}\\,\\stackrel{78}{[-]}\n||\\stackrel{9 \\;10}{[-]}\\;\\;\\stackrel{11\\;12}{[-]}\\;\\;\\stackrel{13\\;14}{[-]}$ &1&$\\frac{1}{2}$&\n$\\frac{1}{2}$&0 &$0$&$0$&$\\frac{1}{2}$&$\\frac{1}{2}$&$1$\\\\\n\\hline\n64&$\\bar{e}_{R}$&$\\stackrel{03}{[-i]}\\,\\stackrel{12}{(-)}|\\stackrel{56}{[+]}\\,\\stackrel{78}{[-]}\n||\\stackrel{9 \\;10}{[-]}\\;\\;\\stackrel{11\\;12}{[-]}\\;\\;\\stackrel{13\\;14}{[-]}$&1&$-\\frac{1}{2}$&\n$\\frac{1}{2}$&0&$0$&$0$&$\\frac{1}{2}$&$\\frac{1}{2}$&$1$\\\\\n\\hline\n\\end{supertabular}\n}\n\\end{center}\n\n\n\n\nFamilies are generated by $\\tilde{S}^{ab}$, applied to any one of the \"family members\". \nAgain all the \"family members\" of this \"family\" follow by the application of all $S^{ab}$ (not \nbelonging to the Cartan subalgebra). \n\nThe spontaneous break of symmetry from \n$SO(13,1)$ to $SO(7,1) \\times SU(3) \\times U(1)$, Refs.~\\cite{IARD2016,n2014matterantimatter,%\nnormaJMP2015}, makes in the {\\it spin-charge-family} theory all the families, generated by \n$\\tilde{S}^{mt}$ and $\\tilde{S}^{st}$,\n[$m=(0,1,2,3)$, $s=(5,6,7,8), t=(9,10,11,12,13,14)$], massive at the scale \nof $\\ge 10^{16}$ GeV~\\cite{DHN,DN012,%\nfamiliesDNproc}. Correspondingly there are only eight families of\nquarks and leptons, \nwhich split into two groups of four families, both manifesting the symmetry $\\widetilde{SU}(2)\n\\times \\widetilde{SU}(2)$ $\\times U(1)$. (The fourth of the lower four families is predicted to be \nobserved at the LHC, while the stable one of the upper four families\ncontributes to the dark \nmatter~\\cite{gn2009}.)\n\nIn the {\\it spin-charge-family} theory fermions interact with gravity only, which manifests\nafter the break of the starting symmetry in $d=(3+1)$ as all the known vector gauge fields, \nordinary gravity and the Higgs and the Yukawa couplings~\\cite{nd2017,IARD2016,%\nn2014matterantimatter,normaJMP2015,n2012scalars}. There are scalar fields which bring masses \nto the family members. The theory explains not only all the assumptions of the {\\it standard model}\nwith the appearance of families, the vector gauge fields and the scalar fields, it also explains the \nappearance of the dark matter~\\cite{gn2009}, \nthe matter\/antimatter asymmetry~\\cite{n2014matterantimatter} and other phenomena, like the\nmiraculous cancellation of the triangle anomalies in the {\\it standard model}~\\cite{nh2017}. \n\n\n{\\bf b.} $\\;\\;$ We compare representations of $SO(13,1)$ in Clifford space with those in \nGrassmann space. We have {\\bf no \"family\" quantum numbers in Grassmann space}. \nWe only have two groups of creation operators, defining --- when applied to the vacuum state\n $|1>$ --- $\\frac{1}{2}$ $\\frac{d!}{\\frac{d}{2}! \\frac{d}{2}!}$, equal in $d=(13+1)$ to $1716$,\nmembers in each of the two groups, in comparison with the Clifford case with $64$ \"family \nmembers\" in one \"family\" and $64$ \"families\", which the breaks of symmetry reduce to \n$8$ \"families\", making all the remaining $(64 - 8)$ \"families\" massive and correspondingly not observable at low \nenergies~(\\cite{normaJMP2015,DHN} and the references therein).\n\n \nSince the $1716$ members are hard to master, let us therefore look at each subgroup ---\n$SU(3) \\times U(1)$, $SO(3,1)$ and $SU(2)\\times SU(2)$ of $SO(13,1)$ --- separately. 
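\n\nBefore doing so, let us make the counting quoted above explicit (this is only an evaluation of the quoted dimensions, not an additional assumption): in $d=(13+1)$ one has\n\\begin{equation}\n\\frac{1}{2}\\, \\frac{d!}{\\frac{d}{2}!\\, \\frac{d}{2}!}\\Big|_{d=14} = \\frac{1}{2}\\, \\frac{14!}{7!\\, 7!} = 1716\\,, \\qquad 2^{\\frac{d}{2}-1}\\Big|_{d=14} = 64\\,,\n\\end{equation}\nso that in Grassmann space each of the two groups of creation operators contains $1716$ members, while in Clifford space there are $64$ \"family members\" in each of the $64$ \"families\".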
\n\n\nLet us correspondingly analyze the subgroups: $SO(6)$ from the point of view of\n the two subgroups $SU(3)\\times U(1)$, and $SO(7,1)$ from the point of view of\n the two subgroups $SO(3,1) \\times SO(4)$, and let us also analyze $SO(4)$ as \n$SU(2)\\times$ $SU(2)$.\n\n\n\n\n\n\\subsection{Examples of second quantizable states in Grassmann and in Clifford space} \n\\label{examples}\n\nWe compare properties of representations in Grassmann and in Clifford space for several\nchoices of subgroups of $SO(13,1)$ in the case \nthat in both spaces creation and annihilation operators fulfill the requirements of\n Eq.~(\\ref{ijthetaprod}), that is that both kinds of states can be second quantized.\nLet us again point out that in the Grassmann case fermions carry integer spin, while in the Clifford\ncase they carry half-integer spin.\n\n\\subsubsection{ States in Grassmann and in Clifford space for $d=(5+1)$}\n\\label{examples51}\n\nWe study properties of representations of the subgroup $SO(5,1)$ (of the group $SO(13,1)$),\nin Clifford and in Grassmann space, requiring that states can be in both spaces second quantized, \nfulfilling therefore Eq.~(\\ref{ijthetaprod}).\n\n{\\bf a. }\nIn Clifford space there are $2^{\\frac{d}{2}-1}$ families, each with $2^{\\frac{d}{2}-1}$ family \nmembers, that is $4$ families, each with $4$ members. All these sixteen states are of \nan odd Clifford character, since all can be obtained by products of $S^{ab}$, \n$\\tilde{S}^{ab}$\nor both from a Clifford odd starting state and are correspondingly second quantizable\nas required in Eq.~(\\ref{ijthetaprod}). All the states are the eigenstates of the Cartan \nsubalgebra of the Lorentz algebra in Clifford space, Eq.~(\\ref{choicecartan}), solving the Weyl\nequation for free massless spinors in Clifford space, Eq.~(\\ref{Weyl}). \n The four families, with four members each, are presented in\nTable~\\ref{Table clifffourxfour51.}. All of these $16$ states are reachable from the first one\nin each of the four families by $S^{ab}$, or by $\\tilde{S}^{ab}$ if applied to any family member.\n\nEach of these four families has positive and negative energy solutions, as presented in~%\n\\cite{nhds}, in Table I. We present in Table~\\ref{Table clifffourxfour51.} only states of a\npositive energy, that is states above the Dirac sea. The antiparticle states are reachable from\nthe particle states by the application of the operator $\\mathbb{C}_{{\\cal N}}\\,\n {\\cal P}^{(d-1)}_{{\\cal N}} = \\gamma^0 \\gamma^5 I_{\\vec{x}_3} I_{x^6}$, keeping \nthe spin $\\frac{1}{2}$, while changing the charge from $\\frac{1}{2}$ to $- \\frac{1}{2}$. 
\nAll the states above the Dirac sea are indeed the hole in the Dirac sea, as explained in\nRef.~\\cite{nhds}.\n\\begin{table}\n\\begin{tiny}\n \\centering\n \\begin{tabular}{|c|c|c|c|c|c|c|c|}\n \\hline\\hline\n $$ & $\\psi$ & $S^{03}$ & $S^{12}$ & $S^{56 }$ \n & $\\Tilde{S}^{03}$ & $\\Tilde{S}^{12}$ & $\\Tilde{S}^{56 }$\\\\\n \\hline\\hline\n $\\psi^{ I}_{1}$ & $\\stackrel{03}{(+i)} \\stackrel{12}{(+)} \\stackrel{56}{(+)} $\n & $\\frac{i}{2}$ & $\\frac{1}{2}$ & $\\frac{1}{2}$\n & $\\frac{i}{2}$ & $\\frac{1}{2}$ & $\\frac{1}{2}$\\\\\n $\\psi^{ I}_{2}$ &$\\stackrel{03}{[-i]}\\stackrel{12}{[-]} \\stackrel{56}{(+)}$\n & $-\\frac{i}{2}$ & $-\\frac{1}{2}$ & $\\frac{1}{2}$ & $\\frac{i}{2}$ & $\\frac{1}{2}$ & $\\frac{1}{2}$ \\\\\n $\\psi^{ I}_{3}$ &$\\stackrel{03}{[-i]} \\stackrel{12}{(+)} \\stackrel{56}{[-]} $\n & $- \\frac{i}{2}$ & $\\frac{1}{2}$ & $-\\frac{1}{2}$\n & $\\frac{i}{2}$ & $\\frac{1}{2}$ & $\\frac{1}{2}$ \\\\\n $\\psi^{ I}_{4}$ &$\\stackrel{03}{(+i)} \\stackrel{12}{[-]} \\stackrel{56}{[-]}$\n & $\\frac{i}{2}$ & $-\\frac{1}{2}$ & $-\\frac{1}{2}$ & $\\frac{i}{2}$ & $\\frac{1}{2}$ & $\\frac{1}{2}$ \\\\\n \\hline \n $\\psi^{II}_{1}$ & $\\stackrel{03}{[+i]} \\stackrel{12}{[+]} \\stackrel{56}{(+)}$ \n & $\\frac{i}{2}$ & $\\frac{1}{2}$ & $\\frac{1}{2}$\n & $-\\frac{i}{2}$ & $-\\frac{1}{2}$ & $\\frac{1}{2}$\\\\\n $\\psi^{II}_{2}$ &$\\stackrel{03}{(-i)} \\stackrel{12}{(-)} \\stackrel{56}{(+)} $\n & $-\\frac{i}{2}$ & $-\\frac{1}{2}$ & $\\frac{1}{2}$ & $-\\frac{i}{2}$ & $-\\frac{1}{2}$ & $\\frac{1}{2}$ \\\\\n $\\psi^{II}_{3}$ & $\\stackrel{03}{(-i)} \\stackrel{12}{[+]} \\stackrel{56}{[-]}$\n & $-\\frac{i}{2}$ & $\\frac{1}{2}$ & $-\\frac{1}{2}$ & $-\\frac{i}{2}$ & $-\\frac{1}{2}$ & $\\frac{1}{2}$ \\\\\n $\\psi^{II}_{4}$ & $\\stackrel{03}{[+i]}\\stackrel{12}{(-)} \\stackrel{56}{[-]}$\n & $\\frac{i}{2}$ & $-\\frac{1}{2}$ & $-\\frac{1}{2}$ & $-\\frac{i}{2}$ & $-\\frac{1}{2}$ & $\\frac{1}{2}$ \\\\\n \\hline\n $\\psi^{III}_{1}$ & $\\stackrel{03}{[+i]} \\stackrel{12}{(+)} \\stackrel{56}{[+]}$\n & $\\frac{i}{2}$ & $\\frac{1}{2}$ & $\\frac{1}{2}$\n & $-\\frac{i}{2}$ & $\\frac{1}{2}$ & $-\\frac{1}{2}$\\\\\n $\\psi^{III}_{2}$ & $\\stackrel{03}{(-i)} \\stackrel{12}{[-]} \\stackrel{56}{[+]}$\n & $-\\frac{i}{2}$ & $-\\frac{1}{2}$ & $\\frac{1}{2}$ & $-\\frac{i}{2}$ & $\\frac{1}{2}$ & $-\\frac{1}{2}$ \\\\\n $\\psi^{III}_{3}$ & $\\stackrel{03}{(-i)} \\stackrel{12}{(+)} \\stackrel{56}{(-)} $\n & $-\\frac{i}{2}$ & $\\frac{1}{2}$ & $-\\frac{1}{2}$ & $-\\frac{i}{2}$ & $\\frac{1}{2}$ & $-\\frac{1}{2}$ \\\\\n $\\psi^{III}_{4}$ & $\\stackrel{03}{[+i]} \\stackrel{12}{[-]} \\stackrel{56}{(-)} $\n & $\\frac{i}{2}$ & $-\\frac{1}{2}$ & $-\\frac{1}{2}$ & $-\\frac{i}{2}$ & $\\frac{1}{2}$ & $-\\frac{1}{2}$\\\\\n \\hline\n $\\psi^{IV}_{1}$ & $\\stackrel{03}{(+i)}\\stackrel{12}{[+]} \\stackrel{56}{[+]} $\n & $\\frac{i}{2}$ & $\\frac{1}{2}$ & $\\frac{1}{2}$ \n & $\\frac{i}{2}$ & $-\\frac{1}{2}$ & $-\\frac{1}{2}$\\\\\n $\\psi^{IV}_{2}$ & $\\stackrel{03}{[-i]} \\stackrel{12}{(-)}\\stackrel{56}{[+]} $ \n & $-\\frac{i}{2}$ & $-\\frac{1}{2}$ & $\\frac{1}{2}$ & $\\frac{i}{2}$ & $-\\frac{1}{2}$ & $-\\frac{1}{2}$ \\\\\n $\\psi^{IV}_{3}$ & $\\stackrel{03}{[-i]} \\stackrel{12}{[+]}\\stackrel{56}{(-)} $\n & $-\\frac{i}{2}$ & $\\frac{1}{2}$ & $-\\frac{1}{2}$ & $\\frac{i}{2}$ & $-\\frac{1}{2}$ & $-\\frac{1}{2}$ \\\\\n $\\psi^{IV}_{4}$ & $\\stackrel{03}{(+i)} \\stackrel{12}{(-)} \\stackrel{56}{(-)} $ \n & $\\frac{i}{2}$ & $-\\frac{1}{2}$ & $-\\frac{1}{2}$ & $\\frac{i}{2}$ & $-\\frac{1}{2}$ & $-\\frac{1}{2}$ \\\\\n \\hline\\hline\n 
\\end{tabular}\n \\caption{\\label{Table clifffourxfour51.} The four families, each with four members. For the \nchoice $p^a=(p^0,0,0,p^3,0,0)$ the first and the second member have the space part \nequal to $e^{- i |p^0| x^0+i |p^3|x^3}$ and $e^{- i |p^0| x^0-i |p^3|x^3}$, \nrepresenting the particles with spin up and down, respectively. The third and the fourth member\nrepresent the antiparticle states, with the space part equal to $e^{- i |p^0| x^0-i |p^3|x^3}$\nand $e^{- i |p^0| x^0+i |p^3|x^3}$, with the spin up and down, respectively.\nThe antiparticle states follow from the particle states by the application of $\\mathbb{C}_{{\\cal N}}\\,\n {\\cal P}^{(d-1)}_{{\\cal N}} = \\gamma^0 \\gamma^5 I_{\\vec{x}_3} I_{x^6}$.\nThe charge of the particle states is $\\frac{1}{2}$, for the antiparticle states $-\\frac{1}{2}$.}\n\\end{tiny}\n\\end{table}\n\n\n{\\bf b.0}\nIn Grassmann space there are $ \\frac{d!}{\\frac{d}{2}! \\frac{d}{2}!}$ second quantizable\nstates as required in Eq.~(\\ref{ijthetaprod}), forming in $d=(5+1)$ two decuplets --- \neach with $ \\frac{1}{2} \\,\\frac{d!}{\\frac{d}{2}! \\frac{d}{2}!}$ states --- all are the \neigenstates of the Cartan subalgebra of the Lorentz algebra in (internal) Grassmann space. \nAll the states of one (any one of the two) decuplet are reachable by the application of the \noperators ${\\cal {\\bf S}}^{ab}$ on a starting state. The two decuplets are presented in\nTable~\\ref{Table grassdecupletso51.}.\n\nLet us first find the solution of the equations of motion for free massless fermions,\n Eq.~(\\ref{Weylgrass}), with the momentum $p^a =(p^0, p^1,p^2,p^3,0,0)$. One obtains\nfor $\\psi_I = \\alpha (\\theta^0-\\theta^3) (\\theta^1 +i \\theta^2) (\\theta^5 +i \\theta^6)$\n$+ \\beta (\\theta^0 \\theta^3 + i\\theta^1\\theta^2) (\\theta^5 +i \\theta^6) + $ \n$\\gamma (\\theta^0+\\theta^3) (\\theta^1 - i \\theta^2) (\\theta^5 +i \\theta^6)$ the solution\n\\begin{eqnarray}\n\\label{5+1I}\n\\beta&=& \\frac{2\\gamma (p^1-i p^2)}{(p^0 -p^3)}=\n\\frac{2\\gamma (p^0+ p^3)}{(p^1 +i p^2)}=- \\frac{2\\alpha (p^0- p^3)}{(p^1 -i p^2)}\n= - \\frac{2\\alpha (p^1+i p^2)}{(p^0 +p^3)}\\,,\\nonumber\\\\\n(p^0)^2 &=& (p^1)^2 + (p^2)^2 +(p^3)^2\\,, \\nonumber\\\\\n\\frac{\\beta}{- \\alpha } &=& \\frac{2 (p^0- p^3)}{(p^1 -i p^2)}\\,, \\quad\n\\frac{\\gamma}{- \\alpha } =\\frac{(p^0- p^3)^2}{(p^1 -i p^2)^2}\\,.\n\\end{eqnarray}\n\nOne has for $p^0= |p^0|$ the positive energy solution, describing a fermion above the \"Dirac sea\", \nand for $p^0=- |p^0|$ the negative energy solution, describing a fermion in the \"Dirac sea\".\nThe \"charge\" of the \"fermion\" is $1$. \nSimilarly one finds the solution for the other three states with the negative \"charge\" $-1$, again\nwith the positive and negative energy.\nThe space part of the \"fermion\" state is for \"spin up\" equal to \n$e^{- i |p^0| x^0+i \\vec{p}\\vec{x}}$, for its antiparticle for the same internal spin \n$e^{- i |p^0| x^0 - i \\vec{p}\\vec{x}}$. 
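\n\nAs a short consistency check of Eq.~(\\ref{5+1I}) (no additional input, only the algebra made explicit), equating the first two expressions for $\\beta$ gives\n\\begin{equation}\n\\frac{2\\gamma (p^1-i p^2)}{(p^0 -p^3)}= \\frac{2\\gamma (p^0+ p^3)}{(p^1 +i p^2)} \\quad \\Rightarrow \\quad (p^1)^2 + (p^2)^2 = (p^0)^2 - (p^3)^2\\,,\n\\end{equation}\nwhich is just the massless dispersion relation $(p^0)^2 = (p^1)^2 + (p^2)^2 +(p^3)^2$ quoted in Eq.~(\\ref{5+1I}).\n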
\n\nThe discrete symmetry operator ${\\bf \\mathbb{C}}_{NG}$ ${\\cal P}^{(d-1)}_{NG}$, which is \nin our case equal to $\\gamma^0_{G} \\gamma^5_{G} I_{\\vec{x}_{3}} I_{x^6}$, transforms\nthe first state in Table~\\ref{Table grassdecupletso51.} into the sixth, the second state\ninto the fifth, the third state into the fourth, keeping the same spin while changing the \"charge\" of\nthe superposition of the three states $\\psi_{Ip}$.\nBoth superposition of states, Eq.~(\\ref{5+1I}) represent the positive energy states put on \nthe top of the \"Dirac\" sea, the first \ndescribing a particle with \"charge\" $1$ and the second superposition of the second three states \n$\\psi_{Ia}$, describing the antiparticle with the\"charge\" $-1$. \nWe namely apply\n ${\\underline {\\bf \\mathbb{C}}}_{{{\\bf \\cal NG}}}\\, $$ {\\cal P}^{(d-1)}_{{\\cal NG}}$\n on ${\\underline {\\bf {\\Huge \\Psi}}}^{\\dagger}_{p}[\\Psi^{pos}_{I}]$ by applying \n $\\mathbb{C}_{{\\cal NG}}\\, $$ {\\cal P}^{(d-1)}_{{\\cal NG}}$ on $ \\Psi^{pos}_{I}$ as follows: \n ${\\underline {\\bf \\mathbb{C}}}_{{{\\bf \\cal NG}}}\\, $$ {\\cal P}^{(d-1)}_{{\\cal NG}}$ \n ${\\underline {\\bf {\\Huge \\Psi}}}^{\\dagger}_{p}\n \\Psi^{pos}_{I}]$ $({\\underline {\\bf \\mathbb{C}}}_{{{\\bf \\cal NG}}}\\, $\n$ {\\cal P}^{(d-1)}_{{\\cal NG}})^{-1} =$ \n ${\\underline {\\bf {\\Huge \\Psi}}}^{\\dagger}_{aNG}$$[\\mathbb{C}_{{\\cal NG}}$\n ${\\cal P}^{(d-1)}_{{\\cal NG}}\\Psi^{pos}_{1}]$.\n\n One recognizes that it is $\\mathbb{C}_{{\\cal NG}}\\, $$ {\\cal P}^{(d-1)}_{{\\cal NG}}$ \n$ \\Psi^{pos}_{I}=$\n $\\Psi^{pos}_{II}$ (Table~\\ref{Table grassdecupletso51.}), \n which must be put on the top of the \"Dirac\" sea, representing the hole in the particular state \n in the \"Dirac\" sea, which solves the corresponding equation of motion for the negative energy.\n\n\n\n\n \\begin{table}\n \\begin{center}\n\\includegraphics{TabDecupletsGrasspdf-fig.pdf}\n \\end{center}\n \\caption{\\label{Table grassdecupletso51.} The creation operators of the decuplet and the \nantidecuplet of the orthogonal group $SO(5,1)$ in Grassmann space are presented. \n Applying on the vacuum state $|\\phi_{0}> = |1>$ the creation operators form eigenstates of\nthe Cartan subalgebra, Eq.~(\\ref{choicecartan}), (${\\cal {\\bf S}}^{0 3}, {\\cal {\\bf S}}^{1 2}$, \n${\\cal {\\bf S}}^{5 6}$). The states within each decuplet are reachable from any \nmember by ${\\cal {\\bf S}}^{ab}$. The product of the discrete operators\n${\\bf \\mathbb{C}}_{NG}$ ($=\\prod_{\\Re \\gamma^s} \\, \\gamma^s_{G}\\, I_{x^6 x^8...x^d}$, \ndenoted as ${\\bf \\mathbb{C}}$ in the last column) \n${\\cal P}^{(d-1)}_{NG}$ ($ = \\gamma^0_{G} \\, \\prod_{s=5}^{d}\\, \\gamma^s_{ G}\n I_{\\vec{x}_{3}}$)\ntransforms, for example, $\\psi^{I}_{1}$ into $\\psi^{I}_{6}$, $\\psi^{I}_{2}$ into $\\psi^{I}_{5}$ \nand $\\psi^{I}_{3}$ into $\\psi^{I}_{4}$. 
Solutions of the Weyl equation, Eq.~(\\ref{Weylgrass}), \nwith the negative energies belong to the \"Grassmann sea\", those with the positive energy to the particles \nand antiparticles.\nAlso the application of the discrete operators ${\\cal C}_{GN}$, Eq.~(\\ref{calCPTNG}) and \n${\\cal C}_{NG}$ ${\\cal P}^{(d-1)}_{NG}$, Eq.~(\\ref{calCPTNG}) is demonstrated.\n}\n \\end{table}\n\n\n\\subsubsection{Properties of $SO(6)$ in Grassmann and in Clifford space when $SO(6)$ is embedded\ninto $SO(13,1)$}\n\\label{so6}\n{\\bf a.}\nLet us first repeat the properties of the $SO(6)$ part of the $SO(13,1)$ representation of $64$ \n\"family members\" in Clifford space, presented in Table~\\ref{Table so13+1.}. As seen in\nTable~\\ref{Table so13+1.} there is one quadruplet ($2^{\\frac{d}{2}-1}=4$) --- \n($\\stackrel{9 \\;10}{(+)}\\;\\;\\stackrel{11\\;12}{[-]}\\;\\;\\stackrel{13\\;14}{[-]} $, \n$\\stackrel{9 \\;10}{[-]}\\;\\;\\stackrel{11\\;12}{(+)}\\;\\;\\stackrel{13\\;14}{[-]}$,\n$\\stackrel{9 \\;10}{[-]}\\;\\;\\stackrel{11\\;12}{[-]}\\;\\;\\stackrel{13\\;14}{(+)} $,\n$\\stackrel{9 \\;10}{(+)}\\;\\;\\stackrel{11\\;12}{(+)}\\;\\;\\stackrel{13\\;14}{(+)}$), representing\nquarks and leptons --- and one antiquadruplet --- \n($\\stackrel{9 \\;10}{[-]}\\;\\;\\stackrel{11\\;12}{(+)}\\;\\;\\stackrel{13\\;14}{(+)} $, \n$\\stackrel{9 \\;10}{(+)}\\;\\;\\stackrel{11\\;12}{[-]}\\;\\;\\stackrel{13\\;14}{(+)}$,\n$\\stackrel{9 \\;10}{(+)}\\;\\;\\stackrel{11\\;12}{(+)}\\;\\;\\stackrel{13\\;14}{[-]} $,\n$\\stackrel{9 \\;10}{[-]}\\;\\;\\stackrel{11\\;12}{[-]}\\;\\;\\stackrel{13\\;14}{[-]}$), representing\nantiquarks and antileptons, which both belong to the $64^{th}$-plet, if $SO(6)$ is embedded into \n$SO(13,1)$. The creation operators (and correspondingly their annihilation operators) have \nfor $32$ members (representing quarks and leptons) the $SO(6)$ part of an odd Clifford \ncharacter (and can be correspondingly second quantized, by themselves~\\cite{nh2018} or together\nwith the rest of space, manifesting $SO(7,1)$, since the latter has an even Clifford character). The remaining \n$32$ creation operators \n(representing antiquarks and antileptons) have in the $SO(6)$ part an even Clifford character and \ncorrespondingly in the rest of the Clifford space, in $SO(7,1)$, an odd Clifford character.\n\n\nLet us discuss the case with the quadruplet of $SO(6)$ with an odd Clifford character. From the \npoint of view \nof the subgroups $SU(3)$ (the colour subgroup) and $U(1)$ (the $U(1)$ subgroup carrying the\n \"fermion\" quantum number), the quadruplet consists of one $SU(3)$ singlet with the \n\"fermion\" quantum number $-\\frac{1}{2}$ and one triplet with the \"fermion\" quantum number \n$\\frac{1}{6}$. The Clifford even $SO(7,1)$ part of $SO(13,1)$ defines together with the Clifford \nodd $SO(6)$ part the quantum numbers of the right handed \nquarks and leptons and of the left handed quarks and leptons of the {\\it standard model}, the\nleft handed weak charged and the right handed weak chargeless. \n\n In the same representation of $SO(13,1)$ there is also one antiquadruplet, which has an even \nClifford character in the $SO(6)$ part and an odd Clifford character in the $SO(7,1)$ part of \n$SO(13,1)$. The antiquadruplet of the $SO(6)$ part consists of one \n$SU(3)$ antisinglet with the \"fermion\" quantum number $\\frac{1}{2}$ and one antitriplet \n with the \"fermion\" quantum number $-\\frac{1}{6}$. 
The $SO(7,1) \\times SO(6)$ antiquadruplet\n of $SO(13,1)$ carries quantum numbers of the left handed weak chargeless antiquarks and \nantileptons and of the right handed weak charged antiquarks and antileptons of the \n{\\it standard model}. \n\nBoth, quarks and leptons and antiquarks and antileptons, belong to the same representation of \n$SO(13,1)$,\nexplaining the miraculous cancellation of the triangle anomalies in the {\\it standard model}\nwithout connecting by hand the handedness and the charges of quarks and leptons~\\cite{nh2017},\nas it must be done in the $SO(10)$ models. \n\n{\\bf b.}\nIn Grassmann space there is one ($\\frac{1}{2}\\,\\frac{d!}{\\frac{d}{2}! \\frac{d}{2}!}= 10$)\ndecuplet representation of $SO(6)$ and one antidecuplet, both presented in \nTable~\\ref{Table grassdecuplet.}. To be able to second quantize the theory,\nthe whole representation must be Grassmann odd. Both decuplets in \nTable~\\ref{Table grassdecuplet.} have an odd Grassmann character, which means that products\nof eigenstates of the Cartan subalgebra in the rest of Grassmann space must be of a \nGrassmann even character to be second quantizable. Both decuplets would, however, appear in \nthe same representation of $SO(13,1)$, and one can expect also decuplets of an even\nGrassmann character, if $SO(6)$ is embedded into $SO(13,1)$%\n~\\footnote{This can easily be understood, if we look at the subgroups of the group $SO(6)$. \n{\\bf i.} $\\,$ Let us look at the subgroup $SO(2)$. There are two creation operators of an odd \nGrassmann character, in this case $(\\theta^{9} - i \\theta^{10})$ and $(\\theta^{9} +\n i \\theta^{10})$. Both appear in either the decuplet or the antidecuplet --- together with \n$\\theta^{9} \\theta^{10}$ with an even Grassmann character ---\nmultiplied by the part appearing from the rest of space $d=(11,12,13,14)$. But if $SO(2)$ is \nnot embedded in $SO(6)$, then the two states, corresponding to the creation operators, \n$(\\theta^{9} \\mp i \\theta^{10})$, belong to different representations, and so does \n$\\theta^{9} \\theta^{10}$.\n{\\bf ii.} $\\,$ Similarly we see, if we consider the subgroup $SO(4)$ of the group \n$SO(6)$. All six states, $(\\theta^{9} + i \\theta^{10})\n\\cdot (\\theta^{11} + i \\theta^{12})$, $(\\theta^{9} - i \\theta^{10})\n\\cdot (\\theta^{11} - i \\theta^{12})$, $(\\theta^{9} \\theta^{10} + \\theta^{11} \\theta^{12})$,\n $(\\theta^{9} + i \\theta^{10})\\cdot (\\theta^{11} - i \\theta^{12})$, \n$(\\theta^{9} - i \\theta^{10})\\cdot (\\theta^{11} + i \\theta^{12})$, $(\\theta^{9} \\theta^{10} - \n\\theta^{11} \\theta^{12})$, appear in the \ndecuplet and in the antidecuplet, multiplied with the part appearing from the rest of space, in this \ncase in $d=(13,14)$, if $SO(4)$ is embedded in $SO(6)$. But, in $d=4$ \nspace there are two decoupled groups of three states~\\cite{norma93}: \n[$(\\theta^{9} + i \\theta^{10})\\cdot (\\theta^{11} + i \\theta^{12})$, $(\\theta^{9} \\theta^{10}\n + \\theta^{11} \\theta^{12})$, $(\\theta^{9} - i \\theta^{10}) \\cdot\n (\\theta^{11} - i \\theta^{12})$] and [$(\\theta^{9} - i \\theta^{10}) \\cdot (\\theta^{11} + \ni \\theta^{12})$, $(\\theta^{9} \\theta^{10} - \\theta^{11} \\theta^{12})$, $(\\theta^{9} + i \n\\theta^{10}) \\cdot (\\theta^{11} - i \\theta^{12})$]. 
Neither of these six members could be \nsecond quantized in $d=4$ alone.}.\n\nWith respect to $SU(3)\\times U(1)$ subgroups of the group $SO(6)$ the decuplet manifests as one \nsinglet, one triplet and one sextet, while the antidecuplet manifests as one antisinglet, one \nantitriplet and one antisextet. All the corresponding quantum numbers of either the Cartan \nsubalgebra operators or of the corresponding diagonal operators of the $SU(3)$ or $U(1)$ subgroups\nare presented in Table~\\ref{Table grassdecuplet.}. \n \\begin{table}\n \\begin{center}\n\\begin{tiny}\n \\begin{tabular}{|c|r|r|r|r|r|r|r|r|}\n \\hline\n$I$& &$\\rm{decuplet}$&${\\cal {\\bf S}}^{9\\,10}$&${\\cal {\\bf S}}^{11\\,12}$&\n${\\cal {\\bf S}}^{13\\,14}$&${\\bf \\tau^{4}}$&${\\bf \\tau^{33}}$& ${\\bf \\tau^{38}}$\\\\\n \\hline \n& $1$ & $ (\\theta^{9} + i \\theta^{10}) (\\theta^{11} + i \\theta^{12})\n (\\theta^{13} + i \\theta^{14})$ &$1$&$1$&$1$&$-1$&$0$&$0$\\\\\n\\hline\n&$2$ & $ (\\theta^{9} + i \\theta^{10}) (\\theta^{11}\\theta^{12} +\n \\theta^{13} \\theta^{14})$ &$1$&$0$&$0$&$-\\frac{1}{3}$&$+\\frac{1}{2}$&$+\n\\frac{1}{2\\sqrt{3}}$\\\\\n\\hline\n&$3$ & $ (\\theta^{9} + i \\theta^{10}) (\\theta^{11} - i \\theta^{12})\n (\\theta^{13} - i \\theta^{14})$ &$1$&$-1$&$-1$&$ +\\frac{1}{3}$&$+ 1$&$+\n\\frac{1}{\\sqrt{3}}$\\\\\n\\hline\n& $4$ & $ (\\theta^{9} \\theta^{10} + \\theta^{11} \\theta^{12})\n (\\theta^{13} + i \\theta^{14})$ &$0$&$0$&$1$&$-\\frac{1}{3}$&$0$&$-\\frac{1}{\\sqrt{3}}$\\\\\n\\hline\n&$5$ & $ (\\theta^{9} - i \\theta^{10}) (\\theta^{11} - i \\theta^{12})\n (\\theta^{13} + i \\theta^{14})$ &$-1$&$-1$&$-1$&$ +\\frac{1}{3}$&$0$&$-\\frac{2}{\\sqrt{3}}$\\\\\n\\hline\n&$6$ & $ (\\theta^{11} + i \\theta^{12}) (\\theta^{9}\\theta^{10}\n+ \\theta^{13} \\theta^{14})$ &$0$&$1$&$0$&$-\\frac{1}{3}$&$-\\frac{1}{2}$&$+\\frac{1}{2\\sqrt{3}}$\\\\\n\\hline\n&$7$ & $ (\\theta^{9} - i \\theta^{10}) (\\theta^{11} + i \\theta^{12})\n (\\theta^{13} - i \\theta^{14})$ &$-1$&$1$&$-1$&$ +\\frac{1}{3}$&$- 1$&$+\\frac{1}{\\sqrt{3}}$\\\\\n\\hline\n&$8$ & $ (\\theta^{9} \\theta^{10} - \\theta^{11} \\theta^{12})\n (\\theta^{13} - i \\theta^{14})$ &$0$&$0$&$-1$&$+\\frac{1}{3}$&$0$&$+\\frac{1}{\\sqrt{3}}$\\\\\n\\hline\n& $9$ & $ (\\theta^{9} \\theta^{10} - \\theta^{13} \\theta^{14})\n (\\theta^{11} - i \\theta^{12})$ &$0$&$-1$&$0$&$+\\frac{1}{3}$&$+\\frac{1}{2}$&$-\\frac{1}{2\\sqrt{3}}$\\\\\n\\hline\n&$10$ & $ (\\theta^{9} - i \\theta^{10}) (\\theta^{11}\\theta^{12} -\n \\theta^{13} \\theta^{14})$ &$-1$&$0$&$0$&$+\\frac{1}{3}$&$-\\frac{1}{2}$&$-\\frac{1}{2\\sqrt{3}}$\\\\\n\\hline \\hline \n $II$& &$\\rm{decuplet}$&${\\cal {\\bf S}}^{9\\,10}$&${\\cal {\\bf S}}^{11\\,12}$&\n${\\cal {\\bf S}}^{13\\,14}$&${\\bf \\tau^{4}}$&${\\bf \\tau^{33}}$& ${\\bf \\tau^{38}}$\\\\\n\\hline\n& $1$ & $ (\\theta^{9} - i \\theta^{10}) (\\theta^{11} - i \\theta^{12})\n (\\theta^{13} - i \\theta^{14})$ &$-1$&$-1$&$-1$&$+1$&$0$&$0$\\\\\n\\hline\n&$2$ & $ (\\theta^{9} - i \\theta^{10}) (\\theta^{11}\\theta^{12} +\n \\theta^{13} \\theta^{14})$ &$-1$&$0$&$0$&$+\\frac{1}{3}$&$-\\frac{1}{2}$&$-\\frac{1}{2\\sqrt{3}}$\\\\\n\\hline\n&$3$ & $ (\\theta^{9} - i \\theta^{10}) (\\theta^{11} + i \\theta^{12})\n (\\theta^{13} + i \\theta^{14})$ &$-1$&$1$&$1$&$ -\\frac{1}{3}$&$- 1$&$-\\frac{1}{\\sqrt{3}}$\\\\\n\\hline\n& $4$ & $ (\\theta^{9} \\theta^{10} + \\theta^{11} \\theta^{12})\n (\\theta^{13} - i \\theta^{14})$ &$0$&$0$&$-1$&$+\\frac{1}{3}$&$0$&$+\\frac{1}{\\sqrt{3}}$\\\\\n\\hline\n&$5$ & $ (\\theta^{9} + i \\theta^{10}) (\\theta^{11} + i \\theta^{12})\n (\\theta^{13} - 
i \\theta^{14})$ &$1$&$1$&$-1$&$ -\\frac{1}{3}$&$0$&$+\\frac{2}{\\sqrt{3}}$\\\\\n\\hline\n&$6$ & $ (\\theta^{11} - i \\theta^{12}) (\\theta^{9}\\theta^{10} +\n \\theta^{13} \\theta^{14})$ &$0$&$-1$&$0$&$+\\frac{1}{3}$&$+\\frac{1}{2}$&$-\\frac{1}{2\\sqrt{3}}$\\\\\n\\hline\n&$7$ & $ (\\theta^{9} + i \\theta^{10}) (\\theta^{11} - i \\theta^{12})\n (\\theta^{13} + i \\theta^{14})$ &$1$&$-1$&$1$&$ -\\frac{1}{3}$&$+ 1$&$-\\frac{1}{\\sqrt{3}}$\\\\\n\\hline\n&$8$ & $ (\\theta^{9} \\theta^{10} - \\theta^{11} \\theta^{12})\n (\\theta^{13} + i \\theta^{14})$ &$0$&$0$&$1$&$-\\frac{1}{3}$&$0$&$-\\frac{1}{\\sqrt{3}}$\\\\\n\\hline\n& $9$ & $ (\\theta^{9} \\theta^{10} - \\theta^{13} \\theta^{14})\n (\\theta^{11} + i \\theta^{12})$ &$0$&$1$&$0$&$-\\frac{1}{3}$&$-\\frac{1}{2}$&$+\\frac{1}{2\\sqrt{3}}$\\\\\n\\hline\n&$10$ & $ (\\theta^{9} + i \\theta^{10}) (\\theta^{11}\\theta^{12} -\n \\theta^{13} \\theta^{14})$ &$1$&$0$&$0$&$-\\frac{1}{3}$&$+\\frac{1}{2}$&$+\\frac{1}{2\\sqrt{3}}$\\\\\n \\hline\n \\end{tabular}\n\\end{tiny}\n \\end{center}\n \\caption{\\label{Table grassdecuplet.} The creation operators of the decuplet and the \nantidecuplet of the orthogonal group $SO(6)$ in Grassmann space are presented. Applied to the vacuum state $|\\phi_{0}> = |1>$, the creation operators form eigenstates of the Cartan subalgebra, \nEq.~(\\ref{choicecartan}), (${\\cal {\\bf S}}^{9\\,10}, {\\cal {\\bf S}}^{11\\,12}$, \n${\\cal {\\bf S}}^{13\\,14}$). The states within each decuplet are reachable from any \nmember by ${\\cal {\\bf S}}^{ab}$. The quantum numbers (${\\bf \\tau^{33}}, {\\bf \\tau^{38}}$) and \n${\\bf \\tau^{4}}$ of \nthe subgroups $SU(3)$ and $U(1)$ of the group $SO(6)$ are also presented, Eq.~(\\ref{so64}).\n}\n \\end{table}\n\nWhile in the Clifford case the representations of $SO(6)$, if the group $SO(6)$ is embedded into \n$SO(13,1)$, define a Clifford odd quadruplet and a Clifford even antiquadruplet, the \nrepresentations in the Grassmann case \ndefine one decuplet and one antidecuplet, both of the same Grassmann character, the odd one in \nour case. The two quadruplets in the Clifford case manifest \nwith respect to the subgroups $SU(3)$ and $U(1)$ as a triplet and a singlet, and as an antitriplet\nand an antisinglet, respectively. In the Grassmann case the two decuplets manifest with \nrespect to the subgroups $SU(3)$ and $U(1)$ as a (triplet, singlet, sextet) and as an (antitriplet,\nantisinglet, antisextet), respectively. The corresponding multiplets are presented in \nTable~\\ref{Table grasssextet.}. 
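\n\nIn terms of dimensions and of the \"fermion\" quantum number ${\\bf \\tau^{4}}$ (written below as a subscript; this is only a compact restatement of the decompositions just described), one has\n\\begin{equation}\n\\underline{4} = \\underline{1}_{-\\frac{1}{2}} \\oplus \\underline{3}_{\\frac{1}{6}} \\;\\; {\\rm (Clifford)}\\,, \\qquad \\underline{10} = \\underline{1}_{-1} \\oplus \\underline{3}_{-\\frac{1}{3}} \\oplus \\underline{6}_{+\\frac{1}{3}} \\;\\; {\\rm (Grassmann)}\\,,\n\\end{equation}\nwhile the corresponding antiquadruplet and antidecuplet carry the opposite values of ${\\bf \\tau^{4}}$.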
\n \\begin{table}\n \\begin{center}\n\\begin{tiny}\n \\begin{tabular}{|c|r|r|r|r|r|}\n \\hline \\hline\n$I$& &&${\\bf \\tau^{4}}$&${\\bf \\tau^{33}}$& ${\\bf \\tau^{38}}$\\\\\n \\hline \n$\\rm{singlet}$&$ $ & $ (\\theta^{9} + i \\theta^{10}) (\\theta^{11} + i \\theta^{12})\n (\\theta^{13} + i \\theta^{14})$ &$-1$&$0$&$0$\\\\\n\\hline\\hline\n$\\rm{triplet}$&$1$ & $ (\\theta^{9} + i \\theta^{10}) (\\theta^{11}\\theta^{12} +\n \\theta^{13} \\theta^{14}) $ &$-\\frac{1}{3}$&$+\\frac{1}{2}$&$+\\frac{1}{2\\sqrt{3}}$\\\\\n\\hline\n&$2$ & $ (\\theta^{9} \\theta^{10} + \\theta^{11} \\theta^{12})\n (\\theta^{13} + i \\theta^{14}) $ &$-\\frac{1}{3}$&$0$&$-\\frac{1}{\\sqrt{3}}$\\\\\n\\hline\n&$3$ & $ (\\theta^{11} + i \\theta^{12}) (\\theta^{9}\\theta^{10}\n+ \\theta^{13} \\theta^{14}) $ &$-\\frac{1}{3}$&$-\\frac{1}{2}$&$+\\frac{1}{2\\sqrt{3}}$\\\\\n\\hline \\hline\n$\\rm{sextet}$&$1$ & $(\\theta^{9}+ i \\theta^{10}) (\\theta^{11} - i \\theta^{12}) \n(\\theta^{13} - i \\theta^{14})\n$ &$\\frac{1}{3}$&$+1$&$+\\frac{1}{\\sqrt{3}}$\\\\\n\\hline\n&$2$ & $ (\\theta^{9} - i \\theta^{10}) (\\theta^{11} - i \\theta^{12})\n (\\theta^{13} + i \\theta^{14}) $ &$\\frac{1}{3}$&$0$&$-\\frac{2}{\\sqrt{3}}$\\\\\n\\hline\n&$3$ & $ (\\theta^{9} - i \\theta^{10}) (\\theta^{11} + i \\theta^{12})\n (\\theta^{13} - i \\theta^{14}) $ &$\\frac{1}{3}$&$-1$&$+\\frac{1}{\\sqrt{3}}$\\\\\n\\hline\n&$4$ & $ (\\theta^{9} \\theta^{10} - \\theta^{11} \\theta^{12})\n (\\theta^{13} - i \\theta^{14}) $ &$\\frac{1}{3}$&$0$&$+\\frac{1}{\\sqrt{3}}$\\\\\n\\hline\n&$5$ & $(\\theta^{9} \\theta^{10} - \\theta^{13} \\theta^{14})\n (\\theta^{11} - i \\theta^{12}) $ &$\\frac{1}{3}$&$+\\frac{1}{2}$&$-\\frac{1}{2\\sqrt{3}}$\\\\\n\\hline\n&$6$ & $(\\theta^{9} - i \\theta^{10}) (\\theta^{11}\\theta^{12} -\n \\theta^{13} \\theta^{14}) $ &$\\frac{1}{3}$&$-\\frac{1}{2}$&$-\\frac{1}{2\\sqrt{3}}$\\\\\n\\hline\\hline\n$II$&&&${\\bf \\tau^{4}}$&${\\bf \\tau^{33}}$& ${\\bf \\tau^{38}}$\\\\\n \\hline \n$\\rm{antisinglet}$&$ $ & $ (\\theta^{9} - i \\theta^{10}) (\\theta^{11} - i \\theta^{12})\n (\\theta^{13} - i \\theta^{14})$ &$+1$&$0$&$0$\\\\\n\\hline\n\\hline\n$\\rm{antitriplet}$&$1$ & $ (\\theta^{9} - i \\theta^{10}) (\\theta^{11}\\theta^{12} +\n \\theta^{13} \\theta^{14})$ &$+\\frac{1}{3} $ &$-\\frac{1}{2}$&$-\\frac{1}{2\\sqrt{3}}$\\\\\n\\hline\n&$2$ & $ (\\theta^{9} \\theta^{10} + \\theta^{11} \\theta^{12})\n (\\theta^{13} - i \\theta^{14})$ &$+\\frac{1}{3}$&$0$&$+\\frac{1}{\\sqrt{3}}$\\\\\n\\hline\n&$3$ & $(\\theta^{11} - i \\theta^{12}) (\\theta^{9}\\theta^{10} +\n \\theta^{13} \\theta^{14}) $ &$+\\frac{1}{3}$&$+\\frac{1}{2}$&$-\\frac{1}{2\\sqrt{3}}$\\\\\n\\hline \\hline\n$\\rm{antisextet}$&$1$ & $(\\theta^{9} - i \\theta^{10}) (\\theta^{11} + i \\theta^{12})\n (\\theta^{13} + i \\theta^{14})$ &$ -\\frac{1}{3} $ &$-1$&$-\\frac{1}{\\sqrt{3}}$\\\\\n\\hline\n&$2$ & $(\\theta^{9} + i \\theta^{10}) (\\theta^{11} + i \\theta^{12})\n (\\theta^{13} - i \\theta^{14}) $ &$-\\frac{1}{3}$&$0$&$+\\frac{2}{\\sqrt{3}}$\\\\\n\\hline\n&$3$ & $ (\\theta^{9} + i \\theta^{10}) (\\theta^{11} - i \\theta^{12})\n (\\theta^{13} + i \\theta^{14}) $ &$-\\frac{1}{3}$&$+1$&$-\\frac{1}{\\sqrt{3}}$\\\\\n\\hline\n&$4$ & $ (\\theta^{9} \\theta^{10} - \\theta^{11} \\theta^{12})\n (\\theta^{13} + i \\theta^{14}) $ &$-\\frac{1}{3}$&$0$&$-\\frac{1}{\\sqrt{3}}$\\\\\n\\hline\n&$5$ & $ (\\theta^{9} \\theta^{10} - \\theta^{13} \\theta^{14})\n (\\theta^{11} + i \\theta^{12}) $ &$-\\frac{1}{3}$&$-\\frac{1}{2}$&$+\\frac{1}{2\\sqrt{3}}$\\\\\n\\hline\n&$6$ & $ (\\theta^{9} + i \\theta^{10}) 
(\\theta^{11}\\theta^{12} -\n \\theta^{13} \\theta^{14}) $ &$-\\frac{1}{3}$&$+\\frac{1}{2}$&$+\\frac{1}{2\\sqrt{3}}$\\\\\n\\hline\n \\end{tabular}\n\\end{tiny}\n \\end{center}\n \\caption{\\label{Table grasssextet.} The creation operators in Grassmann space of the decuplet \nof Table~\\ref{Table grassdecuplet.} are arranged with respect to the $SU(3)$ and $U(1)$ \nsubgroups of the group $SO(6)$ into a singlet, a triplet and a sextet. \nThe corresponding antidecuplet manifests as an antisinglet, an antitriplet and an antisextet. \n${\\bf \\tau^{33}}= \\frac{1}{2} ({\\cal {\\bf S}}^{9\\,10} - {\\cal {\\bf S}}^{11\\,12})$, \n${\\bf \\tau^{38}}=$ $\\frac{1}{2 \\sqrt{3}} ({\\cal {\\bf S}}^{9\\,10} + {\\cal {\\bf S}}^{11\\,12} - \n2 {\\cal {\\bf S}}^{13\\,14})$, ${\\bf \\tau^{4}}=\n- \\frac{1}{3} ({\\cal {\\bf S}}^{9\\,10} + {\\cal {\\bf S}}^{11\\,12} + {\\cal {\\bf S}}^{13\\,14})$;\n${\\cal {\\bf S}}^{a b}$ $= i (\\theta^a \\frac{\\partial}{\\partial \\theta_b} - \\theta^b \n\\frac{\\partial}{\\partial \\theta_a})$. \n}\n \\end{table}\nThe \"fermion\" quantum number ${\\bf \\tau^{4}}$ has for either singlets or triplets in Grassmann \nspace, Table~\\ref{Table grasssextet.}, twice the value of the corresponding singlets and triplets\n in Clifford space, Table~\\ref{Table so13+1.}: $(-1, +1)$ in the \nGrassmann case to be compared with $(- \\frac{1}{2}, +\\frac{1}{2})$ in the Clifford case, and \n$(+ \\frac{1}{3}, -\\frac{1}{3})$ in the Grassmann \ncase to be compared with $(+ \\frac{1}{6}, -\\frac{1}{6})$ in the Clifford case. \n\n\\begin{figure}\n \\centering\n \\includegraphics[width=0.45\\textwidth]{sl1a-sing-8May.pdf} \n \\hfill\n \\includegraphics[width=0.45\\textwidth]{sl1sing-8May.pdf}\n \\caption{\\label{FigSO6} Representations of the subgroups $SU(3)$ and $U(1)$ of the group \n$SO(6)$ in Grassmann space for the two Grassmann odd representations of \nTable~\\ref{Table grasssextet.} are presented. \nOn the abscissa axis and on the ordinate axis the values of the two diagonal operators, \n${\\bf \\tau^{33}}$ and ${\\bf \\tau^{38}}$ of the colour ($SU(3)$)\nsubgroup are presented, respectively, with full circles. On the third axis the values of the \n\"fermion number\" ($U(1)$) subgroup are presented with the open circles, the same for all \nthe representations of each multiplet. There is one singlet, one triplet and one sextet on the \nleft hand side and one antisinglet, one antitriplet and one antisextet on the right hand side.} \n\\end{figure}\n\nWhen $SO(6)$ is embedded into $SO(13,1)$, the $SO(6)$ representations of either even or odd \nGrassmann character contribute to both of the decoupled \n$SO(13,1)$ representations with $1716$ states each, \nprovided that the $SO(8)$ content has the opposite Grassmann character to the $SO(6)$\ncontent. The product of both representations must be Grassmann odd in order that the corresponding \ncreation and annihilation operators fulfill the required anticommutation relations for fermions, \nEq.~(\\ref{ijthetaprod}).\n\n\\subsubsection{Properties of the subgroups $SO(3,1)$ and $SO(4)$ of the group $SO(8)$ in \nGrassmann and in Clifford space, when $SO(8)$ is embedded into $SO(13,1)$}\n\\label{so8}\n\n{\\bf a.}\nLet us again first repeat the properties of the $SO(3,1)$ and $SO(4)$ parts of the $SO(13,1)$\nrepresentation of $64$ \"family members\" in Clifford space, presented in Table~\\ref{Table so13+1.}. \nAs seen in Table~\\ref{Table so13+1.} there are four octets and four antioctets of $SO(8)$. 
\nAll four octets, having an even Clifford character and forming $32$ states when embedded into \n$SO(13,1)$, are the same either for\nquarks or for leptons; they differ only in the $SO(6)$ part (of a Clifford odd character) of the \n$SO(13,1)$ group, that is in the colour ($SU(3)$) part and the \"fermion quantum number\" \n($U(1)$) part. \nAlso the four antioctets, having an odd Clifford character, are all the same for the $32$ \nfamily members of antiquarks and antileptons; they again differ only in the Clifford \neven $SO(6)$ part of $SO(13,1)$, that is in the anticolour ($SU(3)$) part and the \"fermion \nquantum number\" ($U(1)$) part.\n\nThe $64^{th}$-plet of creation operators has an odd Clifford character either for quarks and \nleptons or for antiquarks and antileptons --- correspondingly also their annihilation operators have an odd \nClifford character --- and can be second quantized~\\cite{nh2018}.\n\n\nLet us analyze first the octet ($2^{\\frac{8}{2}-1}=8$), which is the same for all $32$ members of\nquarks and leptons. The octet has an even Clifford character. All the right handed $u_{R}$-quarks \nand $\\nu_{R}$-leptons have the $SO(4)$ part of $SO(8)$ equal to\n$\\stackrel{56}{[+]}\\,\\stackrel{78}{(+)}$, while their left handed partners have the $SO(4)$ part of \n$SO(8)$ equal to $\\stackrel{56}{[+]}\\,\\stackrel{78}{[-]}$.\nAll the right handed $d_{R}$-quarks and $e_{R}$-leptons have the $SO(4)$ part of $SO(8)$ \nequal to $\\stackrel{56}{(-)}\\,\\stackrel{78}{[-]}$, while their left handed partners have the $SO(4)$ \npart of $SO(8)$ equal to $\\stackrel{56}{(-)}\\,\\stackrel{78}{(+)}$. The left handed quarks and \nleptons are doublets with respect to $\\vec{\\tau}^{1}$ and singlets with respect to $\\vec{\\tau}^{2}$, while \nthe right handed quarks and leptons are singlets with respect to $\\vec{\\tau}^{1}$ and doublets\n with respect to $\\vec{\\tau}^{2}$. \nThe left and right handed quarks and leptons belong with respect to the $SO(3,1)$ group to either\nthe left handed or the right handed spinor representations, respectively.\n\n{\\bf b.}\nIn Grassmann space the $SO(8)$ group of an odd Grassmann character has $\\frac{1}{2}$ \n$\\frac{8!}{4! 4!} = 35$ creation operators in each of the two groups and the same number of \nannihilation operators, obtained from the creation operators by Hermitian conjugation, \nEq.~(\\ref{grassher}). The corresponding states, created by the creation operators on the vacuum \nstate $|\\phi_{o}>$, can therefore be second quantized. But if the group $SO(8)$ is embedded into\nthe group $SO(13,1)$, the subgroup $SO(6)$ must have an even Grassmann character in order \nthat the states in $SO(13,1)$ can be second quantized according to Eq.~(\\ref{ijthetaprod}). \n\nAccording to what we learned in the case of the group $SO(6)$, each of the two independent \nrepresentations of the group $SO(13,1)$ of an odd Grassmann character must \ninclude either the even $SO(7,1)$ part and the odd $SO(6)$ part or the odd \n$SO(7,1)$ part and the even $SO(6)$ part. To the even $SO(7,1)$ representation \neither the odd $SO(3,1)$ and the odd $SO(4)$ parts contribute or both must be of the \nGrassmann even character. 
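\nThese requirements can be summarized schematically (the symbols $o$ and $e$ below denote only an odd or an even Grassmann character of the corresponding factor; the schematic equalities are meant as a bookkeeping of the statements above, not as an additional assumption):\n\\begin{eqnarray}\no_{SO(13,1)} &=& e_{SO(7,1)} \\otimes o_{SO(6)} \\quad {\\rm or} \\quad o_{SO(7,1)} \\otimes e_{SO(6)}\\,, \\nonumber\\\\\ne_{SO(7,1)} &=& o_{SO(3,1)} \\otimes o_{SO(4)} \\quad {\\rm or} \\quad e_{SO(3,1)} \\otimes e_{SO(4)}\\,.\n\\end{eqnarray}\n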
In the case that the $SO(7,1)$ part has an odd Grassmann character 
(the $SO(6)$ part then has an even Grassmann character), one of the
two parts $SO(3,1)$ and $SO(4)$ must be odd and the other even.


\\section{Concluding remarks}
\\label{conclusions}

We learned in this contribution that although either Grassmann or Clifford space offers the second
quantizable description of the internal degrees of freedom of fermions (Eq.~(\\ref{ijthetaprod})),
the Clifford space offers more: it offers not only the description of all the \"family members\", 
explaining all the degrees of freedom of the observed quarks and leptons and antiquarks and antileptons, 
but also the explanation for the appearance of families. 

The interaction of fermions with the gravity fields --- the vielbeins and the spin connections
 --- in the $2(2n+1)$-dimensional space can be described, as suggested by the {\\it spin-charge-family}
theory~(\\cite{normaJMP2015,n2014matterantimatter} and references therein), by replacing the 
momentum $p_{a}$ in the Lagrange density function for a free particle by the covariant momentum, 
equally appropriate for both representations. In Grassmann space we have: $p_{0a}= f^{\\alpha}{}_a$
 $p_{0\\alpha}$, with $p_{0\\alpha} = p_{\\alpha} - \\frac{1}{2}\\, {\\cal {\\bf S}}^{ab} 
\\Omega_{ab \\alpha}$, where $ f^{\\alpha}{}_a$ is the vielbein in $d=2(2n+1)$-dimensional space and
$\\Omega_{ab \\alpha}$ is the spin connection field of the Lorentz generators ${\\cal {\\bf S}}^{ab}$.
In Clifford space we have equivalently: $p_{0a}= f^{\\alpha}{}_a$ $p_{0\\alpha}$, $p_{0\\alpha}= 
 p_{\\alpha} - \\frac{1}{2} S^{ab} \\omega_{ab \\alpha} - \\frac{1}{2} \\tilde{S}^{ab} 
 \\tilde{\\omega}_{ab \\alpha}$. Since ${\\cal {\\bf S}}^{ab} = S^{ab} + \\tilde{S}^{ab}$, we find that
when no fermions are present each of $\\Omega_{ab \\alpha}$, $\\omega_{ab \\alpha}$ and 
$\\tilde{\\omega}_{ab \\alpha}$ is uniquely expressible by the vielbeins $f^{\\alpha}{}_a$
 (\\cite{normaJMP2015,n2014matterantimatter} and references therein). It might be that
 \"our universe made a choice between the Clifford and the Grassmann algebra\" when breaking
 the starting symmetry by making condensates of fermions, since Clifford space offers the better 
opportunity for breaking symmetries.


","meta":{"redpajama_set_name":"RedPajamaArXiv"}}
+{"text":"\\section{Introduction}

\\label{intro}

The study of topological superconducting wires, which host Majorana zero
modes (MZMs) at their ends, is a field of intense research in condensed
matter physics, not only because of the interesting basic physics involved 
\\cite{sato}, but also because of possible applications in decoherence-free
quantum computing.\\cite{kitaev,nayak,alicea,lobos}

In 2010, Lutchyn \\textit{et al.} \\cite{lutchyn2010} and Oreg \\textit{et al.} 
\\cite{oreg2010} proposed a model for topological superconducting wires
describing a system formed by a semiconducting wire with spin-orbit coupling
(SOC) and proximity-induced s-wave superconductivity under an applied
magnetic field perpendicular to the direction of the SOC. This yields a
topological superconducting phase with MZMs localized at its ends. 
The\nobservation of these MZMs in these types of wires was reported in different\nexperimental studies.\\cite{wires-exp1,wires-exp2,wires-exp3,wires-exp4}\n\nThe search for different models and mechanisms leading to topological\nsuperconducting phases continues being a very active avenue of research\ntheoretically and experimentally.\n\nMore recently, there has been experimental research as well as theoretical\nstudies in similar models, including those for time-reversal invariant\ntopological superconductors,\\cite{review-tritops,volpez} of the effects of\nMZMs in Josephson junctions, in particular because the dependence on the\napplied magnetic flux introduces an additional control knob.\\cite{volpez,zazu,pientka,hell,ren,fornie,cata,tomo}\n\nIn particular, it has been recently proposed that the current-phase relation\nmeasured in Josephson junctions may be used to find the parameters that\ndefine the MZMs.\\cite{tomo} A possible difficulty in these experiments is\nthe slow thermalization to the ground state\nin the presence of a gap.\\cite{bondy}\nA way to circumvent this problem is to rotate the magnetic field\nslowly from a direction not perpendicular to the SOC in which the system is\nin a gapless superconducting phase, in which thermalization is easier.\\cite{tomo}\nTherefore, it is convenient to know the phase diagram of the system\nand the extension of this gapless phase.\n\nIn this work we calculate the phase diagram of the lattice version of the\nmodel and discuss in particular the gapless phase. The paper is organized as\nfollows. In Sec. \\ref{model} we describe the model. The topological\ninvariants use to define the phase diagram are presented in Sec. \\ref{inv}. In Sec. \\ref{res} we show the numerical results, analytical expressions for\nthe boundaries of the topological phase and discuss briefly the Majorana zero modes.\nWe summarize the results in Sec. \\ref{sum}.\n\n\\section{Model}\n\n\\label{model}\n\nThe model for topological superconducting wires studied in this work is the\nlattice version of that introduced by Lutchyn \\textit{et al.} \n\\cite{lutchyn2010} and Oreg \\textit{et al.} \\cite{oreg2010}. The Hamiltonian can\nbe written as \\cite{tomo} \n\\begin{eqnarray}\nH &=&\\sum_{\\ell }[\\mathbf{c}_{\\ell }^{\\dagger }\\left( -t\\;\\sigma _{0}-i\\vec{\n\\lambda}\\cdot \\vec{\\sigma}\\right) \\mathbf{c}_{\\ell +1}+\\Delta c_{\\ell\n\\uparrow }^{\\dagger }c_{\\ell \\downarrow }^{\\dagger }+\\text{H.c.} \\notag \\\\\n&-&\\mathbf{c}_{\\ell }^{\\dagger }\\left( \\vec{B}\\cdot \\vec{\\sigma}+\\mu \\sigma\n_{0}\\right) \\mathbf{c}_{\\ell }],\\;\\; \\label{ham}\n\\end{eqnarray}\nwhere $\\ell $ labels the sites of a chain, $\\mathbf{c}_{\\ell }=(c_{\\ell\n\\uparrow },c_{\\ell \\downarrow })^{T}$, $t$ is the nearest-neighbor hopping, $\n\\vec{\\lambda}$ is the SOC, $\\Delta $ represents the magnitude of the\nproximity-induced superconductivity, $\\vec{B}$ is the applied magnetic field\nand $\\mu $ is the chemical potential. As usual, the components of the vector $\n\\vec{\\sigma}=\\left( \\sigma _{x},\\sigma _{y},\\sigma _{z}\\right) $ are the\nPauli matrices and $\\sigma _{0}$ is the 2$\\times $2 unitary matrix. The\npairing amplitude $\\Delta $ can be assumed real. 
Otherwise, the phase can be
eliminated by a gauge transformation in the operators $c_{\\ell \\sigma
}^{\\dagger }$ that absorbs the phase.

Without loss of generality, we choose the $z$ direction as that of the
magnetic field ($\\vec{B}=B\\mathbf{\\hat{z}}$) and $x$ perpendicular to the
plane defined by $\\vec{\\lambda}$ and $\\vec{B}$ ($\\vec{\\lambda}=\\lambda _{y} 
\\mathbf{\\hat{y}}+\\lambda_{z} \\mathbf{\\hat{z}}$). After Fourier
transformation, the Hamiltonian takes the form $H=\\sum_{k}H_{k}$, with

\\begin{eqnarray}
H_{k} &=&-(\\mu +2t\\cos (k))(c_{k\\uparrow }^{\\dagger }c_{k\\uparrow
}+c_{k\\downarrow }^{\\dagger }c_{k\\downarrow }) \\notag \\\\
&& -B(c_{k\\uparrow }^{\\dagger }c_{k\\uparrow }-c_{k\\downarrow }^{\\dagger
}c_{k\\downarrow }) -2\\sin (k)\\left[ i\\lambda _{y}(c_{k\\uparrow }^{\\dagger
}c_{k\\downarrow }-c_{k\\downarrow }^{\\dagger }c_{k\\uparrow }) \\right. \\notag
\\\\
&& \\left. +\\lambda _{z}(c_{k\\uparrow }^{\\dagger }c_{k\\uparrow
}-c_{k\\downarrow }^{\\dagger }c_{k\\downarrow })\\right] +\\Delta (c_{k\\uparrow
}^{\\dagger }c_{-k\\downarrow }^{\\dagger }+c_{-k\\downarrow }c_{k\\uparrow }). 
\\label{hk}
\\end{eqnarray}
Using the four-component spinor $(c_{k\\uparrow }^{\\dagger },c_{k\\downarrow
}^{\\dagger },c_{-k\\uparrow },c_{-k\\downarrow })$,\\cite{tewari} the
contribution to the Hamiltonian for wave vector $k$ can be written in the
form

\\begin{eqnarray}
H_{k} &=&-(\\mu +2t\\cos (k))\\tau _{z}\\otimes \\sigma _{0}-B\\tau _{z}\\otimes
\\sigma _{z}-\\Delta \\tau _{y}\\otimes \\sigma _{y} \\notag \\\\
&&+2\\lambda _{y}\\sin (k)\\tau _{z}\\otimes \\sigma _{y}-2\\lambda _{z}\\sin
(k)\\tau _{0}\\otimes \\sigma _{z}, \\label{hk2}
\\end{eqnarray}
where the Pauli matrices $\\sigma _{\\alpha }$ act on the spin space, while
the $\\tau _{\\alpha }$ act on the particle-hole space. Writing the matrix
explicitly, $H_{k}$ takes the form

\\begin{equation}
H_{k}=
\\begin{pmatrix}
-a-B-z & -iy & 0 & \\Delta \\\\ 
iy & -a+B+z & -\\Delta & 0 \\\\ 
0 & -\\Delta & a+B-z & iy \\\\ 
\\Delta & 0 & -iy & a-B+z
\\end{pmatrix}
\\label{hk3}
\\end{equation}
where $a=\\mu +2t\\cos (k)$, $B=|B|=B_{z}$, $y=2\\lambda _{y}\\sin (k)$ and $
z=2\\lambda _{z}\\sin (k)$.

\\section{Topological invariants}

\\label{inv} In this section we define the topological invariants we use to
characterize the topological phases. In general, the Hamiltonian belongs to
topological class D with a 
$\\mathbb{Z}_{2}$ topological invariant.\\cite{Schn,ryu} However, for perpendicular $\\vec{\\lambda}$ and $\\vec{B}$ ($z=0$),
the system has a chiral symmetry and belongs to the topological class BDI
with a $\\mathbb{Z}$ (integer) topological invariant corresponding to a
winding number.\\cite{tewari} In this case, the calculation of the
topological invariant is simpler, as shown by Tewari and Sau.\\cite{tewari} 

Following this work, we perform a rotation by $\\pi \/2$ around the $\\hat{y}$
axis in particle-hole space, which transforms $\\tau _{z}$ to $\\tau _{x}$: $
H_{k}^{\\prime }=UH_{k}U^{\\dagger }$ with $U=\\exp (-i\\pi \\tau _{y}\/4)$. With
this transformation $H_{k}^{\\prime }$ becomes

\\begin{equation}
H_{k}^{\\prime }=
\\begin{pmatrix}
-z & 0 & -a-B & \\Delta -iy \\\\ 
0 & z & -\\Delta +iy & -a+B \\\\ 
-a-B & -\\Delta -iy & -z & 0 \\\\ 
\\Delta +iy & -a+B & 0 & z
\\end{pmatrix}
\\label{hkp}
\\end{equation}
Taking $z=0$, this rotation yields an off-diagonal (chiral symmetric)
Hamiltonian. 
This allows us to define a winding number $W$ (a topological $\n\\mathbb{Z}$ invariant) from the phase of the determinant of the $2\\times 2$\nmatrix $A(k)$, which is the upper right corner of Eq. (\\ref{hkp}).\\cite{tewari}\nSpecifically $\\mathrm{Det}(A(k))$$=|\\mathrm{Det}(A(k))|$$e^{i\\theta (k)}=$$a^{2}-B^{2}-(\\Delta -iy)^{2}$, and\n\n\\begin{equation}\nW=\\frac{-i}{\\pi }\\int\\limits_{0}^{\\pi }\\frac{d(e^{i\\theta (k)})}{e^{i\\theta\n(k)}}. \\label{win}\n\\end{equation}\nIn addition, a $\\mathbb{Z}_{2}$ invariant $I$ can be defined from the\nrelative sign of $\\mathrm{Det}(A)$ (which is real for $k=0$ and $k=\\pi$)\nbetween the points $k=0$ and $k=\\pi$:\n\n\\begin{equation}\nI=(-1)^{W}=\\text{sign}\\frac{\\mathrm{Det}(A(\\pi ))}{\\mathrm{Det}(A(0))}\n\\label{z2}\n\\end{equation}\nLooking for the condition that $I\\equiv -1$ (mod 2), we obtain that the\nconditions for the system to be in the topological phase are that $\\lambda\n_{y}=|\\vec{\\lambda}|\\neq 0\\neq \\Delta $ and the remaining parameters should\nsatisfy \n\\begin{equation}\n|2|t|-r|<|\\mu |<|2|t|+r|,\\text{ \\ \\ \\ with }r=\\sqrt{B^{2}-\\Delta ^{2}}>0.\n\\label{bound}\n\\end{equation}\nWe note that changing the sign of any of the parameters does not change the\nboundary of the topological phase. This is due to the symmetry properties of\nthe Hamiltonian.\\cite{tomo}\n\nIn the more general case, when $\\vec{\\lambda}$ and $\\vec{B}$ are not\nperpendicular, it is not possible to follow the approach outlined above. In\nthis case, we use the Zak Berry phase to construct the topological\ninvariant. \\cite{zak,king,resta,ortiz,bf1,bf2,hatsu,bf3,ryu,deng,budich}\nSpecifically, the Hamiltonian $H_{k}$ has four different eigenvectors and\nfor each of them, following Zak,\\cite{zak} one can calculate a Berry phase\nfrom the Bloch functions as the wave vector $k$ varies in the loop $0\\leq\nk\\leq 2\\pi $ (with $k=2\\pi $ equivalent to $k=0$). For each eigenstate $\n|u(k)\\rangle $ of $H_{k}$, the Berry phase is \n\n\\begin{equation}\n\\gamma =-\\text{Im }\\int\\limits_{0}^{2\\pi }dk\\langle u(k)|\\frac{\\partial }{\n\\partial k}|u(k)\\rangle . \\label{gam}\n\\end{equation}\n\n\\ In addition (as noted before \\cite{tomo}) choosing a suitable coordinate\nframe ( $\\vec{\\lambda}\\cdot \\mathbf{\\hat{y}}=\\vec{B}\\cdot \\mathbf{\\hat{y}}=0$\n), the Hamiltonian Eq. (\\ref{ham}) is invariant under an antiunitary\noperator defined as the product of inversion (defined by the transformation $\n\\ell \\leftrightarrow N+1-\\ell $, for a chain with $N$ sites) and complex\nconjugation, implying that the Berry phase $\\gamma $ is quantized with only\ntwo possible values $0$ and $\\pi $ (mod $2\\pi $).\\cite{hatsu} Naturally the\nvalue of the Berry phase does not depend on the choice of the reference\nframe. Therefore, as for an insulator, if the system has a gap, the sum of\nthe Berry phases of all one-particle states of energies below the gap mod $2\\pi $, defines a \n$\\mathbb{Z}_{2}$ topological number, indicating that the system is trivial\n(topological) if this sum is equivalent to 0 ($\\pi $) mod $2\\pi .\n$\\cite{ryu,budich} Moreover, from Eq. (\\ref{hk}) it is easy to realize that the\ncharge conjugation $c_{\\ell \\sigma }^{\\dagger }\\leftrightarrow c_{\\ell\n\\sigma }$, which in Fourier space means $c_{k\\sigma }^{\\dagger }=(1\/\\sqrt{N\n)\\sum_{l}e^{-ik\\ell }c_{\\ell \\sigma }^{\\dagger }\\leftrightarrow c_{-k\\sigma }\n$, transforms $H_{k}\\leftrightarrow -H_{-k}$. 
Therefore the sum of the Berry\nphases of all positive eigenvalues gives the same topological number as the\nsum of all negative eigenvalues.\n\nIn our model, $H_{k}$ has four eigenvalues $E(k)$. The lowest one $E_{1}(k)$\nis always negative and the corresponding eigenvector has always a Berry\nphase 0. From the above mentioned charge-transfer symmetry, the fourth\neigenvalue (the highest one) has energy $E_{4}(k)=-E_{1}(-k)>0$. Therefore,\nthe Berry phase of the second eigenvalue (which is equal to that of the\nthird one) determines the $\\mathbb{Z}_{2}$ invariant. We have calculated the\nBerry phase $\\gamma $ of each of the four bands (and particularly the second\none) from the normalized eigenvectors $|u_{j}\\rangle =|u(k_{j})\\rangle $ of\nthe $4\\times 4$ matrix obtained numerically at $M$ wave vectors $k_{j}=2\\pi\n(j-1)\/M$, using a numerically invariant expression \\cite{ortiz,deng}. This\nexpression is derived in the following way. Discretizing Eq. (\\ref{gam}) and\napproximating $\\partial |u(k)\/\\partial k=(M\/2\\pi )(|u(k_{j+1})\\rangle\n-|u(k_{j})\\rangle )$, one obtains\n\n\\begin{equation}\n\\gamma =-\\text{Im}\\sum_{j=1}^{M}\\left[ \\langle u_{j}|\\left( |u_{j+1}\\rangle\n-|u_{j}\\rangle \\right) \\right] . \\label{gam2}\n\\end{equation}\nIf $M$ is large enough so that $k_{j}$ and $k_{j+1}$ are very close, then \n$x=\\langle u_{j}|u_{j+1}\\rangle -1$ is very small and one can retain only the\nfirst term in the Taylor series expansion ln$(1+x)=x-x^{2}\/2+...$ Replacing in Eq. (\n\\ref{gam2}) one obtains\n\n\\begin{eqnarray}\n\\gamma &=&-\\mathrm{{Im}\\left[ \\ln (P)\\right] }\\text{, where } \\notag \\\\\nP &=&\\langle u_{1}|u_{2}\\rangle \\langle u_{2}|u_{3}\\rangle ...\\langle\nu_{M-1}|u_{1}\\rangle \\label{gam4}\n\\end{eqnarray}\nIt is easy to see that Eq. (\\ref{gam}) is gauge invariant. This means that\nthe result does not change if $|u(k)\\rangle $ is replaced by $e^{i\\varphi\n(k)}|u(k)\\rangle $, where $\\varphi (k)$ is a smooth function with $\\varphi\n(2\\pi )=\\varphi (0).$ Similarly, the product $P$ is independent of the base\nchosen by the numerical algorithm to find the eigenstates $|u_{j}\\rangle .$\nTherefore Eq. (\\ref{gam4}) is numerically gauge invariant. Analyzing the\nchange in the results with increasing $M$, we find that $M\\sim 250$ is\nenough to obtain accurately all phase boundaries shown below. A further\nincrease in $M$ leads to changes that are not visible in the scale of the\nfigures.\n\nThis $\\mathbb{Z}_{2}$ topological invariant defined by the Berry phase \nof the second (or third) state can be trivially extended to the gapless case \nif the energies of the second and third state do not cross as a function of $k$.\nEven if the energies cross the Berry phases can be calculated switching \nthe states at the crossing. However, this case is not of interest here.\n\n\\section{Results}\n\n\\label{res}\n\n\\subsection{Phase diagram}\n\n\\label{phdi}\n\n\\begin{figure}[h!]\n\\begin{center}\n\\includegraphics[width=\\columnwidth]{4p_D-fase_varioB_TeX.eps}\n\\end{center}\n\\caption{Phase diagram in the $\\mu,\\Delta$ plane for perpendicular \n$\\vec{\\lambda}$ and $\\vec{B}$, $t=1$, \n$\\lambda=|\\vec{\\lambda}|=2$, and several values of $B$. \nGray region I denotes the topological sector and white region II the \nnon topological one.}\n\\label{variob}\n\\end{figure}\n\nWe start by discussing the simplest case of perpendicular $\\vec{\\lambda}$\nand $\\vec{B}$. In Fig. 
\\ref{variob} we display the resulting phase diagram
for some parameters, showing the possible different shapes. There are two
gapped phases, the trivial (white region II) and the topological one (light
gray I), separated in general by two circular arcs defined by Eqs. 
(\\ref{bound}). For simplicity we discuss the case $t, B > 0$; the topological character
is independent of the sign of the different parameters. If $B<2t$, the
region of possible values of $|\\mu|$ inside the topological sector extends
from $2t-B$ to $2t+B$ for $\\Delta \\rightarrow 0$ and shrinks for increasing 
$\\Delta$ until it reduces to the point $|\\mu|=2t$ for $\\Delta \\rightarrow B$.
If $B=2t$, the semicircle touches the point $\\mu=0$. For larger $B$, the
region $|\\mu|< \\sqrt{B^{2}-\\Delta^{2}}-2t$ for $\\Delta^2 < B^{2}-4t^2$ is
excluded from the topological region.

\\begin{figure}[h!]
\\begin{center}
\\includegraphics[width=\\columnwidth]{4p_fase_phiB0_beta_TeX.eps}
\\end{center}
\\caption{(Color online) Phase diagram in the $\\mu,\\Delta$ plane for $t=1$, 
$\\lambda=2$, $B=4$ and several values of the angle 
$\\beta_{\\lambda B}$ between $\\vec{\\lambda}$ and $\\vec{B}$. 
Regions I and II as in Fig. \\ref{variob}. Region III (IV)
in dark gray (black) corresponds to the gapless phase
with Berry phase $\\pi$ (0).
The red points at the top left correspond to numerical calculations
which detected localized states at the ends.}

\\label{variobeta}
\\end{figure}

While for perpendicular $\\vec{\\lambda}$ and $\\vec{B}$, the gap vanishes only
at particular lines in the phase diagram (black lines in Fig. \\ref{variob})
for which the topological transition takes place, for general angles $\\beta
_{\\lambda B}$ between both vectors, there is a finite region in the $\\mu
,\\Delta $ plane for which the gap vanishes, in particular for $|\\Delta
|<\\Delta _{c}$, where $\\Delta _{c}$ is a critical value, independent of $\\mu 
$, determined analytically below. Before presenting the analytical
calculation, we describe the general features of each phase in the phase
diagram, as shown in Fig. \\ref{variobeta}. The gapped regions in the figure
are denoted by I and II. The remaining two regions are gapless. We separate them according to the trivial (topological) character of the Berry phases of the second and third eigenstates, indicating the
corresponding regions in black (dark gray) with roman numeral IV (III). 
In spite of the topological Berry phases of the latter gapless phase, 
MZMs in a finite chain are not expected to be protected against small
perturbations because of the absence of a gap. Therefore we describe this
phase as non topological. Furthermore, we do not numerically find signatures
of localized end states in this phase.

We have also checked the boundaries of the
topological phase by numerically solving finite chains and searching for
localized states at their ends and for the presence of a finite gap. The
localized states are described in Sec. \\ref{majo}. The presence of the
gap is defined by the condition that the determinant $D(k)$ of $H_{k}$ is
positive for each $k$. As can be seen in the top left of Fig. \\ref{variobeta},
the results of both approaches agree.

\\subsection{Analytical expressions for the boundaries of the topological
phase}

\\label{anal}

For perpendicular $\\vec{\\lambda}$ and $\\vec{B}$, the boundaries of the
topological phase are defined by Eqs. (\\ref{bound}) and the conditions 
$|\\vec{\\lambda}|\\neq 0$ and $\\Delta \\neq 0$. 
As the angle is changed 
from 90\\textdegree, the gap reduces and a non-zero $|\\Delta |$ is necessary to keep the gap
open (see Fig. \\ref{variobeta}). For convenience, we discuss first the case $
\\lambda _{z}=0$ (perpendicular $\\vec{\\lambda}$ and $\\vec{B}$) and later
consider the general case $\\vec{\\lambda}=\\lambda _{y}\\mathbf{\\hat{y}}
+\\lambda _{z}\\mathbf{\\hat{z}}$ with $\\lambda _{z}\\neq 0$. For $\\lambda _{z}=0
$, the determinant $D_{0}(k)$ of $H_{k}$ [see Eqs. (\\ref{hk3}) or (\\ref{hkp}
)]

\\begin{eqnarray}
D_{0}(k) &=&C^{2}+4\\Delta ^{2}y^{2}, \\notag \\\\
C &=&a^{2}+\\Delta ^{2}-B^{2}-y^{2}, \\label{d0k}
\\end{eqnarray}
is positive semidefinite. It can vanish only for $y=0$, implying either $k=0$
or $k=\\pi $. For $k=0$ ($k=\\pi $), $C=0$ implies $|\\mu +2t|=r$ ($|\\mu -2t|=r$).
Comparing with Eqs. (\\ref{bound}), one realizes that the gap vanishes in
general only at one wave vector and only at the transition between
topological and non-topological gapped phases, as expected. The exception is
the case $|2t|=r$ and $\\mu =0$, for which the gap vanishes at both wave
vectors.

In the general case with $z=2\\lambda _{z}\\sin (k)$ nonzero, the determinant
of $H_{k}$ is [see Eq. (\\ref{hkp})]

\\begin{equation}
D(k)=D_{0}+2z^{2}(\\Delta ^{2}+y^{2}-a^{2}-B^{2})+z^{4} \\label{dk}
\\end{equation}
We can consider $D(k)$ as a function of $x=\\cos (k)$. For large enough $
|\\lambda _{z}|$, it turns out that, at the wave vector $k=0$ and parameters
for which $C=y=z=0$ [implying $D(0)=0$], $dD(x)\/dx>0$ and as a consequence
for small positive $k$ ($x<1$) the determinant becomes negative, signaling
the instability of the gapped phase. For $\\lambda _{z}=0$, the derivative at $x=1$ is
instead negative, but since $x$ cannot be increased beyond 1, one has $D(k)\\geq 0$.
A similar reasoning with the corresponding changes in the
sign can be followed for $k=\\pi $. An explicit calculation of the derivative
using the conditions $C=\\sin (k)=0$ gives 
\\begin{equation}
\\frac{dD}{dx}=32[B^{2}\\lambda _{z}^{2}-\\Delta ^{2}(\\lambda _{z}^{2}+\\lambda
_{y}^{2})]x. \\label{dddx}
\\end{equation}
This implies that to have a gap one needs $|\\Delta |>\\Delta _{c}$, where 
\\begin{equation}
\\Delta _{c}^{2}=B^{2}\\frac{\\lambda _{z}^{2}}{\\lambda _{z}^{2}+\\lambda
_{y}^{2}}=B^{2}\\cos ^{2}(\\beta _{\\lambda B}). \\label{deltac}
\\end{equation}
This condition has been found before for a model similar to ours in the
continuum with quadratic dispersion.\\cite{rex} As a check, for the parameters used in 
Fig. \\ref{pis} below ($B=4$, $\\beta _{\\lambda B}=80$\\textdegree), Eq. (\\ref{deltac}) gives 
$\\Delta _{c}=B\\cos \\beta _{\\lambda B}\\simeq 0.695$, which coincides with the boundary 
$\\Delta _{c_1}$ between phases I and III quoted in Sec. \\ref{majo}.

After some algebra, the determinant in the general case can be written in
the form 
\\begin{equation}
D=(C-z^{2})^{2}+16(\\lambda _{z}^{2}+\\lambda _{y}^{2})(\\Delta ^{2}-\\Delta
_{c}^{2})(1-x^{2}), \\label{d2}
\\end{equation}
which is again positive semidefinite for $|\\Delta |>\\Delta _{c}$ and
positive definite for $0\\neq k\\neq \\pi $, indicating a gapped phase. Since $
|x|=1$ implies $y=z=0$, the remaining boundaries of the topological phase
remain the same as for perpendicular $\\vec{\\lambda}$ and $\\vec{B}.$ For $
|\\Delta|=\\Delta _{c}$ (as in Fig. 
\\ref{eigenmu}), the values of $k$ for\nwhich the determinant vanishes are given by the solutions with $|x|\\leq 1$\nof the following quadratic equation\n\n\\begin{eqnarray}\n0 &=&4(t^{2}+\\lambda ^{2})x^{2}+4t\\mu x \\notag \\\\\n&&+\\mu ^{2}+\\Delta _{c}^{2}-B^{2}-4\\lambda ^{2}, \\label{xc}\n\\end{eqnarray}\nwhere $\\lambda =|\\vec{\\lambda}|$.\n\n\\subsection{Transition from the topological phase to the gapless phases}\n\n\\label{topogap}\n\n\\begin{figure}[h!]\n\\begin{center}\n\\includegraphics[width=\\columnwidth]{3p_autovalores_beta_TeX_pi_sc_rojo.eps}\n\\end{center}\n\\caption{(Color online) Second (black thin lines) and third (red thick lines) eigenvalues of $H_k$ as a function of wave vector\nfor $t=1$, $\\lambda=\\Delta=2$, $B=\\mu=5$, and several values\nof the angle $\\beta_{\\lambda B}$ between \n$\\vec{\\lambda}$ and $\\vec{B}$.}\n\\label{eigenbeta}\n\\end{figure}\n\nTo gain insight into the transition from the topological phase to the\ngapless phases, we represent in Fig. \\ref{eigenbeta} the second and third\neigenvalues of $H_{k}$ [$E_{2}(k)$ and $E_{3}(k)$, respectively] for\ndifferent values $\\beta _{\\lambda B}$ of the angle between $\\vec{\\lambda}$\nand $\\vec{B}$. The parameters are such that, for $\\vec{\\lambda}\\cdot \\vec{B}=0\n$, the system is in the topological phase with a finite gap. As the angle is\nchanged (in either direction) the gap between the second and third\neigenvalue decreases until at a certain critical angle [given by the\nsolution of Eq. (\\ref{xc})] $E_{2}(k_{c})=E_{3}(-k_{c})=0$ at one particular\nwave vector $k_{c}$ ($0.3613\\pi$ in the figure), denoting the onset of the\ngapless phase. Further turning $\\vec{\\lambda}$ and $\\vec{B}$ to the parallel\n(or antiparallel) direction, both eigenvalues vanish at two different wave\nvectors.\n\n\\begin{figure}[h!]\n\\begin{center}\n\\includegraphics[width=\\columnwidth]{mus_betacritico_TeX_sc_rojo.eps}\n\\end{center}\n\\caption{(Color online) Same as Fig. \\ref{eigenbeta} \nfor $t=1$, $\\lambda=\\Delta=2$, $B=5$, \n$\\beta_{\\lambda B}=66.42$\\textdegree \\ and several values of $\\mu$.}\n\\label{eigenmu}\n\\end{figure}\n\n\nIf keeping the other parameters fixed, the chemical potential $\\mu $ is\nchanged towards one border $\\mu _{c}$ of the topological phase for $\\vec\n\\lambda}\\cdot \\vec{B}=0$ [given by Eq. (\\ref{bound})]; the critical wave\nvector $k_{c}$ is displaced either to $k_{c}=0$ or to $k_{c}=\\pi $ depending\non the border. This is illustrated in Fig. \\ref{eigenmu}. At the\ncorresponding border $\\mu =\\mu _{c}$, one has $E_{2}(k_{c})=E_{3}(k_{c})=0$, indicating a crossing of the levels which is also accompanied by a change\nin the Berry phases of the corresponding eigenvectors. Further displacing \n\\mu $ the system enters the non topological gapped phase. Therefore, the\npoint $\\mu =\\mu _{c}$, $\\Delta =\\Delta _{c}$ is at the border of the\ntopological phase, the nontrivial gapless phase with Berry phase $\\pi$, and the non-topological gapped phase. In fact also the trivial gapless phase reaches\nthis tetracritical point in the phase diagram (see Fig. \\ref{variobeta}).\n\n\n\\subsection{Majorana modes}\n\n\\label{majo}\n\nThe topological phase is characterized \nby the presence of Majorana modes zero modes at the ends of an infinite chain.\nFor a finite chain, the modes at both ends mix, giving rise to a \nfermion $\\Gamma$ and its Hermitian conjugate with energies $\\pm E$ which \ndecay exponentially with the length $L$ of the chain. 
We have obtained $\\Gamma$ numerically in chains of $L \\sim 200$ sites. The probability $p(i)$ of finding\na fermion at site $i$ (adding both spins and creation and annihilation) is shown\nin Fig. \\ref{pis}. The main feature of the top figure is a decay of $p(i)$\nas the distance from any of the ends increases. We have chosen a case with a rather slow decay to facilitate visualization. In addition to this decay, some oscillations are visible with a short period.\n\nIn order to quantify the decay length of the localization of the end modes, we have fit the probability with an exponentially decaying function \n$p(i) \\sim A$exp$(-i\/\\xi)$ at the left end. At the bottom of \nFig. \\ref{pis} we show the dependence\nof $\\xi$ inside the topological phase \\textrm{I} as one of the parameters is varied.\nAs expected, $\\xi$ diverges at the boundary with the non topological gapped phase \\textrm{II}, which has a different $\\mathbb{Z}_{2}$ topological invariant \n(at $\\Delta_{c_2}=3.872983346$ in the figure). We also find that \n$\\xi$ diverges at the boundary with the gapless phase \\textrm{III} \n(at $\\Delta_{c_1}=0.694592711$ in the figure), a phase with the same topological invariant but gapless. These facts allow us to obtain numerically\nthe transitions from the localization of the end states \n(see top left panel of Fig. \\ref{variobeta}). \n\n\n\n\\begin{figure}[h!]\n\\begin{center}\n\\includegraphics[width=\\columnwidth]{2p_c2_xi_TeX.eps}\n\\end{center}\n\t\\caption{Top: probability of finding a fermion at each site of a chain for the eigenstate of lowest positive energy for $t=1$, $\\lambda=2$, $B=4$, $\\beta_{\\lambda B}=80$\\textdegree, $\\mu=3$, and $\\Delta=0.75$. Bottom: inverse of the localization length as a function of $\\Delta$. The transition between phases I and III is at $\\Delta_{c_1}=0.694592711$ and the transition between phases I and II is at $\\Delta_{c_2}=3.872983346$.}\n\\label{pis}\n\\end{figure}\n\n\n\\section{Summary and discussion}\n\n\\label{sum}\n\nUsing numerical and analytical methods, \nwe calculate the phase diagram of a widely used model for topological superconducting\nwires, the essential ingredients of which are local s-wave pairing $\\Delta$, spin-orbit coupling $\\vec{\\lambda}$ and magnetic field $\\vec{B}$. \nWe determine the boundary of the gapped topological phase analytically. \nThis phase contains robust Majorana zero modes at both ends that are of great interest.\nWe expect that\nthis result will be relevant for future studies in the field.\n\nThe optimal situation for topological superconductivity is when $\\vec{B}$ is perpendicular\nto $\\vec{\\lambda}$. In this case, both the topological and non-topological phases are gapped. If instead $\\vec{B}$ has a component in the direction of $\\vec{\\lambda}$, a gapless\nsuperconducting phase appears for certain parameters. This phase can also be separated in two \nphases differing in a $\\mathbb{Z}_{2}$ topological invariant. 
However, due to the absence of a gap,\nwe do not find Majorana zero-modes at the ends of the phase with \nnontrivial $\\mathbb{Z}_{2}$, in contrast to those present in the gapped topological phase.\n\nTilting the magnetic field to enter the gapless phase might be used as a trick to relax\nthe system to the ground state in some measurements, like Josephson current.\nIn the gapped topological phase, in the absence of \nlow-frequency phonons or other excitations, the physics is dominated by a few \nbound states inside the gap, completely isolated from the continuum, and the current would oscillate, without reaching a steady state.\\cite{chung} One way to avoid this problem would be to use a magnetic field so that the system is in the gapless phase, with low-energy\nexcitations available for thermalization,\nand then rotate adiabatically the field to the desired value so that the system remains in the ground state.\n\n\\section*{Acknowledgments}\n\nWe thank L. Arrachea for helpful discussions. We are sponsored by PIP\n112-201501-00506 of CONICET, PICT-2017-2726, PICT-2018-04536 and\nPICT-Raices-2018.\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} diff --git a/data_all_eng_slimpj/shuffled/split2/finalzzdhdb b/data_all_eng_slimpj/shuffled/split2/finalzzdhdb new file mode 100644 index 0000000000000000000000000000000000000000..59598d117a72000e7e96aae9a521fcf460556a9c --- /dev/null +++ b/data_all_eng_slimpj/shuffled/split2/finalzzdhdb @@ -0,0 +1,5 @@ +{"text":"\\section{Ongoing work - scratch}\n\n\n\\section{Introduction}\n\nAutonomous Underwater Vehicles (AUVs) have found applications in a variety of underwater exploration and monitoring tasks including high-resolution, geo-referenced optical\/acoustic ocean floor mapping and measurements of water column properties such as currents, temperature and salinity \\cite{yoerger1991autonomous}. An advantage of AUVs over other methods of ocean observation is the autonomy and decoupling from a surface vessel that a self-contained robot provides.\n\nThe ability to geo-reference, or to compute the absolute position in a global reference frame, is essential for AUVs for the purposes of path planning for mission requirements, registration with independently navigated information, or revisiting a previous mission. Geo-referenced navigation is often achieved by initializing the navigation solution to the GPS while on the surface and, once submerged, relaying on velocity measurements from a Doppler Velocity Log (DVL). When the water depth is less than the range of the DVL (a 300kHz DVL has a range of $\\sim$200m), the DVL has continuous bottom lock throughout the mission. The DVL sensor provides measurements of the seafloor relative velocity of the AUV. By combining this information with an appropriate heading reference, the observations are placed in the global reference frame and integrated to facilitate underwater dead reckoning. The result is accuracies of 22m per hour (2\\(\\sigma\\)) in position error growth attainable during diving and 8m per hour error growth (2\\(\\sigma\\)) is possible if coupled with a navigation-grade (\\(>\\)\\$100K) Inertial Measurement Unit (IMU) \\cite{napolitano2004phins}.\n\nIn cases where the seafloor depth is greater than the DVL bottom lock range, transitioning from the surface to the seafloor presents a localization problem \\cite{kinseynonlinear2014}, since both GPS and DVL are unavailable in the mid-water column. 
Traditional solutions include range-limited Long Base Line (LBL) acoustic networks requiring deployment, Ultra Short Base Line (USBL) navigation requiring a dedicated ship, or single range navigation from an acoustic beacon attached to a ship \\cite{webster2012advances} or an autonomous surface vehicle (ASV) \\cite{kinsey2013-auv-asv}. In addition to requiring dedicated infrastructure, acoustic positioning also suffers from multipath returns and the need to accurately measure the sound speed profile through the water column. Acoustic methods typically give \\(\\mathcal{O}({10m})\\) accuracy at 1km ranges \\cite{kinsey2006survey} \\cite{mandt2001integrating}.

\\begin{figure}[!t]
 \\centering
 \\includegraphics [width=0.45\\textwidth] {pictures\/20151203-ff_ocean.jpg}
 \\caption{The \\textit{FlatFish} AUV \\cite{albiez2015flatfish} during sea trials. Image: Jan Albiez, SENAI CIMATEC}
 \\label{fig:flatfish}
\\end{figure}

IMUs provide a strapdown navigation capability by providing body accelerations and rotation rates without external aiding such as GPS, acoustic ranging, or DVL velocities. However, IMUs quickly accumulate position errors, with an unaided tactical grade IMU (\\(>\\)\\$10K) drifting at $\\sim$100km per hour, and a navigation grade IMU drifting at $\\sim$1km per hour \\cite{titterton2004strapdown}. There also exist cases where DVL bottom-lock is not possible because the altitude is very low, such as in inspection or docking scenarios.

In \\cite{hegrenaes2011model}, a model-aiding Inertial Navigation System (INS) is applied with water-track from the DVL. Comparatively, the novel contributions of the work presented in this paper are as follows: 
\\begin{enumerate}
 \\item Utilizing and validating through experiment a manifold based Unscented Kalman Filter (UKF) which can observe and utilize the Earth rotation for heading estimation,
 \\item Incorporating and validating a novel drag and thrust model-based aiding, which accounts for the systematic uncertainty in vehicle parameters by incorporating them as states in the UKF and
 \\item Incorporating and validating the use of ADCP measurements in a novel form to further aid the estimation in cases of DVL bottom-lock loss.
\\end{enumerate}

IMUs with low gyro bias uncertainty allow gyrocompassing, i.e., estimating heading by measuring the Earth rotation rate of approximately 15$^{\\circ}$ per hour. Navigation grade IMUs with a sufficiently low bias uncertainty (as used in \\cite{hegrenaes2011model}) are typically in the \\(>\\)\\$100K USD price range. In this paper, the KVH1750 IMU, in the \\(>\\)\\$10K USD price range, is utilized. In order for an IMU in this price range to be usable for gyrocompassing, the biases are estimated in a fully coupled approach in the navigation filter. Real-world experiments with the \\textit{FlatFish} AUV (Fig. \\ref{fig:flatfish}) show that less than 1$^{\\circ}$ (2$\\sigma$) heading uncertainty is possible in the filter following an initialization within 15$^{\\circ}$ of the true heading (possible from a magnetic sensor). Further experiments also show that the filter is capable of consistent positioning, and data denial validates the method for DVL dropouts due to very low or high altitude scenarios. 
Additionally this work was implemented using the MTK \\cite{hertzberg2013integrating} and ROCK \\cite{rock} framework in C++\\footnote{The implementation is under open source license available on \\url{https:\/\/github.com\/rock-slam\/slam-uwv_kalman_filters}}, and is capable of running in real-time on computing available on the \\textit{FlatFish} AUV.\n\nThe work in this paper utilizes vehicle model-based aiding and the ADCP sensor for further ocean water current and vehicle velocity constraints. Model-aiding allows physics based constraints on the positioning, and the uncertainty in each parameter can be set to account for the systematic error associated with a system identification. Thus even a low accuracy system identification can still be used with this filter without resulting in filter overconfidence. Additionally, by modeling the vehicle parameters as time-varying, the model itself has become uncertain, as any small deviations in dynamics from the modeling equations can be absorbed by the time-varying parameters. ADCP-aiding in cases where DVL dropout would occur, due to being higher altitude than the bottom-lock range, also can aid the model by providing independent vehicle velocity constraints. ADCP also gives information regarding the surrounding water currents when there is a DVL dropout and the vehicle state estimation relies more on model-aiding.\nGenerally, inertial navigation is achieved using error-state filtering \\cite{hegrenaes2011model}, but this is not necessary as is shown in this paper. This paper gives a more conceptually simplified approach, while also utilizing manifold methods \\cite{forster2015manifold} to represent attitude, which is more general than other methods.\n\n\n\n\n\n\\section{Model-aided Inertial filter design}\n\n\n\n\n\nOur filter design is conceptually simple, since we model all modalities in one filter and model the attitude of the vehicle as a manifold.\nWe utilize an Unscented Kalman Filter (UKF) since it doesn't require the Jacobians of the process or measurement models and can handle non-linearities better than an Extended Kalman Filter \\cite{wan2000unscented}.\n\nThe attitude of the vehicle is an element of $SO(3)$, the group of orientations in $\\mathbb{R}^3$. To directly estimate the attitude in the filter it can be either modeled by a minimal parametrization (Euler angles) or by a over-parametrization (quaternion or rotation matrix). A minimal parametrization has singularities, i.e. small changes in the state space might require large changes in the parameters. An over-parametrization has redundant parameters and needs to be re-normalized as required. 
In both cases it requires special treatment in the filter, which leads to a conceptually more complex filter design.\nRepresenting the attitude as a manifold is a more general solution in which the filter operates on a locally mapped neighborhood of $SO(3)$ in $\\mathbb{R}^3$ \\cite{hertzberg2013integrating}.\n\n\\begin{table}[h]\n\\centering\n\\caption{Filter state}\n\\begin{tabular}{l|l}\n\\hline\n\\thead{Elements of \\\\ the state vector} & Description \\\\\n\\hline\n$\\mathbf{p}^n \\in \\mathbb{R}^3$\t\t& Position of the IMU in the navigation frame \\\\\n$\\boldsymbol{\\phi}^n \\in \\mathbb{R}^3$ \t& Attitude of the IMU in the navigation frame \\\\\n$\\mathbf{v}^n \\in \\mathbb{R}^3$ \t& Velocity of the IMU in the navigation frame \\\\\n$\\mathbf{a}^n \\in \\mathbb{R}^3$ \t& Acceleration of the IMU in the navigation frame \\\\\n$\\mathbf{M_{\\text{sub}}} \\in \\mathbb{R}^{2\\times3}$ & Inertia sub-matrix of the motion model \\\\\n$\\mathbf{D}_{l,\\text{sub}} \\in \\mathbb{R}^{2\\times3}$ & Linear damping sub-matrix of the motion model \\\\\n$\\mathbf{D}_{q,\\text{sub}} \\in \\mathbb{R}^{2\\times3}$ & Quadratic damping sub-matrix of the motion model \\\\\n$\\mathbf{v}_{c,v}^{n} \\in \\mathbb{R}^{2}$\t& \\begin{tabular}{@{}l@{}}Water current velocity surrounding \\\\ the vehicle in navigation frame\\end{tabular} \\\\\n$\\mathbf{v}_{c,b}^{n} \\in \\mathbb{R}^{2}$\t& \\begin{tabular}{@{}l@{}}Water current velocity below the \\\\ vehicle in navigation frame\\end{tabular} \\\\\n$g^n \\in \\mathbb{R}$\t\t& Gravity in the navigation frame \\\\\n$\\mathbf{b}_{\\omega} \\in \\mathbb{R}^3$ \t& Gyroscope bias \\\\\n$\\mathbf{b}_{a} \\in \\mathbb{R}^3$ \t& Accelerometer bias \\\\\n$\\mathbf{b}_{c} \\in \\mathbb{R}^{2}$ \t& Bias in the ADCP measurements \\\\\n\\hline\n\\end{tabular}\n\\label{state_table}\n\\end{table}\n\nTable \\ref{state_table} shows the state vector of the filter as element of $\\mathbb{R}^{43}$ and gives a detailed description of the higher dimensional elements of the state vector. The navigation frame is North-East-Down (NED).\nThe body and IMU frames are x-axis pointing forward, y-axis pointing left and z-axis pointing up. 
\nIn the filter design we consider the IMU frame not to be rotated with respect to the body frame.\n\n\n\n\\subsection{Inertial prediction equations}\n\nThe following equations describe the prediction models for position, velocity, acceleration and attitude, applying a constant acceleration model for translation and a constant angular velocity model for rotation:\n\\begin{equation}\n\\mathbf{p}^n_{t} = \\mathbf{p}^n_{t-1} + \\mathbf{v}_{t-1}^n \\delta t\n\\label{eq:pred1}\n\\end{equation}\n\\begin{equation}\n\\label{vn}\n\\mathbf{v}^n_{t} = \\mathbf{v}^n_{t-1} + \\mathbf{a}_{t-1}^n \\delta t\n\\end{equation}\n\\begin{equation}\n\\label{vn}\n\\mathbf{a}^n_{t} = \\mathbf{a}^n_{t-1}\n\\end{equation}\n\\begin{equation}\n\\boldsymbol{\\phi}^n_{t} = \\boldsymbol{\\phi}^n_{t-1} \\boxplus [C^n_{b,t-1}(\\boldsymbol{\\omega}_{t-1}^{b} - \\mathbf{b}_{\\omega,t-1}) - \\boldsymbol{\\Omega}_{e}^{n} \\delta t]\n\\label{eq:pred2}\n\\end{equation}\n\nwhere $\\mathbf{p}^n_{t}$ is the position of the IMU in the navigation frame at time $t$, $\\mathbf{v}^n_{t}$ is the velocity of the IMU in the navigation frame, $\\mathbf{a}^n_{t}$ is the acceleration of the IMU in the navigation frame, $\\mathbf{C}^n_{b,t}$ is the coordinate transformation from body to navigation frame, $\\boldsymbol{\\phi}^n_{t}$ is the attitude of the IMU in the navigation frame, $\\boldsymbol{\\omega}^b_t$ is the rotation rates in the body frame, $\\mathbf{b}_{\\omega,t}$ is the gyroscope bias and $\\boldsymbol{\\Omega}_{e}^{n}$ is the Earth rotation in the navigation frame. The $\\boxplus$ operator in \\eqref{eq:pred2} is a manifold based addition, as defined in \\cite{hertzberg2013integrating}.\nEquations \\eqref{eq:pred1} to \\eqref{eq:pred2} each have corresponding prediction noise added.\n\nThe accelerometer measurements are handled with an update equation on the acceleration state as follows:\n\n\\begin{align}\n\\textbf{z}_{a}(t) = \\mathbf{f}^b_t+ \\mathbf{b}_{a,t} + \\mathbf{C}^{n}_{b,t}\\mathbf{g}^{n}_{t} + \\nu_{a}\n\\end{align}\n\nwhere $\\mathbf{f}^b_t$ is the specific force acting on the vehicle at time $t$, $\\mathbf{b}_{a,t}$ is the accelerometer bias and $\\mathbf{g}^{n}_{t}$ is the gravity vector $\\begin{bmatrix} 0, 0, g_t^n \\end{bmatrix}^T$ in the navigation frame.\nThe gravity state is modeled applying a constant gravity model in order to refine the theoretical gravity according to the WGS-84 ellipsoid earth model starting with a small initial uncertainty.\nThe acceleration state in the filter allows both the accelerometer and model-aiding to act on the filter in a consistent fashion, without resorting to virtual correlation terms when an acceleration state does not exist, such as in \\cite{hegrenaes2011model}.\nAccelerometer and gyro bias terms are modeled as a first order Markov process as follows:\n\n\\begin{equation}\n\\dot{b} = -\\frac{1}{\\tau_{b}}(b-b_{0}) + \\nu_{b}\n\\label{bias_equation}\n\\end{equation}\nwhere $\\tau_{b}$ is the expected rate change of the bias, $b_{0}$ is the mean bias value, and $\\nu_{b}$ is a zero-mean normally distributed random variable with\n\\begin{equation}\n\\sigma_{b} = \\sqrt{\\frac{2f\\sigma_{b\\,drift}^2}{\\tau_{b}}}\n\\end{equation}\nwhere $\\sigma_{b\\,drift}$ is a bound to the bias drift and $f$ is the measurement frequency. 
The accelerometer and gyro bias are assumed to be zero mean.\n\n\n\\subsection{Model-aiding update equations}\n\nIn this section we show a model-aiding measurement function using a simplified vehicle motion model for which a subset of the parameter space is part of the filter state.\nThis allows the filter to refine the parameters at runtime and to account for the systematic uncertainty in the vehicle parameters.\n\nThe nonlinear equations for motion \\cite{fossen2002marine} of a rigid body with 6 DOF can be written as\n\\begin{equation}\n\\label{eq:motion_fossen}\n\\boldsymbol{\\tau} = \\mathbf{M} \\dot{\\boldsymbol{\\nu}} + \\mathcal{C}(\\boldsymbol{\\nu}) \\boldsymbol{\\nu} + \\mathbf{D}(\\boldsymbol{\\nu}) \\boldsymbol{\\nu} + \\mathbf{g}(\\mathbf{R}_{b}^n)\n\\end{equation}\nwhere $\\boldsymbol{\\tau}$ is the vector of forces and torques, $\\boldsymbol{\\nu}$ is the vector of linear and angular velocities, $\\mathbf{M}$ is the inertia matrix including added mass, $\\mathcal{C}(\\boldsymbol{\\nu})$ is the Coriolis and centripetal matrix, $\\mathbf{D}(\\boldsymbol{\\nu})$ is the hydrodynamic damping matrix and $\\mathbf{g}(\\mathbf{R}_{b}^n)$ is the vector of gravitational forces and moments given the rotation matrix from the body to the navigation frame $\\mathbf{R}_{b}^n$.\n\n\\begin{equation}\n\\mathbf{g}(\\mathbf{R}) = \\begin{bmatrix} \\mathbf{R}^{-1} \\hat{k} (W-B) \\\\ \\mathbf{r}_{G} \\times \\mathbf{R}^{-1}\\hat{k}W - \\mathbf{r}_{B} \\times \\mathbf{R}^{-1}\\hat{k}B \\end{bmatrix}\n\\label{eq:gravity}\n\\end{equation}\nEquation~\\eqref{eq:gravity} shows how the gravitational forces and moments are calculated given the weight $W$, buoyancy $B$, center of gravity $\\mathbf{r}_{G}$ and center of buoyancy $\\mathbf{r}_{B}$ of the vehicle,\nwhere $\\hat{k}$ is the unit vector $\\begin{bmatrix} 0, 0, 1 \\end{bmatrix}^T$.\n\nWe assume the Coriolis and centripetal forces as well as damping terms higher than second order are negligible for vehicles operating within lower speeds (typically below $1.5$ m\/s).\nThis allows us the define the measurement function for the forces and torques in the body frame from \\eqref{eq:motion_fossen} as\n\\begin{equation}\n\\textbf{z}_{\\boldsymbol{\\tau}}(t) = \\mathbf{M}_{t} \\begin{bmatrix}\\mathbf{a}_{t}^{b} \\\\ \\boldsymbol{\\alpha}_{t}^{b}\\end{bmatrix} + \\mathbf{D}(\\begin{bmatrix}\\mathbf{v}_{t}^{b} \\\\ \\boldsymbol{\\omega}^b_t\\end{bmatrix},t) + \\mathbf{g}(\\mathbf{R}_{b,t}^n) + \\nu_{\\boldsymbol{\\tau}}\n\\label{eq:meas_tau}\n\\end{equation}\nwhere $\\mathbf{a}_{t}^{b}$ is the linear acceleration, $\\boldsymbol{\\alpha}_{t}^{b}$ is the angular acceleration, $\\mathbf{v}_{t}^{b}$ is the linear velocity and $\\boldsymbol{\\omega}^b_t$ is the angular velocity, all expressed in the body-fixed frame at time $t$. 
$\\nu_{\\boldsymbol{\\tau}}$ is the random noise of the force and torque measurement, with a standard deviation given by the thruster manufacturer.

$\\mathbf{a}_{t}^{b}$ can be computed given the acceleration in the navigation frame $\\mathbf{a}_{t}^{n}$ as
\\begin{equation}
\\mathbf{a}_{t}^{b} = \\mathbf{C}_{n,t}^{b}\\mathbf{a}_{t}^{n} - \\boldsymbol{\\omega}^b_t \\times (\\boldsymbol{\\omega}^b_t \\times \\mathbf{p}^b)
\\label{eq:acc_body}
\\end{equation}
where $\\mathbf{C}_{n,t}^{b}$ is the coordinate transformation matrix from navigation to body frame at time $t$ and $\\mathbf{p}^b$ is the position of the IMU in the body frame.

$\\mathbf{v}_{t}^{b}$ can be computed given the velocity in the navigation frame $\\mathbf{v}_{t}^{n}$ as
\\begin{equation}
\\mathbf{v}_{t}^{b} = \\mathbf{C}_{n,t}^{b} (\\mathbf{v}_{t}^{n} - \\mathbf{v}_{c,v,t}^{n}) - \\boldsymbol{\\omega}^b_t \\times \\mathbf{p}^b
\\label{eq:velocity_body}
\\end{equation}
where $\\mathbf{v}_{c,v,t}^{n}$ is the water current velocity surrounding the vehicle at time $t$.

Equation~\\eqref{eq:damping} shows how the damping is defined given the linear and angular velocities at time $t$.
\\begin{equation}
\\mathbf{D}(\\mathbf{\\nu}_{t}, t) = \\mathbf{D}_{l,t} \\cdot \\mathbf{\\nu}_{t} + |\\mathbf{\\nu}_{t}|^{T} \\cdot \\mathbf{D}_{q,t} \\cdot \\mathbf{\\nu}_{t}
\\label{eq:damping}
\\end{equation}
The linear damping matrix $\\mathbf{D}_{l,t}$, the quadratic damping matrix $\\mathbf{D}_{q,t}$ and the inertia matrix $\\mathbf{M}_{t}$ are time dependent, since for each of them a sub-matrix is part of the filter state.
The filter states $\\mathbf{D}_{l,\\text{sub},t}$, $\\mathbf{D}_{q,\\text{sub},t}$, $\\mathbf{M}_{\\text{sub},t}$ $\\in \\mathbb{R}^{2\\times3}$ are defined by removing the rows 3 to 6 and columns 3 to 5 from the full damping and inertia matrices $\\mathbf{D}_{l}$, $\\mathbf{D}_{q}$, $\\mathbf{M}$ $\\in \\mathbb{R}^{6\\times6}$. In other words, we model the $x,xy,x\\psi,yx,y,y\\psi$ terms of the matrices in the filter, where $\\psi$ is the yaw, because we expect them to have the major impact on the horizontal accelerations and velocities for an AUV that keeps roll and pitch stable. It would be easy to extend the filter states and add more model terms, however it is a trade-off between the additional benefit, the computational complexity and potential filtering instability.

The damping and inertia state prediction models have a base time varying component, with a timescale of around one hour, modeled as in \\eqref{bias_equation}.
The vehicle parameters are initialized using a prior system identification, with the means of the states set at these values in the first order Markov process equation. Since the vehicle parameters are states in the filter, the systematic error in their values can be accounted for, acting like a bias rather than a noise. Even a low accuracy system identification, or very crude estimates of the parameter values, can therefore be used without resulting in filter overconfidence, since such a bias-like error has a stronger and different effect than simply increasing the vehicle model noise. 
This also allows the vehicle modeling to adapt to different scenarios, such as surfacing or changes to the vehicle after the system identification, while constraining the value range of the parameters through the first order Markov process model. A minimal numerical sketch of the resulting measurement function is given below. 
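The following is a minimal, illustrative sketch of the model-aiding measurement function of Eq.~(\\ref{eq:meas_tau}), i.e. the forces and torques expected from the current state estimate, written in Python with NumPy purely for illustration (the onboard implementation uses C++). The quadratic damping is applied element-wise here, the water current is passed as a three-dimensional vector with its vertical component set to zero, and all names and values are illustrative assumptions.

\\begin{verbatim}
import numpy as np

def gravity_buoyancy(R_nb, W, B, r_G, r_B):
    # Eq. (gravity): restoring forces and moments, with R_nb the rotation
    # from the body to the navigation frame (R_nb.T maps into the body frame)
    k_hat = np.array([0.0, 0.0, 1.0])
    f = R_nb.T @ k_hat * (W - B)
    m = np.cross(r_G, R_nb.T @ k_hat * W) - np.cross(r_B, R_nb.T @ k_hat * B)
    return np.concatenate([f, m])

def damping(D_l, D_q, nu):
    # Eq. (damping), with the quadratic term assumed to act element-wise
    return D_l @ nu + D_q @ (np.abs(nu) * nu)

def predicted_tau(M, D_l, D_q, R_nb, W, B, r_G, r_B,
                  a_n, alpha_b, v_n, v_current_n, omega_b, p_imu_b):
    # expected forces and torques, Eq. (meas_tau), without the noise term
    C_bn = R_nb.T
    # Eq. (acc_body): linear acceleration at the body origin
    a_b = C_bn @ a_n - np.cross(omega_b, np.cross(omega_b, p_imu_b))
    # Eq. (velocity_body): velocity relative to the surrounding water
    v_b = C_bn @ (v_n - v_current_n) - np.cross(omega_b, p_imu_b)
    nu = np.concatenate([v_b, omega_b])
    acc = np.concatenate([a_b, alpha_b])
    return M @ acc + damping(D_l, D_q, nu) + gravity_buoyancy(R_nb, W, B, r_G, r_B)
\\end{verbatim}

In the filter this function is evaluated for each sigma point, with the sub-matrices of $\\mathbf{M}$, $\\mathbf{D}_{l}$ and $\\mathbf{D}_{q}$ filled in from the corresponding filter states, and compared with the force and torque measurement $\\textbf{z}_{\\boldsymbol{\\tau}}$ obtained from the thrusters.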
\nAlthough these parameters could have been modeled as a constant, by allowing the parameters to have a time-varying component it acts as a way to implement ``model uncertainty''.\nIn this way, the robustness of the filter improves as we no longer fully trust our model to be a perfect representation of the true dynamics, which is most definitely the case with applying a simplified and computationally tractable model for real-time usage to the real-world.\n\n\\subsection{ADCP-aiding update equations}\n\nGiven the 3D velocities output from the ADCP, the observation function for each ADCP measurement is\n\\begin{equation}\n\\textbf{z}_{c,i}(t) = \\textbf{C}_{n,t}^{b}(- \\textbf{v}_{t}^{n} + \\frac{d_{max}-d_{i}}{d_{max}}\\textbf{v}_{c,v,t}^{n} + \\frac{d_{i}}{d_{max}}\\textbf{v}_{c,b,t}^{n}) + \\textbf{b}_{a,t} + \\nu_{a}\n\\label{z_adcp}\n\\end{equation}\n\n\nwhere $\\textbf{z}_{c,i}$ is the ADCP measured current vector in the i$^{th}$ measurement cell, $\\textbf{C}_{n,t}^{b}$ is the coordinate transform from navigation\/world frame to ADCP\/body frame at time $t$,\n$\\textbf{v}_{t}^{n}$ is the vehicle velocity in the world\/navigation frame, $\\textbf{v}_{c,v,t}^{n}$ is the water current velocity surrounding the vehicle, $\\textbf{v}_{c,b,t}^{n}$ is the water current velocity at the maximum ADCP range, $\\textbf{b}_{a,t}$ is the bias in the ADCP measurement and $\\nu_{a}$ is the random noise in the ADCP measurement, with a standard deviation given by the sensor manufacturer.\n\nTo reduce the state number of the filter, the vertical velocity of the water currents are not estimated. The ADCP measurement model is a depth dependent function with two water current states, which linearly interpolates between them. The states are located at the vehicle position, and at a water volume at end of the ADCP measurement range. The water velocity and the ADCP bias state prediction models have a base time varying component, with a timescale of around one hour for the water current and half an hour for the bias, modeled as in \\eqref{bias_equation}. In addition to this, the water velocity state will vary more given spatial motion through a water current vector field. This component scales the process model uncertainty of the water velocity state according to the vehicle velocity. In this way, if the vehicle is slowly traveling through the water current vector field, it can account for the spatial scale of the water currents, which can depend on the environment. For example, water currents near complex bathymetry or strong wind and tides can contribute to smaller spatial scale water current velocity changes, compared to the case of the mid-water ocean \\cite{medagoda2015autonomous}.\n\n\\section{Results}\n\n\n\n\n\nAll the experiments have been made using the \\textit{FlatFish} AUV \\cite{albiez2015flatfish} shown in Fig. \\ref{fig:flatfish}.\nAs relevant sensors for our experiments, the vehicle is equipped with a KVH 1750 IMU, a Rowe SeaProfiler DualFrequency 300\/1200 kHz ADCP\/DVL, a Paroscientific 8CDP700-I pressure sensor, a u-blox PAM-7Q GPS receiver and six 60N Enitech ring thrusters. For heading evaluation purposes we also use a Tritech Gemini 720i Multibeam Imaging Sonar attached to the AUV.\nThe data sets have been collected during the sea trails of the second phase of the \\textit{FlatFish} project close to the shore of Salvador (Brazil) during April 2017.\n\nSince the experiments took place in the open ocean in all data sets, a fiber optic tether was attached to the vehicle for safety reasons. 
As a result, even though the vehicle model parameters were estimated with a prior system identification, there would be a large error associated with the model given the tether, so a $\\sim$20\\% uncertainty in the parameter values is assumed. Nonetheless, the filter is robustly capable of accounting for this increase in the uncertainty of the vehicle model parameters. This allows the filter to adaptively change the parameters while keeping them in a constrained range through the use of the first order Markov process model.
The filter is capable of running in 14$\\times$ real-time with an integration frequency of 100 Hz on the computing available on the \\textit{FlatFish} AUV.

\\subsection{Heading estimation experiment}
\\label{sec:heading_experiment}

In this data set we show that the filter is able to find its true heading without a global positioning reference, given an initial guess.
The mission consists of an initialization phase on the surface followed by a submerged phase before resurfacing. During the initialization phase the vehicle moves for around 8 minutes on a straight line in order to estimate its true heading and position by incorporating GPS measurements. In the submerged phase the vehicle changes its heading to face the target coordinate and follows a straight line for about 112 meters to reach it.

\\begin{figure}[!h]
 \\centering
 \\includegraphics [width=0.45\\textwidth,trim=0 0 0 5,clip] {pictures\/random_init_min_max_heading_with_and_without_gps.pdf}
 \\caption{The plots show the estimated heading during the mission given different filter configurations and initial headings distributed over 30$^{\\circ}$.
 The green crosses show independent landmark based heading measurements. 200 seconds into the mission the heading offset was corrected, resulting in the short change of attitude.}
 \\label{fig:heading_comp}
\\end{figure}

Fig. \\ref{fig:heading_comp} shows six runs of the same data set in different filter configurations. Three GPS-aided runs have initial headings distributed over 30$^{\\circ}$: one with a close initial guess (black line), one with a 15$^{\\circ}$ positive offset (cyan line) and one with a 15$^{\\circ}$ negative offset (blue line). With the help of the GPS measurements the estimated headings converge in the first 5 minutes.
The three runs that do not integrate a global position reference, starting with the same heading distribution, show that the filter is able to find its true heading by observing the rotation of the earth (gyrocompassing), relying only on inertial and velocity measurements.
After 15 minutes the GPS-aided and the non-GPS-aided estimated headings have converged with an uncertainty below 0.5$^{\\circ}$ (1$\\sigma$).
Initial errors $>$15$^{\\circ}$ will converge as well given more time. 
Critical however are initial errors close to 180$^{\\circ}$.\nThe green crosses show multiple independent measurements of the expected vehicle heading based on landmarks (poles) visible in the multibeam imaging sonar on the vehicle.\nThe average difference between the landmark based headings to the filter estimates is below 1$^{\\circ}$.\nWe expect the uncertainties of these measurements to be within 5$^{\\circ}$ due to the uncertainties associated with the pole positions in surveyed maps and in the sonar images.\n\n\\begin{figure}[!h]\n \\centering\n \\includegraphics [width=0.45\\textwidth,trim=0 0 0 5,clip] {pictures\/random_init_min_max_heading_convergence_with_and_without_gps.pdf}\n \\caption{The blue solid line is the error in heading with integrated GPS measurements.\n\t The red solid line is the error in heading without the integration of GPS measurements.\n\t The dashed lines are the corresponding uncertainties (1$\\sigma$).}\n \\label{fig:heading_diff}\n\\end{figure}\n\nUsing the GPS-aided heading with a close initial guess shown in Fig. \\ref{fig:heading_comp} (black solid line) as ground truth, we can have a closer look in Fig. \\ref{fig:heading_diff} on the uncertainties and how the estimates improve.\nIn Fig. \\ref{fig:heading_diff} both filter configurations start with an offset of -15$^{\\circ}$ to the ground truth and an initial uncertainty of 30$^{\\circ}$ (1$\\sigma$).\nThe GPS-aided heading estimate converges, as expected, quickly to the ground truth while staying in the 1$\\sigma$ bound.\nFor the heading estimate without global positioning reference we can see that the strong offset and high uncertainty in the beginning leads to a fast compensation in the correct direction with an overshot slightly exceeding the 1$\\sigma$ bound. As the experiment progresses we can see that observing different orientations helps to estimate the gyroscope bias and therefore helps to detect the error between the expected rotation of the earth given the current orientation. We have shown that our filter is able to estimate its true heading by observing the rotation of the earth and that observations from different attitudes help to improve the process.\n\n\\subsection{Repeated square path experiment}\n\n \n \n \n\nIn this experiment we show how the filter performs when the vehicle travels a longer distance of 1 km without horizontal position aiding measurements, such as GPS.\nThe vehicle was following a 5 times repeated square trajectory with an edge length of 50 meter for $\\sim$1 hour. After resurfacing, the position difference to the GPS ground truth is within 0.5\\% of the traveled distance.\n\nStarting with an initialization phase (same as in \\ref{sec:heading_experiment}) on the surface, to estimate its heading and position using GPS measurements, the vehicle submerges to 10 m depth, performs the mission and surfaces at the end.\nThe blue line in Fig. \\ref{fig:square_trajectory} shows the trajectory of the vehicle from minute 20 to minute 80 in the mission, i.e. 
1 minute before submerging and 2 minutes after surfacing.\nThe red dots are the GPS measurements including outliers.\n\n\\begin{figure}[!ht]\n \\centering\n \\includegraphics [width=0.45\\textwidth,trim=0 0 0 5,clip] {pictures\/5x50m_square_trajectory.pdf}\n \\caption{The blue solid line shows the trajectory of the vehicle performing a 50 meter square trajectory 5 times at a depth of 10 meters.\n\t After traveling a distance of 1 km, the horizontal (North\/East plane) position difference is within 5 meters (0.5\\% of distance traveled).}\n \\label{fig:square_trajectory}\n\\end{figure}\n\nThe pose filter used on the vehicle at the time the data set was created was not aware of the drift and the initial error in heading.\nOur filter can correct the heading by observing the rotation of the earth and compensate for DVL dropouts utilizing the motion model.\nHowever, during the mission a fiber optic tether was attached to the vehicle, which represents an unmodeled source of error.\n\n\\begin{figure}[!ht]\n \\centering\n \\includegraphics [width=0.45\\textwidth,trim=0 0 0 -2,clip] {pictures\/5x50m_square_horizontal_pos_error.pdf}\n \\caption{The blue solid line shows the horizontal (North\/East plane) position difference with respect to the GPS measurements (including outliers).\n\t The red and magenta dashed lines represent the corresponding uncertainty (1$\\sigma$ and 2$\\sigma$).}\n \\label{fig:square_pos_diff}\n\\end{figure}\n\nThe blue line in Fig. \\ref{fig:square_pos_diff} shows the position difference on the North\/East plane with respect to the GPS measurements (including outliers).\nDuring the first 20 minutes of the mission the GPS measurements are integrated into the filter, allowing initialization.\nAfter resurfacing (minute 78 and onward) the GPS measurements are not integrated, allowing us to observe the difference from the ground truth.\nAfter traveling a distance of 1 km the position difference is within 5 meters (0.5\\% of distance traveled) and within the 2$\\sigma$ bound of the position uncertainty.\n\n\\begin{figure}[!ht]\n \\centering\n \\includegraphics [width=0.45\\textwidth,trim=0 0 0 5,clip] {pictures\/5x50m_square_water_current.pdf}\n \\caption{Estimated water current in north (red) and east (blue) direction. The dashed lines represent the corresponding uncertainties (2$\\sigma$).}\n \\label{fig:5x50m_square_water_current}\n\\end{figure}\n\nIn the case that ADCP measurements are not available, the filter estimates the water currents only by the difference between the motion model based velocity and the DVL based velocity, as modeled in \\eqref{eq:velocity_body}.\nFig. \\ref{fig:5x50m_square_water_current} shows the estimated water current velocities in North and East direction during this experiment without the aiding of ADCP measurements.\nDuring the first 20 minutes the uncertainties of the water current velocities stay constant, since we apply the model-aiding measurements with an increased uncertainty in case the vehicle is surfaced.\nWhen the mission starts and the vehicle submerges (starting around minute 21) to a depth of 10 meters, we can see that the estimated water flow changes with respect to the one at the surface and that its velocity continuously increases during the 1 hour mission.
The uncertainties reduce during this phase since we trust the model more when submerged.\nThe impact of the tether attached to the vehicle is seen as an unmodeled but estimated drag, which changes depending on the direction the vehicle travels.\n\n\\begin{figure}[t]\n \\centering\n \\includegraphics [width=0.45\\textwidth] {pictures\/5x50m_square_damping_terms.pdf}\n \\caption{Linear damping in x (red) and y (blue) direction in the body frame. The dashed lines represent the corresponding uncertainties (2$\\sigma$).}\n \\label{fig:5x50m_damping}\n\\end{figure}\n\nFig. \\ref{fig:5x50m_damping} shows the linear damping terms on the x and y-axis in the body frame of the vehicle and how they are refined during the mission.\nBecause the vehicle travels during the mission mainly in the forward direction, the damping term on the x-axis is refined and the corresponding uncertainty reduces more compared to the y-axis damping term. The uncertainty reduction reaches a limit however due to observability, and the first order Markov process model ensures that the parameters become neither overconfident nor unconstrained. In this way, the model parameters can adapt with time to new conditions and implicitly represents some uncertainty in the model equations themselves.\n\n\n\\subsection{Square path with ADCP}\n\n\\begin{figure}[!ht]\n \\centering\n \\includegraphics [width=0.45\\textwidth,trim=5 10 5 25,clip] {pictures\/adcp_traj.pdf}\n \\caption{The solid blue line shows the trajectory of the vehicle performing a square path in a depth of 2 meter while surfacing in each corner.}\n \\label{fig:adcp_traj}\n\\end{figure}\n\n\\begin{figure*}[!ht]\n \\centering\n \\includegraphics [width=0.75\\textwidth,trim=0 5 0 0,clip] {pictures\/adcp_compare.pdf}\n \\caption{Square path with ADCP - The position uncertainties and differences from the ground truth are compared for different data denials.}\n \\label{fig:adcp_compare}\n\\end{figure*}\n\n\\begin{table}[]\n\\caption{Filter position difference from ground truth and estimated uncertainty}\n\\label{adcp_table}\n\\centering\n\\begin{tabular}{|l|l|l|}\n\\hline\nFilter measurements used & \\thead{Estimated uncertainty\\\\after 1000 seconds} & \\thead{Position difference \\\\from ground truth\\\\ after 1000 seconds} \\\\ \\hline\nInertial + ADCP & 50.9 m (2$\\sigma$) & 22.0 m \\\\ \\hline\nInertial + model & 45.7 m (2$\\sigma$) & 21.5 m \\\\ \\hline\nInertial + model + ADCP & 32.3 m (2$\\sigma$) & 16.1 m \\\\ \\hline\n\\end{tabular}\n\\end{table}\n\nThis mission undergoes a 600 second initialization phase on the surface (as in \\ref{sec:heading_experiment}), then 1000 seconds of data denial to show the performance of the filter in different scenarios. During the data denial phase, the vehicle completes a square trajectory, and surfaces at the corners. The ground truth trajectory is shown in Fig. \\ref{fig:adcp_traj}. The ground truth is determined using Inertial, DVL, GPS, ADCP and model-aiding.\n\nSince this mission also includes ADCP measurements interleaved with DVL, the ADCP-aiding update is applied. During this mission, there are cases where the downward facing DVL drops out due to very low altitude (between 0-2m during the mission), and there is collision with the sandy bottom. Despite this challenging data set, the filter is capable of estimating the position of the vehicle, validated by the smooth trajectory without sudden corrections at the GPS measurements during the corner surfacing shown in Fig. 
\\ref{fig:adcp_traj}.\n\nWith the full measurement filter (without data denial), the filter is able to handle DVL drop outs, which could be the case in low-altitude scenarios such as inspection or docking, by letting the model-aiding fill in during these time periods. Data denial further validates the filter performance in DVL loss scenarios, as shown in Fig. \\ref{fig:adcp_compare}. In cases of DVL bottom-lock loss due to altitude being too high (simulated through data-denial), the ADCP and model-aiding combined gives the best solution, compared to either ADCP or model-aiding alone.\n\nThe position estimate differences compared to the ground truth for these data denials are consistent with the 2$\\sigma$ uncertainty bounds, while remaining stable. At approximately 400 seconds following data denial, the filter with only ADCP and inertial measurements appears to slightly exceed the 2$\\sigma$ bounds, due to a low altitude section with very little valid ADCP measurements, and some ADCP outliers are incorporated into the filter since the innovation gate increases due to inertial-only dead-reckoning. The ground truth also increases in uncertainty at this stage due to the lack of DVL measurements, relying more on the model-aiding. Following further measurements, the filter recovers and is able to reduce the difference between the filter estimate and the ground truth. This is possible since the water current estimate will not vary significantly in this timescale, so that the vehicle can use this state when there are ADCP measurements available again to estimate the velocity and thus position of the vehicle.\n\nThe ADCP-aiding typically performed worse in this case than the model-aiding, but this can be attributed to low altitude where there are very few valid ADCP measurements available. Nonetheless, incorporating these ADCP measurements into the model-aiding improved on the performance of either option. In addition to another source of velocity-aiding information from the ADCP, it also allows an independent source of information regarding the water currents surrounding the vehicle, which is required to transform the water relative velocity of the vehicle model to the navigation frame position used in the filter.\n\nThe results are further quantitatively compared in Table \\ref{adcp_table}. The combination of the ADCP-aiding and model-aiding results in a significant improvement compared to model-aiding alone, reducing position uncertainty from 45.7m (2$\\sigma$) to 32.3m (2$\\sigma$) during 1000 seconds of data denial.\n\n\n\n\n\\section{Conclusions}\n\n\nThe filter designed and implemented in this paper would be appropriate for general AUV navigation, despite not using a navigation grade IMU. In comparison to \\cite{hegrenaes2011model}, the primary insight to the design of this filter is the incorporation of the acceleration state, and adding many parameters as states to account for their correlated error, while modeling with a first order Markov process to constrain the change the filter can apply. The engineering design trade-off is that adding too many states will unnecessarily add computational complexity and potential filtering instability. \n\nThis furthers the state-of-the-art for robust filter design for INS, model-aiding and ADCP measurements, capable of real-time performance, consistency and stability as outlined in the experiments, while remaining conceptually simple. 
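As a rough illustration of this design choice, the sketch below shows how a single time-varying parameter state (for example a damping coefficient or a sensor bias, expressed as a deviation from its nominal identified value) can be propagated as a first order Markov process in the prediction step; it is only a schematic in scalar form, and the correlation time and steady-state standard deviation are placeholder values rather than the ones used on \\textit{FlatFish}.\n\\begin{verbatim}\nimport numpy as np\n\ndef propagate_markov_parameter(theta, P, dt, tau=1800.0, sigma_ss=0.1):\n    # theta: deviation of the parameter from its nominal value\n    # P: variance of that deviation, dt: integration step [s]\n    # tau: correlation time [s], sigma_ss: steady-state std (placeholders)\n    phi = np.exp(-dt \/ tau)           # decay toward the nominal value\n    q = sigma_ss**2 * (1.0 - phi**2)  # discrete-time process noise\n    theta_pred = phi * theta          # drift is pulled back toward zero\n    P_pred = phi**2 * P + q           # variance stays bounded by sigma_ss**2\n    return theta_pred, P_pred\n\\end{verbatim}\nBecause the stationary variance of this process is exactly $\\sigma_{ss}^{2}$, the parameter can adapt to new conditions without becoming either overconfident or unconstrained, which is the behavior reported in the experiments above.\n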
\nThis paper has shown a manifold-based UKF that applies a novel strategy for inertial, model-aiding and ADCP measurement incorporation.\nThe filter is capable of observing and utilizing the Earth's rotation for heading estimation to within 1$^{\\circ}$ (2$\\sigma$) by estimating the KVH 1750 IMU biases. \nThe drag and thrust model-aiding accounts for the correlated nature of vehicle model parameter error by including the parameters as states in the filter. The usage of the model-aiding is validated through observing that the filter remains consistent and does not become overconfident or unstable in the real-world experiments, despite uncertain vehicle model parameters. \n\nIt is hypothesized that the usage of time-varying first order Markov processes to model these parameters acts as a way to implement ``model uncertainty'', improving the robustness of the filter as we no longer fully trust our model to be a perfect representation of the true dynamics, which is most definitely the case when applying a simplified and computationally tractable model, intended for real-time usage, to the real world.\nADCP-aiding provides further information for the model-aiding in the case of DVL bottom-lock loss. The importance of water current estimation is highlighted in underwater navigation in the absence of external aiding, justifying the use of the model-aiding and ADCP sensor. Through data denial, scenarios with no DVL bottom lock are shown to be consistently estimated. Additionally, this work was implemented using the MTK and ROCK framework in C++, and is capable of running in 14$\\times$ real-time on computing available on the \\textit{FlatFish} AUV.\n\n\nFuture work would include full spatiotemporal real-time ADCP based methods to more accurately model and observe the water current state around the vehicle. This requires implementing a mapping approach, such as the work in \\cite{medagoda2016mid,medagoda2015autonomous}.\nThe primary source of bias uncertainty for the KVH 1750 IMU is due to temperature change. If the temperature of the IMU can be controlled, or this bias can be calibrated with further experiments, then the performance can be further improved. Further heading evaluation will be possible with better ground truth, such as a visual confirmation or by utilizing an independent heading estimator such as an iXblue PHINS, so that a more accurate heading comparison can be undertaken. The error in alignments of sensors could also be further compensated, perhaps by adding states to the filter similar to the strategy for other systematic biases. Finally, further experiments and implementations in a variety of scenarios are planned to further test and refine the proposed filtering strategy.\n\n\\addtolength{\\textheight}{-12cm} \n\n\\section*{Acknowledgement}\n\nWe would like to thank Shell and SENAI CIMATEC for the opportunity to test the presented work on \\textit{FlatFish}.\n\nWe would also like to thank all colleagues of the \\textit{FlatFish} team for their support and Javier Hidalgo-Carri\\'o for his review.\n\nThis work was supported in part by the EurEx-SiLaNa project (grant No. 50NA1704), which is funded by the German Federal Ministry of Economics and Technology (BMWi).\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\\label{sec:intro}\nDislocations can alter different stages of the precipitation process in crystalline solids, which consists of nucleation, growth and coarsening \\cite{Larche_1979,Wagner_Kampmann_1991}.
Distortion of the\nlattice in proximity of a dislocation can enhance nucleation in\nseveral ways \\cite{Porter_Easterling_1981,Christian_2002}. The main\neffect is the reduction in the volume strain energy associated with\nthe phase transformation. Nucleation on dislocations can also be\nhelped by solute segregation which raises the local concentration of\nthe solute in the vicinity of a dislocation, caused by migration of\nsolutes toward the dislocation, the Cottrell atmosphere effect. When\nthe Cottrell atmosphere becomes supersaturated, nucleation of a new\nphase may occur followed by growth of nucleus. Moreover, dislocation\ncan aid the growth of an embryo beyond its critical size by providing\na diffusion passage with a lower activation energy.\n\n\nPrecipitation of second-phase along dislocation lines has been\nobserved in a number of alloys\n\\cite{Aaronson_et_al_1971,Aaron_Aaronson_1971}. For example, in\nAl-Zn-Mg alloys, dislocations not only induce and enhance nucleation\nand growth of the coherent second-phase MgZn$_2$ precipitates, but\nalso produce a spatial precipitate size gradient around them\n\\cite{Allen_Vandesande_1978,Deschamps_et_al_1999,Deschamps_Brechet_1999}.\nCahn \\cite{Cahn_1957} provided the first quantitative model for\nnucleation of second-phase on dislocations in solids. In Cahn's\nmodel, it is assumed that a cross-section of the nucleus is circular,\nwhich is strictly valid for a screw dislocation\n\\cite{Larche_1979}. Also, it is posited that the nucleus is incoherent\nwith the matrix so that a constant interfacial energy can be allotted\nto the boundary between the new phase and the matrix. An incoherent\nparticle interface with the matrix has a different\natomic configuration than that of the phases. The matrix is\nan isotropic elastic material and the formation of the precipitate\nreleases the elastic energy initially stored in its volume. Moreover,\nthe matrix energy is assumed to remain constant by precipitation. In\nthis model, besides the usual volume and surface energy terms in the\nexpression for the total free energy of formation of a nucleus of a\ngiven size, there is a term representing the strain energy of the\ndislocation in the region currently occupied by the new phase. Cahn's\nmodel predicts that both a larger Burgers vector and\na more negative chemical free energy change between the precipitate\nand the matrix induce higher nucleation rates, in agreement with\nexperiment \\cite{Aaronson_et_al_1971,Aaron_Aaronson_1971}.\n\n\nSegregation phenomenon around dislocations, i.e. the Cottrell\natmosphere effect, has been observed among others in Fe-Al alloys\ndoped with boron atoms \\cite{Blavette_et_al_1999} and in silicon\ncontaining arsenic impurities \\cite{Thompson_et_al_2007}, in\nqualitative agreement with Cottrell and Bilby's predictions\n\\cite{Cottrell_Bilby_1949}. Cottrell and Bilby considered segregation\nof impurities to straight-edge dislocations with the Coulomb-like\ninteraction potential of the form $\\phi=A\\sin \\theta\/r$, where $A$\ncontains the elasticity constants and the Burgers vector, and\n$(r,\\theta)$ are the polar coordinates. Cottrell and Bilby ignored the\nflow due concentration gradients and solved the simplified diffusion\nequation in the presence of the aforementioned potential field. 
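For concreteness, the simplification amounts to dropping the concentration-gradient contribution to the flux, which leaves a drift-only equation of the form\n\\[\n\\frac{\\partial c}{\\partial t}=D\\beta\\,\\nabla\\cdot\\big(c\\,\\nabla\\phi\\big),\\qquad \\phi=\\frac{A\\sin\\theta}{r},\n\\]\nwhere $c$ is the impurity concentration, $D$ the diffusivity and $\\beta=1\/k_BT$; this is our paraphrase of the Cottrell--Bilby approximation in the notation used later in this paper, not their original formulation.\n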
The model predicts that the total number of impurity atoms removed from solution to the dislocation increases with time $t$ according to $N(t) \\sim t^{2\/3}$, which is in good agreement with the early stages of segregation of impurities to dislocations, e.g. in iron containing carbon and nitrogen \\cite{harper_1951}. A critical review of the Bilby-Cottrell model, its shortcomings and its improvements is given in \\cite{Bullough_Newman_1970}.\n\n\nThe object of our present study is the diffusion-controlled growth of a new phase, i.e., a post-nucleation process in the presence of a dislocation field rather than the segregation effect. As in Cahn's nucleation model \\cite{Cahn_1957}, we consider an incoherent second-phase precipitate growing under the action of a screw dislocation field. This entails that the stress field due to the dislocation is pure shear. The equations used for diffusion-controlled growth are radially symmetric. These equations, for a second phase growing in a solid or from a supercooled liquid in the absence of an external field, have been solved by Frank \\cite{Frank_1950} and discussed by Carslaw and Jaeger \\cite{Carslaw_Jaeger_1959}. The exact analytical solutions of the equations and various approximations thereof have been systematized and evaluated by Aaron et al. \\cite{Aaron_et_al_1970}, which included the relations for growth of planar precipitates. Applications of these solutions to materials can be found in many publications, e.g. more recent papers on growth of the quasi-crystalline phase in Zr-base metallic glasses \\cite{Koster_et_al_1996} and growth of the Laves phase in Zircaloy \\cite{Massih_et_al_2003}. We should also mention another theoretical approach to the problem of nucleation and growth of an incoherent second-phase particle in the presence of a dislocation field \\cite{Sundar_Hoyt_1991}. Sundar and Hoyt \\cite{Sundar_Hoyt_1991} introduced the dislocation field, as in Cahn \\cite{Cahn_1957}, in the nucleation part of the model, while for the growth part the steady-state solution of the concentration field (Laplace equation) for elliptical particles was utilized.\n\n\nThe organization of this paper is as follows. The formulation of the problem, the governing equations and the formal solutions are given in section \\ref{sec:formul}. Solutions of specific cases are presented in section \\ref{sec:comp}, where the supersaturation as a function of the growth coefficient is evaluated as well as the spatial variation of the concentration field in the presence of a dislocation. In section \\ref{sec:disc}, besides a brief discourse on the issue of interaction between point defects and dislocations, we calculate the size-dependence of the concentration at the curved precipitate\/matrix interface for the problem under consideration. We have carried out our calculations in space dimensions $d=2$ and $d=3$. Some mathematical analyses for $d=3$ are relegated to appendix \\ref{sec:appa}.\n\n\n\n\\section{Formulation and general solutions}\n\\label{sec:formul}\n\n We consider the problem of growth of the new phase, with radial symmetry (radius $r$), governed by the diffusion of a single entity, $u\\equiv u(r,t)$, which is a function of space and time $(r,t)$. $u$ can be either matter (solvent or solute) or heat (the latent heat of formation of the new phase).
The diffusion in the presence of an external\nfield obeys the Smoluchowski equation \\cite{Chandra_1943} of the form\n\\begin{eqnarray}\n \\label{eqn:smolu}\n \\frac{\\partial u}{\\partial\nt} & = & \\nabla \\cdot \\mathbf{J}, \\\\\n\\label{eqn:smolu-flux}\n \\mathbf{J} & = & D(\\nabla u-\\beta\\mathbf{F}u),\n\\end{eqnarray}\n\\noindent\nwhere $D$ is the diffusivity, $\\beta=1\/k_BT$, $k_B$ the Boltzmann\nconstant, $T$ the temperature, and $\\mathbf{F}$ is an external field\nof force. The force can be local (e.g., stresses due to dislocation\ncores in crystalline solids) or caused externally by an applied field (e.g., electric field\nacting on charged particles). If the acting force is conservative, it\ncan be obtained from a potential $\\phi$ through $\\mathbf{F}=-\\nabla\n\\phi$. The considered geometric condition applies\nto the case of second-phase particles growing in a\nsolid solution under phase transformation \\cite{Massih_et_al_2003} or\ndroplets growing either from vapour or from a second liquid\n\\cite{Frank_1950}. A steady state is reached when\n$\\mathbf{J}=\\mathrm{const.}=0$, resulting in\n$u=u_0\\exp(-\\beta\\phi)$.\n\nHere, we suppose that the diffusion field is along\nthe core of dislocation line and that a cross-section of the\nprecipitate (nucleus), perpendicular to the dislocation, is circular,\ni.e., the precipitate surrounds the dislocation. Furthermore, we treat\nthe matrix and solution as linear elastic isotropic media. The elastic\npotential energy of a stationary dislocation of length $l$ is given by\n\\cite{Kittel_1996,Friedel_1967}\n\\begin{equation}\n \\label{eqn:screw}\n \\phi = A\\ln\\frac{r}{r_0}, \\qquad \\qquad \\qquad \\textrm{for}\\quad r\\ge r_0\n\\end{equation}\n\\noindent\nwhere $A=Gb^2l\/4\\pi$ for screw dislocation, $G$ is the elastic shear\nmodulus of the crystal, $b$ the magnitude of the Burgers vector, $\\nu$\nPoisson's ratio, and $r_0$ is the usual effective core radius. Also,\nwe assume that the dislocation's elastic energy is relaxed within the\nvolume occupied by the precipitate and that the precipitate is\nincoherent with the matrix. Hence the interaction energy between the\nelastic field of the screw dislocation and the elastic field of the solute\nis zero. In the case of an edge dislocation and coherent\nprecipitate\/matrix interface, this interaction is non-negligible.\n\n\nWe study the effect of the potential field (\\ref{eqn:screw}) on\ndiffusing atoms in solid solution using the Smoluchowski\nequation (\\ref{eqn:smolu}). 
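For the logarithmic potential (\\ref{eqn:screw}), the force entering the flux (\\ref{eqn:smolu-flux}) is purely radial, $\\mathbf{F}=-\\nabla\\phi=-(A\/r)\\,\\hat{\\mathbf{e}}_r$, so that\n\\[\n\\mathbf{J}=D\\Big(\\nabla u+\\frac{\\beta A}{r}\\,u\\,\\hat{\\mathbf{e}}_r\\Big);\n\\]\ntaking the divergence of this flux in $d$ spatial dimensions leads directly to the governing equation below.\n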
The governing\nequation in spherical symmetry, in $d$ spatial dimension, with $B \\equiv \\beta A$, is \n\\begin{equation}\n \\label{eqn:pde-1}\n \\frac{1}{D}\\frac{\\partial u}{\\partial t}=\n \\frac{\\partial^2 u}{\\partial r^2}+(d-1+B)\\frac{1}{r}\\frac{\\partial\nu}{\\partial r}+(d-2)B\\frac{u}{r^2}.\n\\end{equation}\n\\noindent\n Making a usual change of variable to the\ndimensionless reduced radius $s=r\/\\sqrt{Dt}$, the partial differential\nequation (\\ref{eqn:pde-1}) is reduced to an ordinary differential\nequation of the form\n\\begin{equation}\n \\label{eqn:ode-1}\n \\frac{{\\rm d^2} u}{{\\rm d}s^2}+\\Big(\\frac{s}{2}+\\frac{d-1+B}{s}\\Big)\\frac{{\\rm\nd}u}{{\\rm d}s}\n+(d-2)B\\frac{u}{s^2}=0,\n\\end{equation}\n\\noindent\nwith the boundary conditions, $u(\\infty) = u_m$, and $u(2\\lambda) =\nu_s$, where $u_m$ is the mean (far-field) solute concentration in the\nmatrix and $u_s$ is the concentration in the matrix at the\nnew-phase\/matrix interface determined from thermodynamics of new\nphase, i.e., phase equilibrium and the capillary effect. Moreover,\nthe conservation of flux at the interface radius $R=2\\lambda\\sqrt{Dt}$ gives\n\\begin{equation}\n \\label{eqn:flux-1}\n K_d R^{d-1}\\vert \\mathbf{J}\\vert_{r=R} = q \\frac{{\\rm d}V_d}{{\\rm d}t},\n\\end{equation}\n\\noindent\nwhere $K_d=2\\pi^{d\/2}\/\\Gamma(d\/2)$, $\\Gamma(x)$ the usual\n$\\Gamma$-function, $V_d=2\\pi^{d\/2}R^d\/d\\Gamma(d\/2)$, and $q$ the\namount of the diffusing entity ejected at the boundary of the growing\nphase per unit volume of the latter (new phase) formed. In $s$-space,\nequation (\\ref{eqn:flux-1}) is written as \n\\begin{equation}\n \\label{eqn:conserve-flux}\n \\Big(\\frac{{\\rm d}u}{{\\rm d}s}\\Big)_{s=2\\lambda} = -\\Big(\\frac{Bu_s}{2\\lambda}+q\\lambda\\Big).\n\\end{equation}\n\\noindent\nThe boundary condition $u(2\\lambda) = u_s$ and equation (\\ref{eqn:conserve-flux})\nwill provide a relationship between $u_s$ and $u_m$ through $\\lambda$.\n\nFor $d=2$, equation (\\ref{eqn:ode-1}) is very much simplified, and we find \n\\begin{equation}\n \\label{eqn:sol-2d}\n u(s)=u_m+\\frac{(Bu_m+2q\\lambda^2)\\lambda^B e^{\\lambda^2}\\Gamma(-B\/2,s^2\/4)}\n {2-B\\lambda^Be^{\\lambda^2}\\Gamma(-B\/2,\\lambda^2)},\n\\end{equation}\n\\noindent\nwhere we utilized $u(\\infty) = u_m$ and equation (\\ref{eqn:conserve-flux}). Here\n $\\Gamma(a,z)$ is the incomplete\ngamma function defined by the integral $\\Gamma(a,z)=\\int_z^\\infty\nt^{a-1}e^{-t}dt$ \\cite{Abramowitz_Stegun_1964}. The yet unknown parameter\n$\\lambda$ is found from relation (\\ref{eqn:sol-2d}) at $u(2\\lambda) =\nu_s$ for a set of input parameters $u_s$, $u_m$ $q$, and $B$, through which the concentration\nfield, equation (\\ref{eqn:sol-2d}), and the growth of second-phase\n($R=2\\lambda\\sqrt{Dt}$) are determined.\n\n\nLet us consider the case of $d=3$, that is assume that the\npotential in equation (\\ref{eqn:screw}) is meaningful for a\nspherically symmetric system. In this case, for $B \\ne 0$, the point $z=0$ is\na regular singularity of equation\n(\\ref{eqn:ode-1}), while $z=\\infty$ is an irregular singularity for\nthis equation, see appendix \\ref{sec:appa} for further\nconsideration. 
Nevertheless, for\n$d=3$, the general solution of equation (\\ref{eqn:ode-1}) is expressed in the form\n\\begin{equation}\n \\label{eqn:sol-3d}\n u(s)=\n2C_1\\,{_1\\!F}_1\\Big(-\\frac{1}{2};\\frac{1+B}{2};-\\frac{s^2}{4}\\Big)s^{-1}\n+ 2^{B}C_2\\,{_1\\!F}_1\\Big(-\\frac{B}{2};\\frac{3-B}{2};-\\frac{s^2}{4}\\Big) s^{-B},\n\\end{equation}\n\\noindent\nwhere ${_1\\!F}_1(a;b;z)$ is Kummer's confluent hypergeomtric function,\nsometimes denoted by $M(a,b,z)$ \\cite{Abramowitz_Stegun_1964}. The\nintegration constants $C_1$ and $C_2$ in equation (\\ref{eqn:sol-3d}) can be determined\nby invoking equation (\\ref{eqn:conserve-flux}) and also the condition\n$u(\\infty)=u_m$, cf. appendix \\ref{sec:appa}.\n\n\\section{Computations}\n\\label{sec:comp}\n\nTo study the growth behavior of a second-phase in a solid solution\nunder the action of screw dislocation field, we attempt to compute the\ngrowth rate constant as a function of the supersaturation parameter $k$,\ndefined as $k \\equiv (u_s-u_m)\/q_u$ with $q_u=u_p-u_s$, where $u_p$\n is the composition of the nucleus \\cite{Aaron_et_al_1970}. For $d=2$,\ni.e., a cylindrical second-phase platelet, equation (\\ref{eqn:sol-2d}) with\n$u(2\\lambda) = u_s$ yields\n\n\\begin{equation}\n k = \\Bigg[\\frac{2\\lambda^2+Bu_m(u_p-u_s)^{-1}}{2-B\\lambda^B e^{\\lambda^2}\\Gamma\\big(-B\/2,\\lambda^2\\big)}\\Bigg]\n \\lambda^B e^{\\lambda^2}\\Gamma\\big(-B\/2,\\lambda^2\\big).\n\\label{eqn:supsat-2d-exact}\n\\end{equation}\n\\noindent\nFor $B=0$, the relations obtained by Frank \\cite{Frank_1950} are recovered,\nnamely\n\\begin{eqnarray}\n u(z) & = & u_m + q_u \\lambda^2e^{\\lambda^2}E_1(z^2\/4), \\label{eqn:sol-0d2} \\\\\n k & = & \\lambda^2 e^{\\lambda^2}E_1(\\lambda^2), \\label{eqn:flux-0d2}\n\\end{eqnarray}\n\\noindent\nwhere $E_1(x)$ is the exponential integral of order one, related to the\nincomplete gamma function through the identity\n$E_n(x)=x^{n-1}\\Gamma(1-n,x)$ \\cite{Abramowitz_Stegun_1964}.\n\nFrom equation (\\ref{eqn:supsat-2d-exact}), it is seen that a complete separation of the supersaturation parameter\n$k \\equiv (u_s-u_m)(u_p-u_s)^{-1}$ is not possible for $B \\ne\n0$. However, for $u_s << u_p$ (a reasonable proviso) we write\n\\begin{equation}\n k = \\Big(\\lambda^2 + \\frac{B}{2}\\,\\epsilon\\Big)\\, \n \\lambda^B e^{\\lambda^2}\\Gamma\\big(-B\/2,\\lambda^2\\big)+\\mathcal{O}(\\epsilon^2),\n\\label{eqn:supsat-2d-approx}\n\\end{equation}\n\\noindent\nwith $\\epsilon \\equiv u_s\/u_p$. For $B=1$, equations\n(\\ref{eqn:sol-2d}) and (\\ref{eqn:supsat-2d-approx}) yield, respectively\n\\begin{eqnarray}\n u(z) & = & u_m + \\frac{2\\lambda\\,e^{\\lambda^2}\\,(u_m + 2q_u\\lambda^2)\\,E_{3\/2}(z^2\/4)}\n{[2-e^{\\lambda^2} E_{3\/2}(\\lambda^2)]z},\n \\label{eqn:sol-1d2} \\\\\n k & = & \\Big(\\lambda^2 + \\frac{\\epsilon}{2}\\Big)\\,e^{\\lambda^2}E_{3\/2}(\\lambda^2)+\\mathcal{O}(\\epsilon^2).\n\\label{eqn:flux-1d2}\n\\end{eqnarray}\n\\noindent\nSimilarly for $B=2$, we have \n\\begin{eqnarray}\n u(z) & = & u_m + \\frac{4\\lambda^2e^{\\lambda^2}(u_m + q_u \\lambda^2)E_2(z^2\/4)\n}{[1-e^{\\lambda^2} E_{2}(\\lambda^2)]z^2},\n \\label{eqn:sol-2d2} \\\\\n k & = & (\\lambda^2 + \\epsilon)E_2(\\lambda^2)+\\mathcal{O}(\\epsilon^2).\n\\label{eqn:flux-2d2}\n\\end{eqnarray}\n\\noindent\n\nWe have plotted the growth coefficient $\\lambda=R\/2\\sqrt{Dt}$ as a\nfunction of the supersaturation parameter $k$ in figure\n\\ref{fig:k-2d} and the spatial variation of the concentration field\nin figure \\ref{fig:u-2d} for $d=2$ and several values of $B$. 
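As an indication of how such curves can be reproduced numerically (this is only a sketch under stated assumptions, not the code used to generate the figures), relation (\\ref{eqn:supsat-2d-approx}) can be evaluated with standard special functions: for even $B$ the identity $\\Gamma(-B\/2,x)=x^{-B\/2}E_{1+B\/2}(x)$ reduces the incomplete gamma function to an exponential integral of integer order, and, as figure \\ref{fig:k-2d} shows, $k$ increases monotonically with $\\lambda$, so the relation can be inverted by a bracketing root-finder.\n\\begin{verbatim}\n# Sketch: evaluate and invert the d=2 supersaturation relation for even B.\nimport numpy as np\nfrom scipy.special import expn      # exponential integral E_n(x), integer n\nfrom scipy.optimize import brentq\n\ndef k_of_lambda(lam, B, eps=0.01):\n    # (lam**2 + B*eps\/2) * exp(lam**2) * E_{1+B\/2}(lam**2), obtained from\n    # the approximate d=2 supersaturation relation with the identity above\n    x = lam**2\n    return (x + 0.5 * B * eps) * np.exp(x) * expn(1 + B \/\/ 2, x)\n\ndef lambda_of_k(k, B, eps=0.01):\n    # bracketing root-finder; widen the upper bracket if k is close to\n    # its large-lambda limit\n    return brentq(lambda lam: k_of_lambda(lam, B, eps) - k, 1e-8, 6.0)\n\\end{verbatim}\nFor $B=0$ this expression reduces to Frank's relation (\\ref{eqn:flux-0d2}).\n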
The\ncomputations are performed to $\\mathcal{O}(\\epsilon^2)$ with\n$\\epsilon=0.01$. Figure \\ref{fig:k-2d} shows that $\\lambda$ is an\nincreasing function of $k$; and also, as $B$ is raised $\\lambda$ is\nelevated. This means that an increase in the amplitude of dislocation\nforce (e.g., the magnitude of the Burgers vector) enhances second-phase\ngrowth in an alloy.\n\nFigure \\ref{fig:u-2d} displays the reduced concentration versus the\nreduced radius $z=r\/\\sqrt{Dt}$ for $\\lambda=1$. The reduced\nconcentration is calculated via equation (\\ref{eqn:sol-2d}). It is seen that for\n$z \\lesssim 1.6$ the concentration is enriched with increase in $B$,\nwhereas for $z \\gtrsim 1.6$, it is vice versa. So, for\n$\\lambda=1$, the crossover $z$-value is $z_c \\approx 1.6$. Also, as\n$\\lambda$ is reduced, $z_c$ is decreased.\n\n\\begin{figure}[htbp]\n\\begin{center}\n \\includegraphics[width=0.80\\textwidth]{fig\/lambda_k2d}\n \\caption{ Growth coefficient $\\lambda$ as a function of supersaturation $k$ \n at various levels of dislocation force\namplitude $B$ for a circular plate ($d=2$) and $u_s=0.01u_p$.}\n\\label{fig:k-2d}\n \\end{center}\n\\end{figure}\n\\begin{figure}[htbp]\n \\begin{center}\n \\includegraphics[width=0.80\\textwidth]{fig\/u2d}\n\\caption{Reduced concentration field as a function of reduced distance\n from the surface of the circular plate ($d=2$) at various levels of dislocation force\namplitude $B$ and at $\\lambda=1$.}\n\\label{fig:u-2d}\n\\end{center}\n\\end{figure}\n\nFor $d=3$, i.e., a spherical second-phase particle in the absence of\ndislocation field ($B=0$), we find\n\\begin{eqnarray}\n u(z) & = & u_m + 2 q_u\n\\lambda^3 e^{\\lambda^2}\\Big[\\frac{2e^{-z^2\/4}}{z}\n-\\sqrt{\\pi}\\,\\mathrm{erfc}(z\/2)\\Big],\n \\label{eqn:sol-3d-b0} \\\\\n k & = & 2\\lambda^2\\Big[1-\\sqrt{\\pi}\\,\\lambda\\, e^{\\lambda^2}\\mathrm{erfc}(\\lambda)\\Big].\n\\label{eqn:flux-3d-b0}\n\\end{eqnarray}\n\\noindent\nThis corresponds to the results obtained by Frank\n\\cite{Frank_1950}.\n\n For $d=3$ and $B=2$, equation (\\ref{eqn:ode-1}) is\nsimplified and an analytical solution can be found, resulting in\n\\begin{eqnarray}\n u(z) & = & \\Bigg(\\frac{e^{z^2\/4}(z^2+2)\\Big[\\sqrt{\\pi}\\lambda\ne^{\\lambda^2}\\Big(\\mathrm{erf}(\\frac{z}{2})-\\mathrm{erf}(\\lambda)\\Big)-1\\Big]\n+2\\lambda e^{\\lambda^2}z}{\\sqrt{\\pi}\\lambda e^{\\lambda^2}\\mathrm{erfc}(\\lambda)-1}\\Bigg)\n\\frac{e^{-z^2\/4}}{z^2}u_m +\n\\nonumber\\\\\n & & + \\;\n \\Bigg(\\frac{\\lambda^3 e^{\\lambda^2}\\Big[2z-\\sqrt{\\pi}e^{z^2\/4}(z^2+2)\\mathrm{erfc}(\\frac{z}{2})\\Big]}\n{\\sqrt{\\pi}\\lambda e^{\\lambda^2}\\mathrm{erfc}(\\lambda)-1}\\Bigg)\\frac{e^{-z^2\/4}}{z^2}q_u.\n \\label{eqn:sol-3d2}\n\\end{eqnarray}\n\\noindent\nPutting $u(2\\lambda)=u_s$, we obtain\n\\begin{equation}\n k = \\frac{1+2\\lambda^2\\Big(1-\\sqrt{\\pi}\\lambda\\,e^{\\lambda^2}\\mathrm{erfc}(\\lambda)\\Big)}\n{2\\lambda^2\\Big(\\sqrt{\\pi}\\lambda\\,e^{\\lambda^2}\\mathrm{erfc}(\\lambda)-1\\Big)}\\;\\frac{u_m}{q_u}+\n\\frac{2\\lambda^2-(1+2\\lambda^2)\\sqrt{\\pi}\\lambda\\,e^{\\lambda^2}\\mathrm{erfc}(\\lambda)}\n{2\\Big(\\sqrt{\\pi}\\lambda\\,e^{\\lambda^2}\\mathrm{erfc}(\\lambda)-1\\Big)}.\n\\label{eqn:k-3d-2e}\n\\end{equation}\nFor $u_s << u_p$, we write\n\\begin{equation}\n k =\n-2\\lambda^4+\\sqrt{\\pi}\\lambda^3(1+2\\lambda^2)\\,e^{\\lambda^2}\\mathrm{erfc}(\\lambda)\n + \\Big(1-2\\lambda^2 + 
2\\sqrt{\\pi}\\lambda^3\\,e^{\\lambda^2}\\mathrm{erfc}(\\lambda)\\Big)\\epsilon\n+\\mathcal{O}(\\epsilon^2).\n\\label{eqn:k-3d-2a}\n\\end{equation}\n\\noindent\n\n\nGeneral analytical expressions of $u(z)$ and $k$, in terms of confluent\nhypergeometric functions, can also be found for even values of $B$ as\ndetailed in appendix \\ref{sec:appa}. Furthermore, asymptotic forms of\n$u(z)$ for large and small $z$ can be calculated, see appendix\n\\ref{sec:appa} for analysis of $z>>1$. Figure \\ref{fig:k-23d} compares $k$\nversus $\\lambda$ for $d=2$ and $d=3$ in the absence of dislocation\nfield ($B=0$).\n\n\\begin{figure}[htbp]\n\\begin{center}\n \\includegraphics[width=0.80\\textwidth]{fig\/lambda_k2d3d}\n \\caption{Growth coefficient $\\lambda$ as a function of supersaturation parameter $k$ \n at $B=0$ for a circular plate ($d=2$) versus a sphere ($d=3$).}\n\\label{fig:k-23d}\n \\end{center}\n\\end{figure}\n\n\\section{Discussion}\n \\label{sec:disc}\n\nThe potential energy in equation (\\ref{eqn:screw}) describes the\nelastic energy of the dislocation relaxed within the volume\noccupied by the second-phase precipitate \\cite{Cahn_1957}. It was treated\nhere as an external field affecting the diffusion-limited growth of\nsecond-phase precipitate. The interaction energy of impurities in a\ncrystalline with dislocations depends on the specific model or\nconfiguration of a solute atom and a matrix which is used. Commonly,\nit is assumed that the solute acts as an elastic center of\ndilatation. It is a fictitious sphere of radius $R^\\prime$ embedded\nconcentrically in a spherical hole of radius $R$ cut in the matrix. If\nthe elastic constants of the solute and matrix are the same, the work\ndone in inserting the atom in the presence of dislocation is\n$w=p\\Delta v$, where $p$ is the hydrostatic pressure and $\\Delta v$ is\nthe difference between the volume of the hole in the matrix and the\nsphere of the fictitious impurity. For a screw dislocation $p=0$,\nwhile near an edge dislocation\n$p=\\frac{(1+\\nu)bG\\sin\\theta}{3\\pi(1-\\nu)r}$ for an impurity with\npolar coordinates $(r,\\theta)$ with respect to the dislocation $0z$,\nhence $w \\propto \\Delta v\\sin\\theta\/r$\n\\cite{Cottrell_Bilby_1949}. Using a nonlinear elastic theory\n\\cite{Nabarro_1987}, a screw dislocation may also interact with the\nspherical impurity with the interaction energy $w \\propto \\Delta\nv\/r^2$. Moreover, accounting for the differences in the elastic\nconstants of a solute and a matrix, the solute will relieve shear\nstrain energy as well as dilatation energy, which will also interact\nwith a screw dislocation with a potential $w \\propto \\Delta v\/r^2$\n\\cite{Friedel_1967}. Indeed, Friedel \\cite{Friedel_1967} has\nformulated that by introducing a dislocation into a solid solution of\nuniform concentration $c_0$, the interaction energy between the\ndislocation and solute atoms can be written as $w \\backsimeq\nw_0(b\/\\delta)^n f(\\theta)$, where $\\delta$ is the distance between the two\ndefects, $w_0$ the binding energy when $\\delta=b$, and $f(\\theta)$\naccounts for the angular dependence of the interaction along the\ndislocation. Also, $n=1$ for size effects and $n=2$ for effects due to\ndifferences in elastic constants. The discussed model for the\ninteraction energy between solute atoms and dislocations has been used\nto study the precipitation process on dislocations by number of\nworkers in the past \\cite{Ham_1959,Bullough_Newman_1962} and thoroughly\nreviewed in \\cite{Bullough_Newman_1970}. 
These studies concern\nprimarily the overall phase transformation (precipitation of a new\nphase) rather than the growth of a new phase considered in our\nnote. That is, they used different boundary conditions as compared to the\nones used here.\n\n\nLet us now link the supersaturation parameter $k$ to an experimental\nsituation. For this purpose, the values of $u_s$, i.e. the\nconcentration at the interface between the second-phase and matrix\nshould be known. The capillary effect leads to a relationship between\n$u_s$ and the equilibrium composition $u_{eq}$ (solubility line in a\nphase diagram). To obtain this relationship, we consider an incoherent\nnucleation of second-phase on a dislocation \\`a la Cahn\n\\cite{Cahn_1957}. A Burgers loop around the dislocation in the matrix\nmaterial around the incoherent second-phase (circular plate) will have\na closure mismatch equal to $b$. Following Cahn, on forming the\nincoherent plate of radius $R$, the total free energy change per unit\nlength is\n\\begin{equation}\n\\label{eqn:cahn-fe}\n\\mathcal{G}=-\\pi R^2 \\Delta g_v +2\\pi\\gamma\nR-A^\\prime\\ln(R\/r_0),\n\\end{equation}\n\\noindent\nwhere $\\Delta g_v$ is the volume free energy of\nformation, $\\gamma$ the interfacial energy and the last term is the\ndislocation energy, $A^\\prime=Gb^2\/4\\pi$ for screw dislocations,\ncf. equation (\\ref{eqn:screw}). Setting ${\\rm d}\\mathcal{G}\/{\\rm d}R=0$, yields\n\\begin{equation}\n \\label{eqn:crit-rad}\n R = \\frac{\\gamma}{2\\Delta g_v}\\Big(1 \\pm \\sqrt{1-\\alpha}\\Big),\n\\end{equation}\n\\noindent\nwhere $\\alpha=2A^\\prime \\Delta g_v\/\\pi\\gamma^2$. So, if $\\alpha>1$,\nthe nucleation is barrierless, i.e., the phase transition kinetics is\nonly governed by growth kinetics, which is the subject of our\ninvestigation here. If, however, $\\alpha < 1$, there is an energy\nbarrier and the local minimum of $\\mathcal{G}$ at $R=R_0$, which\ncorresponds to the negative sign in equation (\\ref{eqn:crit-rad}),\nensued by a maximum at $R=R^\\ast$ corresponding to the positive sign\nin this equation. The local minimum corresponds to a subcritcal\nmetastable particle of the second-phase surrounding the dislocation\nline, and it is similar to the Cottrell atmosphere of solute atoms in\na segregation problem. When $\\alpha = 0$, corresponding to $B=0$, the\ntwo phases are in equilibrium and the maximum in $\\mathcal{G}$ is\ninfinite, as for homogeneous nucleation.\n\n\nFor a dilute regular solution, $\\Delta\ng_v=(k_BT\/V_p)\\ln(u_s\/u_{eq})$, where $V_p$ is the atomic volume of\nthe precipitate compound, $u_s$ is the concentration of the matrix at\na curved particle\/matrix interface and $u_{eq}$ that of a flat\ninterface, which is in equilibrium with the solute concentration in\nthe matrix. Equation (\\ref{eqn:crit-rad}) gives\n$\\Delta g_v=\\gamma\/R-A^\\prime\/2\\pi R^2$. Hence, for a dilute regular\nsolution, we write\n\\begin{equation}\n \\label{eqn:gibbs-thom}\n u_s=u_{eq}\\exp{\\Big[\\frac{\\zeta}{R}\\Big(1-\\frac{\\eta}{R}\\Big)\\Big]},\n\\end{equation}\n\\noindent\nwhere $\\zeta = \\beta V_p\\gamma$, $\\beta=1\/k_BT$ and $\\eta = A^\\prime\/2\\pi\\gamma$. 
Subsequently, the\nsupersaturation parameter is expressed by\n\\begin{equation}\n \\label{eqn:supsat}\n k = \\frac{u_{eq}\\exp[{\\frac{\\zeta}{R}(1-\\frac{\\eta}{R})}]-u_m}{u_p-u_{eq}\\exp[\\frac{\\zeta}{R}(1-\\frac{\\eta}{R})]}.\n\\end{equation}\n\\noindent\nTaking the following typical values: $\\gamma=0.2$ Jm$^{-2}$, $G=40$\nGPa, and $b=0.25$ nm, then $A^\\prime \\approx 2.0\\times10^{-10}$ N and\n$\\eta=0.16$ nm. Figure \\ref{fig:thoms-2d} depicts $u_s\/u_{eq}$, from\nequation (\\ref{eqn:gibbs-thom}), as a function of scaled radius\n$R\/\\zeta$ for $V_p=1.66 \\times 10^{-29}$ m$^3$, $\\eta=0$ and\n$\\eta=0.16$ nm at $T=600$ K. Equation\n(\\ref{eqn:gibbs-thom}) is analogous to the Gibbs-Thomson-Freundlich\nrelationship \\cite{Christian_2002} comprising a dislocation defect.\n\n\nRecalling now the values used for the interaction parameter $B$ in the\ncomputations presented in the foregoing section, we note that for\n$B=2$ and the above numerical values for $G$ and $b$ at $T=1000$ K, we\nfind $l\\approx 0.14$ nm, which is close to the calculated value of\n$\\eta$.\n\nIn Cahn's model, the assumption that all the strain energy of the\ndislocation within the volume occupied by the nucleus can be relaxed\nto zero demands that the nucleus is incoherent. For a coherent nucleus\nforming on or in proximity of dislocations, this supposition is not\ntrue. Instead, it is necessary to calculate the elastic interaction\nenergy between the nucleus and the matrix, which for an edge\ndislocation is in the form $Gb^2\/[4\\pi(1-\\nu)r]$ for the energy\ndensity per unit length \\cite{Barnett_1971}. In the same manner, to\nextend our calculations for growth of coherent precipitate, we must\nemploy this kind of potential energy, i.e. the potential energy of the\nform $\\phi(r)=-A\\ln(r\/r_0)+C \\sin\\theta\/r$, in the governing kinetic\nequation rather than relation (\\ref{eqn:screw}).\n\\begin{figure}[htbp] \n\\begin{center}\n\\includegraphics[width=0.80\\textwidth]{fig\/thoms_2d}{} \\\\\n\\caption{The size dependence of the concentration at the curved\nprecipitate\/matrix interface $u_s$ relative to that of the flat\ninterface $u_{eq}$ for a set of parameter values given in the\ntext, cf. eq. (\\ref{eqn:gibbs-thom}).}\n\\label{fig:thoms-2d}\n\\end{center}\n\\end{figure}\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Conclusion}\n \n This paper presents a novel unsupervised representation learning framework for complex multivariate time series, called Temporal Neighborhood Coding (TNC). This framework is designed to learn the underlying dynamics of non-stationary signals and to model the progression over time by defining a temporal neighborhood. The problem is motivated by the medical field, where patients transition between distinct clinical states over time, and obtaining labels to define these underlying states is challenging. We evaluate the performance of TNC on multiple datasets and show that our representations are generalizable and can easily be used for diverse tasks such as classification and clustering. \n We finally note that TNC is flexible to be used with arbitrary encoder architectures; therefore, the framework is applicable to many time series data domains. 
Moreover, in addition to tasks presented in this paper, general representations can be used for several other downstream tasks, such as anomaly detection, which is challenging in supervised learning settings for time series data in sparsely labeled contexts.\n\n\n\n\n\\section{Results}\nIn this section we present the results for clusterability of the latent representations and downstream classification performance for all datasets and across all baselines. Clusterability indicates how well each method recovers appropriate states, and classification assesses how informative our representations are for downstream tasks. \n\n\\subsection{Evaluation: Clusterability}\nMany real-world time series data have underlying multi-category structure, naturally leading to representations with clustering properties. Encoding such general priors is a property of a good representation \\citep{bengio2013representation}.\nIn this section, we assess the distribution of the representations in the encoding space. If information of the latent state is properly learned and encoded by the framework, the representations of signals from the same underlying state should cluster together. Figures \\ref{fig:tcl_dist}, \\ref{fig:trip_dist}, and \\ref{fig:cpc_dist} show an example of this distribution for simulated data across compared approaches. Each plot is a 2-dimensional t-SNE visualization of the representations where each data point in the scatter plot is an encoding $Z \\in R^{10}$ that represents a window of size $\\delta=50$ of a simulated time series. \nWe can see that without any information about the hidden states, representations learned using TNC cluster windows from the same hidden state better than the alternative approaches. The results show that CPC and Triplet Loss have difficulty separating time series that are generated from non-linear auto-regressive moving average (NARMA) models with variable regression parameters.\n\n\\begin{figure}[!h]\n\\begin{subfigure}{.32\\textwidth}\n\\includegraphics[scale=0.32]{TCL\/figures\/encoding_distribution.pdf}\n\\captionof{figure}{TNC representations}\n\\label{fig:tcl_dist}\n\\end{subfigure}\n\\begin{subfigure}{.32\\textwidth}\n\\includegraphics[scale=0.32]{TCL\/figures\/encoding_distribution_trip.pdf}\n\\captionof{figure}{T-loss representations}\n\\label{fig:trip_dist}\n\\end{subfigure}\n\\begin{subfigure}{.32\\textwidth}\n \\centering\n\\includegraphics[scale=.32]{TCL\/figures\/encoding_distribution_cpc.pdf}\n\\caption{CPC representations}\n\\label{fig:cpc_dist}\n\\end{subfigure}\n\\label{fig:distribution}\n\\caption{T-SNE visualization of signal representations for the simulated dataset across all baselines. Each data point in the plot presents a 10-dimensional representation of a window of time series of size $\\delta=50$, and the color indicates the latent state of the signal window. See Appendix \\ref{app:plots} for similar plots from different datasets.}\n\\end{figure}\n\n\nTo compare the representation clusters' consistency for each baseline, we use two very common cluster validity indices, namely, the Silhouette score and the Davies-Bouldin index. We use K-means clustering in the representation space to measure these clusterability scores. The Silhouette score measures the similarity of each sample to its own cluster, compared to other clusters. The values can range from $-1$ to $+1$, and a greater score implies a better cohesion. The Davies-Bouldin Index measures intra-cluster similarity and inter-cluster differences. 
This is a positive index score, where smaller values indicate low within-cluster scatter and large separation between clusters. Therefore, a lower score represents better clusterability (more details on the cluster validity scores and how they are calculated can be found in Appendix \\ref{app:clustering_metrics}). \n\n\\begin{table}[!h]\n\\begin{tabular}{lcccccc}\n &\\multicolumn{2}{c}{Simulation} & \\multicolumn{2}{c}{ECG Waveform} & \\multicolumn{2}{c}{HAR}\\\\\n \\toprule\n Method & Silhouette $\\uparrow$ & DBI $\\downarrow$ & Silhouette $\\uparrow$ & DBI $\\downarrow$ & Silhouette $\\uparrow$ & DBI $\\downarrow$\\\\\n \\midrule\n \\bf{TNC} & {\\bf{0.71$\\pm$0.01}} & {\\bf{0.36$\\pm$0.01}} & {\\bf{0.44$\\pm$0.02}} & {\\bf{0.74$\\pm$0.04}} & {\\bf{0.61$\\pm$0.02}} & {\\bf{0.52$\\pm$0.04}} \\\\\n CPC & 0.51$\\pm$0.03 & 0.84$\\pm$0.06 & 0.26$\\pm$0.02 & 1.44$\\pm$0.04 & 0.58$\\pm$0.02 & 0.57$\\pm$0.05\\\\\n T-Loss & 0.61$\\pm$0.08 & 0.64$\\pm$0.12 & 0.25$\\pm$0.01 & 1.30$\\pm$0.03 & 0.17$\\pm$0.01 & 1.76$\\pm$0.20\\\\\n \\midrule\n K-means & 0.01$\\pm$0.019 & 7.23$\\pm$0.14 & 0.19$\\pm$0.11 & 3.65$\\pm$0.48 & 0.12$\\pm$0.40 & 2.66$\\pm$0.05\\\\\n \\bottomrule\n \\end{tabular}\n \\caption{Clustering quality of representations in the encoding space for multiple datasets. }\n \\label{tab:sil_score}\n\\end{table}\n\n\nTable \\ref{tab:sil_score} summarizes the scores for all baselines and across all datasets, demonstrating that TNC is superior in learning representations that can distinguish the latent dynamics of time series. CPC performs closely to Triplet loss on waveform data but performs poorly on the simulated dataset, where signals are highly non-stationary, and transitions are less predictable. However, for the HAR dataset, CPC clusters the states very well because most activities are recorded in a specific order, empowering predictive coding. Triplet loss performs reasonably well in the simulated setting; however, it fails to distinguish states $0$ and $2$, where signals come from autoregressive models with different parameters and have a relatively similar generative process. Performing K-means on the original time series generally does not generate coherent clusters, as demonstrated by the scores. However, the performance is slightly better in time series like the ECG waveforms, where the signals are formed by consistent shapelets, and therefore the DTW measures similarity more accurately.\n\n\n\n\\subsection{Evaluation: Classification}\nWe further evaluate the quality of the encodings using a classification task. We train a linear classifier to evaluate how well the representations can be used to classify hidden states. The performance of all baselines is compared to a supervised classifier, composed of an encoder and a classifier with identical architectures to that of the unsupervised models, and a K-nearest neighbor classifier that uses DTW metric. 
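A minimal sketch of the two evaluations used here (scoring K-means partitions of the frozen encodings, and training a linear probe on them) is shown below, assuming the encodings are available as an array \\texttt{z} with one row per window and integer state labels \\texttt{y}; the array names, the train\/test split and the use of logistic regression as the linear probe are illustrative choices rather than a description of the exact experimental code.\n\\begin{verbatim}\n# Illustrative scoring of frozen encodings z (n_windows x d) with labels y.\nimport numpy as np\nfrom sklearn.cluster import KMeans\nfrom sklearn.linear_model import LogisticRegression\nfrom sklearn.metrics import (silhouette_score, davies_bouldin_score,\n                             accuracy_score, average_precision_score)\nfrom sklearn.model_selection import train_test_split\n\ndef evaluate_encodings(z, y, n_states=4, seed=0):\n    # Clusterability: cluster the encodings and score the partition.\n    parts = KMeans(n_clusters=n_states, random_state=seed).fit_predict(z)\n    sil = silhouette_score(z, parts)\n    dbi = davies_bouldin_score(z, parts)\n    # Downstream task: a linear probe trained on the frozen encodings.\n    z_tr, z_te, y_tr, y_te = train_test_split(z, y, test_size=0.2,\n                                              random_state=seed)\n    clf = LogisticRegression(max_iter=1000).fit(z_tr, y_tr)\n    acc = accuracy_score(y_te, clf.predict(z_te))\n    auprc = average_precision_score(np.eye(n_states)[y_te],\n                                    clf.predict_proba(z_te))\n    return sil, dbi, acc, auprc\n\\end{verbatim}\n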
The performance is reported as the prediction accuracy and the area under the precision-recall curve (AUPRC) score since AUPRC is a more accurate reflection of model performance for imbalance classification settings like the waveform dataset.\n\n\\begin{table}[!h]\n \\centering\n \\begin{tabular}{lcccccc}\n &\\multicolumn{2}{c}{Simulation}& \\multicolumn{2}{c}{ECG Waveform}& \\multicolumn{2}{c}{HAR}\\\\\n \\toprule\n Method & AUPRC & Accuracy & AUPRC & Accuracy & AUPRC & Accuracy\\\\\n \\midrule\n \\bf{TNC}& {\\bf{0.99$\\pm$0.00}} & {\\bf{97.52$\\pm$0.13}} & {\\bf{0.55$\\pm$0.01}} & {\\bf{77.79$\\pm$0.84}} & {\\bf{0.94$\\pm$0.007}} & {\\bf{88.32$\\pm$0.12}}\\\\\n CPC & 0.69$\\pm$0.06 & 70.26$\\pm$6.48 & 0.42$\\pm$0.01 & 68.64$\\pm$0.49 & 0.93$\\pm$0.006 & 86.43$\\pm$1.41\\\\\n T-Loss & 0.78$\\pm$0.01 & 76.66$\\pm$1.40 & 0.47$\\pm$0.00 & 75.51$\\pm$1.26 & 0.71$\\pm$0.007 & 63.60$\\pm$3.37 \\\\\n \\midrule\n KNN & 0.42$\\pm$0.00 & 55.53$\\pm$0.65 & 0.38$\\pm$0.06 & 54.76$\\pm$5.46 & 0.75$\\pm$0.01& 84.85$\\pm$0.84\\\\\n \\midrule\n Supervised& {\\bf{0.99$\\pm$0.00}} & {\\bf{98.56$\\pm$0.13}} & {\\bf{0.67$\\pm$0.01}} & {\\bf{94.81$\\pm$0.28}} & {\\bf{0.98$\\pm$0.00}} & {\\bf{92.03$\\pm$2.48}}\\\\\n \\bottomrule\n \\end{tabular}\n \\caption{Performance of all baselines in classifying the underlying hidden states of the time series, measured as the accuracy and AUPRC score. }\n \\label{tab:classification_result}\n\\end{table}{}\n\n\n\nTable \\ref{tab:classification_result} demonstrates the classification performance for all datasets. The performance of the classifiers that use TNC representations are closer to the end-to-end supervised model in comparison to CPC and Triplet Loss. This provides further evidence that our encodings capture informative parts of the time series and are generalizable to be used for downstream tasks. In datasets like the HAR, where an inherent ordering usually exists in the time series, CPC performs reasonably. However, in datasets with increased non-stationarity, the performance drops. Triplet Loss is also a powerful framework, but since it samples positive examples from overlapping windows of time series, it is vulnerable to map the overlaps into the encoding and, therefore, fail to learn more general representations. TNC, on the other hand, samples similar windows from a wider distribution, defined by the temporal neighborhood, where many of the neighboring signals do not necessarily overlap. \nThe lower performance of the CPC and Triplet Loss methods can also be partly because none of these methods explicitly account for the potential sampling bias that happens when randomly selected negative examples are similar to the reference $W_t$.\n\n\n\n\n\\subsection{Evaluation: Trajectory}\n\n\\begin{figure}[!h]\n \\centering\n \\includegraphics[scale=0.2]{TCL\/figures\/embedding_trajectory_hm.pdf}\n \\caption{Trajectory of a signal encoding from the simulated dataset. The top plot shows the original time series with shaded regions indicating the underlying state. The bottom plot shows the 10 dimensional encoding of the sliding windows $W_t$ where $\\delta=50$. }\n \\label{fig:trajectory}\n\\end{figure}\n\n\nThis section investigates the trajectories of our learned encodings over time to understand how the state transitions are captured and modeled in the representation space. This is an important property for non-stationary time series where underlying states change over time, and capturing those changes is critical in many application domains such as healthcare. 
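Concretely, such a trajectory can be obtained by simply encoding a sliding window and stacking the resulting vectors over time; the sketch below is schematic, with \\texttt{encoder} standing in for the trained TNC encoder and the stride being an arbitrary placeholder.\n\\begin{verbatim}\nimport numpy as np\n\ndef encoding_trajectory(x, encoder, delta=50, stride=1):\n    # x: multivariate series of shape (D, T); encoder maps a (D, delta)\n    # window to a d-dimensional representation (placeholder interface).\n    D, T = x.shape\n    half = delta \/\/ 2\n    reps = [encoder(x[:, t - half:t + half])\n            for t in range(half, T - half, stride)]\n    return np.stack(reps)   # shape (n_windows, d), ordered in time\n\\end{verbatim}\n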
Figure \\ref{fig:trajectory} shows a sample from the simulated dataset. The top panel shows the signal measurements over time, and the shaded regions indicate the underlying latent states. The bottom panel illustrates the $10$-dimensional representation of a sliding window $W_t$ estimated over time. From the bottom panel of Figure \\ref{fig:trajectory}, we can see that the encoding pattern changes at state transitions and settle into a different pattern, corresponding to the new state. This change happens at every transition, and we can see the distinct patterns for all $4$ underlying states in the representations. This analysis of the trajectory of change could be very informative for the users' post-analysis; for instance, in clinical applications, it could help clinicians visualize the evolution of the patient state over time and plan treatment based on the state progression\n\n\n\\section{Related work}\nWhile integral for many applications, unsupervised representation learning has been far less studied for time series \\citep{langkvist2014review}, compared to other domains such as vision or natural language processing \\citep{denton2017unsupervised, radford2015unsupervised, gutmann2012noise, wang2015unsupervised}. One of the earliest approaches to unsupervised end-to-end representation learning in time series is the use of auto-encoders \\citep{choi2016multi, amiriparian2017sequence, malhotra2017timenet} and seq-to-seq models \\citep{lyu2018improving}, with the objective to train an encoder jointly with a decoder that reconstructs the input signal from its learned representation. Using fully generative models like variational auto-encoders is also useful for imposing properties like disentanglement, which help with the interpretability of the representations \\citep{dezfouli2019disentangled}. However, in many cases, like for high-frequency physiological signals, the reconstruction of complex time series can be challenging; therefore, more novel approaches are designed to avoid this step. Contrastive Predictive Coding \\citep{oord2018representation, lowe2019putting} learns representations by predicting the future in latent space, eliminating the need to reconstruct the full input. The representations are such that the mutual information between the original signal and the concept vector is maximally preserve using a lower bound approximation and a contrastive loss. Very similarly, in Time Contrastive Learning \\citep{hyvarinen2016unsupervised}, a contrastive loss is used to predict the segment-ID of multivariate time-series as a way to extract representation. \\cite{franceschi2019unsupervised} employs time-based negative sampling and a triplet loss to learn scalable representations for multivariate time series. Some other approaches use inherent similarities in temporal data to learn representations without supervision. For instance, in similarity-preserving representation learning \\citep{lei2017similarity}, learned encodings are constrained to preserve the pairwise similarities that exist in the time domain, measured by DTW distance. Another group of approaches combines reconstruction loss with clustering objectives to cluster similar temporal patterns in the encoding space \\citep{ma2019learning}.\n\nIn healthcare, learning representation of rich temporal medical data is extremely important for understanding patients' underlying health conditions. 
However, most of the existing approaches for learning representations are designed for specific downstream tasks and\nrequire labeling by experts \citep{choi2016medical, choi2016learning, tonekaboni2020went}. Examples of work related to representation learning in the field of clinical ML include computational phenotyping, for discovering subgroups of patients with similar underlying disease mechanisms from temporal clinical data \citep{lasko2013computational, suresh2018learning, schulam2015clustering}, and disease progression modeling, for learning the hidden vector of comorbidities representing a disease over time \citep{wang2014unsupervised, alaa2018forecasting}.\n\n\n\n\n\n\n\n\n\section{Introduction}\n\nReal-world time-series data is high dimensional, complex, and has unique properties that bring about many challenges for data modeling \citep{yang200610}. In addition, these signals are often sparsely labeled, making supervised learning tasks even more challenging. Unsupervised representation learning can extract informative low-dimensional representations from raw time series by leveraging the data's inherent structure, without the need for explicit supervision. These representations are more generalizable and robust, as they are less specialized for solving a single supervised task. Unsupervised representation learning is well studied in domains such as vision \citep{donahue2019large, denton2017unsupervised, radford2015unsupervised} and natural language processing \citep{radford2017learning, young2018recent, mikolov2013efficient}, but has been underexplored in the literature for time series settings.\nFrameworks designed for time series need to be efficient and scalable because signals encountered in practice can be long, high dimensional, and high frequency. Moreover, they should account for and be able to model the dynamic changes that occur within samples, i.e., the non-stationarity of signals. \n\n\nThe ability to model the dynamic nature of time series data is especially valuable in medicine. Health care data is often organized as a time series, with multiple data types, collected from various sources at different sampling frequencies, and riddled with artifacts and missing values. Throughout their stay at the hospital or within the disease progression period, patients transition gradually between distinct clinical states, with periods of relative stability, improvement, or unexpected deterioration, requiring escalation of care that alters the patient's trajectory. A particular challenge in medical time-series data is the lack of well-defined or available labels that are needed for identifying the underlying clinical state of an individual or for training models aimed at extracting low-dimensional representations of these states.\nFor instance, in the context of critical care, a patient's stay in the critical care unit (CCU) is captured continuously via streaming physiological signals from the bedside monitor. Obtaining labels for the patient's state for extended periods of these signals is practically impossible, as the underlying physiological state can be unknown even to clinicians. This further motivates the use of unsupervised representation learning in these contexts. Learning rich representations can be crucial in facilitating the tracking of disease progression, predicting the future trajectories of patients, and tailoring treatments to these underlying states. 
\n\nIn this paper, we propose a self-supervised framework for learning representations for complex multivariate non-stationary time series. This approach, called Temporal Neighborhood Coding (TNC), is designed for temporal settings where the latent distribution of the signals changes over time, and it aims to capture the progression of the underlying temporal dynamics. TNC is efficient, easily scalable to high dimensions, and can be used in different time series settings. We assess the quality of the learned representations on multiple datasets and show that the representations are general and transferable to many downstream tasks such as classification and clustering. We further demonstrate that our method outperforms existing approaches for unsupervised representation learning, and it even performs comparably to supervised techniques in classification tasks. The contributions of this work are three-fold:\n\begin{enumerate}\n \item We present a novel neighborhood-based unsupervised learning framework for \emph{non-stationary} multivariate time series data.\n \item We introduce the concept of a temporal neighborhood with stationary properties as the distribution of similar windows in time. The neighborhood boundaries are determined automatically using the properties of the signal and statistical testing.\n \item We incorporate concepts from Positive Unlabeled Learning, specifically sample weight adjustment, to account for the potential bias introduced in sampling negative examples for the contrastive loss.\n\end{enumerate}\n\n\section{Method}\n\nWe introduce a framework for learning representations that encode the underlying state of a multivariate, non-stationary time series. Our self-supervised approach, TNC, takes advantage of the local smoothness of the generative process of signals to learn generalizable representations for windows of time series. This is done by ensuring that, in the representation space, the distribution of signals proximal in time is distinguishable from the distribution of signals far away, i.e., proximity in time is identifiable in the encoding space. We represent our multivariate time series signals as $X\in \mathbb{R}^{D\times T}$, where $D$ is the number of features and $T$ is the number of measurements over time.\n$X_{[t-\frac{\delta}{2}, t+\frac{\delta}{2}]}$ represents a window of the time series of length $\delta$, centered around time $t$, that includes measurements of all features taken in the interval $[t-\frac{\delta}{2}, t+\frac{\delta}{2}]$. Throughout the paper, we refer to this window as $W_t$ for notational simplicity. Our goal is to learn the underlying representation of $W_t$; by sliding this window over time, we can obtain the trajectory of the underlying states of the signal.\n\nWe define the temporal neighborhood ($N_t$) of a window $W_t$ as the set of all windows with centroids $t^*$ sampled from a normal distribution $t^*\sim\mathcal{N}(t,\eta\cdot\delta)$, where $\mathcal{N}$ is a Gaussian centered at $t$, $\delta$ is the size of the window, and $\eta$ is the parameter that defines the range of the neighborhood. Relying on the local smoothness of a signal's generative process, the neighborhood distribution is characterized as a Gaussian to model the gradual transition in temporal data, and intuitively, it approximates the distribution of samples that are similar to $W_t$. 
The $\eta$ parameter determines the neighborhood range and depends on the signal characteristics and how gradually the statistical properties of the time series change over time. It can be set by domain experts based on prior knowledge of the signal behavior, or, for more robust estimation, it can be determined by analyzing the stationarity properties of the signal for every $W_t$. Since the neighborhood represents similar samples, the range should identify the approximate time span within which the signal remains stationary and the generative process does not change. For this purpose, we use the Augmented Dickey-Fuller (ADF) statistical test to determine this region for every window. Proper estimation of the neighborhood range is an integral part of the TNC framework. If $\eta$ is too small, many samples from within a neighborhood will overlap, and the encoder will only learn to encode the overlapping information. On the other hand, if $\eta$ is too big, the neighborhood will span multiple underlying states, and the encoder will fail to distinguish the variation among these states. Using the ADF test, we can automatically adjust the neighborhood for every window based on the signal behavior. More details on this test and how it is used to estimate $\eta$ are described in Section \ref{sec:neighbourhood}. \n\n\nNow, assuming windows within a neighborhood possess similar properties, signals outside of this neighborhood, denoted as $\bar{N_t}$, are considered non-neighboring windows. Samples from $\bar{N_t}$ are likely to be different from $W_t$ and can be considered negative samples in the context of a contrastive learning framework. However, this assumption can suffer from the problem of \emph{sampling bias}, common in most contrastive learning approaches \citep{chuang2020debiased, saunshi2019theoretical}. This bias occurs because randomly drawing negative examples from the data distribution may result in negative samples that are actually similar to the reference. This can significantly impact the learning framework's performance, but little work has been done on addressing this issue \citep{chuang2020debiased}. In our context, this can happen when there are windows in $\bar{N_t}$ that are far away from $W_t$ but have the same underlying state. To alleviate this bias in the TNC framework, we consider samples from $\bar{N_t}$ as unlabeled samples, as opposed to negative ones, and use ideas from Positive-Unlabeled (PU) learning to accurately estimate the loss function. In reality, even though samples within a neighborhood are all similar, we cannot assume that samples outside this region are necessarily different. For instance, in the presence of long-term seasonality, signals can exhibit similar properties at distant times. In a healthcare context, this can occur, for example, when a stable patient undergoes a critical condition but returns to a stable state afterwards. \n\nIn PU learning, a classifier is learned using labeled data drawn from the positive class ($P$) and unlabeled data ($U$) that is a mixture of positive and negative samples with a positive class prior $\pi$ \citep{du2014analysis, kiryo2017positive, du2014class}. 
Existing PU learning methods fall into two categories based on how they handle the unlabeled data:\n1) methods that identify negative samples from the unlabeled cohort \citep{li2003learning};\n2) methods that treat the unlabeled data as negative samples with smaller weights \citep{lee2003learning, elkan2008learning}.\nIn the second category, unlabeled samples should be properly weighted in the loss term in order to train an unbiased classifier. \cite{elkan2008learning} introduces a simple and efficient way of approximating the expectation of a loss function by assigning individual weights $w$ to training examples from the unlabeled cohort. This means each sample from the neighborhood is treated as a positive example with unit weight, while each sample from $\bar{N}_t$ is treated as a combination of a positive example with weight $w$ and a negative example with complementary weight $1-w$. In the original paper \citep{elkan2008learning}, the weight is defined as the probability of a sample from the unlabeled set being a positive sample, i.e., $w = p(y = 1|x)$ for $x \in U$. In the TNC framework, this weight represents the probability of having samples similar to $W_t$ in $\bar{N}_t$. By incorporating weight adjustment into the TNC loss (Equation \ref{eq:obj}), we account for possible positive samples that occur in the non-neighboring distribution. $w$ can be approximated using prior knowledge of the underlying state distribution or tuned as a hyperparameter. Appendix \ref{app:w} explains how the weight parameter is selected for our different experiment setups and also demonstrates the impact of weight adjustment on performance for downstream tasks.\n\nAfter defining the neighborhood distribution, we optimize an objective function that encourages the representations of samples from the same neighborhood to be distinguishable from the representations of samples outside it. An ideal encoder preserves the neighborhood properties in the encoding space. Therefore, representations $Z_l=Enc(W_l)$ of samples from a neighborhood $W_l \in N_t$ can be distinguished from representations $Z_k=Enc(W_k)$ of samples from outside the neighborhood $W_k \in \bar{N}_t$.\nTNC is composed of two main components:\n\n\begin{enumerate}\n \item An Encoder $Enc(W_t)$ that maps $W_t\in \mathbb{R}^{D\times\delta}$ to a representation $Z_t\in\mathbb{R}^M$ in a lower dimensional space ($M \ll D\times\delta$), where $ D\times\delta$ is the total number of measurements in $W_t$.\n \item A Discriminator $\mathcal{D}(Z_t,Z)$ that approximates the probability of $Z$ being the representation of a window in $N_t$. More specifically, it receives two samples from the encoding space and predicts the probability of those samples belonging to the same temporal neighborhood. \n\end{enumerate}\n\nTNC is a general framework; therefore, it is agnostic to the nature of the time series and the architecture of the encoder. The encoder can be any parametric model that is well-suited to the signal properties \citep{oord2016wavenet, bai2018empirical, fawaz2019deep}. For the Discriminator $\mathcal{D}(Z_t,Z)$ we use a simple multi-headed binary classifier that outputs $1$ if $Z$ and $Z_t$ are representations of neighbors in time, and $0$ otherwise. In the experiment section, we describe the architectural details of the models used for our experiments in more depth. 
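To make the interplay between the encoder, the Discriminator, and the weighted objective of Equation \ref{eq:obj} concrete, the following minimal PyTorch sketch shows one possible implementation of the Discriminator and of the loss for a single reference window. The two-layer architecture of the Discriminator, the use of a fixed scalar weight, and all variable names are illustrative assumptions rather than the exact configuration used in our experiments.

\begin{verbatim}
import torch
import torch.nn as nn
import torch.nn.functional as F

class Discriminator(nn.Module):
    # Predicts the logit of the probability that two encodings belong
    # to the same temporal neighborhood.
    def __init__(self, encoding_size):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 * encoding_size, 4 * encoding_size),
            nn.ReLU(),
            nn.Linear(4 * encoding_size, 1),
        )

    def forward(self, z_t, z):
        # Concatenate the reference encoding with the candidate encoding.
        return self.net(torch.cat([z_t, z], dim=-1)).squeeze(-1)

def tnc_loss(disc, z_t, z_neighbors, z_non_neighbors, w):
    # z_t:             (M,)   encoding of the reference window W_t
    # z_neighbors:     (B, M) encodings of windows drawn from N_t
    # z_non_neighbors: (B, M) encodings of windows drawn from outside N_t
    # w:               prior probability that a non-neighboring window
    #                  shares the underlying state of W_t
    z_t_rep = z_t.expand_as(z_neighbors)
    pos_logits = disc(z_t_rep, z_neighbors)
    unl_logits = disc(z_t_rep, z_non_neighbors)
    ones = torch.ones_like(pos_logits)
    zeros = torch.zeros_like(unl_logits)

    # Neighbors are treated as positives with unit weight.
    loss_pos = F.binary_cross_entropy_with_logits(pos_logits, ones)
    # Non-neighbors are a mixture: negative with weight (1 - w) and
    # positive with weight w, mirroring the TNC objective.
    loss_unl = ((1 - w) * F.binary_cross_entropy_with_logits(unl_logits, zeros)
                + w * F.binary_cross_entropy_with_logits(unl_logits, ones))
    return loss_pos + loss_unl
\end{verbatim}

In practice, such a loss would be averaged over reference windows $W_t$ sampled from the batch and minimized jointly with respect to the parameters of the encoder and the Discriminator.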
\n\n\begin{figure}\n \begin{subfigure}{.5\textwidth}\n \centering\n \includegraphics[scale=0.4]{TCL\/figures\/tnc_explain_a.pdf} \n \caption{Neighborhood samples}\n \label{fig:explanation_a}\n\end{subfigure}%\n\begin{subfigure}{.5\textwidth}\n \centering\n \includegraphics[scale=0.4]{TCL\/figures\/tnc_explain_b.pdf} \n \caption{Non-neighboring samples}\n \label{fig:explanation_b}\n\end{subfigure}\n\caption{Overview of the TNC framework components. For each sample window $W_t$ (indicated with the dashed black box), we first define the neighborhood distribution. The encoder learns the distribution of windows sampled from $N_t$ and $\bar{N_t}$ in the representation space. Then samples from these distributions are fed into the discriminator alongside $Z_t$ to predict the probability of the windows being in the same neighborhood.}\n\label{fig:explanation}\n\end{figure}\n\nFigure \ref{fig:explanation} provides an overview of the TNC framework. We formalize the objective function of our unsupervised learning framework in Equation \ref{eq:obj}. In essence, we would like the probability estimates of the Discriminator to be accurate, i.e., close to $1$ for the representations of neighboring samples and close to $0$ for windows far apart. Samples from the non-neighboring region ($\bar{N}_t$) are weight-adjusted using the $w$ parameter to account for positive samples in this distribution.\n\n\begin{equation}\label{eq:obj}\n \mathcal{L} = -\mathbb{E}_{W_t \sim X}[ \n \mathbb{E}_{W_l \sim N_t}[\log \underbrace{\mystrut{1.5ex}\mathcal{D} (Z_t, Z_l)}_{\makebox[0pt]{\text{$\mathcal{D}(Enc(W_t), Enc(W_l))$}}}] + \n \mathbb{E}_{W_k \sim {\bar{N}_t}} [(1-w_t) \times \log{\underbrace{\mystrut{1.5ex}(1-\mathcal{D}(Z_t, Z_k))}_{\makebox[0pt]{$1-\mathcal{D}(Enc(W_t), Enc(W_k))$}}} + \n w_t \times \log{\mathcal{D}(Z_t, Z_k)}]\n ]\n\end{equation}{}\n\vspace{-3mm}\n\n\nWe train the encoder and the discriminator jointly by optimizing this objective. Note that the Discriminator is only used during training and is not needed at inference time. Similar to the encoder, it can be approximated using any parametric model. However, the more complex the Discriminator, the harder it becomes to interpret the decision boundaries of the latent space, since it allows similarities to be captured through complex nonlinear relationships. \n\n\paragraph{Defining the neighborhood parameter using the ADF test:}\label{sec:neighbourhood} \nAs mentioned earlier, the neighborhood range can be specified using the characteristics of the data. In non-stationary time series, the generative process of the signals changes over time. We define the temporal neighborhood around every window as the region where the signal is relatively stationary. Since a signal may remain in an underlying state for an unknown amount of time, each window's neighborhood range may vary in size and must be adjusted to the signal behavior. \nTo that end, we use the Augmented Dickey-Fuller (ADF) statistical test to derive the neighborhood range $\eta$. The ADF test belongs to the category of ``unit root'' tests and assesses the stationarity of a time series. For every $W_t$, we want to find the neighborhood range around that window that indicates a stationary region. To determine this, we start from $\eta=1$ and gradually increase the neighborhood size $\eta$, measuring the $p$-value of the test at every step. 
Once the $p$-value rises above a threshold (in our setting $0.01$), the test fails to reject the null hypothesis of non-stationarity, suggesting that within this neighborhood region the signal is no longer stationary. This way, we find the widest neighborhood within which the signal remains relatively stationary. Note that the window size $\delta$ is constant throughout the experiment, and during ADF testing, we only adjust the neighborhood's width.\n\n\n\section{Experiments}\nWe evaluate the usefulness of our framework on multiple time series datasets with dynamic latent states that change over time.\nWe compare classification performance and clusterability against two state-of-the-art approaches for unsupervised representation learning for time series: \n\begin{inparaenum}\n \item Contrastive Predictive Coding (CPC) \citep{oord2018representation}, which uses predictive coding principles to train the encoder with a probabilistic contrastive loss.\n \item Triplet-Loss (T-Loss), introduced in \citep{franceschi2019unsupervised}, which employs time-based negative sampling and a triplet loss to learn representations for time series windows. The triplet loss objective ensures that similar time series have similar representations by minimizing the pairwise distance between positive samples (subseries) while maximizing it for negative ones. \n\end{inparaenum}\n(See Appendix \ref{app:baseline_imp} for more details on each baseline.)\n\nFor a fair comparison, and to ensure that differences in performance are not due to differences in model architecture, the same encoder network is used across all compared baselines. Our objective is to compare the performance of the learning frameworks, agnostic to the choice of encoder. Therefore, we selected simple architectures to evaluate how each framework can use a simple encoder's limited capacity to learn meaningful representations.\nWe assess the generalizability of the representations by 1) evaluating clusterability in the encoding space and 2) using the representations for a downstream classification task. \nIn addition to the baselines mentioned above, we also compare clusterability performance with unsupervised K-means and classification with a K-Nearest Neighbor classifier, using Dynamic Time Warping (DTW) to measure time series distance. All models are implemented using PyTorch $1.3.1$ and trained on a machine with a Quadro 400 GPU \footnote{Code implementation can be found at \url{https:\/\/github.com\/sanatonek\/TNC_representation_learning}}. Below we describe the datasets for our experiments in more detail.\n\n\n\subsection{Simulated data}\nThe simulated dataset is designed to replicate very long, non-stationary, and high-frequency time series for which the underlying dynamics change over time. Our generated time series consists of $2000$ measurements of $3$ features, generated from $4$ different underlying states. We use a Hidden Markov Model (HMM) to generate the random latent states over time, and in each state, the time series is generated from a different generative process, including Gaussian Processes (GPs) with different kernel functions and Nonlinear Auto-regressive Moving Average models with different sets of parameters ($\alpha$ and $\beta$). In addition, so that the data further resembles realistic (e.g., clinical) time series, two of the features are always correlated. More details about this dataset are provided in Appendix \ref{app:simulated_data}. \nFor this experimental setup, we use a bidirectional, single-layer recurrent neural network encoder. 
We selected this simple architecture because it handles time series with variable lengths and easily extends to higher-dimensional inputs. The encoder model encodes multi-dimensional signal windows of $\delta=50$ into $10$-dimensional representation vectors. The window size is selected such that it is long enough to contain information about the underlying state but not so long that it spans multiple underlying states. A more detailed discussion of the window size choice is presented in Appendix \ref{app:window_size}. \n\n\n\subsection{Clinical waveform data}\nFor a real-world clinical experiment, we use the MIT-BIH Atrial Fibrillation dataset \citep{moody1983new}.\nThis dataset includes $25$ long-term Electrocardiogram (ECG) recordings ($10$ hours in duration) of human subjects with atrial fibrillation. It consists of two ECG signals, each sampled at 250 Hz. The signals are annotated over time with the following rhythm types: 1) Atrial fibrillation, 2) Atrial flutter, 3) AV junctional rhythm, and 4) all other rhythms. Our goal in this experiment is to identify the underlying type of arrhythmia for each sample without any information about the labels. This dataset is particularly interesting and makes this experiment challenging due to the following special properties:\n\n\n\begin{itemize}\n \item The underlying heart rhythm changes over time in each sample. This is an opportunity to evaluate how different representation learning frameworks can handle alternating classes in non-stationary settings; \n \item The dataset is highly imbalanced, with atrial flutter and AV junctional rhythm being present in fewer than $0.1\%$ of the measurements. Data imbalance poses many challenges for downstream classification, further motivating the use of unsupervised representation learning;\n \item The dataset has samples from a small number of individuals, but over an extended period (around 5 million data points). This realistic scenario, common in healthcare data, shows that our framework is still powerful in settings with a limited number of samples.\n\end{itemize}\n\nThe simple RNN encoder architecture used for the other experiment setups cannot model the high-frequency ECG measurements. Therefore, inspired by state-of-the-art architectures for ECG classification problems, the encoder $Enc$ used in this experiment is a 2-channel, 1-dimensional strided convolutional neural network that runs directly on the ECG waveforms. We use six convolutional layers with a total down-sampling factor of 16. The window size is $2500$ samples, meaning that each convolutional filter covers at least half a second of ECG recording, and the representations are summarized in a 64-dimensional vector.\n\n\subsection{Human Activity Recognition (HAR) data}\nHuman Activity Recognition (HAR) is the problem of predicting the type of activity using temporal data from accelerometer and gyroscope measurements. We use the HAR dataset from the UCI Machine Learning Repository \footnote{\url{https:\/\/archive.ics.uci.edu\/ml\/datasets\/human+activity+recognition+using+smartphones}}, which includes data collected from 30 individuals using a smartphone. Each person performs six activities: 1) walking, 2) walking upstairs, 3) walking downstairs, 4) sitting, 5) standing, and 6) laying. The time-series measurements are pre-processed to extract 561 features. 
For our purpose, we concatenate the activity samples from every individual over time using the subject identifier to build the full time series for each subject, which includes continuous activity changes. Similar to the simulated data setting, we use a single-layer RNN encoder. The selected window size is $4$, representing about 15 seconds of recording, and the representations are encoded in a $10$-dimensional vector space. 
\n\n\section{Appendix}\n\n\n\subsection{Simulated Dataset}\label{app:simulated_data}\n\n\begin{figure}[!h]\n \centering\n \includegraphics[scale=0.42]{TCL\/figures\/simulation_sample.pdf}\n \caption{A normalized time series sample from the simulated dataset. Each row represents a single feature, and the shaded regions indicate one of the $4$ underlying simulated states.}\n \label{fig:sim_data_example}\n\end{figure}{}\n\nThe simulated time series consists of $3$ features generated from different underlying hidden states. Figure \ref{fig:sim_data_example} shows a sample from this dataset. Each panel in the figure shows one of the features, and the shaded regions indicate the underlying state of the signal in that period. We use a Hidden Markov Model (HMM) to generate these random latent states over time. The transition probability is set to $5\%$ for switching to an alternating state and $85\%$ for remaining in the same state. In each state, the time series is generated from a different signal distribution. Table \ref{tab:state_signal_distribution} describes the generative process of each signal feature in each state. 
Note that features $1$ and $2$ are always correlated, mainly to mimic realistic clinical time series. As an example, physiological measurements like pulse rate and heart rate are always correlated. \n\n\begin{table}[h]\n \centering\n \begin{tabular}{lllll}\n \toprule\n & State 1 & State 2 & State 3 & State 4\\\n \midrule\n Feature 1& GP (periodic) & NARMA$_\alpha$ & GP (Squared Exp.) & NARMA$_\beta$ \\\n \n Feature 2 & GP (periodic) & NARMA$_\alpha$ & GP (Squared Exp.) & NARMA$_\beta$ \\\n \n Feature 3& GP (Squared Exp.) & NARMA$_\beta$ & GP (periodic) & NARMA$_\alpha$ \\\n \bottomrule\n \end{tabular}\n \caption{Signal distributions for each time series feature of the simulated dataset}\n \label{tab:state_signal_distribution}\n\end{table}{}\n\nIn state $1$, the correlated features are generated by a Gaussian Process (GP) with a periodic kernel. Feature 3, which is uncorrelated with the other two features, comes from another GP with a squared exponential kernel. In addition to GPs, we also use multiple Non-Linear Auto-Regressive Moving Average (NARMA) time series models. The functional forms of NARMA$_\alpha$ and NARMA$_\beta$ are shown in Equations \ref{eq:narma_alpha} and \ref{eq:narma_beta}.\n\n\begin{equation}\n\text{NARMA}_\alpha: y(k+1) = 0.3 y(k) + 0.05 y(k) \sum_{i=0}^{n-1} y(k-i) + 1.5 u(k-(n-1)) u(k) + 0.1\n \label{eq:narma_alpha}\n\end{equation}\n\n\begin{equation}\n\text{NARMA}_\beta: y(k+1) = 0.1 y(k) + 0.25 y(k) \sum_{i=0}^{n-1} y(k-i) + 2.5 u(k-(n-1)) u(k) -0.005\n \label{eq:narma_beta}\n\end{equation}\n\nWhite Gaussian noise with $\sigma=0.3$ is added to all signals, and overall, the dataset consists of 500 instances of $T=2000$ measurements.\n\n\n\n\subsection{Baseline implementation details}\label{app:baseline_imp}\nImplementations of all baselines are included in the code base for reproducibility purposes, and hyper-parameters for all baselines are tuned using cross-validation.\n\n\n\paragraph{Contrastive Predictive Coding (CPC):} The CPC baseline first processes the sequential signal windows using an encoder $Z_t = Enc(X_t)$, with an architecture similar to the encoders of the other baselines. Next, an autoregressive model $g_{ar}$ aggregates all the information in $Z_{\leq t}$ and summarizes it into a context latent representation $c_t = g_{ar}(Z_{\leq t})$. In our implementation, we use a single-layer, unidirectional recurrent neural network with GRU cells and a hidden size equal to the encoding size as the auto-regressor. As in the original paper, the density ratio is estimated using a linear transformation, and the model is trained for one-step-ahead prediction.\n\vspace{-2mm}\n\paragraph{Triplet-Loss (T-Loss):} The triplet loss baseline is implemented using the original code made available by the authors on GitHub\footnote{\url{https:\/\/github.com\/White-Link\/UnsupervisedScalableRepresentationLearningTimeSeries}}. \n\vspace{-2mm}\n\paragraph{KNN and K-means:} These two baselines for classification and clustering are implemented using the tslearn library\footnote{\url{https:\/\/tslearn.readthedocs.io\/en\/stable\/index.html}}, which integrates distance metrics such as DTW. Note that evaluating DTW is computationally expensive, and the tslearn implementation is not optimized. Therefore, for the waveform data with windows of size 2500, we had to down-sample the signals by a factor of two. 
\n\n\n\subsection{TNC implementation extra details}\label{app:TNC_imp}\nTo define the neighborhood range in the TNC framework, as mentioned earlier, we use the Augmented Dickey-Fuller (ADF) statistical test to determine this range ($\eta$) as the region over which the signals remain stationary. More precisely, we gradually increase the range, from a single window size up to 3 times the window size (the upper limit we set), and repeatedly perform the ADF test. We use the $p$-value from this statistical test to determine whether the null hypothesis of non-stationarity can be rejected, in which case the signal is considered stationary. At the point where the $p$-value rises above our defined threshold ($0.01$), we can no longer assume that the signal is stationary, and this is where we set the $\eta$ parameter. Once the neighborhood is defined, we make sure the non-neighboring samples are taken from the distribution of windows that are at least $4\times \eta$ away from $W_t$, ensuring a low likelihood of belonging to the neighborhood. Note that for the implementation of the ADF test, we use the statsmodels library\footnote{\url{https:\/\/www.statsmodels.org\/dev\/generated\/statsmodels.tsa.stattools.adfuller.html}}. Unfortunately, this implementation is not optimized and does not support GPU computation; therefore, evaluating the neighborhood range using ADF slows down training of the TNC framework. As a future direction, we are working on an optimized implementation of the ADF score for our framework.
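As a concrete illustration of this procedure, the sketch below estimates the neighborhood range for a single reference window using the \texttt{adfuller} implementation from statsmodels. Applying the univariate ADF test to each feature separately and aggregating the resulting $p$-values by their mean, as well as the function and variable names, are simplifying assumptions made for illustration only.

\begin{verbatim}
import numpy as np
from statsmodels.tsa.stattools import adfuller

def estimate_neighborhood_range(x, t, delta, p_threshold=0.01, max_eta=3):
    # x:     array of shape (n_features, T) holding the full time series
    # t:     center of the reference window W_t
    # delta: window size
    # The region around t is widened one window size at a time until the
    # ADF test can no longer reject non-stationarity (p-value above the
    # threshold); the last range for which the region was stationary is kept.
    eta = 1
    for candidate in range(1, max_eta + 1):
        half = candidate * delta
        lo, hi = max(0, t - half), min(x.shape[1], t + half)
        # ADF is a univariate test, so apply it per feature and average.
        p_values = [adfuller(x[f, lo:hi])[1] for f in range(x.shape[0])]
        if np.mean(p_values) > p_threshold:
            break
        eta = candidate
    return eta
\end{verbatim}

The returned range can then be used both to sample neighboring windows and to restrict non-neighboring samples to windows sufficiently far from $W_t$.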
\n\n\n\n\subsection{Selecting the window size}\label{app:window_size}\n\nThe window size $\delta$ is an important factor in the performance of a representation learning framework, not only for TNC but also for similar baselines such as CPC and triplet loss. Overall, the window size should be selected such that it is long enough to contain information about the underlying state and not so long that it spans multiple underlying states. In our settings, we have selected the window sizes based on our prior knowledge of the signals. For instance, in the case of an ECG signal, the selected window size is equivalent to 7 seconds of recording, which is small enough that the ECG remains in a stable state and yet contains enough information to determine that underlying state. Our understanding of the time series data can help us select an appropriate window size, but we can also experiment with different $\delta$ to learn this parameter. Table \ref{tab:window_size} shows classification performance results for the simulation setup under different window sizes. We can clearly see the drop in performance for all baseline methods when the window size is too small or too large.\n\n\begin{table}[]\n \centering\n \begin{tabular}{lcccccc}\n &\multicolumn{2}{c}{$\delta=10$}& \multicolumn{2}{c}{$\delta=50$}& \multicolumn{2}{c}{$\delta=100$} \\\n \toprule\n & AUPRC & Accuracy & AUPRC & Accuracy & AUPRC & Accuracy \\\n \midrule\n TNC & 0.74 $\pm$ 0.01 & 71.60 $\pm$ 0.59 & 0.99 $\pm$ 0.00 & 97.52 $\pm$ 0.13 & 0.84 $\pm$ 0.11 & 84.25 $\pm$ 9.08\\\n CPC & 0.49 $\pm$ 0.02 & 51.85 $\pm$ 1.81 & 0.69 $\pm$ 0.06 & 70.26 $\pm$ 6.48 & 0.49 $\pm$ 0.05 & 56.65 $\pm$ 0.81\\\n T-Loss & 0.48 $\pm$ 0.06 & 56.70 $\pm$ 1.07 & 0.78 $\pm$ 0.01 & 76.66 $\pm$ 1.14 & 0.73 $\pm$ 0.008 & 73.29 $\pm$ 1.58 \\\n \bottomrule\n \end{tabular}\n \caption{Downstream classification performance for different window sizes $\delta$ on the simulated dataset}\n \label{tab:window_size}\n\end{table}\n\n\n\n\subsection{Clustering metrics}\label{app:clustering_metrics}\nMost cluster validity measures assess certain structural properties of a clustering result. In our evaluation, we use two measures, namely the Silhouette score and the Davies-Bouldin index, to evaluate the clustering quality of the representations. The Davies-Bouldin index measures intra-cluster similarity (coherence) and inter-cluster differences (separation).\nLet $\mathcal{C} = \{\mathcal{C}_1,..., \mathcal{C}_k\}$ be a clustering of a set $D$ of objects. The Davies-Bouldin score is evaluated as follows: \n\n\begin{equation}\n DB = \frac{1}{k}\sum_{i=1}^{k} \max_{j \neq i} \frac{s(\mathcal{C}_i)+s(\mathcal{C}_j)}{\delta(\mathcal{C}_i,\mathcal{C}_j)}\n \label{eq:db_score}\n\end{equation}\n\nwhere $s(\mathcal{C}_i)$ measures the scatter within cluster $\mathcal{C}_i$, and $\delta(\mathcal{C}_i,\mathcal{C}_j)$ is a cluster-to-cluster distance measure. On the other hand, the silhouette score measures how similar an object is to its own cluster \emph{compared} to other clusters. Both measures are commonly used for the evaluation of clustering algorithms. A comparison of the two metrics has shown that the Silhouette index produces slightly more accurate results in some cases, while the Davies-Bouldin index is generally much less complex to compute \cite{petrovic2006comparison}. 
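Both scores can be computed directly on the learned encodings; the short sketch below illustrates this with scikit-learn, where the random array is a placeholder for the matrix of encoded windows and the number of clusters is chosen to match the four simulated states.

\begin{verbatim}
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score, davies_bouldin_score

# Placeholder for the encodings produced by the trained encoder:
# an array of shape (n_windows, encoding_size).
encodings = np.random.randn(500, 10)

labels = KMeans(n_clusters=4, random_state=0).fit_predict(encodings)
print("Silhouette score:", silhouette_score(encodings, labels))          # higher is better
print("Davies-Bouldin index:", davies_bouldin_score(encodings, labels))  # lower is better
\end{verbatim}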
\n\n\n\n\subsection{Setting the weights for PU learning}\label{app:w}\n\nAs mentioned in the Experiments section, the weight parameter in the loss is the probability of sampling a positive window from the non-neighboring region. One way to set this parameter is to use prior knowledge of the number and distribution of the underlying states. Another way is to tune it as a hyperparameter. Table \ref{tab:weight} shows the TNC loss for different weight parameters. The loss column reports the value measured by Equation \ref{eq:obj}, and the accuracy shows how well the discriminator separates neighboring samples from non-neighboring ones for settings with different weight parameters. To also assess the impact of re-weighting the loss on downstream classification performance, we compared these performance measures for weighted and non-weighted settings. Table \ref{tab:weight_TF} demonstrates these results and confirms that weight-adjusting the loss for non-neighboring samples improves the quality of the learned representations.\n\n\begin{table}[!h]\n \centering\n \begin{tabular}{lcccccc}\n &\multicolumn{2}{c}{Simulation}& \multicolumn{2}{c}{ECG Waveform}& \multicolumn{2}{c}{HAR}\\\n \toprule\n Weight & Loss & Accuracy & Loss & Accuracy & Loss & Accuracy\\\n \midrule\n 0.2 & 0.582$\pm$0.002 & 74.29$\pm$0.61 & 0.631$\pm$0.011 & 60.44$\pm$2.56 & 0.475$\pm$0.004 & 85.75$\pm$0.5 \\\n 0.1 & 0.571$\pm$0.011 & 75.41$\pm$0.37 & 0.637$\pm$0.011 & 63.67$\pm$1.29 & 0.413$\pm$0.003 & 88.21$\pm$1.29\\\n 0.05 & 0.576$\pm$0.002 & 75.73$\pm$0.24 & 0.622$\pm$0.023 & 66.04$\pm$3.46 & 0.383$\pm$0.001 & 87.33$\pm$0.17\\\n \bottomrule\n \end{tabular}\n \caption{Training the TNC framework using different weight parameters. The loss is the value measured by Equation \ref{eq:obj}, and the Accuracy is the accuracy of the discriminator.}\n \label{tab:weight}\n\end{table}{}\n\n\n\n\n\begin{table}[!h]\n \centering\n \begin{tabular}{lccc}\n Weighting? &\multicolumn{1}{c}{Simulation}& \multicolumn{1}{c}{ECG Waveform}& \multicolumn{1}{c}{HAR}\\\n \toprule\n True & {\bf{97.52$\pm$0.13}} & {\bf{77.79$\pm$0.84}} & {\bf{88.32$\pm$0.12}}\\\n False & 97.17$\pm$0.44 & 75.26$\pm$1.48 & 75.25$\pm$13.6\\\n \bottomrule\n \end{tabular}\n \caption{Downstream classification accuracy with the TNC framework, using two different weighting strategies: 1) trained with weight adjustment, and 2) trained with $w=0$.}\n \label{tab:weight_TF}\n\end{table}{}\n\n\n\n\subsection{Supplementary Figures}\label{app:plots}\n\n\n\subsubsection{Clinical waveform data}\nIn order to understand what the TNC framework encodes from the high dimensional ECG signals, we visualize the trajectory of the representations of an individual sample over time. Figure \ref{fig:trajectory_wf} shows this example, where the top two rows are ECG signals from two recording leads and the bottom row shows the representation vectors. We see that around second $40$ the pattern in the representations changes as a result of an artifact in one of the signals. With help from our clinical expert, we also tried to interpret different patterns in the encoding space. For instance, between time $80$ and $130$, where features 0-10 become more activated, the heart rate (HR) has increased. An increase in HR appears as an increased frequency in the ECG signals and is one of the indicators of arrhythmia that we believe TNC has captured. Figure \ref{fig:distribution_wf} shows the distribution of the latent encodings of the ECG signals for different baselines, with colors indicating the arrhythmia class. \n\n\begin{figure}[!h]\n\begin{subfigure}{.32\textwidth}\n\includegraphics[scale=0.32]{TCL\/figures\/encoding_distribution_wf.pdf}\n\end{subfigure}\n\begin{subfigure}{.32\textwidth}\n\includegraphics[scale=0.32]{TCL\/figures\/encoding_distribution_wf_trip.pdf}\n\end{subfigure}\n\begin{subfigure}{.32\textwidth}\n \centering\n\includegraphics[scale=.32]{TCL\/figures\/encoding_distribution_wf_cpc.pdf}\n\end{subfigure}\n\caption{T-SNE visualization of {\bf{waveform}} signal representations for unsupervised representation learning baselines. 
Each point in the plot is a 64-dimensional representation of a window of the time series, with the color indicating the latent state.}\n\label{fig:distribution_wf}\n\end{figure}\n\n\begin{figure}[!h]\n \centering\n \includegraphics[scale=0.2]{TCL\/figures\/embedding_trajectory_hm_wf.pdf}\n \caption{Trajectory of a {\bf{waveform}} signal encoding. The top two plots show the ECG recordings from 2 ECG leads. The bottom plot shows the 64-dimensional encoding of the sliding windows $W_t$ where $\delta=2500$. }\n \label{fig:trajectory_wf}\n\end{figure}\n\n\n\n\n\subsubsection{HAR data}\nFigures \ref{fig:distribution_har} and \ref{fig:trajectory_har} are similar to the plots shown in the previous section, but for the HAR dataset. As shown in Figure \ref{fig:trajectory_har}, the underlying states of the signal are clearly captured by the TNC framework as different patterns in the latent representations.\n\n\begin{figure}[!h]\n\begin{subfigure}{.32\textwidth}\n\includegraphics[scale=0.32]{TCL\/figures\/encoding_distribution_har.pdf}\n\end{subfigure}\n\begin{subfigure}{.32\textwidth}\n\includegraphics[scale=0.32]{TCL\/figures\/encoding_distribution_har_trip.pdf}\n\end{subfigure}\n\begin{subfigure}{.32\textwidth}\n \centering\n\includegraphics[scale=.32]{TCL\/figures\/encoding_distribution_har_cpc.pdf}\n\end{subfigure}\n\caption{T-SNE visualization of {\bf{HAR}} signal representations for all baselines. Each point in the plot is a 10-dimensional representation of a window of $\delta=4$, with colors indicating latent states.}\n\label{fig:distribution_har}\n\end{figure}\n\n\begin{figure}[!h]\n \centering\n \includegraphics[scale=0.2]{TCL\/figures\/embedding_trajectory_hm_har.pdf}\n \caption{Trajectory of a {\bf{HAR}} signal encoding. The top plot shows the original time series with shaded regions indicating the underlying state. The bottom plot shows the 10-dimensional encoding of the sliding windows $W_t$ where $\delta=4$. }\n \label{fig:trajectory_har}\n\end{figure}\n\n\n\n\n\subsubsection{Simulation data}\nIn addition to the initial experiment, we also show the trajectory of the encoding for a smaller encoding size ($3$). In this setting, we have 4 underlying states in the signal and only 3 dimensions for the encoding.\n\n\begin{figure}[!h]\n \centering\n \includegraphics[scale=0.2]{TCL\/figures\/embedding_trajectory_3.pdf}\n \caption{Trajectory of a {\bf{simulation}} signal encoding. The top plots show the signals and the bottom plot shows the 3-dimensional encoding of the sliding windows $W_t$ where $\delta=50$. }\n \label{fig:trajectory_sim_3}\n\end{figure}\n\subsubsection*{Acknowledgments}\nResources used in preparing this research were provided, in part, by the Government of Canada through CIFAR, and companies sponsoring the Vector Institute. This research was undertaken, in part, thanks to funding from the Canadian Institutes of Health Research (CIHR) and the Natural Sciences and Engineering Research Council of Canada (NSERC).\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{INTRODUCTION}\n\nNovae occur in binary systems in which a Roche lobe filling secondary is\nlosing hydrogen-rich material through the inner Lagrangian point onto a\nwhite dwarf (WD) primary. Mass transfer can also occur in long period \nsystems if the secondary has a significant wind, {\it e.g.} the giant \nsecondary in RS Oph or V407 Cyg. 
Core material is mixed into the accreted \nmaterial and is violently ejected into space when the pressure at the \nWD-accretion interface becomes great enough to initiate a thermonuclear \nrunaway (TNR). Novae eject, into the interstellar medium (ISM), a \nmixture of material accreted from the companion star, highly processed \nmaterial from the underlying WD, and products of nucleosynthesis occurring \nduring the TNR. As a result of the TNR, up to 10$^{-4}$ M$_{\\odot}$ of \nmaterial can be ejected from the WD enriched in C, N, O, Ne, Mg, Al \nand other species \n\\btxt{\\citep{2006NuPhA.777..550J}}\nat $v \\sim 10^2 - 10^4$ km s$^{-1}$. Any remaining \nhydrogen still bound to the WD continues to burn in hydrostatic \nequilibrium until it is consumed or ejected via a wind.\n\nInitially, the radiative output of a nova occurs in the optical but as the\nphotosphere of the WD recedes, the spectral energy distribution\nshifts to higher energies \\citep{1978ARA&A..16..171G}. The rate of the \noptical decline defines a nova's primary characteristics\n\\citep[{\\it e.g.},][and references therein]{Warner2008}, namely the time \nto decline 2 magnitudes from visual maximum, t$_2$. The decline \nrate depends on the amount of mass ejected, its velocity, composition,\nand if it runs into circumbinary material. The bolometric luminosity \nduring the outburst is high, near or exceeding the Eddington limit\n(for the fastest novae), and thus additional \nmaterial is ejected via a strong stellar wind\n\\citep{1998MNRAS.300..931S,2001MNRAS.320..103S}. In some novae the \ncollision between this fast wind and the initial exploded mass or any\npre-existing circumbinary material can produce X-ray emission from shocks. \nThe emission from this early X-ray phase is hard, has a low luminosity, \nof order 10$^{33-35}$ erg s$^{-1}$, and declines relatively rapidly \n\\citep{1998ApJ...499..395B,2001MNRAS.326L..13O}. As fuel continues to burn, \nmass loss causes the photosphere of the WD to shrink \n\\citep{1985ApJ...294..263M}. \n\\btxt{The effective temperature increases, \npeaking in the soft X-rays, at (2-8)$\\times$10$^5$ K \n\\citep{1996ApJ...456..788K,1996ApJ...463L..21S,2010ApJ...717..363R}}. \nOnce the ejecta have cleared sufficiently, and if the line of sight extinction \nis not severe, some novae exhibit characteristics similar to the\nSuper Soft X-ray binary sources \n\\citep[SSSs:][]{1997ARA&A..35...69K} with strong and soft, \nE$_{peak}$ $<$ 1 keV, X-ray emission. This point in novae evolution is \ncalled the SSS phase. At low spectral resolution, the UV\/X-ray\nspectral energy distributions (SED) resembles\nblackbodies, but higher resolution {\\it Chandra}\\ or {\\it XMM}\\ grating observations\nreveal a significantly more complex picture. The spectra frequently have\nP-Cygni profiles or emission lines superimposed on a line blanketed\natmosphere. Models sophisticated enough to interpret the high resolution\ndata are only now becoming available \\citep{2010AN....331..175V}. Once \nnuclear burning ends, the X-ray light curve rapidly declines as the WD cools \nmarking the end of the SSS phase and the outburst. At some point mass\ntransfer resumes and eventually another eruption occurs. These are\ncalled classical novae (CNe) until a second outburst is observed then they\nbecome recurrent novae (RNe). 
\n\\btxt{Detailed reviews of nova evolution are \npresented by \\citet{Starrfield08} and \\citet{2010AN....331..160B}.\n\\citet{2010AN....331..169H} discuss the theoretical implications of X-ray \nobservations of novae while \\citet{2010ApJS..187..275S} discusses the \ncurrent understanding of the RN class.}\n\nAn important, but not the sole\ndriver of the nova phenomenon is the mass of the WD. \nExplosions on larger mass WDs expel less mass but at higher velocities. \nThey have larger luminosities, are in outburst for less time, and (should)\nhave shorter recurrence times than novae on lower mass WDs. \nHigh mass ($>$ 1.25 M$_{\\odot}$) WDs reach TNR ignition more rapidly\nthan low mass WDs and thus do not have the chance to accrete as much\nmaterial. They also reach higher peak temperatures during the TNR leading\nto a more energetic explosion. However, other factors are believed to \nplay important roles leading to a nova event. These include\nthe composition of the WD, either CO or ONe, \nthe initial temperature of the WD during accretion, the mass accretion \nrate \\citep{2005ApJ...623..398Y}, the composition of the accreted \nmaterial \\citep{2000AIPC..522..379S}, and the mixing history\nof the core\/envelope. All impact \nhow much mass can be transfered to the WD before \nan outburst begins. Models show that different combinations of these \ncharacteristics can reproduce a wide range of nova outbursts \n\\citep{2005ApJ...623..398Y,WoodStar11}. \nUnfortunately very few of these parameters\nhave been observationally verified in any nova.\n\nThe X-ray regime is a crucial component for the study of novae providing \ninsight into TNR burning processes, WD mass and composition, accretion and \nmixing mechanisms, dust grain formation and destruction, and mass loss \nprocesses. In addition, high mass novae such as RNe are potential\nSN Ia progenitors via the single degenerate scenario \n\\citep[e.g.][]{2008A&A...484L...9W,2010ASPC..429..173W,2010Ap&SS.329..287M}.\nTo make progress understanding the physics of these important astrophysical\nphenomena, observations of a large number of novae are required \nto sample all the contributing \nfactors. Prior to the launch of {\\it Swift}\\ the general X-ray temporal\nevolution of novae was far from complete \nas only a few novae had been observed at more than one epoch in X-rays. \n\n{\\it Swift}\\ is an excellent facility for studying novae as it has a superb\nsoft X-ray response with its XRT instrument \\citep{2005SSRv..120..165B}.\n\\citet{2007ApJ...663..505N} show how the XRT favorably compares with \ncurrently available X-ray instruments. \n{\\it Swift}\\ also has a co-aligned UV\/optical instrument, UVOT\n\\citep[see][for details]{2005SSRv..120...95R}, which provides either 6 \nfilter photometry or low resolution grism spectroscopy. The other \n{\\it Swift}\\ instrument is a $\\gamma$-ray detector, BAT. \n\\btxt{However, novae are generally not strong $\\gamma$-ray \nsources \\citep{2005NuPhA.758..721H}.}\nThe decay of $^{22}$Na\n(half-life 2.6 yrs) generates a 1275 keV emission line but only $>$ \n1.25 M$_{\\odot}$ WDs are predicted to produce sufficient \n$^{22}$Na during the TNR. 
This line \nhas not yet been definitively detected by satellites \\citep{Hernanz08}\nbut there is a recent claim by \\citet{2010ApJ...723L..84S}\nthat their models with Compton decay of $^{22}$Na can account for\nthe hard X-ray flux in V2491 Cyg provided an exceptionally large amount\nof $^{22}$Na, 3$\\times$10$^{-5}$M$_{\\odot}$, was synthesized.\nAnother $\\gamma$-ray emission mechanism is electron-positron \nannihilation very early in the outburst but this is expected to be\ndetectable only in nearby novae. \nThe symbiotic RN RS Oph, at 1.6 kpc, was clearly detected \nin the lowest energy channels of the {\\it Swift}\/BAT \\citep{2008A&A...485..223S},\nbut that emission is consistent with that from high temperature\nshocks as the outburst ejecta plow into the pre-existing red giant\nwind. Recently the symbiotic RN V407 Cyg was detected in the GeV band \nby {\\it Fermi-LAT} \\citep{2010Sci...329..817A} only a few days after\nvisual maximum. \\citet{2010Sci...329..817A} show that the $\\gamma$-ray\nemission can be explained by either Compton scattering of infrared photons \nin the red giant wind or $\\pi^0$ decay from proton-proton collisions.\n\\citet{2011arXiv1101.6013L} predict that $\\pi^0$ $\\gamma$-rays\nwill be created in the high circumbinary densities of very long orbital \nperiods systems such as V407 Cyg, with a period of $\\sim$ 43 years \n\\citep{1990MNRAS.242..653M,2011arXiv1109.5397S}.\n\n{\\it Swift}\\ has a rapid response ToO procedure and flexible scheduling \nwhich is critical in obtaining well sampled X-ray light curves of \ntransient events. Initial {\\it Swift}\\ results of 11 novae were \npresented in \\citet{2007ApJ...663..505N}.\nSince that time significantly more data have been obtained by the \n{\\it Swift}\\ Nova-CV group\\footnote{The current members of the group and \nobservation strategy are provided at \\url{http:\/\/www.swift.ac.uk\/nova-cv\/}.}\nwhich has devised an observing strategy to efficiently utilize the\nsatellite's unique capabilities and maximize the science return by \nobserving interesting and bright novae with low extinction\nrecently discovered in the Milky Way and Magellanic Clouds. In five \nyears {\\it Swift}\\ has performed multiple visits for \\totalswift\\ classical and \nrecurrent Galactic\/Magellanic Cloud novae totaling well over 2 Ms of \nexposure time. \n\nHere we present a summary of all the Galactic\/Magellanic Cloud {\\it Swift}\\\nnova observations from launch (2004 November 20) to 2010 July 31 using\nthe XRT (0.3-10 keV) X-ray instrument (count rates and hardness ratios) \nand the available UVOT (1700-8000\\AA) filter photometry. {\\it Swift}\\ \nobservations of novae in the M31 group are reported in \n\\citet{2010A&A...523A..89H}, \\citet{2010AN....331..187P} and \nreferences within. We combine the {\\it Swift}\\ Galactic\/Magellanic Cloud \ndata with archival pointed observations of CNe and RNe from {\\it ROSAT}, \n{\\it XMM}, {\\it Chandra}, {\\it BeppoSax}, {\\it RXTE}, and {\\it ASCA} to produce\nthe most comprehensive X-ray sample of local nova. \nThe sample includes \\totalSSS\\ systems that were observed during the SSS phase.\n\nIn Section 2, we summarize the properties of the \\total\\ novae in the\nX-ray sample. The averaged {\\it Swift}\\ XRT count rates and UVOT magnitudes\nfor each observational session are also provided. 
Studies of high \nfrequency phenomena in individual objects are either left for future\nwork or have previously been published \\citep[V458 Vul, V2491 Cyg, V598 Pup, \nRS Oph, V407 Cyg, and V723 Cas in][ respectively]{2009AJ....137.4160N,\n2010MNRAS.401..121P,2009A&A...507..923P,2011ApJ...727..124O,\n2011A&A...527A..98S,2008AJ....135.1328N}. \nSections 3 and 4 detail the observations and results \nduring the hard and SSS phases, respectively. A discussion \nfollows in Section 5 articulating trends between\nthe SSS duration and t$_2$, expansion velocity of the ejecta, and\norbital period, plus the role of SSS emission in \ndust-forming novae. Also included is a discussion on the origin of the \ndifferent variability observed in the X-ray and UV light curves of the \n{\\it Swift}\\ sources. Optical characteristics indicative of SSS emission\nin CNe and RNe are also presented. The last section, Section 6, \nprovides a summary of this work.\n\n\\section{THE X-RAY DATA SET}\n\n\\subsection{Characteristics}\n\nTable \\ref{chartable} presents the primary characteristics of all the \nGalactic\/Magellanic Cloud novae with pointed X-ray observations prior\nto 2010 July 31. In addition to the {\\it Swift}\\ data, the sample includes \nall the publicly available pointed observations from the {\\it ROSAT}, {\\it XMM}, \n{\\it Chandra}, {\\it BeppoSax}, {\\it RXTE}, and {\\it ASCA} archives.\nThe columns give the nova name, visual magnitude at maximum, Julian date of\nvisual maximum, time to decline two magnitudes from visual maximum,\nthe Full-Width at Half-Maximum, FWHM, of H$\\alpha$ or H$\\beta$ taken near\nvisual maximum, E(B-V) and averaged Galactic hydrogen column density, N$_H$, \nalong the line of sight, proposed orbital period, estimated distance, \nwhether the nova was observed to form dust, and whether the nova is a \nknown RN. The numbers in the parentheses are the literature \nreferences given in the table notes. 
The names of novae with {\\it Swift}\\ \nobservations are shown in {\\it bold}.\n\n\\begin{deluxetable}{lllrrrrrrll}\n\\tablecaption{Observable Characteristics of \nGalactic\/Magellanic Cloud novae with X-ray observations\\label{chartable}}\n\\rotate\n\\tablewidth{700pt}\n\\tabletypesize{\\scriptsize}\n\\tablehead{\n\\colhead{Name\\tablenotemark{a}} & \\colhead{V$_{max}$\\tablenotemark{b}} & \n\\colhead{Date\\tablenotemark{c}} & \\colhead{t$_2$\\tablenotemark{d}} & \n\\colhead{FWHM\\tablenotemark{e}} & \\colhead{E(B-V)} & \n\\colhead{N$_H$\\tablenotemark{f}} & \\colhead{Period} & \n\\colhead{D} & \\colhead{Dust?\\tablenotemark{g}} & \\colhead{RN?} \\\\ \n\\colhead{} & \\colhead{(mag)} & \\colhead{(JD)} & \\colhead{(d)} & \n\\colhead{(km s$^{-1}$)} & \\colhead{(mag)} & \\colhead{(cm$^{-2}$)} &\n\\colhead{(d)} & \\colhead{(kpc)} & \\colhead{} & \\colhead{}\n} \n\\startdata\nCI Aql & 8.83 (1) & 2451665.5 (1) & 32 (2) & 2300 (3) & 0.8$\\pm0.2$ (4) & 1.2e+22 & 0.62 (4) & 6.25$\\pm5$ (4) & N & Y \\\\\n{\\bf CSS081007}\\tablenotemark{h} & \\nodata & 2454596.5\\tablenotemark{i} & \\nodata & \\nodata & 0.146 & 1.1e+21 & 1.77 (5) & 4.45$\\pm1.95$ (6) & \\nodata & \\nodata \\\\\nGQ Mus & 7.2 (7) & 2445352.5 (7) & 18 (7) & 1000 (8) & 0.45 (9) & 3.8e+21 & 0.059375 (10) & 4.8$\\pm1$ (9) & N (7) & \\nodata \\\\\nIM Nor & 7.84 (11) & 2452289 (2) & 50 (2) & 1150 (12) & 0.8$\\pm0.2$ (4) & 8e+21 & 0.102 (13) & 4.25$\\pm3.4$ (4) & N & Y \\\\\n{\\bf KT Eri} & 5.42 (14) & 2455150.17 (14) & 6.6 (14) & 3000 (15) & 0.08 (15) & 5.5e+20 & \\nodata & 6.5 (15) & N & M \\\\\n{\\bf LMC 1995} & 10.7 (16) & 2449778.5 (16) & 15$\\pm2$ (17) & \\nodata & 0.15 (203) & 7.8e+20 & \\nodata & 50 & \\nodata & \\nodata \\\\\nLMC 2000 & 11.45 (18) & 2451737.5 (18) & 9$\\pm2$ (19) & 1700 (20) & 0.15 (203) & 7.8e+20 & \\nodata & 50 & \\nodata & \\nodata \\\\\n{\\bf LMC 2005} & 11.5 (21) & 2453700.5 (21) & 63 (22) & 900 (23) & 0.15 (203) & 1e+21 & \\nodata & 50 & M (24) & \\nodata \\\\\n{\\bf LMC 2009a} & 10.6 (25) & 2454867.5 (25) & 4$\\pm1$ & 3900 (25) & 0.15 (203) & 5.7e+20 & 1.19 (26) & 50 & N & Y \\\\\n{\\bf SMC 2005} & 10.4 (27) & 2453588.5 (27) & \\nodata & 3200 (28) & \\nodata & 5e+20 & \\nodata & 61 & \\nodata & \\nodata \\\\\n{\\bf QY Mus} & 8.1 (29) & 2454739.90 (29) & 60: & \\nodata & 0.71 (30) & 4.2e+21 & \\nodata & \\nodata & M & \\nodata \\\\\n{\\bf RS Oph} & 4.5 (31) & 2453779.44 (14) & 7.9 (14) & 3930 (31) & 0.73 (32) & 2.25e+21 & 456 (33) & 1.6$\\pm0.3$ (33) & N (34) & Y \\\\\n{\\bf U Sco} & 8.05 (35) & 2455224.94 (35) & 1.2 (36) & 7600 (37) & 0.2$\\pm0.1$ (4) & 1.2e+21 & 1.23056 (36) & 12$\\pm2$ (4) & N & Y \\\\\n{\\bf V1047 Cen} & 8.5 (38) & 2453614.5 (39) & 6 (40) & 840 (38) & \\nodata & 1.4e+22 & \\nodata & \\nodata & \\nodata & \\nodata \\\\\n{\\bf V1065 Cen} & 8.2 (41) & 2454123.5 (41) & 11 (42) & 2700 (43) & 0.5$\\pm0.1$ (42) & 3.75e+21 & \\nodata & 9.05$\\pm2.8$ (42) & Y (42) & \\nodata \\\\\nV1187 Sco & 7.4 (44) & 2453220.5 (44) & 7: (45) & 3000 (44) & 1.56 (44) & 8.0e+21 & \\nodata & 4.9$\\pm0.5$ (44) & N & \\nodata \\\\\n{\\bf V1188 Sco} & 8.7 (46) & 2453577.5 (46) & 7 (40) & 1730 (47) & \\nodata & 5.0e+21 & \\nodata & 7.5 (39) & \\nodata & \\nodata \\\\\n{\\bf V1213 Cen} & 8.53 (48) & 2454959.5 (48) & 11$\\pm2$ (49) & 2300 (50) & 2.07 (30) & 1.0e+22 & \\nodata & \\nodata & \\nodata & \\nodata \\\\\n{\\bf V1280 Sco} & 3.79 (51) & 2454147.65 (14) & 21 (52) & 640 (53) & 0.36 (54) & 1.6e+21 & \\nodata & 1.6$\\pm0.4$ (54) & Y (54) & \\nodata \\\\\n{\\bf V1281 Sco} & 8.8 (55) & 2454152.21 (55) & 15:& 1800 (56) & 
0.7 (57) & 3.2e+21 & \\nodata & \\nodata & N & \\nodata \\\\\n{\\bf V1309 Sco} & 7.1 (58) & 2454714.5 (58) & 23$\\pm2$ (59) & 670 (60) & 1.2 (30) & 4.0e+21 & \\nodata & \\nodata & \\nodata & \\nodata \\\\\n{\\bf V1494 Aql} & 3.8 (61) & 2451515.5 (61) & 6.6$\\pm0.5$ (61) & 1200 (62) & 0.6 (63) & 3.6e+21 & 0.13467 (64) & 1.6$\\pm0.1$ (63) & N & \\nodata \\\\\n{\\bf V1663 Aql} & 10.5 (65) & 2453531.5 (65) & 17 (66) & 1900 (67) & 2: (68) & 1.6e+22 & \\nodata & 8.9$\\pm3.6$ (69) & N & \\nodata \\\\\nV1974 Cyg & 4.3 (70) & 2448654.5 (70) & 17 (71) & 2000 (19) & 0.36$\\pm0.04$ (71) & 2.7e+21 & 0.081263 (70) & 1.8$\\pm0.1$ (72) & N & \\nodata \\\\\n{\\bf V2361 Cyg} & 9.3 (73) & 2453412.5 (73) & 6 (40) & 3200 (74) & 1.2: (75) & 7.0e+21 & \\nodata & \\nodata & Y (40) & \\nodata \\\\\n{\\bf V2362 Cyg} & 7.8 (76) & 2453831.5 (76) & 9 (77) & 1850 (78) & 0.575$\\pm0.015$ (79) & 4.4e+21 & 0.06577 (80) & 7.75$\\pm3$ (77) & Y (81) & \\nodata \\\\\n{\\bf V2467 Cyg} & 6.7 (82) & 2454176.27 (82) & 7 (83) & 950 (82) & 1.5 (84) & 1.4e+22 & 0.159 (85) & 3.1$\\pm0.5$ (86) & M (87) & \\nodata \\\\\n{\\bf V2468 Cyg} & 7.4 (88) & 2454534.2 (88) & 10: & 1000 (88) & 0.77 (89) & 1.0e+22 & 0.242 (90) & \\nodata & N & \\nodata \\\\\n{\\bf V2491 Cyg} & 7.54 (91) & 2454567.86 (91) & 4.6 (92) & 4860 (93) & 0.43 (94) & 4.7e+21 & 0.09580: (95) & 10.5 (96) & N & M \\\\\nV2487 Oph & 9.5 (97) & 2450979.5 (97) & 6.3 (98) & 10000 (98) & 0.38$\\pm0.08$ (98) & 2.0e+21 & \\nodata & 27.5$\\pm3$ (99) & N (100) & Y (101) \\\\\n{\\bf V2540 Oph} & 8.5 (102) & 2452295.5 (102) & \\nodata & \\nodata & \\nodata & 2.3e+21 & 0.284781 (103) & 5.2$\\pm0.8$ (103) & N & \\nodata \\\\\nV2575 Oph & 11.1 (104) & 2453778.8 (104) & 20: & 560 (104) & 1.4 (105) & 3.3e+21 & \\nodata & \\nodata & N (105) & \\nodata \\\\\n{\\bf V2576 Oph} & 9.2 (106) & 2453832.5 (106) & 8: & 1470 (106) & 0.25 (107) & 2.6e+21 & \\nodata & \\nodata & N & \\nodata \\\\\n{\\bf V2615 Oph} & 8.52 (108) & 2454187.5 (108) & 26.5 (108) & 800 (109) & 0.9 (108) & 3.1e+21 & \\nodata & 3.7$\\pm0.2$ (108) & Y (110) & \\nodata \\\\\n{\\bf V2670 Oph} & 9.9 (111) & 2454613.11 (111) & 15: & 600 (112) & 1.3: (113) & 2.9e+21 & \\nodata & \\nodata & N (114) & \\nodata \\\\\n{\\bf V2671 Oph} & 11.1 (115) & 2454617.5 (115) & 8: & 1210 (116) & 2.0 (117) & 3.3e+21 & \\nodata & \\nodata & M (117) & \\nodata \\\\\n{\\bf V2672 Oph} & 10.0 (118) & 2455060.02 (118) & 2.3 (119) & 8000 (118) & 1.6$\\pm0.1$ (119) & 4.0e+21 & \\nodata & 19$\\pm2$ (119) & \\nodata & M \\\\\nV351 Pup & 6.5 (120) & 2448617.5 (120) & 16 (121) & \\nodata & 0.72$\\pm0.1$ (122) & 6.2e+21 & 0.1182 (123) & 2.7$\\pm0.7$ (122) & N & \\nodata \\\\\n{\\bf V382 Nor} & 8.9 (124) & 2453447.5 (124) & 12 (40) & 1850 (23) & \\nodata & 1.7e+22 & \\nodata & \\nodata & \\nodata & \\nodata \\\\\nV382 Vel & 2.85 (125) & 2451320.5 (125) & 4.5 (126) & 2400 (126) & 0.05: (126) & 3.4e+21 & 0.146126 (127) & 1.68$\\pm0.3$ (126) & N & \\nodata \\\\\n{\\bf V407 Cyg} & 6.8 (128) & 2455266.314 (128) & 5.9 (129) & 2760 (129) & 0.5$\\pm0.05$ (130) & 8.8e+21 & 15595 (131) & 2.7 (131) & \\nodata & Y \\\\\n{\\bf V458 Vul} & 8.24 (132) & 2454322.39 (132) & 7 (133) & 1750 (134) & 0.6 (135) & 3.6e+21 & 0.06812255 (136) & 8.5$\\pm1.8$ (133) & N (135) & \\nodata \\\\\n{\\bf V459 Vul} & 7.57 (137) & 2454461.5 (137) & 18 (138) & 910 (139) & 1.0 (140) & 5.5e+21 & \\nodata & 3.65$\\pm1.35$ (138) & Y (140) & \\nodata \\\\\nV4633 Sgr & 7.8 (141) & 2450895.5 (141) & 19$\\pm3$ (142) & 1700 (143) & 0.21 (142) & 1.4e+21 & 0.125576 (144) & 8.9$\\pm2.5$ (142) & N & \\nodata 
\\\\\n{\\bf V4643 Sgr} & 8.07 (145) & 2451965.867 (145) & 4.8 (146) & 4700 (147) & 1.67 (148) & 1.4e+22 & \\nodata & 3 (148) & N & \\nodata \\\\\n{\\bf V4743 Sgr} & 5.0 (149) & 2452537.5 (149) & 9 (150) & 2400 (149) & 0.25 (151) & 1.2e+21 & 0.281 (152) & 3.9$\\pm0.3$ (151) & N & \\nodata \\\\\n{\\bf V4745 Sgr} & 7.41 (153) & 2452747.5 (153) & 8.6 (154) & 1600 (155) & 0.1 (154) & 9.0e+20 & 0.20782 (156) & 14$\\pm5$ (154) & \\nodata & \\nodata \\\\\n{\\bf V476 Sct} & 10.3 (157) & 2453643.5 (157) & 15 (158) & \\nodata & 1.9 (158) & 1.2e+22 & \\nodata & 4$\\pm1$ (158) & M (159) & \\nodata \\\\\n{\\bf V477 Sct} & 9.8 (160) & 2453655.5 (160) & 3 (160) & 2900 (161) & 1.2: (162) & 4e+21 & \\nodata & \\nodata & M (163) & \\nodata \\\\\n{\\bf V5114 Sgr} & 8.38 (164) & 2453081.5 (164) & 11 (165) & 2000 (23) & \\nodata & 1.5e+21 & \\nodata & 7.7$\\pm0.7$ (165) & N (166) & \\nodata \\\\\n{\\bf V5115 Sgr} & 7.7 (167) & 2453459.5 (167) & 7 (40) & 1300 (168) & 0.53 (169) & 2.3e+21 & \\nodata & \\nodata & N (169) & \\nodata \\\\\n{\\bf V5116 Sgr} & 8.15 (170) & 2453556.91 (170) & 6.5 (171) & 970 (172) & 0.25 (173) & 1.5e+21 & 0.1238 (171) & 11$\\pm3$ (173) & N (174) & \\nodata \\\\\n{\\bf V5558 Sgr} & 6.53 (175) & 2454291.5 (175) & 125 (176) & 1000 (177) & 0.80 (178) & 1.6e+22 & \\nodata & 1.3$\\pm0.3$ (176) & N (179) & \\nodata \\\\\n{\\bf V5579 Sgr} & 5.56 (180) & 2454579.62 (180) & 7: & 1500 (23) & 1.2 (181) & 3.3e+21 & \\nodata & \\nodata & Y (181) & \\nodata \\\\\n{\\bf V5583 Sgr} & 7.43 (182) & 2455051.07 (182) & 5: & 2300 (182) & 0.39 (30) & 2.0e+21 & \\nodata & 10.5 & \\nodata & \\nodata \\\\\n{\\bf V574 Pup} & 6.93 (183) & 2453332.22 (183) & 13 (184) & 2800 (184) & 0.5$\\pm0.1$ & 6.2e+21 & \\nodata & 6.5$\\pm1$ & M (185) & \\nodata \\\\\n{\\bf V597 Pup} & 7.0 (186) & 2454418.75 (186) & 3: & 1800 (187) & 0.3 (188) & 5.0e+21 & 0.11119 (189) & \\nodata & N (188) & \\nodata \\\\\n{\\bf V598 Pup} & 3.46 (14) & 2454257.79 (14) & 9$\\pm1$ (190) & \\nodata & 0.16 (190) & 1.4e+21 & \\nodata & 2.95$\\pm0.8$ (190) & \\nodata & \\nodata \\\\\n{\\bf V679 Car} & 7.55 (191) & 2454797.77 (191) & 20: & \\nodata & \\nodata & 1.3e+22 & \\nodata & \\nodata & \\nodata & \\nodata \\\\\n{\\bf V723 Cas} & 7.1 (192) & 2450069.0 (192) & 263 (2) & 600 (193) & 0.5 (194) & 2.35e+21 & 0.69 (195) & 3.86$\\pm0.23$ (196) & N & \\nodata \\\\\nV838 Her & 5 (197) & 2448340.5 (197) & 2 (198) & \\nodata & 0.5$\\pm0.1$ (198) & 2.6e+21 & 0.2975 (199) & 3$\\pm1$ (198) & Y (200) & \\nodata \\\\\n{\\bf XMMSL1 J06}\\tablenotemark{j} & 12 (201) & 2453643.5 (202) & 8$\\pm2$ (202) & \\nodata & 0.15 (203) & 8.7e+20 & \\nodata & 50 & \\nodata & \\nodata \\\\\n\\enddata\n\\tablenotetext{a}{Novae with {\\it Swift}\\ observations are presented in {\\it bold}.}\n\\tablenotetext{b}{Visual maximum.}\n\\tablenotetext{c}{Date of visual maximum.}\n\\tablenotetext{d}{As measured from the visual light curve. A \":\" indicates\nan uncertain value due to an estimate from the AAVSO light curve.}\n\\tablenotetext{e}{Of Balmer lines measured at or near visual maximum.}\n\\tablenotetext{f}{Average Galactic N$_H$ within 0.5$\\arcdeg$ of the nova \nposition as given in the HEASARC N$_H$ tool.}\n\\tablenotetext{g}{Dust forming nova? 
(Y)es, (N)o, or (M)abye.\nNovae with \"N\" but no dust reference were sufficiently observed\nbut no dust was specifically reported in any of the references.\n}\n\\tablenotetext{h}{Full nova name is CSS081007030559+054715.}\n\\tablenotetext{i}{An averaged date based on available photometry.}\n\\tablenotetext{j}{Full nova name is XMMSL1 J060636.2-694933.} \\\\\n\\tablecomments{Numbers in parenthesis are the reference codes.} \n\\tablerefs{\n1 = \\citet{2000IAUC.7411....3H};\n2 = \\citet{2010AJ....140...34S};\n3 = \\citet{2000IAUC.7409....1T};\n4 = \\citet{2010ApJS..187..275S};\n5 = \\citet{2010AN....331..156B};\n6 = \\citet{2008ATel.1847....1S};\n7 = \\citet{1984MNRAS.211..421W};\n8 = \\citet{1983IAUC.3766....1C};\n9 = \\citet{1984A&A...137..307K};\n10 = \\citet{1989ApJ...339L..41D};\n11 = \\citet{2002IAUC.7791....2K};\n12 = \\citet{2002IAUC.7799....3D};\n13 = \\citet{2003MNRAS.343..313W};\n14 = \\citet{2010ApJ...724..480H};\n15 = \\citet{2009ATel.2327....1R};\n16 = \\citet{1995IAUC.6143....2L};\n17 = \\citet{2004IBVS.5582....1L};\n18 = \\citet{2000IAUC.7453....1L};\n19 = \\citet{2003A&A...405..703G};\n20 = \\citet{2000IAUC.7457....1D};\n21 = \\citet{2005IAUC.8635....1L};\n22 = \\citet{2007JAVSO..35..359L};\n23 = Average of our SMARTS spectra;\n24 = Evidence from our SMARTS IR lightcurve;\n25 = \\citet{2009IAUC.9019....1L};\n26 = \\citet{2009ATel.2001....1B};\n27 = \\citet{2005IAUC.8582....2L};\n28 = \\citet{2005IAUC.8582....3M};\n29 = \\citet{2008IAUC.8990....2L};\n30 = \\citet{1998ApJ...500..525S};\n31 = \\citet{2007ApJ...665L..63B};\n32 = \\citet{1985IAUC.4067....2S};\n33 = \\citet{2008ApJ...673.1067N};\n34 = \\citet{2007ApJ...671L.157E};\n35 = \\citet{2010ATel.2419....1S};\n36 = \\citet{2001A&A...378..132E};\n37 = \\citet{2010ATel.2411....1A};\n38 = \\citet{2005IAUC.8596....1L};\n39 = \\citet{2007ApJ...663..505N};\n40 = \\citet{2007ApJ...662..552H};\n41 = \\citet{2007IAUC.8800....1L};\n42 = \\citet{2010AJ....140.1347H};\n43 = \\citet{2007IAUC.8800....2W};\n44 = \\citet{2006ApJ...638..987L};\n45 = \\citet{2004AAS...20515003S};\n46 = \\citet{2005IAUC.8574....1P};\n47 = \\citet{2005IAUC.8576....2N};\n48 = \\citet{2009IAUC.9043....1P};\n49 = From AAVSO lightcurve;\n50 = \\citet{2009IAUC.9043....2P};\n51 = \\citet{2007IAUC.8807....1Y};\n52 = \\citet{2008MNRAS.391.1874D};\n53 = \\citet{2007CBET..852....1M};\n54 = \\citet{2008A&A...487..223C};\n55 = \\citet{2007IAUC.8810....1Y};\n56 = \\citet{2007IAUC.8812....2N};\n57 = \\citet{2007IAUC.8846....2R};\n58 = \\citet{2008IAUC.8972....1N};\n59 = From AAVSO light curve;\n60 = \\citet{2008IAUC.8972....2N};\n61 = \\citet{2000A&A...355L...9K};\n62 = \\citet{1999IAUC.7324....1F};\n63 = \\citet{2003A&A...404..997I};\n64 = \\citet{2001IAUC.7665....2B};\n65 = \\citet{2005IAUC.8540....1P};\n66 = \\citet{2006JBAA..116..320B};\n67 = Average of our SMARTS spectra;\n68 = \\citet{2005IAUC.8640....2P};\n69 = \\citet{2007ApJ...669.1150L};\n70 = \\citet{1994ApJ...431L..47D};\n71 = \\citet{1996AJ....111..869A};\n72 = \\citet{1997A&A...318..908C};\n73 = \\citet{2005IAUC.8483....1N};\n74 = \\citet{2005IAUC.8484....1N};\n75 = \\citet{2005IAUC.8524....2R};\n76 = \\citet{2006CBET..466....2W};\n77 = \\citet{2008A&A...479L..51K};\n78 = \\citet{2006ATel..792....1C};\n79 = \\citet{2006IAUC.8702....2S};\n80 = \\citet{2009ATel.2137....1B};\n81 = \\citet{2008AJ....136.1815L};\n82 = \\citet{2007IAUC.8821....1N};\n83 = \\citet{2009AAS...21349125L};\n84 = \\citet{2007IAUC.8848....1M};\n85 = \\citet{2008ATel.1723....1S};\n86 = \\citet{2009AN....330...77P};\n87 = 
\\citet{WoodStar11};\n88 = \\citet{2008IAUC.8927....2N};\n89 = \\citet{2008IAUC.8936....2R};\n90 = \\citet{2009ATel.2157....1S};\n91 = \\citet{2008IAUC.8934....1N};\n92 = \\citet{2008ATel.1485....1T};\n93 = \\citet{2008ATel.1475....1T};\n94 = \\citet{2008IAUC.8938....2R};\n95 = \\citet{2008ATel.1514....1B} but Darnley et al. (2011, submitted) do not find convincing evidence of this period in their data.;\n96 = \\citet{2008CBET.1379....1H};\n97 = \\citet{1998IAUC.6941....1N};\n98 = \\citet{2000ApJ...541..791L};\n99 = \\citet{2000ApJ...541..791L};\n100 = \\citet{1998IAUC.7049....1R};\n101 = \\citet{2009AJ....138.1230P};\n102 = \\citet{2002IAUC.7809....1R};\n103 = \\citet{2005PASA...22..298A};\n104 = \\citet{2006IAUC.8671....1P};\n105 = \\citet{2006IAUC.8710....2R};\n106 = \\citet{2006IAUC.8700....1W};\n107 = \\citet{2006IAUC.8730....5L};\n108 = \\citet{2008MNRAS.387..344M};\n109 = \\citet{2007IAUC.8824....1N};\n110 = \\citet{2007IAUC.8846....2R};\n111 = \\citet{2008IAUC.8948....2A};\n112 = \\citet{2008IAUC.8948....3N};\n113 = \\citet{2008IAUC.8956....1R};\n114 = \\citet{2008IAUC.8998....3S};\n115 = \\citet{2008IAUC.8950....1N};\n116 = \\citet{2008CBET.1448....1H};\n117 = \\citet{2008IAUC.8957....1R};\n118 = \\citet{2009IAUC.9064....1N};\n119 = \\citet{2010MNRAS.tmp.1484M};\n120 = \\citet{1992IAUC.5422....1C};\n121 = \\citet{1996ApJ...466..410O};\n122 = \\citet{1996MNRAS.279..280S};\n123 = \\citet{2001MNRAS.328..159W};\n124 = \\citet{2005IAUC.8497....1L};\n125 = \\citet{1999IAUC.7176....1L};\n126 = \\citet{2002A&A...390..155D};\n127 = \\citet{2006AJ....131.2628B};\n128 = \\citet{2010CBET.2199....1H};\n129 = \\citet{2011MNRAS.410L..52M};\n130 = \\citet{2011A&A...527A..98S};\n131 = \\citet{1990MNRAS.242..653M};\n132 = \\citet{2007CBET.1029....1M};\n133 = \\citet{2008Ap&SS.315...79P};\n134 = \\citet{2007IAUC.8862....2B};\n135 = \\citet{2007IAUC.8883....1L};\n136 = \\citet{2010MNRAS.407L..21R};\n137 = \\citet{2007CBET.1183....1M};\n138 = \\citet{2010NewA...15..170P};\n139 = \\citet{2007CBET.1181....1N};\n140 = \\citet{2008IAUC.8936....3R};\n141 = \\citet{1998IAUC.6847....2N};\n142 = \\citet{2001MNRAS.328.1169L};\n143 = \\citet{1998IAUC.6848....1D};\n144 = \\citet{2008MNRAS.387..289L};\n145 = \\citet{2001IAUC.7590....2K};\n146 = \\citet{2001IBVS.5138....1B};\n147 = \\citet{2001IAUC.7589....2A};\n148 = \\citet{2008AstL...34..249B};\n149 = \\citet{2002IAUC.7975....2K};\n150 = \\citet{2003MNRAS.344..521M};\n151 = \\citet{2007AAS...210.0402V};\n152 = \\citet{2003IAUC.8176....3W};\n153 = \\citet{2003IAUC.8126....2L};\n154 = \\citet{2005A&A...429..599C};\n155 = \\citet{2003IAUC.8132....2K};\n156 = \\citet{2006MNRAS.371..459D};\n157 = \\citet{2005IAUC.8607....1S};\n158 = \\citet{2006MNRAS.369.1755M};\n159 = \\citet{2005IAUC.8638....1P};\n160 = \\citet{2006A&A...452..567M};\n161 = \\citet{2005IAUC.8617....1P};\n162 = \\citet{2005IAUC.8644....1M};\n163 = \\citet{2005IAUC.8644....1M};\n164 = \\citet{2004IAUC.8306....1N};\n165 = \\citet{2006A&A...459..875E};\n166 = \\citet{2004IAUC.8368....3L};\n167 = \\citet{2005IAUC.8502....1S};\n168 = \\citet{2005IAUC.8501....2K};\n169 = \\citet{2005IAUC.8523....4R};\n170 = \\citet{2005IAUC.8559....2G};\n171 = \\citet{2008A&A...478..815D};\n172 = \\citet{2005IAUC.8559....1L};\n173 = \\citet{2008ApJ...675L..93S};\n174 = \\citet{2005IAUC.8579....3R};\n175 = \\citet{2007CBET.1010....1M};\n176 = \\citet{2008NewA...13..557P};\n177 = \\citet{2007CBET.1006....1I};\n178 = \\citet{2007IAUC.8884....2R};\n179 = \\citet{2009AAS...21442806R};\n180 = \\citet{2008CBET.1352....1M};\n181 
= \\citet{2008IAUC.8948....1R};\n182 = \\citet{2009IAUC.9061....1N};\n183 = \\citet{2004IAUC.8445....3S};\n184 = \\citet{2005IBVS.5638....1S};\n185 = \\citet{Heltonthesis};\n186 = \\citet{2007IAUC.8895....1P};\n187 = \\citet{2007IAUC.8896....2N};\n188 = \\citet{2008IAUC.8911....2N};\n189 = \\citet{2009MNRAS.397..979W};\n190 = \\citet{2008A&A...482L...1R};\n191 = \\citet{2008IAUC.8999....1W};\n192 = \\citet{1996A&A...315..166M};\n193 = \\citet{1996IAUC.6365....2I};\n194 = \\citet{2008AJ....135.1328N};\n195 = \\citet{2007AstBu..62..125G};\n196 = \\citet{2009AJ....138.1090L};\n197 = \\citet{1991IAUC.5222....1S};\n198 = \\citet{1996MNRAS.282..563V};\n199 = \\citet{1994ApJ...420..830S};\n200 = \\citet{1992ApJ...384L..41W};\n201 = \\citet{2009A&A...506.1309R};\n202 = \\citet{2009A&A...506.1309R};\n203 = Standard LMC value.\n}\n\\end{deluxetable}\n\nAlthough P-Cygni absorption profiles provide the best values for\nthe early velocities of the ejecta, they are not nearly as well reported \nin the literature as FWHMs of Balmer lines near maximum. Since nearly \nevery nova has a FWHM citation as part of the spectroscopic confirmation of \nthe initial visual detection, they are used as the expansion velocity\nproxy. Expansion velocities provide another way to classify a nova since \nmore massive WDs eject less mass and at a greater velocity than low mass WDs. \nThis characteristic can be preferable to t$_2$ since the rate of decline \ncan be difficult to determine for novae with secondary maxima, dust \nformation, or that have poorly sampled early light curves. Both FWHM \nand t$_2$ are used as simple proxies for the WD mass.\n\\footnote{While many other parameters also affect these observables, such as \nthe accretion rate, these parameters are generally not known for specific\nnovae and thus their contributions to the secular evolution can not be\ndetermined.}\nThe N$_H$ values were \nobtained from the HEASARC N$_H$ \ntool\\footnote{http:\/\/heasarc.gsfc.nasa.gov\/cgi-bin\/Tools\/w3nh\/w3nh.pl}\nusing the averaged LAB \\citep{2005A&A...440..775K} and DL \n\\citep{1990ARA&A..28..215D} maps within a 0.5$\\arcdeg$ area around each nova.\n\n\\begin{figure*}[htbp]\n\\plotone{newt2vsfwhm.ps}\n\\caption{The t$_2$ vs. FWHM near maximum for the novae in the sample. Filled\ncircles are known RNe. Half filled circles are suspected\nRNe based on their characteristics. Circles with asterisks inside\nindicate dusty novae. The distribution histograms for t$_2$ and the FWHM \nare also shown in the secondary graphs. The dotted lines in the t$_2$ \nhistogram show the boundaries between the \"very fast\", \"fast\", \"moderately \nfast\", and \"slow\" light curve classifications \\citep{Warner2008}. The \nmajority of our novae belong to the \"fast\" or \"very fast\" classifications.\n\\label{t2vsfwhm}}\n\\end{figure*}\n\nFigure \\ref{t2vsfwhm} shows the t$_2$ and FWHM distribution for all the novae \nwith both values. The 7 filled circles are known RNe while the 3\nhalf filled circles are suspected RNe. Dusty novae have an asterisk\ninside their circle symbols. As expected, the RNe tend \ntoward large FWHM and fast t$_2$ times. In this sample the dusty novae \nare scattered throughout the FWHM-t$_2$ phase space showing no particular \npreference for any type of nova. Figure \\ref{t2vsfwhm} also shows\nthat there is a wide dispersion between FWHM and t$_2$, e.g. 
novae\nwith t$_2$ of 10 days have FWHM values between 1000 and 3000 km s$^{-1}$.\n\nThe top panel of Figure \\ref{t2vsfwhm} shows the distribution of t$_2$ in \none day bins. Using the light curve classifications of \\citet{Warner2008},\nthe sample is heavily weighted toward very fast (t$_2$ $<$ 10 days) and \nfast (11 $\\leq$ t$_2$ $\\leq$ 25 days) novae. These are intrinsically more luminous,\nwith a larger rise from quiescence to maximum light. The peak is \nat 8 days, with a median t$_2$ of 9 days. There are only 5 novae \nin the entire sample with t$_2$ times greater than 50 days, \nIM Nor, LMC 2005, QY Mus, V723 Cas, and V5558 Sgr. The far\nright panel in Figure \\ref{t2vsfwhm} gives the distribution for \nFWHM in 500 km s$^{-1}$ bins. The majority of the novae in the sample\nhave low expansion velocities with the peak in the 1500-2000 km s$^{-1}$ bin. \nThe median FWHM is 1800 km s$^{-1}$. There are only 5 novae with \nFWHM $\\geq$ 4000 km s$^{-1}$ in the sample and all but V4643 Sgr \nare RNe or suspected RNe.\n\nThe X-ray sample is biased toward fast novae for multiple reasons. \nThe bulk of the observations are from {\\it Swift}, and {\\it Swift}\\ has only been \noperational for 5 years. Fast systems, like the RN RS Oph, will rise \nand fall on time-scales of months (see Section \\ref{var}) while slow \nnovae, such as V1280 Sco, have not yet had sufficient time to evolve \ninto soft X-ray sources (and may not), and are therefore underrepresented. \nSlow novae also require more observing time \nto be monitored over their lifetime, particularly if the same coverage of\nthe X-ray evolution is desired. Allocations of {\\it Swift}\\ observing time\nover multiple cycles are difficult to justify and execute unless a compelling\nscientific rationale is forthcoming, such as unusual or significant \nspectral variations (see Section \\ref{fex}), count rate oscillations, \nabundance pattern changes, etc.\nSlow and old novae (many tens of months post-outburst)\nare generally sampled once a year in part due to their slow evolution.\nHowever, the main reason the sample depicted in Fig. \\ref{t2vsfwhm}\nfavors fast novae is the strong \nselection effect toward outbursts on high mass WDs. While high mass WDs, \n{\\it e.g.} $\\geq$ 1.2 M$_{\\odot}$, are relatively rare in the field,\nthe time-scale between outbursts is significantly shorter than for low-mass \nWDs, meaning they dominate any observational sample\n\\citep{1994ApJ...425..797L}. Finally, high mass WDs give rise to more\nluminous outbursts, and the {\\it Swift}\\ Nova-CV group has a V $<$ 8 magnitude\nselection criterion, which preferentially selects brighter sources.\n\n\\begin{figure*}[htbp]\n\\plotone{dusthistogram.ps}\n\\caption{The distribution of dusty novae in the sample. The cross-hashed\nregion is for novae that showed strong dust characteristics; however, the\npresence of dust in these systems has not been spectrophotometrically \ncorroborated at IR wavelengths. The majority of novae in our X-ray\nselected sample (Table \\ref{chartable}) did not form dust.\n\\label{dusthistogram}}\n\\end{figure*}\n\nFigure \\ref{dusthistogram} shows the distribution of our sample \n(Table \\ref{chartable}) with respect to dust formation frequency. \nOnly $\\sim$ 16\\% of the novae in the sample had clear indications\nin the literature of dust formation in the ejecta. 
The dust formation \nfrequency increases to 31\\% when including the 7 novae where dust \nlikely formed based on characteristics of the visual light curve but not \nyet confirmed by a measured SED excess in the thermal- and mid-infrared, \n{\\it e.g.} the \"maybe\"s in column 10 of Table \\ref{chartable}.\nThis is consistent with the expectations of the general population\nof dusty novae which ranges from 18\\% to $\\gtrsim 40$\\%. The lower limit\nis set by \\citet{2010AJ....140...34S} who find that 93 well sampled \nAmerican Association of Variable Star Observers (AAVSO) novae have\nthe large dip in their visual light curves indicative of strong \ndust formation \\citep[see][]{1998PASP..110....3G}. The upper limit\nis from a recent \\textit{Spitzer} survey of IR bright novae that finds\nmany novae have weak dust emission signatures with little or no dip in the \nvisual light curve especially at late epochs (many 100s of days post-outburst) \nwhen emission from the dust envelope is a few $\\mu$Jy\n\\citep{WoodStar11,Heltonthesis}.\n\nIn order to obtain the best X-ray and UV data, it is desirable to target\nnovae with low extinction along the line of sight. However, determining\nthe extinction early in the outburst is challenging. N$_H$ maps are crude\nsince they sample large regions of the sky. The region size used to \nderive the N$_H$ values in Table \\ref{chartable} was 0.5$^{\\arcdeg}$.\nTypically just a handful of sight lines are available in regions of this\nrelatively small size. The problem is exacerbated in inhomogeneous areas \nlike the Galactic plane where most novae are found. The \nextinction maps of \\citet{1998ApJ...500..525S} can be used to obtain E(B-V) \nsince their spatial resolution is significantly higher. However, the\n\\citet{1998ApJ...500..525S} maps suffer from large errors in the Galactic\nplane, $|b| <$ 5$^{\\arcdeg}$. Maps also give the total Galactic line of\nsight with no distance information and thus provide only an upper \nlimit. E(B-V) can also be determined from\nindirect methods but these require either high resolution spectroscopy \nto measure ISM absorption lines \n\\citep[{\\it e.g.} \\ion{Na}{1}~D $\\lambda$5890\\AA;][]{munari97}, \nthe line strengths of optical and near-IR spectroscopy of \\ion{O}{1} lines\n\\citep{rudy89}, or extensive B and V photometry during\nthe early outburst \\citep[{\\it e.g.} intrinsic (B-V) at V$_{max}$ or\nt$_2$;][]{1987A&AS...70..125V}. Finally, \nE(B-V) estimates can be affected by other factors occurring during \nthe outburst such as dust formation or intrinsic absorption from the ejecta\nwhile the expanding material is still dense.\n\nIt is therefore desirable to check \nthat the general relationship between N$_H$ and E(B-V) holds for novae. \nFigure \\ref{ebmvnh} shows N$_H$ versus E(B-V) for the novae in this \npaper, with the dotted line showing the average Milky Way extinction law, \nE(B-V) = N$_H$\/4.8$\\times$10$^{21}$ \\citep{1978ApJ...224..132B}. \nThe circles represent novae with Galactic latitudes, \n$|b| \\geq$ 5$^{\\arcdeg}$ while pluses are novae found within the disk,\n$|b| <$ 5$^{\\arcdeg}$. Filled circles are Magellanic novae. Error bars are \nshown when available in the literature. There is good \nagreement with the relationship for novae with E(B-V) $\\leq$ 0.6 and \nN$_H$ $\\leq$ 2.9$\\times$10$^{21}$ cm$^{-2}$, with a correlation coefficient of 0.85. \nThese are primarily novae found outside of the Galactic disk and thus\nfit the relationship well. 
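As a simple numerical illustration of the extinction law quoted above, the sketch below converts an averaged line-of-sight N$_H$ into E(B-V), and into A$_V$ assuming the standard ratio of total-to-selective extinction $R_V \approx 3.1$; the function name, the example column density, and the assumed $R_V$ are ours and purely illustrative, not part of any pipeline used in this work.

\begin{verbatim}
# Minimal sketch: E(B-V) from N_H via the Milky Way relation quoted in
# the text, E(B-V) = N_H / 4.8e21, and A_V assuming a standard R_V ~ 3.1.
# Names, numbers, and the R_V assumption are illustrative only.
def reddening_from_column(n_h_cm2, ratio=4.8e21, r_v=3.1):
    """Return (E(B-V), A_V) in magnitudes for a column density in cm^-2."""
    ebmv = n_h_cm2 / ratio
    return ebmv, r_v * ebmv

# e.g. the averaged column toward V1281 Sco (3.2e21 cm^-2) gives
# E(B-V) ~ 0.67, close to the tabulated value of 0.7
ebmv, a_v = reddening_from_column(3.2e21)
print(f"E(B-V) = {ebmv:.2f} mag, A_V = {a_v:.2f} mag")
print("poor SSS candidate" if ebmv > 1.5 else "viable SSS candidate")
\end{verbatim}

The final line applies the E(B-V) $>$ 1.5 criterion for poor SSS candidates discussed below.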
Novae with these low extinction values and\ncolumn densities are ideal for soft X-ray detection. The relationship \nbreaks down at larger values with a lower correlation coefficient of 0.64 \nfor the entire sample as it is dominated by novae embedded within\nthe Galactic disk. Novae with E(B-V) values greater than 1.5 generally \nmake poor SSS candidates due to the large extinction.\n\nThe maximum magnitude vs. rate of decline relationship of\n\\citet{1995ApJ...452..704D} provides an estimate of the distances\nfor the Galactic novae in Table \\ref{chartable}. The distance estimate\nrange extends from the relatively nearby V1280 Sco ($\\sim$ 1 kpc) \nto the other side of the Galaxy for V2576 Oph ($\\sim$ 28 kpc).\nThe median Galactic distance from this relationship is 5.5 kpc\nfor this sample.\n\n\\begin{figure*}[htbp]\n\\plotone{ebmvnh.ps}\n\\caption{Local N$_H$ value versus the estimated E($B-V$). The values\nare from Table~\\ref{chartable}. The dotted line shows the E(B-V) vs. N$_H$ \nrelationship of \\citet{1978ApJ...224..132B}. Circles are $|b| \\geq$ \n5$^{\\arcdeg}$ novae, the $|b| <$ 5$^{\\arcdeg}$ novae are shown as pluses,\nand filled circles are Magellanic novae. Errors bars are given when\navailable.\n\\label{ebmvnh}}\n\\end{figure*}\n\n\\subsection{X-ray evolution\\label{xrayevol}}\n\nAll the available {\\it Swift}\\ XRT and UVOT data of novae in the public archive\nup to 2010 July 31 are presented in Table \\ref{fullswift}. The data were\nprimarily obtained from pointed observations but a few serendipitous \nobservations are also included. The full data \nset is available in the electronic edition with only V1281 Sco shown as an\nexample here. The columns provide the {\\it Swift}\\ observation identification,\nexposure time, day of the observation from visual maximum (see Table \n\\ref{chartable}), XRT total (0.3-10 keV) count rate, the Hard (1-10 keV)\nto Soft (0.3-1 keV) hardness ratio, HR1, the Soft and Hard band count rates,\nthe (Hard-Soft)\/(Hard+Soft) hardness ratio, HR2, and the uvw2 \n($\\lambda_c$ = 1928\\AA), uvm2 (2246\\AA), uvw1 (2600\\AA), $u$ (3465\\AA), \n$b$ (4392\\AA), and $v$ (5468\\AA) UVOT filter magnitudes if available. \nThe UVOT magnitudes do not include the systematic photometric calibration \nerrors from \\citet[][Table 6]{2008MNRAS.383..627P}.\n\nThere is one row in the table per observation ID, however this is not a \nfixed unit of time; most observation IDs are less than 0.13 days \nduration and the median exposure time is 1.76 ks. For this exposure time, \nour 3 sigma detection limit is 0.0037 counts s$^{-1}$ (0.3-10 keV, corrected \nfor typical PSF coverage). This corresponds to an unabsorbed flux limit in \nthe same band, assuming absorption by N$_H$ = 3$\\times$ 10$^{20}$ of \n1.5$\\times$ 10$^{-13}$ and 2.0$\\times$ 10$^{-13}$ erg cm$^{-2}$ s$^{-1}$ for\na 5 keV optically thin thermal spectrum and a 50 eV blackbody spectrum, \nrespectively.\n\nTo create a self consistent dataset for Table~\\ref{fullswift} we used the \nsoftware described by \\citet{2009MNRAS.397.1177E,2007A&A...469..379E}.\nThis extracts source and background event lists from the data (using an \nannular source region where necessary to eliminate pile up), and then bins \nthese data to form the light curve, applying corrections for pile up, bad \npixels and the finite size of the source region as necessary. \nSince novae tend to be soft, we chose the energy bands for the\nhardness ratio to be 0.3--1 keV and 1--10 keV. 
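To make these hardness ratios concrete, the short sketch below (illustrative only; it is not the reduction software cited above) computes HR1 = H/S and HR2 = (H-S)/(H+S) from background-subtracted soft (0.3--1 keV) and hard (1--10 keV) count rates, propagating Gaussian uncertainties in the same spirit as the well-detected bins described in the following paragraphs; the function name and example rates are assumptions made for illustration.

\begin{verbatim}
# Minimal, illustrative hardness-ratio calculation for one time bin:
# HR1 = H/S and HR2 = (H-S)/(H+S), with H = 1-10 keV and S = 0.3-1 keV
# background-subtracted count rates, and simple Gaussian error propagation.
import math

def hardness_ratios(soft, soft_err, hard, hard_err):
    """Return (HR1, dHR1, HR2, dHR2) for one observation."""
    hr1 = hard / soft
    dhr1 = hr1 * math.hypot(hard_err / hard, soft_err / soft)
    hr2 = (hard - soft) / (hard + soft)
    # d(HR2) = 2 * sqrt(H^2 dS^2 + S^2 dH^2) / (H + S)^2
    dhr2 = 2.0 * math.sqrt((hard * soft_err) ** 2
                           + (soft * hard_err) ** 2) / (hard + soft) ** 2
    return hr1, dhr1, hr2, dhr2

# e.g. a soft-dominated bin with S = 0.24 +/- 0.01 and
# H = 0.0015 +/- 0.0005 ct/s gives HR2 ~ -0.99, typical of the SSS phase
print(hardness_ratios(0.24, 0.01, 0.0015, 0.0005))
\end{verbatim}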
There is also evidence\nthat, for very soft sources, pile up occurs at lower count rates than for\nhard sources; we thus set the threshold at which pile up is considered a\nrisk to be 0.3 (80) count s$^{-1}$ in PC (WT) mode \\citep[the defaults from]\n[are 0.6 and 150 respectively]{2007A&A...469..379E}.\n\nWe chose to group the data in one bin per {\\it Swift}\\ observation. In the\ncurrent version of the online software (for this binning mode), background\nsubtraction is only carried out using Gaussian statistics, and does not\nproduce upper limits if this results in a non-detection. We thus took\nthe `detailed' light curves produced by the web tools, which include the\nnumber of measured counts in each bin, the exposure time, and the\ncorrection factor (accounting for pile up etc.). Following the approach\nof \\citet{2007A&A...469..379E} for other binning methods, where any bins had\nfewer than 15 detected source counts, we used the Bayesian method of\n\\citet{1991ApJ...374..344K} to determine whether the source was\ndetected at the 3-sigma level. If this was not the case, a 3-$\\sigma$\nupper limit was produced using this Bayesian method, otherwise a data\npoint with standard 1-$\\sigma$ uncertainty was produced using the \n\\citet{1991ApJ...374..344K} approach.\n\nThe hardness ratios were always calculated using Gaussian statistics, \nunless one band had zero detected source photons: in this case no ratio\ncould be produced. The hardness ratios were defined as HR1 = H\/S and \nHR2 = (H-S)\/(H+S) where H = 1.0-10 keV and S = 0.3-1.0 keV.\n\n\\begin{deluxetable}{cccccccccccccc}\n\\tablecaption{{\\it Swift}\\ XRT\/UVOT data for novae in the archive\\label{fullswift}}\n\\tablewidth{0pt}\n\\tabletypesize{\\scriptsize}\n\\rotate\n\\tablecolumns{14}\n\\tablehead{\n\\colhead{ObsID} & \\colhead{Exp} & \\colhead{Day\\tablenotemark{a}} & \n\\colhead{CR\\tablenotemark{b}} & \n\\colhead{HR1\\tablenotemark{c}} & \\colhead{Soft} &\n\\colhead{Hard} & \\colhead{HR2\\tablenotemark{c}} &\n\\colhead{uvw2} & \\colhead{uvm2} & \\colhead{uvw1} & \\colhead{u} & \n\\colhead{b} & \\colhead{v} \\\\ \n\\colhead{} & \\colhead{(ksec)} & \\colhead{(d)} & \\colhead{(ct s$^{-1}$)} &\n\\colhead{} & \\colhead{(ct s$^{-1}$)} & \\colhead{(ct s$^{-1}$)} &\n\\colhead{} & \\colhead{(mag)} & \\colhead{(mag)} & \\colhead{(mag)} & \n\\colhead{(mag)} & \\colhead{(mag)} & \\colhead{(mag)}\n}\n\\startdata\n\\cutinhead{V1281\\,Sco}\n00030891001 & 3.87& 2.95&$<$ 0.0030 & \\nodata & \\nodata&\\nodata&\\nodata&\\nodata&\\nodata &\\nodata&\\nodata&\\nodata&\\nodata \\\\\n00037164001 & 5.24& 338.66& 0.1634$^{+0.0079}_{-0.0079}$ & 0.0090$\\pm$0.0074 & 0.1619& 0.0015 &-0.98 &\\nodata&\\nodata &\\nodata&\\nodata&\\nodata&\\nodata \\\\\n00037164002 & 3.45& 344.07& 0.2429$^{+0.0120}_{-0.0120}$ & 0.0062$\\pm$0.0057 & 0.2414& 0.0015 &-0.99 &19.50 &19.64 &18.20 &\\nodata&\\nodata&\\nodata \\\\\n00037164003 & 4.24& 351.05& 0.6376$^{+0.0282}_{-0.0282}$ & 0.0047$\\pm$0.0053 & 0.6346& 0.0030 &-0.99 &20.32 &\\nodata &\\nodata&\\nodata&\\nodata&\\nodata \\\\\n00037164005 & 1.66& 361.10& 0.2727$^{+0.0185}_{-0.0185}$ & 0.0012$\\pm$0.0081 & 0.2723& 0.0003 &-1.00 &20.43 &\\nodata &\\nodata&\\nodata&\\nodata&\\nodata \\\\\n00037164006 & 2.89& 366.41& 0.2284$^{+0.0129}_{-0.0129}$ & 0.0097$\\pm$0.0089 & 0.2262& 0.0022 &-0.98 &20.42 &\\nodata &\\nodata&\\nodata&\\nodata&\\nodata \\\\\n00037164007 & 2.02& 432.69& 0.0853$^{+0.0079}_{-0.0079}$ & 0.0002$\\pm$0.0063 & 0.0853& 0.0000 &-1.00 &20.22 &\\nodata &19.11 &\\nodata&\\nodata&\\nodata \\\\\n00090248001 & 
4.68& 819.99&$<$ 0.0013 & \\nodata & \\nodata&\\nodata&\\nodata&20.32 &$>$20.56&20.07 &\\nodata&\\nodata&19.30 \\\\\n\\enddata\n\\tablenotetext{a}{Days after visual maximum, see Table \\ref{chartable}.}\n\\tablenotetext{b}{corrected for PSF losses and bad columns.\nThe 3 sigma upper limits are given when there is no detection \nbetter than 3 sigma.}\n\\tablenotetext{c}{Hardness ratios defined as HR1=H\/S and HR2=(H-S)\/(H+S) with \nHard(H)=1-10\\,keV and Soft(S)=0.3-1\\,keV}\n\\tablecomments{Table \\ref{fullswift} is published in its entirety in the \nelectronic edition of the {\\it Astrophysical Journal}. A portion is shown \nhere for guidance regarding its form and content.}\n\\end{deluxetable}\n\n\\begin{figure*}[htbp]\n\\includegraphics[angle=90,scale=0.60]{sssgood.ps}\n\\caption{The X-ray epochs for {\\it Swift}\\ sources with the best SSS\nphase coverage.\nThe novae are arranged by increasing optical emission line FWHM with \nthe FWHM values shown either left or right of the source. \"U\" is used \nfor novae with unknown FWHM velocities. Refer to Table \\ref{colordescriptions}\nfor a summary of the color coding. \\label{sssgood}}\n\\end{figure*}\n\n\\begin{figure*}[htbp]\n\\includegraphics[angle=90,scale=0.60]{sssbad.ps}\n\\caption{Same as Figure \\ref{sssgood} except for the {\\it Swift}\\ sources without\nsignificant SSS detections. \\label{sssbad}}\n\\end{figure*}\n\nFigures \\ref{sssgood} and \\ref{sssbad} show the X-ray observations of the \n{\\it Swift}\\ novae as a function of time since visual maximum. The novae are \norganized from bottom to top in increasing FWHM values (Table \n\\ref{chartable}), with the FWHM alternating on the left and right sides\nof the figures. Novae with unknown FWHM are labeled \"U\" and placed\nat the bottom. Figure \\ref{sssgood} shows the novae with confirmed SSS \nemission while Figure \\ref{sssbad} shows the\nnovae with no current SSS detections. Note that some novae in \nFigure \\ref{sssbad}, particularly the slowly evolving ones V5558 Sgr \nand V2468 Cyg,\nmay eventually evolve to the SSS phase. Figure \\ref{sssother} is the same \nbut for well observed SSS novae observed prior to the launch of {\\it Swift};\nthese novae typically have much poorer observational coverage.\nThe black stars are the individual {\\it Swift}\\ observations. \nThe figures also contain supplemental observations obtained with other\nX-ray facilities, {\\it Chandra}, {\\it XMM}, {\\it ASCA}, {\\it RXTE}, \n{\\it BeppoSax}, and {\\it ROSAT}\\ which are shown as circles, downward pointing \ntriangles, upward pointing triangles, yellow squares, diamonds, and red \nsquares, respectively. The colors associated with each bar give the \ntype of emission observed based on the hardness ratio. Red bars indicate\ntime intervals when the HR2 of an individual source was $\\lesssim -0.3$ \nand the uncertainty in the relative error was $<$ 5\\%; the photons in this\ncase are primarily soft and these regions are associated with the SSS phase.\nOrange bars designate observations with the same hardness ratio but \nlarger errors. Yellow\nshows regions between observations where hard\/soft change occurred.\nThe orange and yellow regions represent the maximum limits of the soft \nphase since the transition occurred at some point during these times. \nSection \\ref{sssphase} describes the SSS phase in greater detail. \nGreen regions show times when the overall detected spectrum was \nhard, HR2 $>$ -0.3 and section \\ref{xrayevol} discusses this phase. 
\nFinally, blue represents \ntimes of non-detections. Table \\ref{colordescriptions} also gives the \ncolor descriptions for Figures \\ref{sssgood} - \\ref{sssother}.\n\n\\begin{deluxetable}{lll}\n\\tablecaption{Detection definitions, color descriptions, and symbol legend for\nFigures \\ref{sssgood} - \\ref{sssother}\\label{colordescriptions}}\n\\tablewidth{0pt}\n\\tabletypesize{\\scriptsize}\n\\tablehead{\n\\colhead{Color} & \\colhead{HR2\\tablenotemark{a} and error} & \n\\colhead{X-ray emission} \n}\n\\startdata\nBlue & \\nodata & Undetected \\\\\nGreen & $>$-0.3 & Hard \\\\\nYellow & \\nodata & Transition between Green and Orange\/Red classification. \\\\\nOrange & $\\lesssim$-0.3 and $>$ 5\\% error & Soft but with large uncertainty, highly variable during initial rise. \\\\\nRed & $\\lesssim$-0.3 and $<$ 5\\% error & Soft X-rays \\\\\n\\hline\n\\multicolumn{3}{c}{Symbol legend} \\\\\n\\hline\n{\\it Swift} & stars & \\\\\n{\\it Chandra} & circles & \\\\\n{\\it XMM} & downward pointing triangles & \\\\\n{\\it ASCA} & upward pointing triangles & \\\\\n{\\it RXTE} & yellow squares & \\\\\n{\\it BeppoSax} & diamonds & \\\\\n{\\it ROSAT} & red squares & \\\\\n\\enddata\n\\tablenotetext{a}{HR2=(H-S)\/(H+S) with\nHard(H)=1-10\\,keV and Soft(S)=0.3-1\\,keV}\n\\end{deluxetable}\n\nSeveral trends are evident in Figures \\ref{sssgood} - \\ref{sssother}.\nAs the FWHM decreases, the novae in the sample become SSS later and remain \nin the SSS phase longer. This behavior is consistent with \nlarger expansion velocity novae originating on higher mass WDs\n\\citep{2009cfdd.confE.199S}. In addition, the early, hard detections \nare generally only observed in the high FWHM novae. The trends evident in \nFigures \\ref{sssgood} and \\ref{sssother} allow for a straightforward \ninterpretation of Figure \\ref{sssbad} - fast novae (loci at the top \nof the panel) are infrequently observed in the SSS phase because \nearly X-ray observations of these systems are often absent. The slower \nnovae at the bottom of the figure have not been followed with \nsufficient temporal coverage late in their evolution, have not yet \nreached the SSS phase, or have ceased nuclear burning before\ntheir ejecta cleared sufficiently for SSS emission to be observed.\n\nA note of caution about using Figures \\ref{sssgood} and \\ref{sssbad} to\ndetermine nuclear burning time scales is appropriate. These figures \nare based only on the strength and error of HR2 as provided in \nTable \\ref{fullswift}, which uses a fixed hardness threshold for\nall novae in the table.\nNovae that have significant intrinsic hard emission such as V407 Cyg\nmay not be classified as SSSs by this definition even though they \nhave soft X-ray light curves typical of nuclear burning and cessation\non the WD (see Section \\ref{v407cygSSS}). \nHigh extinction will have a similar effect. 
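For reference, the color coding of Table \ref{colordescriptions}, and hence the fixed threshold behind the caveat above, can be summarized by the schematic rule below; the $-0.3$ threshold and the 5\% cut follow the table, the ``relative error'' is interpreted here as $|\sigma_{\rm HR2}/{\rm HR2}|$, and the function itself is purely illustrative (the yellow transition regions, which span gaps between observations, are omitted).

\begin{verbatim}
# Schematic version of the HR2-based color coding (illustrative only).
def classify_epoch(hr2, hr2_err, detected=True):
    """Map a single observation onto the blue/green/orange/red coding."""
    if not detected:
        return "blue"                 # no X-ray detection
    if hr2 > -0.3:
        return "green"                # hard spectrum
    rel_err = abs(hr2_err / hr2)      # hr2 <= -0.3 here, so nonzero
    return "red" if rel_err < 0.05 else "orange"

print(classify_epoch(-0.99, 0.01))    # -> "red"    (well-measured SSS)
print(classify_epoch(-0.35, 0.10))    # -> "orange" (soft, but uncertain)
\end{verbatim}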
The red regions\ngenerally also overestimate the duration of the SSS since that phase is\nalso defined by a tremendous rise in the soft X-ray count rate.\nSections \\ref{S:ton} and \\ref{S:toff} provide the determination of nuclear \nburning time scales for the X-ray nova sample.\n\n\\begin{figure*}[htbp]\n\\includegraphics[angle=90,scale=0.60]{non_swift.ps}\n\\caption{Same as Figure \\ref{sssgood} but for pre-{\\it Swift}\\ \nSSS novae.\\label{sssother}}\n\\end{figure*}\n\nFor completeness, Table \\ref{othertable} gives a summary of all the \npublicly available, pointed {\\it XMM}\\ and {\\it Chandra}\\ nova observations.\nThe columns are the nova name, the observational \nidentifier, the exposure time, Julian date and day after visual \nmaximum of the observation, and a short comment on the result of \nthe observation. The instrument set up is also given in the 2nd \ncolumn for the {\\it Chandra}\\ observations. In some cases this data set \nprovides important information on the SSS status of some sources\ndue to a lack of or weak {\\it Swift}\\ detections. \nAn example would be the {\\it XMM}\\ observations\nof V574 Pup which confirms that there was a strong SSS during the \ninterval between 2005 and 2007 when there were no {\\it Swift}\\ data \n\\citep{Heltonthesis}.\n\n\\begin{deluxetable}{llrcrl}\n\\tablecaption{Pointed Chandra and XMM observations of recent novae \n\\label{othertable}}\n\\tablewidth{0pt}\n\\tablehead{\n\\colhead{Name} & \\colhead{Obs ID\\tablenotemark{a}} & \n\\colhead{Exp (ks)} & \\colhead{JD} & \\colhead{Days\\tablenotemark{b}} & \n\\colhead{Result\\tablenotemark{c}}\n}\n\\startdata\nCI Aql & 2465 (ACIS-S) & 2.2 & 2452062 & 396 & Faint (1) \\\\\n & 2492 (ACIS-S) & 19.9 & 2452123 & 457 & Faint (1) \\\\\n & 0652760201 & 26.9 & 2455577 & 3912 & NPA \\\\\nCSS081007\\tablenotemark{d} & 9970 (HRC-S\/LETG) & 35.2 & 2454818 & 222:& SSS (2) \\\\ \nIM Nor & 3434 (ACIS-S) & 5.6 & 2452317 & 28 & Not detected (3) \\\\ \n & 2672 (ACIS-S) & 4.9 & 2452425 & 136 & Faint and hard (3) \\\\\nKT Eri & 12097 (HRC-S\/LETG)& 15.2 & 2455219 & 69 & SSS (4) \\\\\n & 12100 (HRC-S\/LETG)& 5.1 & 2455227 & 77 & SSS \\\\\n & 12101 (HRC-S\/LETG)& 5.1 & 2455233 & 83 & SSS \\\\\n\t & 12203 (HRC-S\/LETG)& 5.1 & 2455307 & 157 & SSS \\\\\nNova LMC 2000 & 0127720201 & 16.3 & 2451751 & 14 & Faint and hard (5) \\\\\n & 0127720301 & 10.0 & 2451785 & 48 & Hard (5) \\\\\n\t & 0127720401 & 10.5 & 2451998 & 291 & Not detected (5) \\\\\nNova LMC 2009a& 0610000301 & 37.7 & 2454957 & 90 & SSS \\\\\n & 0610000501 & 58.1 & 2455032 & 165 & SSS \\\\\n & 0604590301 & 31.9 & 2455063 & 196 & SSS \\\\ \n & 0604590401 & 51.2 & 2455097 & 230 & SSS \\\\ \nNova SMC 2005 & 0311590601 & 11.6 & 2453807 & 219 & Not detected \\\\\nU Sco & 12102 (HRC-S\/LETG)& 23.2 & 2455241 & 17 & SSS (6) \\\\\n & 0650300201 & 63.8 & 2455247 & 23 & SSS (7) \\\\\n & 0561580301 & 62.8 & 2455259 & 35 & SSS (7) \\\\\nV1065 Cen & 0555690301 & 9.4 & 2454837 & 714 & Not detected \\\\\nV1187 Sco & 4532 (ACIS-S) & 5.2 & 2453305 & 96 & $<$ 2keV + NVII line \\\\\n & 4533 (HRC-S\/LETG) & 26.1 & 2453401 & 181 & Not detected \\\\\n & 0404431101 & 4.7 & 2454161 & 941 & Not detected \\\\\n & 0404430301 & 9.4 & 2454161 & 941 & Not detected \\\\\n & 0555691001 & 7.1 & 2454904 & 1684 & Not detected \\\\ \nV1280 Sco & 0555690601 & 4.5 & 2454903 & 756 & 1-keV emission \\\\\nV2361 Cyg & 0405600101 & 11.1 & 2453868 & 456 & Not detected \\\\\n & 0405600401 & 14.9 & 2454028 & 616 & Not detected \\\\\nV2362 Cyg & 0506050101 & 9.9 & 2454225 & 394 & thermal plasma (8) 
\\\\\n & 0550190501 & 27.9 & 2454821 & 990 & Very weak \\\\\nV2467 Cyg & 0555690501 & 7.0 & 2454780 & 605 & SSS \\\\\nV2487 Oph & 0085580401 & 8.3 & 2451965 & 986 & thermal plasma (9)\\\\\n & 0085581401 & 8.1 & 2452157 & 1178 & thermal plasma (9)\\\\\n & 0085581701 & 7.6 & 2452331 & 1352 & thermal plasma (9)\\\\\n & 0085582001 & 8.5 & 2452541 & 1562 & thermal plasma + Fe K$\\alpha$ line (9)\\\\\nV2491 Cyg & 0552270501 & 39.3 & 2454606 & 39 & SSS (10) \\\\\n & 0552270601 & 30.0 & 2454616 & 49 & SSS (11) \\\\\nV2575 Oph & 0506050201 & 14.9 & 2454347 & 569 & Not detected \\\\ \nV2576 Oph & 0506050301 & 11.5 & 2454376 & 544 & Not detected \\\\\nV2615 Oph & 0555690401 & 9.7 & 2454922 & 735 & Not detected \\\\\nV351 Pup & 0304010101 & 51.8 & 2453525 & 4908 & Faint \\\\\nV458 Vul & 0555691401 & 11.7 & 2454780 & 459 & weak 1-keV emission \\\\\nV4633 Sgr & 0085580301 & 10.2 & 2451828 & 933 & weak (12) \\\\\n & 0085581201 & 7.3 & 2451977 & 1082 & weak (12)\\\\\n\t & 0085581301 & 11.6 & 2452159 & 1264 & weak (12)\\\\\nV4643 Sgr & 0148090101 & 11.9 & 2452716 & 750 & Not detected \\\\\n & 0148090501 & 11.0 & 2452894 & 928 & Not detected \\\\\nV5114 Sgr & 0404430401 & 7.9 & 2454167 & 1086 & Not detected \\\\\n & 0404431201 & 3.6 & 2454167 & 1086 & Not detected \\\\\nV5115 Sgr & 0405600301 & 9.2 & 2454005 & 566 & weak SSS \\\\\n & 0550190201 & 14.9 & 2454925 & 1486 & weak detection \\\\ \nV5116 Sgr & 0405600201 & 12.9 & 2454164 & 608 & SSS (13) \\\\\n & 7462 (HRC-S\/LETG) & 35.2 & 2454336 & 780 & SSS (14) \\\\\n & 0550190101 & 26.6 & 2454893 & 1337 & Not detected \\\\ \nV574 Pup & 0404430201 & 16.6 & 2454203 & 872 & SSS \\\\\nV598 Pup & 0510010901 & 5.5 & 2454402 & 146 & SSS (15) \\\\\nXMMSL1 J060636\\tablenotemark{d} & 0510010501 & 8.9 & 2454270 & 627 & SSS (16) \\\\\n\\enddata\n\\tablenotetext{a}{{\\it Chandra}\\ observations have a four digit IDs and are followed\nby the instrument configuration. {\\it XMM}\\ observations have 10 digit IDs.}\n\\tablenotetext{b}{Days since visual maximum, see Table \\ref{chartable}.}\n\\tablenotetext{c}{The number in parenthesis is the code to the published data.\nNPA stands for \"Not Publicly Available\" and indicates proprietary observations\nat the time of this publication.}\n\\tablenotetext{d}{Full novae names are CSS081007030559+054715 and\nXMMSL1 J060636.2-694933.} \\\\\n\\tablecomments{\n(1) \\citet{2002A&A...387..944G};\n(2) \\citet{2009ATel.1910....1N};\n(3) \\citet{2005ApJ...620..938O};\n(4) \\citet{2010ATel.2418....1N};\n(5) \\citet{2003A&A...405..703G};\n(6) \\citet{2010ATel.2451....1O};\n(7) \\citet{2010ATel.2469....1N};\n(8) \\citet{2007ATel.1226....1H};\n(9) \\citet{2007ASPC..372..519F}; \n(10) \\citet{2008ATel.1561....1N};\n(11) \\citet{2008ATel.1573....1N};\n(12) \\citet{2007ApJ...664..467H};\n(13) \\citet{2008ApJ...675L..93S};\n(14) \\citet{2007ATel.1202....1N};\n(15) \\citet{2008A&A...482L...1R};\n(16) \\citet{2009A&A...506.1309R}.\n}\n\\end{deluxetable}\n\n\\section{THE EARLY HARD X-RAY PHASE}\n\nSome novae have hard X-ray emission, e.g. $>$ 1 keV, early \nin the outburst. These novae tend to be fast or recurrent novae. This \ninitial hard emission is thought to arise from shock heated gas \ninside the ejecta or from collisions with external material, {\\it e.g} the \nwind of the red giant secondary in RS Oph \\citep{2006ApJ...652..629B,\n2006Natur.442..276S,2007ApJ...665..654V, 2009ApJ...691..418D}. 
Early hard X-ray emission observed in the very fast nova V838 Her has been
attributed to intra-ejecta shocks from a secularly increasing ejection velocity
\citep{1992Natur.356..222L,1994MNRAS.271..155O}. Much later in the outburst,
when nuclear burning has ceased, hard X-rays can again dominate. These hard
X-rays come from line emission from the ejected shell and\/or emission
from the accretion disk \citep{2002AIPC..637..345K}, or in the case of
RS Oph, the re-emergence of the declining shocked wind emission once the
SSS emission has faded \citep{2008ASPC..401..269B}.

Every nova with a FWHM $\ge$ 3000 km s$^{-1}$ and observations within 100
days after visual maximum in the {\it Swift}\ sample exhibited hard X-rays.
This detection rate is partially due to the fact that many of these novae
were high interest targets, {\it e.g.} very bright at visual maximum (KT Eri),
extreme ejection velocity (V2672 Oph), RN (RS Oph, V407 Cyg and U Sco),
detected prior to outburst as an X-ray source (V2491 Cyg), etc.; thus their
early X-ray evolution was well documented. In addition, a higher cadence of
observations during the early phases greatly increased the probability of
discovery.

The evidence of initial hard X-ray emission for slow novae is sparse as few
were well sampled early in their outbursts. Only V458 Vul
\citep{2009AJ....137.4160N,2009PASJ...61S..69T} had early observations, which
showed a hard component lasting hundreds of days from its first observation
$\sim$ 70 days after visual maximum. The lack of significant evidence of hard
emission in the early outburst of slow novae is consistent with shocks, either
within the ejecta or with a pre-outburst ambient medium, being the primary
source of early hard X-ray emission in the faster novae. Slower novae have
lower ejection speeds and thus should have either weaker or delayed shock
emission \citep[see equ. 3 in][]{2006ApJ...652..629B}. Hard X-rays were also
detected late in the outburst of novae with extreme and multiple ejection
events. The best example of this is V2362 Cyg, which at the time of its
unusually bright secondary maximum had already doubled the width of its
emission lines and was detected as a hard X-ray source
\citep{2008AJ....136.1815L}. Similarly, the slow nova V5558 Sgr was also a
late hard source. Its early light curve was marked by numerous secondary
maxima similar to V723 Cas \citep{2008NewA...13..557P}.

Another interesting case is the slow nova V1280 Sco, which was detected
multiple times between days 834 and 939 after outburst as an X-ray source.
\citet{2009ATel.2063....1N} found that the X-ray count rate was relatively
low and that the SED was best fit with multiple thermal plasma models
consistent with line emission. They attributed the lines to shock heating of
the ejecta, but this is difficult to reconcile with how rapidly shock emission
declines. \citet{2010ApJ...724..480H} showed that V1280 Sco had two bright
secondary peaks after maximum. Thus, it is possible that this nova experienced
additional ejection events later in the outburst that contributed the
necessary energy to power shocks. Contemporary optical spectra from our Small
and Moderate Aperture Research Telescope System (SMARTS) nova monitoring
program show that the photosphere of V1280 Sco remains optically thick, with
P-Cygni profiles still present more than 4 years after outburst.
Alternatively, the line emission may be from circumstellar gas photoionized by
the initial X-ray pulse of the explosion. Given the relative proximity of
V1280 Sco, ranging from 0.63$\pm$0.10 kpc \citep{2010ApJ...724..480H} to
1.6 kpc \citep{2008A&A...487..223C}, any X-ray emission lines would be much
brighter than those of most novae in our sample, which has a larger median
distance of 5.5 kpc. Unfortunately, V1280 Sco was X-ray faint, making it
impossible to determine the source of its X-ray emission.


\section{THE SSS PHASE\label{sssphase}}

\subsection{Rise to X-ray Maximum and the ``Turn-on'' Time\label{S:ton}}

The unprecedented temporal coverage of the early outburst in X-rays with
{\it Swift}\ has fully revealed a new phenomenon during the rise to X-ray maximum.
Prior to {\it Swift}, V1974 Cyg had the best sampled X-ray light curve
\citep[see Fig. 1 in ][]{1996ApJ...456..788K}. The 18 {\it ROSAT}\ observations
showed a slow and monotonic rise to maximum. This light curve evolution was
expected as the obscuration from the ejecta clears and the effective
temperature of the WD photosphere increases \citep{1985ApJ...294..263M}.
However, {\it Chandra}\ observations of V1494 Aql \citep{2003ApJ...584..448D} and
V4743 Sgr \citep{2003ApJ...594L.127N} hinted that this transition was not as
smooth as previously observed, with short term ``bursts'', periodic
oscillations, and sudden declines.

With daily and sometimes hourly {\it Swift}\ coverage, the rise to X-ray maximum
is unequivocally highly chaotic, with large changes in the count rate evident
in all well observed {\it Swift}\ novae to date. Figure \ref{kteriearlylc}
illustrates this phenomenon in KT Eri from the data available in
Table \ref{fullswift}. During the initial rise to X-ray maximum, it exhibited
large oscillations. The numerous large declines are even more dramatic when
the observational data sets are not grouped by observation ID number as in
Table \ref{fullswift} but broken into small increments (Walter et al. in
prep). At 76 days after visual maximum the variability became much smaller
and the count rate stabilized around $\sim$150 ct s$^{-1}$. In addition to
KT Eri \citep{2010ATel.2392....1B}, RS Oph \citep{2011ApJ...727..124O},
U Sco \citep{2010ATel.2430....1S}, nova LMC 2009a
\citep{2009ATel.2025....1B,Bode2011}, V2672 Oph \citep{2009ATel.2173....1S},
V2491 Cyg \citep{2010MNRAS.401..121P,2011arXiv1103.4543N}, and V458 Vul
\citep{2009AJ....137.4160N} all showed this large amplitude variability. The
first three novae are known RNe while the next two and KT Eri are suspected
to be RNe based on their observational characteristics. The fact that the
less energetic V458 Vul also exhibited this phenomenon indicates that it is
not just associated with very fast or recurrent novae. See Section \ref{var}
for further discussion of nova variability.

\begin{figure*}[htbp]
\plotone{KTEriearlylc.ps}
\caption{The early X-ray light curve of KT Eri in days since visual maximum.
The top panel shows the count rate and the lower panel gives the hardness
ratio, HR1. Dotted lines are added to the top panel to emphasize the
variability. Prior to day 65 KT Eri was faint and hard.
Between days 65 and 75 the source transitioned to the bright SSS phase
with large amplitude oscillations in the count rate and some corresponding
changes in HR1.
After day 76 both the count rate and the hardness ratio stabilized
significantly but still showed variability (see Section \ref{var}).
\label{kteriearlylc}}
\end{figure*}

\begin{figure*}[htbp]
\plotone{turnonvsvel.ps}
\caption{SSS turn-on time of novae (Table \ref{timescales})
as a function of the ejection velocity (estimated from the FWHMs
in Table \ref{chartable}).
Filled circles are known RNe. Half filled circles
are suspected RNe based on their characteristics.
From the top to the bottom the lines
show the relationship from \citet[Eqn. 9.2]{Shore08} for ejected masses
of 1$\times$10$^{-3}$, 1$\times$10$^{-4}$, 1$\times$10$^{-5}$,
1$\times$10$^{-6}$, and 1$\times$10$^{-7}$ M$_{\odot}$, respectively.
The downward and upward arrows are estimated upper and lower limits.
\label{velturnon}}
\end{figure*}

The emergence of the SSS, referred to as the ``turn-on'' time or t$_{on}$
hereafter, provides information on the mass of the ejected shell.
The turn-on times for the novae in this sample are given in
Table \ref{timescales}. The t$_{on}$ time is defined as the time
after visual maximum when HR2 $<$ $-$0.8 and there is a significant
increase in the soft count rate. Similarly, the ``turn-off''
time (t$_{off}$) is defined as the time after t$_{on}$ when the
hardness ratio becomes harder, HR2 $>$ $-$0.8, and the soft count rate
declines rapidly as nuclear burning ends. Note that these
definitions should not be confused with the SSS phases as shown in
Figures \ref{sssgood} - \ref{sssother}, as t$_{on}$ and t$_{off}$ also
include the change in the soft count rate. SSS emission can only be
observed when the ejecta column density declines
to the point where the source can be observed. With the expansion
velocity and turn-on time, upper limits on the ejected mass can be
established. \citet{Shore08} gives the relationship (see Equation 9.2),
\begin{equation}
M_{eject} \sim 6 \times 10^{-7} \phi N_{H}(22) v_{exp}(1000)^2 t_{on}^2 M_{\odot}
\end{equation}
where $\phi$ is the filling factor, N$_H$(22) is the column density
in units of 10$^{22}$ cm$^{-2}$, v$_{exp}$(1000) is the expansion velocity in
units of 1000\ km\ s$^{-1}$, t$_{on}$ is the soft X-ray turn-on time
in days, and spherical geometry is assumed. In this study, $\phi$ = 0.1 and a
column density of 10$^{22}$ cm$^{-2}$ is used as the minimum N$_H$ for
the ejected shell to become transparent to soft X-rays.
The expansion velocity is determined from v$_{exp}$ = FWHM\/2.355
\citep{2010PASP..122..898M} where FWHM is the width of the Balmer
lines near visual maximum as given in Table \ref{chartable}.
Using the t$_{on}$ times from Table \ref{timescales},
Figure \ref{velturnon} shows the estimated ejected masses as a function
of ejection velocity.
\btxt{Note that the velocities derived from these FWHMs are lower limits
as the X-ray opacity in the ejecta depends on faster material. This has
the effect of shifting all the points in Figure \ref{velturnon} to the
right.}
Accordingly, the fastest novae at the bottom right, U Sco and V2672 Oph,
must have ejected much less than 10$^{-5}$ M$_{\odot}$, otherwise they would
not have been observed as SSS sources so early after outburst. This inference
is consistent with independent ejected mass estimates
\citep[{\it e.g.} U Sco,][]{2000AJ....119.1359A,2010AJ....140.1860D,
2010ApJ...720L.195D}.
Conversely, novae in the upper left corner must eject a significant
amount of material.
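For reference, Equ. 1 can be evaluated directly once the FWHM and t$_{on}$
are known. The short Python sketch below simply transcribes the relation with
the $\phi$ = 0.1 and N$_H$ = 10$^{22}$ cm$^{-2}$ values adopted here; the
numbers in the example call are illustrative placeholders, not measurements
of any nova in the sample.
\begin{verbatim}
def ejected_mass(fwhm_kms, t_on_days, phi=0.1, nh22=1.0):
    """Upper limit on the ejected mass (solar masses) from Equ. 1.

    fwhm_kms  -- FWHM of the Balmer lines near visual maximum [km/s]
    t_on_days -- soft X-ray turn-on time after visual maximum [days]
    phi       -- filling factor (0.1 adopted in this study)
    nh22      -- column density at turn-on in units of 1e22 cm^-2
    """
    v_exp_1000 = fwhm_kms / 2.355 / 1000.0   # expansion velocity [1000 km/s]
    return 6.0e-7 * phi * nh22 * v_exp_1000**2 * t_on_days**2

# Hypothetical example: FWHM = 2000 km/s and t_on = 100 days give
# roughly 4e-4 solar masses.
print(ejected_mass(2000.0, 100.0))
\end{verbatim}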
Large mass ejection events are also inferred from \nthe optical spectra of novae like V1280 Sco which still showed P-Cygni \nlines 3 years after outburst \\citep{2010PASJ...62L...5S} and a year \nlater in our recent SMARTS spectroscopy. \n\nNote that external extinction from the ISM is not taken into account \nin Figure \\ref{velturnon} nor is the evolution of the \neffective temperature of the WD photosphere. Novae with large extinction\nmay never be observed in the SSS phase while a slow increase in the \nWD temperature after the ejecta has sufficiently cleared will delay\nthe onset of t$_{on}$ resulting in an overestimate of the ejected mass\nderived from Equ. 1. Both factors along with deviations\nfrom the underlying assumptions such as different filling factors\nand non-spherical symmetry, can lead to different mass values given\nin Figure \\ref{velturnon}. These limitations explain why two novae\nwith the same ejection velocities, V2468 Cyg and V5558 Sgr at \n425 km s$^{-1}$, can have divergent mass estimates due to different\nturn-on times.\n\n\\begin{deluxetable}{lcc}\n\\tablecaption{SSS X-ray time scales\\label{timescales}}\n\\tablewidth{0pt}\n\\tablehead{\n\\colhead{Name} & \\colhead{turn-on} & \\colhead{turn-off} \\\\\n\\colhead{} & \\colhead{(d)} & \\colhead{(d)}\n}\n\\startdata\nCI Aql & \\nodata & $<$396 \\\\\nCSS 081007 & 185$\\pm$68 & 314$\\pm$68 \\\\\nGQ Mus & \\nodata & 3484.5$\\pm$159.5 \\\\\nIM Nor & $>$28 & $<$136 \\\\\nKT Eri & 71$\\pm$1 & 280$\\pm$10 \\\\\nLMC 1995 & $<$1087 & 2545$\\pm$426 \\\\\nLMC 2000 & $>$48 & $<$291 \\\\\nLMC 2009a & 95$\\pm$5 & 270$\\pm$10 \\\\\nRS Oph & 35$\\pm$5 & 70$\\pm$2 \\\\\nU Sco & 23$\\pm$1 & 34$\\pm$1 \\\\\nV1047 Cen & $>$144 & $<$972 \\\\\nV1065 Cen & \\nodata & $<$744\\tablenotemark{a} \\\\\nV1187 Sco & \\nodata & $<$181\\tablenotemark{a} \\\\\nV1213 Cen & $<$322 & $>$494 \\\\\nV1280 Sco & $>$928 & \\nodata \\\\\nV1281 Sco & $<$339 & 627$\\pm$194 \\\\\nV1494 Aql & 217.5$\\pm$30.5 & 515.5$\\pm$211.5 \\\\\nV1974 Cyg & 201$\\pm$54 & 561.5$\\pm$50.5 \\\\\nV2361 Cyg & \\nodata & $<$456 \\\\\nV2362 Cyg & \\nodata & $<$990 \\\\\nV2467 Cyg & $<$456 & 702$\\pm$97 \\\\\nV2468 Cyg & $<$586 & \\nodata \\\\\nV2487 Oph & \\nodata & $<$986 \\\\\nV2491 Cyg & 40$\\pm$2 & 44$\\pm$1 \\\\\nV2672 Oph & 22$\\pm$2 & 28$\\pm$2 \\\\\nV351 Pup & \\nodata & $<$490 \\\\\nV382 Vel & $<$185 & 245.5$\\pm$22.5 \\\\\nV407 Cyg & 15$\\pm$5 & 30$\\pm$5 \\\\\nV458 Vul & 406$\\pm$4 & $>$1051 \\\\\nV4633 Sgr & \\nodata & $<$934 \\\\\nV4743 Sgr & 115$\\pm$65 & 634$\\pm$108 \\\\\nV5114 Sgr & $<$1086 & \\nodata \\\\\nV5115 Sgr & $<$546 & 882$\\pm$336 \\\\\nV5116 Sgr & 332.75$\\pm$275.25 & 938$\\pm$126 \\\\\nV5558 Sgr & $>$850\\tablenotemark{b} & \\nodata \\\\\nV5583 Sgr & $<$81 & 149$\\pm$68 \\\\\nV574 Pup & 571$\\pm$302 & 1192.5$\\pm$82.5 \\\\\nV597 Pup & 143$\\pm$23 & 455$\\pm$15 \\\\\nV598 Pup & \\nodata & $<$127 \\\\\nV723 Cas & $<$3698 & $>$5308 \\\\\nV838 Her & \\nodata & $<$365 \\\\\nXMMSL1 J060636 & \\nodata & $<$291 \\\\\n\\enddata\n\\tablecomments{t$_{on}$ and t$_{off}$ bracket the time after visual maximum\nwhen the hardness ratio HR2 is softer than -0.8.}\n\\tablenotetext{a}{Evolution of $[$\\ion{Fe}{7}$]$ (6087\\AA) and lack of \n$[$\\ion{Fe}{10}$]$ (6375\\AA) in our SMARTS optical\nspectra are consistent with this upper limit from the X-ray non-detection.\nSee Section \\ref{fex}.}\n\\tablenotetext{b}{Optical spectra are slowly becoming more ionized which\nis consistent with slowly increasing SSS emission observed with {\\it 
Swift}.}
\end{deluxetable}

\subsection{Turn-off time\label{S:toff}}

Table \ref{timescales} shows t$_{off}$ times or upper\/lower limits
for the novae in our sample. If optical light curve decline times,
{\it e.g.} t$_2$, are used as simple proxies for WD masses, then
there should be a relationship between
t$_2$ and the duration of the SSS phase. In Figure \ref{t2turnoff} the
turn-off time, t$_{off}$, is shown versus t$_2$.
Overplotted as the solid line is the turn-off versus decline relationship
of \citet[][Equ. 31]{2010ApJ...709..680H} where t$_3$ was converted
to t$_2$ using Equ. 7 in \citet{2007ApJ...662..552H}.
The combined uncertainties of both equations are represented by
the two dotted lines. \citet{2010ApJ...709..680H} find that the time when
nuclear burning ends is $\propto$ t$_{break}^{1.5}$ (Equ. 26), where
t$_{break}$ is the time of the steepening of their model free-free
optical-IR light curves. This relationship is derived using a series of
steady state models with a decreasing envelope mass to fit the observed
multiwavelength light curves. The X-ray and UV light curves are fit with
blackbodies while the optical and IR curves use optically thin, free-free
emission. The parameters of the model are the WD mass, the composition of
the WD envelope, and its mass prior to outburst. While the general trend
is similar, the observed data do not fit the \citet{2010ApJ...709..680H}
relationship, especially when the sample is expanded to include the novae
with only upper or lower limits.

The relationship derived by \citet{2010ApJ...709..680H} utilizes the
t$_2$ derived from the $y$ band light curve instead of the $V$ band as in
this paper. The $y$ band is used by \citet{2010ApJ...709..680H} since it
generally samples the continuum, whereas the $V$ band can have a contribution
in the red wing from strong H$\alpha$ line emission. However, the difference
in filters cannot explain the poor agreement between the data and the
relationship in Fig. \ref{t2turnoff} since there are similar numbers of
novae that fall above the line as below. If a contribution from H$\alpha$
in $V$ were significant then the disagreement would not be symmetric.

Similarly, Figure \ref{velturnoff} shows the relationship between the
FWHM and turn-off time with the dotted line depicting the
\citet{2003A&A...405..703G} turn-off vs. velocity relation. This relationship
was derived from all the SSS nova data available at the time, which comprised
only 4 well constrained SSS novae and 4 novae with turn-off limits. With
the significantly larger sample currently available it is clear that
there is not a tight fit to the relationship.
This discrepancy is particularly acute for the slower novae in our sample,
which have turned off much sooner than expected. These figures illustrate
that the gross behavior of novae is still poorly understood
and confirm that the observational characteristics of an individual
nova are governed by more than just the WD mass.

\begin{figure*}[htbp]
\plotone{newturnoffvst2.ps}
\caption{SSS turn-off time as a function of t$_2$ time with the
\citet{2010ApJ...709..680H} relationship (solid line) and its
associated uncertainty (dotted lines) overplotted. Upper and lower
limits are also shown.
Filled circles are known RNe.
Half filled circles are suspected RNe based on their characteristics.
\label{t2turnoff}}
\end{figure*}

\begin{figure*}[htbp]
\plotone{turnoffvsvel.ps}
\caption{SSS turn-off time as a function of the FWHM of H$\alpha$ or
H$\beta$ near visual maximum. The relationship of \citet{2003A&A...405..703G}
is shown as the dotted line. Upper and lower limits are also shown.
Filled circles are known RNe. Half filled circles
are suspected RNe based on their characteristics.
\label{velturnoff}}
\end{figure*}

Accurate determinations of the duration of nuclear burning
can also provide an independent ejected mass estimate. Recently,
\citet{2010ApJ...712L.143S} found that the ejected mass is only
dependent on the total radiated energy, E$_{rad}$, and does not
require knowledge about the geometry and structure of the shell
as with other methods. E$_{rad}$ is not a trivial value to determine
as it depends on the bolometric luminosity of the source and the
duration of the outburst. {\it Swift}\ observations can potentially
determine the bolometric flux when the bulk of the emission falls in a
narrow wavelength region, both during the early, optically thick phase in
the UV and optical and later in the soft X-rays during the SSS phase.
Estimates of the luminosity during both phases require
an accurate determination of the extinction and the distance.
Perhaps the best example on which to use the \citet{2010ApJ...712L.143S}
technique is RS Oph. With a bolometric luminosity of
3$\times$10$^{4}$ L$_{\odot}$ from TMAP atmosphere models
\citep{2011ApJ...727..124O} and a t$_{off}$ of 70 days, the estimated
ejected mass is $\sim$ 2$\times$10$^{-6}$ M$_{\odot}$. This is consistent
with the low mass estimates from the radio
\citep[(4$\pm$2)$\times$10$^{-7}$ M$_{\odot}$;][]{2009MNRAS.395.1533E}
and hydro-dynamical models of the X-ray behavior
\citep[1.1$\times$10$^{-6}$ M$_{\odot}$;][]{1992MNRAS.255..683O} and
\citep[$\sim$5$\times$10$^{-6}$ M$_{\odot}$;][]{2009ApJ...691..418D}.

\subsubsection{SSS phase durations}

Figure \ref{histogram}a shows the distribution of the duration of the SSS
phase for this sample of novae. Since there are still relatively few
novae with well established turn-off times, a coarse histogram with only three
bins is used. The bins have durations of less than one year, between
one and three years, and greater than three years. Due to large
uncertainties in their exact turn-off times, ten of the sample novae
cannot be placed within a single bin and thus are shown as the smaller
cross-hatched columns between the bins in which they might belong.
Of the \totallimitSSS\ novae with detected SSS emission or with strong
limits on the duration of the SSS phase, 89\% have turned off in under
3 years. There are only four novae, GQ Mus, LMC 1995, V574 Pup, and
V723 Cas, with detected SSS emission beyond 3 years. V458 Vul and
V1213 Cen were still SSSs at their last observations and could also
exceed 3 years. A similar rapid turn-off was inferred from a search of
the {\it ROSAT}\ archive of novae with SSS detections. \citet{2001A&A...373..542O}
found only 3 SSS novae among the 39 Galactic and Magellanic Cloud novae
in the {\it ROSAT}\ archive observed at least once within 10 years after
visual maximum.
The median age of the 19 novae with documented turn-off
times from this sample is 1.4 years.

\begin{figure*}[htbp]
\plottwo{histogram.ps}{M31histogram.ps}
\caption{Distribution of the durations of well established SSS novae
in the Galaxy\/Magellanic Clouds (left figure) and M31 (right figure)
from \citet{2010A&A...523A..89H}.
The three duration bins are less than one year, between 1 and 3 years,
and greater than three years. The hashed areas include the novae with
only limits on their turn-off time that preclude placing them in a
specific bin.
\label{histogram}}
\end{figure*}

The situation is different for nova surveys of M31
\citep{2007A&A...465..375P,2010A&A...523A..89H} where SSSs identified as
classical novae 5-10 years after outburst are fairly common, {\it e.g.}
1995-05b, 1995-11c, and 1999-10a. Figure \ref{histogram}b shows the same
bins as before but with the 18 M31 novae detected as SSSs given in Table 9
of \citet{2010A&A...523A..89H}. The difference can be explained by the
predominance of slower novae in the M31 sample. The mean t$_2$ time
of the nine M31 novae with reported decline times in the
\citet{2010A&A...523A..89H} SSS sample is 31 days whereas the peak
for our sample is significantly faster at 8 days (Figure \ref{t2vsfwhm}).
The discrepancy in speed class between the two samples is due to selection
effects. By design the Galactic\/Magellanic sample consists primarily
of bright and hence faster novae. M31 surveys sample the entire
galaxy but with fewer {\it Chandra}, {\it XMM}\ and {\it Swift}\ observations that are
randomly scattered in time. The M31 strategy finds many novae since the
observed M31 nova rate, $\sim$ 30 novae yr$^{-1}$ \citep{1989AJ.....97.1622C},
is greater than that of the Milky Way
\citep[$\sim$ 5 novae yr$^{-1}$;][]{1997ApJ...487..226S}; however, with
limited time sampling, slower novae with longer SSS phases are easier to
detect than fast novae with rapid turn-offs.

{\it ROSAT}\ detected 2 SSS novae out of 21 Galactic novae for a Milky Way
detection frequency of 9.5\% \citep{2001A&A...373..542O}.
If the 4 RNe in the {\it ROSAT}\ list
are discarded because their observations were taken $\gtrsim$ 1 year
after outburst, the detection frequency increases to 11.8\%. The M31
survey has a similarly low SSS detection frequency of 6.5\%
\citep{2010AN....331..187P}. These two results show that it is difficult
to catch novae during their SSS phase via random time sampling.
However, a more systematic approach that 1) targets only
bright and low extinction novae and 2) obtains multiple
observations early in the outburst may have a greater detection
frequency. Indeed, {\it Swift}\ has a significantly greater SSS detection rate of
$\sim$ 45\% during its five years of operation with this more systematic
approach.

\subsection{SSS emission in the hard X-ray spectrum of
V407 Cyg\label{v407cygSSS}}

In the initial analysis of the {\it Swift}\ data in \citet{2011A&A...527A..98S}
a second soft component was required to fit some of the {\it Swift}\ X-ray spectra.
However, there were insufficient counts to distinguish between a blackbody
and an optically-thin plasma model.
Assuming a distance of 2.7 kpc
\citep{1990MNRAS.242..653M}, the unabsorbed flux of the soft component
in the day $<$30 model of Table 3 in \citet{2011A&A...527A..98S} gives a
blackbody luminosity of 2$\times$10$^{37}$ erg s$^{-1}$, which is reasonable
for nuclear burning on a WD. To investigate whether the soft emission
in V407 Cyg can be attributed to nuclear burning, we reanalyze the {\it Swift}\
X-ray data with twice as many time bins as previously used. Figure
\ref{v407cygmodel} shows the results. As in \citet{2011A&A...527A..98S},
the model abundances are allowed to vary, but the temperatures are not
significantly different if the abundances are constrained to be solar.
The data prior to day 10 and after day 50 can be fit with a single optically
thin plasma model. The remaining 4 time bins all require a soft component,
which in this analysis is assumed to be a blackbody. Both the derived
N$_H$ and the optically thin component temperature decline with time
in the models. The blackbody effective temperature increases until the
day 36 bin and declines in the day 45 bin, although the error bars are
large enough that it could be constant over the last two dates. The
derived luminosities (over the 0.3--10~keV X-ray band)
for the four dates with blackbody components are
2.3$\times$10$^{42}$ erg s$^{-1}$, 9.3$\times$10$^{37}$ erg s$^{-1}$,
1.9$\times$10$^{35}$ erg s$^{-1}$, and 3.1$\times$10$^{35}$ erg s$^{-1}$,
respectively, assuming a distance of 2.7 kpc. The extreme luminosity
for the day 16 bin cannot be considered reliable, given the very low
fitted temperature of $\sim$ 25~eV, well below the XRT 0.3~keV low-energy
cutoff. Nevertheless, the results of fitting blackbodies to the {\it Swift}\
V407 Cyg data are consistent with a scenario where the nuclear burning
proceeded on the WD surface near the Eddington limit until about 30 days
after visual maximum. The fuel was consumed after that point, leading to a
rapid drop in the luminosity. Thus, although V407 Cyg was not a
true SSS, its soft photon light curve was consistent
with the expected evolution seen in other novae.

\begin{figure*}[htbp]
\plotone{v407cygmodel.ps}
\caption{Results of model fits to the {\it Swift}\ V407 Cyg data set. The top
left panel shows the total 0.3-10 keV (squares) and soft 0.3-1 keV (circles)
light curves. The derived N$_H$ column for the 6 date bins is shown in
the top right panel. The Mekal temperature of the hotter, optically thin
plasma model is shown in the middle left panel while the right middle panel
shows the temperature of the blackbody fit to the softer component. A
second, soft component is not needed in the first and last date bins.
The bottom panels show the observed (left) and unabsorbed (right) fluxes.
Squares give the total from all components while the circles show just
the blackbody contributions. The right axis of the last panel also shows
the corresponding 0.3-10 keV luminosity assuming a 2.7 kpc distance.
\label{v407cygmodel}}
\end{figure*}


\section{DISCUSSION}

\subsection{Orbital period and turn-off time}

\citet{2003A&A...405..703G} found a correlation between the orbital
period and X-ray turn-off time. However, at that time only four novae
had both well determined periods and X-ray turn-off times, GQ Mus,
V1974 Cyg, V1494 Aql, and V382 Vel, and limits on CI Aql and U Sco.
The observed trend implied
that novae with short orbital periods had the longest duration SSS phases.
\n\\citet{2003A&A...405..703G} attributed this relationship to \na feedback loop between the WD and its secondary. The luminous X-rays \nproduced during the SSS phase excessively heat the facing side of the\nsecondary in short period systems. The energy added to the outer\nlayers of the secondary causes it to expand, producing \nhigher mass loss leading to enhanced accretion of material onto \nthe WD. \n\nSince 2003, the turn-off times of \\newturnoffperiod\\ additional \nnovae with known periods have been determined. There are also strong \nlimits on the turn-off times of \\newturnoffperiodlimit\\\nother novae with known orbital periods. \nInclusion of this expanded sample, shown in Figure \\ref{periodvstoff}, \ncauses the trend between orbital period and duration of the \nSSS phase noted by \\citet{2003A&A...405..703G} to disappear. \nThe new distribution, with an increased sample size, shows \nno discernible correlation. Orbital separation apparently\nhas no effect on the duration of nuclear burning. \n\n\\begin{figure*}[htbp]\n\\plotone{newperiod.ps}\n\\caption{SSS turn-off time as a function of orbital period for novae with\nwell established turn-off times and novae with good upper (i.e. still in\nthe SSS phase) and lower limits. \nFilled circles are known RNe. Half filled circles\nare suspected RNe based on their characteristics.\nThe top plot shows the distribution\nhistogram of our sample (solid line) and of all the known novae (dotted\nline) from Table 2.5 in \\citet{Warner2008}.\n\\label{periodvstoff}}\n\\end{figure*}\n\nTo see if the lack of a trend could be explained by having a \nnon-representative sample of novae, the top panel of Figure \n\\ref{periodvstoff} shows the distribution in 1 hour orbital period \nbins of the updated \\citet{Warner2008} sample as the solid line. \nThe distribution of all novae with orbital periods is shown as the \ndotted line and shows that the SSS sample is a consistent sub-sample\nof the known nova period distribution.\n\n\\citet{2010AJ....139.1831S} claim a similar relationship between turn-off \ntime and orbital period, albeit in highly magnetized systems. They find that\nof the eight novae with quiescent luminosities $>$10$\\times$ brighter \nthan pre-eruption, all have long SSS phases, short orbital periods, highly \nmagnetized WDs, and very slow declines during quiescence. \nSimilar to \\citet{2003A&A...405..703G}, \\citet{2010AJ....139.1831S}\npropose that nuclear burning on the WD is prolonged by increased \naccretion from the close secondary but in this case efficiently funneled \non the WD by the strong magnetic fields. The 8 novae \n\\citet{2010AJ....139.1831S} cite are CP Pup, RW UMi, T Pyx, V1500 Cyg, \nGQ Mus, V1974 Cyg, V723 Cas, and V4633 Sgr. \n\nThe hypothesis that these specific characteristics enhance the SSS duration \ncan be directly evaluated using V4633 Sgr, GQ Mus, V1974 Cyg, and V723 Cas, \nsince they all have X-ray observations within the first 3 years of \noutburst. For CP Pup, RW UMi, T Pyx, and V1500 Cyg the assertion of a long \nlasting SSS emission phase depends on secondary evidence as none had any \ndirect X-ray observations during outburst. 
Lacking direct X-ray observations, we will ignore these 4 sources for the
test.

The first X-ray observation of V4633 Sgr was obtained 934
days after visual maximum, but it and subsequent observations were of
a hard source, implying that any SSS emission was missed.
With an upper limit of 2.5 years for its SSS emission, V4633 Sgr cannot
be considered a long-lived SSS nova based on the distribution shown
in Figure \ref{histogram}a. The SSS duration in V1974 Cyg was
even shorter and much better constrained at 1.53$\pm$0.14 years. In
addition, V1974 Cyg was not ``excessively'' luminous in outburst as alleged
in \citet{2010AJ....139.1831S}. Its early UV plus optical fluxes were
consistent with the Eddington luminosity of a WD with a mass range of
0.9-1.4 M$_{\sun}$ \citep{1994ApJ...421..344S}. The later ``excessive'' X-ray
luminosities of \citet{1998ApJ...499..395B} were derived from blackbody fits,
which are known to predict higher luminosities than model atmospheres
fit to the same data.
While V723 Cas has the longest SSS duration known among novae ($\gtrsim 15$
yrs), its orbital period is very long at 16.62 hrs and significantly longer
than that of GQ Mus, 1.43 hrs. The claim of magnetic activity in V723 Cas
is based on the different periodicities observed in the early light curve,
indicating an intermediate polar (IP). However, the multiple periodicities
used as evidence by \citet{2010AJ....139.1831S} were from data
obtained early in the outburst while the nova ejecta were still clearing
\citep{1998CoSka..28..121C}. Photometry obtained at this early stage of
development frequently results in noisy periodograms. Data obtained
later in the outburst by \citet{2007AstBu..62..125G} and over
the last 4 years from our own photometric monitoring \citep[Hamilton C.,
private communication,][]{2007AAS...210.0404S} reveal a well defined
16.7 hr period with a large $\sim$ 1.5 magnitude amplitude in the UV,
optical and NIR bands. There is no other evidence in the literature
to support the claim that V723 Cas is magnetic.
Of the 4 novae with supporting X-ray observations, only GQ Mus fully
matches the criteria of a long lasting SSS on a magnetic WD in
a short period system.

With our expanded X-ray sample there are 3 additional novae with well
constrained SSS durations that can potentially be used to test the hypothesis.
V4743 Sgr \citep{2006AJ....132..608K}, V597 Pup \citep{2009MNRAS.397..979W},
and V2467 Cyg \citep{2008ATel.1723....1S} are IP candidates and thus
believed to have strong magnetic fields. The orbital periods for
V597 Pup and V2467 Cyg are relatively short at 2.66 and 3.8 hrs,
respectively, but the period in V4743 Sgr is much longer at 6.74
hrs (see Table \ref{chartable}). While the turn-off times for
these novae are all longer than one year, they are not exceptionally
long, with durations of 1.74$\pm$0.29, 1.25$\pm$0.04, and 1.85$\pm$0.33
years for V4743 Sgr, V597 Pup, and V2467 Cyg, respectively. Thus
the data available do not imply that short orbital periods or
strong magnetic fields produce significantly longer SSS phases
than in the average nova from our sample.

An interesting question is why there is no trend between orbital
period and SSS duration, since the underlying assumption of enhanced
accretion due to heating of the secondary is sound. One reason
would be that there is no significant enhancement in the
mass transfer rate from the illuminated secondary, perhaps from shielding
due to a thick disk.
Another possibility is that there is an effect but it is subtle and
affected by other variables such as the strength of the magnetic field, the
composition of the accreted material, the WD mass, etc. A third possibility
is that an accretion disk cannot form under the harsh conditions during the
SSS phase, which would inhibit additional mass transfer. More observations
of novae with different characteristics are required in order to understand
the underlying physics.

\subsection{Dusty novae}

The creation, evolution, and eventual destruction of dust occur on
relatively rapid time-scales in novae, making them excellent objects
for understanding dust grain formation. One curious aspect of dust
in novae is how grains can grow within the harsh photoionizing
environment. A correlation of the recent \textit{Spitzer}
spectroscopic observations of dusty novae
\citep[see][for examples]{WoodStar11,Heltonthesis}
with this large X-ray sample can provide insight into why most novae do
not form dust and into the reasons for the large differences in the
composition and amount of dust in the novae that do.

In general, it is believed that grain growth occurs within dense clumps
in the ejecta where the grains are shielded from hard radiation. Spectroscopy
and direct imaging show that nova shells are inherently clumpy
\citep{1997AJ....114..258S,OBB08}. Grain formation inside dense clumps
also explains the higher frequency of dust in slow novae
\citep[see Table 13.1 in][]{ER08} as they eject more material at lower
velocities and suffer greater remnant shaping than fast novae, and thus
provide more protection for grain formation. However, even fast novae
with small ejected masses have shown some dust formation, such as V838 Her
\citep{2007ApJ...657..453S}. A contrary view has been proposed in which
ionization actually promotes dust formation via the accretion of grain
clusters through induced dipole interactions \citep{2004A&A...417..695S}.

Known and likely dusty novae represent 31\% of the X-ray sample but only
two, V2467 Cyg and V574 Pup, were also SSSs. While there were no
characteristic dips in either visual light curve indicating significant
dust formation \citep{2009AAS...21349125L,2005IBVS.5638....1S}, both novae
showed evidence of some dust formation from the presence of weak silicate
emission features in the late \textit{Spitzer} mid-IR spectra
\citep{WoodStar11}. In V2467 Cyg the first {\it Swift}\ X-ray detection was 458
days after maximum. It was weak but dominated by soft photons. The following
{\it Swift}\ observation on day 558 revealed the nova was still soft but
was also almost 3 times brighter. The {\it Spitzer} spectra showing
weak dust features were taken between these {\it Swift}\ observations,
around day 480. V574 Pup was detected as an SSS by
{\it XMM}\ and {\it Swift}\ 872 and 1116 days after visual maximum, respectively.
{\it Spitzer} observations taken around the same time as the {\it Swift}\ data
showed the same weak silicate emission features seen in V2467 Cyg.
The X-ray observations confirm that dust, albeit weak, can exist in the
ejecta when the amount of photoionizing radiation is at its peak.
Detailed photoionization modeling of these novae is required to determine if
clumps existed in the ejecta during this time and if the conditions were
sufficient to protect the dust grains.

There are also two strong dust formers in the sample with hard X-ray emission.
V2362 Cyg was detected numerous times by {\it Swift}\ \citep{2008AJ....136.1815L}
and twice with {\it XMM}\ \citep{2007ATel.1226....1H} but none of the observations
were consistent with an SSS. However, V2362 Cyg had significant dust
emission at the times of the {\it Swift}\ and first {\it XMM}\ observations.
The dust likely formed in the later extraordinary mass ejection event
that produced the large secondary peak in the light curve and increased
ejection velocities. The additional material would have absorbed the soft
X-ray emission and delayed the onset of any SSS phase.
In the last {\it XMM}\ observation it was extremely faint, indicating that
if there was an SSS phase it was over by 990 days after maximum.
V1280 Sco was detected as an X-ray source late in its outburst but the X-rays
were relatively hard and faint \citep{2009ATel.2063....1N}. V1280 Sco
has yet to be observed as an SSS and its internal extinction is still large.
In both V2362 Cyg and V1280 Sco, grain growth was likely enhanced by the
effective shielding provided by the large mass ejections, producing the
large dust events.

\subsection{Variability during SSS phase \label{var}}

At the maximum effective temperature, (2-8)$\times$10$^5$ K, the bulk of
the emission in a nova outburst comes from X-rays, which are primarily soft.
Assuming the external column is low enough and the effective temperature
is sufficiently high, this X-ray emission can be detected. The theory
of constant bolometric luminosity predicts that at X-ray maximum the
light curve should be relatively constant since one is observing the
majority of the emitted flux. Constant bolometric luminosity has been
observationally verified in the early phase of the outburst from the
combined UV, optical and near-IR light curve data, {\it e.g.} FH Ser
\citep{1974ApJ...189..303G}, V1668 Cyg \citep{1981MNRAS.197..107S} and
LMC 1988\#1 \citep{1998MNRAS.300..931S}. However, the expected X-ray
plateau in all well studied {\it Swift}\ novae has been far from constant.
In addition, the rise to X-ray maximum also shows large amplitude
oscillations. What is the source of the variability during both phases?

One important caveat when discussing the {\it Swift}\ data is that the XRT
count rate is not a direct measure of the bolometric flux, only of the
portion that is emitted between 0.3 and 10 keV. During the SSS phase the
vast majority of photons are emitted within this range, but if the effective
temperature varies due to photospheric expansion or contraction, the XRT
count rate will change even if the source has a constant bolometric
luminosity \citep[see also][]{2011ApJ...727..124O}.
Figure \ref{xrtevol} illustrates how the estimated XRT count
rate varies as a function of effective temperature for simple blackbody
models (WebPIMMS\footnote{http:\/\/heasarc.nasa.gov\/Tools\/w3pimms.html})
assuming a constant luminosity and column density; see Section \ref{Xrayvar}.
A decline from 500,000 K to 400,000 K drops the total {\it Swift}\ XRT count rate
by a factor of 6. The change in HR1 is almost a factor of 10 while there is
essentially no change in the HR2 hardness ratio.
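The strong temperature sensitivity of the soft band can be illustrated with a
minimal sketch that integrates an unabsorbed blackbody photon spectrum over
the 0.3-1 keV band at fixed bolometric luminosity. Because it ignores
absorption and the XRT response, it reproduces only the qualitative trend,
not the WebPIMMS factor of $\sim$6.
\begin{verbatim}
import numpy as np

K_B_EV = 8.617e-5                      # Boltzmann constant [eV/K]

def relative_soft_rate(t_eff_k, e_lo=300.0, e_hi=1000.0, n=4000):
    """Relative 0.3-1 keV photon rate of an unabsorbed blackbody
    at fixed bolometric luminosity (no instrument response)."""
    kt = K_B_EV * t_eff_k              # kT in eV
    e = np.linspace(e_lo, e_hi, n)     # photon energies [eV]
    n_e = e**2 / np.expm1(e / kt)      # blackbody photon number spectrum
    band = np.sum(n_e) * (e[1] - e[0]) # photons in the band (arbitrary units)
    return band / kt**4                # divide by T^4 to hold L_bol fixed

ratio = relative_soft_rate(5.0e5) / relative_soft_rate(4.0e5)
print("soft-band rate ratio, 500 kK vs 400 kK: %.1f" % ratio)
# -> a factor of a few even without absorption; the intervening column
#    and the XRT response account for the larger factor quoted above.
\end{verbatim}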
Thus changes in temperature might in principle account for the observed X-ray
oscillations; see Section \ref{teffvar}. Why the temperature or radius
of the WD photosphere would change on the observed
time scales remains an open question, however. The next sections outline
possible explanations for the variations seen during the SSS phase.

\subsubsection{Variable visibility of the WD\label{Xrayvar}}

Figures 2 and 5 in \citet{2011ApJ...727..124O} show in exquisite detail
the rapid and extreme variability in the X-ray light curve and hardness
ratio evolution in RS Oph. In general the trend was for RS Oph to be softer
at high X-ray flux, but counterexamples were also observed.
\citet{2011ApJ...727..124O} cite variable visibility of the hot WD as a
possible explanation of the observed phenomena. Changes in the
extinction can come either from variable ionization of the ejecta, leading
to changing extinction at higher ionization states, or from neutral absorption
by high density clumps passing through the line of sight. Changes in
the ionization structure of the ejecta are unlikely given the rapid
hour-to-day time-scales of the variations, but those time-scales are
consistent with the crossing times of small, dense clumps traveling across
the line of sight, assuming transverse velocities of a few percent of the
radial velocity. There is evidence
for this at other wavelengths. For example, a sudden absorption component
that appeared in the Balmer lines of V2214 Oph in July 1988 was interpreted
by \citet{1991ApJ...376..721W} as the passage of an absorbing clump in
front of the emitting region. Both types of absorption changes
should be manifest as a hardening of the X-ray spectrum, or an increase in
the hardness ratio with increasing soft flux emission, consistent with
the counterexamples of \citet{2011ApJ...727..124O}.

As a test of the neutral absorption theory we use the model results
from recent photoionization analyses in WebPIMMS to determine the
count rates and hardness ratios for different column densities and
simulate the effect of clumps. The photoionization models require
two components, high density clumps embedded within a larger diffuse medium
\citep[see][for details]{2007ApJ...657..453S,2010AJ....140.1347H},
to fit the emission lines of the ejected shell. For convenience we
use the May 24th, 1991
model parameters for V838 Her in Table 2 of \citet{2007ApJ...657..453S}.
The model uses a blackbody with an effective temperature of 200,000 K to
photoionize a two component spherical shell. The model shell has a
clump-to-diffuse density ratio of 3 with a radius equal to the expansion
velocity multiplied by the time since outburst. To facilitate comparisons
with the results in Figure \ref{xrtevol}, the same unabsorbed bolometric
flux is assumed. WebPIMMS predicts a {\it Swift}\ soft band count rate
of 5.3$\times$10$^{-3}$ ct s$^{-1}$ through the
lower density diffuse gas (N$_H$ = 1.2$\times$10$^{21}$ cm$^{-2}$)
and 8.5$\times$10$^{-6}$ ct s$^{-1}$ from the higher density clumps
(N$_H$ = 3.7$\times$10$^{21}$ cm$^{-2}$). While the total count rate
declines by over 100$\times$, the HR2 hardness ratio does not change with
this particular model. The HR2 can vary significantly when using different
model parameters such as lower initial densities or higher clump-to-diffuse
density ratios. Care is required when using hardness ratios of
low resolution data.
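The order of magnitude of this suppression can be recovered with a single-zone
toy estimate in which the soft band is treated as monochromatic. The effective
cross-section adopted below is an assumed, purely illustrative value
appropriate near 0.3 keV; it is not a parameter of the photoionization models
cited above.
\begin{verbatim}
import numpy as np

sigma_eff = 2.5e-21   # assumed effective cross-section per H atom [cm^2]
nh_diffuse = 1.2e21   # column through the diffuse gas [cm^-2]
nh_clump = 3.7e21     # column through a sight line crossing a clump [cm^-2]

# Extra attenuation of the soft band when a clump crosses the line of sight
suppression = np.exp(-(nh_clump - nh_diffuse) * sigma_eff)
print("transmitted fraction relative to the diffuse gas: %.1e" % suppression)
# -> ~2e-3, i.e. a decline of several hundred, the same order as the
#    WebPIMMS soft-band count rates quoted above.
\end{verbatim}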
In an SSS source with a hard X-ray component, such as RS Oph, the hardness
ratio will increase if the soft component decreases for any reason, not just
due to absorption.

Another problem with variable visibility in RNe and very fast CNe is that
the amount of mass ejected is very low, thus minimizing any effect the ejecta
have on the obscuration of the WD. The effect should be greater
in slower novae with more ejected mass, such as V458 Vul. In addition,
\citet{2011ApJ...727..124O} find that in RS Oph the ratio of high flux
states to low flux states as a function of energy is not consistent with
either type of variable visibility of the WD. Rather, the best fit comes from
an increase in the effective temperature and a declining radius;
see Section \ref{teffvar}.


\begin{figure*}[htbp]
\plotone{webpimms.ps}
\caption{The logarithmic X-ray count rates, hardness ratios and logarithmic
$uvw2$ count rates
as a function of blackbody temperature as calculated by webPIMMS.
An unabsorbed, bolometric flux of 3.3$\times$10$^{-8}$ erg s$^{-1}$ cm$^{-2}$
(1$\times$10$^{38}$ erg s$^{-1}$ at 5 kpc) and an N$_H$ of
3$\times$10$^{21}$ cm$^{-2}$ were used in all models.
The top panel shows the soft (0.3-1 keV, solid line and filled circles)
and hard (1-10 keV, dashed line and triangles) count rates. The soft
contribution dominates at all effective temperatures. The middle panels
show the hardness ratios HR1(=H\/S) and HR2(=(H-S)\/(H+S)). The bottom panel
shows how the $uvw2$ count rate increases as the blackbody temperature
declines. \label{xrtevol}}
\end{figure*}

\subsubsection{Periodic oscillations\label{sssperiods}}

There are several proposed explanations of the periodic X-ray variations.
In the X-ray light curve of V1494 Aql, \citet{2003ApJ...584..448D} found
periodicities that they attributed to non-radial g$^+$-mode pulsations.
Similar oscillations have been observed in V4743 Sgr
\citep{2003ApJ...594L.127N,2010MNRAS.405.2668D}.

The factor of almost ten decline in the {\it XMM}\ X-ray light curve of
V5116 Sgr was interpreted by \citet{2008ApJ...675L..93S} as a partial
eclipse of the WD since its duration was consistent with the orbital period.
Finer binning of the day 762, 764, and 810 {\it Swift}\ observations of
V5116 Sgr reveals the presence of a 500-800 second oscillation. This
X-ray periodicity is significantly shorter than the 2.97 h orbital
period found by \citet{2008A&A...478..815D}. In addition,
the day 810 data show a strong flare that increases the count rate by
a factor of three with no significant change in the hardness ratio.
This was similar to the flare seen in V1494 Aql \citep{2003ApJ...584..448D}.
No other flares were seen in the V5116 Sgr data set.

Other orbital periods have been detected with {\it Swift}.
U Sco is a high inclination system with deep eclipses and an orbital
period of 1.23 days \citep{2001A&A...378..132E}. Deep eclipses were
observed in the 2010 outburst in the {\it Swift}\ UVOT light curves while the
XRT light curves showed generally lower flux levels during the UV eclipses,
but did not otherwise exhibit clear eclipse signatures
\citep{2010ATel.2442....1O}. A 1.19 day orbital period was deduced from
the {\it Swift}\ UVOT light curves in the RN LMC 2009a
\citep{2009ATel.2001....1B}.
This orbital period was also observed in the
XRT light curve during the SSS phase, but with a lag with respect to the
UV\/optical of 0.24 days \citep{Bode2011}.

The X-ray behavior in CSS 081007:030559+054715 was extremely unusual. This
odd source was discovered well after optical maximum by the Catalina Real-time
Transient Survey \citep{2008ATel.1835....1P}. Its X-ray spectra were
extremely soft, consistent with the low extinction toward its position
high above the Galactic plane ($b$ = -43.7$\arcdeg$), well outside the
region of the Galaxy where novae are generally found. Figure \ref{csslc}
shows the {\it Swift}\ XRT\/UVOT light curves compiled from the data in Table 2.
To first order both light curves are in phase with
significant variability superimposed over three major maxima.
\citet{2010AN....331..156B} report that the
{\it Swift}\ light curves are unique with a 1.77 day periodicity. They speculate
that the period is due to obscuration of the X-ray source in a high
inclination system with a 1.77 day orbital period.

Oscillations significantly shorter than the hours-to-days of typical
nova orbital periods have also been detected with {\it Swift}. Oscillations
of order 35 s have been observed in RS Oph
\citep{2006ATel..770....1O,2011ApJ...727..124O}
and KT Eri \citep{2010ATel.2423....1B}. Some WDs have rotation
periods in this range \citep[{\it e.g.} 33 s in AE Aqr;][]{2008PASJ...60..387T}.
It seems unlikely that RS Oph and KT Eri should both have nearly identical
rotation periods unless the pulsations are tied to the mass of the WD,
which for both novae is predicted to be near the Chandrasekhar limit.
Another reason the observed
variability might not be associated with the rotating WD is that
the $\sim$ 35 second periodicity is not always detected in the {\it Swift}\
and {\it Chandra}\ X-ray light curves. The 35 second pulsations could be due to
a nuclear burning instability on the WD surface
\citep[see ][]{2011ApJ...727..124O}. If so, then the period is a
function of WD mass, and perhaps indicates that the WDs in RS Oph and KT Eri
are near the Chandrasekhar mass.

\subsubsection{Temperature variations\label{teffvar}}

Long-lived SSSs, such as Cal 83, have non-periodic X-ray on\/off states.
\citet{2000A&A...354L..37R} speculate that the
decline in X-ray flux is due to accretion disk interactions, such
as an increase in the mass accretion rate that causes the WD photosphere
to expand and shifts the SED into the EUV. These sources then become
optically brighter from the irradiation of the accretion disk and
secondary by the larger WD photosphere. The source remains X-ray faint
until the WD photosphere shrinks back to its
original size. Figure \ref{v458vullc} shows similar behavior in the
{\it Swift}\ X-ray and UV light curves of V458 Vul
compiled from the data in Table 2. The 100$\times$ decline
in the X-ray light curve is matched by a 1.5 magnitude increase
in the UV light curve. Figure \ref{xrtevol} shows that similar X-ray
and UV variations can be achieved by large declines in the effective
temperature. For example, a decline from 700,000 K to 500,000 K
produces a factor of 85 decline in the X-ray count rate and a 1.1
magnitude $uvw1$ band increase.
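The quoted UV response follows almost directly from the constant bolometric
luminosity assumption. In the Rayleigh-Jeans limit, which applies to the
$uvw1$ band at these temperatures, a rough estimate that neglects the filter
bandpass and absorption gives
\[
F_{uvw1} \propto R_{\rm WD}^{2}\,T_{\rm eff} \propto T_{\rm eff}^{-3}
\;\;\Rightarrow\;\;
\Delta m_{uvw1} \approx 7.5\,\log_{10}\!\left(\frac{700{,}000~{\rm K}}{500{,}000~{\rm K}}\right) \approx 1.1~{\rm mag},
\]
since $R_{\rm WD}^{2} \propto T_{\rm eff}^{-4}$ at fixed luminosity, in line
with the webPIMMS models.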
If the underlying phenomenon in V458 Vul
is the same as proposed for RX J0513.9-6951 \citep{2000A&A...354L..37R}
and Cal 83 \citep{2002A&A...387..944G} and the accretion disk has been
reestablished, V458 Vul should have an orbital period of order one
day to produce an accretion rate high enough to drive stable nuclear burning.
However, \citet{2010MNRAS.407L..21R} find a short orbital period of $\sim$
98 minutes, implying that V458 Vul will not have a long-term SSS phase.

\begin{figure*}[htbp]
\plotone{csslc.ps}
\caption{The X-ray and uvw2 light curves of the peculiar nova
CSS 081007:030559+054715. The X-ray and UV evolution are in phase.
\label{csslc}}
\end{figure*}

\begin{figure*}[htbp]
\plotone{v458vullc.ps}
\caption{{\it Swift}\ X-ray (top panel) and uvw1 (bottom panel) light curve
for V458 Vul. The X axis is the number of days after visual maximum.
Prior to day 400, V458 Vul was in transition to X-ray maximum.
After day 400 the majority of the X-ray observations had a count rate
of $\sim$ 1 ct s$^{-1}$. However, during the later phase there
are three periods where the X-ray counts declined by about a factor
of 100. During these times the uvw1 ($\lambda_c$ = 2600\AA)
photometric brightness increased by a magnitude.
\label{v458vullc}}
\end{figure*}

\subsection{Estimating time scales in a variable environment}

The variability of novae also raises questions about how confident
one can be in the determination of turn-off times. A prime example can be
seen in the X-ray light curve of V458 Vul in Figure \ref{v458vullc}. If
monitoring had stopped following the four observations between days 450
and 480, the subsequent recovery would never have been found, and it
would have been noted that V458 Vul had a turn-off time of 1.2 years
instead of $\gtrsim$ 2.9 years. While this could be a significant
problem with the determination of a turn-off time in most novae, it is
likely that the phenomenon observed in V458 Vul is rare. The X-ray
behavior of V458 Vul, a 100$\times$ decline in flux and a subsequent
recovery, is the \textit{only} case observed in the \totallimitswiftSSS\
novae studied by \textit{Swift} with SSS emission.
\citet{2001A&A...373..542O} found no similar ``reborn'' SSSs in their review
of the {\it ROSAT}\ all sky survey, although some of the novae in M31 previously
thought to be RNe with very rapid outburst time scales may actually be
normal novae but with on\/off states similar to those in V458 Vul. Since the
sudden X-ray declines in V458 Vul also had corresponding UV rises, if these
sources exist in M31, they should be easily found with X-ray and UV-capable
facilities such as {\it Swift}\ and {\it XMM}.

\subsection{SSS in RNe and the light curve plateau\label{RNplateau}}

A plateau in the visible light curves of RNe is speculated to arise from
reradiation of the SSS emission by an accretion disk that dominates the
emission after the free-free emission has faded \citep{2008ASPC..401..206H}.
Once nuclear burning ends and the accretion disk is no longer irradiated,
the light curve continues its decline to quiescence. Figures 17.1 and
17.2 show that the optical plateaus are nearly coincident with
the SSS emission in RS Oph and U Sco.

The other well observed RNe in the {\it Swift}\ archive are novae LMC 2009a
and V407 Cyg. LMC 2009a was previously seen in outburst in 1971
\citep{2009IAUC.9019....1L}.
It had a much longer SSS phase than RS Oph
and U Sco, ending 270 days after maximum. Unfortunately, the V band light
curve compiled from the AAVSO archives and our own SMARTS photometry
does not extend beyond 110 days after visual maximum, so we cannot
determine whether an optical plateau was observed later in this outburst
(see Figure 17.3). However, the {\it Swift}\ uvw2 and SMARTS B
band light curves are relatively flat during the SSS phase (see Bode et al.
submitted), indicating that LMC 2009a did go through an optical plateau
phase. The data
are not as extensive for V407 Cyg, but the rise in the soft X-ray emission
consistent with nuclear burning on the WD (see Section \ref{v407cygSSS})
is coincident with a short plateau in the optical light curve, as shown
in Figure 17.4.

There are three other novae with well observed SSS phases in the {\it Swift}\
archive that are suspected to be RNe based on their outburst characteristics.
The novae are V2491 Cyg, KT Eri, and V2672 Oph. Figure 17.5
shows that there is no indication of a plateau in V2491 Cyg while it was
an SSS. However, the SSS phase in V2491 Cyg was extremely short, $<$10 days,
which may not be sufficient time to produce a noticeable optical plateau,
or its accretion disk may not have reformed this early in the
outburst.

The early outburst spectra of KT Eri were indicative of the He\/N class with
high expansion velocities typical of RNe \citep{2009ATel.2327....1R}.
KT Eri also had short X-ray light curve modulation similar
to RS Oph, see Section \ref{sssperiods} and
\citet{2010ATel.2392....1B}, while Bode et al. (2011, in prep) draw
attention to KT Eri's similarities with the X-ray behavior of LMC 2009a.
The X-ray and V band observations are shown in Figure 17.6.
The AAVSO V band light curve shows a flattening at 80 days after visual
maximum, or about 10 days after KT Eri became an SSS, implying there was an
optical plateau.

The case for V2672 Oph as an RN is based on its extreme expansion velocities
at maximum \citep{2009IAUC.9064....2A} and early radio synchrotron emission
similar to that observed in RS Oph \citep{2009ATel.2195....1K}.
\citet{2010MNRAS.tmp.1484M} also find many similarities between V2672 Oph
and U Sco. Unfortunately, the X-ray and optical observations were
hampered by its faintness at visual maximum and the relatively
large column density.
Based on the hardness ratio, V2672 Oph was in its SSS phase between days 15
and 30 after visual maximum (Figure 17.7). The AAVSO
V band light curve is supplemented with SMARTS V band photometry, which
shows a plateau between days 10 and 50 after visual maximum.

Of the 4 known RNe and 3 suspected RNe, there are sufficient optical data
to reveal the presence of a plateau in six. Of those six, all but V2491
Cyg have evidence of an optical plateau correlated with the X-ray SSS
emission. However, \citet{2010ApJS..187..275S} finds that not all Galactic
RNe have optical plateaus. It is interesting to note that if the plateau
phase is caused by reradiation off an accretion disk, as suggested by
\citet{2008ASPC..401..206H}, then there is no apparent effect of the
inclination of the system on the presence or strength of the plateau. One
would expect the effect to be stronger in more face-on systems like RS Oph,
$i$ = 39$^{+1}_{-10}$$^{\circ}$ \citep{2009ApJ...703.1955R}, than in
edge-on systems such as U Sco, $i = 82.7\pm2.9^{\circ}$
\citep{2001MNRAS.327.1323T}.
Regardless of the root cause of optical \nplateaus, their presence can clearly be used as a proxy signature of \nSSS emission. \nHowever, it should be stressed that while optical plateaus likely \nindicate soft X-ray emission, the start and ending of this phase in the \noptical light curve does not necessarily correspond to the turn-on and\nturn-off times in the SSS phase. Relationships between the optical\ntimescales and the X-ray are only weakly correlated, e.g. \nFig. \\ref{t2turnoff}, and the two phases do not always align in the \nRN and suspected RN in this sample (Fig. 17.1-17.7).\n\nOptical\/NIR plateaus should only be observed in RNe and other fast \nnovae that eject very little mass. In slower novae the later spectra \n({\\it i.e.} several tens of weeks after maximum light) are dominated by \nhydrogen recombination and nebular line emission effectively hiding any \nirradiation effects. The continuum from the WD or a hot accretion disk \ncan only be observed after the ejecta have sufficiently cleared. \n\n{\\bf Fig. Set} \n\\figsetnum{17}\n\\figsettitle{X-ray and optical evolution}\n\n\n\\figsetgrpnum{17.1}\n\\figsetgrptitle{RS Oph}\n\\figsetplot{f17_1.eps}\n\\figsetgrpnote{X-ray and optical evolution of RS Oph. The top panel is the {\\it Swift}\\ XRT (0.3-10 keV) count rate and the middle panel is the hardness ratio, (H-S)\/(H+S) where H = 1-10 keV and S = 0.3-1 keV. The bottom panel shows the AAVSO V band light curve. \\label{rsophplat}}\n\n\n\n\\figsetgrpnum{17.2}\n\\figsetgrptitle{U Sco}\n\\figsetplot{f17_2.eps}\n\\figsetgrpnote{X-ray and optical evolution of U Sco. The top panel is the {\\it Swift}\\ XRT (0.3-10 keV) count rate and the middle panel is the hardness ratio, (H-S)\/(H+S) where H = 1-10 keV and S = 0.3-1 keV. The bottom panel shows the AAVSO V band light curve. \\label{uscoplat}}\n\n\n\n\\figsetgrpnum{17.3}\n\\figsetgrptitle{Nova LMC 2009 A}\n\\figsetplot{f17_3.eps}\n\\figsetgrpnote{X-ray and optical evolution of Nova LMC 2009a. The top panel is the {\\it Swift}\\ XRT (0.3-10 keV) count rate and the middle panel is the hardness ratio, (H-S)\/(H+S) where H = 1-10 keV and S = 0.3-1 keV. The bottom panel shows the AAVSO V band light curve and includes our own SMARTS photometry. \\label{nlmc09plat}}\n\n\n\n\\figsetgrpnum{17.4}\n\\figsetgrptitle{V407 Cyg}\n\\figsetplot{f17_4.eps}\n\\figsetgrpnote{X-ray and optical evolution of V407 Cyg. The top panel is the {\\it Swift}\\ XRT (0.3-10 keV) count rate and the middle panel is the hardness ratio, (H-S)\/(H+S) where H = 1-10 keV and S = 0.3-1 keV. The bottom panel shows the AAVSO V band light curve. To accentuate the soft contribution to the total in the top panel, the squares show the soft, 0.3-1 keV, light curve. The V band light curve includes the AAVSO data (filled circles) and the photometry of \\citet{2011MNRAS.410L..52M} (diamonds). \\label{v407cygplat}}\n\n\n\n\\figsetgrpnum{17.5}\n\\figsetgrptitle{V2491 Cyg}\n\\figsetplot{f17_5.eps}\n\\figsetgrpnote{X-ray and optical evolution of V2491 Cyg. The top panel is the {\\it Swift}\\ XRT (0.3-10 keV) count rate and the middle panel is the hardness ratio, (H-S)\/(H+S) where H = 1-10 keV and S = 0.3-1 keV. The bottom panel shows the AAVSO V band light curve. \\label{v2491cygplat}}\n\n\n\n\\figsetgrpnum{17.6}\n\\figsetgrptitle{KT Eri}\n\\figsetplot{f17_6.eps}\n\\figsetgrpnote{X-ray and optical evolution of KT Eri. The top panel is the {\\it Swift}\\ XRT (0.3-10 keV) count rate and the middle panel is the hardness ratio, (H-S)\/(H+S) where H = 1-10 keV and S = 0.3-1 keV. 
The bottom panel shows the AAVSO V band light curve. The gaps in the light curves are due to KT Eri being behind the Sun. \\label{kteriplat}}\n\n\n\n\\figsetgrpnum{17.7}\n\\figsetgrptitle{V2672 Oph}\n\\figsetplot{f17_7.eps}\n\\figsetgrpnote{X-ray and optical evolution of V2672 Oph. The top panel is the {\\it Swift}\\ XRT (0.3-10 keV) count rate and the middle panel is the hardness ratio, (H-S)\/(H+S) where H = 1-10 keV and S = 0.3-1 keV. The bottom panel shows the AAVSO V band light curve and includes our own SMARTS photometry. \\label{v2672ophplat}}\n\n\n\n\n\\begin{figure*}[htbp]\n\\plotone{f17_1.eps}\n\\caption{X-ray and optical evolution of RS Oph. The top panel is the {\\it Swift}\\\nXRT (0.3-10 keV) count rate and the middle panel is the hardness ratio, \n(H-S)\/(H+S) where H = 1-10 keV and S = 0.3-1 keV. The bottom panel\nshows the AAVSO V band light curve. Similar figures for U Sco, Nova LMC 2009\nA, V407 Cyg, V2491 Cyg, KT Eri, and V2672 Oph are available in the electronic\nedition. \\label{rsophplat}}\n\\end{figure*}\n\n\\subsection{SSS proxies at other wavelengths: The $[$\\ion{Fe}{10}$]$\\ line\\label{fex}}\n\n\\btxt{\\citet{2001AJ....121.1126V} used the evolution of UV emission\nline light curves developed in \\citet{1996ApJ...463L..21S} for V1974 Cyg\nto estimate turn-off times. This allowed \\citet{2001AJ....121.1126V} to\ndetermine the nuclear burning timescales of five novae with no pointed\nX-ray observations but significant amounts of {\\it IUE} data. Unfortunately,\nit is currently difficult to obtain sufficient UV emission line data to \nutilize this technique while the optical plateau (\\S \\ref{RNplateau})\nonly applies to fast and recurrent novae. Another X-ray proxy is \nneeded for slower novae.}\n\nThe emergence of the coronal $[$\\ion{Fe}{10}$]$\\ 6375\\AA\\ line in the nebular spectra\nof novae has been long recognized as a strong indication of photoionization \nof the ejecta from a hot source \\citep[e.g.,][]{1989ApJ...341..968K}. \nWith an ionization potential of 235 eV, an ejected shell must be highly \nionized by a hot WD to produce $[$\\ion{Fe}{10}$]$. While shocks can produce \nhigh temperatures, they only contribute very early in the outburst\nand are insignificant during the later nebular phase when $[$\\ion{Fe}{10}$]$\\ is typically\nobserved, in all but the RS Oph-type RNe. For example, strong $[$\\ion{Fe}{10}$]$\\ \nand [\\ion{Fe}{14}] 5303\\AA\\ emission has been observed in RS Oph in all \noutbursts with adequate spectroscopic coverage \\citep{2009ApJ...703.1955R}. \nHowever, these lines appear well before the SSS phase begins. A \nrelationship between $[$\\ion{Fe}{10}$]$\\ and soft X-ray emission has not been previously \ndemonstrated but can be strengthened with our larger nova sample.\n\nSeven novae with confirmed SSS emission, GQ Mus \\citep{1989ApJ...341..968K},\nV1974 Cyg \\citep{1995A&A...294..488R}, V1494 Aql \\citep{2003A&A...404..997I},\nV723 Cas \\citep{2008AJ....135.1328N}, V574 Pup \\citep{Heltonthesis}, V597 Pup, \nand V1213 Cen \\citep{2010ATel.2904....1S} all had strong $[$\\ion{Fe}{10}$]$\\ lines in \ntheir late nebular spectra. Example spectra of V597 Pup and V1213 Cen from \nour SMARTS archive are shown in Figure \\ref{fexplots}. In addition, \nextensive optical spectra from our Steward Observatory northern \nhemisphere nova monitoring \ncampaign shows that V2467 Cyg may also have had weak $[$\\ion{Fe}{10}$]$\\ emission at the \nsame time it was a SSS but this can not be confirmed due to nearby \n\\ion{O}{1} lines. 
These novae clearly show that the presence of strong\n$[$\\ion{Fe}{10}$]$\\ in the optical spectrum is indicative of underlying SSS emission.\nTo our knowledge there has never been a nova with strong $[$\\ion{Fe}{10}$]$\\ emission \nthat was not also a SSS during contemporaneous X-ray observations. \nWhile additional optical and X-ray observations are needed to fully test\nthis hypothesis, ground-based spectroscopic monitoring is a powerful tool \nfor detecting SSS novae from $[$\\ion{Fe}{10}$]$\\ emission in novae with significant\nejected mass. The RNe and very fast CNe \nwith rapid turn-on\/off times are not strong photoionization sources \nlong enough to produce $[$\\ion{Fe}{10}$]$\\ in their meager ejected shells.\n\n\\begin{figure*}[htbp]\n\\plottwo{v1213cen_100627.ps}{v597pup080326.ps}\n\\caption{$[$\\ion{Fe}{10}$]$\\ 6375\\AA\\ emission in V1213 Cen (left) and V597 Pup (right)\nobtained on June 27th, 2010 (415 days from visual maximum) \nand March 26th, 2008 (133 days from visual maximum), respectively.\nThe $[$\\ion{Fe}{7}$]$ 6087\\AA\\ line is also visible in both spectra. Strong\n$[$\\ion{Fe}{10}$]$\\ emission relative to $[$\\ion{Fe}{7}$]$ is a hallmark of novae in\ntheir SSS phase. \n\\label{fexplots}}\n\\end{figure*}\n\n\n\\section{SUMMARY}\n\nOver the last decade our knowledge of the X-ray behavior of \nnovae has increased dramatically with the launch of \nthe latest generation\nX-ray facilities. Observations of novae when they are radiating the majority \nof their flux in the soft X-ray band provide critical insight into \nthe behavior of the WD and TNR processes. Currently \\totalSSS\\ \nGalactic\/Magellanic novae have been observed as SSSs, of which \\totalswiftSSS\\ \nsuch classifications have come from \nover 2 Ms of {\\it Swift}\\ observations during the last five years. \n\nThis large sample shows that individual novae can differ significantly \nfrom fits to smaller ensemble data sets such as the t$_2$ relationship of \n\\citet{2010ApJ...709..680H} and the expansion velocity relationship of \n\\citet{2003A&A...405..703G}. Surprisingly, there is also no relationship \nbetween orbital period and the duration of nuclear burning. This large\ndata set confirms that many factors are in play in the evolution of the \nSSS phase.\n\nThe duration of nuclear burning on the WD is short, with 89\\% of the \nnovae having turned off within 3 years in this expanded sample. The median\nduration of the sample is 1.4 years. This contrasts with the same distribution\nin M31, which is peaked at longer burning novae. The difference is likely\na selection effect between the two surveys.\n\nThe new {\\it Swift}\\ data are also challenging our understanding of novae \nwith highly variable X-ray light curves both during the rise to and at\nX-ray maximum. Various mechanisms are likely at work to produce the \nvariability. Additional observations are warranted not only to help\ndecipher the current peculiar observations but also to be sure that we\nhave captured the full range of variability behaviors both periodic and\nnon-periodic that novae may yet produce. 
\nLong {\\it XMM}\\ and {\\it Chandra}\\ grating observations can explore the short \nterm oscillations more effectively than {\\it Swift}\\ whereas {\\it Swift}\\ can\neasily track the long term behavior such as turn-on and turn-off times.\nIn addition, simultaneous X-ray\/UV \nobservations only available through {\\it XMM}\\ and {\\it Swift}\\ will continue to\nbe a powerful tool to test the evolution of the emission from the WD \nduring the outburst.\n\nTo date no strong dust-forming novae have been detected as SSSs. \nV2362 Cyg did have detectable soft X-ray photons but it was not similar \nto any of the other SSS novae. While V574 Pup and V2467 Cyg were in\nthe SSS phase they had IR features indicating weak silicate dust emission.\nV1280 Sco had a large DQ Her-like dust event but also ejected so much \nmaterial and at a low velocity that it is still optically thick several\nyears after visual maximum. Any SSS phase will not be detected until\nthis material clears.\n\nThere are optical behaviors that track SSS emission in novae. \nFor the RNe with well defined plateaus \nin their optical light curves, RS Oph and U Sco, the X-ray light curves\nreach maximum around the same time. However, not all RNe and suspected\nRNe in the sample had optical plateaus even though they had well \ndocumented observations during X-ray maximum. An optical spectroscopic \nsignature indicative of an SSS phase is the presence\nof strong $[$\\ion{Fe}{10}$]$\\ 6375\\AA\\ emission. In the sample, all novae with $[$\\ion{Fe}{10}$]$\\ that\nwere subsequently observed in the X-ray were SSSs. These were slower\nnovae that ejected significantly more material than the RNe. The inverse 
\nof the $[$\\ion{Fe}{10}$]$\\ relationship does not hold since the source may turn off before \n$[$\\ion{Fe}{10}$]$\\ can be created in the ejecta. While neither the presence of optical\nplateaus nor that of $[$\\ion{Fe}{10}$]$\\ has yet been shown to be simultaneous with SSS emission,\nthese relationships offer excellent opportunities to use ground-based \nmonitoring to coordinate X-ray observations during the important \nSSS phase. \n\nAdditional X-ray data need to be collected since the sample is statistically \nmeager with only \\totalSSS\\ known SSS novae and is smaller still for novae\nwith early, hard X-ray detections. Trends can be difficult to confirm given\nthe wide range of behavior observed during the different X-ray phases.\nWith the sample heavily biased toward fast and recurrent novae, efforts \nshould be expended on novae that are not \ncurrently well represented in the X-ray sample, such as slow and \ndust forming novae. The monitoring of the two slow novae that \nhave been detected as X-ray sources, V5558 Sgr and V1280 Sco, but \nhave not yet evolved to a SSS state, will help in \nunderstanding slow systems. Likewise, {\\it Swift}\\ monitoring of the \ntwo long lasting SSSs, V723 Cas and V458 Vul, is also of interest \nsince they are rare, and thus, important to our understanding\nof why they persist. \n\nFinally, it is important to continue to collect X-ray observations of \nnovae and build on this sample. This analysis shows that each nova is in\nsome ways unique and that attempts to predict its behavior based on a \nrelationship to a single observational value, {\\it e.g.} t$_2$ versus the \nnuclear burning timescale, are fraught with difficulties. Some of these\nproblems can be addressed by expanding the sample to include regions of\nthe parameter space that are not well represented. 
This X-ray sample\nincludes few slow novae which likely explains the differences between\nthe nuclear burning timescale of the Milky Way and M31 surveys.\nIt is also equally important to obtain numerous, high \nquality data for all bright novae through their evolution and at different\nwavelengths from X-ray to radio. Multiwavelength observations are critical\nto properly interpret nova phenomena such as the apparent early turn-off in\nV458 Vul and to verify periodicities seen in the X-ray, particularly\npotential orbital periods. With the understanding that comes from a few \nwell observed novae \nlike RS Oph and U Sco, the entire nova data set can be anchored to nova \ntheory. These large data sets also reveal new phenomena such as the \nstrong X-ray variability that is not appreciated in novae with \nsparser observations or detected at other wavelengths.\n\n\\acknowledgments\n\nThis research has made use of data obtained from NASA's {\\it Swift}\\ satellite. \nWe thank Neil Gehrels and the {\\it Swift}\\ team for generous allotments of ToO \nand fill in time. Funding support from NASA NNH08ZDA001N1. \nStony Brook University's initial participation in the SMARTS consortium was \nmade possible by generous contributions from the Dean of Arts and Sciences, \nthe Provost, and the Vice President for Research of Stony Brook University. \nWe acknowledge with thanks the variable star observations from the AAVSO \nInternational Database contributed by observers worldwide and used in this \nresearch. \nJPO, KP, PE \\& AB acknowledge the support of the STFC. \nSS acknowledges partial support from NASA and NSF grants to ASU. \nJJD was supported by NASA contract NAS8-39073 to the {\\it Chandra}\\ X-ray Center.\n\n{\\it Facilities:} \\facility{Swift(UVOT\/XRT)}, \\facility{AAVSO}, \n\\facility{CTIO:1.3m}, \\facility{CTIO:1.5m}, \\facility{Bok(B\\&C spectrograph)},\n\\facility{Spitzer(IRS)}\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\subsection*{Acknowledgments}\nWe thank M. Calandra for useful discussions. We acknowledge funding from EU Graphene Flagship, ERC Grant Hetero2D, EPSRC Grant Nos. EP\/509K01711X\/1, EP\/K017144\/1, EP\/N010345\/1, EP\/M507799\/ 5101, and EP\/L016087\/1 and the Joint Project for the Internationalization of Research 2015 by Politecnico di Torino and Compagnia di San Paolo.\n\nThe authors declare no competing financial interests.\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} diff --git a/data_all_eng_slimpj/shuffled/split2/finalzzdylw b/data_all_eng_slimpj/shuffled/split2/finalzzdylw new file mode 100644 index 0000000000000000000000000000000000000000..49eb499561c20e5cc374001fb38f564b161c45a1 --- /dev/null +++ b/data_all_eng_slimpj/shuffled/split2/finalzzdylw @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}\nSecondary cosmic ray (CR) particles of an extensive air shower (EAS) approaching towards the ground as a thin disk through the atmosphere from the direction of their parent primary CR particle at the speed of light. After the first interaction point, the disk begins to form, and continues to grow, and then starts attenuating after the depth of shower maximum. The transverse and longitudinal momenta imparted on the shower particles emerging from their parent particles via the hadronic interactions would cause the lateral and longitudinal spreads for these particles in an EAS. 
Consequently, the periphery of successive iso-density contours gets shortened about the EAS axis, so that the shower profile resembles an inclined inverted truncated cone.\n\\section{Cone model: An elliptic lateral density function}\nThe evolution of a conical shower profile of an inclined EAS is shown in Fig. \\ref{Cone_Prof}. The geometric correction is done through the projection of the horizontal elliptic surface onto the shower plane. Corresponding equi-density contours are shown in Fig. \\ref{2D_IsoDen}(a), (b). The projected electron density in the shower plane ($\\rho_s$) is\n\\begin{equation}\n\\label{eq:Geom_Corr}\n\\rho_s(r_s) = {\\rho_g(r_g)}\/{ \\cos\\Theta}\n\\end{equation}\nAn exponential fall of the density of the shower electrons results from the EAS attenuation with a factor $ e ^ {-\\eta \\cdot AB}$, where $\\eta$ is the attenuation length [1]. The electron density in the ground plane $(\\rho_g)$ would then be \n\\begin{equation}\n\\label{eq:Attn_Dens}\n\\rho_g(r_g)= \\cos\\Theta \\cdot \\rho_s(r_s) \\cdot e^{-\\eta \\cdot AB}\n\\end{equation}\n\\begin{figure}[htbp]\n\t\\begin{minipage}{18pc}\n\t\t\\includegraphics[trim=0.8cm 0.8cm 0.8cm 1.5cm, clip=true, totalheight=0.22\\textheight, angle=0]{Conical_Geometry.pdf}\n\t\t\\caption{\\label{Cone_Prof}Conical shower profile.}\n\t\\end{minipage}\\hspace{1pc}%\n\t\\begin{minipage}{18pc}\n\t\n\t\t\\includegraphics[trim=0.6cm -0.2cm 0.6cm -2.2cm, clip=true, width=0.16\\textheight]{Density_IsoLines_Ground.eps}\n\t\t\\includegraphics[trim=0.6cm -0.2cm 0.6cm -2.2cm, clip=true, width=0.16\\textheight]{Density_IsoLines_Shower.eps}\n\t\t\\caption{\\label{2D_IsoDen} 2-dimensional iso-density contours in the ground and shower plane.}\n\t\\end{minipage}\n\\end{figure}\n\nA characteristic function (CF) describing the exponential behaviour of the LDD with $r_s$ is proposed as follows,\n\\begin{equation}\n\\label{eq:CF}\n\\rho(r_s) \\simeq c \\cdot e^{-\\alpha (\\frac{r_s}{r_0})^\\kappa}\n\\end{equation}\nFinally, the gap length parameter is given by\n\\begin{equation}\n\\label{eq:Xc_1}\nx_C = 6813 y_R^{2-\\kappa} r_0^\\kappa \\eta (\\alpha \\kappa)^{-1}\n\t\\frac{\\tan \\Theta}{\\cos(\\Theta+\\sigma)} \n\t\\cdot\n\t\\frac{\\cos\\sigma}{H-r_s \\sin \\Theta}\n\\end{equation}\nSince $x_C>0$, the shower attenuation shifts the center of the ellipse towards the early part of the shower. Let us write the above equation as $x_C = 2 A_f y_R \\tan\\Theta$, where $A_f$ stands for\n\\begin{equation}\n\\label{eq:Af}\nA_f = 6813 r_0^\\kappa \\eta (\\alpha \\kappa)^{-1}\n\\cdot\n\\frac{\\cos\\sigma}{2\\cos(\\Theta+\\sigma)(H-r_s \\sin \\Theta)}\n\\end{equation}\nThe modified length of the semi-minor axis of an equi-density ellipse is\n\\begin{equation}\n\\label{eq:yR}\n\ty_R = -2 A_f r_g \\cos\\beta_g \\tan\\Theta \\cdot \\frac{\\cos^2(\\Theta + \\sigma)}{\\cos^2\\sigma}\n\t+ r_g\\sqrt{1-\\cos^2\\beta_g \\sin^2(\\Theta+\\sigma)}\n\\end{equation}\nThe most commonly used LDF in CR air shower physics is the NKG function.\nThe polar angle dependent elliptic-LDF (ELDF) can be obtained from the NKG type Symmetric-LDF (SLDF) by substituting the variable $r_s$ with $y_R$, and the equation for the ELDF finally takes the following structure. 
\n\\begin{equation}\n\\label{eq:ELDF}\n\t\\rho(r_g,\\beta_g)=\\cos\\Theta\\cdot C(s_\\perp)N_e \\cdot (y_R\/r_0)^{s_\\perp-2} (1+y_R\/r_0)^{s_\\perp-4.5}\n\\end{equation}\nwhere $C(s_\\perp)=\\frac{\\Gamma(4.5-s_\\perp)}{2\\pi r_0^2\\Gamma(s_\\perp)\\Gamma(4.5-2s_\\perp)}$ is the normalization factor, while $s_{\\perp}$, $r_0$ and $N_e$ are called the age parameter, the moli\\`{e}re radius and the shower size, respectively.\n\\section{Results and discussions}\nThe MC simulation code \\textit{CORSIKA}, version 7.69, with the hadronic interaction models QGSJet-01c and UrQMD is used. The polar electron densities are reconstructed using the SLDF, the ELDF including only the projection, and the ELDF including both the attenuation and the projection [2], at a core distance of 50 m for an average 100 PeV proton shower with $\\Theta =50^\\circ$. These are shown in Fig. \\ref{Polar_Dens_ELDF}, and the result reconfirms that the ELDF with GL is more appropriate for the reconstruction of non-vertical EASs.\nIn Fig. \\ref{LDD_PI}, the polar averaged LDDs for P and Fe initiated showers are approximated by the CF (Eq. \\ref{eq:CF}). From the best fit to the simulated data, the parameter $\\alpha$ takes the values 4.3 and 3.7, while $\\kappa$ takes 0.36 and 0.43, for P and Fe respectively.\nIn Fig. \\ref{Iso_dens}, the center of the equi-density ellipse experiences a translation from \\textit{O} to \\textit{C} $(\\overline{OC} \\sim 9.75~m)$ solely due to attenuation of EAS electrons. On the other hand, the model predicted GL is about $6.35$~m, evaluated using Eq. \\ref{eq:Xc_1}. \nThe GL parameter exhibits significant sensitivity to P and Fe initiated showers for low values of $\\rho_e$ (Fig. \\ref{XcYr_PI}). GL is found to increase with the energy of the primary CR (Fig. \\ref{XcYr_E}). The elongation of the iso-density curve with increasing $\\Theta$ is evident from the values of GL for different zenith angles (Fig. \\ref{XcYr_Z}). The model predicted values for GL, which are shown by the dotted and short dashed lines (Figs. \\ref{XcYr_PI}-\\ref{XcYr_Z}), are in good agreement with the simulated data. We have studied the dependence of the GL on $\\Theta$ for a fixed electron density (Fig. \\ref{GL_Z_IsoD}) as well as at a fixed $y_R$ value (Fig. \\ref{GL_Z_yR}), which shows a sensitivity to the primary CR mass. A correlation between the mean GL and the primary energy ($E$), corresponding to $\\Theta = 50^\\circ$ and $\\rho_e = 1.5~m^{-2}$ for P and Fe induced EASs, is depicted in Fig. 
\\ref{GL_E}.\n\\begin{figure}[h]\n\t\\begin{minipage}[b]{2in}\n\t\t\\includegraphics[trim=0.6cm 0.6cm 1.cm 1.05cm, clip=true, totalheight=0.17\\textheight]{PDD_R50m_ELDF.eps}\n\t\t\\caption{\\label{Polar_Dens_ELDF}Ground plane polar density distribution.}\n\t\\end{minipage}\\hspace{1pc}%\n\t\\begin{minipage}[b]{2in}\n\t\t\\includegraphics[trim=0.6cm 0.6cm 0.9cm 0.8cm, clip=true, totalheight=0.17\\textheight]{CF_PI.eps}\n\t\t\\caption{\\label{LDD_PI}Electron LDD fitted by CF.}\n\t\\end{minipage}\\hspace{1pc}%\n\t\\begin{minipage}[b]{2in}\n\t\t\\includegraphics[trim=0.6cm 0.6cm 0.6cm 1.05cm, clip=true, totalheight=0.17\\textheight]{IsoDen_Curve.eps}\n\t\t\\caption{\\label{Iso_dens}Formation of GL from equi-density ellipse.}\n\t\\end{minipage}\n\\end{figure}\n\\begin{figure}[h]\n\t\\begin{minipage}[b]{2in}\n\t\t\\includegraphics[trim=0.6cm 0.6cm 0.6cm 1.05cm, clip=true, totalheight=0.17\\textheight]{XcYr_PI.eps}\n\t\t\\caption{\\label{XcYr_PI}$x_C$ vs $y_R$ for P and Fe initiated showers.}\n\t\\end{minipage}\\hspace{1pc}%\n\t\\begin{minipage}[b]{2in}\n\t\t\\includegraphics[trim=0.6cm 0.6cm 0.6cm 1.05cm, clip=true, totalheight=0.17\\textheight]{XcYr_E.eps}\n\t\t\\caption{\\label{XcYr_E}$x_C$ vs $y_R$ for two energies.}\n\t\\end{minipage}\\hspace{1pc}%\n\t\\begin{minipage}[b]{2in}\n\t\t\\includegraphics[trim=0.6cm 0.6cm 0.6cm 1.05cm, clip=true, totalheight=0.17\\textheight]{XcYr_Z.eps}\n\t\t\\caption{\\label{XcYr_Z}$x_C$ vs $y_R$ at two zenith angles.}\n\t\\end{minipage}\n\\end{figure} \n\\begin{figure}[h]\n\t\\begin{minipage}[b]{2in}\n\t\t\\includegraphics[trim=0.6cm 0.6cm 0.6cm 1.05cm, clip=true, totalheight=0.17\\textheight]{GL_Z_IsoD.eps}\n\t\t\\caption{\\label{GL_Z_IsoD}Mass sensitivity of GL from its variation with $\\Theta$ at fixed $\\rho_e$.}\n\t\\end{minipage}\\hspace{0.8pc}%\n\t\\begin{minipage}[b]{2in}\n\t\t\\includegraphics[trim=0.6cm 0.6cm 0.6cm 1.05cm, clip=true, totalheight=0.17\\textheight]{GL_Z_yR.eps}\n\t\t\\caption{\\label{GL_Z_yR}Mass sensitivity of GL from its variation with $\\Theta$ at fixed $y_R$.}\n\t\\end{minipage}\\hspace{0.8pc}%\n\t\\begin{minipage}[b]{2in}\n\t\\includegraphics[trim=0.6cm 0.6cm 0.6cm 1.05cm, clip=true, totalheight=0.17\\textheight]{GL_E.eps}\n\t\\caption{\\label{GL_E} Mass sensitivity of GL from its variation with $E$.}\n\\end{minipage}\n\n\\end{figure}\n\nApplication of ELDF has been incarnated in terms of the local age parameter (LAP) in Fig. \\ref{LAP_PI}.\nThe analytical expression for the LAP [3] between two adjacent radial distances $[r_i, r_j]$ is:\n\\begin{equation\n\\label{eq:LAP}\ns_{ij}=\\frac{ln(F_{ij}X_{ij}^2Y_{ij}^{4.5})}{X_{ij}Y_{ij}} \n\\end{equation}\nHere, $F_{ij}=\\rho(y_R(i))\/\\rho(y_R(j))$,\n$X_{ij} = y_R(i)\/y_R(j)$, \n$Y_{ij} = (\\frac{y_R(i)}{r_0}+1)\/(\\frac{y_R(j)}{r_0}+1)$\nand $r_0$ is the moli\\`{e}re radius obtained from the best fit value of LDD by CF. A correlation between the mean LAP with primary energy for P and Fe induced EASs are given in Fig. 
\\ref{MPAL_PI}.\n\\begin{figure}[htbp]\n\t\\begin{center}\n\t\t\\begin{minipage}[b]{2.5in}\n\t\t\t\\includegraphics[trim=0.6cm 0.6cm 0.6cm 1.05cm, clip=true, totalheight=0.17\\textheight]{LAP_PI.eps}\n\t\t\t\\caption{\\label{LAP_PI}Variation of LAP with $r_g$.}\n\t\t\\end{minipage}\\hspace{1pc}%\n\t\t\\begin{minipage}[b]{2.5in}\n\t\t\t\\includegraphics[trim=0.6cm 0.6cm 0.6cm 1.05cm, clip=true, totalheight=0.17\\textheight]{MLAP_PI.eps}\n\t\t\t\\caption{\\label{MPAL_PI} Mean LAP versus $E$.}\n\t\t\\end{minipage}\n\t\\end{center}\n\\end{figure}\n\\section{Conclusions \\& future outlook}\nIn this work a modeling of the atmospheric attenuation effect on the LDD of electrons is made considering the conical shower profile. \nThe magnitude of the GL that determines the attenuation power of shower particles for a non-vertical EAS, possesses a clear primary CR mass dependence. The ELDF has been used to the simulated electron densities to estimate the LAP, which manifests different radial variation. The variation of mean LAP with primary energy\/shower size clearly shows sensitivity to CR mass composition. \n\nThere is a scope to judge the high energy hadronic interaction model dependence of our results in future. An analysis of simulation data considering the LDD of muons and also the combined LDD of electrons and muons are in progress.\n\n\n\\section*{Acknowledgments}\n\\noindent \nRKD acknowledges the financial support under grant. no. 1513\/R-2020. from the University of North Bengal.\n\\section*{References}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\n\n\n\n\n\n\n\nIn the so-called unitary limit of a quantum Bose or Fermi gas, the\nscattering length $a$ diverges. This occurs at a fixed point of the \nrenormalization group, thus these systems provide interesting examples of \ninteracting, scale-invariant theories with dynamical exponent $z=2$, \ni.e. non-relativistic. \nThey can be realized experimentally by tuning the scattering\nlength to $\\pm \\infty$ using a Feshbach resonance.\n(See for instance \\cite{Experiment1,Experiment2} and references\ntherein.) They are also thought to occur at the surface of\nneutron stars. These systems have also attracted much theoretical \ninterest\\cite{Leggett,Nozieres,Ohashi,Ho,HoMueller,Astrakharchik,Nussinov,Perali,LeeShafer,Wingate,Bulgac,Drummond,Burovski,Nishida,Nikolic}. \nThere have even been some proposals to use the AdS\/CFT correspondence \nto learn about these models\\cite{Son,Maldacena,Herzog,Adams}. \n\n\nBecause of the scale-invariance, the only length scales in the problem\nare based on the density $n^{1\/d}$ where $d$ is the spatial dimension, \nand the thermal wavelength $\\lambda_T = \\sqrt{2\\pi\/mT}$. Equivalently,\nthe only energy scales are the chemical potential $\\mu$ and the temperature $T$. \nThe problem is challenging since there is no small paramater to expand in\nsuch as $n a^3$. Any possible critical point must occur at a \nspecific value of $x=\\mu\/T$. This can be translated into \nuniversal values for $n_c \\lambda_T^3$, or for fermions \nuniversal values for $T_c\/T_F$ where $\\epsilon_F = k_B T_F$ is the Fermi energy. \nFor instance the critical point of an ideal Bose gas is the simplest example,\nwhere $n_c \\lambda_T^3 = \\zeta (3\/2) = 2.61$. \n\n\n\nThe present work is the sequel to \\cite{PyeTon2}, where we used the S-matrix \nbased formulation\nof the quantum statistical mechanics developed in\\cite{LeClairS,PyeTon}. 
\nThis approach is very well-suited to the problem because in the unitary limit\nthe S-matrix $S=-1$, and kernels in the integral equations simplify. \nIn fact, this approach can be used to develop an expansion in $1\/a$. \nThe main formulas for the 2 and 3 dimensional cases of both bosons and fermions\nwere presented, however only the 2-dimensional case was analyzed in detail\nin \\cite{PyeTon2}. \nHere we analyze the 3-dimensional case. \n\n\n\nThe models considered are the simplest \nmodels of non-relativistic bosons or fermions with quartic\ninteractions. The bosonic model is defined by the action\nfor a complex scalar field $\\phi$. \n\\begin{equation}\n\\label{bosonaction}\nS = \\int d^3 {\\bf x} dt \\( i \\phi^\\dagger \\partial_t \\phi - \n\\frac{ |\\vec{\\nabla} \\phi |^2}{2m} - \\frac{g}{4} (\\phi^\\dagger \\phi)^2 \\)\n\\end{equation}\nFor fermions, due to the fermionic statistics, \none needs at least a 2-component field \n$\\psi_{\\uparrow , \\downarrow} $:\n\\begin{equation}\n\\label{fermionaction}\nS = \\int d^3 {\\bf x} dt \\( \\sum_{\\alpha=\\uparrow, \\downarrow} \ni \\psi^\\dagger_\\alpha \\partial_t \\psi_\\alpha - \n\\frac{|\\vec{\\nabla} \\psi_\\alpha|^2}{2m} - \\frac{g}{2} \n\\psi^\\dagger_\\uparrow \\psi_\\uparrow \\psi^\\dagger_\\downarrow \\psi_\\downarrow \\) \n\\end{equation}\nIn both cases, positive $g$ corresponds to repulsive interactions. \nThe bosonic theory only has a $U(1)$ symmetry. The fermionic theory\non the other hand has the much larger SO(5) symmetry. \nThis is evident from the work\\cite{Kapit} which considered a relativistic\nversion, since the same arguments apply to a non-relativistic kinetic term. \nThis is also clear from the work\\cite{Nikolic} which considered \nan $N$-component version with Sp(2N) symmetry, and noting that\nSp(4) = SO(5). \n\n\n\nThe interplay between the scattering length, the bound state, \nand the renormalization group fixed point was discussed \nin detail from the point of view of the\nS-matrix in \\cite{PyeTon2}. In 3 spatial dimensions the fixed point occurs\nat negative coupling $g_*= - 4 \\pi^2 \/m\\Lambda$, where $\\Lambda$ is \nan ultra-violet cut-off. For $g$ less than $g_*$, there is a bound\nstate that can Bose-Einstein condense (BEC), and for the fermionic case, \nthis is referred to as the BEC side. As $g_*$ is approached from this side,\nthe scattering length goes to $+\\infty$. The bound state disappears at\n$g_*$. When $g$ approaches $g_*$ from above, the scattering length \ngoes to $-\\infty$, and this is referred to as the BCS side. In this paper we\nwork on the BCS side of the cross-over since the bound state does not have\nto be incorporated into the thermodynamics. On this side the interactions\nare effectively attractive. \n\n\nTheoretical studies have mainly focussed on the fermionic case, \nand for the most part at zero temperature, which is appropriate for a large Fermi energy.\n The bosonic case has been less studied, since \na homogeneous \nbosonic gas with attractive interactions is thought to be unstable against\nmechanical collapse, and the collapse occurs before any kind of BEC. \nThe situation is actually different for harmonically trapped gases, where BEC can occur\\cite{trapped}. \nHowever studies of the homogeneous bosonic case were based on a small, \nnegative scattering length\\cite{Stoof,MuellerBaym,Thouless,YuLiLee},\nand it is not clear that the conclusions reached there can be extrapolated \nto the unitary limit. 
Since the density of\ncollapse is proportional to $1\/a$\\cite{MuellerBaym}, extrapolation to infinite scattering length \nsuggests that the gas collapses at zero density, which seems unphysical, since the gas could in\nprinciple be stabilized at finite temperature by thermal pressure. \n One can also point out that in the van der Waals gas, the\ncollapse is stabilized by a finite size of the atoms, which renders\nthe compressibility finite. In the unitary limit, there is nothing to play\nsuch a role. \n In the sequel we will present evidence that the unitary \nBose gas undergoes BEC when $n \\lambda_T^3 \\approx 1.3$. \n This lower value\nis consistent with the attractive interactions. We also estimate \nthe critical exponent describing how the compressibility diverges at the critical point. \n\n\n\n\nIn the next section we define the scaling functions that determine the free energy\nand density, and derive expressions for the energy and entropy per particle, specific heat per\nparticle, and compressibility. In section III the formulation of the unitary limit in\n\\cite{PyeTon2} is summarized. The two-component fermion case is analyzed in section IV.\nEvidence is presented for a critical point with $T_c\/T_F \\approx 0.1$, which is consistent with\nlattice Monte Carlo simulations. Bosons are analyzed in section V, where we present evidence\nfor BEC in this strongly interacting gas. Motivated by the conjectured lower bound\\cite{Kovtun} \nfor\nthe ratio of the viscosity to entropy density $\\eta\/s > \\hbar\/4\\pi k_B$ for relativistic systems,\nwe study this ratio for both the fermionic and bosonic cases in section VI.\n Our results for fermions are\nconsistent with experiments, with $\\eta\/s > 4.72$ times the conjectured lower bound.\nFor bosons, this ratio is minimized at the critical point where \n$\\eta\/s > 1.26$ times the bound. \n\n\n\n\n\n\n\n\n\n\n \n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\\section{Scaling functions in the unitary limit.} \n\n\n\nThe scale invariance in the unitary limit implies some\nuniversal scaling forms\\cite{Ho}. In this section we \ndefine various scaling functions with a meaningful normalization\nrelative to free particles. \n\nFirst consider a single species of bosonic or fermionic particle with\nmass $m$ at chemical potential $\\mu$ and temperature $T$. \n The free energy density\nhas the form\n\\begin{equation}\n\\label{freeenergy}\n\\CF = - \\zeta (5\/2) \\( \\frac{mT}{2\\pi} \\)^{3\/2} T \\, c(\\mu\/T) \n\\end{equation}\nwhere the scaling function $c$ is only a function of $x\\equiv \\mu\/T$.\n($\\zeta$ is Riemann's zeta function.) \nThe combination $\\sqrt{mT\/2\\pi} = 1\/\\lambda_T$, where \n$\\lambda_T$ is the thermal wavelength. For a single free boson or fermion:\n\\begin{equation}\n\\label{cfreelim}\n\\lim_{\\mu\/T \\to 0} ~~ c_{\\rm boson} = 1, ~~~~~~~\n c_{\\rm fermion} = 1- \\inv{2 \\sqrt{2}}\n~~~~~~({\\rm free ~ particles}). \n\\end{equation}\nIt is also convenient to define the scaling function $q$, which is a measure\nof the quantum degeneracy, in terms of the density as follows: \n\\begin{equation}\n\\label{nhat}\nn \\lambda_T^3 = q\n\\end{equation}\nThe two scaling functions $c$ and $q$ are of course related since\n$n= - \\partial \\CF \/ \\partial \\mu$, which leads to \n\\begin{equation}\n\\label{qcprime}\nq = \\zeta (5\/2) c'\n\\end{equation} \nwhere $c'$ is the derivative of $c$ with respect to $x$. 
\nHenceforth $g'$ will always denote the derivative of $g$ with respect to $x$.\nThe expressions for $c$ and $q$ for free theories will be implicit\nin the next section. \n\n\nAlso of interest are several energy per particle scaling functions. \nAt a renormalization group fixed point, the energy density\nis related to the free energy in the same way as for free particles:\n\\begin{equation}\n\\label{energyden}\n\\frac{E}{V} = -\\frac{3}{2} \\CF \n\\end{equation}\nwhere $V$ is the volume. For a free fermion, in the zero \ntemperature limit, the energy per particle $E\/N \\to \\frac{3}{5} \\mu$.\nThe Fermi energy is \n\\begin{equation}\n\\label{EF}\n\\epsilon_F =\\inv{m} \\( 3 \\pi^2 n\/\\sqrt{2} \\)^{2\/3} \n\\end{equation}\nThe above definition can also be used for bosons. \nSince $\\epsilon_F = \\mu$ in the zero temperature free fermionic gas, this leads\nus to define the scaling function $\\xi$:\n\\begin{equation}\n\\label{xi}\n\\xi (x) = \\frac{5}{3} \\frac{E}{N \\epsilon_F} = \\frac{5 \\zeta(5\/2)}{3}\n\\( \\frac{6}{\\pi} \\)^{1\/3} \\, \\frac{c}{q^{5\/3}}\n\\end{equation}\nIn the limit $T\\to 0$, i.e. $x\\to \\infty$, $\\xi \\to 1$ for a free\nfermion. \n\nA different energy per particle scaling function, $\\tilde{\\xi}$, is meaningful\nas $x\\to 0$:\n\\begin{equation}\n\\label{xitilde}\n\\frac{E}{N T} = \\frac{3 (2 \\sqrt{2} -1 ) \\zeta(5\/2) }{2(2 \\sqrt{2} -2) \n\\zeta(3\/2)} \n \\, \\tilde{\\xi} (x) \n\\end{equation}\nWith the above normalization $\\tilde{\\xi} = 1$ for a free fermion in the limit\n$x\\to 0$. In terms of the above scaling functions:\n\\begin{equation}\n\\label{xitildesc}\n\\tilde{\\xi} (x) = \\frac{ \\zeta (3\/2) \n( 2 \\sqrt{2} -2)}{(2 \\sqrt{2} -1)} \\, \\frac{c}{q} \n\\end{equation}\n\nThe entropy density is $s = - \\partial \\CF \/ \\partial T$, and the entropy per particle\ntakes the form\n\\begin{equation}\n\\label{sn}\n\\frac{s}{n} = \\zeta (5\/2) \n\\( \\frac{ 5 c\/2 - x c'}{q} \\) \n\\end{equation}\n\n \n\n\nNext consider the \n specific heat per particle at constant volume and particle number,\ni.e. constant density. One needs $\\partial x\/ \\partial T$ at constant density.\nUsing the fact that $n \\propto T^{3\/2} q$, at constant density\n$q \\propto T^{-3\/2}$. This gives\n\\begin{equation}\n\\label{CV.1}\nT \\( \\frac{\\partial x} {\\partial T} \\)_n = -\\frac{3}{2} \\frac{ q}{q'} \n\\end{equation}\nThe specific heat per particle is then:\n\\begin{equation}\n\\label{CV.2}\n\\frac{C_V}{N} = \\inv{N} \\( \\frac{\\partial E}{\\partial T} \\)_{N,V} =\n\\frac{\\zeta(5\/2)}{4} \\( 15 \\frac{c}{q} - 9 \\frac{c'}{q'} \\) \n\\end{equation}\n\nThe isothermal compressibility is defined as \n\\begin{equation}\n\\label{comp.1}\n\\kappa = - \\inv{V} \\( \\frac{ \\partial V}{\\partial p} \\)_T \n\\end{equation}\nwhere the pressure $p= - \\CF$. Since $n=N\/V$ and $N$ is kept fixed, \n\\begin{equation}\n\\label{comp.2}\n\\kappa = - n \\( \\frac{ \\partial n^{-1} }{\\partial p} \\)_T = \\inv{n T} \\frac{ q'}{q} =\n\\inv{T} \\( \\frac{mT}{2\\pi} \\)^{3\/2} \\frac{q'}{q^2} \n\\end{equation}\n\n\nFinally the equation of state can be expressed parametrically \nas follows. Given $n$ and $T$, one uses eq. (\\ref{nhat}) to find\n$x$ as a function of $n,T$. 
The pressure can then be written as\n\\begin{equation}\n\\label{eqnstate} \np = \\( \\frac{\\zeta(5\/2) c(x(n,T))}{q(x(n,T))} \\) n T\n\\end{equation}\n\n\n\nIn order to compare with numerical simulations and experiments,\nit will be useful to plot various quantities as a function of $q$ or \n$T\/T_F$:\n\\begin{equation}\n\\label{TTF}\n\\frac{T}{T_F} = \\( \\frac{4}{3 \\sqrt\\pi q} \\)^{2\/3} \n\\end{equation}\n\n\n\n\n\\section{Two-body scattering approximation}\n\n\n\nThe main features of the two-body scattering approximation developed in\n\\cite{PyeTon} are the following. Consider again first a single component\ngas. The filling fractions, or \noccupation numbers, are parameterized in terms of a pseudo-energy\n$\\varepsilon ({\\bf k} )$:\n\\begin{equation}\n\\label{fill}\nf({\\bf k} ) = \\inv{ e^{\\beta \\varepsilon ({\\bf k} ) } -s }\n\\end{equation}\nwhich determine the density:\n\\begin{equation}\n\\label{dens}\nn = \\int \\frac{d^3 {\\bf k}}{(2\\pi)^3} ~ \\inv{ e^{\\beta \\varepsilon ({\\bf k} ) } -s }\n\\end{equation}\nwhere $s=1,-1$ corresponds to bosons, fermions respectively\nand $\\beta = 1\/T$. \nThe consistent summation of 2-body scattering leads to \nan integral equation for the \n pseudo-energy $\\varepsilon ({\\bf k})$. It is convenient to define the\nquantity:\n\\begin{equation}\n\\label{ydef}\ny ({\\bf k} ) = e^{-\\beta (\\varepsilon({\\bf k}) - \\omega_{\\bf k} + \\mu )}\n\\end{equation}\nwhere $\\omega_{\\bf k} = {\\bf k}^2 \/ 2m$. Then $y$ satisfies the integral\nequation \n\\begin{equation}\n\\label{yinteq}\ny ({\\bf k} ) = 1 + \\beta \\int \\frac{d^3 {\\bf k}'}{(2\\pi)^3} \\, \nG({\\bf k} - {\\bf k}' ) \\frac{y({\\bf k}' )^{-1}}{e^{\\beta \\varepsilon ({\\bf k}')} -s}\n\\end{equation}\nThe free energy density is then\n\\begin{equation}\n\\label{freefoam2}\n\\CF = -T \\int \\frac{d^3 {\\bf k}}{ (2\\pi)^3} \\[ \n- s \\log ( 1- s e^{-\\beta \\varepsilon } ) \n-\\inv{2} \\frac{ (1-y^{-1} )}{e^{\\beta \\varepsilon} -s} \\] \n\\end{equation}\n\nThe kernel has the following structure:\n\\begin{equation}\n\\label{Gstructure}\nG = - \\frac{i}{\\mathcal{I}} \\log ( 1 + i \\mathcal{I} {\\cal M}}\t\\def\\CN{{\\cal N}}\t\\def\\CO{{\\cal O})\n\\end{equation}\nwhere ${\\cal M}}\t\\def\\CN{{\\cal N}}\t\\def\\CO{{\\cal O}$ is the scattering amplitude and $\\mathcal{I}$ represents\nthe available phase space for two-body scattering. The argument\nof the $\\log$ can be identified as the S-matrix function. \nIn the unitary limit, \n\\begin{equation}\n\\label{CMunit}\n{\\cal M}}\t\\def\\CN{{\\cal N}}\t\\def\\CO{{\\cal O} = \\frac{2i}{\\mathcal{I}} = \\frac{16 \\pi i}{m |{\\bf k} - {\\bf k}'|},\n\\end{equation}\nand the S-matrix equals $-1$. The kernel becomes\n\\begin{equation}\n\\label{kernel}\nG({\\bf k} - {\\bf k}' ) = \\mp \\frac{8 \\pi^2}{m |{\\bf k} - {\\bf k}'|} , \n\\end{equation} \nwhere the $-$ sign corresponds to $g$ being just below the fixed point\n$g_*$, where the scattering length $a\\to +\\infty$ on the BEC side,\nwhereas the $+$ sign corresponds to $a\\to - \\infty$ on the BCS side. \nAs explained in the Introduction, we work on the BCS side.\n\nThe angular integrals in eq. (\\ref{yinteq}) are easily performed. 
\nDefining the dimensionless variable $\\kappa = {\\bf k}^2 \/ 2mT$, \nthe integral equation becomes \n\\begin{equation}\n\\label{ykappa}\ny (\\kappa) = 1 + 4 \\int_0^\\infty d\\kappa' \\[ \n\\Theta (\\kappa - \\kappa') \\sqrt{\\kappa'\/\\kappa} + \n\\Theta (\\kappa' - \\kappa) \\] \\frac{z}{e^{\\kappa'} - s z y(\\kappa')} \n\\end{equation}\nwhere $z = e^{\\mu\/T}$ is the fugacity and $\\Theta(\\kappa)$ is the standard\nstep function equal to 1 for $\\kappa > 0$, zero otherwise. \n\n\nFinally comparing with the definitions in the last section \nthe scaling function for the density and free energy are\n\\begin{equation}\n\\label{nhatsc}\nq (x) = \\frac{2}{\\sqrt\\pi} \\int_0^\\infty d\\kappa \\sqrt{\\kappa} \n \\frac{ y(\\kappa) z }{e^\\kappa - s y(\\kappa) z} \n\\end{equation}\nand \n\\begin{equation}\n\\label{cscale}\nc = \\frac{2}{\\sqrt{\\pi} \\zeta (5\/2) } \n\\int_0^\\infty d\\kappa \\sqrt{\\kappa} \\( -s \\log \\( 1- s z y(\\kappa)\n e^{-\\kappa} \\) \n- \\inv{2} \\frac{ z ( y(\\kappa) - 1 ) }{e^\\kappa - s zy(\\kappa)} \\)\n\\end{equation}\nThe ideal, free gas limit corresponds to $y=1$ \nwhere $q= s {\\rm Li}_{3\/2} (s z)$ and $c= s {\\rm Li}_{5\/2} (sz)\/ \\zeta(5\/2)$,\nwhere ${\\rm Li}$ is the polylogarithm. The BEC critical point of the\nideal gas occurs at $\\mu=0$, i.e. $q=\\zeta(3\/2)$. \n\n\n\n\nConsider now two-component fermions with the action (\\ref{fermionaction}). \nHere the phase space factor $\\mathcal{I}$ is doubled and since $G\\propto 1\/\\mathcal{I}$, \nthe kernels have an extra $1\/2$:\n\\begin{equation}\n\\label{Gbosferm}\nG_{\\rm fermi} = \\inv{2} G_{\\rm bose} \n\\end{equation}\nDue to the SU(2) symmetry, the two-component fermion reduces to\ntwo identical copies of the above 1-component expressions, with the\nmodification (\\ref{Gbosferm}). \n\n\n\n\n\n\n\n\n\n\n\n\n\n\\section{Analysis of fermions}\n\n\n\n Recall that for 2 component fermions,\nthe $4$ is replaced by $2$ in eq. (\\ref{ykappa}), and by the SU(2) \nsymmetry, the system reduces to two identical copies of the one-component \nequations of the last sections; below we present results for \na single component, it being implicit that the free energy and density\nare doubled. \n\nThe integral equation for $y(\\kappa)$, eq. (\\ref{ykappa}), can be\nsolved numerically by iteration.\n One first substitutes \n$y_0 = 1$ on the right hand side and this gives the approximation\n$y_1$ for $y$. One then substitutes $y_1$ on the right hand side\nto generate $y_2$, etc. For regions of $z$ where there are no\ncritical points, this procedure converges rapidly, and as little\nas 5 iterations are needed. For fermions, as one approaches \nzero temperature, i.e. $x$ large and positive, more iterations are needed\nfor convergence. The following results are based on 50 iterations. \n\nWhen $z\\ll 1$, $y \\approx 1$, and the properties of the free ideal \ngas are recovered, since the gas is very dilute. There are solutions\nto eq. (\\ref{ykappa}) for all $-\\infty < x < \\infty$. ($x=\\mu\/T$). \nThe scaling function $c$, and it's comparision with a free theory, \nare shown in Figure \\ref{cF} as a function of $x$.\n The corrections to the free \ntheory become appreciable when $x>-2$. At $x=0$:\n\\begin{equation}\n\\label{czero}\nc(0) = 0.880, ~~~~~~\\tilde{\\xi} = 0.884, \n\\end{equation}\ncompared to the free gas values of $c(0) = 0.646$ and $\\tilde{\\xi} =1$. 
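\n\nThe iterative procedure described above is straightforward to implement numerically. The following Python sketch is our own illustration (it is not part of the original analysis; the grid, the upper cut-off in $\kappa$ and the simple Riemann-sum quadrature are assumptions): it iterates eq. (\ref{ykappa}) starting from $y=1$ and then evaluates the scaling functions $q$ and $c$ of eqs. (\ref{nhatsc}) and (\ref{cscale}). As stated above, the prefactor of the integral is 4 for the one-component Bose gas and 2 for the two-component Fermi gas.\n\begin{verbatim}\nimport numpy as np\nfrom scipy.special import zeta\n\ndef solve_y(x, s=-1.0, pref=2.0, kmax=30.0, nk=400, iters=50):\n    # Iterate y(k) = 1 + pref * int dk' K(k,k') z/(exp(k') - s z y(k')),\n    # eq. (ykappa); s = -1 fermions, s = +1 bosons.\n    z = np.exp(x)                                  # fugacity\n    k = np.linspace(1e-4, kmax, nk)\n    dk = k[1] - k[0]\n    kk, kp = np.meshgrid(k, k, indexing='ij')\n    K = np.where(kp < kk, np.sqrt(kp / kk), 1.0)   # step-function kernel\n    y = np.ones_like(k)\n    for _ in range(iters):\n        y = 1.0 + pref * dk * K.dot(z / (np.exp(k) - s * z * y))\n    return k, y, z\n\ndef q_and_c(k, y, z, s=-1.0):\n    # Scaling functions q and c per component, eqs. (nhatsc) and (cscale).\n    w = np.sqrt(k) * (k[1] - k[0])\n    q = 2.0 / np.sqrt(np.pi) * np.sum(w * y * z / (np.exp(k) - s * y * z))\n    c = 2.0 / (np.sqrt(np.pi) * zeta(2.5)) * np.sum(\n        w * (-s * np.log(1.0 - s * z * y * np.exp(-k))\n             - 0.5 * z * (y - 1.0) / (np.exp(k) - s * z * y)))\n    return q, c\n\nk, y, z = solve_y(x=0.0)      # two-component fermions at mu/T = 0\nprint(q_and_c(k, y, z))       # compare with c(0) = 0.880 and q(0) = 1.18 quoted here\n\end{verbatim}\nWith a modest grid such a sketch reproduces the quoted values only approximately; the numbers given in the text should be regarded as the reference.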
\n\n\n\\begin{figure}[htb] \n\\begin{center}\n\\hspace{-15mm} \n\\psfrag{x}{$x=\\mu\/T$}\n\\psfrag{y}{$c$}\n\\psfrag{a}{$\\rm free$}\n\\includegraphics[width=10cm]{cF.eps} \n\\end{center}\n\\caption{$c(x)$ and its equivalent for a free theory as a function of\n$x=\\mu\/T$.} \n\\vspace{-2mm}\n\\label{cF} \n\\end{figure} \n\n\nThe scaling function $q$ for the density is shown as function of \n$x$ in Figure \\ref{qF}. Note that the density in the interacting case\nis always higher than for a free gas, due to the attractive interactions.\nAt $x=0$, $q(0) = 1.18$, whereas for a free gas $q=0.765$. \nAt low temperatures and high densities, $\\mu\/T \\gg 1$, the\noccupation numbers resemble that of a degenerate Fermi gas,\nas shown in Figure \\ref{fF}.\n\n \n\\begin{figure}[htb] \n\\begin{center}\n\\hspace{-15mm} \n\\psfrag{x}{$x=\\mu\/T$}\n\\psfrag{y}{$q$}\n\\psfrag{a}{$\\rm free$}\n\\includegraphics[width=10cm]{qF.eps} \n\\end{center}\n\\caption{$q(x)$ and its equivalent for a free theory as a function of\n$x=\\mu\/T$.} \n\\vspace{-2mm}\n\\label{qF} \n\\end{figure} \n\n\n\n\\begin{figure}[htb] \n\\begin{center}\n\\hspace{-15mm} \n\\psfrag{x}{$\\kappa = \\beta {\\bf k}^2 \/2m$}\n\\psfrag{y}{$f$}\n\\psfrag{a}{$x=15$}\n\\includegraphics[width=10cm]{fF.eps} \n\\end{center}\n\\caption{The occupation numbers as a function of $\\kappa$ for $x=5,10,15$.} \n\\vspace{-2mm}\n\\label{fF} \n\\end{figure} \n\n\n\n\nWhereas $c$ and $q$ are nearly featureless, other quantities \nseem to indicate a critical point, or phase transition, at \nlarge density. For instance, the entropy per particle \ndecreases with decreasing temperature up to $x < x_c \\approx 11.2$,\nas shown in Figure \\ref{snF}. Beyond this point the entropy per particle\nhas the unphysical behavior of increasing with temperature. \nA further indication that the region $x>x_c$ is unphysical is\nthat the specific heat per particle becomes negative, as shown in\nFigure \\ref{CVNF}. When $x\\ll 0$, $C_V\/N$ approaches the \nclassical value $3\/2$. This leads us to suggest a phase transition,\nat $x=x_c$, corresponding to the critical temperature \n$ T_c \/T_F \\approx 0.1$. As we will show, our analysis of\nthe viscosity to entropy-density ratio suggests a higher $T_c\/T_F$. \nThere have been numerous estimates of $T_c\/T_F$ based on various\napproximation schemes, mainly using Monte Carlo methods on the lattice\n\\cite{Perali, LeeShafer,Wingate,Bulgac,Drummond,Burovski}, \nquoting results for $T_c\/T_F$ between $0.05$ and $0.23$. The work\n\\cite{LeeShafer} puts an upper bound $T_c \/ T_F < 0.14$,\nand the most recent results of Burovski et. al. quote $T_c\/T_F =0.152(7)$.\nOur result is thus consistent with previous work. \nThe equation of state at this point follows from eq. (\\ref{eqnstate}):\n\\begin{equation}\n\\label{eqstF}\np = 4.95 n T\n\\end{equation}\n\n\n \n\n\n\n\n\n\n\\begin{figure}[htb] \n\\begin{center}\n\\hspace{-15mm} \n\\psfrag{x}{$x=\\mu\/T$}\n\\psfrag{y}{$s\/n$}\n\\includegraphics[width=10cm]{snF.eps} \n\\end{center}\n\\caption{Entropy per fermionic particle as a function of $x$.} \n\\vspace{-2mm}\n\\label{snF} \n\\end{figure} \n\n\n\n\n\n\n\\begin{figure}[htb] \n\\begin{center}\n\\hspace{-15mm} \n\\psfrag{x}{$x=\\mu\/T$}\n\\psfrag{y}{$C_V\/N$}\n\\includegraphics[width=10cm]{CVNF.eps} \n\\end{center}\n\\caption{Specific heat per particle as a function of $x$ for fermions.} \n\\vspace{-2mm}\n\\label{CVNF} \n\\end{figure} \n\n\n\nThe energy per particle, normalized to the Fermi energy $\\epsilon_F$,\ni.e. 
$E\/N \\epsilon_F = 3 \\xi \/5$, and the entropy per particle, \nare shown in Figures \\ref{xiF},\\ref{snFTTF} as a function of $T\/T_F$, where\n$k_B T_F = \\epsilon_F$. At high temperatures it matches that of a free\nFermi gas, in agreement with the Monte Carlo simulations in\n\\cite{Bulgac,Burovski}. Note that there is no sign of \npair-breaking at $T^*\/T_F = 0.5$ predicted in \\cite{Perali},\nand this also agrees with the Monte Carlo simulations.\n However at low temperatures in the vicinity\nof $T_c$, the agreement is not as good. This suggests our approximation\nis breaking down for very large $z$, i.e. the limit of zero temperature. \nThe same conclusion is reached by examining $\\mu\/\\epsilon_F$,\ndisplayed in Figure \\ref{muepF}, since the zero temperature \nepsilon expansion and Monte Carlo give\n $\\mu\/\\epsilon_F \\approx 0.4 - 0.5$\\cite{Son,Burovski}. \n\n\n\n\n\n\n\n\\begin{figure}[htb] \n\\begin{center}\n\\hspace{-15mm} \n\\psfrag{X}{$T\/T_F$}\n\\psfrag{Y}{$\\frac{E}{N \\epsilon_F}$}\n\\includegraphics[width=10cm]{xiF.eps} \n\\end{center}\n\\caption{Energy per particle normalized to $\\epsilon_F$ as a function of\n$T\/T_F$.} \n\\vspace{-2mm}\n\\label{xiF} \n\\end{figure} \n\n\n\n\n\\begin{figure}[htb] \n\\begin{center}\n\\hspace{-15mm} \n\\psfrag{X}{$T\/T_F$}\n\\psfrag{Y}{$s\/n$}\n\\includegraphics[width=10cm]{snFTTF.eps} \n\\end{center}\n\\caption{Entopy per particle as a function of\n$T\/T_F$.} \n\\vspace{-2mm}\n\\label{snFTTF} \n\\end{figure} \n\n\n\n\n\n\n\n\n\n\n\n\n\\begin{figure}[htb] \n\\begin{center}\n\\hspace{-15mm}\n\\psfrag{X}{$T\/T_F$}\n\\psfrag{Y}{$\\mu \/ \\epsilon_F$}\n\\includegraphics[width=10cm]{muepF.eps} \n\\end{center}\n\\caption{Chemical potential normalized to $\\epsilon_F$ as a function of\n$T\/T_F$.} \n\\vspace{-2mm}\n\\label{muepF} \n\\end{figure} \n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\\section{Analysis of bosons} \n\n\n\n\n\n\nFor bosons we again solved the integral equation (\\ref{ykappa}) \nby iteration, starting from $y=1$. Since the occupation numbers decay\nquickly as a function of $\\kappa$, we introduced a cut-off $\\kappa < 10$. \nFor $x$ less than approximately $-2$, the gas behaves nearly classically. \n\nThe main feature of the solution to the integral equation is that for\n$x>x_c \\equiv -1.2741$, there is no solution that is smoothly connected to\nthe classical limit $x\\to -\\infty$. Numerically, when there is no solution\nthe iterative procedure fails to converge. \nThe free energy scaling function is plotted in Figure \\ref{cB}. \nNote that $c<1$, where $c=1$ is the free field value. \n We thus take the physical region to \nbe $x< x_c$. We find strong evidence that the gas undergoes BEC \nat $x=x_c$. In Figure \\ref{epofx}, we plot $\\varepsilon ({\\bf k}=0 )$ as a function of\n$x$, and ones sees that it goes to zero at $x_c$. This implies the occupation\nnumber $f$ diverges at ${\\bf k} =0$ at this critical point. \nOne clearly sees this behavior in Figure \\ref{fB}. 
\n\n\n\n\n\n\\begin{figure}[htb] \n\\begin{center}\n\\hspace{-15mm}\n\\psfrag{x}{$x=\\mu\/T $}\n\\psfrag{y}{$c$}\n\\psfrag{a}{$\\rm ideal$}\n\\includegraphics[width=10cm]{cB.eps} \n\\end{center}\n\\caption{The free-energy scaling function $c$ as a function of $\\mu\/T$ compared \nto the ideal gas case.} \n\\vspace{-2mm}\n\\label{cB} \n\\end{figure} \n\n\n\n\n\n\\begin{figure}[htb] \n\\begin{center}\n\\hspace{-15mm}\n\\psfrag{x}{$x=\\mu\/T$}\n\\psfrag{y}{$\\varepsilon ({\\bf k}=0)\/T$}\n\\psfrag{a}{$x_c$}\n\\includegraphics[width=10cm]{epofx.eps} \n\\end{center}\n\\caption{The pseudo-energy $\\varepsilon$ at ${\\bf k} =0$ as a function of $x=\\mu\/T$.} \n\\vspace{-2mm}\n\\label{epofx} \n\\end{figure} \n\n\n\\begin{figure}[htb] \n\\begin{center}\n\\hspace{-15mm}\n\\psfrag{x}{$\\kappa = \\beta {\\bf k}^2 \/2m $}\n\\psfrag{y}{$f$}\n\\psfrag{a}{$x_c$}\n\\includegraphics[width=10cm]{fB.eps} \n\\end{center}\n\\caption{The occupation number $f(\\kappa)$ for $x=-1.275$ and $x_c = -1.2741$.} \n\\vspace{-2mm}\n\\label{fB} \n\\end{figure} \n\nThe compressibility is shown in Figure \\ref{compressB}, and diverges at\n$x_c$, again consistent with BEC. \nWe thus conclude that there is a critical point at $x_c$ which is a \nstrongly interacting, scale invariant version of the ideal BEC. \nIn terms of the density, the critical point is:\n\\begin{equation}\n\\label{xcb}\nn_c \\lambda_T^3 = 1.325, ~~~~~~~~~( \\mu\/T = x_c = -1.2741 )\n\\end{equation}\nThe negative value of the chemical potential is consistent with the\neffectively attractive interactions. The above should be compared with \nthe ideal BEC of the free theory, where $x_c = 0$ and \n$n_c \\lambda_T^3 = \\zeta (3\/2) = 2.61$, which is higher by a factor of 2. \nAt the critical point the equation of state is \n\\begin{equation}\n\\label{eqstB}\np = 0.318 n T\n\\end{equation}\ncompared to $p = 0.514 nT$ for the free case. ($0.514 = \\zeta(5\/2)\/ \\zeta (3\/2)$). \n\nA critical exponent $\\nu$ characterizing the diverging compressibility can be defined as \n\\begin{equation}\n\\label{expnu}\n\\kappa \\sim (T-T_c)^{-\\nu}\n\\end{equation}\nA log-log plot of the compressibility versus $T-T_c$ shows an approximately\nstraight line, and we obtain $\\nu \\approx 0.69$. This should be compared with\nBEC in an ideal gas, where $\\nu \\approx 1.0$. Clearly the unitary gas version \nof BEC is in a different universality class. \n\n\n\n\\begin{figure}[htb] \n\\begin{center}\n\\hspace{-15mm}\n\\psfrag{x}{$x=\\mu\/T $}\n\\psfrag{y}{$\\kappa T (mT)^{3\/2}$}\n\\includegraphics[width=10cm]{compressB.eps} \n\\end{center}\n\\caption{The compressibility $\\kappa $ as a function of $\\mu\/T$.} \n\\vspace{-2mm}\n\\label{compressB} \n\\end{figure} \n\n\n\n\n\n\n\n\n\n\nThe energy per particle scaling function $\\tilde{\\xi}$ at the critical point is\n$\\tilde{\\xi} (x_c) = 0.281$ compared to $0.453 = (2 \\sqrt{2} -2)\/(2 \\sqrt{2} -1)$ \nfor the free case. The entropy per particle and specific heat per particle\nare plotted in Figures \\ref{snB}, \\ref{CVNB} as a function of $T\/T_c$. \nAt large temperatures, as expected, $C_V\/N = 3\/2$, i.e. the classical value. \nIt increases as $T$ is lowered; however, in contrast to the ideal gas case,\nit then begins to decrease as $T$ approaches $T_c$. 
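\n\nThe critical exponent quoted above follows from a straight-line fit to the log-log plot. A minimal sketch of such an extraction is given below (our own illustration; it assumes a routine for $q(x)$ built, for instance, from the solver sketched earlier, arrays of temperatures and compressibilities near $T_c$ at fixed density, and a freely chosen finite-difference step and fitting window). The text obtains $\\nu \\approx 0.69$, compared with $\\nu \\approx 1.0$ for the ideal gas.\n\\begin{verbatim}\nimport numpy as np\n\ndef kappa_dimensionless(q_of_x, x, h=1e-3):\n    # kappa * T * (m T / 2 pi)^(3/2) = q'(x) / q(x)^2, cf. eq. (comp.2);\n    # q_of_x is assumed to come from the iterative solver, q' by finite difference.\n    qp = (q_of_x(x + h) - q_of_x(x - h)) / (2.0 * h)\n    return qp / q_of_x(x) ** 2\n\ndef fit_nu(T, kappa, Tc):\n    # kappa ~ (T - Tc)^(-nu): the slope of log kappa versus log(T - Tc) is -nu\n    slope, _ = np.polyfit(np.log(T - Tc), np.log(kappa), 1)\n    return -slope\n\\end{verbatim}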
\n\n\n\n\n\n\n\\begin{figure}[htb] \n\\begin{center}\n\\hspace{-15mm}\n\\psfrag{x}{$T\/T_c$}\n\\psfrag{y}{$s\/n$}\n\\includegraphics[width=10cm]{snB.eps} \n\\end{center}\n\\caption{The entropy per particle as a function of $T\/T_c$.} \n\\vspace{-2mm}\n\\label{snB} \n\\end{figure} \n\n\n\n\\begin{figure}[htb] \n\\begin{center}\n\\hspace{-15mm}\n\\psfrag{x}{$T\/T_c$}\n\\psfrag{y}{$C_V\/N $}\n\\includegraphics[width=10cm]{CVNB.eps} \n\\end{center}\n\\caption{The specific heat per particle as a function of $T\/T_c$.} \n\\vspace{-2mm}\n\\label{CVNB} \n\\end{figure} \n\n\n\n\n\n\n\n\n\n\n\n\n\n\\section{Entropy to viscosity ratio}\n\n\nConsider first a single component gas. \nIn kinetic theory, the shear viscosity can be expressed as \n\\begin{equation}\n\\label{shear.1}\n\\eta = \\inv{3} n \\bar{v} m \\ell_{\\rm free} \n\\end{equation}\nwhere $\\bar{v}$ is the average speed and $\\ell_{\\rm free} $ is the mean\nfree path. The mean free path is \n$\\ell_{\\rm free} = 1\/(\\sqrt{2} n \\sigma)$ where $\\sigma$ is the total\ncross-section. (The $\\sqrt{2}$ comes from the ratio of the mean speed\nto the mean relative speed\\cite{Reif}.) In the unitary limit\nthe S-matrix $S=-1$, which implies the scattering amplitude in\neq. (\\ref{CMunit}). This leads to \n\\begin{equation}\n\\label{shear.2}\n\\sigma = \\frac{ m^2 |{\\cal M}}\t\\def\\CN{{\\cal N}}\t\\def\\CO{{\\cal O}|^2}{4\\pi} \n= \\frac{16 \\pi}{|{\\bf k}|^2}\n\\end{equation}\nwhere $|k|$ is the momentum of one of the particles in the center of mass\nframe, i.e. $|{\\bf k}_1 - {\\bf k}_2| = 2 |{\\bf k}|$. This gives\n\\begin{equation}\n\\label{shear.3}\n\\eta = \\frac{m^3 \\bar{v}^3}{48 \\sqrt{2} \\pi} \n\\end{equation}\n\nSince the equation (\\ref{energyden}) is the same relation \nbetween the pressure and energy of a free gas, and the pressure\nis due to the kinetic energy, this implies\n\\begin{equation}\n\\label{shear.4}\n\\inv{2} m \\bar{v}^2 = E\/N = \\frac{3}{2} \\frac{c}{c'} T \n\\end{equation}\nSince the entropy density $s= - \\partial \\CF \/ \\partial T$, one finally has\n\\begin{equation}\n\\label{etas}\n\\frac{\\eta}{s} = \\frac{\\sqrt{3\\pi}}{8 \\zeta (5\/2)} \n \\( \\frac{c}{c'} \\)^{3\/2} \\inv{ 5 c \/2 - x c' } \n\\end{equation}\n\n\nFor two-component fermions, the available phase space $\\mathcal{I}$ is doubled.\nAlso, spin up particles only scatter with spin down. This implies\n$\\eta$ is $8$ times the above expression. Since the entropy density is doubled,\nthis implies that $\\eta\/s$ is $4$ times the expression eq. (\\ref{etas}). \n\n\nThe ratio $\\eta\/s$ for fermions as a function of $T\/T_F$ is shown in\nFigure \\ref{etasF}, and is in good agreement both quantitatively and\nqualitatively with the experimental data \nsummarized in \\cite{Schafer}. The lowest value occurs at $x=2.33$, \nwhich corresponds to $T\/T_F = 0.28$, and \n\\begin{equation}\n\\label{etasFlim}\n\\frac{\\eta}{s} > 4.72 \\frac{\\hbar}{4 \\pi k_B}\n\\end{equation}\nThe experimental data\nhas a minimum that is about $6$ times this bound. \nIn the free fermion theory the minimum occurs at $\\mu\/T \\approx 2.3$,\nwhich gives $\\eta\/s > 7.2 \\hbar \/ 4 \\pi k_B$. \n\n\n\n \n\\begin{figure}[htb] \n\\begin{center}\n\\hspace{-15mm} \n\\psfrag{X}{$T\/T_F$}\n\\psfrag{Y}{$\\eta \/ s$}\n\\includegraphics[width=10cm]{etasF.eps} \n\\end{center}\n\\caption{The viscosity to entropy-density ratio as a function of\n$T\/T_F$ for fermions. 
The horizontal line is $1\/4\\pi$.} \n\\vspace{-2mm}\n\\label{etasF} \n\\end{figure} \n\n\n\n\n\n\n\nFor bosons, the ration $\\eta\/s$ is plotted in Figure \\ref{etasB} as\na function of $T\/T_c$. One sees that it has a minimum at the critical point,\nwhere \n\\begin{equation}\n\\label{etasBlim}\n\\frac{\\eta}{s} > 1.26 \\frac{\\hbar}{4 \\pi k_B}\n\\end{equation}\n Thus the bosonic gas \nat the unitary critical point is a more perfect fluid than that of fermions. \nOn the other hand, the ideal Bose gas at the critical point has a lower value:\n\\begin{equation}\n\\label{etasBfree}\n\\frac{\\eta}{s} \\Bigg\\vert_{\\rm ideal} = \\frac{ \\sqrt{ 3 \\pi \\zeta (5\/2)}}\n{20 \\zeta (3\/2)^{3\/2}} = 0.53 \\frac{\\hbar}{4 \\pi k_B}\n\\end{equation}\n\n\n\n\\begin{figure}[htb] \n\\begin{center}\n\\hspace{-15mm} \n\\psfrag{x}{$T\/T_c$}\n\\psfrag{y}{$\\eta \/ s$}\n\\includegraphics[width=10cm]{etasB.eps} \n\\end{center}\n\\caption{The viscosity to entropy-density ratio as a function of\n$T\/T_c$ for bosons. The horizontal line is $1\/4\\pi$.} \n\\vspace{-2mm}\n\\label{etasB} \n\\end{figure} \n\n\n\n\n\n\n\n\n\n\n\n\\section{Conclusions}\n\n\n\nWe presented a novel analytic treatment of unitary Bose and Fermi gases\nat finite temperature and chemical potential \nusing a new formulation of statistical mechanics\nbased on the exact, zero temperature, 2-body scattering. \nOur results appear to be consistent with lattice Monte Carlo methods. \nAll of the thermodynamic functions, such as entropy per particle, energy\nper particle, specific heat, compressibility, and viscosity \nare readily calculated once one numerically solves the integral equation\nfor the pseudo-energy. \n\nFor fermions, our 2-body approximation is good if the temperatures are not\ntoo low. We estimated $T_c\/T_F \\approx 0.1$, where the critical \npoint occurs at $\\mu\/T \\approx 11.2$. For bosons we presented\nevidence for a strongly interacting version of BEC at the critical \npoint $n \\lambda_T^3 \\approx 1.3$, corresponding to \n$\\mu \/ T = -1.27$. \n\n\n\n\n\n\n\\section{Acknowledgments}\n\n\nWe wish to thank Erich Mueller for helpful discussions. \n This work is supported by the National Science Foundation\nunder grant number NSF-PHY-0757868. \n\n\n\n\n\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\n\n\\IEEEPARstart{T}o perform diagnosis and prognosis of cardiovascular disease (CVD) medical experts depend on the reliable quantification of cardiac function \\cite{white1987left}. Cardiac magnetic resonance imaging (CMRI) is currently considered the reference standard for quantification of ventricular volumes, mass and function \\cite{grothues2002comparison}. Short-axis CMR imaging, covering the entire left and right ventricle (LV resp. RV) is routinely used to determine quantitative parameters of both ventricle's function. This requires manual or semi-automatic segmentation of corresponding cardiac tissue structures for end-diastole (ED) and end-systole (ES). \n\nExisting semi-automated or automated segmentation methods for CMRIs regularly require (substantial) manual intervention caused by lack of robustness. Manual or semi-automatic segmentation across a complete cardiac cycle, comprising \\num{20} to \\num{40} phases per patient, enables computation of parameters quantifying cardiac motion with potential diagnostic implications but due to the required workload, this is practically infeasible. 
Consequently, segmentation is often performed at end-diastole and end-systole precluding comprehensive analysis over complete cardiac cycle. \n\nRecently\\cite{litjens2017survey, leiner2019machine}, deep learning segmentation methods have shown to outperform traditional approaches such as those exploiting level set, graph-cuts, deformable models, cardiac atlases and statistical models \\cite{petitjean2011review, peng2016review}. However, recent comparison of a number of automatic methods showed that even the best performing methods generated anatomically implausible segmentations in more than 80\\% of the CMRIs \\cite{bernard2018deep}. Such errors do not occur when experts perform segmentation. To achieve acceptance in clinical practice these shortcomings of the automatic approaches need to be alleviated by further development. This can be achieved by generating more accurate segmentation result or by development of approaches that automatically detect segmentation failures. \n\nIn manual and automatic segmentation of short-axis CMRI, largest segmentation inaccuracies are typically located in the most basal and apical slices due to low tissue contrast ratios \\cite{suinesiaputra2015quantification}. To increase segmentation performance, several methods have been proposed \\cite{tan2018fully, zheng20183, savioli2018automated, bai2018automated}. Tan et al. \\cite{tan2018fully} used a convolutional neural network (CNN) to regress anatomical landmarks from long-axis views (orthogonal to short-axis). They exploited the landmarks to determine most basal and apical slices in short-axis views and thereby constraining the automatic segmentation of CMRIs. This resulted in increased robustness and performance. Other approaches leverage spatial \\cite{zheng20183} or temporal \\cite{savioli2018automated, bai2018automated} information to increase segmentation consistency and performance in particular in the difficult basal and apical slices.\n\nAn alternative approach to preventing implausible segmentation results is by incorporating knowledge about the highly constrained shape of the heart. Oktay et al. \\cite{oktay2017anatomically} developed an anatomically constrained neural network (NN) that infers shape constraints using an auto-encoder during segmentation training. Duan et al. \\cite{duan2019automatic} developed a deep learning segmentation approach for CMRIs that used atlas propagation to explicitly impose a shape refinement. This was especially beneficial in the presence of image acquisition artifacts. Recently, Painchaud et al. \\cite{painchaud2019cardiac} developed a post-processing approach to detect and transform anatomically implausible cardiac segmentations into valid ones by defining cardiac anatomical metrics. Applying their approach to various state-of-the-art segmentation methods the authors showed that the proposed method provides strong anatomical guarantees without hampering segmentation accuracy. \n\n\\begin{figure*}[t]\n\t\\center\n\t\\includegraphics[width=6in, height=3.5in]{figures\/overview_approach.pdf}%\n\t\n\t\\caption{Overview of proposed two step approach. Step 1 (left): Automatic CNN segmentation of CMR images combined with assessment of segmentation uncertainties. Step 2 (right): Differentiate tolerated errors from segmentation failures (to be detected) using distance transform maps based on reference segmentations. Detection of image regions containing segmentation failures using CNN which takes CMR images and segmentation uncertainties as input. 
Manual corrected segmentation failures (green) based on detected image regions.}\n\t\\label{fig_overview_method}\n\\end{figure*}\n\n\nA different research trend focuses on detecting segmentation failures, i.e. on automated quality control for image segmentation. These methods can be divided in those that predict segmentation quality using image at hand or corresponding automatic segmentation result, and those that assess and exploit predictive uncertainties to detect segmentation failure. \n\nRecently, two methods were proposed to detect segmentation failures in large-scale cardiac MR imaging studies to remove these from subsequent analysis\\cite{alba2018automatic, robinson2019automated}. Robinson et al. \\cite{robinson2019automated} using the approach of Reverse Classification Accuracy (RCA) \\cite{valindria2017reverse} predicted CMRI segmentation metrics to detect failed segmentations. They achieved good agreement between predicted metrics and visual quality control scores. Alba et al. \\cite{alba2018automatic} used statistical, pattern and fractal descriptors in a random forest classifier to directly detect segmentation contour failures without intermediate regression of segmentation accuracy metrics. \n\nMethods for automatic quality control were also developed for other applications in medical image analysis. Frounchi et al. \\cite{frounchi2011automating} extracted features from the segmentation results of the left ventricle in CT scans. Using the obtained features the authors trained a classifier that is able to discriminate between consistent and inconsistent segmentations. To distinguish between acceptable and non-acceptable segmentations Kohlberger el al. \\cite{kohlberger2012evaluating} proposed to directly predict multi-organ segmentation accuracy in CT scans using a set of features extracted from the image and corresponding segmentation. \n\nA number of methods aggregate voxel-wise uncertainties into an overall score to identify insufficiently accurate segmentations. For example, Nair et al. \\cite{nair2018exploring} computed an overall score for target segmentation structure from voxel-wise predictive uncertainties. The method was tested for detection of Multiple Sclerosis in brain MRI. The authors showed that rejecting segmentations with high uncertainty scores led to increased detection accuracy indicating that correct segmentations contain lower uncertainties than incorrect ones. Similarly, to assess segmentation quality of brain MRIs Jungo et al. \\cite{jungo2018uncertainty} aggregated voxel-wise uncertainties into a score per target structure\nand showed that the computed uncertainty score enabled identification of erroneous segmentations.\n\nUnlike approaches evaluating segmentation directly, several methods use predictive uncertainties to predict segmentation metrics and thereby evaluate segmentation performance \\cite{roy2019bayesian, devries2018leveraging}. For example, Roy et al. \\cite{roy2019bayesian} aggregated voxel-wise uncertainties into four scores per segmented structure in brain MRI. The authors showed that computed scores can be used to predict the Intersection over Union and hence, to determine segmentation accuracy. \nSimilar idea was presented by DeVries et al. \\cite{devries2018leveraging} that predicted segmentation accuracy per patient using an auxiliary neural network that leverages the dermoscopic image, automatic segmentation result and obtained uncertainties. 
The researchers showed that a predicted segmentation accuracy is useful for quality control.\n\nWe build on our preliminary work where automatic segmentation of CMR images using a dilated CNN was combined with assessment of two measures of segmentation uncertainties \\cite{sander2019towards}. For the first measure the multi-class entropy per voxel (entropy maps) was computed using the output distribution. For the second measure Bayesian uncertainty maps were acquired using Monte Carlo dropout (MC-dropout) \\cite{gal2016dropout}. In \\cite{sander2019towards} we showed that the obtained uncertainties almost entirely cover the regions of incorrect segmentation i.e. that uncertainties are calibrated. In the current work we extend our preliminary research in two ways. First, we assess impact of CNN architecture on the segmentation performance and calibration of uncertainty maps by evaluating three existing state-of-the-art CNNs. Second, we employ an auxiliary CNN (detection network) that processes a cardiac MRI and corresponding spatial uncertainty map (Entropy or Bayesian) to automatically detect segmentation failures. We differentiate errors that may be within the range of inter-observer variability and hence do not necessarily require correction (tolerated errors) from the errors that an expert would not make and hence require correction (segmentation failures). Given that overlap measures do not capture fine details of the segmentation results and preclude us to differentiate two types of segmentation errors, in this work, we define segmentation failure using a metric of boundary distance. In \\cite{sander2019towards} we found that degree of calibration of uncertainty maps is dependent on the loss function used to train the CNN. Nevertheless, in the current work we show that uncalibrated uncertainty maps are useful to detect local segmentation failures. \t\nIn contrast to previous methods that detect segmentation failure per-patient or per-structure\\cite{roy2019bayesian, devries2018leveraging}, we propose to detect segmentation failures per image region. We expect that inspection and correction of segmentation failures using image regions rather than individual voxels or images would simplify correction process. To show the potential of our approach and demonstrate that combining automatic segmentation with manual correction of the detected segmentation failures per region results in higher segmentation performance we performed two additional experiments. In the first experiment, correction of detected segmentation failures was simulated in the complete data set. In the second experiment, correction was performed by an expert in a subset of images. Using publicly available set of CMR scans from MICCAI 2017 ACDC challenge \\cite{bernard2018deep}, the performance was evaluated before and after simulating the correction of detected segmentation failures as well as after manual expert correction.\n\n\\section{Data}\n\nIn this study data from the MICCAI \\num{2017} Automated Cardiac Diagnosis Challenge (ACDC) \\cite{bernard2018deep} was used. The dataset consists of cardiac cine MR images (CMRIs) from 100 patients uniformly distributed over normal cardiac function and four disease groups: dilated cardiomyopathy, hypertrophic cardiomyopathy, heart failure with infarction, and right ventricular abnormality. Detailed acquisition protocol is described by Bernard et al.~\\cite{bernard2018deep}. 
Briefly, short-axis CMRIs were acquired with two MRI scanners of different magnetic strengths (\\num{1.5} and \\num{3.0} T). Images were made during breath hold using a conventional steady-state free precession (SSFP) sequence. CMRIs have an in-plane resolution ranging from \\num{1.37} to \\SI{1.68}{\\milli\\meter} (average reconstruction matrix \\num{243} $\\times$ \\num{217} voxels) with slice spacing varying from \\num{5} to \\SI{10}{\\milli\\meter}. Per patient 28 to 40 volumes are provided covering partially or completely one cardiac cycle. Each volume consists of on average ten slices covering the heart. Expert manual reference segmentations are provided for the LV cavity, RV endocardium and LV myocardium (LVM) for all CMRI slices at ED and ES time frames. To correct for intensity differences among scans, voxel intensities of each volume were scaled to the [\\num{0.0}, \\num{1.0}] range using the minimum and maximum of the volume. Furthermore, to correct for differences in-plane voxel sizes, image slices were resampled to \\num{1.4}$\\times\\SI{1.4}{\\milli\\meter}^2$. \n\n\\begin{figure*}[!t]\n\t\\captionsetup[subfigure]{justification=centering}\n\t\\centering\n\n\t\\subfloat[]{\\includegraphics[width=5in]{figures\/qualitative\/auto\/patient099_slice02_ES_emap_auto.pdf}%\n\t\t\\label{fig_seg_qual_example1}}\n\t\n\t\\subfloat[]{\\includegraphics[width=5in]{figures\/qualitative\/auto\/patient097_slice00_ES_emap_auto.pdf}%\n\t\t\\label{fig_seg_qual_example2}}\n\t\n\t\\caption{Examples of automatic segmentations generated by different segmentation models for two cardiac MRI scans (rows) at ES at the base of the heart.}\n\t\\label{fig_seg_qualitative_results}\n\\end{figure*}\n\n\\section{Methods}\n\nTo investigate uncertainty of the segmentation, anatomical structures in CMR images are segmented using a CNN. To investigate whether the approach generalizes to different segmentation networks, three state-of-the-art CNNs were evaluated. For each segmentation model two measures of predictive uncertainty were obtained per voxel. Thereafter, to detect and correct local segmentation failures an auxiliary CNN (detection network) that analyzes a cardiac MRI was used. Finally, this leads to the uncertainty map allowing detection of image regions that contain segmentation failures. Figure~\\ref{fig_overview_method} visualizes this approach.\n\n\n\\subsection{Automatic segmentation of cardiac MRI}\n\nTo perform segmentation of LV, RV, and LVM in cardiac MR images i.e. \\num{2}D CMR scans, three state-of-the-art CNNs are trained. Each of the three networks takes a CMR image as input and has four output channels providing probabilities for the three cardiac structures (LV, RV, LVM) and background. Softmax probabilities are calculated over the four tissue classes. Patient volumes at ED and ES are processed separately. During inference the \\num{2}D automatic segmentation masks are stacked into a \\num{3}D volume per patient and cardiac phase. After segmentation, the largest \\num{3}D connected component for each class is retained and volumes are resampled to their original voxel resolution. Segmentation networks differ substantially regarding architecture, number of parameters and receptive field size. To assess predictive uncertainties from the segmentation models \\textit{Monte Carlo dropout} (MC-dropout) introduced by Gal \\& Ghahramani \\cite{gal2016dropout} is implemented in every network. 
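A minimal sketch of this inference-time post-processing is given below; function and variable names are illustrative and the resampling back to the original voxel resolution is omitted:
\begin{verbatim}
# Stack per-slice softmax outputs into a 3D volume and keep only the
# largest 3D connected component per foreground class.
import numpy as np
from scipy import ndimage

def postprocess_volume(softmax_slices):
    # softmax_slices: list of [4, H, W] arrays, one per short-axis slice
    labels = np.stack([s.argmax(axis=0) for s in softmax_slices])
    for c in (1, 2, 3):                      # foreground classes (ordering illustrative)
        mask = labels == c
        comps, n = ndimage.label(mask)       # 3D connected components
        if n > 1:
            sizes = ndimage.sum(mask, comps, index=range(1, n + 1))
            keep = int(np.argmax(sizes)) + 1
            labels[mask & (comps != keep)] = 0
    return labels
\end{verbatim}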
The following three segmentation networks were evaluated: Bayesian Dilated CNN, Bayesian Dilated Residual Network, Bayesian U-net.\n\n\\vspace{1ex}\n\\noindent \\textbf{Bayesian Dilated CNN (DN)}: The Bayesian DN architecture comprises a sequence of ten convolutional layers. Layers \\num{1} to \\num{8} serve as feature extraction layers with small convolution kernels of size \\num{3}$\\times$\\num{3} voxels. No padding is applied after convolutions. The number of kernels increases from \\num{32} in the first eight layers, to \\num{128} in the final two fully connected classification layers, implemented as \\num{1}$\\times$\\num{1} convolutions. The dilation level is successively increased between layers \\num{2} and \\num{7} from \\num{2} to \\num{32} which results in a receptive field for each voxel of \\num{131}$\\times$\\num{131} voxels, or \\num{18.3}$\\times$ $\\SI{18.3}{\\centi\\meter}^2$. All trainable layers except the final layer use rectified linear activation functions (ReLU). To enhance generalization performance, the model uses batch normalization in layers \\num{2} to \\num{9}. In order to convert the original DN~\\cite{wolterink2017automatic} into a Bayesian DN dropout is added as the last operation in all but the final layer and \\num{10} percent of a layer's hidden units are randomly switched off. \n\n\\vspace{1ex}\n\\noindent \\textbf{Bayesian Dilated Residual Network (DRN)}: The Bayesian DRN is based on the original DRN from Yu et al. \\cite{yu2017dilated} for image segmentation. More specifically, the DRN-D-22\\cite{yu2017dilated} is used which consists of a feature extraction module with output stride eight followed by a classifier implemented as fully convolutional layer with \\num{1}$\\times$\\num{1} convolutions. Output of the classifier is upsampled to full resolution using bilinear interpolation. The convolutional feature extraction module comprises eight levels where the number of kernels increases from \\num{16} in the first level, to \\num{512} in the two final levels. The first convolutional layer in level \\num{1} uses \\num{16} kernels of size \\num{7}$\\times$\\num{7} voxels and zero-padding of size \\num{3}. The remaining trainable layers use small \\num{3}$\\times$\\num{3} voxel kernels and zero-padding of size \\num{1}. Level \\num{2} to \\num{4} use a strided convolution of size \\num{2}. To further increase the receptive field convolutional layers in level \\num{5}, \\num{6} and \\num{7} use a dilation factor of \\num{2}, \\num{4} and \\num{2}, respectively. Furthermore, levels \\num{3} to \\num{6} consist of two residual blocks. All convolutional layers of the feature extraction module are followed by batch normalization, ReLU function and dropout. Adding dropout and switching off \\num{10} percent of a layer's hidden units converts the original DRN~\\cite{yu2017dilated} into a Bayesian DRN.\n\n\\vspace{1ex}\n\\noindent \\textbf{Bayesian U-net (U-net)}: The standard architecture of the U-net~\\cite{ronneberger2015u} is used. The network is fully convolutional and consists of a contracting, bottleneck and expanding path. The contracting and expanding path each consist of four blocks i.e. resolution levels which are connected by skip connections. The first block of the contracting path contains two convolutional layers using a kernel size of \\num{3}$\\times$\\num{3} voxels and zero-padding of size \\num{1}. 
Downsampling of the input is accomplished by employing a max pooling operation in block \\num{2} to \\num{4} of the contracting path and the bottleneck using a convolutional kernel of size \\num{2}$\\times$\\num{2} voxels and stride \\num{2}. Upsampling is performed by a transposed convolutional layer in block \\num{1} to \\num{4} of the expanding path using the same kernel size and stride as the max pooling layers. Each downsampling and upsampling layer is followed by two convolutional layers using \\num{3}$\\times$\\num{3} voxel kernels with zero-padding size \\num{1}. The final convolutional layer of the network acts as a classifier and uses \\num{1}$\\times$\\num{1} convolutions to reduce the number of output channels to the number of segmentation classes. The number of kernels increases from \\num{64} in the first block of the contracting path to \\num{1024} in the bottleneck. In contrast, the number of kernels in the expanding path successively decreases from \\num{1024} to \\num{64}. In deviation to the standard U-net instance normalization is added to all convolutional layers in the contracting path and ReLU non-linearities are replaced by LeakyReLU functions because this was found to slightly improve segmentation performance. In addition, to convert the deterministic model into a Bayesian neural network dropout is added as the last operation in each block of the contracting and expanding path and \\num{10} percent of a layer's hidden units are randomly switched off.\n\n\\subsection{Assessment of predictive uncertainties} \\label{uncertainty_maps}\nTo detect failures in segmentation masks generated by CNNs in testing, spatial uncertainty maps of the obtained segmentations are generated. For each voxel in the image two measures of uncertainty are calculated. First, a computationally cheap and straightforward measure of uncertainty is the entropy of softmax probabilities over the four tissue classes which are generated by the segmentation networks. Using these, normalized entropy maps $\\bf E \\in [0, 1]^{H\\times W}$ (e-map) are computed where $H$ and $W$ denote the height and width of the original CMRI, respectively.\n\nSecond, by applying MC-dropout in testing, softmax probabilities with a number of samples $T$ per voxel are obtained. As an overall measure of uncertainty the mean standard deviation of softmax probabilities per voxel over all tissue classes $C$\\label{ref:maximum_variance} is computed\n\n\n\\begingroup\n\\small\n\\begin{align}\n\\textbf{B} (I)^{(x, y)} &= \\frac{1}{C} \\sum_{c=1}^{C} \\sqrt{\\frac{1}{T-1} \\sum_{t=1}^{T} \\big(p_t(I)^{(x, y, c)} - \\hat{\\mu}^{(x, y, c)} \\big)^2 } \\; ,\n\\end{align}\n\\endgroup\n\nwhere $\\textbf{B}(I)^{(x, y)} \\in [0, 1]$ denotes the normalized value of the Bayesian uncertainty map (b-map) at position $(x, y)$ in \\num{2}D slice $I$, $C$ is equal to the number of classes, $T$ is the number of samples and $p_t(I)^{(x, y, c)}$ denotes the softmax probability at position $(x, y)$ in image $I$ for class $c$. 
The predictive mean per class $\\hat{\\mu}^{(x, y, c)}$ of the samples is computed as follows:\n\n\\begingroup\n\\small\n\\begin{align}\n\\hat{\\mu}^{(x, y, c)} &= \\frac{1}{T} \\sum_{t=1}^{T} p_t(I)^{(x, y, c)} \\; .\n\\end{align}\n\\endgroup\n\nIn addition, the predictive mean per class is used to determine the tissue class per voxel.\n\n\\subsection{Calibration of uncertainty maps} \nIdeally, incorrectly segmented voxels as defined by the reference labels should be covered by higher uncertainties than correctly segmented voxels. In such a case the spatial uncertainty maps are perfectly calibrated. \\textit{Risk-coverage curves} introduced by Geifman et al.\\cite{geifman2017selective} visualize whether incorrectly segmented voxels are covered by higher uncertainties than those that are correctly segmented. Risk-coverage curves convey the effect of avoiding segmentation of voxels above a specific uncertainty value on the reduction of segmentation errors (i.e. risk reduction) while at the same time quantifying the voxels that were omitted from the classification task (i.e. coverage). \n\nTo generate risk-coverage curves first, each patient volume is cropped based on a minimal enclosing parallelepiped bounding box that is placed around the reference segmentations to reduce the number of background voxels. Note that this is only performed to simplify the analysis of the risk-coverage curves. Second, voxels of the cropped patient volume are ranked based on their uncertainty value in descending order. Third, to obtain uncertainty threshold values per patient volume the ranked voxels are partitioned into \\num{100} percentiles based on their uncertainty value. Finally, per patient volume each uncertainty threshold is evaluated by computing a coverage and a risk measure. Coverage is the percentage of voxels in a patient volume at ED or ES that is automatically segmented. Voxels in a patient volume above the threshold are discarded from automatic segmentation and would be referred to an expert. The number of incorrectly segmented voxels per patient volume is used as a measure of risk. Using bilinear interpolation risk measures are computed per patient volume between $[0, 100]$ percent.\n\n\n\\subsection{Detection of segmentation failures}\n\nTo detect segmentation failures uncertainty maps are used but direct application of uncertainties is infeasible because many correctly segmented voxels, such as those close to anatomical structure boundaries, have high uncertainty. Hence, an additional patch-based CNN (detection network) is used that takes a cardiac MR image together with the corresponding spatial uncertainty map as input. For each patch of \\num{8}$\\times$\\num{8} voxels the network generates a probability indicating whether it contains segmentation failure. In the following, the terms patch and region are used interchangeably.\n\nThe detection network is a shallow Residual Network (S-ResNet) \\cite{he2016deep} consisting of a feature extraction module with output stride eight followed by a classifier indicating the presence of segmentation failure. The first level of the feature extraction module consists of two convolutional layers. The first layer uses \\num{16} kernels of \\num{7}$\\times$\\num{7} voxels and zero-padding of size \\num{3} and second layer \\num{32} kernels of \\num{3}$\\times$\\num{3} voxels and zero-padding of \\num{1} voxel. 
Level \\num{2} to \\num{4} each consist of one residual block that contains two convolutional layers with \\num{3}$\\times$\\num{3} voxels kernels with zero-padding of size \\num{1}. The first convolutional layer of each residual block uses a strided convolution of \\num{2} voxels to downsample the input. All convolutional layers of the feature extraction module are followed by batch normalization and ReLU function. The number of kernels in the feature extraction module increases from \\num{16} in level \\num{1} to \\num{128} in level \\num{4}. The network is a \\num{2}D patch-level classifier and requires that the size of the two input slices is a multiple of the patch-size. \\label{patch_size} The final classifier consists of three fully convolutional layers, implemented as \\num{1}$\\times$\\num{1} convolutions, with \\num{128} feature maps in the first two layers. The final layer has two channels followed by a softmax function which indicates whether the patch contains segmentation failure. Furthermore, to regularize the model dropout layers ($p=0.5$) were added between the residual blocks and the fully convolutional layers of the classifier.\n\n\\begin{table*}\n\t\\caption{Segmentation performance of different combination of model architectures, loss functions and evaluation modes (without or with MC dropout enabled during testing) in terms of Dice coefficient (top) and Hausdorff distance (bottom) (mean $\\pm$ standard deviation). Each combination comprises a block of two rows. A row in which column \\textit{Uncertainty map for detection} indicates e- or b-map shows results for the combined segmentation and detection approach. Numbers accentuated in black\/bold are ranked first in the segmentation only task whereas numbers accentuated in red\/bold are ranked first in the combined segmentation \\& detection task. The last row states the performance of the winning model in the ACDC challenge (on \\num{100} patient images) \\cite{isensee2017automatic}. Number with asterisk indicates statistical significant at \\num{5}\\% level w.r.t. the segmentation-only approach. 
Best viewed in color.}\n\t\\label{table_overall_segmentation_performance}\n\t\\centering\n\t\\tiny\n\t\\subfloat[Dice coefficient]{\n\t\t\\begin{tabular}{| C{1.6cm} | C{0.8cm} | R{1.7cm} R{1.7cm} R{1.7cm} | R{1.7cm} R{1.7cm} R{1.7cm} | }\n\t\t\t\\hline\n\t\t\t& \\textbf{Uncertainty} & \\multicolumn{3}{c|}{\\textbf{End-diastole}} & \\multicolumn{3}{c|}{\\textbf{End-systole}} \\\\\n\t\t\t\\textbf{Model} & \\textbf{map for detection} & \\multicolumn{1}{l}{\\textbf{LV}} & \\multicolumn{1}{l}{\\textbf{RV}} & \\multicolumn{1}{l|}{\\textbf{LVM}} & \\multicolumn{1}{l}{\\textbf{LV}} & \\multicolumn{1}{l}{\\textbf{RV}} & \\multicolumn{1}{l|}{\\textbf{LVM}} \\\\ \n\t\t\t\\hline\n\t\t\t\\rowcolor{LightGreen}\n\t\t\tDN-Brier & & \\phantom{x}0.962$\\pm$0.02 & \\phantom{x}0.928$\\pm$0.04) & \\phantom{x}0.875$\\pm$0.03) & \\phantom{x}0.901$\\pm$0.11 & \\phantom{x}0.832$\\pm$0.10) & \\phantom{x}0.884$\\pm$0.04 \\\\\n\t\t\t& \\textbf{e-map} & *0.965$\\pm$0.01 & *0.949$\\pm$0.02 & *0.885$\\pm$0.03 & *0.937$\\pm$0.06 & *0.905$\\pm$0.05 & *0.909$\\pm$0.03 \\\\ \n\t\t\t\n\t\t\t\\hdashline[1pt\/2pt]\n\t\t\t\n\t\t\t\\rowcolor{LightCyan}\n\t\t\tDN-Brier+MC & & \\phantom{x}0.961$\\pm$0.02 & \\phantom{x}0.922$\\pm$0.04 & \\phantom{x}0.875$\\pm$0.04 & \\phantom{x}0.912$\\pm$0.08 & \\phantom{x}0.839$\\pm$0.11 & \\phantom{x}0.882$\\pm$0.04 \\\\ \n\t\t\t& \\textbf{b-map} & *0.966$\\pm$0.01 & *0.950$\\pm$0.01 & *0.886$\\pm$0.03 & \\textbf{\\textcolor{red}{*0.942}}$\\pm$0.03 & *0.916$\\pm$0.04 & *0.912$\\pm$0.03 \\\\ \n\t\t\t\\hdashline[5pt\/5pt]\n\t\t\t\n\t\t\t\\rowcolor{LightGreen}\n\t\t\tDN-soft-Dice & & \\phantom{x}0.960$\\pm$0.02 & \\phantom{x}0.921$\\pm$0.04 & \\phantom{x}0.870$\\pm$0.04 & \\phantom{x}0.909$\\pm$0.08 & \\phantom{x}0.812$\\pm$0.12 & \\phantom{x}0.879$\\pm$0.04 \\\\\n\t\t\t& \\textbf{e-map} & *0.965$\\pm$0.01 & *0.945$\\pm$0.02 & *0.879$\\pm$0.04 & *0.938$\\pm$0.03 & *0.891$\\pm$0.06 & *0.905$\\pm$0.03 \\\\ \n\t\t\t\\hdashline[1pt\/2pt]\n\t\t\t\n\t\t\t\\rowcolor{LightCyan}\n\t\t\tDN-soft-Dice+MC & & \\phantom{x}0.958$\\pm$0.02 & \\phantom{x}0.913$\\pm$0.05 & \\phantom{x}0.868$\\pm$0.04 & \\phantom{x}0.907$\\pm$0.07 & \\phantom{x}0.818$\\pm$0.12 & \\phantom{x}0.875$\\pm$0.04 \\\\\n\t\t\t& \\textbf{b-map} & *0.964$\\pm$0.01 & *0.944$\\pm$0.02 & *0.877$\\pm$0.04 & *0.939$\\pm$0.03 & *0.900$\\pm$0.05 & *0.904$\\pm$0.03 \\\\ \t\t\t\n\t\t\t\\hdashline[5pt\/5pt]\n\t\t\t\n\t\t\t\\rowcolor{LightGreen}\n\t\t\tDRN-CE & & \\phantom{x}0.961$\\pm$0.02 & \\phantom{x}0.929$\\pm$0.03 & \\phantom{x}0.878$\\pm$0.03 & \\phantom{x}0.912$\\pm$0.06 & \\phantom{x}0.850$\\pm$0.09 & \\phantom{x}0.891$\\pm$0.03 \\\\ \n\t\t\t& \\textbf{e-map} & \\phantom{x}0.964$\\pm$0.01 & *0.943$\\pm$0.02 & *0.886$\\pm$0.03 & *0.937$\\pm$0.03 & *0.899$\\pm$0.04 & *0.908$\\pm$0.03 \\\\\n\t\t\t\\hdashline[1pt\/2pt]\n\t\t\t\n\t\t\t\n\t\t\t\\rowcolor{LightCyan}\n\t\t\tDRN-CE+MC & & \\phantom{x}0.961$\\pm$0.02 & \\phantom{x}0.926$\\pm$0.03 & \\phantom{x}0.877$\\pm$0.03 & \\phantom{x}0.913$\\pm$0.06 & \\phantom{x}0.847$\\pm$0.10 & \\phantom{x}0.890$\\pm$0.03 \\\\ \n\t\t\t& \\textbf{b-map} & *0.965$\\pm$0.01 & *0.948$\\pm$0.01 & *0.887$\\pm$0.03 & *0.939$\\pm$0.03 & *0.911$\\pm$0.04 & *0.909$\\pm$0.03 \\\\ \n\t\t\t\\hdashline[5pt\/5pt]\n\t\t\t\n\t\t\t\n\t\t\t\\rowcolor{LightGreen}\n\t\t\tDRN-soft-Dice & & \\phantom{x}0.964$\\pm$0.01 & \\phantom{x}\\textbf{0.937}$\\pm$0.02 & \\phantom{x}0.888$\\pm$0.03 & \\phantom{x}\\textbf{0.919}$\\pm$0.06 & \\phantom{x}0.856$\\pm$0.09 & \\phantom{x}\\textbf{0.900}$\\pm$0.03 \\\\ \n\t\t\t& \\textbf{e-map} & 
\\phantom{x}0.967$\\pm$0.01 & *0.945$\\pm$0.02 & \\phantom{x}0.893$\\pm$0.03 & \\phantom{x}0.934$\\pm$0.04 & *0.892$\\pm$0.06 & *0.911$\\pm$0.03 \\\\ \n\t\t\t\\hdashline[1pt\/2pt]\n\t\t\t\n\t\t\t\n\t\t\t\\rowcolor{LightCyan}\n\t\t\tDRN-soft-Dice+MC & & \\phantom{x}0.963$\\pm$0.02 & \\phantom{x}0.935$\\pm$0.03 & \\phantom{x}0.886$\\pm$0.03 & \\phantom{x}0.921$\\pm$0.06 & \\phantom{x}\\textbf{0.857}$\\pm$0.09 & \\phantom{x}0.899$\\pm$0.03 \\\\ \n\t\t\t& \\textbf{b-map} & \\phantom{x}0.967$\\pm$0.01 & *0.947$\\pm$0.02 & \\phantom{x}0.893$\\pm$0.03 & *0.938$\\pm$0.03 & *0.907$\\pm$0.04 & *0.912$\\pm$0.03 \\\\\n\t\t\t\\hdashline[5pt\/5pt]\n\t\t\t\n\t\t\t\\rowcolor{LightGreen}\n\t\t\tU-net-CE & & \\phantom{x}0.962$\\pm$0.02 & \\phantom{x}0.923$\\pm$0.05 & \\phantom{x}0.878$\\pm$0.03 & \\phantom{x}0.907$\\pm$0.07 & \\phantom{x}0.840$\\pm$0.08 & \\phantom{x}0.885$\\pm$0.03 \\\\ \n\t\t\t& \\textbf{e-map} & \\phantom{x}0.966$\\pm$0.01 & *0.946$\\pm$0.02 & *0.890$\\pm$0.03 & *0.935$\\pm$0.04 & *0.901$\\pm$0.06 & *0.909$\\pm$0.03 \\\\ \n\t\t\t\n\t\t\t\\hdashline[1pt\/2pt]\n\t\t\t\n\t\t\t\\rowcolor{LightCyan}\n\t\t\tU-net-CE+MC & & \\phantom{x}0.962$\\pm$0.02 & \\phantom{x}0.926$\\pm$0.04 & \\phantom{x}0.879$\\pm$0.03 & \\phantom{x}0.909$\\pm$0.07 & \\phantom{x}0.849$\\pm$0.07 & \\phantom{x}0.887$\\pm$0.03 \\\\ \n\t\t\t& \\textbf{b-map} & \\phantom{x}0.967$\\pm$0.01 & \\textbf{\\textcolor{red}{*0.954}}$\\pm$0.02 & *0.893$\\pm$0.03 & *0.940$\\pm$0.04 & \\textbf{\\textcolor{red}{*0.920}}$\\pm$0.04 & \\textbf{\\textcolor{red}{*0.914}}$\\pm$0.03 \\\\\n\t\t\t\\hdashline[5pt\/5pt]\n\t\t\t\n\t\t\t\\rowcolor{LightGreen}\n\t\t\tU-net-soft-Dice & & \\phantom{x}\\textbf{0.965}$\\pm$0.02 & \\phantom{x}0.928$\\pm$0.04 & \\phantom{x}0.888$\\pm$0.03 & \\phantom{x}0.914$\\pm$0.08 & \\phantom{x}0.844$\\pm$0.09 & \\phantom{x}0.896$\\pm$0.03 \\\\ \n\t\t\t& \\textbf{e-map} & \\phantom{x}0.968$\\pm$0.01 & *0.943$\\pm$0.03 & *0.898$\\pm$0.03 & \\phantom{x}0.930$\\pm$0.05 & *0.886$\\pm$0.07 & *0.911$\\pm$0.03 \\\\ \n\t\t\t\\hdashline[1pt\/2pt]\n\t\t\t\n\t\t\t\\rowcolor{LightCyan}\n\t\t\tU-net-soft-Dice+MC & & \\phantom{x}\\textbf{0.965}$\\pm$0.02 & \\phantom{x}0.929$\\pm$0.04 & \\phantom{x}\\textbf{0.889}$\\pm$0.03 & \\phantom{x}0.911$\\pm$0.10 & \\phantom{x}0.845$\\pm$0.09 & \\phantom{x}0.897$\\pm$0.03 \\\\ \n\t\t\t& \\textbf{b-map} & \\phantom{x}\\textbf{\\textcolor{red}{0.968}}$\\pm$0.01 & *0.948$\\pm$0.03 & \\textbf{\\textcolor{red}{*0.900}}$\\pm$0.03 & \\phantom{x}0.928$\\pm$0.09 & *0.895$\\pm$0.06 & \\textbf{\\textcolor{red}{*0.914}}$\\pm$0.03 \\\\\n\t\t\t\n\t\t\t\\hdashline\n\t\t\tIsensee et al. 
& & \\phantom{x}0.966& \\phantom{x}0.941 & \\phantom{x}0.899 & \\phantom{x}0.924 & \\phantom{x}0.875\t& \\phantom{x}0.908 \\\\\n\t\t\t\n\t\t\t\\hline\n\t\t\t\n\t\t\\end{tabular}\n\t\t\\label{table_seg_perf_dsc}\n\t} \n\t\\vspace{13ex}\n\t\\centering\n\t\\tiny\n\t\\subfloat[Hausdorff Distance]{\n\t\t\\begin{tabular}{| C{1.6cm} | C{0.8cm} | R{1.7cm} R{1.7cm} R{1.7cm} | R{1.7cm} R{1.7cm} R{1.7cm} | }\n\t\t\t\\hline\n\t\t\t& \\textbf{Uncertainty}& \\multicolumn{3}{c|}{\\textbf{End-diastole}} & \\multicolumn{3}{c|}{\\textbf{End-systole}} \\\\\n\t\t\t\\textbf{Model} & \\textbf{map for detection} & \\multicolumn{1}{l}{\\textbf{LV}} & \\multicolumn{1}{l}{\\textbf{RV}} & \\multicolumn{1}{l|}{\\textbf{LVM}} & \\multicolumn{1}{l}{\\textbf{LV}} & \\multicolumn{1}{l}{\\textbf{RV}} & \\multicolumn{1}{l|}{\\textbf{LVM}} \\\\ \n\t\t\t\\hline\n\t\t\t\\rowcolor{LightGreen}\n\t\t\tDN-Brier & & \\phantom{x}6.7$\\pm$3.1 & \\phantom{x}13.5$\\pm$5.9 & \\phantom{x}10.2$\\pm$6.9 & \\phantom{x}10.7$\\pm$7.7 & \\phantom{x}16.7$\\pm$6.8 & \\phantom{x}12.3$\\pm$5.8 \\\\\n\t\t\t& \\textbf{e-map} & *5.7$\\pm$2.7 & *11.7$\\pm$5.2 & *\\phantom{x}8.3$\\pm$5.9 & *\\phantom{x}8.0$\\pm$6.5 & *14.2$\\pm$5.6 & *\\phantom{x}9.7$\\pm$5.0 \\\\\n\t\t\t\\hdashline[1pt\/2pt]\n\t\t\t\n\t\t\t\\rowcolor{LightCyan}\n\t\t\tDN-Brier+MC & & \\phantom{x}6.9$\\pm$3.3 & \\phantom{x}13.1$\\pm$5.2 & \\phantom{xx}9.9$\\pm$5.9 & \\phantom{xx}9.9$\\pm$5.7 & \\phantom{x}15.0$\\pm$6.1 & \\phantom{x}12.0$\\pm$5.2 \\\\\n\t\t\t& \\textbf{b-map} & *5.5$\\pm$2.6 & *10.6$\\pm$5.1 & *\\phantom{x}7.4$\\pm$4.2 & *\\phantom{x}7.5$\\pm$6.0 & *12.6$\\pm$5.6 & *\\phantom{x}8.8$\\pm$4.0 \\\\\n\t\t\t\\hdashline[5pt\/5pt]\n\t\t\t\n\t\t\t\\rowcolor{LightGreen}\n\t\t\tDN-soft-Dice & & \\phantom{x}7.1$\\pm$3.5 & \\phantom{x}14.8$\\pm$6.8 & \\phantom{x}11.0$\\pm$6.6 & \\phantom{x}10.2$\\pm$5.6 & \\phantom{x}17.7$\\pm$7.8 & \\phantom{x}12.9$\\pm$6.2 \\\\\n\t\t\t& \\textbf{e-map} & *5.6$\\pm$2.8 & *12.6$\\pm$5.5 & *\\phantom{x}8.6$\\pm$4.6 & *\\phantom{x}8.0$\\pm$5.0 & *14.6$\\pm$5.9 & *\\phantom{x}9.6$\\pm$4.5 \\\\\n\t\t\t\\hdashline[1pt\/2pt]\n\t\t\t\n\t\t\t\\rowcolor{LightCyan}\n\t\t\tDN-soft-Dice+MC & & \\phantom{x}7.7$\\pm$3.9 & \\phantom{x}14.4$\\pm$6.0 & \\phantom{x}10.5$\\pm$4.9 & \\phantom{x}10.1$\\pm$5.3 & \\phantom{x}17.2$\\pm$8.0 & \\phantom{x}12.5$\\pm$5.3 \\\\\n\t\t\t& \\textbf{b-map} & *6.3$\\pm$3.4 & *11.5$\\pm$4.0 & *\\phantom{x}8.6$\\pm$4.8 & *\\phantom{x}7.8$\\pm$4.6 & *13.6$\\pm$4.9 & *\\phantom{x}9.6$\\pm$4.7 \\\\\n\t\t\t\\hdashline[5pt\/5pt]\n\t\t\t\n\t\t\t\\rowcolor{LightGreen}\n\t\t\tDRN-CE & & \\phantom{x}5.5$\\pm$2.6 & \\phantom{x}11.7$\\pm$5.4 & \\phantom{xx}8.2$\\pm$6.2 & \\phantom{xx}9.1$\\pm$6.4 & \\phantom{x}13.7$\\pm$5.6 & \\phantom{xx}8.9$\\pm$5.3 \\\\\n\t\t\t& \\textbf{e-map} & *4.5$\\pm$1.9 & *\\phantom{x}9.0$\\pm$4.5 & *\\phantom{x}6.3$\\pm$4.1 & *\\phantom{x}6.2$\\pm$4.4 & *11.1$\\pm$5.3 & \\textbf{\\textcolor{red}{*\\phantom{x}6.7}}$\\pm$4.2 \\\\\n\t\t\t\\hdashline[1pt\/2pt]\n\t\t\t\n\t\t\t\\rowcolor{LightCyan}\n\t\t\tDRN-CE+MC & & \\phantom{x}5.6$\\pm$2.6 & \\phantom{x}11.9$\\pm$5.5 & \\phantom{xx}8.0$\\pm$5.9 & \\phantom{xx}8.7$\\pm$5.5 & \\phantom{x}13.5$\\pm$5.9 & \\phantom{xx}\\textbf{8.5}$\\pm$4.5 \\\\\t\t \n\t\t\t& \\textbf{b-map} & \\textbf{\\textcolor{red}{*4.2}}$\\pm$1.6 & \\textbf{\\textcolor{red}{*\\phantom{x}8.1}}$\\pm$3.7 & \\textbf{\\textcolor{red}{*\\phantom{x}6.1}}$\\pm$4.2 & \\textbf{\\textcolor{red}{*\\phantom{x}5.4}}$\\pm$3.6 & \\textbf{\\textcolor{red}{*10.1}}$\\pm$5.5 & *\\phantom{x}6.8$\\pm$3.8 
\\\\\n\t\t\t\\hdashline[5pt\/5pt]\n\t\t\t\n\t\t\t\n\t\t\t\\rowcolor{LightGreen}\n\t\t\tDRN-soft-Dice & & \\phantom{x}\\textbf{5.5}$\\pm$2.8 & \\phantom{x}11.9$\\pm$6.1 & \\phantom{xx}\\textbf{7.7}$\\pm$5.9 & \\phantom{xx}8.5$\\pm$5.0 & \\phantom{x}13.5$\\pm$5.5 & \\phantom{xx}8.9$\\pm$5.1 \\\\\n\t\t\t& \\textbf{e-map} & *4.6$\\pm$2.2 & *\\phantom{x}9.4$\\pm$4.5 & \\phantom{xx}6.7$\\pm$4.7 & *\\phantom{x}6.7$\\pm$4.4 & *11.6$\\pm$5.4 & *\\phantom{x}7.0$\\pm$3.3 \\\\\n\t\t\t\\hdashline[1pt\/2pt]\n\t\t\t\n\t\t\t\n\t\t\t\\rowcolor{LightCyan}\n\t\t\tDRN-soft-Dice+MC & & \\phantom{x}5.7$\\pm$3.2 & \\phantom{x}\\textbf{11.5}$\\pm$5.1 & \\phantom{xx}8.0$\\pm$5.5 & \\phantom{xx}\\textbf{8.3}$\\pm$4.5 & \\phantom{x}\\textbf{13.3}$\\pm$5.1 & \\phantom{xx}8.9$\\pm$5.1 \\\\\n\t\t\t& \\textbf{b-map} & *4.5$\\pm$2.2 & *\\phantom{x}9.3$\\pm$4.5 & *\\phantom{x}6.3$\\pm$4.0 & *\\phantom{x}6.2$\\pm$4.1 & *10.4$\\pm$5.0 & *\\phantom{x}7.0$\\pm$3.4 \\\\\n\t\t\t\\hdashline[5pt\/5pt]\n\t\t\t\n\t\t\t\\rowcolor{LightGreen}\n\t\t\tU-net-CE & & \\phantom{x}6.4$\\pm$4.3 & \\phantom{x}15.7$\\pm$8.6 & \\phantom{xx}9.0$\\pm$6.0 & \\phantom{xx}9.7$\\pm$5.3 & \\phantom{x}17.0$\\pm$7.7 & \\phantom{x}12.7$\\pm$8.2 \\\\\n\t\t\t& \\textbf{e-map} & *4.9$\\pm$3.9 & *12.2$\\pm$8.1 & *\\phantom{x}7.1$\\pm$5.6 & *\\phantom{x}6.1$\\pm$3.2 & *12.6$\\pm$6.5 & *\\phantom{x}8.4$\\pm$6.3 \\\\\n\t\t\t\\hdashline[1pt\/2pt]\n\t\t\t\n\t\t\t\n\t\t\t\\rowcolor{LightCyan}\n\t\t\tU-net-CE+MC & & \\phantom{x}6.2$\\pm$4.2 & \\phantom{x}15.3$\\pm$8.4 & \\phantom{xx}8.8$\\pm$5.8 & \\phantom{xx}9.2$\\pm$5.0 & \\phantom{x}16.5$\\pm$7.6 & \\phantom{x}12.0$\\pm$8.0 \\\\\n\t\t\t& \\textbf{b-map} & *4.3$\\pm$1.6 & *\\phantom{x}9.9$\\pm$6.6 & *\\phantom{x}6.7$\\pm$4.8 & *\\phantom{x}5.4$\\pm$2.8 & *10.3$\\pm$4.7 & *\\phantom{x}7.6$\\pm$6.2 \\\\\n\t\t\t\n\t\t\t\\hdashline[5pt\/5pt]\n\t\t\t\n\t\t\t\n\t\t\t\\rowcolor{LightGreen}\n\t\t\tU-net-soft-Dice & & \\phantom{x}6.1$\\pm$3.9 & \\phantom{x}14.1$\\pm$7.6 & \\phantom{x}10.6$\\pm$8.4 & \\phantom{xx}9.2$\\pm$7.1 & \\phantom{x}16.3$\\pm$7.5 & \\phantom{x}12.6$\\pm$9.6 \\\\\n\t\t\t& \\textbf{e-map} & *4.6$\\pm$2.3 & *11.3$\\pm$7.2 & *\\phantom{x}7.5$\\pm$5.5 & *\\phantom{x}7.3$\\pm$6.5 & *13.7$\\pm$7.6 & *\\phantom{x}9.8$\\pm$8.0 \\\\\n\t\t\t\\hdashline[1pt\/2pt]\n\t\t\t\n\t\t\t\\rowcolor{LightCyan}\n\t\t\tU-net-soft-Dice+MC & & \\phantom{x}6.2$\\pm$3.9 & \\phantom{x}14.1$\\pm$7.7 & \\phantom{x}10.5$\\pm$8.7 & \\phantom{xx}9.0$\\pm$7.0 & \\phantom{x}15.8$\\pm$7.5 & \\phantom{x}12.1$\\pm$9.2 \\\\\n\t\t\t& \\textbf{b-map} & *4.5$\\pm$2.1 & *10.4$\\pm$7.2 & *\\phantom{x}7.6$\\pm$7.0 & *\\phantom{x}7.3$\\pm$6.9 & *12.9$\\pm$6.6 & *\\phantom{x}9.8$\\pm$8.4 \\\\\n\t\t\t\\hdashline\n\t\t\tIsensee et al. & & \\phantom{x}7.1 & \\phantom{x}14.3 & \\phantom{xx}8.9 & \\phantom{xx}9.8 & \\phantom{x}16.3 & \\phantom{x}10.4 \\\\\n\t\t\t\\hline\n\t\t\t\n\t\t\\end{tabular}\n\t\t\\label{table_seg_perf_hd}\n\t} \n\t\n\\end{table*} \n\n\n\\section{Evaluation}\\label{evaluation}\n\nAutomatic segmentation performance, as well as performance after simulating the correction of detected segmentation failures and after manual expert correction was evaluated. For this, the \\num{3}D Dice-coefficient (DC) and \\num{3}D Hausdorff distance (HD) between manual and (corrected) automatic segmentation were computed. 
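For reference, both metrics can be computed from binary \num{3}D masks as sketched below (illustrative code only; the voxel spacing is scan-specific):
\begin{verbatim}
# Dice coefficient and symmetric Hausdorff distance for one structure.
import numpy as np
from scipy import ndimage

def dice(a, b):
    # a, b: boolean 3D masks (reference and automatic segmentation)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def surface(m):
    # boundary voxels of a binary mask
    return m & ~ndimage.binary_erosion(m)

def hausdorff(a, b, spacing):
    # spacing: (slice thickness, dy, dx) in mm
    sa, sb = surface(a), surface(b)
    dist_to_a = ndimage.distance_transform_edt(~sa, sampling=spacing)
    dist_to_b = ndimage.distance_transform_edt(~sb, sampling=spacing)
    return max(dist_to_b[sa].max(), dist_to_a[sb].max())
\end{verbatim}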
Furthermore, the following clinical metrics were computed for manual and (corrected) automatic segmentation: left ventricle end-diastolic volume (EDV); left ventricle ejection fraction (EF); right ventricle EDV; right ventricle ejection fraction; and left ventricle myocardial mass. Following Bernard et al.\\cite{bernard2018deep} for each of the clinical metrics three performance indices were computed using the measurements based on manual and (corrected) automatic segmentation: Pearson correlation coefficient; mean difference (bias and standard deviation); and mean absolute error (MAE).\n\nTo evaluate detection performance of the automatic method precision-recall curves of identification of slices that require correction were computed. A slice is considered positive in case it consists of at least one image region with a segmentation failure. To achieve accurate segmentation in clinic, identification of slices that contain segmentation failures might ease manual correction of automatic segmentations in daily practice. To further evaluate detection performance detection rate of segmentation failures was assessed on a voxel level. More specific, sensitivity against the number of false positive regions was evaluated because manual correction is presumed to be performed at this level.\n\nFinally, after simulation and manual correction of the automatically detected segmentation failures, segmentation was re-evaluated and significance of the difference between the DCs, HDs and clinical metrics was tested with a Mann\u2013Whitney U test.\n\n\n\\section{Experiments}\n\nTo use stratified four-fold cross-validation the dataset was split into training (75\\%) and test (25\\%) set. The splitting was done on a patient level, so there was no overlap in patient data between training and test sets. Furthermore, patients were randomly chosen from each of the five patient groups w.r.t. disease. Each patient has one volume for ED and ES time points, respectively. \n\n\\subsection{Training segmentation networks} \\label{training_segmentation}\n\nDRN and U-net were trained with a patch size of \\num{128}$\\times$\\num{128} voxels which is a multiple of their output stride of the contracting path. In the training of the dilated CNN (DN) images with \\num{151}$\\times$\\num{151} voxel samples were used. Zero-padding to \\num{281}$\\times$\\num{281} was performed to accommodate the \\num{131}$\\times$\\num{131} voxel receptive field that is induced by the dilation factors. Training samples were randomly chosen from training set and augmented by \\num{90} degree rotations of the images. All models were initially trained with three loss functions: soft-Dice\\cite{milletari2016v} (SD); cross-entropy (CE); and Brier loss\\cite{brier1950verification}. However, for the evaluation of the combined segmentation and detection approach for each model architecture the two best performing loss functions were chosen: soft-Dice for all models; cross-entropy for DRN and U-net and Brier loss for DN. 
For completeness, we provide the equations for all three used loss functions.\n\n\n\\begingroup\n\\small\n\\begin{align}\n\\text{soft-Dice}_{c} = \\frac{\\sum_{i=1}^{N} R_{c}(i) \\; A_{c}(i) }{\\sum_{i=1}^{N} R_{c}(i) + \\sum_{i=1}^{N} A_{c}(i)} \\; ,\n\\end{align}\n\\endgroup\nwhere $N$ denotes the number of voxels in an image, $R_{c}$ is the binary reference image for class $c$ and $A_{c}$ is the probability map for class $c$.\n\n\\begingroup\n\\small\n\\begin{align}\n\\begin{split}\n\\text{Cross-Entropy}_{c} &= - \\; \\sum_{i=1}^{N} t_{ic} \\; \\log \\; p(y_i=c|x_i) \\; , \\\\& \\text{ where } t_{ic} = 1 \\text{ if } y_{i}=c, \\text{ and \\num{0} otherwise.}\n\\end{split}\n\\end{align}\n\\endgroup\n\n\\begingroup\n\\small\n\\begin{align}\n\\begin{split}\n\\text{Brier}_{c} &= \\sum_{i=1}^{N} \\big(t_{ic} - p(y_i=c|x_{i}) \\big)^2 \\; , \\\\ &\\text{ where } t_{ic} = 1 \\text{ if } y_{i}=c, \\text{ and \\num{0} otherwise.}\n\\end{split}\n\\end{align}\n\\endgroup\n\nwhere $N$ denotes the number of voxels in an image and $p$ denotes the probability for a specific voxel $x_i$ with corresponding reference label $y_i$ for class $c$.\n\nChoosing Brier loss to train the DN model instead of CE was motivated by our preliminary work which showed that segmentation performance of DN model was best when trained with Brier loss\\cite{sander2019towards}.\n\nAll models were trained for 100,000 iterations. DRN and U-net were trained with a learning rate of \\num{0.001} which decayed with a factor of \\num{0.1} after every 25,000 steps. Training DN used the snapshot ensemble technique~\\cite{huang2017snapshot}, where after every 10,000 iterations the learning rate was reset to its original value of \\num{0.02}.\n\nAll three segmentation networks were trained using mini-batch stochastic gradient descent using a batch size of \\num{16}. Network parameters were optimized using the Adam optimizer \\cite{kingmadp}. Furthermore, models were regularized with weight decay to increase generalization performance. \n\n\\subsection{Training detection network}\\label{label_training_detection}\n\nTo train the detection model a subset of the errors performed by the segmentation model is used. Segmentation errors that presumably are within the range of inter-observer variability and therefore do not inevitably require correction (tolerated errors) are excluded from the set of errors that need to be detected and corrected (segmentation failures). To distinguish between tolerated errors and the set of segmentation failures $\\mathcal{S}_I$ the Euclidean distance of an incorrectly segmented voxel to the boundary of the reference target structure is used. For each anatomical structure a \\num{2}D distance transform map is computed that provides for each voxel the distance to the anatomical structure boundary. To differentiate between tolerated errors and the set of segmentation failures $\\mathcal{S}_I$ an acceptable tolerance threshold is applied. A more rigorous threshold is used for errors located inside compared to outside of the anatomical structure because automatic segmentation methods have a tendency to undersegment cardiac structures in CMRI. Hence, in all experiments the acceptable tolerance threshold was set to three voxels (equivalent to on average \\SI{4.65}{\\milli\\meter}) and two voxels (equivalent to on average \\SI{3.1}{\\milli\\meter}) for segmentation errors located outside and inside the target structure. 
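A minimal sketch of this distance-based tolerance test is given below (illustrative names; it operates on one structure in a single slice, and the additional cluster-size and slice-level rules described next are applied afterwards):
\begin{verbatim}
# Mark incorrectly segmented voxels that lie farther from the reference
# boundary than the acceptable tolerance (3 voxels outside, 2 inside).
import numpy as np
from scipy import ndimage

def failure_candidates(ref, pred, tol_out=3, tol_in=2):
    # ref, pred: 2D boolean masks of one structure in a single slice
    errors = ref ^ pred
    dist_outside = ndimage.distance_transform_edt(~ref)  # distance to structure
    dist_inside = ndimage.distance_transform_edt(ref)    # distance to boundary
    out_fail = errors & ~ref & (dist_outside > tol_out)
    in_fail = errors & ref & (dist_inside > tol_in)
    return out_fail | in_fail
\end{verbatim}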
Furthermore, a segmentation error only belongs to $\\mathcal{S}_I$ if it is part of a \\num{2}D \\num{4}-connected cluster of minimum size \\num{10} voxels. This value was found in preliminary experiments by evaluating values $\\{1, 5, 10, 15, 20\\}$. However, for apical slices all segmentation errors are included in $\\mathcal{S}_I$ regardless of fulfilling the minimum size requirement because in these slices anatomical structures are relatively small and manual segmentation is prone to large inter-observer variability~\\cite{bernard2018deep}. Finally, segmentation errors located in slices above the base or below the apex are always included in the set of segmentation failures.\n\nUsing the set $\\mathcal{S}_I$ a binary label $t_j$ is assigned to each patch $P_j^{(I)}$ indicating whether $P_j^{(I)}$ contains at least one voxel belonging to set $\\mathcal{S}_I$ where $j \\in \\{1 \\dots M \\}$ and $M$ denotes the number of patches in a slice $I$. \n\nThe detection network is trained by minimizing a weighted binary cross-entropy loss:\n\n\\begingroup\n\\small\n\\begin{equation} \\label{eq_detection_loss}\n\\mathcal{L}_{DT} = - \\sum_{j \\in P^{(I)}} w_{pos} \\; t_j \\log p_j + (1 - t_j) \\log (1 - p_j) \\; ,\n\\end{equation}\n\\endgroup\n\nwhere $w_{pos}$ represents a scalar weight, $t_j$ denotes the binary reference label and $p_j$ is the softmax probability indicating whether a particular image region $P_j^{(I)}$ contains at least one segmentation failure. The average percentage of regions in a patient volume containing segmentation failures ranges from \\num{1.5} to \\num{3} percent depending on the segmentation architecture and loss function used to train the segmentation model. To train a detection network $w_{pos}$ was set to the ratio between the average percentage of negative samples divided by the average percentage of positive samples.\n\nEach fold was trained using spatial uncertainty maps and automatic segmentation masks generated while training the segmentation networks. Hence, there was no overlap in patient data between training and test set across segmentation and detection tasks. In total \\num{12} detection models were trained and evaluated resulting from the different combination of \\num{3} model architectures (DRN, DN and U-net), \\num{2} loss functions (DRN and U-net with CE and soft-Dice, DN with Brier and soft-Dice) and \\num{2} uncertainty maps (e-maps, b-maps).\n\n\n\\begin{table*}\n\t\\caption{Segmentation performance of different combination of model architectures, loss functions and evaluation modes (without or with MC dropout (MC) enabled during testing) in terms of clinical metrics: left ventricle (LV) end-diastolic volume (EDV); LV ejection fraction (EF); right ventricle (RV) EDV; RV ejection fraction; and LV myocardial mass. Quantitative results compare clinical metrics based on reference segmentations with 1) automatic segmentations and 2) simulated manual correction of automatic segmentations using spatial uncertainty maps. $\\rho$ denotes the Pearson correlation coefficient, \\textit{bias} denotes the mean difference between the two measurements (mean $\\pm$ standard deviation) and \\textit{MAE} denotes the mean absolute error between the two measurements. Each combination comprises a block of two rows. A row in which column \\textit{Uncertainty map for detection} indicates e- or b-map shows results for the combined segmentation and detection approach. Numbers accentuated in black\/bold are ranked first in the segmentation only task. 
Numbers in red indicate statistical significant at \\num{5}\\% level w.r.t. the segmentation-only approach for the specific clinical metric. Best viewed in color.}\n\t\\label{table_cardiac_function_indices}\n\t\\tiny\n\t\\centering\n\t\\begin{tabular}{| C{1.6cm} | C{1.cm} | C{0.3cm} c C{0.3cm} | C{0.2cm} C{0.8cm} C{0.3cm} | C{0.3cm} C{0.8cm} C{0.3cm} | C{0.3cm} C{0.8cm} C{0.3cm} | C{0.3cm} C{0.8cm} C{0.3cm} |}\n\t\t\\hline\n\t\t& \\multicolumn{1}{c|}{\\thead{\\textbf{Uncertainty} \\\\ \\textbf{map for} \\\\ \\textbf{detection}}} & \\multicolumn{3}{c}{\\textbf{LV$_{EDV}$}} & \\multicolumn{3}{c}{\\textbf{LV$_{EF}$}} & \\multicolumn{3}{c}{\\textbf{RV$_{EDV}$}} & \\multicolumn{3}{c}{\\textbf{RV$_{EF}$}} & \\multicolumn{3}{c|}{\\textbf{LVM$_{Mass}$}} \\\\\n\t\n\t\n\t\t\\textbf{Method} & & \\textbf{$\\rho$} & \\multicolumn{1}{l}{\\textbf{bias$\\pm\\sigma$}} & \\textbf{MAE} & \\textbf{$\\rho$} & \\textbf{bias$\\pm \\sigma$} & \\textbf{MAE} & \\textbf{$\\rho$} & \\textbf{bias$\\pm \\sigma$} & \\textbf{MAE} & \\textbf{$\\rho$} & \\textbf{bias$\\pm \\sigma$} & \\textbf{MAE} & \\multicolumn{1}{c}{\\textbf{$\\rho$}} & \\textbf{bias$\\pm \\sigma$} & \\textbf{MAE} \\\\\n\t\t\\hline\n\t\t\\rowcolor{LightGreen}\n\t\tDN-Brier & & 0.997 & \\phantom{x}\\textbf{0.0$\\pm$6.1} & 4.5 & 0.892 & \\phantom{x}2.2$\\pm$\\phantom{x}9.2 & 4.2 & 0.977 & \\textbf{-0.2$\\pm$11.8} & \\phantom{x}8.5 & 0.834 & 5.3$\\pm$10.3 & \\phantom{x}8.5 & 0.984 & -2.7$\\pm$\\phantom{x}9.0 & \\phantom{x}\\textbf{7.0} \\\\\n\t\t& e-map & 0.997 & \\phantom{x}0.0$\\pm$5.5 & 4.0 & 0.982 & \\phantom{x}0.1$\\pm$\\phantom{x}3.8 & 2.2 & 0.992 & \\phantom{x}0.0$\\pm$\\phantom{x}6.9 & \\phantom{x}5.2 & 0.955 & 1.9$\\pm$\\phantom{x}5.5 & \\phantom{x}4.1 & 0.986 & -2.1$\\pm$\\phantom{x}8.4 & \\phantom{x}6.6 \\\\\n\t\t\n\t\t\\hdashline[1pt\/2pt]\n\t\t\\rowcolor{LightCyan}\n\t\tDN-Brier+MC & & 0.997 & \\phantom{x}1.6$\\pm$6.0 & 4.4 & 0.921 & \\phantom{x}1.1$\\pm$\\phantom{x}7.9 & 3.9 & 0.975 & \\phantom{x}6.7$\\pm$12.4 & \\phantom{x}9.6 & 0.854 & 3.5$\\pm$\\phantom{x}9.9 & \\phantom{x}7.7 & 0.984 & \\phantom{x}0.7$\\pm$\\phantom{x}9.2 & \\phantom{x}7.1 \\\\\n\t\t& b-map & 0.998 & \\phantom{x}1.0$\\pm$5.3 & 3.9 & 0.991 & \\phantom{x}0.0$\\pm$\\phantom{x}2.7 & 1.9 & 0.993 & \\phantom{x}3.2$\\pm$\\phantom{x}6.7 & \\phantom{x}5.7 & 0.975 & 0.8$\\pm$\\phantom{x}4.0 & \\phantom{x}3.0 & 0.987 & \\phantom{x}0.1$\\pm$\\phantom{x}8.3 & \\phantom{x}6.5 \\\\\n\t\t\\hdashline[1pt\/2pt]\n\t\t\\rowcolor{LightGreen}\n\t\tDN-soft-Dice & & 0.996 & \\phantom{x}1.2$\\pm$6.5 & 4.9 & 0.918 & \\phantom{x}1.5$\\pm$\\phantom{x}8.0 & 3.9 & 0.972 & \\phantom{x}\\textbf{0.2$\\pm$13.0} & \\phantom{x}9.6 & 0.802 & 7.2$\\pm$11.3 & 10.2 & 0.982 & -4.5$\\pm$\\phantom{x}9.6 & \\phantom{x}8.5 \\\\\n\t\t& e-map & 0.997 & \\phantom{x}1.0$\\pm$5.5 & 4.2 & 0.989 & \\phantom{x}0.2$\\pm$\\phantom{x}3.0 & 2.2 & 0.990 & \\phantom{x}0.2$\\pm$\\phantom{x}7.6 & \\phantom{x}5.9 & \\textcolor{red}{0.940} & \\textcolor{red}{3.3$\\pm$\\phantom{x}6.2} & \\textcolor{red}{\\phantom{x}5.2} & 0.983 & -4.3$\\pm$\\phantom{x}9.3 & \\phantom{x}8.2 \\\\\n\t\t\n\t\t\\hdashline[1pt\/2pt]\n\t\t\\rowcolor{LightCyan}\n\t\tDN-soft-Dice+MC & & 0.996 & \\phantom{x}3.2$\\pm$7.1 & 5.6 & 0.958 & \\phantom{x}0.4$\\pm$\\phantom{x}5.7 & 3.6 & 0.964 & \\phantom{x}8.1$\\pm$14.9 & 12.3 & 0.827 & 4.8$\\pm$11.0 & \\phantom{x}8.9 & 0.978 & -0.7$\\pm$10.7 & \\phantom{x}8.3 \\\\\n\t\t& b-map & 0.997 & \\phantom{x}2.2$\\pm$5.6 & 4.4 & 0.988 & -0.2$\\pm$\\phantom{x}3.1 & 2.2 & 0.990 & \\phantom{x}4.0$\\pm$\\phantom{x}7.7 & 
\\phantom{x}7.0 & 0.959 & 1.8$\\pm$\\phantom{x}5.1 & \\phantom{x}4.1 & 0.982 & -1.4$\\pm$\\phantom{x}9.5 & \\phantom{x}7.6 \\\\\n\t\t\n\t\t\\hdashline[5pt\/5pt]\n\t\t\\rowcolor{LightGreen}\n\t\tDRN-CE & & 0.997 & -0.2$\\pm$5.5 & 4.1 & 0.968 & \\phantom{x}1.2$\\pm$\\phantom{x}5.0 & 3.5 & 0.976 & \\phantom{x}1.5$\\pm$12.1 & \\phantom{x}8.5 & 0.870 & 1.3$\\pm$\\phantom{x}9.2 & \\phantom{x}6.9 & 0.980 & \\phantom{x}\\textbf{0.6$\\pm$10.2} & \\phantom{x}7.8 \\\\\n\t\t& e-map & 0.998 & \\phantom{x}0.2$\\pm$4.5 & 3.5 & 0.992 & \\phantom{x}0.2$\\pm$\\phantom{x}2.5 & 1.9 & 0.988 & \\phantom{x}1.4$\\pm$\\phantom{x}8.5 & \\phantom{x}6.2 & 0.952 & 0.8$\\pm$\\phantom{x}5.6 & \\phantom{x}4.2 & 0.985 & \\phantom{x}0.4$\\pm$\\phantom{x}8.7 & \\phantom{x}6.8 \\\\\n\t\t\n\t\t\\hdashline[1pt\/2pt]\n\t\t\\rowcolor{LightCyan}\n\t\tDRN-CE+MC & & \\textbf{0.998} & \\phantom{x}1.0$\\pm$4.9 & \\textbf{3.9} & 0.972 & \\phantom{x}0.8$\\pm$\\phantom{x}4.6 & 3.1 & 0.973 & \\phantom{x}4.8$\\pm$12.8 & \\phantom{x}9.4 & 0.876 & \\textbf{0.4$\\pm$\\phantom{x}9.1} & \\phantom{x}\\textbf{6.6} & 0.981 & \\phantom{x}1.9$\\pm$\\phantom{x}9.9 & \\phantom{x}7.6 \\\\\n\t\t& b-map & 0.998 & \\phantom{x}0.7$\\pm$4.6 & 3.6 & 0.992 & -0.1$\\pm$\\phantom{x}2.5 & 1.8 & 0.992 & \\phantom{x}2.9$\\pm$\\phantom{x}6.9 & \\phantom{x}5.7 & 0.967 & 0.6$\\pm$\\phantom{x}4.6 & \\phantom{x}3.4 & 0.987 & \\phantom{x}1.2$\\pm$\\phantom{x}8.3 & \\phantom{x}6.6 \\\\\n\t\t\\hdashline[1pt\/2pt]\n\t\t\n\t\t\\rowcolor{LightGreen}\n\t\tDRN-soft-Dice & & \\textbf{0.998} & \\phantom{x}0.8$\\pm$5.1 & 4.0 & 0.976 & \\phantom{x}0.2$\\pm$\\phantom{x}4.4 & 3.0 & 0.980 & \\phantom{x}\\textbf{0.2$\\pm$11.0} & \\phantom{x}\\textbf{7.5} & \\textbf{0.882} & 3.1$\\pm$\\phantom{x}8.7 & \\phantom{x}6.8 & 0.984 & -3.5$\\pm$\\phantom{x}9.1 & \\phantom{x}7.5 \\\\\n\t\t& e-map & 0.998 & \\phantom{x}0.7$\\pm$4.4 & 3.5 & 0.987 & -0.1$\\pm$\\phantom{x}3.1 & 2.2 & 0.987 & \\phantom{x}0.1$\\pm$\\phantom{x}9.1 & \\phantom{x}6.4 & 0.938 & 1.9$\\pm$\\phantom{x}6.3 & \\phantom{x}4.9 & 0.986 & -3.5$\\pm$\\phantom{x}8.7 & \\phantom{x}7.1 \\\\\n\t\t\n\t\t\\hdashline[1pt\/2pt]\n\t\t\\rowcolor{LightCyan}\n\t\tDRN-soft-Dice+MC & & \\textbf{0.998} & \\phantom{x}1.8$\\pm$5.1 & \\textbf{3.9} & \\textbf{0.979} & -0.3$\\pm$\\phantom{x}4.1 & 2.9 & 0.977 & \\phantom{x}3.5$\\pm$11.7 & \\phantom{x}8.1 & 0.868 & 1.7$\\pm$\\phantom{x}9.5 & \\phantom{x}6.8 & 0.983 & -1.4$\\pm$\\phantom{x}9.5 & \\phantom{x}7.4 \\\\\n\t\t& b-map & 0.998 & \\phantom{x}1.7$\\pm$4.7 & 3.7 & 0.990 & -0.2$\\pm$\\phantom{x}2.9 & 2.1 & 0.989 & \\phantom{x}2.3$\\pm$\\phantom{x}8.1 & \\phantom{x}5.8 & 0.959 & 0.8$\\pm$\\phantom{x}5.2 &\\phantom{x}3.8 & 0.986 & -1.3$\\pm$\\phantom{x}8.5 & \\phantom{x}6.8 \\\\\n\t\t\n\t\t\\hdashline[5pt\/5pt]\n\t\t\\rowcolor{LightGreen}\n\t\tU-net-CE & & 0.995 & -4.7$\\pm$7.2 & 6.1 & 0.954 & \\phantom{x}4.1$\\pm$\\phantom{x}6.0 & 5.1 & 0.963 & -7.6$\\pm$15.2 & 12.1 & 0.870 & 5.6$\\pm$\\phantom{x}9.0 & \\phantom{x}8.1 & 0.971 & -8.5$\\pm$12.2 & 11.5 \\\\\n\t\t& e-map & 0.998 & -3.2$\\pm$4.8 & 4.4 & 0.992 & \\phantom{x}1.7$\\pm$\\phantom{x}2.6 & 2.4 & 0.987 & -4.1$\\pm$\\phantom{x}9.1 & \\phantom{x}6.7 & 0.957 & 2.6$\\pm$\\phantom{x}5.2 & \\phantom{x}4.1 & 0.983 & -5.7$\\pm$\\phantom{x}9.3 & \\phantom{x}8.2 \\\\\n\t\t\n\t\t\\hdashline[1pt\/2pt]\n\t\t\\rowcolor{LightCyan}\n\t\tU-net-CE+MC & & 0.995 & -4.3$\\pm$7.2 & 5.9 & 0.958 & \\phantom{x}3.8$\\pm$\\phantom{x}5.8 & 4.9 & 0.968 & -4.8$\\pm$14.1 & 10.7 & 0.867 & 5.0$\\pm$\\phantom{x}9.1 & \\phantom{x}7.9 & 0.972 & -8.1$\\pm$12.0 & 11.1 \\\\\n\t\t& 
b-map & 0.997 & -3.5$\\pm$5.5 & 4.9 & 0.990 & \\phantom{x}1.6$\\pm$\\phantom{x}2.9 & 2.6 & 0.992 & -1.8$\\pm$\\phantom{x}7.0 & \\phantom{x}4.9 & 0.974 & 1.6$\\pm$\\phantom{x}4.1 & \\phantom{x}3.3 & 0.981 & -6.8$\\pm$10.0 & \\phantom{x}9.4 \\\\\n\t\t\\hdashline[1pt\/2pt]\n\t\t\n\t\t\\rowcolor{LightGreen}\n\t\tU-net-soft-Dice & & 0.997 & -2.0$\\pm$6.0 & 4.5 & 0.853 & \\phantom{x}3.6$\\pm$10.9 & 5.0 & 0.968 & -1.0$\\pm$14.1 & 10.0 & 0.782 & 4.8$\\pm$11.6 & \\phantom{x}9.0 & \\textbf{0.985} & -7.7$\\pm$\\phantom{x}8.8 & \\phantom{x}9.2 \\\\\n\t\t& e-map & 0.997 & -1.7$\\pm$5.3 & 4.1 & 0.969 & \\phantom{x}1.9$\\pm$\\phantom{x}4.9 & 3.3 & 0.981 & -0.1$\\pm$10.9 & \\phantom{x}7.5 & 0.919 & 3.3$\\pm$\\phantom{x}7.0 & \\phantom{x}5.9 & 0.984 & -6.6$\\pm$\\phantom{x}9.0 & \\phantom{x}8.7 \\\\\n\t\t\n\t\t\\hdashline[1pt\/2pt]\n\t\t\\rowcolor{LightCyan}\n\t\tU-net-soft-Dice+MC & & 0.997 & -1.8$\\pm$5.9 & 4.4 & 0.941 & \\phantom{x}3.0$\\pm$\\phantom{x}6.7 & 4.4 & 0.969 & \\phantom{x}0.6$\\pm$13.9 & \\phantom{x}9.8 & 0.792 & 4.4$\\pm$11.3 & \\phantom{x}8.7 & \\textbf{0.985} & -7.2$\\pm$\\phantom{x}8.9 & \\phantom{x}8.9 \\\\\n\t\t& b-map & 0.997 & -1.5$\\pm$5.3 & 4.1 & 0.979 & \\phantom{x}1.1$\\pm$\\phantom{x}4.1 & 2.9 & 0.985 & \\phantom{x}1.2$\\pm$\\phantom{x}9.4 & \\phantom{x}6.5 & 0.939 & 2.9$\\pm$\\phantom{x}6.2 & \\phantom{x}4.9 & 0.984 & -5.9$\\pm$\\phantom{x}9.0 & \\phantom{x}8.5 \\\\\n\t\t\\hline\n\t\\end{tabular}\n\\end{table*}\n\n\nThe patches used to train the network were selected randomly (\\nicefrac{2}{3}), or were forced (\\nicefrac{1}{3}) to contain at least one segmentation failure by randomly selecting a scan containing segmentation failure, followed by random sampling of a patch containing at least one segmentation failure. During training the patch size was fixed to \\num{80}$\\times$\\num{80} voxels. To reduce the number of background voxels during testing, inputs were cropped based on a minimal enclosing, rectangular bounding box that was placed around the automatic segmentation mask. Inputs always had a minimum size of \\num{80}$\\times$\\num{80} voxels or were forced to a multiple of the output grid spacing of eight voxels in both direction required by the patch-based detection network. The patches of size \\num{8}$\\times$\\num{8} voxels did not overlap. In cases where the automatic segmentation mask only contains background voxels (scans above the base or below apex of the heart) input scans were center-cropped to a size of \\num{80}$\\times$\\num{80} voxels. \n\nModels were trained for 20,000 iterations using mini-batch stochastic gradient descent with batch-size \\num{32} and Adam as optimizer\\cite{kingmadp}. Learning rate was set to \\num{0.0001} and decayed with a factor of \\num{0.1} after \\num{10,000} steps. Furthermore, dropout percentage was set to \\num{0.5} and weight decay was applied to increase generalization performance.\n\n\\begin{figure*}[t]\n\t\\center\n\t\\subfloat[]{\\includegraphics[width=3.4in, height=1.7in]{figures\/dt\/all_models_froc_voxel_detection_rate.pdf}%\n\t\t\\label{fig_froc_voxel_detection}}\n\t\\subfloat[]{\\includegraphics[width=3.4in, height=1.7in]{figures\/dt\/all_models_slices_prec_recall.pdf}%\n\t\t\\label{fig_prec_rec_slice_detection}}\n\t\n\t\\caption{Detection performance of segmentation failures generated by different combination of segmentation architectures and loss functions. (a) Sensitivity for detection of segmentation failures on voxel level (y-axis) as a function of number of false positive image regions (x-axis). 
(b) Precision-recall curve for detection of slices containing segmentation failures (where AP denotes average precision). Results are split between entropy and Bayesian uncertainty maps. Each figure contains a curve for each of the six possible combinations of models (three) and loss functions (two). SD denotes soft-Dice and CE cross-entropy, respectively.}\n\t\\label{fig_dt_perf_all_models}\n\\end{figure*}\n\n\\subsection{Segmentation using correction of the detected segmentation failures}\n\nTo investigate whether correction of detected segmentation failures increases segmentation performance, two scenarios were evaluated. In the first scenario, manual correction of the detected failures by an expert was simulated for all images at ED and ES time points of the ACDC dataset. For this purpose, in image regions that were detected to contain segmentation failures, predicted labels were replaced with reference labels. In the second scenario, manual correction of the detected failures was performed by an expert in a random subset of \num{50} patients of the ACDC dataset. The expert was shown CMRI slices for ED and ES time points together with corresponding automatic segmentation masks for the RV, LV and LV myocardium. Image regions detected to contain segmentation failures were indicated in the slices and the expert was only allowed to change the automatic segmentations in these indicated regions. Annotation was performed following the protocol described in\cite{bernard2018deep}. Furthermore, the expert was able to navigate through all CMRI slices of the corresponding ED and ES volumes.\n\n\section{Results}\n\nIn this section we first present results for the segmentation-only task, followed by a description of the combined segmentation and detection results. \n\n\subsection{Segmentation-only approach} \label{results_seg_only}\n\nTable~\ref{table_overall_segmentation_performance} lists quantitative results for the segmentation-only and the combined segmentation and detection approaches in terms of Dice coefficient and Hausdorff distance. These results show that DRN and U-net achieved similar Dice coefficients and outperformed the DN network for all anatomical structures at end-systole. Differences in the achieved Hausdorff distances among the methods are present for all anatomical structures and for both time points. The DRN model achieved the best (lowest) and the DN network the worst (highest) Hausdorff distance.\n\nTable~\ref{table_cardiac_function_indices} lists results of the evaluation in terms of clinical metrics. These results reveal noticeable differences between models for the ejection fraction (EF) of the left and right ventricle. We can observe that U-net trained with the soft-Dice loss and the Dilated Network (DN) trained with Brier or soft-Dice loss achieved considerably lower accuracy for LV and RV ejection fraction compared to DRN. Overall, the DRN model achieved the highest performance for all clinical metrics.\n\n\noindent \textbf{Effect of model architecture on segmentation}: Although quantitative differences between models are small, qualitative evaluation reveals that automatic segmentations differ substantially between the models. Figure~\ref{fig_seg_qualitative_results} shows that, especially in regions where the models perform poorly (apical and basal slices), the DN model produced anatomically implausible segmentations more often than the DRN and U-net. 
This seems to be correlated with the performance differences in Hausdorff distance.\n\n\\noindent \\textbf{Effect of loss function on segmentation}: The results indicate that the choice of loss function only slightly affects the segmentation performance. DRN and U-net perform marginally better when trained with soft-Dice compared to cross-entropy whereas DN performs better when trained with Brier loss than with soft-Dice. For DN this is most pronounced for the RV at ES.\n\nA considerable effect of the loss function on the accuracy of the LV and RV ejection fraction can be observed for the U-net model. On both metrics U-net achieved the lowest accuracy of all models when trained with the soft-Dice loss.\n\n\\noindent \\textbf{Effect of MC dropout on segmentation}: The results show that enabling MC-dropout during testing seems to result in slightly improved HD while it does not affect DC. \n\\begin{table}\n\t\\caption{Average precision and percentage of slices with segmentation failures generated by Dilated Network (DN), Dilated Residual Network (DRN) and U-net when trained with soft-Dice (SD), CE or Brier loss. Per patient, average precision of detected slices with failure using e- or b-maps (\\num{2}$^{nd}$ and \\num{3}$^{rd}$ columns). Per patient, average percentage of slices containing segmentation failures (reference for detection task) (\\num{4}$^{th}$ and \\num{5}$^{th}$ columns).}\n\t\\label{table_evaluation_slice_detection}\n\t\\begin{tabular}{l C{1.4cm} C{1.4cm} C{1.4cm} C{1.4cm} }\n\t\t\\textbf{Model} & \\multicolumn{2}{c}{\\textbf{Average precision}} & \\multicolumn{2}{c}{ \\thead{\\textbf{\\% of slices} \\\\ \\textbf{with segmentation failures} }} \\\\\n\t\t& e-map & b-map & e-map & b-map \\\\\n\t\t\\hline\n\t\tDN-Brier & 84.0 & 83.0 & 53.7 & 52.4 \\\\\n\t\tDN-SD & 87.0 & 85.0 & 58.3 & 58.1 \\\\\n\t\t\\hdashline\n\t\tDRN-CE & 75.0 & 69.0 & 39.5 & 39.4 \\\\\n\t\tDRN-SD & 67.0 & 67.0 & 34.9 & 33.7\\\\\n\t\t\\hdashline\n\t\tU-net-CE & 81.0 & 75.0 & 54.8 & 52.5 \\\\\n\t\tU-net-SD & 76.0 & 76.0 & 46.7 & 45.5 \\\\\n\t\t\\hline\n\t\\end{tabular}\n\\end{table}\n\n\\subsection{Detection of segmentation failures}\n\n\\noindent \\textbf{Detection of segmentation failures on voxel level}: To evaluate detection performance of segmentation failures on voxel level Figure~\\ref{fig_froc_voxel_detection} shows average voxel detection rate as a function of false positively detected regions. This was done for each combination of model architecture and loss function exploiting e- (Figure~\\ref{fig_froc_voxel_detection}, left) or b-maps (Figure~\\ref{fig_froc_voxel_detection}, right). These results show that detection performance of segmentation failures depends on segmentation model architecture, loss function and uncertainty map. \n\nThe influence of (segmentation) model architecture and loss function on detection performance is slightly stronger when e-maps were used as input for the detection task compared to b-maps. Detection rates are consistently lower when segmentation failures originate from segmentation models trained with soft-Dice loss compared to models trained with CE or Brier loss. 
Overall, detection rates are higher when b-maps were exploited for the detection task compared to e-maps.\n\n\\begin{table*}\n\t\\caption{Comparing performance of segmentation-only approach (auto-only) with combined segmentation and detection approach for two scenarios: simulated correction of detected segmentation failures (auto$+$simulation); and manual correction of detected segmentation failures by an expert (auto$+$expert). Automatic segmentations were obtained from a U-net trained with cross-entropy. Evaluation was performed on a subset of \\num{50} patients from the ACDC dataset. Scenarios are compared against segmentation-only approach (auto-only) in terms of (a) Dice Coefficient (b) Hausdorff Distance and (c) Clinical metrics. Results obtained from simulated manual correction represent an upper bound on the maximum achievable performance. Detection network was trained with e-maps. Number with asterisk indicates statistical significant at \\num{5}\\% level w.r.t. the segmentation-only approach. Best viewed in color.}\n\t\\label{table_manual_corr_performance}\n\t\\centering\n\t\\small\n\t\\subfloat[\\textbf{Dice coefficient:} Mean $\\pm$ standard deviation for left ventricle (LV), right ventricle (RV) and left ventricle myocardium (LVM).]{\n\t\t\\begin{tabular}{| C{2.cm} | R{1.7cm} R{1.7cm} R{1.7cm} | R{1.7cm} R{1.7cm} R{1.7cm} | }\n\t\t\t\\hline\n\t\t\t& \\multicolumn{3}{c|}{\\textbf{End-diastole}} & \\multicolumn{3}{c|}{\\textbf{End-systole}} \\\\\n\t\t\t\\textbf{Scenario} & \\multicolumn{1}{l}{\\textbf{LV}} & \\multicolumn{1}{l}{\\textbf{RV}} & \\multicolumn{1}{l|}{\\textbf{LVM}} & \\multicolumn{1}{l}{\\textbf{LV}} & \\multicolumn{1}{l}{\\textbf{RV}} & \\multicolumn{1}{l|}{\\textbf{LVM}} \\\\ \n\t\t\t\\hline\n\t\t\tauto-only & \\phantom{x}0.964$\\pm$0.02 & \\phantom{x}0.927$\\pm$0.04 & \\phantom{x}0.883$\\pm$0.03 & \\phantom{x}0.916$\\pm$0.05 & \\phantom{x}0.854$\\pm$0.08 & \\phantom{x}0.886$\\pm$0.04 \\\\ \n\t\t\tauto$+$simulation & \\phantom{x}0.967$\\pm$0.01 & *0.948$\\pm$0.03 & *0.894$\\pm$0.03 & *0.939$\\pm$0.03 & *0.915$\\pm$0.04 & *0.910$\\pm$0.03 \\\\ \n\t\t\n\t\t\tauto$+$expert & \\phantom{x}0.965$\\pm$0.02 & \\phantom{x}0.940$\\pm$0.03 & \\phantom{x}0.885$\\pm$0.03 & \\phantom{x}0.927$\\pm$0.04 & \\phantom{x}0.868$\\pm$0.07 & \\phantom{x}0.894$\\pm$0.03 \\\\ \n\t\t\t\n\t\t\t\t\\hline\n\t\t\t\n\t\t\\end{tabular}\n\t\t\\label{table_manual_seg_perf_dsc}\n\t} \n\t\n\t\\centering\n\t\n\t\\subfloat[\\textbf{Hausdorff Distance:} Mean $\\pm$ standard deviation for left ventricle (LV), right ventricle (RV) and left ventricle myocardium (LVM).]{\n\t\t\\begin{tabular}{| C{2.cm} | R{1.7cm} R{1.7cm} R{1.7cm} | R{1.7cm} R{1.7cm} R{1.7cm} | }\n\t\t\t\\hline\n\t\t\t& \\multicolumn{3}{c|}{\\textbf{End-diastole}} & \\multicolumn{3}{c|}{\\textbf{End-systole}} \\\\\n\t\t\t\\textbf{Scenario} & \\multicolumn{1}{l}{\\textbf{LV}} & \\multicolumn{1}{l}{\\textbf{RV}} & \\multicolumn{1}{l|}{\\textbf{LVM}} & \\multicolumn{1}{l}{\\textbf{LV}} & \\multicolumn{1}{l}{\\textbf{RV}} & \\multicolumn{1}{l|}{\\textbf{LVM}} \\\\ \n\t\t\t\\hline\n\t\t\tauto-only & \\phantom{x}5.6$\\pm$3.3 & \\phantom{x}15.7$\\pm$9.7 & \\phantom{x}8.5$\\pm$6.4 & \\phantom{x}9.2$\\pm$5.8 & \\phantom{x}16.5$\\pm$8.8 & \\phantom{x}13.4$\\pm$10.5 \\\\\n\t\t\tauto$+$simulation & \\phantom{x}4.5$\\pm$2.1 & *\\phantom{x}9.0$\\pm$4.6 & *5.9$\\pm$3.4 & *5.2$\\pm$2.5 & *10.3$\\pm$3.7 & *\\phantom{x}6.6$\\pm$2.9 \\\\\n\t\t\n\t\t\tauto$+$expert & \\phantom{x}4.9$\\pm$2.8 & *\\phantom{x}9.8$\\pm$4.3 & \\phantom{x}7.3$\\pm$4.3 & 
\\phantom{x}7.2$\\pm$3.3 & *12.5$\\pm$4.7 & *\\phantom{x}8.3$\\pm$3.5 \\\\\n\t\t\t\n\t\t\t\t\\hline\n\t\t\\end{tabular}\n\t\t\\label{table_manual_seg_perf_hd}\n\t} \n\t\n\t\\subfloat[\\textbf{Clinical metrics:} a) Left ventricle (LV) end-diastolic volume (EDV) b) LV ejection fraction (EF) c) Right ventricle (RV) EDV d) RV ejection fraction e) LV myocardial mass. Quantitative results compare clinical metrics based on reference segmentations with 1) automatic segmentations; 2) simulated manual correction and 3) manual expert correction of automatic segmentations using spatial uncertainty maps. $\\rho$ denotes the Pearson correlation coefficient, \\textit{bias} denotes the mean difference between the two measurements (mean $\\pm$ standard deviation) and \\textit{MAE} denotes the mean absolute error between the two measurements.]{\n\t\t\\label{table_manual_cardiac_function_indices}\n\t\t\\small\n\t\t\\begin{tabular}{| C{2.cm} | C{0.5cm} C{0.8cm} C{0.55cm} | C{0.5cm} C{0.8cm}C{0.55cm} | C{0.5cm} C{0.8cm} C{0.55cm} | C{0.5cm} C{0.8cm} C{0.55cm} | C{0.5cm} C{0.8cm} C{0.55cm} |}\n\t\t\t\\hline\n\t\t\t& \\multicolumn{3}{c}{\\textbf{LV$_{EDV}$}} & \\multicolumn{3}{c}{\\textbf{LV$_{EF}$}} & \\multicolumn{3}{c}{\\textbf{RV$_{EDV}$}} & \\multicolumn{3}{c}{\\textbf{RV$_{EF}$}} & \\multicolumn{3}{c|}{\\textbf{LVM$_{Mass}$}} \\\\\n\t\t\t\n\t\t\t\\textbf{Scenario} & \\textbf{$\\rho$} & \\textbf{bias $\\pm\\sigma$} & \\textbf{MAE} & \\textbf{$\\rho$} & \\textbf{bias $\\pm\\sigma$} & \\textbf{MAE} & \\textbf{$\\rho$} & \\textbf{bias $\\pm\\sigma$} & \\textbf{MAE} & \\textbf{$\\rho$} & \\textbf{bias $\\pm\\sigma$} & \\textbf{MAE} & \\multicolumn{1}{c}{\\textbf{$\\rho$}} & \\textbf{bias $\\pm\\sigma$} & \\textbf{MAE} \\\\\n\t\t\t\\hline\n\t\t\t\n\t\t\tauto-only & 0.995 & -4.4 $\\pm$7.0 & 5.7 & 0.927 & 5.0 $\\pm$7.1 & 5.8 & 0.962 & -6.4 $\\pm$16.2 & 11.9 & 0.878 & 5.8 $\\pm$8.7 & 8.0 & 0.979 & -6.4 $\\pm$10.6 & 9.5 \\\\\n\t\t\t\n\t\t\tauto$+$simulation & 0.998 & -3.9 $\\pm$5.2 & 4.8 & 0.989 & 2.3 $\\pm$2.9 & 2.9 & 0.984 & -3.7 $\\pm$10.4 & 6.8 & 0.954 & 2.7 $\\pm$5.5 & 4.5 & 0.983 & -5.5 $\\pm$9.6 & 8.1 \\\\\n\t\t\n\t\t\t\n\t\t\tauto$+$expert & 0.996 & -4.3 $\\pm$6.5 & 5.5 & 0.968 & 2.7 $\\pm$4.8 & 4.3 & 0.976 & -3.2 $\\pm$12.9 & 8.3 & 0.883 & 5.1 $\\pm$8.6 & 7.7 & 0.980 & -6.2 $\\pm$10.2 & 9.1 \\\\\n\t\t\t\t\\hline\n\t\t\\end{tabular}\n\t}\n\t\n\\end{table*}\n\n\\vspace{1ex}\n\\noindent \\textbf{Detection of slices with segmentation failures}: To evaluate detection performance w.r.t. slices containing segmentation failures precision-recall curves for each combination of model architecture and loss function using e-maps (Figure~\\ref{fig_prec_rec_slice_detection}, left) or b-maps (Figure~\\ref{fig_prec_rec_slice_detection}, right) are shown. The results show that detection performance of slices containing segmentation failures is slightly better for all models when using e-maps. Furthermore, the detection network achieves highest performance using uncertainty maps obtained from the DN model and the lowest when exploiting e- or b-maps obtained from the DRN model. Table~\\ref{table_evaluation_slice_detection} shows the average precision of detected slices with segmentation failures per patient, as well as the average percentage of slices that do contain segmentation failures (reference for detection task). The results illustrate that these measures are positively correlated i.e. that precision of detected slices in a patient volume is higher if the volume contains more slices that need correction. 
On average the DN model generates cardiac segmentations that contain more slices with at least one segmentation failure compared to U-net (ranks second) and DRN (ranks third). A higher number of detected slices containing segmentation failures implies an increased workload for manual correction.\n\n\\subsection{Calibration of uncertainty maps} \\label{result_eval_quality_umaps}\n\nFigure~\\ref{fig_risk_cov_comparison} shows risk-coverage curves for each combination of model architectures, uncertainty maps and loss functions (Figure~\\ref{fig_risk_cov_comparison} left: CE or Brier loss, Figure~\\ref{fig_risk_cov_comparison} right: soft-Dice). The results show an effect of the loss function on slope and convergence of the curves. Segmentation errors of models trained with the soft-Dice loss are less frequently covered by higher uncertainties than models trained with CE or Brier loss (steeper slope and lower minimum are better). This difference is more pronounced for e-maps. Models trained with the CE or Brier loss only slightly differ concerning convergence and their slopes are approximately identical. In contrast, the curves of the models trained with the soft-Dice differ regarding their slope and achieved minimum. Comparing e- and b-map of the DN-SD and U-net-SD models the results reveal that the curve for b-map has a steeper slope and achieves a lower minimum compared to the e-map. For the DRN-SD model these differences are less striking. In general for a specific combination of model and loss function the risk-coverage curves using b-maps achieve a lower minimum compared to e-maps.\n\n\\begin{figure*}[t]\n\t\\centering\n\t\\includegraphics[width=4.7in, height=2.8in]{figures\/risk_cov\/cov_risk_curve_both_seg_errors.pdf}\n\t\n\t\\caption{Comparison of risk-coverage curves for different combination of model architectures, loss functions and uncertainty maps. Results are separated for loss functions (left cross-entropy and Brier, right soft-Dice loss). \\num{100}\\% coverage means that none of the voxels is discarded based on its uncertainty whereas a coverage of \\num{0}\\% denotes the scenario in which all predictions are replaced by their reference labels. Note, all models were trained with two different loss functions (1) soft-Dice (SD) for all models (2) cross-entropy (CE) for DRN and U-net and Brier loss for DN.}\n\t\\label{fig_risk_cov_comparison}\n\\end{figure*}\n\n\\begin{table*}\n\t\\caption{Effect of number of Monte Carlo (MC) samples on segmentation performance in terms of (a) Dice coefficient (DC) and (b) Hausdorff Distance (HD) (mean $\\pm$ standard deviation). Higher DC and lower HD is better. 
Abbreviations: Cross-Entropy (CE), Dilated Residual Network (DRN) and Dilated Network (DN).} \n\t\\label{table_seg_perf_per_samples}\n\t\\small\n\t\\centering\n\t\\subfloat[Dice coefficient]{\n\t\t\\begin{tabular}{|c C{1.5cm} C{1.5cm} c|}\n\t\t\t\\hline\n\t\t\t\\textbf{\\thead{Number of \\\\ MC samples}} & DRN-CE & U-net-CE & DN-soft-Dice\\\\\n\t\t\t\\hline\n\t\t\t1 & 0.894$\\pm$0.07 & 0.896$\\pm$0.07 & 0.871$\\pm$0.09 \\\\\n\t\t\t3 & 0.900$\\pm$0.07 & 0.901$\\pm$0.07 & 0.883$\\pm$0.08 \\\\\n\t\t\t5 & 0.902$\\pm$0.07 & 0.901$\\pm$0.07 & 0.887$\\pm$0.08 \\\\\n\t\t\t7 & 0.903$\\pm$0.07 & 0.901$\\pm$0.07 & 0.888$\\pm$0.08 \\\\\n\t\t\t10 & 0.904$\\pm$0.06 & 0.902$\\pm$0.07 &0.890$\\pm$0.08 \\\\\n\t\t\t20 & 0.904$\\pm$0.07 & 0.902$\\pm$0.07 & 0.890$\\pm$0.08 \\\\\n\t\t\t30 & 0.904$\\pm$0.07 & 0.902$\\pm$0.07 & 0.891$\\pm$0.08 \\\\\n\t\t\t60 & 0.904$\\pm$0.07 & 0.902$\\pm$0.07 & 0.891$\\pm$0.08 \\\\\n\t\t\t\\hline\n\t\t\\end{tabular}\n\t}\n\t\\subfloat[Hausdorff Distance]{\n\t\t\\begin{tabular}{|c C{1.5cm} C{1.5cm} c|}\n\t\t\t\\hline\n\t\t\t\\textbf{\\thead{Number of \\\\ MC samples}}& DRN-CE & U-net-CE & DN-soft-Dice \\\\\n\t\t\t\\hline\n\t\t\t1 & 9.88$\\pm$5.76 & 11.79$\\pm$8.23 & 13.54$\\pm$7.14 \\\\\n\t\t\t3 & 9.70$\\pm$6.13 & 11.40$\\pm$7.78 & 12.71$\\pm$6.79 \\\\\n\t\t\t5 & 9.54$\\pm$6.07 & 11.37$\\pm$7.81 & 12.06$\\pm$6.29 \\\\\n\t\t\t7 & 9.38$\\pm$5.86 & 11.29$\\pm$7.86 & 12.08$\\pm$6.38 \\\\\n\t\t\t10 & 9.38$\\pm$5.91 & 11.24$\\pm$7.71 & 11.85$\\pm$6.34 \\\\\n\t\t\t20 & 9.37$\\pm$5.83 & 11.27$\\pm$7.79 & 11.90$\\pm$6.52 \\\\\n\t\t\t30 & 9.39$\\pm$5.91 & 11.32$\\pm$7.93 & 11.90$\\pm$6.48 \\\\\n\t\t\t60 & 9.39$\\pm$5.93 & 11.22$\\pm$7.83 & 11.89$\\pm$6.56 \\\\\n\t\t\t\\hline\n\t\t\\end{tabular}\t\n\t}\n\\end{table*}\n\n\\subsection{Correction of automatically identified segmentation failures} \\label{results_combined_approach}\n\n\\textbf{Simulated correction:} The results listed in Table~\\ref{table_overall_segmentation_performance} and \\ref{table_cardiac_function_indices} show that the proposed method consisting of segmentation followed by simulated manual correction of detected segmentation failures delivers accurate segmentation for all tissues over ED and ES points. Correction of detected segmentation failures improved the performance in terms of DC, HD and clinical metrics for all combinations of model architectures, loss functions and uncertainty measures. Focusing on the DC after correction of detected segmentation failures the results reveal that performance differences between evaluated models decreased compared to the segmentation-only task. This effect is less pronounced for HD where the DRN network clearly achieved superior results in the segmentation-only and combined approach. The DN performs the least of all models but achieves the highest absolute DC performance improvements in the combined approach for RV at ES. Overall, the results in Table~\\ref{table_overall_segmentation_performance} disclose that improvements attained by the combined approach are almost all statistically significant ($p \\leq 0.05$) at ES and frequently at ED (\\num{96}\\% resp. \\num{83}\\% of the cases). Moreover, improvements are in \\num{99}\\% of the cases statistically significant for HD compared to \\num{81}\\% of the cases for DC.\n\nResults in terms of clinical metrics shown in Table~\\ref{table_cardiac_function_indices} are inline with these findings. 
We observe that segmentation followed by simulated manual correction of detected segmentation failures resulted in considerably higher accuracy for LV and RV ejection fraction. Achieved improvements for clinical metrics are only statistically significant ($p \\leq 0.05$) in one case for RV ejection fraction.\t\n\nIn general, the effect of correction of detected segmentation failures is more pronounced in cases where the segmentation-only approach achieved relatively low accuracy (e.g. DN-SD for RV at ES). Furthermore, performance gains are largest for RV and LV at ES and for ejection fraction of both ventricles.\n\n\n\\begin{figure*}[!t]\n\t\\captionsetup[subfigure]{justification=centering}\n\t\\centering\n\t\\subfloat[]{\\includegraphics[width=6in]{figures\/qualitative\/sim_corr\/patient018_slice07_ES_bmap_acorr.pdf}%\n\t\t\\label{fig_qual_result_sim_corr_example1}}\n\t\n\t\\subfloat[]{\\includegraphics[width=6in]{figures\/qualitative\/sim_corr\/patient070_slice05_ED_emap_acorr.pdf}%\n\t\t\\label{fig_qual_result_sim_corr_example2}}\n\t\n\t\\subfloat[]{\\includegraphics[width=6in]{figures\/qualitative\/sim_corr\/patient081_slice01_ES_emap_acorr.pdf}%\n\t\t\\label{fig_qual_result_sim_corr_example3}}\n\t\\caption{Three patients showing results of combined segmentation and detection approach consisting of segmentation followed by simulated manual correction of detected segmentation failures. First column shows MRI (top) and reference segmentation (bottom). Results for automatic segmentation and simulated manual correction respectively achieved by: Dilated Network (DN-Brier, \\num{2}$^{nd}$ and \\num{5}$^{th}$ columns); Dilated Residual Network (DRN-soft-Dice, \\num{3}$^{rd}$ and \\num{6}$^{th}$ columns); and U-net (soft-Dice, \\num{4}$^{th}$ and \\num{7}$^{th}$ columns).}\n\t\\label{fig_seg_detection_qualitative_results}\n\\end{figure*}\n\n\\begin{figure*}[!t]\n\t\\captionsetup[subfigure]{justification=centering}\n\t\\centering\n\t\\subfloat[]{\\includegraphics[width=3.3in]{figures\/qualitative\/man_corr\/with_region_contour\/patient048_slice02_mcorr.pdf}%\n\t\t\\label{fig_qual_result_man_corr_example1} \\hspace{3ex}}\n\t\\subfloat[]{\\includegraphics[width=3.3in]{figures\/qualitative\/man_corr\/with_region_contour\/patient091_slice01_mcorr.pdf}%\n\t\t\\label{fig_qual_result_man_corr_example2}}\n\t\n\t\\subfloat[]{\\includegraphics[width=3.3in]{figures\/qualitative\/man_corr\/with_region_contour\/patient075_slice06_mcorr.pdf}%\n\t\t\\label{fig_qual_result_man_corr_example3} \\hspace{3ex}}\n\t\\subfloat[]{\\includegraphics[width=3.3in]{figures\/qualitative\/man_corr\/with_region_contour\/patient006_slice01_mcorr.pdf}%\n\t\t\\label{fig_qual_result_man_corr_example4}} \n\t\n\t\\caption{Four patients showing results of combined segmentation and detection approach consisting of segmentation followed by manual expert correction of detected segmentation failures. Expert was only allowed to adjust the automatic segmentations in regions where the detection network predicted segmentation failures (orange contour shown in 2$^{nd}$ column). Automatic segmentations were generated by a U-net trained with the cross-entropy loss. Segmentation failure detection was performed using entropy maps.}\n\t\\label{fig_qualitative_results_man_corr}\n\\end{figure*}\n\nThe best overall performance is achieved by the DRN model trained with cross-entropy loss while exploiting entropy maps in the detection task. 
Moreover, the proposed two-step approach attained slightly better results using Bayesian maps compared to entropy maps. \n\n\vspace{1ex}\n\textbf{Manual correction}: Table~\ref{table_manual_corr_performance} lists results for the combined automatic segmentation and detection approach followed by \textit{manual} correction of detected segmentation failures by an expert. The results show that this correction led to improved segmentation performance in terms of DC, HD and clinical metrics. Improvements in terms of HD are statistically significant ($p \leq 0.05$) in \num{50} percent of the cases and are most pronounced for the RV and LV at end-systole. \n\nQualitative examples of the proposed approach are visualized in Figures~\ref{fig_seg_detection_qualitative_results} and \ref{fig_qualitative_results_man_corr} for simulated correction and manual correction of the automatically detected segmentation failures, respectively. For the illustrated cases, (simulated) manual correction of detected segmentation failures leads to increased segmentation performance.\nOn average, manual correction of automatic segmentations took less than \num{2} minutes for the ED and ES volumes of one patient, compared to the \num{20} minutes typically needed by an expert for the same task.\n\n\n\section{Ablation Study}\n\nTo demonstrate the effect of different hyper-parameters of the method, a number of experiments were performed. These are detailed below.\n\n\subsection{Impact of number of Monte Carlo samples on segmentation performance}\n\nTo investigate the impact of the number of Monte Carlo (MC) samples on the segmentation performance, validation experiments were performed for all three segmentation architectures (Dilated Network, Dilated Residual Network and U-net) using $T$ $\in \{1, 3, 5, 7, 10, 20, 30, 60\}$ samples. Results of these experiments are listed in Table~\ref{table_seg_perf_per_samples}. We observe that segmentation performance started to converge with as few as \num{7} samples. Performance improvements obtained with an increased number of MC samples were largest for the Dilated Network. Overall, using more than \num{10} samples did not increase segmentation performance. Hence, in the presented work $T$ was set to \num{10}.\n\n\subsection{Effect of patch-size on detection performance}\n\nThe combined segmentation and detection approach detects segmentation failures at the region level. To investigate the effect of patch-size on detection performance, three different patch-sizes were evaluated: \num{4}$\times$\num{4}, \num{8}$\times$\num{8}, and \num{16}$\times$\num{16} voxels. The results are shown in Figure~\ref{fig_grid_compare}. We can observe in Figure~\ref{fig_fn_grid_froc_voxel_detection} that larger patch-sizes result in a lower number of false positive regions. This is likely caused by the smaller number of regions per image when larger patch-sizes are used. Furthermore, Figure~\ref{fig_fn_grid_prec_rec_slice_detection} reveals that slice detection performance is only slightly influenced by patch-size. To ease manual inspection and correction by an expert, it is desirable to keep the region size, i.e. the patch-size, small. 
Therefore, in the experiments a patch-size of \\num{8}$\\times$\\num{8} voxels was used.\n\n\\begin{figure*}[t]\n\t\\center\n\t\\subfloat[]{\\includegraphics[width=3.4in, height=1.7in]{figures\/dt\/compare_grid_froc_voxel_detection_rate.pdf}%\n\t\t\\label{fig_fn_grid_froc_voxel_detection}}\n\t\\subfloat[]{\\includegraphics[width=3.4in, height=1.7in]{figures\/dt\/compare_grid_slices_prec_recall.pdf}%\n\t\t\\label{fig_fn_grid_prec_rec_slice_detection}} \n\t\n\t\\caption{Detection performance for three different patch-sizes specified in voxels. (a) Sensitivity for detection of segmentation failures on voxel level (y-axis) versus number of false positive image regions (x-axis). (b) Precision-recall curve for detection of slices containing segmentation failures (where AP denotes average precision). Results are split between entropy and Bayesian uncertainty maps. In the experiments patch-size was set to \\num{8}$\\times$\\num{8} voxels.}\n\t\\label{fig_grid_compare}\n\\end{figure*}\n\n\\subsection{Impact of tolerance threshold on number of segmentation failures}\n\nTo investigate the impact of the tolerance threshold separating segmentation failures and tolerable segmentation errors, we calculated the ratio of the number of segmentation failures and all errors i.e. the sum of tolerable errors and segmentation failures. Figure~\\ref{fig_threshold_compare} shows the results. We observe that at least half of the segmentation failures are located within a tolerance threshold i.e. distance of two to three voxels of the target structure boundary as defined by the reference annotation. Furthermore, the mean percentage of failures per volume is considerably lower for the Dilated Residual Network (DRN) and highest for the Dilated Network. This result is inline with our earlier finding (see Table~\\ref{table_evaluation_slice_detection}) that average percentage of slices that do contain segmentation failures is lowest for the DRN model.\n\n\\section{Discussion}\n\nWe have described a method that combines automatic segmentation and assessment of uncertainty in cardiac MRI with detection of image regions containing segmentation failures. The results show that combining automatic segmentation with manual correction of detected segmentation failures results in higher segmentation performance.\nIn contrast to previous methods that detected segmentation failures per patient or per structure, we showed that it is feasible to detect segmentation failures per image region. In most of the experimental settings, simulated manual correction of detected segmentation failures for LV, RV and LVM at ED and ES led to statistically significant improvements. These results represent the upper bound on the maximum achievable performance\tfor the manual expert correction task. Furthermore, results show that manual expert correction of detected segmentation failures led to consistently improved segmentations. However, these results are not on par with the simulated expert correction scenario. This is not surprising because inter-observer variability is high for the presented task and annotation protocols may differ between clinical environments. 
Moreover, qualitative results of the manual expert correction reveal that manual correction of the detected segmentation failures can prevent anatomically implausible segmentations (see Figure~\ref{fig_qualitative_results_man_corr}).\nTherefore, the presented approach can potentially simplify and accelerate the correction process and has the capacity to increase the trustworthiness of existing automatic segmentation methods in daily clinical practice.\n\n\nThe proposed combined segmentation and detection approach was evaluated using three state-of-the-art deep learning segmentation architectures. The results suggest that our approach is generic and applicable to different model architectures. Nevertheless, we observe noticeable differences between the different combinations of model architectures, loss functions and uncertainty measures. In the segmentation-only task, the DRN clearly outperforms the other two models in the evaluation of the boundary of the segmented structure. Moreover, qualitative analysis of the automatic segmentation masks suggests that the DRN generates anatomically implausible and fragmented segmentations less often than the other models. We assume that clinical experts would prefer such segmentations although they are not always perfect. Furthermore, even though the DRN and U-net achieve similar performance with regard to DC, we assume that less fragmented segmentation masks would increase the trustworthiness of the methods. \n\n\begin{figure*}[t]\n\t\center\n\t\subfloat[]{\n\t\t\includegraphics[width=3.in]{figures\/dt\/tp_per_threshold_out_struc.pdf}%\n\t\t\label{fig_threshold_struc_out_compare}\n\t}\n\t\subfloat[]{\n\t\t\includegraphics[width=3.in]{figures\/dt\/tp_per_threshold_in_struc.pdf}%\n\t\t\label{fig_threshold_struc_in_compare}\n\t}\n\t\caption{Mean percentage of the segmentation failures per volume (y-axis) in the set of all segmentation errors (tolerable errors$+$segmentation failures) depending on the tolerance threshold (x-axis). The red dashed vertical line indicates the threshold value used throughout the experiments. Results are split between segmentation errors located (a) outside and (b) inside the target structure. Each figure contains a curve for U-net, Dilated Network (DN) and Dilated Residual Network (DRN) trained with the soft-Dice (SD) loss. Segmentation errors located in slices above the base or below the apex are always included in the set of segmentation failures and therefore, they are independent of the applied tolerance threshold.}\n\t\label{fig_threshold_compare}\n\end{figure*}\n\nIn agreement with our preliminary work\cite{sander2019towards}, we found that uncertainty maps obtained from a segmentation model trained with the soft-Dice loss are less well calibrated than those obtained from models trained with one of the other two loss functions (cross-entropy and Brier). Nevertheless, the results of the combined segmentation and detection approach showed that a lower degree of uncertainty calibration only slightly deteriorated the detection performance of segmentation failures for the larger segmentation models (DRN and U-net) when exploiting uncertainty information from e-maps. Hendrycks and Gimpel \cite{hendrycks2016baseline} showed that softmax probabilities generated by deep learning networks have poor direct correspondence to confidence. However, in agreement with Geifman et al. \cite{geifman2017selective} we presume that the probabilities, and hence the corresponding entropies, obtained from the softmax function are ranked consistently, i.e. 
entropy can potentially be used as a relative uncertainty measure in deep learning. In addition, we detect segmentation failures per image region and therefore our approach does not require perfectly calibrated uncertainty maps. Furthermore, the results of the combined segmentation and detection approach revealed that the detection performance of segmentation failures using b-maps is almost independent of the loss function used to train the segmentation model. In line with Jungo et al. \cite{jungo2019assessing}, we assume that enabling MC-dropout during testing and computing the mean softmax probabilities per class leads to better calibrated probabilities and b-maps. This assumption is in agreement with Srivastava et al.~\cite{srivastava2014dropout}, in which a CNN with dropout enabled at test time is interpreted as an ensemble of models.\n\nQuantitative evaluation in terms of Dice coefficient and Hausdorff distance reveals that the proposed combined segmentation and detection approach leads to a significant performance increase. However, the results also demonstrate that the correction of the detected failures allowed by the combined approach does not lead to a statistically significant improvement in clinical metrics. This is not surprising because state-of-the-art automatic segmentation methods are not expected to lead to large volumetric errors \cite{bernard2018deep} and standard clinical measures are not sensitive to small segmentation errors. Nevertheless, errors of the current state-of-the-art automatic segmentation methods may lead to anatomically implausible segmentations \cite{bernard2018deep} that may cause distrust in clinical application. Besides increasing the trustworthiness of current state-of-the-art segmentation methods for cardiac MRI, improved segmentations are a prerequisite for advanced functional analysis of the heart, e.g. motion analysis\cite{bello2019deep}, and for very detailed morphological analysis, such as of myocardial trabeculae in adults\cite{meyer2020genetic}.\n\nFor the ACDC dataset used in this manuscript, Bernard et al.\cite{bernard2018deep} reported inter-observer variability ranging from \num{4} to \SI{14.1}{\milli\meter} (equivalent to on average \num{2.6} to \num{9} voxels). To define the set of segmentation failures, we employed a strict tolerance threshold on the distance metric to distinguish between tolerated segmentation errors and segmentation failures (see Ablation study). A stricter tolerance threshold was used because the thresholding is performed in \num{2}D, while evaluation of segmentation is done in \num{3}D. The large slice thickness in cardiac MR could lead to a discrepancy between the two. As a consequence of this strict threshold, the results listed in Table~\ref{table_evaluation_slice_detection} show that almost all patient volumes contain at least one slice with a segmentation failure. This might render the approach less feasible in clinical practice. Increasing the threshold decreases the number of segmentation failures and slices containing segmentation failures (see Figure~\ref{fig_threshold_compare}) but also lowers the upper bound on the maximum achievable performance. Therefore, to show the potential of our proposed approach, we chose to apply a strict tolerance threshold. Nevertheless, we realize that, although manual correction of detected segmentation failures leads to increased segmentation accuracy, the precision-recall performance is limited (see Figure~\ref{fig_dt_perf_all_models}) and should therefore be a focus of future work. 
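As an illustration of the tolerance-threshold mechanism discussed above, the following is a minimal sketch of how per-voxel segmentation errors could be split into tolerated errors and segmentation failures using a Euclidean distance transform of the reference mask. It is illustrative only and not the exact implementation used in this work: the function and parameter names are hypothetical, it operates on a single binary structure in a single slice, and it omits the special handling of slices above the base or below the apex, which are always counted as failures.

\\begin{verbatim}
import numpy as np
from scipy.ndimage import distance_transform_edt

def split_errors(auto_mask, ref_mask, tolerance_voxels=3.0):
    # Per-voxel errors of the automatic mask w.r.t. the reference mask
    # (both 2D binary arrays for one structure in one slice).
    errors = auto_mask != ref_mask

    # Distance of every voxel to the boundary of the reference structure:
    # for background voxels the distance to the structure, for structure
    # voxels the distance to the background.
    dist_outside = distance_transform_edt(ref_mask == 0)
    dist_inside = distance_transform_edt(ref_mask == 1)
    dist_to_boundary = np.where(ref_mask == 1, dist_inside, dist_outside)

    # Errors farther from the reference boundary than the tolerance are
    # treated as segmentation failures; the remaining errors are tolerated.
    failures = errors & (dist_to_boundary > tolerance_voxels)
    tolerated = errors & ~failures
    return tolerated, failures
\\end{verbatim}

Applied per anatomical structure and per slice, such failure masks could provide the reference labels for the patch-based detection task described above.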
\n\nThe presented patch-based detection approach combined with (simulated) manual correction can in principle lead to stitching artefacts in the resulting segmentation masks. A voxel-based detection approach could potentially solve this. However, voxel-based detection methods are more challenging to train due to the very small number of voxels in an image belonging to the set of segmentation failures.\n\nEvaluation of the proposed approach for \\num{12} possible combinations of segmentation models (three), loss functions (two) and uncertainty maps (two) resulted in an extensive number of experiments. Nevertheless, future work could extend evaluation to other segmentation models, loss functions or combination of losses. Furthermore, our approach could be evaluated using additional uncertainty estimation techniques e.g. by means of ensembling of networks \\cite{lakshminarayanan2017simple} or variational dropout \\cite{kingma2015variational}. In addition, previous work by Kendall and Gal~\\cite{kendall2017uncertainties}, Tanno et al. \\cite{tanno2019uncertainty} has shown that the quality of uncertainty estimates can be improved if model (epistemic) and data (aleatoric) uncertainty are assessed simultaneously with separate measures. The current study focused on the assessment of model uncertainty by means of MC-dropout and entropy which is a combination of epistemic and aleatoric uncertainty. Hence, future work could investigate whether additional estimation of aleatoric uncertainty improves the detection of segmentation failures. \n\nFurthermore, to develop an end-to-end approach future work could incorporate the detection of segmentation failures into the segmentation network. Besides, adding the automatic segmentations to the input of the detection network could increase the detection performance. \n\nFinally, the proposed approach is not specific to cardiac MRI segmentation. Although data and task specific training would be needed the approach could potentially be applied to other image modalities and segmentation tasks.\n\n\\section{Conclusion}\n\nA method combining automatic segmentation and assessment of segmentation uncertainty in cardiac MR with detection of image regions containing local segmentation failures has been presented. The combined approach, together with simulated and manual correction of detected segmentation failures, increases performance compared to segmentation-only. The proposed method has the potential to increase trustworthiness of current state-of-the-art segmentation methods for cardiac MRIs. \n\n\\section*{Data and code availability}\nAll models were implemented using the PyTorch\\cite{paszke2017automatic} framework and trained on one Nvidia GTX Titan X GPU with \\num{12} GB memory. The code to replicate the study is publicly available at \\href{https:\/\/github.com\/toologicbv\/cardiacSegUncertainty}{https:\/\/github.com\/toologicbv\/cardiacSegUncertainty}.\n\n\n\\section*{Acknowledgements}\n\nThis study was performed within the DLMedIA program (P15-26) funded by Dutch Technology Foundation with participation of PIE Medical Imaging.\n\n\\section*{Author contributions statement}\n\nJ.S., B.D.V. and I.I. designed the concept of the study. J.S. conducted the experiments. J.S., B.D.V. and I.I. wrote the manuscript. All authors reviewed the manuscript. \n\n\\section*{Additional information}\n\n\\textbf{Competing interests}: The authors declare that they have no competing interests. 
\n\n\\ifCLASSOPTIONcaptionsoff\n\\newpage\n\\fi\n\n\n\n\n\n\n\n\\bibliographystyle{IEEEtran}\n\n\\section*{Introduction}\n\t\n\t\\IEEEPARstart{T}o perform diagnosis and prognosis of cardiovascular disease (CVD) medical experts depend on the reliable quantification of cardiac function \\cite{white1987left}. Cardiac magnetic resonance imaging (CMRI) is currently considered the reference standard for quantification of ventricular volumes, mass and function \\cite{grothues2002comparison}. Short-axis CMR imaging, covering the entire left and right ventricle (LV resp. RV) is routinely used to determine quantitative parameters of both ventricle's function. This requires manual or semi-automatic segmentation of corresponding cardiac tissue structures for end-diastole (ED) and end-systole (ES). \n\t\n\tExisting semi-automated or automated segmentation methods for CMRIs regularly require (substantial) manual intervention caused by lack of robustness. Manual or semi-automatic segmentation across a complete cardiac cycle, comprising \\num{20} to \\num{40} phases per patient, enables computation of parameters quantifying cardiac motion with potential diagnostic implications but due to the required workload, this is practically infeasible. Consequently, segmentation is often performed at end-diastole and end-systole precluding comprehensive analysis over complete cardiac cycle. \n\t\n\tRecently\\cite{litjens2017survey, leiner2019machine}, deep learning segmentation methods have shown to outperform traditional approaches such as those exploiting level set, graph-cuts, deformable models, cardiac atlases and statistical models \\cite{petitjean2011review, peng2016review}. However, recent comparison of a number of automatic methods showed that even the best performing methods generated anatomically implausible segmentations in more than 80\\% of the CMRIs \\cite{bernard2018deep}. Such errors do not occur when experts perform segmentation. To achieve acceptance in clinical practice these shortcomings of the automatic approaches need to be alleviated by further development. This can be achieved by generating more accurate segmentation result or by development of approaches that automatically detect segmentation failures. \n\t\n\tIn manual and automatic segmentation of short-axis CMRI, largest segmentation inaccuracies are typically located in the most basal and apical slices due to low tissue contrast ratios \\cite{suinesiaputra2015quantification}. To increase segmentation performance, several methods have been proposed \\cite{tan2018fully, zheng20183, savioli2018automated, bai2018automated}. Tan et al. \\cite{tan2018fully} used a convolutional neural network (CNN) to regress anatomical landmarks from long-axis views (orthogonal to short-axis). They exploited the landmarks to determine most basal and apical slices in short-axis views and thereby constraining the automatic segmentation of CMRIs. This resulted in increased robustness and performance. Other approaches leverage spatial \\cite{zheng20183} or temporal \\cite{savioli2018automated, bai2018automated} information to increase segmentation consistency and performance in particular in the difficult basal and apical slices.\n\t\n\tAn alternative approach to preventing implausible segmentation results is by incorporating knowledge about the highly constrained shape of the heart. Oktay et al. \\cite{oktay2017anatomically} developed an anatomically constrained neural network (NN) that infers shape constraints using an auto-encoder during segmentation training. 
Duan et al. \\cite{duan2019automatic} developed a deep learning segmentation approach for CMRIs that used atlas propagation to explicitly impose a shape refinement. This was especially beneficial in the presence of image acquisition artifacts. Recently, Painchaud et al. \\cite{painchaud2019cardiac} developed a post-processing approach to detect and transform anatomically implausible cardiac segmentations into valid ones by defining cardiac anatomical metrics. Applying their approach to various state-of-the-art segmentation methods the authors showed that the proposed method provides strong anatomical guarantees without hampering segmentation accuracy. \n\t\n\t\\begin{figure}[t]\n\t\t\\center\n\t\t\\includegraphics[width=6in, height=3.5in]{figures\/overview_approach.pdf}%\n\t\t\n\t\t\\caption{Overview of proposed two step approach. Step 1 (left): Automatic CNN segmentation of CMR images combined with assessment of segmentation uncertainties. Step 2 (right): Differentiate tolerated errors from segmentation failures (to be detected) using distance transform maps based on reference segmentations. Detection of image regions containing segmentation failures using CNN which takes CMR images and segmentation uncertainties as input. Manual corrected segmentation failures (green) based on detected image regions.}\n\t\t\\label{fig_overview_method}\n\t\\end{figure}\n\t\n\t\n\tA different research trend focuses on detecting segmentation failures, i.e. on automated quality control for image segmentation. These methods can be divided in those that predict segmentation quality using image at hand or corresponding automatic segmentation result, and those that assess and exploit predictive uncertainties to detect segmentation failure. \n\t\n\tRecently, two methods were proposed to detect segmentation failures in large-scale cardiac MR imaging studies to remove these from subsequent analysis\\cite{alba2018automatic, robinson2019automated}. Robinson et al. \\cite{robinson2019automated} using the approach of Reverse Classification Accuracy (RCA) \\cite{valindria2017reverse} predicted CMRI segmentation metrics to detect failed segmentations. They achieved good agreement between predicted metrics and visual quality control scores. Alba et al. \\cite{alba2018automatic} used statistical, pattern and fractal descriptors in a random forest classifier to directly detect segmentation contour failures without intermediate regression of segmentation accuracy metrics. \n\t\n\tMethods for automatic quality control were also developed for other applications in medical image analysis. Frounchi et al. \\cite{frounchi2011automating} extracted features from the segmentation results of the left ventricle in CT scans. Using the obtained features the authors trained a classifier that is able to discriminate between consistent and inconsistent segmentations. To distinguish between acceptable and non-acceptable segmentations Kohlberger el al. \\cite{kohlberger2012evaluating} proposed to directly predict multi-organ segmentation accuracy in CT scans using a set of features extracted from the image and corresponding segmentation. \n\t\n\tA number of methods aggregate voxel-wise uncertainties into an overall score to identify insufficiently accurate segmentations. For example, Nair et al. \\cite{nair2018exploring} computed an overall score for target segmentation structure from voxel-wise predictive uncertainties. The method was tested for detection of Multiple Sclerosis in brain MRI. 
The authors showed that rejecting segmentations with high uncertainty scores led to increased detection accuracy indicating that correct segmentations contain lower uncertainties than incorrect ones. Similarly, to assess segmentation quality of brain MRIs Jungo et al. \\cite{jungo2018uncertainty} aggregated voxel-wise uncertainties into a score per target structure\n\tand showed that the computed uncertainty score enabled identification of erroneous segmentations.\n\t\n\tUnlike approaches evaluating segmentation directly, several methods use predictive uncertainties to predict segmentation metrics and thereby evaluate segmentation performance \\cite{roy2019bayesian, devries2018leveraging}. For example, Roy et al. \\cite{roy2019bayesian} aggregated voxel-wise uncertainties into four scores per segmented structure in brain MRI. The authors showed that computed scores can be used to predict the Intersection over Union and hence, to determine segmentation accuracy. \n\tSimilar idea was presented by DeVries et al. \\cite{devries2018leveraging} that predicted segmentation accuracy per patient using an auxiliary neural network that leverages the dermoscopic image, automatic segmentation result and obtained uncertainties. The researchers showed that a predicted segmentation accuracy is useful for quality control.\n\t\n\tWe build on our preliminary work where automatic segmentation of CMR images using a dilated CNN was combined with assessment of two measures of segmentation uncertainties \\cite{sander2019towards}. For the first measure the multi-class entropy per voxel (entropy maps) was computed using the output distribution. For the second measure Bayesian uncertainty maps were acquired using Monte Carlo dropout (MC-dropout) \\cite{gal2016dropout}. In \\cite{sander2019towards} we showed that the obtained uncertainties almost entirely cover the regions of incorrect segmentation i.e. that uncertainties are calibrated. In the current work we extend our preliminary research in two ways. First, we assess impact of CNN architecture on the segmentation performance and calibration of uncertainty maps by evaluating three existing state-of-the-art CNNs. Second, we employ an auxiliary CNN (detection network) that processes a cardiac MRI and corresponding spatial uncertainty map (Entropy or Bayesian) to automatically detect segmentation failures. We differentiate errors that may be within the range of inter-observer variability and hence do not necessarily require correction (tolerated errors) from the errors that an expert would not make and hence require correction (segmentation failures). Given that overlap measures do not capture fine details of the segmentation results and preclude us to differentiate two types of segmentation errors, in this work, we define segmentation failure using a metric of boundary distance. In \\cite{sander2019towards} we found that degree of calibration of uncertainty maps is dependent on the loss function used to train the CNN. Nevertheless, in the current work we show that uncalibrated uncertainty maps are useful to detect local segmentation failures. \t\n\tIn contrast to previous methods that detect segmentation failure per-patient or per-structure\\cite{roy2019bayesian, devries2018leveraging}, we propose to detect segmentation failures per image region. We expect that inspection and correction of segmentation failures using image regions rather than individual voxels or images would simplify correction process. 
To show the potential of our approach and demonstrate that combining automatic segmentation with manual correction of the detected segmentation failures per region results in higher segmentation performance we performed two additional experiments. In the first experiment, correction of detected segmentation failures was simulated in the complete data set. In the second experiment, correction was performed by an expert in a subset of images. Using publicly available set of CMR scans from MICCAI 2017 ACDC challenge \\cite{bernard2018deep}, the performance was evaluated before and after simulating the correction of detected segmentation failures as well as after manual expert correction.\n\t\n\t\\section*{Data}\n\t\n\tIn this study data from the MICCAI \\num{2017} Automated Cardiac Diagnosis Challenge (ACDC) \\cite{bernard2018deep} was used. The dataset consists of cardiac cine MR images (CMRIs) from 100 patients uniformly distributed over normal cardiac function and four disease groups: dilated cardiomyopathy, hypertrophic cardiomyopathy, heart failure with infarction, and right ventricular abnormality. Detailed acquisition protocol is described by Bernard et al.~\\cite{bernard2018deep}. Briefly, short-axis CMRIs were acquired with two MRI scanners of different magnetic strengths (\\num{1.5} and \\num{3.0} T). Images were made during breath hold using a conventional steady-state free precession (SSFP) sequence. CMRIs have an in-plane resolution ranging from \\num{1.37} to \\SI{1.68}{\\milli\\meter} (average reconstruction matrix \\num{243} $\\times$ \\num{217} voxels) with slice spacing varying from \\num{5} to \\SI{10}{\\milli\\meter}. Per patient 28 to 40 volumes are provided covering partially or completely one cardiac cycle. Each volume consists of on average ten slices covering the heart. Expert manual reference segmentations are provided for the LV cavity, RV endocardium and LV myocardium (LVM) for all CMRI slices at ED and ES time frames. To correct for intensity differences among scans, voxel intensities of each volume were scaled to the [\\num{0.0}, \\num{1.0}] range using the minimum and maximum of the volume. Furthermore, to correct for differences in-plane voxel sizes, image slices were resampled to \\num{1.4}$\\times\\SI{1.4}{\\milli\\meter}^2$. \n\t\n\t\\begin{figure}[!t]\n\t\t\\captionsetup[subfigure]{justification=centering}\n\t\t\\centering\n\t\n\t\t\\subfloat[]{\\includegraphics[width=5in]{figures\/qualitative\/auto\/patient099_slice02_ES_emap_auto.pdf}%\n\t\t\\label{fig_seg_qual_example1}}\n\t\n\t\t\\subfloat[]{\\includegraphics[width=5in]{figures\/qualitative\/auto\/patient097_slice00_ES_emap_auto.pdf}%\n\t\t\\label{fig_seg_qual_example2}}\n\t\t\n\t\t\\caption{Examples of automatic segmentations generated by different segmentation models for two cardiac MRI scans (rows) at ES at the base of the heart.}\n\t\t\\label{fig_seg_qualitative_results}\n\t\\end{figure}\n\t\n\t\\section*{Methods}\n\t\n\tTo investigate uncertainty of the segmentation, anatomical structures in CMR images are segmented using a CNN. To investigate whether the approach generalizes to different segmentation networks, three state-of-the-art CNNs were evaluated. For each segmentation model two measures of predictive uncertainty were obtained per voxel. Thereafter, to detect and correct local segmentation failures an auxiliary CNN (detection network) that analyzes a cardiac MRI was used. Finally, this leads to the uncertainty map allowing detection of image regions that contain segmentation failures. 
Figure~\\ref{fig_overview_method} visualizes this approach.\n\t\n\t\n\t\\subsection*{Automatic segmentation of cardiac MRI}\n\t\n\tTo perform segmentation of LV, RV, and LVM in cardiac MR images i.e. \\num{2}D CMR scans, three state-of-the-art CNNs are trained. Each of the three networks takes a CMR image as input and has four output channels providing probabilities for the three cardiac structures (LV, RV, LVM) and background. Softmax probabilities are calculated over the four tissue classes. Patient volumes at ED and ES are processed separately. During inference the \\num{2}D automatic segmentation masks are stacked into a \\num{3}D volume per patient and cardiac phase. After segmentation, the largest \\num{3}D connected component for each class is retained and volumes are resampled to their original voxel resolution. Segmentation networks differ substantially regarding architecture, number of parameters and receptive field size. To assess predictive uncertainties from the segmentation models \\textit{Monte Carlo dropout} (MC-dropout) introduced by Gal \\& Ghahramani \\cite{gal2016dropout} is implemented in every network. The following three segmentation networks were evaluated: Bayesian Dilated CNN, Bayesian Dilated Residual Network, Bayesian U-net.\n\t\n\t\\vspace{1ex}\n\t\\noindent \\textbf{Bayesian Dilated CNN (DN)}: The Bayesian DN architecture comprises a sequence of ten convolutional layers. Layers \\num{1} to \\num{8} serve as feature extraction layers with small convolution kernels of size \\num{3}$\\times$\\num{3} voxels. No padding is applied after convolutions. The number of kernels increases from \\num{32} in the first eight layers, to \\num{128} in the final two fully connected classification layers, implemented as \\num{1}$\\times$\\num{1} convolutions. The dilation level is successively increased between layers \\num{2} and \\num{7} from \\num{2} to \\num{32} which results in a receptive field for each voxel of \\num{131}$\\times$\\num{131} voxels, or \\num{18.3}$\\times$ $\\SI{18.3}{\\centi\\meter}^2$. All trainable layers except the final layer use rectified linear activation functions (ReLU). To enhance generalization performance, the model uses batch normalization in layers \\num{2} to \\num{9}. In order to convert the original DN~\\cite{wolterink2017automatic} into a Bayesian DN dropout is added as the last operation in all but the final layer and \\num{10} percent of a layer's hidden units are randomly switched off. \n\t\n\t\\vspace{1ex}\n\t\\noindent \\textbf{Bayesian Dilated Residual Network (DRN)}: The Bayesian DRN is based on the original DRN from Yu et al. \\cite{yu2017dilated} for image segmentation. More specifically, the DRN-D-22\\cite{yu2017dilated} is used which consists of a feature extraction module with output stride eight followed by a classifier implemented as fully convolutional layer with \\num{1}$\\times$\\num{1} convolutions. Output of the classifier is upsampled to full resolution using bilinear interpolation. The convolutional feature extraction module comprises eight levels where the number of kernels increases from \\num{16} in the first level, to \\num{512} in the two final levels. The first convolutional layer in level \\num{1} uses \\num{16} kernels of size \\num{7}$\\times$\\num{7} voxels and zero-padding of size \\num{3}. The remaining trainable layers use small \\num{3}$\\times$\\num{3} voxel kernels and zero-padding of size \\num{1}. Level \\num{2} to \\num{4} use a strided convolution of size \\num{2}. 
To further increase the receptive field, convolutional layers in levels \\num{5}, \\num{6} and \\num{7} use dilation factors of \\num{2}, \\num{4} and \\num{2}, respectively. Furthermore, levels \\num{3} to \\num{6} consist of two residual blocks. All convolutional layers of the feature extraction module are followed by batch normalization, a ReLU function and dropout. Adding dropout and switching off \\num{10} percent of a layer's hidden units converts the original DRN~\\cite{yu2017dilated} into a Bayesian DRN.\n\t\n\t\\vspace{1ex}\n\t\\noindent \\textbf{Bayesian U-net (U-net)}: The standard architecture of the U-net~\\cite{ronneberger2015u} is used. The network is fully convolutional and consists of a contracting, bottleneck and expanding path. The contracting and expanding path each consist of four blocks, i.e. resolution levels, which are connected by skip connections. The first block of the contracting path contains two convolutional layers using a kernel size of \\num{3}$\\times$\\num{3} voxels and zero-padding of size \\num{1}. Downsampling of the input is accomplished by a max pooling operation in blocks \\num{2} to \\num{4} of the contracting path and in the bottleneck, using a kernel of size \\num{2}$\\times$\\num{2} voxels and stride \\num{2}. Upsampling is performed by a transposed convolutional layer in blocks \\num{1} to \\num{4} of the expanding path, using the same kernel size and stride as the max pooling layers. Each downsampling and upsampling layer is followed by two convolutional layers using \\num{3}$\\times$\\num{3} voxel kernels with zero-padding of size \\num{1}. The final convolutional layer of the network acts as a classifier and uses \\num{1}$\\times$\\num{1} convolutions to reduce the number of output channels to the number of segmentation classes. The number of kernels increases from \\num{64} in the first block of the contracting path to \\num{1024} in the bottleneck. In contrast, the number of kernels in the expanding path successively decreases from \\num{1024} to \\num{64}. In deviation from the standard U-net, instance normalization is added to all convolutional layers in the contracting path and ReLU non-linearities are replaced by LeakyReLU functions, because this was found to slightly improve segmentation performance. In addition, to convert the deterministic model into a Bayesian neural network, dropout is added as the last operation in each block of the contracting and expanding path and \\num{10} percent of a layer's hidden units are randomly switched off.\n\t\n\t\\subsection*{Assessment of predictive uncertainties} \\label{uncertainty_maps}\n\tTo detect failures in segmentation masks generated by CNNs at test time, spatial uncertainty maps of the obtained segmentations are generated. For each voxel in the image, two measures of uncertainty are calculated. First, a computationally cheap and straightforward measure of uncertainty is the entropy of the softmax probabilities over the four tissue classes generated by the segmentation network. Using these, normalized entropy maps $\\bf E \\in [0, 1]^{H\\times W}$ (e-map) are computed, where $H$ and $W$ denote the height and width of the original CMRI, respectively.\n\t\n\tSecond, by applying MC-dropout at test time, $T$ samples of the softmax probabilities are obtained per voxel. 
As an overall measure of uncertainty, the mean standard deviation of the softmax probabilities per voxel over all $C$ tissue classes\\label{ref:maximum_variance} is computed as\n\t\n\t\n\t\\begingroup\n\t\\small\n\t\\begin{align}\n\t\\textbf{B} (I)^{(x, y)} &= \\frac{1}{C} \\sum_{c=1}^{C} \\sqrt{\\frac{1}{T-1} \\sum_{t=1}^{T} \\big(p_t(I)^{(x, y, c)} - \\hat{\\mu}^{(x, y, c)} \\big)^2 } \\; ,\n\t\\end{align}\n\t\\endgroup\n\t\n\twhere $\\textbf{B}(I)^{(x, y)} \\in [0, 1]$ denotes the normalized value of the Bayesian uncertainty map (b-map) at position $(x, y)$ in \\num{2}D slice $I$, $C$ is the number of classes, $T$ is the number of samples and $p_t(I)^{(x, y, c)}$ denotes the softmax probability at position $(x, y)$ in image $I$ for class $c$. The predictive mean per class $\\hat{\\mu}^{(x, y, c)}$ of the samples is computed as follows:\n\t\n\t\\begingroup\n\t\\small\n\t\\begin{align}\n\t\\hat{\\mu}^{(x, y, c)} &= \\frac{1}{T} \\sum_{t=1}^{T} p_t(I)^{(x, y, c)} \\; .\n\t\\end{align}\n\t\\endgroup\n\t\n\tIn addition, the predictive mean per class is used to determine the tissue class per voxel.\n\t\n\t\\subsection*{Calibration of uncertainty maps} \n\tIdeally, incorrectly segmented voxels, as defined by the reference labels, should be covered by higher uncertainties than correctly segmented voxels. In such a case, the spatial uncertainty maps are perfectly calibrated. \\textit{Risk-coverage curves}, introduced by Geifman et al.~\\cite{geifman2017selective}, visualize whether incorrectly segmented voxels are covered by higher uncertainties than those that are correctly segmented. Risk-coverage curves convey the effect of avoiding automatic segmentation of voxels above a specific uncertainty value on the reduction of segmentation errors (i.e. risk), while at the same time quantifying the fraction of voxels that remains automatically segmented (i.e. coverage). \n\t\n\tTo generate risk-coverage curves, first, each patient volume is cropped based on a minimal enclosing parallelepiped bounding box that is placed around the reference segmentations to reduce the number of background voxels. Note that this is only performed to simplify the analysis of the risk-coverage curves. Second, voxels of the cropped patient volume are ranked based on their uncertainty value in descending order. Third, to obtain uncertainty threshold values per patient volume, the ranked voxels are partitioned into \\num{100} percentiles based on their uncertainty value. Finally, per patient volume, each uncertainty threshold is evaluated by computing a coverage and a risk measure. Coverage is the percentage of voxels in a patient volume at ED or ES that is automatically segmented. Voxels in a patient volume with uncertainty above the threshold are discarded from automatic segmentation and would be referred to an expert. The number of incorrectly segmented voxels per patient volume is used as a measure of risk. Using bilinear interpolation, risk measures are computed per patient volume for coverage values between \\num{0} and \\num{100} percent.\n\t\n\t\n\t\\subsection*{Detection of segmentation failures}\n\t\n\tTo detect segmentation failures, uncertainty maps are used. However, direct use of the uncertainties is infeasible because many correctly segmented voxels, such as those close to anatomical structure boundaries, have high uncertainty. Hence, an additional patch-based CNN (detection network) is used that takes a cardiac MR image together with the corresponding spatial uncertainty map as input. 
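\n\t\n\tThe spatial uncertainty maps used as input can be obtained from the segmentation output as in the following minimal NumPy sketch of the e-map and b-map computations defined above. The normalization of the entropy by $\\log C$, the array names and the random placeholder data are illustrative assumptions rather than details of the actual implementation.\n\t\n\t\\begin{verbatim}\nimport numpy as np\n\ndef entropy_map(p_softmax, eps=1e-10):\n    # e-map: entropy of the softmax output, shape (H, W, C) -> (H, W);\n    # division by log(C) is an assumed normalization to [0, 1]\n    C = p_softmax.shape[-1]\n    return -np.sum(p_softmax * np.log(p_softmax + eps), axis=-1) / np.log(C)\n\ndef bayesian_map(p_samples):\n    # b-map: mean per-class standard deviation over T MC-dropout samples,\n    # shape (T, H, W, C) -> (H, W); ddof=1 matches the 1/(T-1) term above\n    return np.std(p_samples, axis=0, ddof=1).mean(axis=-1)\n\n# illustrative usage: p_softmax is one regular forward pass, p_samples stacks\n# T = 10 MC-dropout forward passes (random placeholders instead of network output)\nH, W, C, T = 256, 256, 4, 10\np_softmax = np.random.dirichlet(np.ones(C), size=(H, W))\np_samples = np.random.dirichlet(np.ones(C), size=(T, H, W))\ne_map = entropy_map(p_softmax)\nb_map = bayesian_map(p_samples)\nlabels = p_samples.mean(axis=0).argmax(axis=-1)  # tissue class from the predictive mean\n\\end{verbatim}\n\t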
For each patch of \\num{8}$\\times$\\num{8} voxels the network generates a probability indicating whether the patch contains a segmentation failure. In the following, the terms patch and region are used interchangeably.\n\t\n\tThe detection network is a shallow Residual Network (S-ResNet) \\cite{he2016deep} consisting of a feature extraction module with output stride eight followed by a classifier indicating the presence of a segmentation failure. The first level of the feature extraction module consists of two convolutional layers. The first layer uses \\num{16} kernels of \\num{7}$\\times$\\num{7} voxels and zero-padding of size \\num{3}, and the second layer uses \\num{32} kernels of \\num{3}$\\times$\\num{3} voxels and zero-padding of \\num{1} voxel. Levels \\num{2} to \\num{4} each consist of one residual block that contains two convolutional layers with \\num{3}$\\times$\\num{3} voxel kernels and zero-padding of size \\num{1}. The first convolutional layer of each residual block uses a strided convolution of \\num{2} voxels to downsample the input. All convolutional layers of the feature extraction module are followed by batch normalization and a ReLU function. The number of kernels in the feature extraction module increases from \\num{16} in level \\num{1} to \\num{128} in level \\num{4}. The network is a \\num{2}D patch-level classifier and requires that the size of the two input slices is a multiple of the patch size. \\label{patch_size} The final classifier consists of three fully convolutional layers, implemented as \\num{1}$\\times$\\num{1} convolutions, with \\num{128} feature maps in the first two layers. The final layer has two channels followed by a softmax function, which indicates whether the patch contains a segmentation failure. Furthermore, to regularize the model, dropout layers ($p=0.5$) were added between the residual blocks and the fully convolutional layers of the classifier.\n\t\t\n\t\\begin{table*}\n\t\t\\caption{Segmentation performance of different combinations of model architectures, loss functions and evaluation modes (without or with MC dropout enabled during testing) in terms of Dice coefficient (top) and Hausdorff distance (bottom) (mean $\\pm$ standard deviation). Each combination comprises a block of two rows. A row in which the column \\textit{Uncertainty map for detection} indicates e- or b-map shows results for the combined segmentation and detection approach. Numbers accentuated in black\/bold are ranked first in the segmentation-only task whereas numbers accentuated in red\/bold are ranked first in the combined segmentation \\& detection task. The last row states the performance of the winning model in the ACDC challenge (on \\num{100} patient images) \\cite{isensee2017automatic}. Numbers with an asterisk indicate a statistically significant difference at the \\num{5}\\% level w.r.t. the segmentation-only approach. 
Best viewed in color.}\n\t\t\\label{table_overall_segmentation_performance}\n\t\t\\centering\n\t\t\\tiny\n\t\t\\subfloat[Dice coefficient]{\n\t\t\t\\begin{tabular}{| C{1.6cm} | C{0.8cm} | R{1.7cm} R{1.7cm} R{1.7cm} | R{1.7cm} R{1.7cm} R{1.7cm} | }\n\t\t\t\t\\hline\n\t\t\t\t& \\textbf{Uncertainty} & \\multicolumn{3}{c|}{\\textbf{End-diastole}} & \\multicolumn{3}{c|}{\\textbf{End-systole}} \\\\\n\t\t\t\t\\textbf{Model} & \\textbf{map for detection} & \\multicolumn{1}{l}{\\textbf{LV}} & \\multicolumn{1}{l}{\\textbf{RV}} & \\multicolumn{1}{l|}{\\textbf{LVM}} & \\multicolumn{1}{l}{\\textbf{LV}} & \\multicolumn{1}{l}{\\textbf{RV}} & \\multicolumn{1}{l|}{\\textbf{LVM}} \\\\ \n\t\t\t\t\\hline\n\t\t\t\t\\rowcolor{LightGreen}\n\t\t\t\tDN-Brier & & \\phantom{x}0.962$\\pm$0.02 & \\phantom{x}0.928$\\pm$0.04) & \\phantom{x}0.875$\\pm$0.03) & \\phantom{x}0.901$\\pm$0.11 & \\phantom{x}0.832$\\pm$0.10) & \\phantom{x}0.884$\\pm$0.04 \\\\\n\t\t\t\t& \\textbf{e-map} & *0.965$\\pm$0.01 & *0.949$\\pm$0.02 & *0.885$\\pm$0.03 & *0.937$\\pm$0.06 & *0.905$\\pm$0.05 & *0.909$\\pm$0.03 \\\\ \n\t\t\t\t\n\t\t\t\t\\hdashline[1pt\/2pt]\n\t\t\t\t\n\t\t\t\t\\rowcolor{LightCyan}\n\t\t\t\tDN-Brier+MC & & \\phantom{x}0.961$\\pm$0.02 & \\phantom{x}0.922$\\pm$0.04 & \\phantom{x}0.875$\\pm$0.04 & \\phantom{x}0.912$\\pm$0.08 & \\phantom{x}0.839$\\pm$0.11 & \\phantom{x}0.882$\\pm$0.04 \\\\ \n\t\t\t\t& \\textbf{b-map} & *0.966$\\pm$0.01 & *0.950$\\pm$0.01 & *0.886$\\pm$0.03 & \\textbf{\\textcolor{red}{*0.942}}$\\pm$0.03 & *0.916$\\pm$0.04 & *0.912$\\pm$0.03 \\\\ \n\t\t\t\t\\hdashline[5pt\/5pt]\n\t\t\t\t\n\t\t\t\t\\rowcolor{LightGreen}\n\t\t\t\tDN-soft-Dice & & \\phantom{x}0.960$\\pm$0.02 & \\phantom{x}0.921$\\pm$0.04 & \\phantom{x}0.870$\\pm$0.04 & \\phantom{x}0.909$\\pm$0.08 & \\phantom{x}0.812$\\pm$0.12 & \\phantom{x}0.879$\\pm$0.04 \\\\\n\t\t\t\t& \\textbf{e-map} & *0.965$\\pm$0.01 & *0.945$\\pm$0.02 & *0.879$\\pm$0.04 & *0.938$\\pm$0.03 & *0.891$\\pm$0.06 & *0.905$\\pm$0.03 \\\\ \n\t\t\t\t\\hdashline[1pt\/2pt]\n\t\t\t\t\n\t\t\t\t\\rowcolor{LightCyan}\n\t\t\t\tDN-soft-Dice+MC & & \\phantom{x}0.958$\\pm$0.02 & \\phantom{x}0.913$\\pm$0.05 & \\phantom{x}0.868$\\pm$0.04 & \\phantom{x}0.907$\\pm$0.07 & \\phantom{x}0.818$\\pm$0.12 & \\phantom{x}0.875$\\pm$0.04 \\\\\n\t\t\t\t& \\textbf{b-map} & *0.964$\\pm$0.01 & *0.944$\\pm$0.02 & *0.877$\\pm$0.04 & *0.939$\\pm$0.03 & *0.900$\\pm$0.05 & *0.904$\\pm$0.03 \\\\ \t\t\t\n\t\t\t\t\\hdashline[5pt\/5pt]\n\t\t\t\t\t\n\t\t\t\t\\rowcolor{LightGreen}\n\t\t\t\tDRN-CE & & \\phantom{x}0.961$\\pm$0.02 & \\phantom{x}0.929$\\pm$0.03 & \\phantom{x}0.878$\\pm$0.03 & \\phantom{x}0.912$\\pm$0.06 & \\phantom{x}0.850$\\pm$0.09 & \\phantom{x}0.891$\\pm$0.03 \\\\ \n\t\t\t\t& \\textbf{e-map} & \\phantom{x}0.964$\\pm$0.01 & *0.943$\\pm$0.02 & *0.886$\\pm$0.03 & *0.937$\\pm$0.03 & *0.899$\\pm$0.04 & *0.908$\\pm$0.03 \\\\\n\t\t\t\t\\hdashline[1pt\/2pt]\n\t\t\t\t\n\t\t\t\t\n\t\t\t\t\\rowcolor{LightCyan}\n\t\t\t\tDRN-CE+MC & & \\phantom{x}0.961$\\pm$0.02 & \\phantom{x}0.926$\\pm$0.03 & \\phantom{x}0.877$\\pm$0.03 & \\phantom{x}0.913$\\pm$0.06 & \\phantom{x}0.847$\\pm$0.10 & \\phantom{x}0.890$\\pm$0.03 \\\\ \n\t\t\t\t& \\textbf{b-map} & *0.965$\\pm$0.01 & *0.948$\\pm$0.01 & *0.887$\\pm$0.03 & *0.939$\\pm$0.03 & *0.911$\\pm$0.04 & *0.909$\\pm$0.03 \\\\ \n\t\t\t\t\\hdashline[5pt\/5pt]\n\t\t\t\t\n\t\t\t\t\n\t\t\t\t\\rowcolor{LightGreen}\n\t\t\t\tDRN-soft-Dice & & \\phantom{x}0.964$\\pm$0.01 & \\phantom{x}\\textbf{0.937}$\\pm$0.02 & \\phantom{x}0.888$\\pm$0.03 & \\phantom{x}\\textbf{0.919}$\\pm$0.06 & 
\\phantom{x}0.856$\\pm$0.09 & \\phantom{x}\\textbf{0.900}$\\pm$0.03 \\\\ \n\t\t\t\t& \\textbf{e-map} & \\phantom{x}0.967$\\pm$0.01 & *0.945$\\pm$0.02 & \\phantom{x}0.893$\\pm$0.03 & \\phantom{x}0.934$\\pm$0.04 & *0.892$\\pm$0.06 & *0.911$\\pm$0.03 \\\\ \n\t\t\t\t\\hdashline[1pt\/2pt]\n\t\t\t\t\n\t\t\t\t\n\t\t\t\t\\rowcolor{LightCyan}\n\t\t\t\tDRN-soft-Dice+MC & & \\phantom{x}0.963$\\pm$0.02 & \\phantom{x}0.935$\\pm$0.03 & \\phantom{x}0.886$\\pm$0.03 & \\phantom{x}0.921$\\pm$0.06 & \\phantom{x}\\textbf{0.857}$\\pm$0.09 & \\phantom{x}0.899$\\pm$0.03 \\\\ \n\t\t\t\t& \\textbf{b-map} & \\phantom{x}0.967$\\pm$0.01 & *0.947$\\pm$0.02 & \\phantom{x}0.893$\\pm$0.03 & *0.938$\\pm$0.03 & *0.907$\\pm$0.04 & *0.912$\\pm$0.03 \\\\\n\t\t\t\t\\hdashline[5pt\/5pt]\n\t\t\t\t\t\t\t\t\n\t\t\t\t\\rowcolor{LightGreen}\n\t\t\t\tU-net-CE & & \\phantom{x}0.962$\\pm$0.02 & \\phantom{x}0.923$\\pm$0.05 & \\phantom{x}0.878$\\pm$0.03 & \\phantom{x}0.907$\\pm$0.07 & \\phantom{x}0.840$\\pm$0.08 & \\phantom{x}0.885$\\pm$0.03 \\\\ \n\t\t\t\t& \\textbf{e-map} & \\phantom{x}0.966$\\pm$0.01 & *0.946$\\pm$0.02 & *0.890$\\pm$0.03 & *0.935$\\pm$0.04 & *0.901$\\pm$0.06 & *0.909$\\pm$0.03 \\\\ \n\n\t\t\t\t\\hdashline[1pt\/2pt]\n\t\t\t\t\n\t\t\t\t\\rowcolor{LightCyan}\n\t\t\t\tU-net-CE+MC & & \\phantom{x}0.962$\\pm$0.02 & \\phantom{x}0.926$\\pm$0.04 & \\phantom{x}0.879$\\pm$0.03 & \\phantom{x}0.909$\\pm$0.07 & \\phantom{x}0.849$\\pm$0.07 & \\phantom{x}0.887$\\pm$0.03 \\\\ \n\t\t\t\t& \\textbf{b-map} & \\phantom{x}0.967$\\pm$0.01 & \\textbf{\\textcolor{red}{*0.954}}$\\pm$0.02 & *0.893$\\pm$0.03 & *0.940$\\pm$0.04 & \\textbf{\\textcolor{red}{*0.920}}$\\pm$0.04 & \\textbf{\\textcolor{red}{*0.914}}$\\pm$0.03 \\\\\n\t\t\t\t\\hdashline[5pt\/5pt]\n\t\t\t\t\n\t\t\t\t\\rowcolor{LightGreen}\n\t\t\t\tU-net-soft-Dice & & \\phantom{x}\\textbf{0.965}$\\pm$0.02 & \\phantom{x}0.928$\\pm$0.04 & \\phantom{x}0.888$\\pm$0.03 & \\phantom{x}0.914$\\pm$0.08 & \\phantom{x}0.844$\\pm$0.09 & \\phantom{x}0.896$\\pm$0.03 \\\\ \n\t\t\t\t& \\textbf{e-map} & \\phantom{x}0.968$\\pm$0.01 & *0.943$\\pm$0.03 & *0.898$\\pm$0.03 & \\phantom{x}0.930$\\pm$0.05 & *0.886$\\pm$0.07 & *0.911$\\pm$0.03 \\\\ \n\t\t\t\t\\hdashline[1pt\/2pt]\n\t\t\t\t\n\t\t\t\t\\rowcolor{LightCyan}\n\t\t\t\tU-net-soft-Dice+MC & & \\phantom{x}\\textbf{0.965}$\\pm$0.02 & \\phantom{x}0.929$\\pm$0.04 & \\phantom{x}\\textbf{0.889}$\\pm$0.03 & \\phantom{x}0.911$\\pm$0.10 & \\phantom{x}0.845$\\pm$0.09 & \\phantom{x}0.897$\\pm$0.03 \\\\ \n\t\t\t\t& \\textbf{b-map} & \\phantom{x}\\textbf{\\textcolor{red}{0.968}}$\\pm$0.01 & *0.948$\\pm$0.03 & \\textbf{\\textcolor{red}{*0.900}}$\\pm$0.03 & \\phantom{x}0.928$\\pm$0.09 & *0.895$\\pm$0.06 & \\textbf{\\textcolor{red}{*0.914}}$\\pm$0.03 \\\\\n\t\t\t\t\n\t\t\t\t\\hdashline\n\t\t\t\tIsensee et al. 
& & \\phantom{x}0.966& \\phantom{x}0.941 & \\phantom{x}0.899 & \\phantom{x}0.924 & \\phantom{x}0.875\t& \\phantom{x}0.908 \\\\\n\t\t\t\t\n\t\t\t\t\\hline\n\t\t\t\t\n\t\t\t\\end{tabular}\n\t\t\t\\label{table_seg_perf_dsc}\n\t\t} \n\t\t\\vspace{13ex}\n\t\t\\centering\n\t\t\\tiny\n\t\t\\subfloat[Hausdorff Distance]{\n\t\t\t\\begin{tabular}{| C{1.6cm} | C{0.8cm} | R{1.7cm} R{1.7cm} R{1.7cm} | R{1.7cm} R{1.7cm} R{1.7cm} | }\n\t\t\t\t\\hline\n\t\t\t\t& \\textbf{Uncertainty}& \\multicolumn{3}{c|}{\\textbf{End-diastole}} & \\multicolumn{3}{c|}{\\textbf{End-systole}} \\\\\n\t\t\t\t\\textbf{Model} & \\textbf{map for detection} & \\multicolumn{1}{l}{\\textbf{LV}} & \\multicolumn{1}{l}{\\textbf{RV}} & \\multicolumn{1}{l|}{\\textbf{LVM}} & \\multicolumn{1}{l}{\\textbf{LV}} & \\multicolumn{1}{l}{\\textbf{RV}} & \\multicolumn{1}{l|}{\\textbf{LVM}} \\\\ \n\t\t\t\t\\hline\n\t\t\t\t\\rowcolor{LightGreen}\n\t\t\t\tDN-Brier & & \\phantom{x}6.7$\\pm$3.1 & \\phantom{x}13.5$\\pm$5.9 & \\phantom{x}10.2$\\pm$6.9 & \\phantom{x}10.7$\\pm$7.7 & \\phantom{x}16.7$\\pm$6.8 & \\phantom{x}12.3$\\pm$5.8 \\\\\n\t\t\t\t& \\textbf{e-map} & *5.7$\\pm$2.7 & *11.7$\\pm$5.2 & *\\phantom{x}8.3$\\pm$5.9 & *\\phantom{x}8.0$\\pm$6.5 & *14.2$\\pm$5.6 & *\\phantom{x}9.7$\\pm$5.0 \\\\\n\t\t\t\t\\hdashline[1pt\/2pt]\n\t\t\t\t\n\t\t\t\t\\rowcolor{LightCyan}\n\t\t\t\tDN-Brier+MC & & \\phantom{x}6.9$\\pm$3.3 & \\phantom{x}13.1$\\pm$5.2 & \\phantom{xx}9.9$\\pm$5.9 & \\phantom{xx}9.9$\\pm$5.7 & \\phantom{x}15.0$\\pm$6.1 & \\phantom{x}12.0$\\pm$5.2 \\\\\n\t\t\t\t& \\textbf{b-map} & *5.5$\\pm$2.6 & *10.6$\\pm$5.1 & *\\phantom{x}7.4$\\pm$4.2 & *\\phantom{x}7.5$\\pm$6.0 & *12.6$\\pm$5.6 & *\\phantom{x}8.8$\\pm$4.0 \\\\\n\t\t\t\t\\hdashline[5pt\/5pt]\n\t\t\t\t\n\t\t\t\t\\rowcolor{LightGreen}\n\t\t\t\tDN-soft-Dice & & \\phantom{x}7.1$\\pm$3.5 & \\phantom{x}14.8$\\pm$6.8 & \\phantom{x}11.0$\\pm$6.6 & \\phantom{x}10.2$\\pm$5.6 & \\phantom{x}17.7$\\pm$7.8 & \\phantom{x}12.9$\\pm$6.2 \\\\\n\t\t\t\t& \\textbf{e-map} & *5.6$\\pm$2.8 & *12.6$\\pm$5.5 & *\\phantom{x}8.6$\\pm$4.6 & *\\phantom{x}8.0$\\pm$5.0 & *14.6$\\pm$5.9 & *\\phantom{x}9.6$\\pm$4.5 \\\\\n\t\t\t\t\\hdashline[1pt\/2pt]\n\t\t\t\t\n\t\t\t\t\\rowcolor{LightCyan}\n\t\t\t\tDN-soft-Dice+MC & & \\phantom{x}7.7$\\pm$3.9 & \\phantom{x}14.4$\\pm$6.0 & \\phantom{x}10.5$\\pm$4.9 & \\phantom{x}10.1$\\pm$5.3 & \\phantom{x}17.2$\\pm$8.0 & \\phantom{x}12.5$\\pm$5.3 \\\\\n\t\t\t\t& \\textbf{b-map} & *6.3$\\pm$3.4 & *11.5$\\pm$4.0 & *\\phantom{x}8.6$\\pm$4.8 & *\\phantom{x}7.8$\\pm$4.6 & *13.6$\\pm$4.9 & *\\phantom{x}9.6$\\pm$4.7 \\\\\n\t\t\t\t\\hdashline[5pt\/5pt]\n\t\t\t\t\n\t\t\t\t\\rowcolor{LightGreen}\n\t\t\t\tDRN-CE & & \\phantom{x}5.5$\\pm$2.6 & \\phantom{x}11.7$\\pm$5.4 & \\phantom{xx}8.2$\\pm$6.2 & \\phantom{xx}9.1$\\pm$6.4 & \\phantom{x}13.7$\\pm$5.6 & \\phantom{xx}8.9$\\pm$5.3 \\\\\n\t\t\t\t& \\textbf{e-map} & *4.5$\\pm$1.9 & *\\phantom{x}9.0$\\pm$4.5 & *\\phantom{x}6.3$\\pm$4.1 & *\\phantom{x}6.2$\\pm$4.4 & *11.1$\\pm$5.3 & \\textbf{\\textcolor{red}{*\\phantom{x}6.7}}$\\pm$4.2 \\\\\n\t\t\t\t\\hdashline[1pt\/2pt]\n\n\t\t\t\t\\rowcolor{LightCyan}\n\t\t\t\tDRN-CE+MC & & \\phantom{x}5.6$\\pm$2.6 & \\phantom{x}11.9$\\pm$5.5 & \\phantom{xx}8.0$\\pm$5.9 & \\phantom{xx}8.7$\\pm$5.5 & \\phantom{x}13.5$\\pm$5.9 & \\phantom{xx}\\textbf{8.5}$\\pm$4.5 \\\\\t\t \n\t\t\t\t& \\textbf{b-map} & \\textbf{\\textcolor{red}{*4.2}}$\\pm$1.6 & \\textbf{\\textcolor{red}{*\\phantom{x}8.1}}$\\pm$3.7 & \\textbf{\\textcolor{red}{*\\phantom{x}6.1}}$\\pm$4.2 & \\textbf{\\textcolor{red}{*\\phantom{x}5.4}}$\\pm$3.6 & 
\\textbf{\\textcolor{red}{*10.1}}$\\pm$5.5 & *\\phantom{x}6.8$\\pm$3.8 \\\\\n\t\t\t\t\\hdashline[5pt\/5pt]\n\t\t\t\t\n\t\t\t\t\n\t\t\t\t\\rowcolor{LightGreen}\n\t\t\t\tDRN-soft-Dice & & \\phantom{x}\\textbf{5.5}$\\pm$2.8 & \\phantom{x}11.9$\\pm$6.1 & \\phantom{xx}\\textbf{7.7}$\\pm$5.9 & \\phantom{xx}8.5$\\pm$5.0 & \\phantom{x}13.5$\\pm$5.5 & \\phantom{xx}8.9$\\pm$5.1 \\\\\n\t\t\t\t& \\textbf{e-map} & *4.6$\\pm$2.2 & *\\phantom{x}9.4$\\pm$4.5 & \\phantom{xx}6.7$\\pm$4.7 & *\\phantom{x}6.7$\\pm$4.4 & *11.6$\\pm$5.4 & *\\phantom{x}7.0$\\pm$3.3 \\\\\n\t\t\t\t\\hdashline[1pt\/2pt]\n\t\t\t\t\n\t\t\t\t\n\t\t\t\t\\rowcolor{LightCyan}\n\t\t\t\tDRN-soft-Dice+MC & & \\phantom{x}5.7$\\pm$3.2 & \\phantom{x}\\textbf{11.5}$\\pm$5.1 & \\phantom{xx}8.0$\\pm$5.5 & \\phantom{xx}\\textbf{8.3}$\\pm$4.5 & \\phantom{x}\\textbf{13.3}$\\pm$5.1 & \\phantom{xx}8.9$\\pm$5.1 \\\\\n\t\t\t\t& \\textbf{b-map} & *4.5$\\pm$2.2 & *\\phantom{x}9.3$\\pm$4.5 & *\\phantom{x}6.3$\\pm$4.0 & *\\phantom{x}6.2$\\pm$4.1 & *10.4$\\pm$5.0 & *\\phantom{x}7.0$\\pm$3.4 \\\\\n\t\t\t\t\\hdashline[5pt\/5pt]\n\t\t\t\t\n\t\t\t\t\\rowcolor{LightGreen}\n\t\t\t\tU-net-CE & & \\phantom{x}6.4$\\pm$4.3 & \\phantom{x}15.7$\\pm$8.6 & \\phantom{xx}9.0$\\pm$6.0 & \\phantom{xx}9.7$\\pm$5.3 & \\phantom{x}17.0$\\pm$7.7 & \\phantom{x}12.7$\\pm$8.2 \\\\\n\t\t\t\t& \\textbf{e-map} & *4.9$\\pm$3.9 & *12.2$\\pm$8.1 & *\\phantom{x}7.1$\\pm$5.6 & *\\phantom{x}6.1$\\pm$3.2 & *12.6$\\pm$6.5 & *\\phantom{x}8.4$\\pm$6.3 \\\\\n\t\t\t\t\\hdashline[1pt\/2pt]\n\t\t\t\t\n\t\t\t\t\n\t\t\t\t\\rowcolor{LightCyan}\n\t\t\t\tU-net-CE+MC & & \\phantom{x}6.2$\\pm$4.2 & \\phantom{x}15.3$\\pm$8.4 & \\phantom{xx}8.8$\\pm$5.8 & \\phantom{xx}9.2$\\pm$5.0 & \\phantom{x}16.5$\\pm$7.6 & \\phantom{x}12.0$\\pm$8.0 \\\\\n\t\t\t\t& \\textbf{b-map} & *4.3$\\pm$1.6 & *\\phantom{x}9.9$\\pm$6.6 & *\\phantom{x}6.7$\\pm$4.8 & *\\phantom{x}5.4$\\pm$2.8 & *10.3$\\pm$4.7 & *\\phantom{x}7.6$\\pm$6.2 \\\\\n\t\t\t\t\n\t\t\t\t\\hdashline[5pt\/5pt]\n\t\t\t\t\n\t\t\t\t\n\t\t\t\t\\rowcolor{LightGreen}\n\t\t\t\tU-net-soft-Dice & & \\phantom{x}6.1$\\pm$3.9 & \\phantom{x}14.1$\\pm$7.6 & \\phantom{x}10.6$\\pm$8.4 & \\phantom{xx}9.2$\\pm$7.1 & \\phantom{x}16.3$\\pm$7.5 & \\phantom{x}12.6$\\pm$9.6 \\\\\n\t\t\t\t& \\textbf{e-map} & *4.6$\\pm$2.3 & *11.3$\\pm$7.2 & *\\phantom{x}7.5$\\pm$5.5 & *\\phantom{x}7.3$\\pm$6.5 & *13.7$\\pm$7.6 & *\\phantom{x}9.8$\\pm$8.0 \\\\\n\t\t\t\t\\hdashline[1pt\/2pt]\n\t\t\t\t\n\t\t\t\t\\rowcolor{LightCyan}\n\t\t\t\tU-net-soft-Dice+MC & & \\phantom{x}6.2$\\pm$3.9 & \\phantom{x}14.1$\\pm$7.7 & \\phantom{x}10.5$\\pm$8.7 & \\phantom{xx}9.0$\\pm$7.0 & \\phantom{x}15.8$\\pm$7.5 & \\phantom{x}12.1$\\pm$9.2 \\\\\n\t\t\t\t& \\textbf{b-map} & *4.5$\\pm$2.1 & *10.4$\\pm$7.2 & *\\phantom{x}7.6$\\pm$7.0 & *\\phantom{x}7.3$\\pm$6.9 & *12.9$\\pm$6.6 & *\\phantom{x}9.8$\\pm$8.4 \\\\\n\t\t\t\t\\hdashline\n\t\t\t\tIsensee et al. & & \\phantom{x}7.1 & \\phantom{x}14.3 & \\phantom{xx}8.9 & \\phantom{xx}9.8 & \\phantom{x}16.3 & \\phantom{x}10.4 \\\\\n\t\t\t\t\\hline\n\t\t\t\t\n\t\t\t\\end{tabular}\n\t\t\t\\label{table_seg_perf_hd}\n\t\t} \n\t\t\n\t\\end{table*} \n\t\n\t\n\t\\section*{Evaluation}\\label{evaluation}\n\t\t\n\tAutomatic segmentation performance, as well as performance after simulating the correction of detected segmentation failures and after manual expert correction was evaluated. For this, the \\num{3}D Dice-coefficient (DC) and \\num{3}D Hausdorff distance (HD) between manual and (corrected) automatic segmentation were computed. 
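\n\t\n\tAs a concrete illustration, both metrics can be computed for a pair of binary \\num{3}D masks as in the following NumPy and SciPy sketch. This is not the evaluation code used in this study; it assumes non-empty masks and a known voxel spacing.\n\t\n\t\\begin{verbatim}\nimport numpy as np\nfrom scipy.ndimage import distance_transform_edt\n\ndef dice_coefficient(ref, auto):\n    # 3D Dice coefficient between two binary masks\n    ref, auto = ref.astype(bool), auto.astype(bool)\n    denom = ref.sum() + auto.sum()\n    return 2.0 * np.logical_and(ref, auto).sum() / denom if denom > 0 else 1.0\n\ndef hausdorff_distance(ref, auto, spacing):\n    # symmetric Hausdorff distance (in mm) between two non-empty binary masks;\n    # spacing is the voxel size in mm, e.g. (slice_spacing, 1.4, 1.4)\n    ref, auto = ref.astype(bool), auto.astype(bool)\n    dist_to_ref = distance_transform_edt(~ref, sampling=spacing)\n    dist_to_auto = distance_transform_edt(~auto, sampling=spacing)\n    return max(dist_to_ref[auto].max(), dist_to_auto[ref].max())\n\\end{verbatim}\n\t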
Furthermore, the following clinical metrics were computed for manual and (corrected) automatic segmentations: left ventricle end-diastolic volume (EDV); left ventricle ejection fraction (EF); right ventricle EDV; right ventricle ejection fraction; and left ventricle myocardial mass. Following Bernard et al.~\\cite{bernard2018deep}, for each of the clinical metrics three performance indices were computed using the measurements based on manual and (corrected) automatic segmentation: Pearson correlation coefficient; mean difference (bias and standard deviation); and mean absolute error (MAE).\n\n\tTo evaluate the detection performance of the automatic method, precision-recall curves for the identification of slices that require correction were computed. A slice is considered positive if it contains at least one image region with a segmentation failure. In clinical practice, identification of slices that contain segmentation failures might ease manual correction of automatic segmentations. To further evaluate detection performance, the detection rate of segmentation failures was assessed on the voxel level. More specifically, sensitivity as a function of the number of false positive regions was evaluated, because manual correction is presumed to be performed at this level.\n\t\n\tFinally, after simulated and manual correction of the automatically detected segmentation failures, segmentation was re-evaluated and the significance of the differences in DCs, HDs and clinical metrics was tested with a Mann--Whitney U test.\n\t\n\t\\section*{Experiments}\n\t\n\tTo use stratified four-fold cross-validation, the dataset was split into training (75\\%) and test (25\\%) sets. The splitting was done on a patient level, so there was no overlap in patient data between training and test sets. Furthermore, patients were randomly chosen from each of the five patient groups w.r.t. disease. Each patient has one volume at the ED and one at the ES time point. \n\t\n\t\\subsection*{Training segmentation networks} \\label{training_segmentation}\n\t\n\tDRN and U-net were trained with a patch size of \\num{128}$\\times$\\num{128} voxels, which is a multiple of the output stride of their contracting paths. For training of the dilated CNN (DN), image samples of \\num{151}$\\times$\\num{151} voxels were used. Zero-padding to \\num{281}$\\times$\\num{281} voxels was performed to accommodate the \\num{131}$\\times$\\num{131} voxel receptive field that is induced by the dilation factors. Training samples were randomly chosen from the training set and augmented by \\num{90} degree rotations of the images. All models were initially trained with three loss functions: soft-Dice\\cite{milletari2016v} (SD); cross-entropy (CE); and Brier loss\\cite{brier1950verification}. However, for the evaluation of the combined segmentation and detection approach, the two best performing loss functions were chosen for each model architecture: soft-Dice for all models, cross-entropy for DRN and U-net, and Brier loss for DN. 
For completeness, we provide the equations for all three loss functions used.\n\t\n\t\n\t\\begingroup\n\t\\small\n\t\\begin{align}\n\t\t\\text{soft-Dice}_{c} = \\frac{2 \\sum_{i=1}^{N} R_{c}(i) \\; A_{c}(i) }{\\sum_{i=1}^{N} R_{c}(i) + \\sum_{i=1}^{N} A_{c}(i)} \\; ,\n\t\\end{align}\n\t\\endgroup\n\twhere $N$ denotes the number of voxels in an image, $R_{c}$ is the binary reference image for class $c$ and $A_{c}$ is the probability map for class $c$.\n\t\n\t\\begingroup\n\t\\small\n\t\\begin{align}\n\t\t\\text{Cross-Entropy}_{c} = - \\; \\sum_{i=1}^{N} t_{ic} \\; \\log \\; p(y_i=c|x_i) \\; , \\text{ where } t_{ic} = 1 \\text{ if } y_{i}=c, \\text{ and \\num{0} otherwise.}\n\t\\end{align}\n\t\\endgroup\n\t\n\t\\begingroup\n\t\\small\n\t\\begin{align}\n\t\t\\text{Brier}_{c} = \\sum_{i=1}^{N} \\big(t_{ic} - p(y_i=c|x_{i}) \\big)^2 \\; , \\text{ where } t_{ic} = 1 \\text{ if } y_{i}=c, \\text{ and \\num{0} otherwise.}\n\t\\end{align}\n\t\\endgroup\n\t\n\tHere, $N$ again denotes the number of voxels in an image and $p(y_i=c|x_i)$ denotes the predicted probability for voxel $x_i$ with corresponding reference label $y_i$ and class $c$.\n\t\n\tChoosing the Brier loss to train the DN model instead of CE was motivated by our preliminary work, which showed that segmentation performance of the DN model was best when trained with the Brier loss\\cite{sander2019towards}.\n\t\n\tAll models were trained for 100,000 iterations. DRN and U-net were trained with a learning rate of \\num{0.001}, which decayed with a factor of \\num{0.1} after every 25,000 steps. Training DN used the snapshot ensemble technique~\\cite{huang2017snapshot}, where after every 10,000 iterations the learning rate was reset to its original value of \\num{0.02}.\n\t\n\tAll three segmentation networks were trained with mini-batches of \\num{16} images. Network parameters were optimized using the Adam optimizer \\cite{kingmadp}. Furthermore, models were regularized with weight decay to increase generalization performance. \n\t\n\t\\subsection*{Training detection network}\\label{label_training_detection}\n\t\n\tTo train the detection model, a subset of the errors made by the segmentation model is used. Segmentation errors that are presumably within the range of inter-observer variability and therefore do not necessarily require correction (tolerated errors) are excluded from the set of errors that need to be detected and corrected (segmentation failures). To distinguish between tolerated errors and the set of segmentation failures $\\mathcal{S}_I$, the Euclidean distance of an incorrectly segmented voxel to the boundary of the reference target structure is used. For each anatomical structure, a \\num{2}D distance transform map is computed that provides for each voxel the distance to the anatomical structure boundary. To differentiate between tolerated errors and segmentation failures, an acceptable tolerance threshold is applied to this distance. A stricter threshold is used for errors located inside the anatomical structure than for errors located outside of it, because automatic segmentation methods have a tendency to undersegment cardiac structures in CMRI. Hence, in all experiments the acceptable tolerance threshold was set to three voxels (equivalent to on average \\SI{4.65}{\\milli\\meter}) for segmentation errors located outside and to two voxels (equivalent to on average \\SI{3.1}{\\milli\\meter}) for segmentation errors located inside the target structure. 
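\n\t\n\tA minimal sketch of this distance-based tolerance criterion, for one anatomical structure in a single \\num{2}D slice, is given below. It is illustrative only; the connected-component and apical and basal slice rules described next are not included.\n\t\n\t\\begin{verbatim}\nimport numpy as np\nfrom scipy.ndimage import distance_transform_edt\n\ndef failure_candidates(ref, auto, tol_outside=3, tol_inside=2):\n    # ref and auto are binary 2D masks of the reference and the automatic\n    # segmentation of one structure; tolerances are given in voxels\n    ref, auto = ref.astype(bool), auto.astype(bool)\n    errors = ref != auto\n    # distance (in voxels) of every voxel to the boundary of the reference structure\n    dist_to_boundary = np.where(ref, distance_transform_edt(ref),\n                                distance_transform_edt(~ref))\n    tolerance = np.where(ref, tol_inside, tol_outside)\n    return errors & (dist_to_boundary > tolerance)\n\\end{verbatim}\n\t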
Furthermore, a segmentation error only belongs to $\\mathcal{S}_I$ if it is part of a \\num{2}D \\num{4}-connected cluster of minimum size \\num{10} voxels. This value was found in preliminary experiments by evaluating the values $\\{1, 5, 10, 15, 20\\}$. However, for apical slices all segmentation errors are included in $\\mathcal{S}_I$, regardless of fulfilling the minimum size requirement, because in these slices anatomical structures are relatively small and manual segmentation is prone to large inter-observer variability~\\cite{bernard2018deep}. Finally, segmentation errors located in slices above the base or below the apex are always included in the set of segmentation failures.\n\t\n\tUsing the set $\\mathcal{S}_I$, a binary label $t_j$ is assigned to each patch $P_j^{(I)}$, indicating whether $P_j^{(I)}$ contains at least one voxel belonging to $\\mathcal{S}_I$, where $j \\in \\{1 \\dots M \\}$ and $M$ denotes the number of patches in slice $I$. \n\t\n\tThe detection network is trained by minimizing a weighted binary cross-entropy loss:\n\t\n\t\\begingroup\n\t\\small\n\t\\begin{equation} \\label{eq_detection_loss}\n\t\t\\mathcal{L}_{DT} = - \\sum_{j=1}^{M} \\big( w_{pos} \\; t_j \\log p_j + (1 - t_j) \\log (1 - p_j) \\big) \\; ,\n\t\\end{equation}\n\t\\endgroup\n\t\n\twhere $w_{pos}$ is a scalar weight, $t_j$ denotes the binary reference label and $p_j$ is the softmax probability indicating whether image region $P_j^{(I)}$ contains at least one segmentation failure. The average percentage of regions in a patient volume containing segmentation failures ranges from \\num{1.5} to \\num{3} percent, depending on the segmentation architecture and loss function used to train the segmentation model. To train a detection network, $w_{pos}$ was set to the ratio of the average percentage of negative samples to the average percentage of positive samples.\n\t\n\tEach fold was trained using spatial uncertainty maps and automatic segmentation masks generated while training the segmentation networks. Hence, there was no overlap in patient data between training and test sets across the segmentation and detection tasks. In total, \\num{12} detection models were trained and evaluated, resulting from the different combinations of \\num{3} model architectures (DRN, DN and U-net), \\num{2} loss functions (DRN and U-net with CE and soft-Dice, DN with Brier and soft-Dice) and \\num{2} uncertainty maps (e-maps, b-maps).\n\t\n\t\n\t\\begin{table}\n\t\t\\caption{Segmentation performance of different combinations of model architectures, loss functions and evaluation modes (without or with MC dropout (MC) enabled during testing) in terms of clinical metrics: left ventricle (LV) end-diastolic volume (EDV); LV ejection fraction (EF); right ventricle (RV) EDV; RV ejection fraction; and LV myocardial mass. Quantitative results compare clinical metrics based on reference segmentations with 1) automatic segmentations and 2) simulated manual correction of automatic segmentations using spatial uncertainty maps. $\\rho$ denotes the Pearson correlation coefficient, \\textit{bias} denotes the mean difference between the two measurements (mean $\\pm$ standard deviation) and \\textit{MAE} denotes the mean absolute error between the two measurements. Each combination comprises a block of two rows. A row in which the column \\textit{Uncertainty map for detection} indicates e- or b-map shows results for the combined segmentation and detection approach. 
Numbers accentuated in black\/bold are ranked first in the segmentation only task. Numbers in red indicate statistical significant at \\num{5}\\% level w.r.t. the segmentation-only approach for the specific clinical metric. Best viewed in color.}\n\t\t\\label{table_cardiac_function_indices}\n\t\t\\tiny\n\t\t\\begin{tabular}{| C{1.6cm} | C{1.cm} | C{0.3cm} c C{0.3cm} | C{0.2cm} C{0.8cm} C{0.3cm} | C{0.3cm} C{0.8cm} C{0.3cm} | C{0.3cm} C{0.8cm} C{0.3cm} | C{0.3cm} C{0.8cm} C{0.3cm} |}\n\t\t\t\\hline\n\t\t\t& \\multicolumn{1}{c|}{\\thead{\\textbf{Uncertainty} \\\\ \\textbf{map for} \\\\ \\textbf{detection}}} & \\multicolumn{3}{c}{\\textbf{LV$_{EDV}$}} & \\multicolumn{3}{c}{\\textbf{LV$_{EF}$}} & \\multicolumn{3}{c}{\\textbf{RV$_{EDV}$}} & \\multicolumn{3}{c}{\\textbf{RV$_{EF}$}} & \\multicolumn{3}{c|}{\\textbf{LVM$_{Mass}$}} \\\\\n\t\t\n\t\t\n\t\t\t\\textbf{Method} & & \\textbf{$\\rho$} & \\multicolumn{1}{l}{\\textbf{bias$\\pm\\sigma$}} & \\textbf{MAE} & \\textbf{$\\rho$} & \\textbf{bias$\\pm \\sigma$} & \\textbf{MAE} & \\textbf{$\\rho$} & \\textbf{bias$\\pm \\sigma$} & \\textbf{MAE} & \\textbf{$\\rho$} & \\textbf{bias$\\pm \\sigma$} & \\textbf{MAE} & \\multicolumn{1}{c}{\\textbf{$\\rho$}} & \\textbf{bias$\\pm \\sigma$} & \\textbf{MAE} \\\\\n\t\t\t\\hline\n\t\t\t\\rowcolor{LightGreen}\n\t\t\tDN-Brier & & 0.997 & \\phantom{x}\\textbf{0.0$\\pm$6.1} & 4.5 & 0.892 & \\phantom{x}2.2$\\pm$\\phantom{x}9.2 & 4.2 & 0.977 & \\textbf{-0.2$\\pm$11.8} & \\phantom{x}8.5 & 0.834 & 5.3$\\pm$10.3 & \\phantom{x}8.5 & 0.984 & -2.7$\\pm$\\phantom{x}9.0 & \\phantom{x}\\textbf{7.0} \\\\\n\t\t\t& e-map & 0.997 & \\phantom{x}0.0$\\pm$5.5 & 4.0 & 0.982 & \\phantom{x}0.1$\\pm$\\phantom{x}3.8 & 2.2 & 0.992 & \\phantom{x}0.0$\\pm$\\phantom{x}6.9 & \\phantom{x}5.2 & 0.955 & 1.9$\\pm$\\phantom{x}5.5 & \\phantom{x}4.1 & 0.986 & -2.1$\\pm$\\phantom{x}8.4 & \\phantom{x}6.6 \\\\\n\t\t\t\n\t\t\t\\hdashline[1pt\/2pt]\n\t\t\t\\rowcolor{LightCyan}\n\t\t\tDN-Brier+MC & & 0.997 & \\phantom{x}1.6$\\pm$6.0 & 4.4 & 0.921 & \\phantom{x}1.1$\\pm$\\phantom{x}7.9 & 3.9 & 0.975 & \\phantom{x}6.7$\\pm$12.4 & \\phantom{x}9.6 & 0.854 & 3.5$\\pm$\\phantom{x}9.9 & \\phantom{x}7.7 & 0.984 & \\phantom{x}0.7$\\pm$\\phantom{x}9.2 & \\phantom{x}7.1 \\\\\n\t\t\t& b-map & 0.998 & \\phantom{x}1.0$\\pm$5.3 & 3.9 & 0.991 & \\phantom{x}0.0$\\pm$\\phantom{x}2.7 & 1.9 & 0.993 & \\phantom{x}3.2$\\pm$\\phantom{x}6.7 & \\phantom{x}5.7 & 0.975 & 0.8$\\pm$\\phantom{x}4.0 & \\phantom{x}3.0 & 0.987 & \\phantom{x}0.1$\\pm$\\phantom{x}8.3 & \\phantom{x}6.5 \\\\\n\t\t\t\\hdashline[1pt\/2pt]\n\t\t\t\\rowcolor{LightGreen}\n\t\t\tDN-soft-Dice & & 0.996 & \\phantom{x}1.2$\\pm$6.5 & 4.9 & 0.918 & \\phantom{x}1.5$\\pm$\\phantom{x}8.0 & 3.9 & 0.972 & \\phantom{x}\\textbf{0.2$\\pm$13.0} & \\phantom{x}9.6 & 0.802 & 7.2$\\pm$11.3 & 10.2 & 0.982 & -4.5$\\pm$\\phantom{x}9.6 & \\phantom{x}8.5 \\\\\n\t\t\t& e-map & 0.997 & \\phantom{x}1.0$\\pm$5.5 & 4.2 & 0.989 & \\phantom{x}0.2$\\pm$\\phantom{x}3.0 & 2.2 & 0.990 & \\phantom{x}0.2$\\pm$\\phantom{x}7.6 & \\phantom{x}5.9 & \\textcolor{red}{0.940} & \\textcolor{red}{3.3$\\pm$\\phantom{x}6.2} & \\textcolor{red}{\\phantom{x}5.2} & 0.983 & -4.3$\\pm$\\phantom{x}9.3 & \\phantom{x}8.2 \\\\\n\t\t\t\n\t\t\t\\hdashline[1pt\/2pt]\n\t\t\t\\rowcolor{LightCyan}\n\t\t\tDN-soft-Dice+MC & & 0.996 & \\phantom{x}3.2$\\pm$7.1 & 5.6 & 0.958 & \\phantom{x}0.4$\\pm$\\phantom{x}5.7 & 3.6 & 0.964 & \\phantom{x}8.1$\\pm$14.9 & 12.3 & 0.827 & 4.8$\\pm$11.0 & \\phantom{x}8.9 & 0.978 & -0.7$\\pm$10.7 & \\phantom{x}8.3 \\\\\n\t\t\t& b-map & 0.997 & 
\\phantom{x}2.2$\\pm$5.6 & 4.4 & 0.988 & -0.2$\\pm$\\phantom{x}3.1 & 2.2 & 0.990 & \\phantom{x}4.0$\\pm$\\phantom{x}7.7 & \\phantom{x}7.0 & 0.959 & 1.8$\\pm$\\phantom{x}5.1 & \\phantom{x}4.1 & 0.982 & -1.4$\\pm$\\phantom{x}9.5 & \\phantom{x}7.6 \\\\\n\t\t\t\n\t\t\t\\hdashline[5pt\/5pt]\n\t\t\t\\rowcolor{LightGreen}\n\t\t\tDRN-CE & & 0.997 & -0.2$\\pm$5.5 & 4.1 & 0.968 & \\phantom{x}1.2$\\pm$\\phantom{x}5.0 & 3.5 & 0.976 & \\phantom{x}1.5$\\pm$12.1 & \\phantom{x}8.5 & 0.870 & 1.3$\\pm$\\phantom{x}9.2 & \\phantom{x}6.9 & 0.980 & \\phantom{x}\\textbf{0.6$\\pm$10.2} & \\phantom{x}7.8 \\\\\n\t\t\t& e-map & 0.998 & \\phantom{x}0.2$\\pm$4.5 & 3.5 & 0.992 & \\phantom{x}0.2$\\pm$\\phantom{x}2.5 & 1.9 & 0.988 & \\phantom{x}1.4$\\pm$\\phantom{x}8.5 & \\phantom{x}6.2 & 0.952 & 0.8$\\pm$\\phantom{x}5.6 & \\phantom{x}4.2 & 0.985 & \\phantom{x}0.4$\\pm$\\phantom{x}8.7 & \\phantom{x}6.8 \\\\\n\t\t\t\n\t\t\t\\hdashline[1pt\/2pt]\n\t\t\t\\rowcolor{LightCyan}\n\t\t\tDRN-CE+MC & & \\textbf{0.998} & \\phantom{x}1.0$\\pm$4.9 & \\textbf{3.9} & 0.972 & \\phantom{x}0.8$\\pm$\\phantom{x}4.6 & 3.1 & 0.973 & \\phantom{x}4.8$\\pm$12.8 & \\phantom{x}9.4 & 0.876 & \\textbf{0.4$\\pm$\\phantom{x}9.1} & \\phantom{x}\\textbf{6.6} & 0.981 & \\phantom{x}1.9$\\pm$\\phantom{x}9.9 & \\phantom{x}7.6 \\\\\n\t\t\t& b-map & 0.998 & \\phantom{x}0.7$\\pm$4.6 & 3.6 & 0.992 & -0.1$\\pm$\\phantom{x}2.5 & 1.8 & 0.992 & \\phantom{x}2.9$\\pm$\\phantom{x}6.9 & \\phantom{x}5.7 & 0.967 & 0.6$\\pm$\\phantom{x}4.6 & \\phantom{x}3.4 & 0.987 & \\phantom{x}1.2$\\pm$\\phantom{x}8.3 & \\phantom{x}6.6 \\\\\n\t\t\t\\hdashline[1pt\/2pt]\n\t\t\t\n\t\t\t\\rowcolor{LightGreen}\n\t\t\tDRN-soft-Dice & & \\textbf{0.998} & \\phantom{x}0.8$\\pm$5.1 & 4.0 & 0.976 & \\phantom{x}0.2$\\pm$\\phantom{x}4.4 & 3.0 & 0.980 & \\phantom{x}\\textbf{0.2$\\pm$11.0} & \\phantom{x}\\textbf{7.5} & \\textbf{0.882} & 3.1$\\pm$\\phantom{x}8.7 & \\phantom{x}6.8 & 0.984 & -3.5$\\pm$\\phantom{x}9.1 & \\phantom{x}7.5 \\\\\n\t\t\t& e-map & 0.998 & \\phantom{x}0.7$\\pm$4.4 & 3.5 & 0.987 & -0.1$\\pm$\\phantom{x}3.1 & 2.2 & 0.987 & \\phantom{x}0.1$\\pm$\\phantom{x}9.1 & \\phantom{x}6.4 & 0.938 & 1.9$\\pm$\\phantom{x}6.3 & \\phantom{x}4.9 & 0.986 & -3.5$\\pm$\\phantom{x}8.7 & \\phantom{x}7.1 \\\\\n\t\t\t\n\t\t\t\\hdashline[1pt\/2pt]\n\t\t\t\\rowcolor{LightCyan}\n\t\t\tDRN-soft-Dice+MC & & \\textbf{0.998} & \\phantom{x}1.8$\\pm$5.1 & \\textbf{3.9} & \\textbf{0.979} & -0.3$\\pm$\\phantom{x}4.1 & 2.9 & 0.977 & \\phantom{x}3.5$\\pm$11.7 & \\phantom{x}8.1 & 0.868 & 1.7$\\pm$\\phantom{x}9.5 & \\phantom{x}6.8 & 0.983 & -1.4$\\pm$\\phantom{x}9.5 & \\phantom{x}7.4 \\\\\n\t\t\t& b-map & 0.998 & \\phantom{x}1.7$\\pm$4.7 & 3.7 & 0.990 & -0.2$\\pm$\\phantom{x}2.9 & 2.1 & 0.989 & \\phantom{x}2.3$\\pm$\\phantom{x}8.1 & \\phantom{x}5.8 & 0.959 & 0.8$\\pm$\\phantom{x}5.2 &\\phantom{x}3.8 & 0.986 & -1.3$\\pm$\\phantom{x}8.5 & \\phantom{x}6.8 \\\\\n\t\t\t\n\t\t\t\\hdashline[5pt\/5pt]\n\t\t\t\\rowcolor{LightGreen}\n\t\t\tU-net-CE & & 0.995 & -4.7$\\pm$7.2 & 6.1 & 0.954 & \\phantom{x}4.1$\\pm$\\phantom{x}6.0 & 5.1 & 0.963 & -7.6$\\pm$15.2 & 12.1 & 0.870 & 5.6$\\pm$\\phantom{x}9.0 & \\phantom{x}8.1 & 0.971 & -8.5$\\pm$12.2 & 11.5 \\\\\n\t\t\t& e-map & 0.998 & -3.2$\\pm$4.8 & 4.4 & 0.992 & \\phantom{x}1.7$\\pm$\\phantom{x}2.6 & 2.4 & 0.987 & -4.1$\\pm$\\phantom{x}9.1 & \\phantom{x}6.7 & 0.957 & 2.6$\\pm$\\phantom{x}5.2 & \\phantom{x}4.1 & 0.983 & -5.7$\\pm$\\phantom{x}9.3 & \\phantom{x}8.2 \\\\\n\t\t\t\n\t\t\t\\hdashline[1pt\/2pt]\n\t\t\t\\rowcolor{LightCyan}\n\t\t\tU-net-CE+MC & & 0.995 & -4.3$\\pm$7.2 & 5.9 & 
0.958 & \\phantom{x}3.8$\\pm$\\phantom{x}5.8 & 4.9 & 0.968 & -4.8$\\pm$14.1 & 10.7 & 0.867 & 5.0$\\pm$\\phantom{x}9.1 & \\phantom{x}7.9 & 0.972 & -8.1$\\pm$12.0 & 11.1 \\\\\n\t\t\t& b-map & 0.997 & -3.5$\\pm$5.5 & 4.9 & 0.990 & \\phantom{x}1.6$\\pm$\\phantom{x}2.9 & 2.6 & 0.992 & -1.8$\\pm$\\phantom{x}7.0 & \\phantom{x}4.9 & 0.974 & 1.6$\\pm$\\phantom{x}4.1 & \\phantom{x}3.3 & 0.981 & -6.8$\\pm$10.0 & \\phantom{x}9.4 \\\\\n\t\t\t\\hdashline[1pt\/2pt]\n\t\t\t\n\t\t\t\\rowcolor{LightGreen}\n\t\t\tU-net-soft-Dice & & 0.997 & -2.0$\\pm$6.0 & 4.5 & 0.853 & \\phantom{x}3.6$\\pm$10.9 & 5.0 & 0.968 & -1.0$\\pm$14.1 & 10.0 & 0.782 & 4.8$\\pm$11.6 & \\phantom{x}9.0 & \\textbf{0.985} & -7.7$\\pm$\\phantom{x}8.8 & \\phantom{x}9.2 \\\\\n\t\t\t& e-map & 0.997 & -1.7$\\pm$5.3 & 4.1 & 0.969 & \\phantom{x}1.9$\\pm$\\phantom{x}4.9 & 3.3 & 0.981 & -0.1$\\pm$10.9 & \\phantom{x}7.5 & 0.919 & 3.3$\\pm$\\phantom{x}7.0 & \\phantom{x}5.9 & 0.984 & -6.6$\\pm$\\phantom{x}9.0 & \\phantom{x}8.7 \\\\\n\t\t\t\n\t\t\t\\hdashline[1pt\/2pt]\n\t\t\t\\rowcolor{LightCyan}\n\t\t\tU-net-soft-Dice+MC & & 0.997 & -1.8$\\pm$5.9 & 4.4 & 0.941 & \\phantom{x}3.0$\\pm$\\phantom{x}6.7 & 4.4 & 0.969 & \\phantom{x}0.6$\\pm$13.9 & \\phantom{x}9.8 & 0.792 & 4.4$\\pm$11.3 & \\phantom{x}8.7 & \\textbf{0.985} & -7.2$\\pm$\\phantom{x}8.9 & \\phantom{x}8.9 \\\\\n\t\t\t& b-map & 0.997 & -1.5$\\pm$5.3 & 4.1 & 0.979 & \\phantom{x}1.1$\\pm$\\phantom{x}4.1 & 2.9 & 0.985 & \\phantom{x}1.2$\\pm$\\phantom{x}9.4 & \\phantom{x}6.5 & 0.939 & 2.9$\\pm$\\phantom{x}6.2 & \\phantom{x}4.9 & 0.984 & -5.9$\\pm$\\phantom{x}9.0 & \\phantom{x}8.5 \\\\\n\t\t\t\\bottomrule\n\t\t\\end{tabular}\n\t\\end{table}\n\t\n\t\n\tThe patches used to train the network were selected randomly (\\nicefrac{2}{3}), or were forced (\\nicefrac{1}{3}) to contain at least one segmentation failure by randomly selecting a scan containing segmentation failure, followed by random sampling of a patch containing at least one segmentation failure. During training the patch size was fixed to \\num{80}$\\times$\\num{80} voxels. To reduce the number of background voxels during testing, inputs were cropped based on a minimal enclosing, rectangular bounding box that was placed around the automatic segmentation mask. Inputs always had a minimum size of \\num{80}$\\times$\\num{80} voxels or were forced to a multiple of the output grid spacing of eight voxels in both direction required by the patch-based detection network. The patches of size \\num{8}$\\times$\\num{8} voxels did not overlap. In cases where the automatic segmentation mask only contains background voxels (scans above the base or below apex of the heart) input scans were center-cropped to a size of \\num{80}$\\times$\\num{80} voxels. \n\t\n\tModels were trained for 20,000 iterations using mini-batch stochastic gradient descent with batch-size \\num{32} and Adam as optimizer\\cite{kingmadp}. Learning rate was set to \\num{0.0001} and decayed with a factor of \\num{0.1} after \\num{10,000} steps. 
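\n\t\n\tTo make these optimization settings concrete, the following self-contained PyTorch sketch mirrors the training setup described above. The small stand-in network, the random tensors, the weight decay value and the value of the class weight are illustrative placeholders, not the actual S-ResNet, data pipeline or tuned hyperparameters.\n\t\n\t\\begin{verbatim}\nimport torch\nimport torch.nn as nn\n\n# stand-in for the patch-level detection network (output stride 8, 2 output channels)\ndetection_net = nn.Sequential(\n    nn.Conv2d(2, 16, 3, stride=2, padding=1), nn.ReLU(),\n    nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),\n    nn.Conv2d(32, 2, 3, stride=2, padding=1),\n)\nw_pos = 40.0  # illustrative ratio of negative to positive regions\ncriterion = nn.CrossEntropyLoss(weight=torch.tensor([1.0, w_pos]))\noptimizer = torch.optim.Adam(detection_net.parameters(), lr=1e-4, weight_decay=1e-4)\nscheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=10_000, gamma=0.1)\n\nfor step in range(20_000):\n    images_umaps = torch.randn(32, 2, 80, 80)          # image and uncertainty-map channels\n    patch_labels = torch.randint(0, 2, (32, 10, 10))   # one binary label per 8x8 patch\n    logits = detection_net(images_umaps)               # (32, 2, 10, 10) patch grid\n    loss = criterion(logits, patch_labels)\n    optimizer.zero_grad()\n    loss.backward()\n    optimizer.step()\n    scheduler.step()\n\\end{verbatim}\n\t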
Furthermore, the dropout probability was set to \\num{0.5} and weight decay was applied to increase generalization performance.\n\t\n\t\\begin{figure}[t]\n\t\t\\center\n\t\t\\subfloat[]{\\includegraphics[width=3.4in, height=1.7in]{figures\/dt\/all_models_froc_voxel_detection_rate.pdf}%\n\t\t\t\\label{fig_froc_voxel_detection}}\n\t\t\\subfloat[]{\\includegraphics[width=3.4in, height=1.7in]{figures\/dt\/all_models_slices_prec_recall.pdf}%\n\t\t\t\\label{fig_prec_rec_slice_detection}}\n\t\t\n\t\t\\caption{Detection performance for segmentation failures generated by different combinations of segmentation architectures and loss functions. (a) Sensitivity for detection of segmentation failures on the voxel level (y-axis) as a function of the number of false positive image regions (x-axis). (b) Precision-recall curve for detection of slices containing segmentation failures (where AP denotes average precision). Results are split between entropy and Bayesian uncertainty maps. Each figure contains a curve for the six possible combinations of models (three) and loss functions (two). SD denotes soft-Dice and CE cross-entropy.}\n\t\t\\label{fig_dt_perf_all_models}\n\t\\end{figure}\n\n\t\\subsection*{Segmentation using correction of the detected segmentation failures}\n\t\n\tTo investigate whether correction of detected segmentation failures increases segmentation performance, two scenarios were evaluated. In the first scenario, manual correction of the detected failures by an expert was simulated for all images at ED and ES time points of the ACDC dataset. For this purpose, in image regions that were detected to contain segmentation failures, predicted labels were replaced with reference labels. In the second scenario, manual correction of the detected failures was performed by an expert in a random subset of \\num{50} patients of the ACDC dataset. The expert was shown CMRI slices for ED and ES time points together with the corresponding automatic segmentation masks for the RV, LV and LV myocardium. Image regions detected to contain segmentation failures were indicated in the slices, and the expert was only allowed to change the automatic segmentations in these indicated regions. Annotation was performed following the protocol described in \\cite{bernard2018deep}. Furthermore, the expert was able to navigate through all CMRI slices of the corresponding ED and ES volumes.\n\n\t\n\t\\section*{Results}\n\t\n\tIn this section we first present results for the segmentation-only task, followed by a description of the combined segmentation and detection results. \n\t\n\t\\subsection*{Segmentation-only approach} \\label{results_seg_only}\n\t\n\tTable~\\ref{table_overall_segmentation_performance} lists quantitative results for the segmentation-only and the combined segmentation and detection approach in terms of Dice coefficient and Hausdorff distance. These results show that DRN and U-net achieved similar Dice coefficients and outperformed the DN network for all anatomical structures at end-systole. Differences in the achieved Hausdorff distances among the methods are present for all anatomical structures and both time points. The DRN model achieved the lowest (best) Hausdorff distances, whereas the DN network generally achieved the highest.\n\t\n\tTable~\\ref{table_cardiac_function_indices} lists results of the evaluation in terms of clinical metrics. These results reveal noticeable differences between models for the ejection fraction (EF) of the left and right ventricle. 
We can observe that U-net trained with the soft-Dice and the Dilated Network (DN) trained with Brier or soft-Dice loss achieved considerable lower accuracy for LV and RV ejection fraction compared to DRN. Overall, the DRN model achieved highest performance for all clinical metrics.\n\n\t\\noindent \\textbf{Effect of model architecture on segmentation}: Although quantitative differences between models are small, qualitative evaluation discloses that automatic segmentations differ substantially between the models. Figure~\\ref{fig_seg_qualitative_results} shows that especially in regions where the models perform poorly (apical and basal slices) the DN model more often produced anatomically implausible segmentations compared to the DRN and U-net. This seems to be correlated with the performance differences in Hausdorff distance.\n\t\n\t\\noindent \\textbf{Effect of loss function on segmentation}: The results indicate that the choice of loss function only slightly affects the segmentation performance. DRN and U-net perform marginally better when trained with soft-Dice compared to cross-entropy whereas DN performs better when trained with Brier loss than with soft-Dice. For DN this is most pronounced for the RV at ES.\n\t\n\tA considerable effect of the loss function on the accuracy of the LV and RV ejection fraction can be observed for the U-net model. On both metrics U-net achieved the lowest accuracy of all models when trained with the soft-Dice loss.\n\t\n\t\\noindent \\textbf{Effect of MC dropout on segmentation}: The results show that enabling MC-dropout during testing seems to result in slightly improved HD while it does not affect DC. \n\n\t\\begin{table}\n\t\t\\caption{Average precision and percentage of slices with segmentation failures generated by Dilated Network (DN), Dilated Residual Network (DRN) and U-net when trained with soft-Dice (SD), CE or Brier loss. Per patient, average precision of detected slices with failure using e- or b-maps (\\num{2}$^{nd}$ and \\num{3}$^{rd}$ columns). Per patient, average percentage of slices containing segmentation failures (reference for detection task) (\\num{4}$^{th}$ and \\num{5}$^{th}$ columns).}\n\t\t\\label{table_evaluation_slice_detection}\n\t\t\\begin{tabular}{l C{1.4cm} C{1.4cm} C{1.4cm} C{1.4cm} }\n\t\t\t\\textbf{Model} & \\multicolumn{2}{c}{\\textbf{Average precision}} & \\multicolumn{2}{c}{ \\thead{\\textbf{\\% of slices} \\\\ \\textbf{with segmentation failures} }} \\\\\n\t\t\t& e-map & b-map & e-map & b-map \\\\\n\t\t\t\\hline\n\t\t\tDN-Brier & 84.0 & 83.0 & 53.7 & 52.4 \\\\\n\t\t\tDN-SD & 87.0 & 85.0 & 58.3 & 58.1 \\\\\n\t\t\t\\hdashline\n\t\t\tDRN-CE & 75.0 & 69.0 & 39.5 & 39.4 \\\\\n\t\t\tDRN-SD & 67.0 & 67.0 & 34.9 & 33.7\\\\\n\t\t\t\\hdashline\n\t\t\tU-net-CE & 81.0 & 75.0 & 54.8 & 52.5 \\\\\n\t\t\tU-net-SD & 76.0 & 76.0 & 46.7 & 45.5 \\\\\n\t\t\t\\hline\n\t\t\\end{tabular}\n\t\\end{table}\n\t\n\t\\subsection*{Detection of segmentation failures}\n\t\n\t\\noindent \\textbf{Detection of segmentation failures on voxel level}: To evaluate detection performance of segmentation failures on voxel level Figure~\\ref{fig_froc_voxel_detection} shows average voxel detection rate as a function of false positively detected regions. This was done for each combination of model architecture and loss function exploiting e- (Figure~\\ref{fig_froc_voxel_detection}, left) or b-maps (Figure~\\ref{fig_froc_voxel_detection}, right). 
These results show that detection performance of segmentation failures depends on segmentation model architecture, loss function and uncertainty map. \n\t\n\tThe influence of (segmentation) model architecture and loss function on detection performance is slightly stronger when e-maps were used as input for the detection task compared to b-maps. Detection rates are consistently lower when segmentation failures originate from segmentation models trained with soft-Dice loss compared to models trained with CE or Brier loss. Overall, detection rates are higher when b-maps were exploited for the detection task compared to e-maps.\n\t\t\n\\begin{table*}\n\t\\caption{Comparing performance of segmentation-only approach (auto-only) with combined segmentation and detection approach for two scenarios: simulated correction of detected segmentation failures (auto$+$simulation); and manual correction of detected segmentation failures by an expert (auto$+$expert). Automatic segmentations were obtained from a U-net trained with cross-entropy. Evaluation was performed on a subset of \\num{50} patients from the ACDC dataset. Scenarios are compared against segmentation-only approach (auto-only) in terms of (a) Dice Coefficient (b) Hausdorff Distance and (c) Clinical metrics. Results obtained from simulated manual correction represent an upper bound on the maximum achievable performance. Detection network was trained with e-maps. Number with asterisk indicates statistical significant at \\num{5}\\% level w.r.t. the segmentation-only approach. Best viewed in color.}\n\t\\label{table_manual_corr_performance}\n\t\\centering\n\t\\small\n\t\\subfloat[\\textbf{Dice coefficient:} Mean $\\pm$ standard deviation for left ventricle (LV), right ventricle (RV) and left ventricle myocardium (LVM).]{\n\t\t\\begin{tabular}{| C{2.cm} | R{1.7cm} R{1.7cm} R{1.7cm} | R{1.7cm} R{1.7cm} R{1.7cm} | }\n\t\t\t\\hline\n\t\t\t& \\multicolumn{3}{c|}{\\textbf{End-diastole}} & \\multicolumn{3}{c|}{\\textbf{End-systole}} \\\\\n\t\t\t\\textbf{Scenario} & \\multicolumn{1}{l}{\\textbf{LV}} & \\multicolumn{1}{l}{\\textbf{RV}} & \\multicolumn{1}{l|}{\\textbf{LVM}} & \\multicolumn{1}{l}{\\textbf{LV}} & \\multicolumn{1}{l}{\\textbf{RV}} & \\multicolumn{1}{l|}{\\textbf{LVM}} \\\\ \n\t\t\t\\hline\n\t\t\tauto-only & \\phantom{x}0.964$\\pm$0.02 & \\phantom{x}0.927$\\pm$0.04 & \\phantom{x}0.883$\\pm$0.03 & \\phantom{x}0.916$\\pm$0.05 & \\phantom{x}0.854$\\pm$0.08 & \\phantom{x}0.886$\\pm$0.04 \\\\ \n\t\t\tauto$+$simulation & \\phantom{x}0.967$\\pm$0.01 & *0.948$\\pm$0.03 & *0.894$\\pm$0.03 & *0.939$\\pm$0.03 & *0.915$\\pm$0.04 & *0.910$\\pm$0.03 \\\\ \n\t\t\n\t\t\tauto$+$expert & \\phantom{x}0.965$\\pm$0.02 & \\phantom{x}0.940$\\pm$0.03 & \\phantom{x}0.885$\\pm$0.03 & \\phantom{x}0.927$\\pm$0.04 & \\phantom{x}0.868$\\pm$0.07 & \\phantom{x}0.894$\\pm$0.03 \\\\ \n\t\t\t\n\t\t\t\\bottomrule\n\t\t\t\n\t\t\\end{tabular}\n\t\t\\label{table_manual_seg_perf_dsc}\n\t} \n\n\t\\centering\n\n\t\\subfloat[\\textbf{Hausdorff Distance:} Mean $\\pm$ standard deviation for left ventricle (LV), right ventricle (RV) and left ventricle myocardium (LVM).]{\n\t\t\\begin{tabular}{| C{2.cm} | R{1.7cm} R{1.7cm} R{1.7cm} | R{1.7cm} R{1.7cm} R{1.7cm} | }\n\t\t\t\\hline\n\t\t\t& \\multicolumn{3}{c|}{\\textbf{End-diastole}} & \\multicolumn{3}{c|}{\\textbf{End-systole}} \\\\\n\t\t\t\\textbf{Scenario} & \\multicolumn{1}{l}{\\textbf{LV}} & \\multicolumn{1}{l}{\\textbf{RV}} & \\multicolumn{1}{l|}{\\textbf{LVM}} & \\multicolumn{1}{l}{\\textbf{LV}} & \\multicolumn{1}{l}{\\textbf{RV}} & 
\\multicolumn{1}{l|}{\\textbf{LVM}} \\\\ \n\t\t\t\\hline\n\t\t\tauto-only & \\phantom{x}5.6$\\pm$3.3 & \\phantom{x}15.7$\\pm$9.7 & \\phantom{x}8.5$\\pm$6.4 & \\phantom{x}9.2$\\pm$5.8 & \\phantom{x}16.5$\\pm$8.8 & \\phantom{x}13.4$\\pm$10.5 \\\\\n\t\t\tauto$+$simulation & \\phantom{x}4.5$\\pm$2.1 & *\\phantom{x}9.0$\\pm$4.6 & *5.9$\\pm$3.4 & *5.2$\\pm$2.5 & *10.3$\\pm$3.7 & *\\phantom{x}6.6$\\pm$2.9 \\\\\n\t\t\n\t\t\tauto$+$expert & \\phantom{x}4.9$\\pm$2.8 & *\\phantom{x}9.8$\\pm$4.3 & \\phantom{x}7.3$\\pm$4.3 & \\phantom{x}7.2$\\pm$3.3 & *12.5$\\pm$4.7 & *\\phantom{x}8.3$\\pm$3.5 \\\\\n\t\t\t\n\t\t\t\\bottomrule\n\t\t\\end{tabular}\n\t\\label{table_manual_seg_perf_hd}\n\t} \n\n\t\\subfloat[\\textbf{Clinical metrics:} a) Left ventricle (LV) end-diastolic volume (EDV) b) LV ejection fraction (EF) c) Right ventricle (RV) EDV d) RV ejection fraction e) LV myocardial mass. Quantitative results compare clinical metrics based on reference segmentations with 1) automatic segmentations; 2) simulated manual correction and 3) manual expert correction of automatic segmentations using spatial uncertainty maps. $\\rho$ denotes the Pearson correlation coefficient, \\textit{bias} denotes the mean difference between the two measurements (mean $\\pm$ standard deviation) and \\textit{MAE} denotes the mean absolute error between the two measurements.]{\n\t\t\\label{table_manual_cardiac_function_indices}\n\t\t\\small\n\t\t\\begin{tabular}{| C{2.cm} | C{0.5cm} C{0.8cm} C{0.55cm} | C{0.5cm} C{0.8cm}C{0.55cm} | C{0.5cm} C{0.8cm} C{0.55cm} | C{0.5cm} C{0.8cm} C{0.55cm} | C{0.5cm} C{0.8cm} C{0.55cm} |}\n\t\t\t\\hline\n\t\t\t& \\multicolumn{3}{c}{\\textbf{LV$_{EDV}$}} & \\multicolumn{3}{c}{\\textbf{LV$_{EF}$}} & \\multicolumn{3}{c}{\\textbf{RV$_{EDV}$}} & \\multicolumn{3}{c}{\\textbf{RV$_{EF}$}} & \\multicolumn{3}{c|}{\\textbf{LVM$_{Mass}$}} \\\\\n\n\t\t\t \\textbf{Scenario} & \\textbf{$\\rho$} & \\textbf{bias $\\pm\\sigma$} & \\textbf{MAE} & \\textbf{$\\rho$} & \\textbf{bias $\\pm\\sigma$} & \\textbf{MAE} & \\textbf{$\\rho$} & \\textbf{bias $\\pm\\sigma$} & \\textbf{MAE} & \\textbf{$\\rho$} & \\textbf{bias $\\pm\\sigma$} & \\textbf{MAE} & \\multicolumn{1}{c}{\\textbf{$\\rho$}} & \\textbf{bias $\\pm\\sigma$} & \\textbf{MAE} \\\\\n\t\t\t\\hline\n\t\t\n\t\t auto-only & 0.995 & -4.4 $\\pm$7.0 & 5.7 & 0.927 & 5.0 $\\pm$7.1 & 5.8 & 0.962 & -6.4 $\\pm$16.2 & 11.9 & 0.878 & 5.8 $\\pm$8.7 & 8.0 & 0.979 & -6.4 $\\pm$10.6 & 9.5 \\\\\n\t\t\t \n\t\t\t auto$+$simulation & 0.998 & -3.9 $\\pm$5.2 & 4.8 & 0.989 & 2.3 $\\pm$2.9 & 2.9 & 0.984 & -3.7 $\\pm$10.4 & 6.8 & 0.954 & 2.7 $\\pm$5.5 & 4.5 & 0.983 & -5.5 $\\pm$9.6 & 8.1 \\\\\n\t\t\t\n\t\t\t \n\t\t\t auto$+$expert & 0.996 & -4.3 $\\pm$6.5 & 5.5 & 0.968 & 2.7 $\\pm$4.8 & 4.3 & 0.976 & -3.2 $\\pm$12.9 & 8.3 & 0.883 & 5.1 $\\pm$8.6 & 7.7 & 0.980 & -6.2 $\\pm$10.2 & 9.1 \\\\\n\t\t\t\\bottomrule\n\t\t\\end{tabular}\n\t}\n\t\n\\end{table*}\n\t\n\t\\vspace{1ex}\n\t\\noindent \\textbf{Detection of slices with segmentation failures}: To evaluate detection performance w.r.t. slices containing segmentation failures precision-recall curves for each combination of model architecture and loss function using e-maps (Figure~\\ref{fig_prec_rec_slice_detection}, left) or b-maps (Figure~\\ref{fig_prec_rec_slice_detection}, right) are shown. The results show that detection performance of slices containing segmentation failures is slightly better for all models when using e-maps. 
Furthermore, the detection network achieves highest performance using uncertainty maps obtained from the DN model and the lowest when exploiting e- or b-maps obtained from the DRN model. Table~\\ref{table_evaluation_slice_detection} shows the average precision of detected slices with segmentation failures per patient, as well as the average percentage of slices that do contain segmentation failures (reference for detection task). The results illustrate that these measures are positively correlated i.e. that precision of detected slices in a patient volume is higher if the volume contains more slices that need correction. On average the DN model generates cardiac segmentations that contain more slices with at least one segmentation failure compared to U-net (ranks second) and DRN (ranks third). A higher number of detected slices containing segmentation failures implies an increased workload for manual correction.\n\t\n\t\\subsection*{Calibration of uncertainty maps} \\label{result_eval_quality_umaps}\n\t\n\tFigure~\\ref{fig_risk_cov_comparison} shows risk-coverage curves for each combination of model architectures, uncertainty maps and loss functions (Figure~\\ref{fig_risk_cov_comparison} left: CE or Brier loss, Figure~\\ref{fig_risk_cov_comparison} right: soft-Dice). The results show an effect of the loss function on slope and convergence of the curves. Segmentation errors of models trained with the soft-Dice loss are less frequently covered by higher uncertainties than models trained with CE or Brier loss (steeper slope and lower minimum are better). This difference is more pronounced for e-maps. Models trained with the CE or Brier loss only slightly differ concerning convergence and their slopes are approximately identical. In contrast, the curves of the models trained with the soft-Dice differ regarding their slope and achieved minimum. Comparing e- and b-map of the DN-SD and U-net-SD models the results reveal that the curve for b-map has a steeper slope and achieves a lower minimum compared to the e-map. For the DRN-SD model these differences are less striking. In general for a specific combination of model and loss function the risk-coverage curves using b-maps achieve a lower minimum compared to e-maps.\n\t\n\t\\begin{figure}[t]\n\t\t\\centering\n\t\t\\includegraphics[width=4.7in, height=2.8in]{figures\/risk_cov\/cov_risk_curve_both_seg_errors.pdf}\n\t\t\n\t\t\\caption{Comparison of risk-coverage curves for different combination of model architectures, loss functions and uncertainty maps. Results are separated for loss functions (left cross-entropy and Brier, right soft-Dice loss). \\num{100}\\% coverage means that none of the voxels is discarded based on its uncertainty whereas a coverage of \\num{0}\\% denotes the scenario in which all predictions are replaced by their reference labels. Note, all models were trained with two different loss functions (1) soft-Dice (SD) for all models (2) cross-entropy (CE) for DRN and U-net and Brier loss for DN.}\n\t\t\\label{fig_risk_cov_comparison}\n\t\\end{figure}\n\t\n\t\\begin{table}\n\t\t\\caption{Effect of number of Monte Carlo (MC) samples on segmentation performance in terms of (a) Dice coefficient (DC) and (b) Hausdorff Distance (HD) (mean $\\pm$ standard deviation). Higher DC and lower HD is better. 
Abbreviations: Cross-Entropy (CE), Dilated Residual Network (DRN) and Dilated Network (DN).} \n\t\t\\label{table_seg_perf_per_samples}\n\t\t\\small\n\t\t\\subfloat[Dice coefficient]{\n\t\t\t\\begin{tabular}{|c C{1.5cm} C{1.5cm} c|}\n\t\t\t\t\\hline\n\t\t\t\t\\textbf{\\thead{Number of \\\\ MC samples}} & DRN-CE & U-net-CE & DN-soft-Dice\\\\\n\t\t\t\t\\hline\n\t\t\t\t1 & 0.894$\\pm$0.07 & 0.896$\\pm$0.07 & 0.871$\\pm$0.09 \\\\\n\t\t\t\t3 & 0.900$\\pm$0.07 & 0.901$\\pm$0.07 & 0.883$\\pm$0.08 \\\\\n\t\t\t\t5 & 0.902$\\pm$0.07 & 0.901$\\pm$0.07 & 0.887$\\pm$0.08 \\\\\n\t\t\t\t7 & 0.903$\\pm$0.07 & 0.901$\\pm$0.07 & 0.888$\\pm$0.08 \\\\\n\t\t\t\t10 & 0.904$\\pm$0.06 & 0.902$\\pm$0.07 &0.890$\\pm$0.08 \\\\\n\t\t\t\t20 & 0.904$\\pm$0.07 & 0.902$\\pm$0.07 & 0.890$\\pm$0.08 \\\\\n\t\t\t\t30 & 0.904$\\pm$0.07 & 0.902$\\pm$0.07 & 0.891$\\pm$0.08 \\\\\n\t\t\t\t60 & 0.904$\\pm$0.07 & 0.902$\\pm$0.07 & 0.891$\\pm$0.08 \\\\\n\t\t\t\t\\hline\n\t\t\t\\end{tabular}\n\t\t}\n\t\t\\subfloat[Hausdorff Distance]{\n\t\t\t\\begin{tabular}{|c C{1.5cm} C{1.5cm} c|}\n\t\t\t\t\\hline\n\t\t\t\t\\textbf{\\thead{Number of \\\\ MC samples}}& DRN-CE & U-net-CE & DN-soft-Dice \\\\\n\t\t\t\t\\hline\n\t\t\t\t1 & 9.88$\\pm$5.76 & 11.79$\\pm$8.23 & 13.54$\\pm$7.14 \\\\\n\t\t\t\t3 & 9.70$\\pm$6.13 & 11.40$\\pm$7.78 & 12.71$\\pm$6.79 \\\\\n\t\t\t\t5 & 9.54$\\pm$6.07 & 11.37$\\pm$7.81 & 12.06$\\pm$6.29 \\\\\n\t\t\t\t7 & 9.38$\\pm$5.86 & 11.29$\\pm$7.86 & 12.08$\\pm$6.38 \\\\\n\t\t\t\t10 & 9.38$\\pm$5.91 & 11.24$\\pm$7.71 & 11.85$\\pm$6.34 \\\\\n\t\t\t\t20 & 9.37$\\pm$5.83 & 11.27$\\pm$7.79 & 11.90$\\pm$6.52 \\\\\n\t\t\t\t30 & 9.39$\\pm$5.91 & 11.32$\\pm$7.93 & 11.90$\\pm$6.48 \\\\\n\t\t\t\t60 & 9.39$\\pm$5.93 & 11.22$\\pm$7.83 & 11.89$\\pm$6.56 \\\\\n\t\t\t\t\\hline\n\t\t\t\\end{tabular}\t\n\t\t}\n\t\\end{table}\n\t\n\t\\subsection*{Correction of automatically identified segmentation failures} \\label{results_combined_approach}\n\t\n\t\\textbf{Simulated correction:} The results listed in Table~\\ref{table_overall_segmentation_performance} and \\ref{table_cardiac_function_indices} show that the proposed method consisting of segmentation followed by simulated manual correction of detected segmentation failures delivers accurate segmentation for all tissues over ED and ES points. Correction of detected segmentation failures improved the performance in terms of DC, HD and clinical metrics for all combinations of model architectures, loss functions and uncertainty measures. Focusing on the DC after correction of detected segmentation failures the results reveal that performance differences between evaluated models decreased compared to the segmentation-only task. This effect is less pronounced for HD where the DRN network clearly achieved superior results in the segmentation-only and combined approach. The DN performs the least of all models but achieves the highest absolute DC performance improvements in the combined approach for RV at ES. Overall, the results in Table~\\ref{table_overall_segmentation_performance} disclose that improvements attained by the combined approach are almost all statistically significant ($p \\leq 0.05$) at ES and frequently at ED (\\num{96}\\% resp. \\num{83}\\% of the cases). Moreover, improvements are in \\num{99}\\% of the cases statistically significant for HD compared to \\num{81}\\% of the cases for DC.\n\t\n\tResults in terms of clinical metrics shown in Table~\\ref{table_cardiac_function_indices} are inline with these findings. 
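The agreement statistics used in these tables ($\\rho$, bias and MAE) can be obtained directly from paired measurements of a clinical metric, computed once from the reference segmentations and once from the (corrected) automatic segmentations. A minimal sketch with hypothetical values follows; the sign convention for the bias (automatic minus reference) is an assumption of the example.
\\begin{verbatim}
# Minimal sketch (Python): Pearson correlation, bias and MAE between clinical
# metrics derived from reference and from automatic segmentations.
import numpy as np
from scipy.stats import pearsonr

# e.g. LV end-diastolic volume per patient (ml); hypothetical values
edv_reference = np.array([120.0, 95.0, 180.0, 140.0, 110.0])
edv_automatic = np.array([115.0, 97.0, 171.0, 133.0, 108.0])

rho, _ = pearsonr(edv_reference, edv_automatic)
diff = edv_automatic - edv_reference
bias, sigma = diff.mean(), diff.std()     # mean difference +/- standard deviation
mae = np.abs(diff).mean()                 # mean absolute error
print(f"rho={rho:.3f}, bias={bias:.1f}+/-{sigma:.1f} ml, MAE={mae:.1f} ml")
\\end{verbatim}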
We observe that segmentation followed by simulated manual correction of detected segmentation failures resulted in considerably higher accuracy for LV and RV ejection fraction. Achieved improvements for clinical metrics are only statistically significant ($p \\leq 0.05$) in one case for RV ejection fraction.\t\n\t\n\tIn general, the effect of correction of detected segmentation failures is more pronounced in cases where the segmentation-only approach achieved relatively low accuracy (e.g. DN-SD for RV at ES). Furthermore, performance gains are largest for RV and LV at ES and for ejection fraction of both ventricles.\n\t\n\n\t\\begin{figure}[!t]\n\t\t\\captionsetup[subfigure]{justification=centering}\n\t\t\\centering\n\t\t\\subfloat[]{\\includegraphics[width=6in]{figures\/qualitative\/sim_corr\/patient018_slice07_ES_bmap_acorr.pdf}%\n\t\t\t\\label{fig_qual_result_sim_corr_example1}}\n\t\t\n\t\t\\subfloat[]{\\includegraphics[width=6in]{figures\/qualitative\/sim_corr\/patient070_slice05_ED_emap_acorr.pdf}%\n\t\t\t\\label{fig_qual_result_sim_corr_example2}}\n\t\t\n\t\t\\subfloat[]{\\includegraphics[width=6in]{figures\/qualitative\/sim_corr\/patient081_slice01_ES_emap_acorr.pdf}%\n\t\t\t\\label{fig_qual_result_sim_corr_example3}}\n\t\t\\caption{Three patients showing results of combined segmentation and detection approach consisting of segmentation followed by simulated manual correction of detected segmentation failures. First column shows MRI (top) and reference segmentation (bottom). Results for automatic segmentation and simulated manual correction respectively achieved by: Dilated Network (DN-Brier, \\num{2}$^{nd}$ and \\num{5}$^{th}$ columns); Dilated Residual Network (DRN-soft-Dice, \\num{3}$^{rd}$ and \\num{6}$^{th}$ columns); and U-net (soft-Dice, \\num{4}$^{th}$ and \\num{7}$^{th}$ columns).}\n\t\t\\label{fig_seg_detection_qualitative_results}\n\t\\end{figure}\n\t\n\t\\begin{figure}[!t]\n\t\t\\captionsetup[subfigure]{justification=centering}\n\t\t\\centering\n\t\t\\subfloat[]{\\includegraphics[width=3.3in]{figures\/qualitative\/man_corr\/with_region_contour\/patient048_slice02_mcorr.pdf}%\n\t\t\t\\label{fig_qual_result_man_corr_example1} \\hspace{3ex}}\n\t\t\\subfloat[]{\\includegraphics[width=3.3in]{figures\/qualitative\/man_corr\/with_region_contour\/patient091_slice01_mcorr.pdf}%\n\t\t\t\\label{fig_qual_result_man_corr_example2}}\n\t\t\n\t\t\\subfloat[]{\\includegraphics[width=3.3in]{figures\/qualitative\/man_corr\/with_region_contour\/patient075_slice06_mcorr.pdf}%\n\t\t\t\\label{fig_qual_result_man_corr_example3} \\hspace{3ex}}\n\t\t\\subfloat[]{\\includegraphics[width=3.3in]{figures\/qualitative\/man_corr\/with_region_contour\/patient006_slice01_mcorr.pdf}%\n\t\t\t\\label{fig_qual_result_man_corr_example4}} \n\t\t\n\t\t\\caption{Four patients showing results of combined segmentation and detection approach consisting of segmentation followed by manual expert correction of detected segmentation failures. Expert was only allowed to adjust the automatic segmentations in regions where the detection network predicted segmentation failures (orange contour shown in 2$^{nd}$ column). Automatic segmentations were generated by a U-net trained with the cross-entropy loss. Segmentation failure detection was performed using entropy maps.}\n\t\t\\label{fig_qualitative_results_man_corr}\n\t\\end{figure}\n\t\n\n\tThe best overall performance is achieved by the DRN model trained with cross-entropy loss while exploiting entropy maps in the detection task. 
Moreover, the proposed two-step approach attained slightly better results using Bayesian maps compared to entropy maps. \n\t\n\t\\vspace{1ex}\n\t\\textbf{Manual correction}: Table~\\ref{table_manual_corr_performance} lists results for the combined automatic segmentation and detection approach followed by \\textit{manual} correction of detected segmentation failures by an expert. The results show that this correction led to improved segmentation performance in terms of DC, HD and clinical metrics. Improvements in terms of HD are statistically significant ($p \\leq 0.05$) in \\num{50} percent of the cases and are most pronounced for RV and LV at end-systole. \n\t\n\tQualitative examples of the proposed approach are visualized in Figures~\\ref{fig_seg_detection_qualitative_results} and \\ref{fig_qualitative_results_man_corr} for simulated correction and manual correction of the automatically detected segmentation failures, respectively. For the illustrated cases, (simulated) manual correction of detected segmentation failures leads to increased segmentation performance.\n\tOn average, manual correction of automatic segmentations took less than \\num{2} minutes for the ED and ES volumes of one patient, compared to the \\num{20} minutes typically needed by an expert for the same task.\n\t\n\t\n\t\\section*{Ablation Study}\n\t\n\tTo demonstrate the effect of different hyper-parameters of the method, a number of experiments were performed. These are detailed in the following.\n\t\n\t\\subsection*{Impact of number of Monte Carlo samples on segmentation performance}\n\n\tTo investigate the impact of the number of Monte Carlo (MC) samples on the segmentation performance, validation experiments were performed for all three segmentation architectures (Dilated Network, Dilated Residual Network and U-net) using $T$ $\\in \\{1, 3, 5, 7, 10, 20, 30, 60\\}$ samples. Results of these experiments are listed in Table~\\ref{table_seg_perf_per_samples}. We observe that segmentation performance started to converge with as few as \\num{7} samples. Performance improvements obtained with an increased number of MC samples were largest for the Dilated Network. Overall, using more than \\num{10} samples did not increase segmentation performance. Hence, in the presented work $T$ was set to \\num{10}.\t\n\t\t\n\t\\subsection*{Effect of patch-size on detection performance}\n\t\n\tThe combined segmentation and detection approach detects segmentation failures at the region level. To investigate the effect of patch-size on detection performance, three different patch-sizes were evaluated: \\num{4}$\\times$\\num{4}, \\num{8}$\\times$\\num{8}, and \\num{16}$\\times$\\num{16} voxels. The results are shown in Figure~\\ref{fig_grid_compare}. We can observe in Figure~\\ref{fig_fn_grid_froc_voxel_detection} that larger patch-sizes result in a lower number of false positive regions. This is potentially caused by the smaller number of regions per image when larger patch-sizes are used. Furthermore, Figure~\\ref{fig_fn_grid_prec_rec_slice_detection} reveals that slice detection performance is only slightly influenced by patch-size. To ease manual inspection and correction by an expert, it is desirable to keep the region size, i.e. the patch-size, small. 
Therefore, in the experiments a patch-size of \\num{8}$\\times$\\num{8} voxels was used.\n\t\n\t\\begin{figure}[t]\n\t\t\\center\n\t\t\\subfloat[]{\\includegraphics[width=3.4in, height=1.7in]{figures\/dt\/compare_grid_froc_voxel_detection_rate.pdf}%\n\t\t\t\\label{fig_fn_grid_froc_voxel_detection}}\n\t\t\\subfloat[]{\\includegraphics[width=3.4in, height=1.7in]{figures\/dt\/compare_grid_slices_prec_recall.pdf}%\n\t\t\t\\label{fig_fn_grid_prec_rec_slice_detection}} \n\t\t\n\t\t\\caption{Detection performance for three different patch-sizes specified in voxels. (a) Sensitivity for detection of segmentation failures on voxel level (y-axis) versus number of false positive image regions (x-axis). (b) Precision-recall curve for detection of slices containing segmentation failures (where AP denotes average precision). Results are split between entropy and Bayesian uncertainty maps. In the experiments patch-size was set to \\num{8}$\\times$\\num{8} voxels.}\n\t\t\\label{fig_grid_compare}\n\t\\end{figure}\n\t\n\t\\subsection*{Impact of tolerance threshold on number of segmentation failures}\n\t\n\tTo investigate the impact of the tolerance threshold separating segmentation failures and tolerable segmentation errors, we calculated the ratio of the number of segmentation failures and all errors i.e. the sum of tolerable errors and segmentation failures. Figure~\\ref{fig_threshold_compare} shows the results. We observe that at least half of the segmentation failures are located within a tolerance threshold i.e. distance of two to three voxels of the target structure boundary as defined by the reference annotation. Furthermore, the mean percentage of failures per volume is considerably lower for the Dilated Residual Network (DRN) and highest for the Dilated Network. This result is inline with our earlier finding (see Table~\\ref{table_evaluation_slice_detection}) that average percentage of slices that do contain segmentation failures is lowest for the DRN model.\n\t\n\t\\section*{Discussion}\n\t\n\tWe have described a method that combines automatic segmentation and assessment of uncertainty in cardiac MRI with detection of image regions containing segmentation failures. The results show that combining automatic segmentation with manual correction of detected segmentation failures results in higher segmentation performance.\n\tIn contrast to previous methods that detected segmentation failures per patient or per structure, we showed that it is feasible to detect segmentation failures per image region. In most of the experimental settings, simulated manual correction of detected segmentation failures for LV, RV and LVM at ED and ES led to statistically significant improvements. These results represent the upper bound on the maximum achievable performance\tfor the manual expert correction task. Furthermore, results show that manual expert correction of detected segmentation failures led to consistently improved segmentations. However, these results are not on par with the simulated expert correction scenario. This is not surprising because inter-observer variability is high for the presented task and annotation protocols may differ between clinical environments. 
Moreover, qualitative results of the manual expert correction reveal that manual correction of the detected segmentation failures can prevent anatomically implausible segmentations (see Figure~\\ref{fig_qualitative_results_man_corr}).\n\tTherefore, the presented approach can potentially simplify and accelerate correction process and has the capacity to increase the trustworthiness of existing automatic segmentation methods in daily clinical practice.\n\n\t\n\t\n\tThe proposed combined segmentation and detection approach was evaluated using three state-of-the-art deep learning segmentation architectures. The results suggest that our approach is generic and applicable to different model architectures. Nevertheless, we observe noticeable differences between the different combination of model architectures, loss functions and uncertainty measures. In the segmentation-only task the DRN clearly outperforms the other two models in the evaluation of the boundary of the segmented structure. Moreover, qualitative analysis of the automatic segmentation masks suggests that DRN generates less often anatomically implausible and fragmented segmentations than the other models. We assume that clinical experts would prefer such segmentations although they are not always perfect. Furthermore, even though DRN and U-net achieve similar performance in regard to DC we assume that less fragmented segmentation masks would increase trustworthiness of the methods. \n\t\n\t\\begin{figure}[t]\n\t\t\\center\n\t\t\\subfloat[]{\n\t\t\t\\includegraphics[width=3.in]{figures\/dt\/tp_per_threshold_out_struc.pdf}%\n\t\t\t\\label{fig_threshold_struc_out_compare}\n\t\t}\n\t\t\\subfloat[]{\n\t\t\t\\includegraphics[width=3.in]{figures\/dt\/tp_per_threshold_in_struc.pdf}%\n\t\t\t\\label{fig_threshold_struc_in_compare}\n\t\t}\n\t\t\\caption{Mean percentage of the segmentation failures per volume (y-axis) in the set of all segmentation errors (tolerable errors$+$segmentation failures) depending on the tolerance threshold (x-axis). Red, dashed vertical line indicates threshold value that was used throughout the experiments. Results are split between segmentation errors located (a) outside and (b) inside the target structure. Each figure contains a curve for U-net, Dilated Network (DN) and Dilated Residual Network (DRN) trained with the soft-Dice (SD) loss. Segmentation errors located in slices above the base or below the apex are always included in the set of segmentation failures and therefore, they are independent of the applied tolerance threshold.}\n\t\t\\label{fig_threshold_compare}\n\t\\end{figure}\n\t\n\tIn agreement with our preliminary work we found that uncertainty maps obtained from a segmentation model trained with soft-Dice loss have a lower degree of uncertainty calibration compared to when trained with one of the other two loss functions (cross-entropy and Brier)\\cite{sander2019towards}. Nevertheless, the results of the combined segmentation and detection approach showed that a lower degree of uncertainty calibration only slightly deteriorated the detection performance of segmentation failures for the larger segmentation models (DRN and U-net) when exploiting uncertainty information from e-maps. Hendrycks and Gimpel \\cite{hendrycks2016baseline} showed that softmax probabilities generated by deep learning networks have poor direct correspondence to confidence. However, in agreement with Geifman et al. 
\\cite{geifman2017selective} we presume that the probabilities, and hence the corresponding entropies, obtained from the softmax function are ranked consistently, i.e. entropy can potentially be used as a relative uncertainty measure in deep learning. In addition, we detect segmentation failures per image region and therefore our approach does not require perfectly calibrated uncertainty maps. Furthermore, results of the combined segmentation and detection approach revealed that detection performance of segmentation failures using b-maps is almost independent of the loss function used to train the segmentation model. In line with Jungo et al. \\cite{jungo2019assessing} we assume that enabling MC-dropout at test time and computing the mean softmax probabilities per class leads to better calibrated probabilities and b-maps. This assumption is in agreement with Srivastava et al.~\\cite{srivastava2014dropout}, where a CNN with dropout enabled at test time is interpreted as an ensemble of models.\n\t\n\tQuantitative evaluation in terms of Dice coefficient and Hausdorff distance reveals that the proposed combined segmentation and detection approach leads to a significant performance increase. However, the results also demonstrate that the correction of the detected failures allowed by the combined approach does not lead to a statistically significant improvement in clinical metrics. This is not surprising, because state-of-the-art automatic segmentation methods are not expected to lead to large volumetric errors \\cite{bernard2018deep} and standard clinical measures are not sensitive to small segmentation errors. Nevertheless, errors of current state-of-the-art automatic segmentation methods may lead to anatomically implausible segmentations \\cite{bernard2018deep} that may cause distrust in clinical application. Besides increasing the trustworthiness of current state-of-the-art segmentation methods for cardiac MRI, improved segmentations are a prerequisite for advanced functional analysis of the heart, e.g. motion analysis \\cite{bello2019deep} and very detailed morphological analysis such as that of myocardial trabeculae in adults \\cite{meyer2020genetic}.\n\t\n\tFor the ACDC dataset used in this manuscript, Bernard et al. \\cite{bernard2018deep} reported inter-observer variability ranging from \\num{4} to \\SI{14.1}{\\milli\\meter} (equivalent to on average \\num{2.6} to \\num{9} voxels). To define the set of segmentation failures, we employed a strict tolerance threshold on a distance metric to distinguish between tolerated segmentation errors and segmentation failures (see Ablation study). A stricter tolerance threshold was used because the thresholding is performed in \\num{2}D, while the evaluation of the segmentation is done in \\num{3}D. The large slice thickness in cardiac MR could lead to a discrepancy between the two. As a consequence of this strict threshold, the results listed in Table~\\ref{table_evaluation_slice_detection} show that almost all patient volumes contain at least one slice with a segmentation failure. This might render the approach less feasible in clinical practice. Increasing the threshold decreases the number of segmentation failures and of slices containing segmentation failures (see Figure~\\ref{fig_threshold_compare}), but also lowers the upper bound on the maximum achievable performance. Therefore, to show the potential of our proposed approach we chose to apply a strict tolerance threshold. 
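To make the distinction concrete, the sketch below illustrates one possible way of splitting the error voxels of a 2D slice into tolerable errors and segmentation failures on the basis of their distance to the reference boundary. The helper names are hypothetical, and the special handling of slices above the base or below the apex (which are always counted as failures) is omitted for brevity.
\\begin{verbatim}
# Minimal sketch (Python): split the segmentation errors of one 2D slice into
# tolerable errors and segmentation failures using a distance threshold.
import numpy as np
from scipy.ndimage import distance_transform_edt

def failure_ratio(auto_mask, ref_mask, spacing, threshold_mm):
    # distance (in mm) of every voxel to the boundary of the reference structure
    dist_outside = distance_transform_edt(~ref_mask, sampling=spacing)
    dist_inside = distance_transform_edt(ref_mask, sampling=spacing)
    dist_to_boundary = dist_outside + dist_inside   # one term is zero everywhere

    errors = auto_mask != ref_mask                  # all mis-labelled voxels
    failures = errors & (dist_to_boundary > threshold_mm)
    if errors.sum() == 0:
        return 0.0
    # per-slice ratio of failures to all errors, as plotted per volume in
    # Figure fig_threshold_compare
    return failures.sum() / errors.sum()
\\end{verbatim}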
Nevertheless, we realize that although manual correction of detected segmentation failures leads to increased segmentation accuracy the performance of precision-recall is limited (see Figure~\\ref{fig_dt_perf_all_models}) and hence, should be a focus of future work. \n\t\n\tThe presented patch-based detection approach combined with (simulated) manual correction can in principle lead to stitching artefacts in the resulting segmentation masks. A voxel-based detection approach could potentially solve this. However, voxel-based detection methods are more challenging to train due to the very small number of voxels in an image belonging to the set of segmentation failures.\n\t\n\tEvaluation of the proposed approach for \\num{12} possible combinations of segmentation models (three), loss functions (two) and uncertainty maps (two) resulted in an extensive number of experiments. Nevertheless, future work could extend evaluation to other segmentation models, loss functions or combination of losses. Furthermore, our approach could be evaluated using additional uncertainty estimation techniques e.g. by means of ensembling of networks \\cite{lakshminarayanan2017simple} or variational dropout \\cite{kingma2015variational}. In addition, previous work by Kendall and Gal~\\cite{kendall2017uncertainties}, Tanno et al. \\cite{tanno2019uncertainty} has shown that the quality of uncertainty estimates can be improved if model (epistemic) and data (aleatoric) uncertainty are assessed simultaneously with separate measures. The current study focused on the assessment of model uncertainty by means of MC-dropout and entropy which is a combination of epistemic and aleatoric uncertainty. Hence, future work could investigate whether additional estimation of aleatoric uncertainty improves the detection of segmentation failures. \n\t\n\tFurthermore, to develop an end-to-end approach future work could incorporate the detection of segmentation failures into the segmentation network. Besides, adding the automatic segmentations to the input of the detection network could increase the detection performance. \n\t\n\tFinally, the proposed approach is not specific to cardiac MRI segmentation. Although data and task specific training would be needed the approach could potentially be applied to other image modalities and segmentation tasks.\n\t\n\t\\section*{Conclusion}\n\t\n\tA method combining automatic segmentation and assessment of segmentation uncertainty in cardiac MR with detection of image regions containing local segmentation failures has been presented. The combined approach, together with simulated and manual correction of detected segmentation failures, increases performance compared to segmentation-only. The proposed method has the potential to increase trustworthiness of current state-of-the-art segmentation methods for cardiac MRIs. \n\t\n\t\\section*{Data and code availability}\n\tAll models were implemented using the PyTorch\\cite{paszke2017automatic} framework and trained on one Nvidia GTX Titan X GPU with \\num{12} GB memory. The code to replicate the study is publicly available at \\href{https:\/\/github.com\/toologicbv\/cardiacSegUncertainty}{https:\/\/github.com\/toologicbv\/cardiacSegUncertainty}.\n\t\n\n\t\\input{main_article.bbl}\n\t\n\t\\section*{Acknowledgements}\n\t\n\tThis study was performed within the DLMedIA program (P15-26) funded by Dutch Technology Foundation with participation of PIE Medical Imaging.\n\t\n\t\\section*{Author contributions statement}\n\t\n\tJ.S., B.D.V. and I.I. 
designed the concept of the study. J.S. conducted the experiments. J.S., B.D.V. and I.I. wrote the manuscript. All authors reviewed the manuscript. \n\t\n\t\\section*{Additional information}\n\t\n\t\\textbf{Competing interests}: The authors declare that they have no competing interests. \n\t\n\t\n\\end{document}","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nEntropy production plays a fundamental role in both classical and quantum thermodynamics: by being related to the second law at a fundamental level, it enables to identify and quantify the irreversibility of physical phenomena \\cite{DeGrootMazur}. This intimate connection has raised a great deal of interest in relation to the theory of open quantum systems, where one is concerned about the dynamics of a system interacting with the infinitely many environmental degrees of freedom \\cite{Breuer-Petruccione}. In this scenario, a plethora of genuine quantum effects is brought about and a general and exhausting theory of entropy production is hitherto missing \\cite{Mauro_thermo2018}.\n\nThe second law of thermodynamics can be expressed in the form of a lower bound to the entropy change $\\Delta S$ undergone by the state of a given system that exchanges an amount of heat $Q$ when interacting with a bath at temperature $T$, that is\n\\begin{align}\n\\Delta S \\ge \\int \\frac{\\delta Q}{T} \\ .\n\\end{align}\nThe strict inequality holds if the process that the system is undergoing is irreversible. One can thus define the entropy production $\\Sigma$ as\n\\begin{align}\n\\label{eq:entropy_prod_def}\n\\Sigma \\equiv \\Delta S - \\int \\frac{\\delta Q}{T} \\ge 0 \\ .\n\\end{align}\nFrom \\Cref{eq:entropy_prod_def}, one can obtain the following expression involving the rates~\\cite{Landi2013, PhysRevLett.118.220601}:\n\\begin{align}\n\\frac{{\\rm{d}} S}{{\\rm{d}} t} = \\Pi(t) - \\Phi(t) \\ ,\n\\end{align}\nwhere $\\Pi(t)$ is the entropy production rate and $\\Phi(t)$ is the entropy flux from the system to the environment: at any time $t$, in addition to the entropy that is flowing from the system to the environment, there might thus be a certain amount of entropy intrinsically produced by the process and quantified by $\\Pi(t)$. \n\nEntropy production is an interesting quantity to monitor in the study of open quantum systems, since this is the context where irreversibility is unavoidably implied. The issue has been addressed in order to obtain an interesting characterisation and measure of the irreversibility of the system dynamics \\cite{Mauro_thermo2018}. In particular, it has been recently shown that the entropy production of an open quantum system can be split into different contributions: one is classically related to population unbalances, while the other is a genuine quantum contribution due to coherences \\cite{Landi:19, POLKOVNIKOV:2011, PhysRevE.99.042105}. This fundamental result holds whenever the system dynamics is either described by a map microscopically derived through the Davies approach or in the case of a finite map encompassing thermal operations~\\cite{Landi:19}. \nMost of these works, though, are solely focused on the Markovian case, when the information is monotonically flowing from the system to the environment. Under this hypothesis, the open dynamics is formally described by a quantum dynamical semigroup; this is essential to mathematically prove that the entropy production is a non-negative quantity \\cite{Spohn1978, Alicki_1979, PhysRevA.68.032105}. 
\nMoreover, whenever the quantum system undergoing evolution is composite (i.e., multipartite) beside system-environment correlations, also inter-system correlations will contribute to the overall entropy production. A full account of the role of such correlations (entanglement above all) on the entropy balance is not known.\n\n\nHowever, a strictly Markovian description of the dynamics does not encompass all possible evolutions. There might be circumstances in which there is no clear separation of time-scales between system and environment: this hampers the application of the Born-Markov approximation \\cite{Breuer-Petruccione}. In some cases, a backflow of information going from the environment to the system is observable, usually interpreted as a signature of a quantum non-Markovian process \\cite{Rivas:14, Breuer:16}. From a thermodynamical perspective the non-negativity of the entropy production rate is not always guaranteed, as there might be intervals of time in which it attains negative values. It has been argued that this should not be interpreted as a violation of the second law of thermodynamics \\cite{Benatti2017}, but it should call for a careful use of the theory, in the sense that -- in the entropy production balance -- the role of the environment cannot be totally neglected. This idea can be justified in terms of the backflow of information that quantum non-Markovianity entails: the system retrieves some of the information that has been previously lost because of its interaction with the surroundings.\n\nIn this paper, we investigate the way initial correlations affect the entropy production rate in an open quantum system by considering the case of non-Markovian Brownian motion. \nWe focus on the case of an uncoupled bipartite system connected to two independent baths. The rationale behind this choice is related to the fact that any interaction between the two oscillators would likely generate, during the evolution, quantum correlations between the two parties. In general, the entanglement dynamically generated through the interaction would be detrimental to the transparency of the picture we would like to deliver, as it would be difficult to isolate the contribution to $\\Pi(t)$ coming from the initial inter-system correlations. To circumvent this issue, in our study we choose a configuration where the inter-system dynamics is trivial (two independent relaxation processes) but the bipartite state is initially correlated. \\textit{De facto}, the entanglement initially present in the state of our ``medium'' acts as an extra knob which can be tuned to change the rate of entropy production, thus steering the thermodynamics of the open system that we consider.\n\nThe paper is organised as follows. In \\Cref{sec:Gauss} we introduce a closed expression of the entropy production rate for a system whose dynamics is described in terms of a differential equation in the Lyapunov equation. In \\Cref{sec:QBM} we introduce the model we would like to study: a system of two uncoupled harmonic oscillators, described by a non-Markovian time-local master equation. We also discuss the spectral properties of the two local reservoirs. This minimal, yet insightful, setting allows us to investigate -- both numerically and analytically -- how different initial states can affect the entropy production rate. 
We investigate this relation in depth in \\Cref{sec:main}, where, by resorting to a useful parametrisation for two-mode entangled states, we focus on the role of the purity of the total two-mode state and on the link between the entanglement we input in the initial state and the resulting entropy production. In \\Cref{sec:Markovian_Limit} we assess whether our results survive when we take the Markovian limit. Finally, in \\Cref{sec:conclusions}, we summarise our findings and draw our conclusions.\n\n\\section{Entropy production rate for Gaussian systems}\n\\label{sec:Gauss}\n\nWe restrict our investigation to the relevant case of Gaussian systems \\cite{Ferraro:05, Serafini:17, Carmichael}. This choice dramatically simplifies the study of our system dynamics, since the evolution equations only involve the finite-dimensional covariance matrix (CM) of the canonically conjugated quadrature operators. \nAccording to our notation, the CM $\\boldsymbol{\\sigma}$, defined as\n\\begin{align}\n\\label{eq:cov_def}\n\\sigma_{ij} =\\braket{\\{X_i, X_j \\}} - 2\\braket{X_i}\\braket{X_j} ,\n\\end{align}\nsatisfies the Lyapunov equation\n\\begin{align}\n\\label{eq:Lyapunov}\n\\dot{\\boldsymbol{\\sigma}} = \\mathbf{A} \\boldsymbol{\\sigma} + \\boldsymbol{\\sigma} {\\mathbf{A}^{\\rm{T}}} + \\mathbf{D} ,\n\\end{align}\nwhere $\\mathbf{A}$ and $\\mathbf{D}$ are the drift and the diffusion matrices, respectively, and $\\mathbf{X} = \\{q_1,p_1,\\ldots, q_N,p_N\\}^{\\rm T}$ is the vector of quadratures for $N$ bosonic modes. \nIn particular, the CM representing a two-mode Gaussian state can always be brought into the standard form \\cite{Ferraro:05, Serafini:17}:\n\\begin{align}\n\\label{eq:sigma_sf}\n\\boldsymbol{\\sigma} = \\begin{pmatrix}\na & 0 & c_{+} & 0 \\\\\n0 & a & 0 & c_{-} \\\\\nc_{+} & 0 & b & 0 \\\\\n0 & c_{-} & 0 & b \n \\end{pmatrix},\n\\end{align}\nwhere the entries $a, \\ b$, and $c_{\\pm}$ are real numbers. Furthermore, a necessary and sufficient condition for the separability of a two-mode Gaussian state is given by the Simon criterion \\cite{PhysRevA.72.032334}:\n\\begin{align}\n\\label{eq:uncertainty}\n\\tilde{\\nu}_{-} \\ge 1, \n\\end{align}\nwhere $\\tilde{\\nu}_{-}$ is the smallest symplectic eigenvalue of the partially transposed CM $\\tilde{\\boldsymbol{\\sigma}} = \\boldsymbol{P\\sigma P}$, where $\\mathbf{P} = {\\rm diag} (1,1,1,-1)$. This bound expresses, in the phase-space language, the Peres-Horodecki PPT (Positive Partial Transpose) criterion for separability \\cite{Peres:96, Horodecki:97, Simon:14, PhysRevA.72.032334}.\n\nTherefore, the smallest symplectic eigenvalue encodes all the information needed to quantify the entanglement of arbitrary two-mode Gaussian states. For example, one can measure the entanglement through the violation of the PPT criterion \\cite{PhysRevLett.90.027901}. Quantitatively, this is given by the logarithmic negativity of a quantum state $\\varrho$, which -- in the continuous-variable formalism -- can be computed through the following formula \\cite{PhysRevA.65.032314, PhysRevA.72.032334}:\n\\begin{align}\n\\label{eq:log_neg}\nE_{\\mathcal{N}} ( \\varrho) = {\\rm max} \\left [ 0, - \\ln{\\tilde{\\nu}_{-}}\\right ] .\n\\end{align}\nGiven the global state $\\varrho$ and the two single-mode states $\\varrho_{i} = {\\rm Tr}_{j \\ne i} \\varrho$, the global $\\mu \\equiv {\\rm Tr} \\varrho^2$ and the local $\\mu_{1,2} \\equiv {\\rm Tr} \\varrho_{1,2}^2$ purities can be used to characterise entanglement in Gaussian systems. 
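As a practical aside, both the Simon criterion of \\Cref{eq:uncertainty} and the logarithmic negativity of \\Cref{eq:log_neg} are easily evaluated numerically from a CM in the standard form of \\Cref{eq:sigma_sf}. The following sketch, written with the quadrature ordering $\\mathbf{X} = \\{q_1,p_1,q_2,p_2\\}^{\\rm T}$ introduced above, is only an illustration of these definitions.
\\begin{verbatim}
# Minimal sketch (Python/NumPy): symplectic eigenvalues and logarithmic
# negativity of a two-mode covariance matrix in standard form.
import numpy as np

OMEGA = np.kron(np.eye(2), np.array([[0.0, 1.0], [-1.0, 0.0]]))  # symplectic form
P = np.diag([1.0, 1.0, 1.0, -1.0])                               # partial transposition

def standard_form_cm(a, b, c_plus, c_minus):
    sigma = np.diag([a, a, b, b]).astype(float)
    sigma[0, 2] = sigma[2, 0] = c_plus
    sigma[1, 3] = sigma[3, 1] = c_minus
    return sigma

def symplectic_eigenvalues(sigma):
    ev = np.abs(np.linalg.eigvals(1j * OMEGA @ sigma))
    return np.sort(ev)[::2]        # each eigenvalue appears twice: [nu_minus, nu_plus]

def logarithmic_negativity(sigma):
    nu_minus = symplectic_eigenvalues(P @ sigma @ P)[0]
    return max(0.0, -np.log(nu_minus))

# check: two-mode squeezed vacuum, a = b = cosh(2r), c_+ = sinh(2r),
# c_- = -sinh(2r), for which E_N = 2r
r = 0.5
sigma = standard_form_cm(np.cosh(2*r), np.cosh(2*r), np.sinh(2*r), -np.sinh(2*r))
print(logarithmic_negativity(sigma))   # ~1.0
\\end{verbatim}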
It has been shown that two different classes of extremal states can be identified: states of maximum negativity for fixed global and local purities (GMEMS) and states of minimum negativity for fixed global and local purities (GLEMS) \\cite{PhysRevLett.92.087901}.\n\nMoreover, the continuous-variable approach provides a remarkable advantage: the open quantum system dynamics can be remapped into a Fokker-Planck equation for the Wigner function of the system. This formal result enables us to carry out our study of the entropy production using a different approach based on phase-space methods, instead of resorting to the usual approach based on the von Neumann entropy. The harmonic nature of the system we would like to consider makes this choice perfectly appropriate to our study and, as we will show in \\Cref{sec:main,sec:Markovian_Limit}, well suited to systematically scrutinise inter-system correlations. Our analysis is thus based on the Wigner entropy production rate \\cite{PhysRevLett.118.220601}, defined as\n\\begin{align}\n\\Pi(t) \\equiv - \\partial_t K(W(t) || W_{\\rm eq}),\n\\end{align}\nwhere $K(W|| W_{\\rm eq})$ is the Wigner relative entropy between the Wigner function $W$ of the system and its expression for the equilibrium state $W_{\\rm eq}$.\n\nFurthermore, we are in a position to use the closed expressions for $\\Phi(t)$ and $\\Pi(t)$ coming from the theory of classical stochastic processes \\cite{Brunelli, Landi2013}. In particular, it has been shown that the entropy production rate $\\Pi(t)$ can be expressed in terms of the matrices $\\mathbf{A},\\mathbf{D},\\boldsymbol{\\sigma}$ as~\\cite{Brunelli}\n\\begin{equation}\n\\label{eq:entropy_prod_rate}\n\\begin{aligned}\n\\Pi(t) &= \\frac{1}{2} {\\rm{Tr}} [ \\boldsymbol{\\sigma}^{-1} \\mathbf{D}] + 2 {\\rm{Tr}} [ \\mathbf{A}^{{\\rm irr}}] \\\\ \n&+ 2 {\\rm{Tr}} [ \\left (\\mathbf{A}^{{\\rm irr}}\\right )^{{\\rm T}} \\mathbf{D}^{-1} \\mathbf{A}^{{\\rm irr}} \\boldsymbol{\\sigma}] \\ ,\n\\end{aligned}\n\\end{equation}\nwhere $\\mathbf{A}^{{\\rm irr}}$ is the irreversible part of the matrix $\\mathbf{A}$, given by\n$\\mathbf{A}^{{\\rm irr}} = \\left ( \\mathbf{A} + \\mathbf{E} \\mathbf{A} \\mathbf{E}^{{\\rm T}}\\right )\/2$,\nwhere $\\mathbf{E} = {\\rm diag}(1,-1,1,-1)$ is the symplectic representation of the time-reversal operator.\n\n\\section{Quantum Brownian motion}\n\\label{sec:QBM}\nWe study the relation between the preparation of the initial state and the entropy production rate by considering a rather general example: quantum Brownian motion \\cite{Breuer-Petruccione, Weiss1999}, also known as the Caldeira-Leggett model \\cite{CaldeiraLeggett1983a}. More specifically, we consider the case of a harmonic oscillator interacting with a bosonic reservoir made of independent harmonic oscillators. The study of such a paradigmatic system has been widely explored in both the Markovian \\cite{Breuer-Petruccione, CaldeiraLeggett1983a} and non-Markovian \\cite{PhysRevD.45.2843} regimes using the influence functional method: in this case, one can trace out the environmental degrees of freedom exactly. \nOne can also solve the dynamics of this model using the open quantum systems formalism \\cite{Breuer-Petruccione, Rivas2012}, where the Brownian particle represents the system, while we identify the bosonic reservoir with the environment. 
\nThe usual approach relies on the following set of assumptions, which are collectively known as the Born-Markov approximation~\\cite{Breuer-Petruccione}:\n\\begin{enumerate}\n\\item The system is weakly coupled to the environment.\n\\item The initial system-environment state is factorised.\n\\item \\label{Born-Markov3} It is possible to introduce a separation of the timescales governing the system dynamics and the decay of the environmental correlations.\n\\end{enumerate}\nHowever, we aim to solve the dynamics in a more general scenario, without resorting to assumption \\ref{Born-Markov3}. We are thus considering the case in which, although the system-environment coupling is weak, non-Markovian effects may still be relevant. Under such conditions, one can derive a time-local master equation for the reduced dynamics of the system \\cite{PhysRevA.67.042108, Int1}.\n\nMore specifically, we consider a system consisting of two quantum harmonic oscillators, each of them interacting with its own local reservoir (see \\Cref{fig:system}). Each of the two reservoirs is modelled as a system of $N$ non-interacting bosonic modes. \nIn order to understand the dependence of the entropy production upon the initial correlations, we choose the simplest case in which the two oscillators are identical, i.e., characterised by the same bare frequency $\\omega_0$ and the same temperature $T$, and they are uncoupled, so that only the initial preparation of the global state may entangle them. The Hamiltonian of the global system thus reads as (we consider units such that $\\hbar = 1$ throughout the paper)\n\\begin{align}\n\\label{eq:Hamiltonian}\nH & = \\sum_{j=1,2} \\omega_0 \\, a_{j}^{\\dagger} a_j + \\sum_{j=1,2} \\sum_k \\omega_{jk} \\, b_{jk}^{\\dagger} b_{jk} \\nonumber \\\\\n& + \\alpha \\sum_{j=1,2} \\sum_k \\left ( \\frac{a_j + a_j^{\\dagger}}{\\sqrt{2}}\\right ) \\left ( g_{jk}^{*} b_{jk} + g_{jk} b_{jk}^{\\dagger} \\right ),\n\\end{align}\nwhere $a_j^{\\dagger}$ ($a_j$) and $b_{jk}^{\\dagger}$ ($b_{jk}$) are the system and reservoir creation (annihilation) operators, respectively, while $\\omega_{1k}$ and $\\omega_{2k}$ are the frequencies of the reservoir modes. The dimensionless constant $\\alpha$ represents the coupling strength between each of the two subsystems and their local bath, while the constants $g_{jk}$ quantify the coupling between the $j$-th oscillator ($j=1,2$) and the $k$-th mode of its respective reservoir. These quantities therefore appear in the definition of the spectral density (SD)\n\\begin{align}\n\\label{eq:SD-def}\nJ_{j} (\\omega) = \\sum_{k} |g_{jk}|^2 \\, \\delta (\\omega - \\omega_{jk}) \\ .\n\\end{align}\nIn what follows, we will consider the case of symmetric reservoirs, i.e., $J_1(\\omega) = J_2(\\omega) \\equiv J(\\omega)$. \n\n\\begin{figure}\n\\centering\n\\includegraphics[width=0.83\\linewidth]{system.pdf}\n\\caption{\\small{System of two uncoupled quantum harmonic oscillators interacting with their local reservoirs. The latter are characterised by the same temperature $T$ and the same spectral properties. 
The two parties of the system are initially correlated and we study their dynamics under the secular approximation, so that non-Markovian effects are present.}}\n\\label{fig:system}\n\\end{figure}\n\nWe would also like to work in the secular approximation by averaging over the fast oscillating terms after tracing out the environment: unlike the rotating-wave approximation, in this limit not all non-Markovian effects are washed out \\cite{PhysRevA.67.042108}.\n\nUnder these assumptions, the dynamics of this system is governed by a time-local master equation that, in the interaction picture, reads as\n\\begin{equation}\n\\label{eq:ME_sec}\n\\begin{aligned}\n\\dot{\\rho} (t) = &- \\frac{\\Delta(t) + \\gamma(t)}{2} \\sum_{j=1,2} \\left ( \\{\\adag_j a_j, \\rho\\} - 2 a_j \\rho \\adag_j \\right ) \\\\\n& - \\frac{\\Delta(t) - \\gamma(t)}{2} \\sum_{j=1,2} \\left (\\{a_j \\adag_j, \\rho \\}- 2 \\adag_j \\rho a_j \\right ),\n\\end{aligned}\n\\end{equation}\nwhere $\\rho$ is the reduced density matrix of the global system, while the time-dependent coefficients $\\Delta(t)$ and $\\gamma(t)$ account for diffusion and dissipation, respectively. \nThe coefficients in \\Cref{eq:ME_sec} have a well-defined physical meaning: $(\\Delta(t) + \\gamma(t))\/2$ is the rate associated with the incoherent loss of excitations from the system, while $(\\Delta(t) - \\gamma(t))\/2$ is the rate of incoherent pumping.\n\n\nThe coefficients $\\Delta(t)$ and $\\gamma(t)$ are ultimately related to the spectral density $J(\\omega)$ as\n\\begin{align}\n\\label{eq:QBM_delta}\n\\Delta(t) \\equiv \\frac{1}{2} \\int_{0}^{t} \\kappa (\\tau) \\cos{(\\omega_0 \\tau)} \\, d \\tau,\n\\end{align}\n\\begin{align}\n\\label{eq:QBM_gamma}\n\\gamma (t) \\equiv \\frac{1}{2} \\int_{0}^{t} \\mu (\\tau) \\sin{(\\omega_0 \\tau)} \\, d \\tau,\n\\end{align}\nwhere $\\kappa(\\tau)$ and $\\mu(\\tau)$ are the noise and dissipation kernels, respectively, which -- assuming reservoirs in thermal equilibrium -- are given by\n\\begin{equation}\n\\label{eq:kappamu}\n\\left[\n\\begin{matrix}\n\\kappa(\\tau)\\\\\n\\mu(\\tau)\n\\end{matrix}\\right]=\n2 \\alpha^2 \\int\\limits_{0}^{\\infty} J (\\omega) \\left[\\begin{matrix}\n\\cos{(\\omega \\tau)} \\coth{\\left(\\frac{\\beta}{2} \\omega \\right)}\\\\\n\\sin(\\omega\\tau)\\end{matrix}\\right] d \\, \\omega,\n\\end{equation}\nwhere $\\beta = (k_B T)^{-1}$ is the inverse temperature and $k_B$ the Boltzmann constant.\n\nMoreover, it can be shown that the dynamics of a harmonic system that is linearly coupled to an environment can be described in terms of a differential equation in the Lyapunov form given by \\Cref{eq:Lyapunov} \\cite{Serafini:17}. We can indeed notice that in \\Cref{eq:Hamiltonian} the interaction between each harmonic oscillator and the local reservoir is expressed by a Hamiltonian that is bilinear (i.e., quadratic) in the system and reservoir creation and annihilation operators. Hamiltonians of this form lead to a master equation as in \\Cref{eq:ME_sec}, where the dissipators are quadratic in the system creation and annihilation operators $\\adag_j, a_j$. Under these conditions, one can recast the dynamical equations in the Lyapunov form in \\Cref{eq:Lyapunov} \\cite{Ferraro:05}, where the matrices $\\mathbf{A}$ and $\\mathbf{D}$ are time-dependent, due to non-Markovianity. 
Indeed, we get $\\mathbf{A} = -\\gamma(t) \\mathbbm{1}_4$ and $\\mathbf{D}= 2 \\Delta(t) \\mathbbm{1}_4$ (here $ \\mathbbm{1}_4$ is the $4 \\times 4$ identity matrix).\n\nThe resulting Lyapunov equation can be analytically solved, giving the following closed expression for the CM at a time $t$:\n\\begin{align}\n\\label{eq:sigma_t}\n\\boldsymbol{\\sigma}(t) = \\boldsymbol{\\sigma}(0) e^{-\\Gamma(t)} + 2 \\Delta_{\\Gamma}(t) \\mathbbm{1}_4,\n\\end{align}\nwith\n\\begin{equation}\n\\begin{aligned}\n\\label{eq:del_gamma}\n\\Gamma(t) \\equiv 2 \\int_{0}^{t} d \\tau \\, \\gamma(\\tau)\\,\\,\n\\text{and}\\,\\,\n\\Delta_{\\Gamma} (t) \\equiv e^{-\\Gamma(t)} \\int_{0}^{t} d \\tau \\, \\Delta(\\tau) e^{\\Gamma(\\tau)}.\n\\end{aligned}\n\\end{equation}\nMoreover, a straightforward calculation allows us to determine the steady state of our two-mode system. By imposing $\\dot{\\boldsymbol{\\sigma}} \\equiv 0$ in \\Cref{eq:Lyapunov}, one obtains that the system relaxes towards a diagonal state with associated CM ${\\boldsymbol{\\sigma}}_{\\mathcal{1}} \\equiv \\Delta(\\infty)\/\\gamma(\\infty) \\mathbbm{1}_4$. By plugging ${\\boldsymbol{\\sigma}}_{\\mathcal{1}}$ in \\Cref{eq:entropy_prod_rate}, we find $\\Pi_\\infty\\equiv\\lim_{t\\rightarrow \\infty}\\Pi(t)=0$, showing a vanishing entropy production at the steady state. This instance can also be justified by noticing that, as $t \\to \\mathcal{1}$, we approach the Markovian limit. Therefore, the Brownian particles, exclusively driven by the interaction with their local thermal baths, will be relaxing toward the canonical Gibbs state with a vanishing associated entropy production rate \\cite{Breuer-Petruccione, Spohn1978, Landi:19}.\n\n\n\\subsection{Choice of the spectral density} \nIn order to obtain a closed expression for the time-dependent rates $\\Delta(t)$ and $\\gamma(t)$, one has to assume a specific form for the spectral density $J(\\omega)$, which -- to generate an irreversible dynamics -- is assumed to be a continuous function of the frequency $\\omega$. In quite a general way, we can express the SD as\n\\begin{align}\nJ(\\omega) = \\eta \\ \\omega_c^{1-\\epsilon} \\ \\omega^\\epsilon \\; f(\\omega, \\omega_c),\n\\end{align}\nwhere $\\epsilon>0$ is known as the Ohmicity parameter and $\\eta > 0$. Depending on the value of $\\epsilon$, the SD is said to be Ohmic ($\\epsilon=1$), super-Ohmic ($\\epsilon>1)$, or sub-Ohmic ($\\epsilon<1$). The function $f(\\omega, \\omega_c)$ represents the SD cut-off and $\\omega_c$ is the cut-off frequency. Such function is introduced so that $J(\\omega)$ vanishes for $\\omega \\to 0$ and $\\omega \\to \\mathcal{1}$. We focus on two different functional forms for $f(\\omega, \\omega_c)$, namely, the Lorentz-Drude cut-off $f(\\omega, \\omega_c) \\equiv \\omega_c^2 \/ (\\omega_c^2 + \\omega^2)$ and the exponential cut-off $f(\\omega, \\omega_c) \\equiv e^{-\\omega\/\\omega_c}$.\nIn particular, we choose an Ohmic SD with a Lorentz-Drude cut-off\n\\begin{align}\n\\label{eq:SD_Ohm_LD}\nJ(\\omega) = \\frac{2 \\omega}{\\pi} \\frac{\\omega_c^2}{\\omega_c^2 + \\omega^2},\n\\end{align}\nwhere $\\eta \\equiv 2 \/ \\pi$. 
Note that this choice is mathematically convenient, but is inconsistent from a physical point of view, as it implies instantaneous dissipation, as acknowledged in Refs.~\\cite{PhysRevD.45.2843, PazZurek2001}.\nWe also consider the following SDs\n\\begin{align}\n\\label{eq:SD_exp}\nJ(\\omega) = \\omega_c^{1-\\epsilon} \\ \\omega^\\epsilon \\; e^{-\\omega\/\\omega_c},\n\\end{align}\nwith $\\epsilon=1, \\, 3, \\,1\/2$ and $\\eta \\equiv 1$, as the coupling strength is already contained in the constant $\\alpha$. In all these cases, the time-dependent coefficients $\\Delta(t)$ and $\\gamma(t)$ can be evaluated analytically~\\cite{PhysRevA.80.062324}. \n\n\n\\section{Initial correlations and entropy production}\n\\label{sec:main}\nWe can now use our system to claim that initial correlations shared by the non-interacting oscillators do play a role in the entropy production rate. We do this by employing a parametrisation that covers different initial preparations~\\cite{PhysRevA.72.032334}. The entries of the matrix given by \\Cref{eq:sigma_sf} can be expressed as follows\n\\begin{equation}\n\\label{eq:sigma_a_b}\na = s +d, \\qquad b = s-d\n\\end{equation}\nand\n\\begin{equation}\n\\begin{aligned}\n\\label{eq:sigma_c}\nc_{\\pm} = \\frac{ \\sqrt{\\left (4d^2 + f \\right )^2{-}4g^2} \\pm \\sqrt{\\left (4s^2 + f \\right )^2{-}4g^2} }{4 \\sqrt{s^2 -d^2}},\n\\end{aligned}\n\\end{equation}\nwith $f = (g^2 +1)(\\lambda -1) \/2- (2d^2+g)(\\lambda +1)$. This allows us to parametrise the CM using four parameters: $s, d, g, \\lambda$. The local purities are controlled by the parameters $s$ and $d$ as $\\mu_1 = (s+d)^{-1}$ and $\\mu_2 = (s-d)^{-1}$, while the global purity is $\\mu=1\/g$. Furthermore, in order to ensure legitimacy of a CM, the following constraints should be fulfilled\n\\begin{align}\n\\label{eq:constraints}\ns \\ge 1, \\quad | d | \\le s -1, \\quad g \\ge 2| d | + 1.\n\\end{align}\nOnce the three aforementioned purities are given, the remaining degree of freedom required to determine the negativities is controlled by the parameter $\\lambda$, which encompasses all the possible entangled two-modes Gaussian states. The two classes of extremal states are obtained upon suitable choice of $\\lambda$. For $\\lambda=-1$ ($\\lambda = +1$) we recover the GLEMS (GMEMS) mentioned in~\\Cref{sec:Gauss}. \n\nTo show a preview of our results, we start with a concrete case shown in \\Cref{fig1}. We prepare the system in a pure ($g=1$) symmetric ($d=0$) state, and investigate the effects of initial correlations on $\\Pi(t)$ by comparing the value taken by this quantity for such an initial preparation with what is obtained by considering the covariance matrix associated with the tensor product of the local states of the oscillators, i.e., by forcefully removing the correlations among them. Non-Markovian effects are clearly visible in the oscillations of the entropy production and lead to negative values of $\\Pi(t)$ in the first part of the evolution. This is in stark contrast with the Markovian case, which entails non-negativity of the entropy production rate. Crucially, we see that, for a fixed initial value of the local energies, the presence of initial correlations enhances the amount of entropy produced at later times, increasing the amplitude of its oscillations. 
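Curves of the kind shown in \\Cref{fig1} can be reproduced, under the stated assumptions, by combining the parametrisation of \\Cref{eq:sigma_a_b,eq:sigma_c} with the closed-form evolution of \\Cref{eq:sigma_t} and the rate formula of \\Cref{eq:entropy_prod_rate}. Continuing the numerical sketches above (which provide \\texttt{standard\\_form\\_cm} and the arrays \\texttt{gamma}, \\texttt{Delta} on the grid \\texttt{taus}), a possible implementation reads:
\\begin{verbatim}
# Minimal sketch (continuation): initial CM from (s, d, g, lambda) and Pi(t).
def cm_from_parameters(s, d, g, lam):
    a, b = s + d, s - d
    f = (g**2 + 1.0) * (lam - 1.0) / 2.0 - (2.0 * d**2 + g) * (lam + 1.0)
    root_d = np.sqrt((4.0 * d**2 + f)**2 - 4.0 * g**2)
    root_s = np.sqrt((4.0 * s**2 + f)**2 - 4.0 * g**2)
    c_plus = (root_d + root_s) / (4.0 * np.sqrt(s**2 - d**2))
    c_minus = (root_d - root_s) / (4.0 * np.sqrt(s**2 - d**2))
    return standard_form_cm(a, b, c_plus, c_minus)

def entropy_production_rate(sigma, gamma_t, delta_t):
    A = -gamma_t * np.eye(4)                     # drift matrix
    D = 2.0 * delta_t * np.eye(4)                # diffusion matrix
    E = np.diag([1.0, -1.0, 1.0, -1.0])          # time reversal in phase space
    A_irr = 0.5 * (A + E @ A @ E.T)
    return (0.5 * np.trace(np.linalg.solve(sigma, D))
            + 2.0 * np.trace(A_irr)
            + 2.0 * np.trace(A_irr.T @ np.linalg.solve(D, A_irr @ sigma)))

sigma0 = cm_from_parameters(s=2.0, d=0.0, g=1.0, lam=1.0)   # pure, symmetric state
Gamma = 2.0 * cumulative_trapezoid(gamma, taus, initial=0.0)
Delta_G = np.exp(-Gamma) * cumulative_trapezoid(Delta * np.exp(Gamma), taus, initial=0.0)

Pi = [entropy_production_rate(sigma0 * np.exp(-G) + 2.0 * DG * np.eye(4), g_t, D_t)
      for G, DG, g_t, D_t in zip(Gamma[1:], Delta_G[1:], gamma[1:], Delta[1:])]
# t = 0 is skipped: there D(0) = 0 and the last term of the rate is ill-defined.
\\end{verbatim}
In this sketch, the uncorrelated preparation of \\Cref{fig1} corresponds to setting the off-diagonal entries $c_{\\pm}$ of the initial CM to zero before propagating it, i.e. to taking the tensor product of the local states.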
We also stress that both curves eventually settle to zero (on a longer timescale than shown in \\Cref{fig1}) as argued in Sec.~\\ref{sec:QBM}.\n\n\\begin{figure}\n\\centering\n\\includegraphics[width=0.83\\linewidth]{correlations.pdf}\n\\caption{\\small{Entropy production rate in a system of two non-interacting oscillators undergoing the non-Markovian dynamics described in~\\Cref{sec:QBM}. We compare the behaviour of the entropy production rate resulting from a process where the system is initialised in a state with no initial correlations (solid line) to what is obtained starting from a correlated state (dashed-dotted line). The latter case refers to the preparation of a system in a pure ($g=1$), symmetric ($d=0$) squeezed state ($\\lambda = 1$). The former situation, instead, corresponds to taking the tensor product of the local states. In this plot we have taken $s=2$ and an Ohmic SD with Lorentz-Drude cut-off. The system parameters are $\\alpha = 0.1 \\omega_0$, $\\omega_c = 0.1 \\omega_0$, $\\beta = 0.1 \\omega_0^{-1}$.}}\n\\label{fig1}\n\\end{figure}\n\nWe now move to a more systematic investigation of $\\Pi(t)$ and its dependence on the specific choice of $s, d, g, \\lambda$.\nIn order to separate the contributions, we first study the behaviour of $\\Pi(t)$ when we vary one of those parameters, while all the others are fixed. We can first rule out the contribution of thermal noise by considering the case in which the reservoirs are in their vacuum state. Such zero-temperature limit can be problematic, as some approaches to the quantification of entropy production fail to apply in this limit~\\cite{PhysRevLett.118.220601}. In contrast, phase-space methods based on the R\\'{e}nyi-$2$-Wigner entropy allow to treat such a limit without pathological behaviours associated with such {\\it zero-temperature catastrophe}~\\cite{PhysRevLett.118.220601}. This formal consistency is preserved also in the case of a system whose dynamics is described by \\Cref{eq:ME_sec}, as shown in \\Cref{fig_vacuum}.\nWe take $T=0$ and choose an Ohmic SD with an exponential cut-off -- given by \\Cref{eq:SD_exp} -- with $\\epsilon=1$. \nThe map describing the dynamics converges to a stationary state characterised by a vanishing $\\Pi(t)$, although the oscillations are damped to zero more slowly, as non-Markovian effects are more persistent in the presence of zero-temperature reservoirs. Furthermore, we notice that the differences between different initial states are most pronounced in correspondence of the first peak: this suggests that the maximum value for the entropy production can be reasonably chosen as an apt figure of merit to distinguish the differences due to state preparation. \nSupported by this evidence, we adopt the value of the first maximum of $\\Pi(t)$ as an indicator of the irreversibility generated in the relaxation dynamics by different initial preparations.\n\nIn the inset of \\Cref{fig_vacuum} we show the logarithmic negativity given by \\Cref{eq:log_neg}. The interaction with zero-temperature reservoirs does not cause detrimental effects to entanglement, as the latter is preserved over time \\cite{PhysRevLett.100.220401,PhysRevA.75.062119, PhysRevA.80.062324}.\n\n\n\\begin{figure}\n\\centering\n\\includegraphics[width=0.83\\linewidth]{Ohm_vacuum_g.pdf}\n\\caption{\\small{Entropy production rates corresponding to independent zero-temperature reservoirs. 
We consider preparations of the initial global state corresponding to different values of parameter $g$ (related to the global purity of the state), while fixing $s=2$, $d=0$, and $\\lambda = 1$. In the inset we plot the logarithmic negativity for the same choice of parameters: entanglement persists over time up to the reach of a steady state of the dynamics. We have taken an Ohmic SD with an exponential cut-off with $\\alpha = 0.1 \\omega_0$, $\\omega_c = 0.1 \\omega_0$.}}\n\\label{fig_vacuum}\n\\end{figure}\n\n\nWe also address the case of finite-temperature reservoirs and an Ohmic SD with Lorentz-Drude cut-off given by \\Cref{eq:SD_Ohm_LD}~\\footnote{the analysis can easily be extended to the exponential cut-off in \\Cref{eq:SD_exp} for the Ohmic ($\\epsilon=1$), super-Ohmic ($\\epsilon=3$) and sub-Ohmic ($\\epsilon=1\/2$) case}. \nWe thus fix $s, d, \\lambda$ and let $g$ vary to explore the role played by the global purity. \\Cref{fig2} shows that, by increasing $g$ -- i.e., by reducing the purity of the global state -- $\\Pi(t)$ decreases: an initial state with larger purity lies far from an equilibrium state at the given temperature of the environment and is associated with a larger degree of initial entanglement [cf. inset of \\Cref{fig2} and the analysis reported in \\Cref{sec:entanglement}], which translates in a larger entropy production rate. Furthermore, our particular choice of the physical parameters leads to the observation of ``entanglement sudden death''~\\cite{PhysRevLett.100.220401, PhysRevA.80.062324}: an initial state with non-null logarithmic negativity completely disentangles in a finite time due to interaction with environment, the disentangling time being shortened by a growing $g$ [cf. inset of \\Cref{fig2}].\n\n\\begin{figure}\n\\centering\n\\includegraphics[width=0.83\\linewidth]{Ohm_LD_g.pdf}\n\\caption{\\small{Entropy production rates corresponding to different preparations of the initial global state. We consider different values of parameter $g$, while taking $s=2$, $d=0$, and $\\lambda = 1$. In the inset we plot the logarithmic negativity for the same choice of the parameters. The dynamics of the system has been simulated using an Ohmic SD with a Lorentz-Drude cut-off. The system parameters are $\\alpha = 0.1 \\omega_0$, $\\omega_c = 0.1 \\omega_0$, $\\beta = 0.1 \\omega_0^{-1}$.}}\n\\label{fig2}\n\\end{figure}\n\nSimilarly, we can bias the local properties of the oscillators by varying $d$ and, in turn, $g= 2d +1$, while keeping $s, \\lambda$ fixed: in \\Cref{fig3} we can observe that, when the global energy is fixed, the asymmetry in the local energies -- and purities $\\mu_1$ and $\\mu_2$ -- reduces the entropy production rate. In the inset we show that, by increasing the asymmetry between the two modes, the entanglement takes less time to die out. These results are consistent with the trends observed in \\Cref{fig2}.\nIndeed, a bias in the local energies would make the reduced state of one of the two oscillators more mixed, and thus less prone to preserve the entanglement that is initially set in the joint harmonic state. Such imbalance would give different weights to the two local dissipation processes, thus establishing an effective preferred local channel for dissipation. 
In turn, this would result in a smaller weight for the contribution of correlations.\n\n\\begin{figure}[b]\n\\centering\n\\includegraphics[width=0.83\\linewidth]{Ohm_LD_d.pdf}\n\\caption{\\small{Entropy production rates corresponding to different values of $d$ and $g=2d+1$ in the parametrisation of the initial state (we have taken $s=4$ and $\\lambda = 1$). In the inset, the behaviour of the logarithmic negativity is shown. In this figure, we take an Ohmic SD with a Lorentz-Drude cut-off and $\\alpha = 0.1 \\omega_0$, $\\omega_c = 0.1 \\omega_0$, $\\beta = 0.1 \\omega_0^{-1}$.}}\n\\label{fig3}\n\\end{figure}\n\nWe conclude our analysis in this Section by exploring the parameter space in a more systematic way by fixing the global energy $s$ and randomly choosing the remaining three parameters, provided that the constraints in \\Cref{eq:constraints} are fulfilled. We see from \\Cref{fig4} that the curve for $\\Pi(t)$ that envelops all the others is the one corresponding to unit global purity, i.e., $g=1$, and $d=0$, $\\lambda=1$ (dashed line). The globally pure state is indeed the furthest possible from a diagonal one: the entropy production rate is correspondingly enhanced as the system relaxes towards the final diagonal state ${\\boldsymbol{\\sigma}}_{\\infty}$. \n\n\\begin{figure}[t]\n\\centering\n\\includegraphics[width=0.83\\linewidth]{Ohm_LD_pure.pdf}\n\\caption{\\small{Entropy production rates $\\Pi(t)$ (absolute value) as a function of time. The initial CM is parametrised by fixing $s$ ($s=10$ in the figure) and randomly choosing $d, g, \\lambda$ such that they are uniformly distributed in the intervals $[0,s-1]$, $[2d+1,d+10]$ and $[-1,1]$ respectively. The figure reports $N_{R} = 1000$ different realisations of the initial state. The dashed line corresponds to the globally pure state ($g=1$) with $d=0$, $\\lambda=1$. All the plots are obtained considering an Ohmic SD with a Lorentz-Drude cut-off and $\\alpha = 0.1 \\omega_0$, $\\omega_c = 0.1 \\omega_0$, $\\beta = 0.1 \\omega_0^{-1}$.}}\n\\label{fig4}\n\\end{figure}\n\n\\subsection{Dependence on the initial entanglement}\n\\label{sec:entanglement}\n\nWe now compare the trends corresponding to different choices of the parameters characterising the initial state. As non-Markovian effects are reflected in the oscillating behaviour of the entropy production, we can contrast cases corresponding to different initial preparations by looking at the maximum and the minimum values $\\Pi_{\\rm max}$ and $\\Pi_{\\rm min}$ that the entropy production rate assumes for each choice of the parameters. Taking into account the evidence previously gathered, in the simulations reported in this Section we fix the minimum value for $g$, i.e., $g=2d+1$, and $\\lambda = +1$, as these choices are significant for the points that we want to put forward. In fact, with such choices we are able to parametrise the initial state with a minimum number of variables, while retaining the significant features that we aim to stress. We can further assume, without loss of generality, $d \\ge 0$: this is simply equivalent to assuming that the first oscillator is initially prepared in a state with a larger degree of mixedness than the second one, i.e., $\\mu_1 \\le \\mu_2$. In this case, we can express $d$ in terms of the smallest symplectic eigenvalue of the partially transposed CM $\\tilde{\\nu}_{-}$.
Therefore, taking into account the constraints given by \\Cref{eq:constraints}, one has that $d = -\\frac{1}{2} ( \\tilde{\\nu}_{-}^2 - 2s \\tilde{\\nu}_{-} +1)$.\nWe already mentioned in \\Cref{sec:QBM} that we are able to derive a closed expression for the CM at any time $t$, given by \\Cref{eq:sigma_t}. We can further notice that the positive and negative peaks in the entropy production rate are attained at short times. We can thus perform a Taylor expansion of $\\Delta(t)$ in \\Cref{eq:del_gamma} to obtain\n\\begin{equation}\n\\begin{aligned}\n\\Delta_{\\Gamma} (t) = \n[1- \\Gamma(t)] \\int_{0}^{t} d \\tau \\Delta(\\tau) + \\int_{0}^{t} d \\tau \\; \\Gamma(\\tau) \\Delta(\\tau) + \\mathcal{O} (\\alpha^4) \\ .\n\\end{aligned}\n\\end{equation}\nAs $\\Delta(t) \\propto \\alpha^2$ and $\\Gamma(t) \\propto \\alpha^2$, we can retain only the first term consistently with the weak coupling approximation we are resorting to. Therefore, we can recast \\Cref{eq:sigma_t} in a form that is more suitable for numerical evaluations, namely\n\\begin{align}\n\\label{eq:sigma_t_wc}\n\\boldsymbol{\\sigma}(t) = \\left [ 1 - \\Gamma(t) \\right ] \\boldsymbol{\\sigma}(0) + \\left [ 2 \\int_{0}^{t} d \\tau \\Delta(\\tau) \\right ] \\mathbbm{1}_4 \\ .\n\\end{align}\nBy substituting \\Cref{eq:sigma_t_wc} into \\Cref{eq:entropy_prod_rate}, we get the analytic expression for the entropy production rate, given in \\Cref{app:a} for the sake of completeness but whose explicit form is not crucial for our analysis here. \n\nIn this way, all the information about the initial state is encoded in the value of $\\tilde{\\nu}_{-}$ while $s$ is fixed. Note that this expression holds for any SD: once we choose the latter, we can determine the time-dependent coefficients $\\Delta(t)$ and $\\gamma(t)$ and thus the entropy production rate $\\Pi(t)$. We can then compute the maximum of the entropy production rate and study the behaviour of $\\Pi_{\\rm max}$ and $\\Pi_{\\rm min}$ as functions of the entanglement negativity $E_{\\mathcal{N}}$ at $t=0$. In \\Cref{fig5} we compare numerical results to the curve obtained by considering the analytical solution discussed above and reported in \\Cref{eq:entropy_prod_rate_general}. Remarkably, we observe a monotonic behaviour of our chosen figure of merit with the initial entanglement negativity: the more entanglement we input at $t=0$ the higher the maximum of the entropy production rate is. We can get to the same conclusion (in absolute value) when we consider the negative peak $\\Pi_{\\rm min}$.\n\n\\begin{figure}\n\\centering\n\\includegraphics[width=0.83\\linewidth]{Ohm_LD_in_neg.pdf}\n\\caption{\\small{Maximum and minimum of the entropy production rate $\\Pi_{\\rm max}$ and $\\Pi_{\\rm min}$ as functions of the entanglement negativity at $t=0$. We take $s=4$, $g=2d +1$, $\\lambda = 1$, while $0 \\le d \\le 3$. We compare the numerical results (triangles and circles) to the analytical solution in \\Cref{eq:entropy_prod_rate_general} (solid line). We have used an Ohmic SD with a Lorentz-Drude cut-off and $\\alpha = 0.1 \\omega_0$, $\\omega_c = 0.1 \\omega_0$, $\\beta = 0.01 \\omega_0^{-1}$.}}\n\\label{fig5}\n\\end{figure}\n\nThe monotonic behavior highlighted above holds regardless of the specific form of the spectral density. 
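The scan over the initial entanglement can thus be set up with very little machinery: the relation between $d$ and $\\tilde{\\nu}_{-}$ quoted above fixes the one-parameter family of initial states (with $g = 2d+1$ and $\\lambda = +1$), and \\Cref{eq:sigma_t_wc} propagates the CM using only the two scalar integrals $\\Gamma(t)$ and $\\int_0^t d\\tau\\,\\Delta(\\tau)$. The Python fragment below sketches this skeleton; {\\tt initial\\_cm} is the helper defined in the earlier sketch, the numerical values of the integrals are placeholders, and the final step of inserting $\\boldsymbol{\\sigma}(t)$ into the entropy production rate of \\Cref{eq:entropy_prod_rate_general} (reported in \\Cref{app:a}) is only indicated in a comment.
\\begin{verbatim}
import numpy as np

def d_from_nu(nu_minus, s):
    """Invert the relation quoted above: d = -(nu^2 - 2 s nu + 1) / 2."""
    return -0.5 * (nu_minus**2 - 2 * s * nu_minus + 1)

def cm_weak_coupling(sigma0, Gamma_t, int_Delta_t):
    """Eq. (sigma_t_wc): sigma(t) ~ (1 - Gamma(t)) sigma(0) + 2 [int_0^t Delta] 1_4."""
    return (1.0 - Gamma_t) * sigma0 + 2.0 * int_Delta_t * np.eye(4)

s = 4.0
for nu in np.linspace(0.3, 0.9, 7):   # scan the initial entanglement through nu_minus
    d = d_from_nu(nu, s)
    sigma0 = cm = initial_cm(s=s, d=d, g=2 * d + 1, lam=1.0)   # family used in Fig. 5
    sigma_t = cm_weak_coupling(sigma0, Gamma_t=0.01, int_Delta_t=0.005)  # placeholders
    # Pi(t) follows by inserting sigma(t) into the entropy-production-rate expression.
\\end{verbatim}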
In \\Cref{fig6} we study $\\Pi_{\\rm max}$ against the smallest symplectic eigenvalue $\\tilde{\\nu}_{-}$ of the partially transposed CM for the various spectral densities we have considered, finding evidence of a power law of the form $\\Pi_{\\rm max} \\propto \\tilde{\\nu}_{-}^{\\delta}$, where the exponent $\\delta$ depends on the reservoir's spectral properties.\n\n\n\\begin{figure}\n\\centering\n\\includegraphics[width=0.83\\linewidth]{Exponents.pdf}\n\\caption{\\small{Plot of $\\Pi_{\\rm max}$ against the smallest symplectic eigenvalue of the partially transposed CM at $t=0$ (logarithmic scale) for different SDs (as stated in the legend). The initial state is prepared using the same parametrisation chosen in \\Cref{fig5}, while we have taken $\\alpha = 0.1 \\omega_0$, $\\omega_c = 0.1 \\omega_0$, $\\beta = 0.1 \\omega_0^{-1}$.}}\n\\label{fig6}\n\\end{figure}\n\n\n\n\\section{Markovian limit}\n\\label{sec:Markovian_Limit}\nWe are now interested in assessing whether the analytical and numerical results gathered so far bear dependence on the non-Markovian character of the dynamics. With this in mind, we explore the Markovian limit, in which the problem is fully amenable to an analytical solution that can also be used to validate our numerical results. This limit is obtained by simply choosing an Ohmic SD with a Lorentz-Drude regularisation -- \\Cref{eq:SD_Ohm_LD} -- and taking the long time and high temperature limits, i.e. $\\omega_0 t \\gg 1$ and $\\beta^{-1} \\gg \\omega_0$. This yields the time-independent coefficients\n\\begin{align}\n\\frac{\\Delta(t) - \\gamma(t)}{2}& \\longrightarrow \\gamma_M \\left ( 2 \\bar{n}(\\omega_0) + 1\\right ) , \\\\\n\\frac{\\Delta(t) + \\gamma(t)}{2}& \\longrightarrow \\gamma_M \\, \\bar{n}(\\omega_0) \\ ,\n\\end{align}\nwhere $\\bar{n}(\\omega_0) = (e^{\\beta \\omega_0} -1)^{-1}$ is the average number of excitations at a given frequency $\\omega_0$, whereas $\\gamma_M \\equiv 2 \\alpha^2 \\omega_c^2 \\omega_0 \/ (\\omega_c^2 + \\omega_0^2)$. \nTherefore, \\Cref{eq:ME_sec} reduces to a master equation describing the dynamics of two uncoupled harmonic oscillators undergoing Markovian dynamics, for which we take $\\mathbf{A} = -\\gamma_M \\mathbbm{1}_4$ and $\\mathbf{D}= 2 \\gamma_M (2 \\bar{n}(\\omega_0) +1) \\mathbbm{1}_4$ in \\Cref{eq:Lyapunov}.\n\nWorking along the same lines as in the non-Markovian case, we study the behavior of $\\Pi(t)$ by suitably choosing the parameters encoding the preparation of the initial state. For example, in \\Cref{fig7} we plot the entropy production rate as a function of time for different values of $g$. The limiting procedure gives back a coarse-grained dynamics monotonically decreasing toward the thermal state, which corresponds to a non-negative entropy production rate, asymptotically vanishing in the limit $t \\to \\infty$. Moreover, the memoryless dynamics leads to a monotonic decrease of the entanglement negativity, as shown in the inset of \\Cref{fig7}.\nIn this case, the globally pure state ($g=1$, dashed line in \\Cref{fig8}) still plays a special role: all the curves corresponding to values of $g$ larger than unity remain below it.\n\n\\begin{figure}\n\\centering\n\\includegraphics[width=0.83\\linewidth]{Markov_g.pdf}\n\\caption{\\small{Entropy production rates corresponding to different preparations of the initial global state in the Markovian limit.
We have taken different values of $g$ (thus varying the global purity of the state of the system) with $s=2$, $d=0$, $\\lambda = 1$, $\\alpha = 0.1 \\omega_0$, $\\omega_c = 0.1 \\omega_0$, $\\beta = 0.01 \\omega_0^{-1}$.}}\n\\label{fig7}\n\\end{figure}\n\n\\begin{figure}\n\\centering\n\\includegraphics[width=0.83\\linewidth]{Markov_pure.pdf}\n\\caption{\\small{Entropy production rate $\\Pi(t)$ plotted against the dimensionless time $\\omega_0 t$ in the Markovian regime. The initial CM is parametrised by setting $s=10$ and randomly sampling (in a uniform manner) $d, g, \\lambda$ from the intervals $[0,s-1]$, $[2d+1,d+10]$ and $[-1,1]$, respectively. We present $N_{R} = 100$ different realisations of the initial state. The dashed line represents the state with unit global purity ($g=1$) and $d=0, \\; \\lambda=1$. For the dynamics, we have taken $\\alpha = 0.1 \\omega_0$, $\\omega_c = 0.1 \\omega_0$, $\\beta = 0.01 \\omega_0^{-1}$.}}\n\\label{fig8}\n\\end{figure}\n\nThe Markovian limit provides a useful comparison in terms of integrated quantities. In this respect, we can study what happens to the entropy production $\\Sigma = \\int_{0}^{+\\infty} \\Pi(t) dt$. Although the non-Markovian dynamics entails the negativity of the entropy production rate in certain intervals of time, the overall entropy production is larger than the quantity we would get in the corresponding Markovian case, as can be noticed in \\Cref{fig9}. \n\n\\begin{figure}\n\\centering\n\\includegraphics[width=0.83\\linewidth]{sigma.pdf}\n\\caption{\\small{Entropy production in the non-Markovian case (solid line) as a function of $E_{\\cal N}(t=0)$, compared with its counterpart achieved in the corresponding Markovian limit (dashed line). We have taken $s=2$, $d=0$, $\\lambda = 1$, $\\alpha = 0.1 \\omega_0$, $\\omega_c = 0.1 \\omega_0$, $\\beta = 0.01 \\omega_0^{-1}$.}}\n\\label{fig9}\n\\end{figure}\n\nFinally, we can study the dependence of the Markovian entropy production rate on the initial entanglement. Note that, in this limit, \\Cref{eq:entropy_prod_rate} yields an analytic expression for the entropy production rate at a generic time $t$, which we write explicitly in \\Cref{eq:entropy_prod_rate_Markov}. From our numerical inspection, we have seen that the entropy production rate is maximum at $t=0$, so that\n\\begin{align}\n\\label{eq:entropy_prod_rate_Markov_max}\n\\Pi_{\\rm max} \\equiv \\Pi(0) & = - 8 \\gamma_M + 4 s \\, \\gamma_M \\tanh\\left (\\frac{\\beta \\omega_0}{2} \\right ) \\nonumber \\\\ \n& +\\frac{4 s \\, \\gamma_M \\coth \\left (\\frac{\\beta \\omega_0}{2} \\right )}{ (2 s -\\tilde{\\nu}_{-}) \\tilde{\\nu}_{-}} \\ .\n\\end{align} \n\nIf we fix the parameter $s$ and plot $\\Pi_{\\rm max}$ against $\\tilde{\\nu}_{-}$, we can contrast analytical and numerical results [cf.~\\Cref{fig10}]. We can draw the same conclusion as in the non-Markovian case: the more entanglement we input, the higher the entropy production rate.\n\n\\begin{figure}\n\\centering\n\\includegraphics[width=0.83\\linewidth]{Markov_nu.pdf}\n\\caption{\\small{Markovian limit: maximum entropy production rate $\\Pi_{\\rm max}$ as a function of the minimum symplectic eigenvalue of the partially transposed CM at $t=0$. We have taken $s=4$, $g=2d +1$, $\\lambda = 1$, $0 \\le d \\le 3$, $\\alpha = 0.1 \\omega_0$, $\\omega_c = 0.1 \\omega_0$, $\\beta = 0.1 \\omega_0^{-1}$.
We compare the curve obtained numerically (triangles) to the analytical trend (solid line) found through \\Cref{eq:entropy_prod_rate_Markov_max}.}}\n\\label{fig10}\n\\end{figure}\n\n\\section{Conclusions}\n\\label{sec:conclusions}\nWe have studied -- both numerically and analytically -- the dependence of the entropy production rate on the initial correlations between the components of a composite system. We have considered two non-interacting oscillators exposed to the effects of local thermal reservoirs. By using a general parametrisation of the initial state of the system, we have systematically explored different physical scenarios. We have established that correlations play an important role in the rate at which entropy is intrinsically produced during the process. Indeed, we have shown that, when the system is prepared in a globally pure state, we should expect a higher entropy production rate. This is the case -- regardless of the spectral density chosen -- for initial entangled states of the oscillators: larger initial entanglement is associated with higher rates of entropy production, which turns out to be a monotonic function of the initial degree of entanglement. \nRemarkably, our analysis takes into full consideration signatures of non-Markovianity in the open system dynamics. \n\nIt would be interesting, and indeed very important, to study how such conclusions are affected by the possible interaction between the constituents of our system, a situation that is currently at the focus of our investigations, as well as non-Gaussian scenarios involving either non-quadratic Hamiltonians or spin-like systems.\n\n\\acknowledgements\nWe thank R. Puebla for insightful discussions and valuable feedback about the work presented in this paper.\nWe acknowledge support from the H2020 Marie Sk{\\l}odowska-Curie COFUND project SPaRK (Grant nr.~754507), the H2020-FETPROACT-2016 HOT (Grant nr.~732894), the H2020-FETOPEN-2018-2020 project TEQ (Grant nr.~766900), the DfE-SFI Investigator Programme (Grant 15\/IA\/2864), COST Action CA15220, the Royal Society Wolfson Research Fellowship (RSWF\\textbackslash R3\\textbackslash183013) and International Exchanges Programme (IEC\\textbackslash R2\\textbackslash192220), the Leverhulme Trust Research Project Grant (Grant nr.~RGP-2018-266). \n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n{H\\,{\\scshape ii}~regions}, that are an outcome of the photoionization of newly forming high-mass stars ($\\textup{M} \\gtrsim 8~M_{\\odot}$), not only play a crucial role in understanding processes involved in high-mass star formation but also reveal the various feedback mechanisms at play on the surrounding ambient interstellar medium (ISM) and the natal molecular cloud. Numerous observational and theoretical studies of {H\\,{\\scshape ii}~regions} have been carried out in the last two decades. However, dedicated multiwavelength studies of star-forming complexes add to the valuable observational database that provide a detailed and often crucial insight into the intricacies involved.\n \nIn this work, we study the massive star-forming region IRAS 17149$-$3916. This region is named RCW 121 in the catalog of $\\rm H{\\alpha}$ emission in the Southern Milky Way \\citep{1960MNRAS.121..103R}. The mid-infrared (MIR) dust bubble, S6, from the catalog of \\citet{2006ApJ...649..759C} is seen to be associated with this complex. IRAS 17149$-$3916 has a bolometric luminosity of $\\sim~9.4 \\times 10^4~L_{\\odot}$ \\citep{2006A&A...447..221B}. 
In literature, several kinematic distance estimates are found for this complex. The near and far kinematic distance estimates range between 1.6 -- 2.2 and 14.5 -- 17.7~kpc, respectively \\citep{{1997MNRAS.291..261W},{2004ApJS..154..553S},{2006A&A...447..221B},{2010ApJ...716.1478W}}. In a recent paper, \\citet{2014MNRAS.437..606T} use the spectral classification of the candidate ionizing star along with near-infrared (NIR) photometry to place this complex at 2~kpc. This is in agreement with the near kinematic distance estimates and conforms to the argument of \\citet{2006A&A...447..221B} for assuming the near kinematic distance of 2.1~kpc based on the height above the Galactic plane. Based on the above discussion, we assume a distance of 2.0~kpc in this work. \n\nThis star-forming region has been observed as part of several radio continuum surveys at 2.65~GHz \\citep{1969AuJPA..11...27B}, 4.85~GHz \\citep{1994ApJS...91..111W}, and more recently at 18 and 22.8~GHz by \\citet{2013A&A...550A..21S}. Using NIR data, \\citet{2006AJ....131..951R} detect a cluster of young massive stars associated with this IRAS source. These authors also suggest IRS-1, the brightest source in the cluster, to be the likely ionizing star of the {H\\,{\\scshape ii}~region} detected in radio wavelengths. \n\\citet{2008A&A...486..807A} probed the $^{12}\\rm CO$ molecular gas in the region. Based on this observation, they conclude that RCW 121 and RCW 122 are possibly unrelated star-forming regions belonging to a giant molecular complex while negating the speculation of these being triggered by Wolf-Rayet stars located in the HM~1 cluster. In the most recent work on this source, \\citet{2014MNRAS.437..606T} re-visit the cluster detected by \\citet{2006AJ....131..951R}. \nThese authors also detect three bright {\\it Herschel} clumps, the positions of which coincide with three of the 1.2~mm clumps of \\citet{2006A&A...447..221B}. \n\nIntroducing the IRAS 17149$-$3916 complex, in Fig.~\\ref{fig_intro}, we show the colour composite image of the associated region. The 5.8\\,$\\rm \\mu m$ IRAC band, which is mostly a dust tracer \\citep{2010ApJ...716.1478W}, displays an almost closed, elliptical ring emission morphology. The extent of the bubble, S6, as estimated by \\citet{2006ApJ...649..759C} traces this. The cold dust component, revealed by the \\textit{Herschel} 350\\,$\\rm \\mu m$ emission, is distributed along the bubble periphery with easily discernible cold dust clumps. Ionized gas sampled in the SuperCosmos $\\rm H{\\alpha}$ survey \\citep{2005MNRAS.362..689P} fills the south-west part of the bubble and extends beyond towards south. MIR emission at 21\\,$\\rm \\mu m$ is localized towards the south-west rim of the bubble. This emission is seen to spatially correlate with the central, bright region of ionized gas (see Fig.~\\ref{radioim})\nand is generally believed to be due to the Ly$\\alpha$ heating of dust grains to temperatures of around 100~K \\citep{1991MNRAS.251..584H}.\n\nIn this paper, we present an in-depth multiwavelength study of this star-forming region. In discussing the investigation carried out, we present the radio observations and the related data reduction procedure followed in Section \\ref{obs-data}. This section also briefly discusses the salient features of the archival data used in the study. Section \\ref{results} presents the results obtained for the associated ionized gas and dust environment. 
Section \\ref{discussion} delves into the detailed discussion and interpretation related to the observed morphology of the ionized gas, investigation of the pillar structures, dust clumps in the realm of triggered star formation, and the nature of the detected dust clumps and cores. Section \\ref{conclusion} highlights the main results obtained in this study. \n \n\\begin{figure*}\n\\centering\n\\includegraphics[scale=0.5]{Fig1.eps}\n\\caption{Colour-composite image towards IRAS 17149$-$3916 with the MSX 21\\,$\\rm \\mu m$ (red), SuperCosmos $\\rm H_{\\alpha}$ (blue) and IRAC 5.8 \\,$\\rm \\mu m$ (green) are shown overlaid with the contours of the \\textit{Herschel} 350\\,$\\rm \\mu m$ map. The contour levels are 600, 700, 1000, 1500, 2000, 2500, 4000, and 6000~MJy\/sr. The ellipse shows the position and the extent of the bubble, S6, as identified by \\citet{2006ApJ...649..759C}.}\n\\label{fig_intro}\n\\end{figure*}\n\n\\section{Observation, data reduction and archival data} \\label{obs-data}\n\\subsection{Radio Continuum Observation } \\label{radio_obs}\nTo probe the ionized component of IRAS 17149$-$3916, we have carried out low-frequency radio continuum observations of the region at 610 and 1280~MHz with the Giant Meterwave Radio Telescope (GMRT), Pune, India. GMRT offers a hybrid configuration of 30 antennas (each of diameter 45~m) arranged in a Y-shaped layout. The three arms contain 18 evenly placed antennas and provide the largest baseline of $\\sim 25$~km. The central $\\rm 1\\,km^2$ region houses 12 antennas that are randomly oriented with shortest possible baseline of $\\sim 100$\\,m.\nA comprehensive overview of GMRT systems can be found in \\citet{1991ASPC...19..376S}.\nThe target was observed with the full array for nearly full-synthesis to maximize the {\\it uv} coverage which is required to detect the extended, diffuse emission. Observations were carried out during August 2014 at 610 and 1280\\,MHz with a bandwidth of 32 MHz over 256 channels. In the full-array mode, the resolution is $\\sim$5 and 2~arcsec and the largest detectable spatial scale is $\\sim$17 and 7~arcmin at 610 and 1280~MHz, respectively. Radio sources 3C 286 and 3C 48 were selected as the primary flux calibrators. The phase calibrator, 1714-252, was observed after each 40-min scan of the target to calibrate the phase and amplitude variations over the full observing run. The details of the GMRT radio observations and constructed radio continuum maps are listed in Table \\ref{radio_tab}.\n\nAstronomical Image Processing System (AIPS) was used to reduce the radio continuum data where we follow the procedure detailed in \\citet{2017MNRAS.472.4750D} and \\citet{2019MNRAS.485.1775I}. The data sets are carefully examined to identify bad data (non-working antennas, bad baselines, RFI, etc.), employing the tasks, {\\tt TVFLG} and {\\tt UVPLT}. Following standard procedure, the gain and bandpass calibration is carried out after flagging bad data. Subsequent to bandpass calibration, channel averaging is done while keeping bandwidth smearing negligible. Continuum maps at both frequencies are generated using the task {\\tt IMAGR}, adopting wide-field imaging technique to account for w-term effects. Several iterations of self-calibration (phase-only) are performed to minimize phase errors and improve the image quality. Subsequently, primary beam correction was carried out for all the generated maps. 
\n\nIn order to obtain a reliable estimate of the flux density, contribution from the Galactic diffuse emission needs to be accounted for. This emission follows a power-law spectrum with a steep negative spectral index of $-2.55$ \\citep{1999A&AS..137....7R} and hence has a significant contribution at the low GMRT frequencies (especially at 610~MHz). This results in the increase in system temperature, which becomes particularly crucial when observing close to the Galactic plane as is the case with our target, IRAS 17149$-$3916. The flux calibrators lie away from the Galactic plane and for such sources at high Galactic latitudes, the Galactic diffuse emission would be negligible. This makes it essential to quantify the system temperature correction to be applied in order to get an accurate estimate of the flux density. Since measurement of the variation in the system temperature of the antennas at GMRT are not automatically implemented during observations, we adopt the commonly used Haslam approximation discussed in \\citet{2015MNRAS.451...59M} and implemented in \\citet{2019MNRAS.485.1775I}.\n\nThe sky temperature, ${T_{\\rm sky}}$, at frequency $\\nu$ for the location of our source is determined using the equation\n\\begin{equation}\nT_{\\rm sky,\\nu} = T_{\\rm sky}^{408}\\bigg(\\frac{\\nu}{408~\\textrm{MHz}}\\bigg)^\\gamma\n\\end{equation}\n\\noindent\nwhere $\\gamma = -2.55$ is the spectral index of the Galactic diffuse emission and $\\it{T_{\\rm sky}^{\\rm 408}}$ \nis the sky temperature at 408~MHz obtained from the all-sky 408~MHz survey of \\citet{1982A&AS...47....1H}. Using this method, we estimate the scaling factors of 2.2 and 1.24 at 610 and 1280~MHz, respectively, which are used to rescale and obtain the final maps.\n\\begin{table}\n\\caption{Details of the GMRT radio continuum observations.} \n\\begin{center}\n\\label{radio_tab}\n\\begin{tabular}{lll}\n\\\\ \n\\hline \\hline\n & 610 MHz & 1280 MHz \\\\\n\\hline\nDate of Obs. & 8 August 2014 & 14 August 2014 \\\\\nFlux Calibrators & 3C286, 3C48 & 3C286, 3C48 \\\\\nPhase Calibrators & 1714-252 & 1714-252 \\\\\nOn-source integration time & $\\sim$4~h & $\\sim$4~h \\\\\nSynth. beam & 10.69\\arcsec$\\times$6.16\\arcsec & 4.41\\arcsec$\\times$2.24\\arcsec \\\\\nPosition angle. (deg) & 7.04 & 6.37 \\\\\n{\\it rms} noise (mJy\/beam) & 0.41 & 0.07 \\\\\nInt. Flux Density (Jy) & $14.1 \\pm 1.4$ & $12.6 \\pm 1.3$ \\\\\n{\\small (integrated within $3\\sigma$ level)} & & \\\\\n\\hline\n\\end{tabular}\n\\end{center}\n\\end{table}\n\n\\subsection{Other available data}\\label{data_archive}\n\\subsubsection{Near-infrared data from 2MASS} \\label{data_2mass}\nNIR ($\\rm JHK_s$) data for point sources around our region of interest has been obtained from the Two Micron All Sky Survey (2MASS) Point Source Catalog (PSC). Resolution of 2MASS images is $\\sim$5.0 arcsec. We select those sources for which the ``read-flag'' values are 1 - 3 to ensure a sample with reliable photometry. This data is used to study the stellar population associated with IRAS 17149$-$3916.\n\n\\subsubsection{Mid-infrared data from Spitzer} \\label{data_spitzer}\n\nThe MIR images towards IRAS 17149$-$3916 are obtained from the archives of the Galactic Legacy Infrared Midplane Survey Extraordinaire (GLIMPSE) survey of the {\\it Spitzer} Space Telescope. We retrieve images towards the region in the four Infrared Array Camera (IRAC; \\citealt{2004ApJS..154...10F}) bands (3.6, 4.5, 5.8, 8.0\\,$\\rm \\mu m$). 
These images have an angular resolution of $\\lesssim 2$ arcsec with a pixel size of $\\sim 0.6$ arcsec. We utilize these images in our study to present the morphology of the MIR emission associated with the region.\n\n\\subsubsection{Far-infrared data from Herschel} \\label{data_herschel}\nThe far-infrared (FIR) data used in this paper have been obtained from the {\\it Herschel} Space Observatory archives. Level 2.5 processed 70 - 500\\,$\\rm \\mu m$ images from Spectral and Photometric Imaging Receiver (SPIRE; \\citealt{2010A&A...518L...3G}) and JScanam images from the Photodetector Array Camera and Spectrometer (PACS; \\citealt{2010A&A...518L...2P}), that were observed as part of the {\\it Herschel} infrared Galactic plane Survey (Hi-GAL; \\citealt{2008A&A...481..345M}), were retrieved. We use the FIR data to examine cold dust emission and investigate the cold dust clumps in the regions.\n\n\\subsubsection{Atacama Large Millimeter Array archival data} \\label{data_alma}\nWe make use of the 1.4\\,mm (Band~6) continuum maps obtained from the archives of {\\it Atacama Large Millimeter Array (ALMA)} to identify the compact dust cores associated with IRAS 17149$-$3916. These observations were made in 2017 (PI: A.Sanchez-Monge \\#2016.1.00191.S) using the extended 12m-Array configuration towards four pointings, S61, S62, S63, and S64. Each of these pointings sample different regions of the IRAS 17149$-$3916 complex. The retrieved maps have an angular resolution of 1.4\\,arcsec $\\times$ 0.9\\,arcsec and a pixel scale of 0.16\\,arcsec.\n\n\\subsubsection{Molecular line data from MALT90 survey}\n\nThe Millimeter Astronomy Legacy Team 90 GHz survey (MALT90; \\citealt{{2011ApJS..197...25F},{2013PASA...30...57J}}) was carried out using the Australia Telescope National Facility (ATNF) Mopra 22-m telescope with an aim to characterize molecular clumps associated with {H\\,{\\scshape ii}~regions}. The survey dataset contains molecular line maps of more than 2000 dense cores lying in the plane of the Galaxy, the corresponding sources of which were taken from the ATLASGAL 870\\,$\\rm \\mu m$ continuum survey. The MALT90 survey covers 16 molecular line transitions lying near 90 GHz with a spectral resolution of $0.11$ km s$^{-1}$ and an angular resolution of 38\\,arcsec. In this study we use the optically thin $\\rm N_2 H^+$ line spectra to carry out the virial analysis of the detected dust clumps associated with this complex.\n\n\\section{Results}\n\\label{results}\n\\subsection{Ionized gas} \\label{ionized}\n\\begin{figure}\n\\centering\n\\includegraphics[scale=0.3]{Fig2a.eps}\n\\includegraphics[scale=0.3]{Fig2b.eps}\n\\caption{{\\it Top}: 610\\,MHz GMRT map of the region associated with IRAS 17149$-$3916 overlaid with the 3$\\sigma$ ($\\sigma = 0.6\\,\\rm mJy\\,beam^{-1}$) level contour in light blue. White contours correspond to the 18\\,GHz emission mapped with ATCA \\citep{2013A&A...550A..21S}, with contour levels starting from 3$\\sigma$ and increasing in steps of 8$\\sigma$ ($\\sigma = 1.2\\times 10^{-2}\\,\\rm Jy\\,beam^{-1}$). {\\it Bottom:} Same as the top panel but for 1280\\,MHz GMRT ($\\sigma = 0.1\\,\\rm mJy\\,beam^{-1}$) and 22.8\\,GHz ATCA map with the contour levels starting from 3$\\sigma$ and increasing in steps of 5$\\sigma$ ($\\sigma = 1.2\\times 10^{-2}\\,\\rm Jy\\,beam^{-1}$). The GMRT maps presented here have been convolved to a resolution of 12\\,arcsec for 610~MHz and 5\\,arcsec for 1280~MHz. 
The beams sizes of the GMRT and ATCA maps are shown in the lower left and right hand corner, respectively.}\n\\label{radioim}\n\\end{figure}\nWe present the first low-frequency radio maps of the region associated with IRAS 17149$-$3916 obtained using the GMRT. The continuum emission mapped at 610 and 1280~MHz are shown in Fig.~\\ref{radioim}. The ionized gas reveals an interesting, large-extent cometary morphology where the head lies in the west direction with a fan-like diffuse tail opening in the east. The tail has a north-south extension of $\\sim$ 6 arcmin. The bright radio emission near the head displays a `horse shoe' shaped structure that opens towards the north-east and mostly traces the south-western portion of the dust ring structure presented in Fig.~\\ref{fig_intro}. This is enveloped within the extended and faint, diffuse emission. In addition, there are several discrete emission peaks seen at both frequencies. The {\\it rms} noise and the integrated flux density values estimated are listed in Table \\ref{radio_tab}. For the latter, the flux density is integrated within the respective 3$\\sigma$ contours. The quoted errors are estimated following the method discussed in \\citet{2018A&A...612A..36D}.\n\nAlso included in the figure are contours showing the high-frequency ATCA observations at 18 and 22.8~GHz, from \\citet{2013A&A...550A..21S}. These snapshot ($\\sim$ 10~mins) ATCA maps sample only the brightest region towards the head and the emission at 18~GHz is seen to be more extended. The GMRT and ATCA maps reveal the presence of several distinct peaks which are likely to be externally ionized density enhanced regions thus suggesting a clumpy medium. \nThe fact that some of these peaks could also be internally ionized by newly formed massive stars cannot be ruled out. Hence, further detailed study is required to understand the nature of these radio peaks. \nA careful search of the SIMBAD\/NED database rules out any possible association with background\/foreground radio sources in the line of sight. Comparing with Fig.~\\ref{fig_intro}, the ionized emission traced in the $\\rm H{\\alpha}$ image agrees well with the GMRT maps. \\citet{2006AJ....131..951R} and \\citet{2014MNRAS.437..606T} present the ionized emission mapped in the Br$\\gamma$ line which is localized to the central part and mostly correlates with the bright emission seen in the GMRT maps. 
\n\nAssuming the radio emission at 1280~MHz to be optically thin and emanating\nfrom a homogeneous, isothermal medium, we derive several physical parameters of the detected {H\\,{\\scshape ii}~region} using the following expressions from \\citet{2016A&A...588A.143S},\\\\\n\n{\\it Lyman continuum photon flux ($N_{\\rm Ly}$):}\n\n\\begin{equation}\n\\begin{split}\n\\left( \\frac{N_{\\rm Ly}}{\\rm s^{-1}}\\right) = 4.771 \\times 10^{42} \\left(\\frac{F_\\nu}{\\rm Jy}\\right) \\left( \\frac{T_{\\rm e}}{\\rm K}\\right)^{-0.45} \\\\\n\\times \\left( \\frac{\\nu}{\\rm GHz}\\right)^{0.1} \\left( \\frac{D}{\\rm pc}\\right) ^{2}\n\\end{split}\n\\label{Lyman_flux}\n\\end{equation}\n\n{\\it Electron number density ($n_{\\rm e}$):}\n\\begin{equation}\n\\begin{split}\n\\left ( \\frac{n_{\\rm e}}{\\rm cm^{-3}}\\right) = 2.576 \\times 10^6 \\left(\\frac{F_\\nu}{\\rm Jy}\\right)^{0.5} \\left( \\frac{T_{\\rm e}}{\\rm K}\\right)^{0.175} \\left( \\frac{\\nu}{\\rm GHz}\\right)^{0.05} \\\\\n\\times \\left( \\frac{\\theta_{\\rm source}}{\\rm arcsec}\\right)^{-1.5} \\left( \\frac{D}{\\rm pc}\\right) ^{-0.5}\n\\end{split}\n\\label{e_no_density}\n\\end{equation}\n\n{\\it Emission measure (EM):}\n\\begin{equation}\n\\begin{split}\n\\left( \\frac{\\rm EM}{\\rm pc\\ cm^{-6}}\\right) = 3.217 \\times 10^7 \\left(\\frac{F_\\nu}{\\rm Jy}\\right) \\left( \\frac{T_{\\rm e}}{\\rm K}\\right)^{0.35} \\\\\n\\times \\left( \\frac{\\nu}{\\rm GHz}\\right)^{0.1} \\left( \\frac{\\theta_{\\rm source}}{\\rm arcsec}\\right)^{-2}\n\\end{split}\n\\label{emission_measure}\n\\end{equation}\nwhere, $F_{\\nu}$ is the integrated flux density of the ionized region, $T_{\\rm e}$ is the electron temperature, $\\nu$ is the frequency, $\\theta_{\\rm source}$ is the angular diameter of the {H\\,{\\scshape ii}~region} and D is the distance to this region. $T_{\\rm e}$ is taken to be 5000~K from the radio recombination line estimates by \\citet{1987A&A...171..261C}. Approximating the emission region to an ellipse, the angular source size ($\\theta_{\\rm source}$) is taken to be the geometric mean of the axes of the ellipse and is estimated to be 6.25~arcmin (3.6~pc). The derived physical parameters are listed in Table \\ref{radio-physical-param}. \n\n\\begin{table}\n\\caption{Derived physical parameters of the {H\\,{\\scshape ii}~region} associated with IRAS 17149$-$3916.}\n\\begin{center}\n\\begin{tabular}{ccccc}\n\\hline\nSize & log $N_{\\rm Ly}$ & EM & $n_{\\rm e}$ & Spectral Type \\\\\n(pc) & & (cm$^{-6}$pc) & (cm$^{-3}$) & \\\\\n\\hline \n\n3.6 & 48.73 & 5.8$\\times$10$^{5}$ & 1.3$\\times$10$^{2}$ & O6.5V -- O7V \\\\ \\hline\n\\end{tabular}\n\\label{radio-physical-param}\n\\end{center}\n\\end{table}\n \nIf we assume a single star to be ionizing the H\\,{\\scshape ii}~region\\ and compare the Lyman-continuum photon flux obtained from the 1280~MHz map with the parameters of O-type stars presented in \\citet[Table 1;][]{2005A&A...436.1049M}, we estimate its spectral type to be O6.5V -- O7V. \nThis can be considered as a lower limit as the emission at 1280~MHz could be optically thick as well. In addition, one needs to account for dust absorption of Lyman continuum photons, which can be significant as shown by many studies \\citep[e.g.][]{2011A&A...525A.132P}. \nThe estimated spectral type suggests a mass range of $\\sim 20 - 40~ M_{\\odot}$ for the ionizing star \\citep{2005A&A...436.1049M}. \n\nTo decipher the nature of the ionized emission, we determine the spectral index, $\\alpha$ which is defined as $F_{\\nu} \\propto \\nu^{\\alpha}$. 
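For reference, the entries of Table \\ref{radio-physical-param} follow directly from \\Cref{Lyman_flux,e_no_density,emission_measure} once the integrated flux density, electron temperature, source size, and distance are fixed. The short Python sketch below evaluates these expressions for the values quoted in the text, and also computes the simple two-point estimate of $\\alpha$ from the matched-{\\it uv} flux densities discussed in the next paragraph; it is an illustrative calculation only.
\\begin{verbatim}
import numpy as np

F_nu, T_e, nu_GHz, D_pc = 12.6, 5000.0, 1.28, 2000.0   # Table 1 and text values
theta_arcsec = 6.25 * 60.0                              # angular source size

# Eq. (2): Lyman continuum photon flux
N_Ly = 4.771e42 * F_nu * T_e**-0.45 * nu_GHz**0.1 * D_pc**2
# Eq. (3): electron number density
n_e = (2.576e6 * F_nu**0.5 * T_e**0.175 * nu_GHz**0.05
       * theta_arcsec**-1.5 * D_pc**-0.5)
# Eq. (4): emission measure
EM = 3.217e7 * F_nu * T_e**0.35 * nu_GHz**0.1 * theta_arcsec**-2
print(f"log N_Ly = {np.log10(N_Ly):.2f}, n_e = {n_e:.0f} cm^-3, EM = {EM:.2e}")

# Two-point spectral index between the matched-uv 610 and 1280 MHz maps
F610, F1280 = 13.7, 12.1
alpha = np.log(F610 / F1280) / np.log(610.0 / 1280.0)   # ~ -0.17
\\end{verbatim}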
The flux density, $F_{\\nu}$, is calculated from the GMRT radio maps. For this, we generate two new radio maps at 610 and 1280~MHz by setting the $\\it uv$ range to a common value of $0.14 - 39.7 $~k$\\lambda$. This ensures similar spatial scales being probed at both frequencies. Further, the beam size for both the maps is set to $\\rm 12~arcsec \\times 12~arcsec$. $F_{\\nu}$ is obtained by integrating within the area defined by the 3$\\sigma$ contour of the new 610~MHz map. \nThe integrated flux density values are estimated to be $\\rm 13.7 \\pm 1.3 \\,Jy\\,$, $\\rm 12.1 \\pm 1.2 \\,Jy\\,$ at 610 and 1280~MHz, respectively. \nThese yield a spectral index of $-0.17 \\pm 0.19$. Similar values are obtained for the central, bright radio emission as well. Within the quoted uncertainties, the average spectral index is fairly consistent with optically thin, free-free emission as expected from {H\\,{\\scshape ii}~regions} which are usually dominated by thermal emission. Spectral index estimate of $-0.1$, consistent with optically thin thermal emission, is also obtained by combining the GMRT flux density values with the available single dish measurements at 2.65~GHz \\citep{1969AuJPA..11...27B} and 4.85~GHz \\citep{1994ApJS...91..111W}.\n\n\\subsection{The dust environment}\n\\label{mir-dust}\n\\begin{figure*}\n\\centering\n\\includegraphics[scale=0.4]{Fig3.eps}\n\\caption{Dust emission associated with IRAS 17149$-$3916 from the {\\it Spitzer}-GLIMPSE and the {\\it Herschel}-Hi-GAL surveys.} \n\\label{iremission}\n\\end{figure*}\nThe warm and the cold dust emission associated with IRAS 17149$-$3916 unravels interesting morphological features like a bubble, pillars, filaments, arcs, and clumps, that strongly suggest this to be a very active star forming complex where the profound radiative and mechanical feedback of massive stars on the surrounding ISM is clearly observed. Fig.~\\ref{iremission} compiles the MIR and FIR emission towards IRAS 17149$-$3916 from the GLIMPSE and Hi-GAL surveys. Apart from the stellar population probed at the shorter wavelengths, the diffuse emission seen in the IRAC-GLIMPSE images would be dominated by the emissions from polycyclic aromatic hydrocarbons (PAHs) excited by the UV photons in the photodissociation regions \\citep{2012A&A...542A..10A,2012ApJ...760..149P}. Close to the hot stars, there would also be significant contribution from thermally emitting warm dust that is heated by stellar radiation \\citep{2008ApJ...681.1341W}. In {H\\,{\\scshape ii}~regions}, emission from dust heated by trapped Ly$\\alpha$ photons \\citep{1991MNRAS.251..584H}, would also be present in these IRAC bands. In the wavelength regime of the 21~$\\rm \\mu m$ MSX, the emission is either associated with stochastically heated Very Small Grains (VSGs) or thermal emission from hot big grains (BGs). As we go to the FIR Hi-GAL maps, cold dust emission dominates and shows up as distinct clumps and filamentary structures where, emission in the 70~$\\rm \\mu m$ band is dominated by the VSGs and the longer wavelength bands like 250~$\\rm \\mu m$ band trace emissions from the BGs \\citep{2012ApJ...760..149P}.\n\n\\subsubsection*{Dust temperature and column density maps}\nTo understand the nature of cold dust emission, we generate the dust temperature and the molecular hydrogen column density maps following the procedure detailed in \\citet{2018A&A...612A..36D} and \\citet{2019MNRAS.485.1775I} and briefly stated here. 
A pixel-by-pixel modified blackbody modelling to the observed spectral energy distribution is carried out. As discussed in these two papers, the 70\\,$\\rm \\mu m$ data is not used because there would be appreciable contribution from the warm dust component hence rendering a single modified blackbody fit inaccurate. Thus, we have the FIR emission at 160, 250, 350, and 500\\,$\\rm \\mu m$ mostly on the Rayleigh-Jeans part to constrain the model given by\n\\begin{equation}\nF_{\\nu}-I_{\\rm bg} = B_{\\nu}(\\nu,T_{\\rm d})~\\Omega~(1-{\\rm e}^{-\\tau_{\\nu}}) \n\\label{MBB-Eqn}\n\\end{equation}\nwhere, $F_{\\nu}$ is the observed flux density, $I_{\\rm bg}$ is the background flux density, $B_{\\nu}(\\nu,T_{\\rm d})$ is the Planck function at the dust temperature $\\rm T_{\\rm d}$, $\\Omega$ is the solid angle subtended by a pixel (all maps are convolved to a common resolution of 35.7~arcsec and regridded to a common pixel size of $\\rm 14~arcsec\\times14~arcsec$). The background flux is estimated from a nearby region relatively free of clumpy and bright emission.\nThe optical depth $\\tau_{\\nu}$ in Eqn. \\ref{MBB-Eqn} can be expressed as\n\\begin{equation}\n\\tau_{\\nu} = \\mu_{\\rm H_2}~ N({\\rm H_2})~ m_{\\rm H}~ \\kappa_{\\nu}\n\\end{equation}\nwhere, $\\mu_{\\rm H_2}$ is the mean molecular weight which is taken as 2.8 \\citep{2008A&A...487..993K}, $N({\\rm H_2})$ is the column density, $m_{\\rm H}$ is the mass of hydrogen atom and $\\kappa_{\\nu}$ ($\\rm cm^2\\,g^{-1}$) is the dust opacity which is given as \\citep{1983QJRAS..24..267H}\n\\begin{equation}\n\\kappa_{\\nu} = 0.1\\left ( \\frac{\\nu}{1200~{\\rm GHz}} \\right )^{\\beta} \n\\label{kappa}\n\\end{equation}\nHere, $\\beta$ denotes the dust emissivity spectral index and a typical value of 2 estimated in several star regions is assumed. \nIn fitting the modified blackbody to the observed flux densities, $N({\\rm H_2})$ and $T_{\\rm d}$ are kept as free parameters. \n\\begin{figure*}\n\\centering\n\\includegraphics[scale=0.32]{Fig4a.eps}\n\\includegraphics[scale=0.32]{Fig4b.eps}\n\\includegraphics[scale=0.32]{Fig4c.eps}\n\\caption{Column density (a), dust temperature (b), and chi-square ($\\chi^{2}$) (c) maps of the region associated with the H\\,{\\scshape ii}\\ region. The 610~MHz ionized emission overlaid as magenta and gray contours on column density and dust temperature maps, respectively. The contour levels are 0.01, 0.03, 0.06, 0.12, and 0.2~Jy\/beam. The retrieved clump apertures are shown on the column density (in blue) and dust temperature (in black) maps following the nomenclature as discussed in the text. The black contour in (a) shows the area integrated for estimating $n_0$ (refer Section \\ref{clumps-CC}.)}\n\\label{cdtchi}\n\\end{figure*}\nThe column density and dust temperature maps generated are shown in Fig.~\\ref{cdtchi}. The goodness of the fits for each pixel can be seen in the $\\chi^{2}$ map where the maximum $\\chi^{2}$ value is seen to be $\\sim 8$.\nThe column density map presents a triangular morphology with three distinct, bright and dense regions.\nA network of broad filaments are also seen in the map. The dust temperature map is relatively patchy with regions of higher temperature within the radio nebula. A region with warm temperature is seen to be located towards the south-east of IRAS 17149$-$3916, the signature of which can be seen in the {\\it Herschel} maps shown in Fig.~\\ref{iremission}. The western side of the {H\\,{\\scshape ii}~region} shows comparatively cold temperatures. 
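A single-pixel version of this fitting procedure is sketched below in Python, using the same fixed quantities adopted here ($\\beta = 2$, $\\mu_{\\rm H_2} = 2.8$, and the opacity normalisation of \\Cref{kappa}); the background-subtracted flux densities entering the fit are placeholder numbers for illustration, not values measured from the maps.
\\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

h, k_B, c = 6.626e-27, 1.381e-16, 2.998e10      # cgs constants
m_H, mu_H2, beta = 1.674e-24, 2.8, 2.0

def greybody(nu, N_H2, T_d, omega_pix):
    """Eq. (8): background-subtracted flux = B_nu(T_d) * Omega * (1 - exp(-tau))."""
    kappa = 0.1 * (nu / 1.2e12)**beta                      # Eq. (10), cm^2 g^-1
    tau = mu_H2 * N_H2 * m_H * kappa
    B_nu = 2 * h * nu**3 / c**2 / (np.exp(h * nu / (k_B * T_d)) - 1.0)
    return B_nu * omega_pix * (1.0 - np.exp(-tau))

# One example pixel: 160-500 micron flux densities (placeholder values, Jy -> cgs)
wavelengths_um = np.array([160.0, 250.0, 350.0, 500.0])
nu = c / (wavelengths_um * 1e-4)
flux_cgs = np.array([9.0, 6.0, 3.0, 1.2]) * 1e-23
omega_pix = (14.0 / 206265.0)**2                           # 14" x 14" pixel in sr

popt, _ = curve_fit(lambda nu, N, T: greybody(nu, N, T, omega_pix),
                    nu, flux_cgs, p0=[1e22, 20.0])
N_H2_fit, T_d_fit = popt                                   # free parameters of the fit
\\end{verbatim}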
Furthermore, the filamentary features seen in the column density map are mostly revealed as distinct low temperature lanes. \n\n\\subsubsection*{Dust clumps and cores}\\label{dust_clump}\nThe FIR and the column density maps show the presence of dust clumps. These clumps are identified using the \\textit{Herschel} 350\\,$\\rm \\mu m$ map and the {\\it Dendrogram}\\footnote{\\url{https:\/\/dendrograms.readthedocs.io\/en\/stable\/}} algorithm. Using this algorithm, we identify the smallest structures, called the `leaves', in the 350\\,$\\rm \\mu m$ map, which in this case are the cold dust clumps. The key input parameters for the identification of the clumps are (1) {\\it min\\_value} = $3\\sigma$ and (2) {\\it min\\_delta} = $\\sigma$, where $\\rm \\sigma (= 191.2\\,MJy\\,sr^{-1})$ is the {\\it rms} level of the 350\\,$\\rm \\mu m$ map. An additional parameter, {\\it min\\_pix = N}, is also used, which is the minimum number of pixels required for a `leaf' to be considered an independent entity. To ensure that the clumps are resolved, the value of {\\it N} is chosen to be 7\\,pixels, the beam area of the 350\\,$\\rm \\mu m$ map. Setting these parameters, we extract three cold dust clumps. The central panel of Fig.~\\ref{dustclumps} shows the 350\\,$\\rm \\mu m$ map overlaid with the retrieved apertures of the three detected clumps labelled 1, 2, and 3. The physical parameters of the detected clumps are listed in Table \\ref{clump-param}. These are derived from the 350\\,$\\rm \\mu m$, column density and dust temperature maps. The peak positions are determined from the 350\\,$\\rm \\mu m$ map. The clump radii, $r=(A\/\\pi)^{0.5}$, where $A$ is the enclosed area within the retrieved clump apertures. \n\\begin{figure*}\n\\centering\n\\includegraphics[scale=0.5]{Fig5.eps}\n\\caption{ The central panel is the \\textit{Herschel} 350\\,$\\rm \\mu m$ map overlaid with the {\\it Dendrogram} retrieved clump apertures in black. White circles represent four pointings of the {\\it ALMA} observations, S61, S62, S63, and S64. The 1.4\\,mm {\\it ALMA} map towards each pointing is placed at the corners of the 350\\,$\\rm \\mu m$ image. The apertures of the dense cores extracted using the {\\it Dendrogram} algorithm are overlaid and labelled on these maps. The beam sizes of the 1.4\\,mm maps are given towards the lower left hand corner of each image.}\n\\label{dustclumps}\n\\end{figure*}\n\\begin{table*}\n\\caption{Physical parameters of the detected dust clumps associated with IRAS 17149$-$3916}\n\\begin{center}\n\\centering \n\\scalebox{0.85}{\n\\begin{tabular}{ccccccccccc}\n\\hline\nClump & \\multicolumn{2}{c}{Peak position} & Mean $T_{\\rm d}$ & $\\Sigma N({\\rm H_2})$ & Radius & Mass & Mean $N({\\rm H_2})$ & No. 
density $(n_{\\textup{H}_{2}})$ & \\multicolumn{1}{l}{$M_{\\rm vir}$} & \\multicolumn{1}{l}{$\\alpha_{\\rm vir}$} \\\\\n & RA (J2000) & DEC (J2000) & (K) & $(\\times 10^{23}$ cm$^{-2})$ & (pc) & ($M_{\\odot}$) & $(\\times 10^{22}$ cm$^{-2}$) & $(\\times 10^{4}$ cm$^{-3}$) & ($M_{\\odot}$) & \\multicolumn{1}{l}{} \\\\ \\hline\n1 & 17 18 24.32 & -39 19 28.04 & 27.8 & 6.1 & 0.3 & 250 & 5.5 & 5.0 & 452 & 1.75 \\\\\n2 & 17 18 18.77 & -39 19 04.54 & 25.0 & 7.1 & 0.3 & 292 & 5.4 & 5.1 & 600 & 2.15 \\\\\n3 & 17 18 22.57 & -39 18 36.44 & 27.3 & 2.8 & 0.2 & 117 & 3.6 & 4.6 & 450 & 4.08 \\\\ \\hline\n\\end{tabular}}\n\\label{clump-param}\n\\end{center}\n\\end{table*}\nTo estimate the masses of the detected clumps, we utilize the column density map and use the following expression\n\\begin{equation}\nM_{\\rm clump} =\\mu_{\\rm H_2}~ \\Sigma N({\\rm H_2})~A_{\\rm pixel}~m_{\\rm H}\n\\label{mass_eqn}\n\\end{equation}\nwhere, $\\mu_{\\rm H_2}$ is the mean molecular weight taken as 2.8, $\\Sigma N({\\rm H_2})$ is the integrated column density over the clump area, $A_{\\rm pixel}$ is the pixel area in $\\rm cm^2$ and $m_{\\rm H}$ is the mass of hydrogen atom. The number density is determined using the expression $n_{\\rm H_2} = 3\\,N({\\rm H_2})\/4r$. The peak positions of the Clumps 1, 2, and 3 agree fairly well with Clumps III, I, and II, respectively, detected by \\citet{2014MNRAS.437..606T} using {\\it Herschel} maps. Comparing the masses presented in Table \\ref{clump-param}, the estimates given in \\citet{2014MNRAS.438.2716T} are higher by a factor of 8, 1.5, and 3 for the Clumps 1, 2, and 3, respectively. \n\n{\\it ALMA} continuum data enables investigation of the detected clumps at high resolution. In Fig.~\\ref{dustclumps}, we show {\\it ALMA} dust continuum maps at 1.4\\,mm towards the four pointings marked and labelled as S61, S62, S63, and S64 where the first three pointings lie mostly within three clumps and S64 lies outside towards the east. Using the same {\\it Dendrogram} algorithm, several cores are identified. The key input parameters to the {\\it Dendrogram} algorithm are, {\\it min\\_value} = $3\\sigma$, {\\it min\\_delta} = $\\sigma$, and {\\it min\\_pix = N}, where $\\sigma$ is the {\\it rms} level and $N(=60)$ is the beam area of the 1.4\\,mm maps. In order to avoid detection of spurious cores, we only retain those with peak flux density greater than $5\\sigma$. Applying these constraints, seven cores are identified towards S61 and S62 each, one towards S63 and two towards S64. \n\nTo further study these dense cores, we estimate their physical parameters. Adopting the formalism described by \\citet{2018ApJ...853..160C} and assuming the emission at 1.4\\,mm to be optically thin, the masses are estimated using the following equation\n\n\\begin{eqnarray}\n M & = &\n \\displaystyle 0.0417 \\, M_{\\odot}\n \\left( {\\textrm e}^{0.533 (\\lambda \/ {1.3\\, \\textrm {mm}})^{-1}\n (T \/ {20\\, \\textrm {K}})^{-1}} - 1 \\right) \\left( \\frac{F_{\\nu}}{\\textrm {mJy}} \\right) \\nonumber \\\\\n & & \\displaystyle\n \\times \\left( \\frac{\\kappa_{\\nu}}{0.00638\\,\\textrm{cm}^2\\,\\textrm{g}^{-1}} \\right)^{-1}\n \\left( \\frac{d}{\\textrm {kpc}} \\right)^2\n \\left( \\frac{\\lambda}{1.3\\, \\textrm {mm}} \\right)^{3} \n \\label{core_mass}\n\\end{eqnarray}\nHere, $F_\\nu$ is the integrated flux density of each core, $d$ is the distance to the source and $\\lambda$ is the wavelength. 
Opacity, $\\kappa_\\nu$ is estimated using equation~\\ref{kappa} with the dust emissivity spectral index $\\beta$ fixed at 2.0. For cores detected in S61, S62, and S63, mean dust temperatures of the respective clumps are taken. For S64, the mean dust temperature for the region covering the S64 pointing is used. The effective radius, $r=(A\/\\pi)^{0.5}$, of each core is also estimated where $A$ is area enclosed within each core aperture.\nThe identified cores with the retrieved apertures are shown in Fig.~\\ref{dustclumps} and the estimated physical parameters are list in Table \\ref{alma-cores}. The uncertainties related to the missing flux effect are not taken into account in deriving these parameters since it may not be significant given that the largest recoverable scales is quoted to be $\\sim$10~arcsec for this ALMA dataset which is appreciably larger than the typical sizes of the detected cores. Barring the largest detected core which has an angular size of $\\sim$7~arcsec, the average size of the cores is 3~arcsec.\n\\begin{table*}\n\\caption{Parameters of the detected dust cores extracted associated with IRAS 17149$-$3916}\n\\begin{center}\n\\centering \n\\begin{tabular}{ccccccc}\n\\hline\n & Core & \\multicolumn{2}{c}{Peak position} & Flux density & Radius & Mass \\\\\n & & RA (J2000) & DEC (J2000) & (mJy) & (pc) & ($M_\\odot$) \\\\\n\\hline\nS61 & 1 & 17 18 23.03 & -39 19 12.95 & 9.7 & 0.01 & 1.7 \\\\\n & 2 & 17 18 23.75 & -39 19 18.47 & 62.2 & 0.02 & 10.7 \\\\\n & 3 & 17 18 23.45 & -39 19 19.53 & 32.8 & 0.02 & 5.7 \\\\\n & 4 & 17 18 24.17 & -39 19 22.35 & 11.2 & 0.01 & 1.9 \\\\\n & 5 & 17 18 24.34 & -39 19 25.09 & 106.7 & 0.02 & 18.4 \\\\\n & 6 & 17 18 24.76 & -39 19 19.50 & 4.5 & 0.01 & 0.8 \\\\\n & 7 & 17 18 24.93 & -39 19 28.12 & 31.8 & 0.02 & 5.5 \\\\\n\\hline \nS62 & 1 & 17 18 20.38 & -39 18 54.32 & 46.0 & 0.02 & 9.4 \\\\\n & 2 & 17 18 20.09 & -39 18 54.43 & 9.8 & 0.01 & 2.0 \\\\\n & 3 & 17 18 19.64 & -39 18 56.07 & 7.9 & 0.02 & 1.6 \\\\\n & 4 & 17 18 19.63 & -39 19 04.70 & 2.1 & 0.01 & 0.4 \\\\\n & 5 & 17 18 18.82 & -39 19 04.94 & 3.1 & 0.01 & 0.6 \\\\\n & 6 & 17 18 18.35 & -39 19 01.65 & 19.2 & 0.02 & 3.9 \\\\\n & 7 & 17 18 18.42 & -39 19 05.37 & 8.2 & 0.01 & 1.7 \\\\\n\\hline \nS63 & 1 & 17 18 23.48 & -39 18 40.70 & 357.7 & 0.03 & 62.3 \\\\\n\\hline \nS64 & 1 & 17 18 29.98 & -39 19 11.22 & 7.0 & 0.02 & 1.1 \\\\\n & 2 & 17 18 29.10 & -39 18 54.11 & 1.9 & 0.01 & 0.3 \\\\\n\n\\hline\n\\end{tabular}\n\\label{alma-cores}\n\\end{center}\n\\end{table*}\n\n\\subsection{Molecular line observation of identified clumps}\nWe use the optically thin $\\rm N_2 H^+$ line emission to determine the $V_{\\textup{LSR}}$ and the line width, $=\\Delta V$ of the clumps. The line spectra are extracted by integrating over the retrieved apertures of the clumps as shown in Fig.~\\ref{n2hplus_spectra}. The $\\rm N_2 H^+$ spectra have seven hyperfine structures and the {\\tt hfs} method of {\\tt CLASS90} is used to fit the observed spectra. The line parameters retrieved from spectra are listed in Table \\ref{n2hplus_fit_parameters}. 
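Stepping back to the dust-based estimates, the clump masses of Table \\ref{clump-param} and the core masses of Table \\ref{alma-cores} follow from \\Cref{mass_eqn} and \\Cref{core_mass}, respectively. The Python sketch below evaluates both for representative table entries (Clump 1 and the brightest core in S63), assuming the adopted distance of 2~kpc and $\\beta = 2$; it is meant only to make the unit conventions explicit, not to reproduce the tabulated values to better than the rounding of the inputs.
\\begin{verbatim}
import numpy as np

m_H, mu_H2, M_sun = 1.674e-24, 2.8, 1.989e33        # cgs
d_kpc = 2.0

# Eq. (9): clump mass from the column density summed over the clump aperture
pixel_cm = 14.0 / 206265.0 * d_kpc * 1e3 * 3.086e18   # 14 arcsec pixel at 2 kpc
sum_NH2 = 6.1e23                                       # Clump 1, cm^-2 (Table 3)
M_clump = mu_H2 * sum_NH2 * pixel_cm**2 * m_H / M_sun  # ~ 2.5e2 M_sun

# Eq. (10): core mass from the 1.4 mm integrated flux (optically thin dust)
def core_mass(F_mJy, T_d, lam_mm=1.4, beta=2.0):
    kappa = 0.1 * (2.998e10 / (lam_mm * 0.1) / 1.2e12)**beta   # cm^2 g^-1
    x = lam_mm / 1.3
    return (0.0417 * (np.exp(0.533 / x / (T_d / 20.0)) - 1.0) * F_mJy
            * (kappa / 0.00638)**-1 * d_kpc**2 * x**3)

M_core = core_mass(F_mJy=357.7, T_d=27.3)   # brightest core (S63), of order 60 M_sun
\\end{verbatim}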
The $V_{\\textup{LSR}}$ determined agrees well with the value of $\\rm 13.7\\, km s^{-1}$ obtained from $\\rm CS(2-1)$ observations of the region by \\citet{1996A&AS..115...81B} \n\\begin{figure*}\n\\centering\n\\includegraphics[scale=0.2]{Fig6a.eps}\n\\includegraphics[scale=0.2]{Fig6b.eps}\n\\includegraphics[scale=0.2]{Fig6c.eps}\n\\caption{Spectra of the optically thin N$_{2}$H$^{+}$ line emission extracted over the three identified clumps associated with IRAS 17149$-$3916. The red curves are the {\\tt hfs} fit to the spectra. The estimated $V_{\\textup{LSR}}$ is denoted by the dashed blue line and the location of the hyperfine components by magenta lines.}\n\\label{n2hplus_spectra}\n\\end{figure*}\n\\begin{table}\n\\centering\n\\caption{The retrieved $\\rm N_2 H^+$ line parameters, $V_{\\textup{LSR}}$, $\\Delta V$, $T_{\\rm mb}$ and $\\int T_{\\textup{mb}} {\\rm dV}$ for the identified clumps associated with IRAS 17149$-$3916.}\n\\begin{tabular}{ccccc}\n\\hline\nClump & $V_{\\textup{LSR}}$ & $\\Delta V$ & $T_{\\rm mb}$ & $\\int T_{\\textup{mb}} {\\rm dV}$ \\\\\n & (km s$^{-1}$) & (km s$^{-1}$) & (K) & (K km s$^{-1}$) \\\\ \\hline\n1 & -13.0 & 3.2 & 1.5 & 5.4 \\\\\n2 & -13.8 & 3.7 & 1.6 & 6.8 \\\\\n3 & -13.8 & 3.9 & 0.8 & 3.1 \\\\ \\hline\n\\end{tabular}\n\\label{n2hplus_fit_parameters}\n\\end{table}\n\n\\section{Discussion}\n\\label{discussion}\n\\subsection{Understanding the morphology of the ionized gas}\nAs seen, the GMRT maps reveal the large extent and prominent cometary morphology of the {H\\,{\\scshape ii}~region} associated with IRAS 17149$-$3916 which was earlier discussed as a roundish {H\\,{\\scshape ii}~region} by \\citet{2014MNRAS.437..606T}. In this section, we attempt to investigate the likely mechanism for this observed morphology. \nThe widely accepted models to explain the formation of cometary {H\\,{\\scshape ii}~regions} are (1) the bow-shock model (e.g. \\citealt{1992ApJ...394..534V}), (2) the champagne-flow model (e.g. \\citealt{1979A&A....71...59T}), and (3) the mass loading model (e.g. \\citealt{1997ApJ...476..166W}). However, subsequent studies, like those conducted by \\citet{2003ApJ...596..344C} and \\citet{2006ApJS..165..283A}, find \nthe `hybrid' models, that are a combination of these, to better represent the observed morphologies. \n\nThe bow-shock model assumes a wind-blowing, massive star moving supersonically through a dense cloud. Whereas, the champagne-flow model invokes a steep density gradient encountered by the expanding {H\\,{\\scshape ii}~region} around a newly formed stationary, massive star possibly located at the edge of a clump. Here, the ionized gas flows out towards regions of minimum density. In comparison, the model proposed by \\citet{1997ApJ...476..166W} invokes the idea of strong stellar winds mass loading from the clumpy molecular cloud and the cometary structure unfolds when a gradient in the geometrical distribution of mass loading centres are introduced. In this model the massive, young star is considered to be stationary as in the case of the champagne-flow model. \n\nWhile observation of ionized gas kinematics is required to understand the origin of the observed morphology, in the discussion that follows, we discuss a few aspects based on the\nradio, column density, and FIR maps of the region associated with IRAS 17149$-$3916 along with the identification of E4 as the likely ionizing star (refer Section \\ref{ionizing_star}). 
Following the simple analytic expressions discussed in \\citet{2018A&A...612A..36D}, we derive a few shock parameters to probe the bow-shock model. Taking the spectral type of E4 to be O6.5V -- O7V as estimated from the radio flux density, and assuming it to move at a typical speed of $\\rm 10~km\/s$ through the molecular cloud, we calculate the `stand-off' distance to range between $\\rm 2.6~arcsec (0.02~pc) - 3.1~arcsec (0.03~pc)$. This is defined as the distance from the star at which the shock occurs and where the momentum flux of the stellar wind equals the ram pressure of the surrounding cloud. The theoretically estimated value is significantly less than the observed distance of $\\sim \\rm 84~arcsec (0.8~pc)$ between E4 and the cometary head. Taking viewing angle into consideration would decrease the theoretical estimate thus widening the disparity further. Based on the above estimations, it is unlikely that the bow-shock model would explain the cometary morphology. To confirm further, we determine the trapping parameter which is the inverse of the ionization fraction. As the ionizing star moves supersonically through the cloud, the swept off dense shells trap the {H\\,{\\scshape ii}~region} within it and its expansion is eventually inhibited by the ram pressure. Trapping becomes more significant when recombinations far exceed the ionizing photons. Studies of a large number of cometary {H\\,{\\scshape ii}~regions} show the trapping parameter to be much greater than unity \\citep{1991ApJ...369..395M}. For IRAS 17149$-$3916, we estimate the value to lie in the range $3.2 - 3.5$ which indicates either weak or no bow shock. Similar interpretations are presented in \\citet{2018A&A...612A..36D} and \\citet{2016MNRAS.456.2425V}. The trapping parameters obtained by these authors lie in the range 1.2 -- 4.3.\n\nTo investigate the other models, namely the champagne-flow and clumpy\/mass loading wind models, we compare the observed spatial distribution of the dust component and the ionized gas. The FIR and column density maps presented in Section \\ref{mir-dust} show a complex morphology of pillars, arcs, filaments in the region with detected massive clumps towards the cometary head. The steep density gradient towards the cometary head is evident. Without the ionized and molecular gas kinematics information, it is difficult to invoke the champagne-flow model. However, the maps do show the presence of clumps towards the cometary head which could act as potential mass loading centres and thus support the clumpy cloud model. Further observations and modelling are essential before one can completely understand the mechanisms at work.\n\n\\subsection{Ionizing massive star(s)}\\label{ionizing_star}\n\\citet{2006AJ....131..951R} have studied the associated stellar population towards IRAS 17149$-$3916 in the NIR. Using the colour-magnitude diagram, these authors show the presence of a cluster of massive stars within the infrared nebula and suggest IRS-1 to be the likely ionizing source. In a later study, \\citet{2014MNRAS.437..606T} have supported this view citing the spectroscopic classification of IRS-1 as O5 -- O6 by \\citet{2005A&A...440..121B} and consistency with the Lyman continuum photon flux estimated from the radio observations by \\citet{2013A&A...550A..21S}. 
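For reference, the angular and physical scales quoted in this discussion are related through the small-angle approximation at the adopted source distance; the short Python sketch below illustrates the conversion, assuming a distance of $\\sim$2~kpc (a value inferred here solely from the quoted correspondence of 84~arcsec with $\\sim$0.8~pc).
\\begin{verbatim}
# Small-angle conversion between angular and projected physical scales.
# The distance below is an assumed value, consistent with the quoted
# 84 arcsec ~ 0.8 pc correspondence.
d_pc = 2.0e3                      # assumed source distance in pc

def arcsec_to_pc(theta_arcsec):
    return theta_arcsec * d_pc / 206265.0

for theta in (2.6, 3.1, 84.0):    # arcsec values quoted in the text
    print(f"{theta:5.1f} arcsec -> {arcsec_to_pc(theta):.3f} pc")
\\end{verbatim}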
\n\\begin{figure*}\n\\centering\n\\includegraphics[width=9cm,height=6.6cm]{Fig7a.eps}\n\\includegraphics[width=8cm,height=6cm]{Fig7b.eps}\n\\includegraphics[width=9cm,height=7cm]{Fig7c.eps}\n\\caption{(a) J vs J-H colour magnitude diagram of the sources (cyan dots) associated with IRAS 17149$-$3916 and located within the 3$\\sigma$ radio contour. The nearly vertical solid lines represent the ZAMS loci with 0, 15, and 30 magnitudes of visual extinction corrected for the distance. The slanting lines show the reddening vectors for spectral types B0 and O5. \n(b) J-H vs H-K colour-colour diagram for sources (magenta dots) in the same region as (a). The cyan and black curves show the loci of main sequence and giants, respectively, and are taken from \\citet{1983A&A...128...84K} and \\citet{1988PASP..100.1134B}. The locus of classical T Tauri adopted from \\citet{1997AJ....114..288M} is shown as long dashed line. The locus of Herbig AeBe stars shown as short dashed line is adopted from \\citet{1992ApJ...393..278L}. The parallel lines are the reddening vectors where cross marks indicate intervals of 5 mag of visual extinction. The colour-colour plot is divided into three regions, namely, `F', `T', and `P' (see text for more discussion). The interstellar reddening law assumed is from \\citet{1985ApJ...288..618R}. The magnitudes, colours and various loci plotted in both the diagrams are in the \\citet{1988PASP..100.1134B} system. \nThe identified early type (earlier than B0) and YSOs candidates are highlighted as blue and red stars, respectively. Of these, the ones located towards the central, bright radio emission are labelled as `E' and `Y', respectively. (c) An enlarged view of the bottom left portion of (b) showing the position of spectral types on the main sequence locus. The location of the source E4 and the errors on the colour are also shown.}\n\\label{NIR-CC-CM}\n\\end{figure*}\n\\begin{figure}\n\\centering\n\\includegraphics[scale=0.2]{Fig8.eps}\n\\caption{Panel a: {\\it Spitzer} 5.8\\,$\\rm \\mu m$ image in grey scale overlaid by 610~MHz radio contours. The contour levels are 0.002, 0.01, 0.03, 0.06, 0.12, and 0.2~Jy\/beam. The YSOs and massive stars detected towards the central ionized region are shown in red and blue coloured stars, respectively. Panel b: This shows the zoom-in view of central part of the H\\,{\\scshape ii}\\ region.}\n\\label{EY-Spa-dis}\n\\end{figure}\nWhile the spectral type estimated from the GMRT radio emission at 1280~MHz is consistent with that of IRS-1, we investigate the stellar population within the radio emission for a better understanding. \nIn Figs.~\\ref{NIR-CC-CM}(a) and (b), we plot the NIR colour-magnitude and colour-colour diagrams, respectively, of 2MASS sources located within the 3$\\sigma$ radio contour. \nFig.~\\ref{NIR-CC-CM}(c) shows an enlarged view of the bottom left portion of Fig.~\\ref{NIR-CC-CM}(b) to highlight the location of the star E4 with respect to the main sequence locus.\nFollowing the discussion given in \\citet[Fig. 7;][]{2006A&A...452..203T} the colour-colour plot is classified into `F', `T', and `P' regions. The `F' region is occupied by mostly field stars or Class III sources, the `T' region is for T-Tauri stars (Class II YSOs) and protostars (Class I YSOs) populate the `P' region.\nAs seen from the figures, there are sixteen sources earlier than spectral type B0 and eighteen identified YSOs. The sample of identified YSOs fall in the `T' region, the sources of which are believed to be Class II objects with NIR excess. 
Sources that lie towards the central, bright radio-emitting region are labelled in the figures with prefixes of `E' for the sources earlier than B0 and `Y' for the YSOs. As indicated in Fig.~\\ref{NIR-CC-CM}(b), early type sources E1 and E2 are also the identified YSOs, Y1 and Y3, respectively. The coordinates and NIR magnitudes of these selected sources are listed in Table~\\ref{EY-sources}. Fig.~\\ref{EY-Spa-dis} shows the spatial distribution of the above sources with respect to the radio and 5.8\\,$\\rm \\mu m$ emission.\n\nAs seen from the above analysis, in addition to the presence of possible discrete radio sources that could be internally ionized, several massive stars are also identified from the NIR colour-magnitude and colour-colour diagrams. Hence, it is likely that ionization in this {H\\,{\\scshape ii}~region} is the result of this cluster of massive stars. However, the observed symmetrical, cometary morphology of the ionized emission strongly suggests that the ionization is mostly dominated by a single star. \nAs seen in Fig.~\\ref{NIR-CC-CM}(a), out of the early type sources that lie towards the central, bright radio emission, the colour and magnitude of the source E4 are consistent with a spectral type of $\\sim$O6.\nA careful scrutiny of Fig.~\\ref{NIR-CC-CM}(b) shows that early type stars E1 and E2 are embedded Class II sources and hence unlikely to be the main driving source. Sources E3, E5, and E6 are possibly reddened giants or field stars. The location of the early type star E4 (which is the source IRS-1) in the colour-colour diagram (see enlarged view shown in Fig.~\\ref{NIR-CC-CM}c) agrees fairly well with the spectral type estimate of $\\sim$O6 obtained from the colour-magnitude diagram and strongly advocates it as the dominating exciting source. This is consistent with the identification of IRS-1 as the ionizing star in previous studies. Spatially also, the location of E4 clearly suggests its role in the formation of the network of pillar-like structures observed (see Section \\ref{pillars}). As mentioned earlier, the spectral type of E4, estimated from NIR spectroscopy, is in good agreement with the radio flux. Location-wise, however, it is 30~arcsec away from the radio peak. This offset could be attributed to density inhomogeneities or the clumpy structure of the surrounding, ambient ISM. Supporting this scenario of E4 being the dominant player are the interesting pillar-like structures revealed in the MIR images, discussed in the next section. 
\n\\begin{table*}\n\\caption{Early type and YSOs detected within the central, bright, radio emission of IRAS 17149$-$3916}\n\\begin{center}\n\\centering\n\\scalebox{0.85}{\n\\begin{tabular}{cccccc}\n\\hline\nSource & \\multicolumn{2}{c}{Coordinates} & J & H & K \\\\\n & RA (J2000) & DEC (J2000) & & & \\\\ \\hline\n & & {\\it Early-type sources} & & & \\\\[1mm]\nE1 (Y1) & 17 18 22.21 & -39 18 42.24 & 11.364 & 10.039 & 8.957 \\\\\nE2 (Y3) & 17 18 22.85 & -39 18 22.45 & 15.356 & 12.086 & 10.086 \\\\\nE3 & 17 18 25.11 & -39 18 46.41 & 15.398 & 12.486 & 11.361 \\\\\nE4 & 17 18 25.45 & -39 19 08.61 & 8.654 & 8.208 & 7.927 \\\\\nE5 & 17 18 25.68 & -39 18 26.65 & 11.790 & 9.617 & 8.606 \\\\\nE6 & 17 18 25.94 & -39 18 00.89 & 7.715 & 7.191 & 7.021 \\\\ \\hline\n & & {\\it YSOs} & & & \\\\[1mm]\nY1 (E1) & 17 18 22.21 & -39 18 42.24 & 11.364 & 10.039 & 8.957 \\\\\nY2 & 17 18 22.28 & -39 18 12.69 & 15.064 & 14.161 & 13.578 \\\\\nY3 (E2) & 17 18 22.85 & -39 18 22.45 & 15.356 & 12.086 & 10.086 \\\\\nY4 & 17 18 22.87 & -39 18 58.45 & 14.413 & 13.346 & 12.393 \\\\\nY5 & 17 18 23.28 & -39 19 07.77 & 12.993 & 11.906 & 11.135 \\\\\nY6 & 17 18 24.44 & -39 19 11.04 & 14.324 & 12.958 & 12.166 \\\\\nY7 & 17 18 25.14 & -39 19 25.95 & 14.741 & 13.657 & 12.664 \\\\\nY8 & 17 18 25.28 & -39 18 24.76 & 12.818 & 11.372 & 10.420 \\\\\nY9 & 17 18 28.30 & -39 19 49.71 & 14.677 & 13.449 & 12.680 \\\\\n\\hline\n\\end{tabular}}\n\\label{EY-sources}\n\\end{center}\n\\end{table*}\n\n\\subsection{Triggered star formation}\\label{pillars}\n\n\\subsubsection*{Pillar Structures}\nIn Fig.~\\ref{pillar-structure}, we illustrate the identification of pillar structures in the IRAC 8\\,$\\rm \\mu m$ image. The MIR emission presents a region witnessing a complex interplay of the neutral, ambient ISM with the ionizing radiation of newly formed massive star(s). The boxes labelled `A' and `B' show prominent pillar structures, the orientation of these are clearly pointed towards E4. This strongly suggests E4 as the main sculptor of the detected pillars. Furthermore, it also supports the identification of E4 as the main ionizing source of the H\\,{\\scshape ii}~region\\ . \n\\begin{figure}\n\\centering\n\\includegraphics[scale=0.15]{Fig9.eps}\n\\caption{(a) 8.0\\,$\\rm \\mu m$ map of IRAS 17149$-$3916 from the {\\it Spitzer}-GLIMPSE survey. Blue arrows highlight the pillar structures identified within Boxes `A' and `B'. The white `+' mark shows the position of E4 (IRS-1). (b) A zoom-in on `A' at IRAC 5.8\\,$\\rm \\mu m$. (c) 1280\\,MHz map covering the pillar `A'.}\n\\label{pillar-structure}\n\\end{figure}\nOne of the mechanisms widely accepted to explain the formation of these pillars is the radiative driven implosion (RDI) \\citep{1994A&A...289..559L}. Here, pre-existing clouds exposed to newly forming massive star(s) are sculpted into pillars by slow photoevaporation caused due to strong impingement of ionizing radiation. The other being the classical collect and collapse model of triggered star formation proposed by \\citet{1977ApJ...214..725E}. Under this framework, the expanding {H\\,{\\scshape ii}~region} sweeps up the surrounding material, creating dense structures that could eventually form pillars in their shadows. \n\nFigs.~\\ref{pillar-structure}(b) and (c) show the zoomed in IR and radio view of pillar `A'. Clearly seen is a slightly elongated and bright radio source at the pillar head. To ascertain the nature of the bright radio source, we estimate few physical parameters using the 1280~MHz GMRT map. 
Using the 2D fitting tool of {\\small CASA} viewer we fit a 2D Gaussian and determine the deconvolved size and flux density of this source to be $\\rm 19.05~arcsec \\times 8.07~arcsec$ ($\\theta_{\\rm source} = 12.4 \\rm arcsec; 0.12 \\rm pc$) and $\\rm 445 \\,mJy\\,$, respectively. Inserting these values in equations \\ref{Lyman_flux}, \\ref{e_no_density}, and \\ref{emission_measure}, we get $ log N_{\\rm Ly} = 47.30$ , $n_{\\rm e} = 3.9\\times10^3~\\rm cm^{-3}$ and EM= $\\rm 1.8\\times10^6~pc~cm^{-6}$. The ionized mass ($\\rm M_{ion} = \\frac{4}{3}\\pi r^{3} \\mathit{n_{\\rm e}} m_{p}$, where $\\rm r$ is the radius of the source and $\\rm m_{p}$ is the mass of proton) is also calculated to be $\\rm 0.09 ~ M_{\\odot}$. \nThe estimated values of these physical parameters lie in between the typical values of compact and UCHII region \\citep{2002ASPC..267...81K,2005A&A...433..205M,2021A&A...645A.110Y}.\nHence this radio source could well represent an intermediate evolutionary stage between a compact and an UC{H\\,{\\scshape ii}~region} thus indicating a direct signature of triggered star formation at the tip of the pillar. An alternate picture for the bright radio emission at the head of the pillar could also be external ionization by the ionizing front emanating from E4. Such externally ionized, tadpole-shaped structures have been studied in the Cygnus region, where the ionized front heads point towards the central, massive Cygnus OB2 cluster \\citep{2019A&A...627A..58I}. In support of the former scenario of the UC{H\\,{\\scshape ii}~region}, a bright and compact 5.8\\,$\\rm \\mu m$ emission region is seen that is co-spatial with radio emission. This compact IR emission is seen in all IRAC bands and {\\it Herschel} images. While it is not listed as an IRAC point source in the GLIMPSE catalog, it is included in the PACS 70~$\\rm \\mu m$ point source catalog \\citep{2017arXiv170505693M}. There also exists a 2MASS counterpart (within $\\sim$3~arcsec) but has been excluded from the YSO identification procedure owing to poor quality photometry in one or more 2MASS bands. It is thus likely that the compact IR emission sampled in the {\\it Spitzer}-GLIMPSE and {\\it Herschel} images is the massive YSO powering the UC{H\\,{\\scshape ii}~region}.\n\nSeveral studies (e.g. \\citealt{2010ApJ...712..797B,2017MNRAS.470.4662P}) have shown evidence of star formation in the pillar tips in the form jets, outflows, YSO population, etc. \nThe driving mechanism for this triggered star formation, RDI, is initiated when the propagating ionizing front traverses the pillar head creating a shell (known as the ionized boundary layer, IBL) of ionized gas. If the pressure of the IBL exceeds the internal pressure of the neutral gas within the pillar head then shocks are driven into it. This leads to compression and subsequent collapse of the clump leading to star formation. However, to comment further on the detected UC{H\\,{\\scshape ii}~region} on the tip of pillar `A' and link its formation to RDI, one needs to conduct pressure balance analysis using molecular line data as discussed in \\citet{2017MNRAS.470.4662P} and \\citet{2013A&A...556A.105O}. These authors have used $\\rm ^{13}CO$ transitions for their analysis which is not possible in our case as CO molecular line data with adequate spatial resolution is not available for this region of interest. Furthermore, attempting any study with the detected MALT90 transitions is difficult given the limited spatial resolution. 
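The ionized mass quoted above for the source at the pillar head can be reproduced directly from the listed quantities; a short Python sketch of this calculation is given below, in which the deconvolved size of 0.12~pc is taken to be the source diameter (so that $r \\simeq 0.06$~pc, an assumption of this sketch) and $n_{\\rm e}$ is the value derived from the 1280~MHz map.
\\begin{verbatim}
import numpy as np

pc    = 3.086e18        # cm
m_p   = 1.6726e-24      # g
M_sun = 1.989e33        # g

n_e = 3.9e3             # cm^-3, electron density quoted above
r   = 0.5 * 0.12 * pc   # cm, half of the deconvolved source size

M_ion = (4.0 / 3.0) * np.pi * r**3 * n_e * m_p / M_sun
print(f"M_ion ~ {M_ion:.2f} M_sun")   # ~0.09 M_sun, as quoted
\\end{verbatim}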
In a recent study, \\citet{2020MNRAS.493.4643M} have carried out involved hydrodynamical simulations to study pillar formation in turbulent clouds. As discussed by these authors, star formation triggered in pillar heads can be explained without invoking the RDI mechanism. Gravitational collapse of pre-existing clumps can lead to star formation without the need for ionizing radiation to play any significant role. From their simulations, they conclude that compressive turbulence driven in {H\\,{\\scshape ii}~regions}, which competes with the reverse process of photoevaporation of the neutral gas, ultimately dictates the triggering of star formation in these pillars. Further high-resolution studies are required to understand the nature of the compact radio and IR emission at the head of the pillar `A'.\n\n\\subsubsection*{Dust clumps and the collect and collapse mechanism}\n\\label{clumps-CC}\nThe detection of clumps and the signature of fragmentation to cores is evident from the FIR and sub-mm dust continuum maps presented in Fig.~\\ref{dustclumps}. Investigating the collect and collapse hypothesis is necessary to explain whether the dust clumps are a result of swept-up material that is accumulated or these clumps are pre-existing entities. Towards this, we carry out a simple analysis and evaluate few parameters such as the dynamical age ($t_{\\rm dyn}$) of the {H\\,{\\scshape ii}~region} and the fragmentation time ($t_{\\rm frag}$) of the cloud. \n\nAssuming that the {H\\,{\\scshape ii}~region} associated with IRAS 17149$-$3916 expands in a homogeneous cloud, the dynamical timescale can be estimated using the classical expressions from \\citet{1978ppim.book.....S} and \\citet{1980pim..book.....D},\n\\begin{equation}\nt_{\\rm dyn} = \\frac{4}{7}\\frac{R_{\\rm St}}{a_{\\rm Hii}}\\left [ \\left ( \\frac{R_{\\rm if}}{R_{\\rm St}} \\right )^{7\/4} - 1 \\right ] \n\\end{equation}\nwhere, $a_{\\rm Hii}$ is the isothermal sound speed and is assumed to be 10~km s$^{-1}$, $R_{\\rm if} = 1.8$~pc is the radius of the {H\\,{\\scshape ii}~region} determined from the geometric mean of an ellipse visually fit to encompass the ionized emission in the 610~MHz GMRT map. $ R_{\\rm St}$ is the Str\\\"{o}mgren radius, given by the following equation\n\\begin{equation}\nR_{\\rm St} = \\left ( \\frac{3 N_{\\rm Ly}}{4 \\pi n^{2}_{0} \\alpha_{\\rm B}} \\right )^{1\/3} \n\\end{equation}\nwhere, $N_{\\rm Ly}$ is the Lyman continuum flux, $n_{0}$ is the initial particle density of the ambient gas. To derive $n_{0}$, we assume that the dense, bright region seen in the column density map is swept-up material due to the expansion of the H\\,{\\scshape ii}~region\\ and it was initially homogeneously distributed within the radius of the {H\\,{\\scshape ii}~region}. To estimate the mass of this swept-up material, we use equation \\ref{mass_eqn} and the column density map (refer Section \\ref{dust_clump}). Integrating within the black contour shown in Fig.~\\ref{cdtchi}(a), the mass is estimated to be $1645~ {M_{\\odot}}$. Taking the observed estimate of the radius, we calculate $n_{0}$ to be $9.5 \\times 10^{2}$ cm$^{-3}$. $\\alpha_{B}$ is the coefficient of radiative recombination and is determined using the expression \\citep{1980pim..book.....D}:\n\\begin{equation}\n\\alpha_{\\rm B} = 2.6\\times10^{-13} \\left ( \\frac{10^{4}\\textup{K}}{T_{\\rm e}} \\right )^{0.7} \\textup{cm}^{3} \\: \\textup{s}^{-1}\n\\end{equation}\nwhere, $T_{\\rm e} = 5000$~K, is the electron temperature. 
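The above expressions are straightforward to evaluate; a short Python sketch of the calculation is given below. The Lyman continuum rate adopted in it, $\\log N_{\\rm Ly} \\approx 48.7$, is an assumed representative value for an O6.5V -- O7V star.
\\begin{verbatim}
import numpy as np

pc, Myr = 3.086e18, 3.156e13   # cm, s
N_Ly  = 10**48.7     # s^-1, assumed Lyman continuum rate (O6.5V-O7V)
n_0   = 9.5e2        # cm^-3, initial ambient density estimated above
T_e   = 5000.0       # K, adopted electron temperature
a_HII = 1.0e6        # cm s^-1, isothermal sound speed of 10 km/s
R_if  = 1.8 * pc     # cm, observed radius of the HII region

alpha_B = 2.6e-13 * (1.0e4 / T_e) ** 0.7
R_St    = (3.0 * N_Ly / (4.0 * np.pi * n_0**2 * alpha_B)) ** (1.0 / 3.0)
t_dyn   = (4.0 / 7.0) * (R_St / a_HII) * ((R_if / R_St) ** 1.75 - 1.0)

# R_St ~ 0.5 pc and t_dyn of order 0.2 Myr for these assumed inputs
print(f"R_St ~ {R_St / pc:.2f} pc, t_dyn ~ {t_dyn / Myr:.2f} Myr")
\\end{verbatim}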
Using the above parameters and from the spectral type of the ionizing source (O6.5V - O7V), we estimate the $t_{\\rm dyn}$ to be $\\sim$ 0.2~Myr.\n\nUsing the formalism discussed in \\citet{1994MNRAS.268..291W}, we proceed next to estimate the fragmentation time scale of a cloud and that can be written as\n\\begin{equation}\nt_{\\rm frag} = 1.56 \\: a^{7\/11}_{{\\rm s},0.2}\\: \\mathit{N^{-1\/11}_{{\\rm Ly},\\textup{49}}}\\: n^{-5\/11}_{0,3} \\: \\textup{Myr}\n\\end{equation}\nwhere, $a_{\\rm s} = a_{\\rm s,0.2} \\times 0.2$ km s$^{-1}$ is the speed of sound in the shocked layer and is taken as 0.3 km s$^{-1}$ \\citep{2017MNRAS.472.4750D}. $N_{\\rm Ly} = N_{\\rm Ly,\\textup{49}} \\times 10^{49}$ s$^{-1}$ is the ionizing photon flux and $n_{0} = n_{0,3}\\times 10^{3}$ cm$^{-3}$ is the initial particle density of the ambient gas. Plugging in the values in the above expression, we estimate $t_{\\rm frag}$ to be $\\sim$ 2.2~Myr.\nComparing the estimates of the two time scales involved, it is seen that the fragmentation time scale is more than a factor of 10 larger than the dynamical time scale of the {H\\,{\\scshape ii}~region}. This essentially indicates that if the clumps detected are the result of swept-up material due to expansion of the {H\\,{\\scshape ii}~region}, then the shell has not got enough time to fragment thus making the collect and collapse process highly unlikely here. Such a scenario has been invoked by \\citet{2012A&A...544A..39J} for the dust bubble N22. In contrast, \\citet{2017MNRAS.472.4750D}, in their investigation of bubble CS51 found support for the collect and collapse hypothesis. Thus, further studies, as indicated earlier, are required to probe the RDI process not only with regards to the pillar structures but also the detected clumps. \n\n\\subsection{Nature of the detected dust clumps and cores}\n\\subsubsection*{Virial analysis of the dust clumps}\nHere, we investigate the gravitational stability of the identified dust clumps associated with IRAS 17149$-$3916. This would enable us to determine whether these clumps are gravitationally bound or not. The virial mass, $M_{\\rm vir}$, of a dust clump is the amount of mass that can be supported against self-gravity purely by thermal and non-thermal gas motion. This is given by \\citet{2016MNRAS.456.2041C}\n\\begin{equation}\nM_{\\rm vir} = \\frac{5\\ r\\ \\Delta V^2}{8\\ {\\rm ln}(2)\\ a_1\\ a_2\\ G} \\sim 209\\ \\frac{1}{a_1\\ a_2} \\left(\\frac{\\Delta V}{\\rm km\\ s^{-1}} \\right)^2\\ \\left(\\frac{r}{\\rm pc}\\right) M_{\\odot}\n\\end{equation}\nIn the above equation, $\\Delta V$ is the line width of the optically thin $\\rm N_2 H^+$ line, $r$ is the radius of clumps taken from Table \\ref{clump-param}, the constant $a_1$ accounts for the correction for power-law density distribution, and is given by $a_1 = (1-p\/3)\/(1-2p\/5)$, for $p< 2.5$ \\citep{1992ApJ...395..140B} where we adopt $p=1.8$ \\citep{2016MNRAS.456.2041C}. The constant $a_2$ accounts for the shape of the clump which we assume to be spherical and take $a_2$ as 1.\nWe also calculate the virial parameter, $\\alpha_{\\rm vir} = M_{\\rm vir}\/M_{\\rm clump}$. The estimated values of $M_{\\rm vir}$ and $\\alpha_{\\rm vir}$ are listed in Table \\ref{clump-param}. \nAs discussed in \\citet{2013ApJ...779..185K} and \\citet{2019ApJ...878...10T}, $\\alpha_{\\rm vir} = 2$ sets a lower threshold for gas motions to prevent collapse in the absence of magnetic field and\/or external pressure. 
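The numerical factor in the expression above, together with the $\\rm N_2 H^+$ line widths and clump radii, fixes $M_{\\rm vir}$ completely; the short Python sketch below evaluates it for the three clumps and recovers the values listed in Table \\ref{clump-param} to within rounding of $r$ and $\\Delta V$.
\\begin{verbatim}
# Virial masses from the N2H+ line widths (km/s) and clump radii (pc),
# using the power-law density correction a1 with p = 1.8 and a2 = 1.
p_index = 1.8
a1 = (1.0 - p_index / 3.0) / (1.0 - 2.0 * p_index / 5.0)   # ~1.43
a2 = 1.0

for name, dV, r in (("Clump 1", 3.2, 0.3),
                    ("Clump 2", 3.7, 0.3),
                    ("Clump 3", 3.9, 0.2)):
    M_vir = 209.0 / (a1 * a2) * dV**2 * r
    print(f"{name}: M_vir ~ {M_vir:.0f} M_sun")
\\end{verbatim}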
The virial parameter estimate for Clump 1 is $< 2$, indicating that it is gravitationally bound and hence likely to collapse. However, Clump 2 is marginally above this threshold and Clump 3, which shows signatures of star formation in the form of an UC{H\\,{\\scshape ii}~region}, has a higher virial parameter value of 4.1. Similar values of $\\alpha > 2$ have been observed for protostellar and prestellar dense cores by \\citet{2019ApJ...878...10T}. These authors have used the $\\rm C^{18}O$ line and discuss the contribution from turbulence as a primary factor that would significantly affect the line width and hence overestimate the virial mass. While turbulence gets dissipated in the densest regions of molecular clouds and the $\\rm N_2 H^+$ line used here is a dense gas tracer, it is likely that the resolution of the MALT90 survey does not probe the inner dense cores and the observed velocity dispersion is influenced by the outer, more turbulent region. High-resolution molecular line observations are thus essential to probe the nature of the clumps. \n\n\\subsubsection*{Clump fragmentation and the detected cores}\nThe {\\it ALMA} 1.4~mm continuum map (Fig.~\\ref{dustclumps}) resolves the identified dust clumps into a string of cores, with masses ranging between $0.3 - 62.3~M_{\\odot}$ and radii $\\sim 0.01$~pc, thus indicating a scenario of hierarchical fragmentation. If we assume that fragmentation of the clumps is governed by thermal Jeans instability, then the initially homogeneous gas clump has a Jeans length and mass given by \\citet{2019ApJ...886..102S}\n\\begin{equation}\n \\lambda_J = \\sigma_{\\rm th} \\left ( \\frac{\\pi}{G\\rho} \\right )^{1\/2}\n\\end{equation}\nand \n\\begin{equation}\n M_J = \\frac{4\\pi\\rho}{3}\\left ( \\frac{\\lambda_J}{2} \\right )^3 = \\frac{\\pi^{5\/2}}{6}\\frac{\\sigma_{\\rm th}^3}{\\sqrt{G^3\\rho}}\n\\end{equation}\nwhere $\\rho$ is the mass density, $G$ the gravitational constant, and $\\sigma_{\\rm th}$ the thermal velocity dispersion (the isothermal sound speed), which is given by\n\\begin{equation}\n \\sigma_{\\rm th} = \\left ( \\frac{k_B T}{\\mu m_{\\rm H}} \\right )^{1\/2}\n\\end{equation}\nwhere $k_B$ is the Boltzmann constant and $\\mu$ the mean molecular weight. \nAs the thermal velocity dispersion will be dominated by $\\rm H_2$ and He, we consider $\\mu = 2.37$ \\citep{{2008A&A...487..993K},{2014MNRAS.439.3275W},{2019ApJ...886..102S}}. Using the clump parameters tabulated in Table \\ref{clump-param}, we estimate $\\lambda_J$ and $M_J$ of the clumps, which are listed in Table \\ref{clump-Jeans}. If turbulence drives the fragmentation instead, then the turbulent Jeans length and mass for each clump are derived by replacing the thermal velocity dispersion with the clump velocity dispersion estimated from the observed line width of the dense gas tracer $\\rm N_2 H^+$, which is a good approximation for the turbulent line width. The calculated $\\lambda_{\\rm turb}$ and $M_{\\rm turb}$ values are given in Table \\ref{clump-Jeans}. 
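As an illustrative cross-check of the entries in Table \\ref{clump-Jeans}, the short Python sketch below evaluates the Jeans quantities for Clump 1; the mean density is taken here from the clump mass and effective radius, $\\rho = 3M_{\\rm clump}\/(4\\pi r^{3})$, which is an assumption of this sketch.
\\begin{verbatim}
import numpy as np

G, k_B, m_H = 6.674e-8, 1.381e-16, 1.673e-24   # cgs
pc, M_sun   = 3.086e18, 1.989e33
mu = 2.37                                      # mean molecular weight

def jeans(T, M_clump, r_pc, dV):
    sigma_th = np.sqrt(k_B * T / (mu * m_H))               # cm/s
    rho = 3.0 * M_clump * M_sun / (4.0 * np.pi * (r_pc * pc)**3)
    lam_J = sigma_th * np.sqrt(np.pi / (G * rho)) / pc     # pc
    M_J = (np.pi**2.5 / 6.0) * sigma_th**3 / np.sqrt(G**3 * rho) / M_sun
    scale = (dV * 1.0e5 / np.sqrt(8.0 * np.log(2.0))) / sigma_th
    return lam_J, M_J, scale * lam_J, scale**3 * M_J

# Clump 1: T ~ 27.8 K, M ~ 250 M_sun, r ~ 0.3 pc, Delta V ~ 3.2 km/s
print(jeans(27.8, 250.0, 0.3, 3.2))
# -> roughly (0.2 pc, 7 M_sun, 0.8 pc, 560 M_sun), close to the Table values
\\end{verbatim}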
\n\\begin{table*}\n\\caption{Hierarchical fragmentation of clumps associated with IRAS 17149$-$3916}\n\\begin{center}\n\\centering\n\\begin{tabular}{ccccccc}\n\\hline\nClump & $\\sigma_{\\rm th}$ & $\\sigma^a$ & $\\lambda_J$ & $M_J$ & $\\lambda_{\\rm turb}$ & $M_{\\rm turb}$ \\\\\n & ($\\rm km\\,s^{-1}$) & ($\\rm km\\,s^{-1}$) & (pc) & ($M_{\\odot}$) & (pc) & ($M_{\\odot}$) \\\\ \n\\hline \n\n1 & 0.3 & 1.4 & 0.2 & 6.8 & 0.8 & 543 \\\\\n2 & 0.3 & 1.6 & 0.2 & 5.3 & 0.9 & 807 \\\\\n3 & 0.3 & 1.7 & 0.1 & 5.6 & 0.8 & 819 \\\\\n\n\\hline\n\\end{tabular}\n\\label{clump-Jeans}\n\\end{center}\n$^a$ $\\sigma = \\Delta V\/\\sqrt{8\\,{\\rm ln}2}$, with $\\Delta V$ being the $\\rm N_2 H^+$ line width.\n\\end{table*}\nThe turbulent Jeans masses are $\\sim 80 - 150$ times larger than the thermal Jeans masses. \nComparing with the derived core masses, it is seen that 11 out of the 15 detected cores ($\\sim 73\\%$) in the three clumps have masses less than the thermal Jeans mass. This suggests that the observed cores are consistent with the prediction of Jeans fragmentation without invoking turbulence, indicating that turbulence does not play a significant role in the fragmentation process. Similar results are obtained by \\citet{2019ApJ...886..102S}, who studied 70\\,$\\rm \\mu m$ dark massive clumps in early stages using {\\it ALMA} data. As discussed by these authors, the majority of detected cores having masses less than the thermal Jeans mass supports the competitive accretion and hierarchical fragmentation frameworks. The four cores whose masses exceed the Jeans mass (the `super-Jeans' cores) are suitable candidates for forming high-mass stars. However, further high-resolution observations are essential to completely understand the fragmentation process, if any, at the core level. \n\n\\begin{figure*}\n\\centering \n\\includegraphics[scale=0.6]{Fig10.eps}\n\\caption{Masses of the dense cores identified from the 1.4\\,mm {\\it ALMA} maps and the cold dust clumps identified using the 350\\,$\\rm \\mu m$ \\textit{Herschel} map of IRAS 17149$-$3916 are plotted as a function of their effective radii, depicted by circles and `$\\star$'s, respectively. The shaded area corresponds to the low-mass star-forming regime, which does not satisfy the condition $ M > 870\\,M_\\odot(r\/\\rm pc)^{1.33}$ \\citep{2010ApJ...723L...7K}. Black-dashed lines indicate the surface density thresholds of 0.05 and 1\\,$\\rm g\\,cm^{-2}$ defined by \\citet{2014MNRAS.443.1555U} and \\citet{2008Natur.451.1082K}, respectively. The red lines represent the surface density thresholds of 116\\,$M_\\odot\\,\\rm pc^{-2}$ ($\\sim 0.024\\,\\rm g\\,cm^{-2}$) and 129\\,$M_\\odot\\,\\rm pc^{-2}$ ($\\sim 0.027\\,\\rm g\\,cm^{-2}$) for active star formation proposed by \\citet{2010ApJ...724..687L} and \\citet{2010ApJ...723.1019H}, respectively.}\n\\label{alma_cores_mass_radius}\n\\end{figure*}\nIn Fig.~\\ref{alma_cores_mass_radius}, we plot the estimated mass and radius of the identified clumps and cores. The plot also compiles several surface density thresholds proposed by various studies to identify clumps\/cores with efficient and active star formation \\citep{2010ApJ...724..687L,2010ApJ...723.1019H,2014MNRAS.443.1555U}. In addition, criteria for these to qualify as high-mass star-forming ones are also included. All the detected clumps and cores associated with IRAS 17149$-$3916 are seen to be active star-forming regions. 
The three clumps satisfy the empirical mass-radius criterion, $ M > 870\\,M_\\odot(r\/\\rm pc)^{1.33}$, defined by \\citet{2010ApJ...723L...7K}, and hence are likely to harbour massive star formation. At the core scale ($\\rm < 0.1~pc$), \\citet{2008Natur.451.1082K} have posed a theoretical surface density threshold of $\\rm 1\\,g\\,cm^{-2}$, below which cores would be devoid of high-mass star formation. From the figure, we see that there are four cores (2 in Clump 1, and 1 each in Clumps 2 and 3) which have masses $\\gtrsim 10~M_{\\odot}$ and lie above this surface density limit. These are the `super-Jeans' cores discussed above. \nHigh-resolution molecular line observations are essential to shed better light on the nature of the cores and the gas kinematics involved, and for an accurate determination of physical parameters such as temperature and mass.\n\n\\subsection{Conclusion}\n\\label{conclusion}\nUsing multiwavelength data, we have carried out a detailed analysis of the region associated with IRAS 17149$-$3916. The important results of this study are summarized below.\n\n\\begin{enumerate}\n\\item Using the GMRT, we present the first low-frequency radio continuum maps of the region, mapped at 610 and 1280~MHz. The {H\\,{\\scshape ii}~region}, previously believed to be nearly spherical, displays a large-extent cometary morphology. The origin of this morphology is not explained by the bow-shock model. The presence of dense clumps towards the cometary head indicates either the champagne-flow or the clumpy cloud model, but further observations of the ionized gas kinematics are essential to understand the observed morphology. \n\n\\item The integrated flux densities yield an average spectral index value of $-0.17\\pm0.19$, consistent with thermal {\\it free-free} emission. If powered by a single massive star, the estimated Lyman continuum photon flux suggests an exciting star of spectral type O6.5V -- O7V. \n\n\\item NIR colour-magnitude and colour-colour diagrams show the presence of a cluster of massive stars (earlier than spectral type B0) located within the bright, central radio-emitting region. MIR and FIR images show complex and interesting features such as a bubble, pillars, clumps, filaments, and arcs, revealing the profound radiative and mechanical feedback of massive stars on the surrounding ISM. \n\n\\item The spatial location of the source E4 (IRS-1) and the orientation of the observed pillar structures with respect to it strongly suggest it as the dominant driving source for the cometary {H\\,{\\scshape ii}~region}. This view finds support from the position of E4 in the colour-magnitude and colour-colour diagrams. Further, its spectral type estimate from the literature agrees well with that estimated for the exciting source of the {H\\,{\\scshape ii}~region} from the GMRT data. \n\n\\item The column density map reveals the presence of dust clumps towards the cometary head, while the dust temperature map appears to be relatively patchy with regions of higher temperature within the radio nebula. The dust clumps identified using the \\textit{Herschel} 350\\,$\\rm \\mu m$ map have masses ranging between $\\sim$100 - 300~$\\rm M_\\odot$ and radii $\\sim$0.2 - 0.3~pc. Virial analysis using the $\\rm N_2 H^+$ line shows that the south-east clump (\\#1) is gravitationally bound. 
For the other two clumps (\\# 2 and 3), the line widths would possibly have contribution from turbulence thus rendering larger values of the virial parameter.\n\n\\item A likely compact\/UC{H\\,{\\scshape ii}~region} is seen at the tip of a pillar structure oriented towards the source E4 thus suggesting evidence of triggered star formation under the RDI framework. In addition, the detected dust clumps are investigated to probe the collect and collapse model of triggered star formation. The estimated dynamical time scales are seen to be smaller by a factor of $\\sim$10 compared to the fragmentation timescale of the clumps thus clearly negating the collect and collapse mechanism at work. \n\n\\item The {\\it ALMA} 1.4~mm dust continuum map probes the dust clumps at higher resolution and reveal the presence of 17 compact dust cores with masses and radii in the range of $0.3 - 62.3~M_\\odot$ and 0.01 -- 0.03~pc, respectively. The largest and the most massive core is located within Clump 3. The estimated core masses are consistent with thermal Jeans fragmentation and support the competitive accretion and hierarchical fragmentation scenario. \n\n\\item Four `super-Jeans' fragments are detected and are suitable candidates for forming high-mass stars and their mass and radius estimates satisfy the various threshold defined in literature for the potential high-mass star-forming cores.\n\n\\end{enumerate}\n\n\\section*{Acknowledgements}\nWe would like to thank the referee for comments and suggestions which helped in improving the quality of the manuscript. We thank the staff of the GMRT that made the radio observations possible. GMRT is run by the National Centre for Radio Astrophysics\nof the Tata Institute of Fundamental Research. The authors would like to thank Dr. Alvaro S\\'{a}nchez-Monge for providing the FITS image of the radio maps presented in \\citet{2013A&A...550A..21S}. CHIC acknowledges the support of the Department of Atomic Energy, Government of India, under the project 12-R\\&D-TFR-5.02-0700. This work is based\n[in part] on observations made with the {\\it Spitzer} Space Telescope, which is operated by the Jet Propulsion Laboratory, California Institute of Technology under a contract with NASA. This publication also made use of data products from {\\it Herschel} (ESA space observatory). This publication makes use of data products from the Two Micron All Sky Survey, which is a joint project of the University of Massachusetts and the Infrared Processing and Analysis Center\/California Institute of Technology, funded by the NASA and the NSF. This work makes use of the ATLASGAL data, which is a collaboration between the Max-Planck-Gesellschaft, the European Southern Observatory (ESO) and the Universidad de Chile. This paper makes use of the following ALMA data: ADS\/JAO.ALMA\\#2016.1.00191.S. ALMA is a partnership of ESO (representing its member states), NSF (USA) and NINS (Japan), together with NRC (Canada), MOST and ASIAA (Taiwan), and KASI (Republic of Korea), in cooperation with the Republic of Chile. The Joint ALMA Observatory is operated by ESO, AUI\/NRAO and NAOJ. This research has made use of the SIMBAD database, operated at CDS, Strasbourg, France. 
\n\n\\addcontentsline{toc}{section}{Acknowledgements}\n\\section*{Data Availability}\nThe original data underlying this article will be shared on reasonable request to the corresponding author.\n\n\n\\bibliographystyle{mnras}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} diff --git a/data_all_eng_slimpj/shuffled/split2/finalzzemjn b/data_all_eng_slimpj/shuffled/split2/finalzzemjn new file mode 100644 index 0000000000000000000000000000000000000000..b8aedc0d6b5bb1f4ffd0823338748e29ff860d08 --- /dev/null +++ b/data_all_eng_slimpj/shuffled/split2/finalzzemjn @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}\\label{sec: intro}\nIn the physical sciences, simple mathematical models are often used as a means of developing intuition and capturing phenomenology for complex systems. Whilst more complex, faithful models might require computational methods or advanced analytical techniques to study them, simple models are commonly amenable to succinct pen-and-paper analysis, and can sometimes yield dynamical predictions that agree with those of more intricate representations, at least qualitatively. This minimalistic approach is readily exemplified in the study of microswimmers, where minimal models of complex, shape deforming swimmers on the microscale are used in a range of settings, from back-of-the-envelope calculation and undergraduate teaching through to state-of-the-art, application-driven research contributions \\cite{Zottl2012,Thery2020,Gutierrez-Ramos2018,Qi2022,Spagnolie2015,Elgeti2016,Malgaretti2017,Mathijssen2016,Lauga2020,Omori2022,Lauga2009}.\n\nA common, if not ubiquitous, hydrodynamic representation of a microswimmer is the \\emph{force dipole} model, wherein the flow field generated by a swimmer is taken to correspond simply to that of a force dipole. This approximate representation is valid in the far field of a microswimmer, with force-free swimming conditions being appropriate in the inertia-free limit of low-Reynolds-number swimming that applies to many microswimmers, including the well-studied spermatozoa and the breaststroke-swimming algae \\textit{Chlamydomonas reinhardtii}. Invariably, the force dipole is assumed to be aligned along a swimmer-fixed axis and taken to be of constant signed strength. These parameters can be estimated from experimental measurements and hydrodynamic simulations \\cite{Klindt2015,Ishimoto2020}, and typically involve averaging out rapid temporal variations that can be present in biological swimmers. This leads to a minimal model of microswimming: instead of studying a complex, shape-deforming swimmer and the associated time-varying flow field, we can instead consider the motion of a constant-strength force dipole in the same environment. With this approximation, one can often derive surrogate equations of motion for the swimmer \\cite{Kim2005,Lauga2009}, which can then be analysed with ease, at least when compared to models that capture the intricate, time-dependent details of the microswimmer and the flow that it generates.\n\nThough many of the assumptions associated with this modelling approach are well understood, such as the limitations of the far-field approximation when studying near-field interactions, the impact of assuming a constant dipole strength is less clear. More generally, the impact of adopting constant, \\textit{a priori}-averaged parameters in simple models of temporally evolving microswimmers has not been thoroughly investigated, to the best of our knowledge. 
However, it is clear that employing rapidly varying parameters can have significant consequences for the predictions of simple models. For example, the work of \\citet{Omori2022} recently explored a simple model of shape-changing swimmers, explicitly including rapid variation in the parameters that describe the swimmer shape and its speed of self propulsion. The subsequent analysis of \\citet{Walker2022a} highlighted, amongst other observations, that the fast variation in the parameters was key to the behavioural predictions of the model, which were found to align with the experimental observations of \\citet{Omori2022}. Hence, the study of the effects of employing time-dependent parameters in even simple models, in comparison to adopting constant, averaged parameters, is warranted.\n\nThus, the primary aim of this study will be to explore minimal models of microswimming, incorporating fast variation in model parameters. To do this, motivated by a number of recent works in a similar vein \\cite{Walker2022,Walker2022a,Gaffney2022}, we will employ a multiple scales asymptotic analysis \\cite{Bender1999} to systematically derive effective governing equations from non-autonomous models. In particular, we will incorporate the effects of rapid variation by exploiting the separation of timescales often associated with microswimming, yielding leading-order autonomous dynamical systems. In what follows, we will focus on two example scenarios: the interaction of a dipole swimmer with a boundary, and the angular dynamics of two hydrodynamically interacting dipoles. Through our analysis, which will be simple, if not elementary, we will compare the dynamics of multi-timescale models with the predictions of the simplest, constant-parameter models, seeking to ascertain both if qualitative differences arise and if they can be systematically corrected for by informed parameter choices.\n\n\\section{A dipole near a no-slip boundary}\\label{sec: no slip}\n\\subsection{Model equations}\nConsider a swimmer moving in a half space that is bounded by an infinite plane, with the swimmer moving in a plane perpendicular to the boundary. We parameterise the orientation of the swimmer by the angle $\\theta$ between a swimmer-fixed director $\\d$ and the boundary normal, and parameterise its position by the distance $h$ from its centre to the boundary, as illustrated in \\cref{fig: no slip: setup}. With all quantities dimensionless, a minimal model for the swimmer dynamics is presented in part by \\citet{Lauga2020} as\n\\begin{subequations}\\label{eq: no slip: original system}\n\\begin{align}\n \\diff{h}{t} &= \\frac{3p}{16h^2}\\left(1 + 3\\cos{2\\theta}\\right) + u \\cos{\\theta}\\,,\\\\\n \\diff{\\theta}{t} &= \\frac{3p\\sin{2\\theta}}{64h^3}\\left[4 + B(3 + \\cos{2\\theta})\\right]\\,,\\label{eq: no slip: original system: angular}\n\\end{align}\n\\end{subequations}\nwhere $u$ is speed of self propulsion and we have shifted \\citeauthor{Lauga2020}'s definition of $\\theta$ by $\\pi\/2$. In this minimal model, the flow generated by the swimmer in the absence of the boundary is assumed to be purely that of a force dipole with vector strength $\\vec{p}=p\\d$, aligned along the body fixed director that defines $\\theta$, and the swimmer shape is captured only through the Bretherton parameter $B$ \\cite{Bretherton1962}. 
This modelling approach can be justified by considering a far-field limit of a swimmer, though here we focus on analysing the model of \\cref{eq: no slip: original system} rather than on its origin and motivation. In particular, we focus on the angular dynamics contained within \\cref{eq: no slip: original system: angular}.\n\nThe standard approach to modelling this system would be to assume that $p$, $u$, and $B$ are constant in time, as is the case in the textbook of \\citet{Lauga2020}. This can be interpreted as averaging away any time dependence of the three parameters, which one would generically expect to be present for a multitude of shape-changing microswimmers, for instance. Here, we do not perform this \\emph{\\textit{a priori}} averaging of the parameters, and will instead suppose that $p$, $u$, and $B$ are indeed functions of time. In particular, we suppose that $p=p(\\omega t)$, $u=u(\\omega t)$, and $B=B(\\omega t)$ are periodic functions of $\\omega t$, where $\\omega\\gg1$ is a large dimensionless frequency of oscillation and we assume that $p$, $u$, and $B$ share a period, in line with the rapid shape changes undergone by many microswimmers. For later convenience, we make the additional assumption that the average of $p$ over a period is non-zero, and will impose the minimal restriction that $B\\in(-1,1)$, which holds for all but the most elongated of objects \\cite{Bretherton1962}. Hence, we study the non-autonomous system\n\\begin{subequations}\\label{eq: no slip: full system}\n\\begin{align}\n \\diff{h}{t} &= \\frac{3p(\\omega t)}{16h^2}\\left(1 + 3\\cos{2\\theta}\\right) + u(\\omega t) \\cos{\\theta}\\,,\\\\\n \\diff{\\theta}{t} &= \\frac{3p(\\omega t)\\sin{2\\theta}}{64h^3}\\left[4 + B(\\omega t)(3 + \\cos{2\\theta})\\right]\\,,\n\\end{align}\n\\end{subequations}\nwith $\\omega\\gg1$ and all other quantities being $\\bigO{1}$ as $\\omega\\to\\infty$. \n\n\\begin{figure}\n \\centering\n \\includegraphics[width=0.3\\textwidth]{figs\/no_slip_setup\/no_slip_setup.png}\n \\caption{Geometry and parameterisation of a minimal model of a swimmer above a no-slip boundary. The swimmer is parameterised by the angle $\\theta$ between the vector dipole strength $\\vec{p}$ and the separation $h$ of its centroid from the boundary. The velocity due to self-propulsion is assumed to be along the direction of $\\vec{p}$, shared with swimmer-fixed director $\\d$.}\n \\label{fig: no slip: setup}\n\\end{figure}\n\n\\subsection{Multi-scale analysis}\nWe will attempt to exploit the large frequency $\\omega \\gg 1$ in order to make progress in analysing the non-autonomous system of \\cref{eq: no slip: full system}, employing the method of multiple scales \\cite{Bender1999}. Following this approach, we introduce the fast timescale $T \\coloneqq \\omega t$, so that $p = p(T)$ etc., and formally treat $t$ and $T$ as independent. The proper time derivative $\\mathrm{d}\/\\mathrm{d}t$ accordingly transforms as\n\\begin{equation}\\label{eq: no slip: time transform}\n \\diff{}{t} \\mapsto \\pdiff{}{t} + \\omega \\pdiff{}{T}\\,,\n\\end{equation}\ntransforming our non-autonomous system of ordinary differential equations (ODEs) into a system of partial differential equations (PDEs). 
We now seek asymptotic expansions of $h$ and $\\theta$ in inverse powers of $\\omega$, which we write as\n\\begin{equation}\n h \\sim h_0(t,T) + \\frac{1}{\\omega}h_1(t,T) + \\cdots\\,, \\quad \\theta \\sim \\theta_0(t,T) + \\frac{1}{\\omega}\\theta_1(t,T) + \\cdots\\,.\n\\end{equation}\nTransforming \\cref{eq: no slip: full system} via \\cref{eq: no slip: time transform} and inserting these asymptotic expansions gives the $\\bigO{\\omega}$ balance simply as\n\\begin{equation}\n \\pdiff{h_0}{T} = 0\\,, \\quad \\pdiff{\\theta_0}{T} = 0 \\quad \\implies \\quad h_0 = h_0(t)\\,, \\quad \\theta_0 = \\theta_0(t)\\,,\n\\end{equation}\nso that the leading order solutions are independent of the fast timescale $T$. This should be expected, as the forcing of the system is strictly $\\bigO{1}$, so that the dominant contribution to the evolution occurs on the long timescale $t$.\n\nAt the next asymptotic order we pick up the $\\bigO{1}$ forcing, and we have\n\\begin{subequations}\\label{eq: no slip: order unity system}\n\\begin{align}\n \\diff{h_0}{t} + \\pdiff{h_1}{T} &= \\frac{3p(T)}{16h_0^2}(1 + 3\\cos{2\\theta_0}) + u(T)\\cos{\\theta_0}\\,,\\\\\n \\diff{\\theta_0}{t} + \\pdiff{\\theta_1}{T} &= \\frac{3p(T)\\sin{2\\theta_0}}{64h_0^3}\\left[4 + B(T)(3 + \\cos{2\\theta_0})\\right]\\,,\n\\end{align}\n\\end{subequations}\nwriting $t$-derivatives of $h_0$ and $\\theta_0$ as proper due to their established independence from $T$. The appropriate solvability conditions for this first-order system are obtained by averaging the equations over a period in $T$ and imposing periodicity in $T$, equivalent to the Fredholm Alternative Theorem for this system \\cite{Bender1999}. To do so, we assume, without loss of generality, that the period of the fast oscillations is $2\\pi$, defining the averaging operator $\\avg{\\cdot}$ via\n\\begin{equation}\\label{eq: no slip: averaging operator}\n \\avg{a} = \\frac{1}{2\\pi}\\int_0^{2\\pi}a(T)\\mathop{}\\!\\mathrm{d}{T}\\,.\n\\end{equation}\nComputing the average of \\cref{eq: no slip: order unity system}, we arrive at\n\\begin{subequations}\\label{eq: no slip: systematic model}\n\\begin{align}\n \\diff{h_0}{t} &= \\frac{3\\avg{p}}{16h_0^2}(1 + 3\\cos{2\\theta_0}) + \\avg{u}\\cos{\\theta_0}\\,,\\\\\n \\diff{\\theta_0}{t} &= \\frac{3\\avg{p}\\sin{2\\theta_0}}{64h_0^3}\\left[4 + \\frac{\\avg{pB}}{\\avg{p}}(3 + \\cos{2\\theta_0})\\right]\\,,\n\\end{align}\n\\end{subequations}\nwith the imposed periodicity eliminating the fast-time derivatives. Comparing these leading order differential equations with those of \\cref{eq: no slip: full system}, we see that we have essentially replaced the parameters $p$, $u$, and $B$ with the effective parameters $\\avg{p}$, $\\avg{u}$, and $\\avg{pB}\/\\avg{p}$, the precise forms of which have arisen through our brief, systematic analysis. Whilst the modifications to $p$ and $u$ are as might be naively expected, the effective shape constant $\\avg{pB}\/\\avg{p}$ is perhaps less obvious at first glance, with one perhaps expecting the average parameter $\\avg{B}$. 
Indeed, with these authors having previously been guilty of employing such parameters in back-of-the-envelope calculations, we refer to this model as the \\emph{\\textit{a priori}-averaged model}, given explicitly by \n\\begin{subequations}\\label{eq: no slip: a priori model}\n\\begin{align}\n \\diff{h^a}{t} &= \\frac{3\\avg{p}}{16h^2}\\left(1 + 3\\cos{2\\theta}\\right) + \\avg{u} \\cos{\\theta}\\,,\\\\\n \\diff{\\theta^a}{t} &= \\frac{3\\avg{p}\\sin{2\\theta}}{64h^3}\\left[4 + \\avg{B}(3 + \\cos{2\\theta})\\right]\\,,\n\\end{align}\n\\end{subequations}\nusing a superscript of $a$ to denote the solutions of the \\textit{a priori}-averaged system. This model makes use of the averaged parameters $\\avg{p}$, $\\avg{u}$, and $\\avg{B}$ in place of the rapidly oscillating quantities.\n\nThough an elementary observation, it is worth highlighting that, without any additional assumptions on $p(T)$ and $B(T)$, it is in general not the case that $\\avg{pB}\/\\avg{p} = \\avg{B}$, so that we should expect to observe differences between the systematically determined, leading-order dynamics of \\cref{eq: no slip: systematic model} and those of the \\textit{a priori}-averaged model of \\cref{eq: no slip: a priori model}. In what follows, through a brief consideration of the angular evolution equations, we will highlight how these differences can be more than simply quantitative.\n\nFocussing on the angular dynamics, we specifically consider the abstracted scalar autonomous ODE\n\\begin{equation}\\label{eq: no slip: scalar ode}\n \\diff{x}{t} = f(x;\\alpha,\\beta)\\,,\n\\end{equation}\nwhere\n\\begin{equation}\\label{eq: no slip: f}\n f(x;\\alpha,\\beta) \\coloneqq \\frac{3\\alpha\\sin{2x}}{64h^3}\\left[4 + \\beta(3 + \\cos{2x})\\right]\\,,\n\\end{equation}\nso that $(\\alpha,\\beta) = (\\avg{p},\\avg{B})$ corresponds to the \\textit{a priori}-averaged model, whilst $(\\alpha,\\beta) = (\\avg{p},\\avg{pB}\/\\avg{p})$ gives the leading order, systematically averaged dynamics. Note that, for the purposes of a stability analysis of the angular dynamics, we can treat the swimmer separation from the boundary as a positive parameter, abusing notation and generically writing $h$ in the denominator of \\cref{eq: no slip: f}, without materially modifying a steady state analysis of the angular dynamics.\n\n\\subsection{Exploring the autonomous dynamics}\nThe fixed points of \\cref{eq: no slip: scalar ode} are readily seen to be $x = n\\pi$, $x=\\pi\/2 + n\\pi$, and solutions of $4 + \\beta(3 + \\cos{2x})=0$ (if they exist), for $n\\in\\mathbb{Z}$. Notably, if $\\beta\\in(-1,1)$, as is the case in the \\textit{a priori}-averaged model, then the only steady states are at integer multiples of $\\pi\/2$. Focussing on these steady states, their linear stability is given by\n\\begin{equation}\\label{eq: no slip: simple steady states}\n x = \\left\\{\\begin{array}{lr}\n n\\pi & \\text{ is stable } \\iff\\alpha(1+\\beta) < 0\\,,\\\\\n \\pi\/2 + n\\pi & \\text{ is stable } \\iff\\alpha(2+\\beta) > 0\\,.\\\\\n \\end{array}\\right.\n\\end{equation}\nHence, for $\\beta\\in(-1,1)$, the stability of the steady states is determined solely by the sign of $\\alpha$, with $\\alpha>0$ giving rise to unstable states at $x=n\\pi$ and stable states at $x=\\pi\/2+n\\pi$. 
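Although the analysis that follows is purely analytical, the averaged angular dynamics are also easily checked numerically. The short Python sketch below (using SciPy) integrates the angular equation of the full non-autonomous system at fixed $h$ and compares its long-time behaviour with that of \\cref{eq: no slip: scalar ode} for the two choices of $\\beta$; the particular forms of $p$ and $B$, the initial condition, and the integration horizon are illustrative choices only, anticipating the example returned to at the end of this section.
\\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

h, omega, A = 1.0, 100.0, -1.5           # fixed separation, frequency
p = lambda T: 4.0 * A * np.sin(T) + 1.0  # <p> = 1
B = lambda T: 0.5 * np.sin(T)            # <B> = 0, <pB>/<p> = A

def rhs_full(t, th):                     # full non-autonomous dynamics
    T = omega * t
    return (3.0 * p(T) * np.sin(2.0 * th) / (64.0 * h**3)
            * (4.0 + B(T) * (3.0 + np.cos(2.0 * th))))

def rhs_avg(t, th, beta):                # autonomous dynamics, f(x; 1, beta)
    return (3.0 * np.sin(2.0 * th) / (64.0 * h**3)
            * (4.0 + beta * (3.0 + np.cos(2.0 * th))))

th0, t_end = 0.5, 200.0
full = solve_ivp(rhs_full, (0, t_end), [th0], max_step=0.005).y[0, -1]
syst = solve_ivp(rhs_avg, (0, t_end), [th0], args=(A,)).y[0, -1]
apri = solve_ivp(rhs_avg, (0, t_end), [th0], args=(0.0,)).y[0, -1]

# The full and systematically averaged dynamics approach theta = 0 here,
# whereas the a priori-averaged dynamics instead approach pi/2.
print(full, syst, apri)
\\end{verbatim}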
Identifying $\\alpha$ with the signed dipole strength, this is precisely in line with the classical analysis of pusher and puller swimmers via the \\textit{a priori}-averaged model, as summarised by \\citet{Lauga2009}, with pushers and pullers corresponding to $\\alpha > 0$ and $\\alpha < 0$, respectively.\n\n\\begin{figure}\n \\centering\n \\begin{overpic}[permil,width=0.7\\textwidth]{figs\/no_slip_dynamics\/no_slip_dynamics.png}\n \\put(100,140){$\\beta<-2$}\n \\put(425,140){$-2<\\beta<-1$}\n \\put(810,140){$\\beta>-1$}\n \\put(0,300){(a)}\n \\put(350,300){(b)}\n \\put(700,300){(c)}\n \\put(115,300){$\\theta=0$}\n \\put(-30,140){$\\frac{\\pi}{2}$}\n \\put(136,-27){$\\pi$}\n \\put(293,140){$\\frac{3\\pi}{2}$}\n \\end{overpic}\n \\caption{Steady states and stability of angular dynamics for the autonomous system of \\cref{eq: rollers: abstract autonomous system} are shown as dynamics on a circle, for $\\alpha>0$ fixed and for various values of $\\beta$. Swimming parallel to the boundary corresponds to states with $\\theta=\\pi\/2$, $3\\pi\/2$, whilst $\\theta=0$, $\\pi$ corresponds to swimming aligned with the normal to the boundary. (a) With $\\beta < -2$, the system evolves to a steady state with $x=n\\pi$, $n\\in\\mathbb{Z}$. (b) For $-2<\\beta<-1$, $x=n\\pi\/2$ are stable for all $n\\in\\mathbb{Z}$, with unstable configurations present between these attractors. (c) For $\\beta > -1$, the system evolves to a steady state with $x=\\pi\/2 + n\\pi$, for $n\\in\\mathbb{Z}$. Stable states are shown as solid points, whilst unstable points are shown hollow. Stabilities for $\\alpha<0$ are obtained by reversing the illustrated dynamics.}\n \\label{fig: no slip: dynamics}\n\\end{figure}\n\nHowever, if $\\beta<-1$, the profile of stability can change significantly. Bifurcations at $\\beta=-1$ and $\\beta=-2$ see the creation and destruction of additional steady states (the solutions of $4 + \\beta(3 + \\cos{2x})=0$) accompanied by changes in stability of the steady states at $x=n\\pi$ and $x=\\pi\/2 + n\\pi$, respectively. When they exist, the additional states have the opposite stability to the other steady states, so that they are stable for $\\alpha<0$. For $\\beta<-2$, the equation defining the additional steady states admits no real solutions, so that these steady states cease to exist and the angular equilibria are the same as for $\\beta>-1$, though with opposite linear stabilities. Each of these dynamical regimes is illustrated in \\cref{fig: no slip: dynamics} for $\\alpha>0$, highlighting a strong dependence of the dynamics on $\\beta$. The linear stability of each state is flipped upon taking $\\alpha<0$.\n\n\n\\subsection{Comparing the emergent dynamics}\nThe elementary analysis of the previous section highlights how qualitative changes in the globally attractive behaviour of the model depend strongly on the parameter $\\beta$. However, the predictions of the \\textit{a priori}-averaged model are simple: if the swimmer is a pusher, with $\\alpha=\\avg{p}>0$, then the states $\\theta^a=n\\pi$ are unstable, and the swimmer instead evolves to a state where $\\theta^a = \\pi\/2 + n\\pi$ and swims parallel to the boundary. If the swimmer is a puller, with $\\alpha=\\avg{p}<0$, then the swimmer instead evolves to $\\theta^a=n\\pi$, thereafter moving perpendicular to the boundary.\n\nHowever, the predictions of the systematically averaged model are more complex. 
Whilst switching the sign of $\\avg{p}$ still switches the stability of each steady state, the value of $\\beta = \\avg{pB}\/\\avg{p}$, which need not be smaller than unity in magnitude, can materially alter both the steady states in existence and the stability of the states that correspond to parallel and perpendicular swimming. For instance, if $\\alpha=\\avg{p}>0$ and $\\beta\\in(-2,-1)$, both the $\\theta_0=n\\pi$ and the $\\theta_0=\\pi\/2+n\\pi$ states are linearly stable, accompanied by four steady states in the range $\\theta_0\\in(0,2\\pi)$ that are unstable; for $\\alpha=\\avg{p}<0$, the stability of each state is swapped. Hence, swimmers with $\\avg{p}<0$ and $\\avg{pB}\/\\avg{p}\\in(-2,-1)$ will evolve to a steady state that is not a multiple of $\\pi\/2$; in other words, they will neither align parallel nor perpendicular to the boundary, a behaviour that is never predicted by the \\textit{a priori}-averaged model in any admissible parameter regime.\n\nAs an explicit illustration of how the two models can qualitatively differ, we take $p(T) = 4A\\sin{T} + 1$ and $B(T) = \\sin{(T)}\/2$, so that $\\avg{p} = 1$, $\\avg{B}=0$, and $\\avg{pB}\/\\avg{p} = A$. The \\textit{a priori}-averaged model predicts the dynamics shown in \\cref{fig: no slip: dynamics}c for all values of $A$, whilst the systematically averaged dynamics follow \\cref{fig: no slip: dynamics}b for $A\\in(-2,-1)$ and \\cref{fig: no slip: dynamics}a for $A<-2$. Fixing $A=-3\/2$, the temporal evolution of both of these models is shown in \\cref{fig: no slip: example}, along with a numerical solution to angular dynamics of the full system of \\cref{eq: no slip: full system}. Here, we have fixed $h>0$ as a parameter, recalling that the swimmer separation serves only to modify the rate at which the angular dynamics approach a steady state. In agreement with our analysis, the \\textit{a priori}-averaged model incorrectly predicts the qualitative evolution of the model swimmer, whilst the leading-order, systematically averaged model is in agreement with the full numerical solution.\n\nDespite these general qualitative differences, it should be noted that there are parameter regimes in which the dynamics are qualitatively indistinct between the models. For instance, suppose that we are in a regime where $\\beta=\\avg{pB}\/\\avg{p}\\not\\in(-2,-1)$, so that the only steady states are those given in \\cref{eq: no slip: simple steady states}. Then, if $p(T)$ is of fixed sign for all $T$, so that the swimmer is unambiguously a pusher or a puller, it is simple to note that $\\avg{p} + \\avg{pB} =\\int_0^{2\\pi}p(T)[1+B(T)]\\mathop{}\\!\\mathrm{d}{T}\/2\\pi$ has the same sign as $\\avg{p}$, recalling that $B\\in(-1,1)$. Similarly, $2\\avg{p} + \\avg{pB}$ has the same sign as $\\avg{p}$, so that the long-time behaviour is completely determined by the sign of $\\avg{p}$, following \\cref{eq: no slip: simple steady states}.\n\n\\begin{figure}\n \\centering\n \\includegraphics[width=0.3\\textwidth]{figs\/no_slip_example\/no_slip_example.png}\n \\caption{Angular evolution of a swimmer at fixed separation from a no-slip boundary. The prediction of the \\textit{a priori}-averaged model can be seen to not align with the prediction of the systematically averaged model of \\cref{eq: no slip: systematic model} or the dynamics of the full model of \\cref{eq: no slip: original system}, with qualitatively distinct steady configurations from the same initial condition. 
Rapid oscillations in the solution full dynamics are not visible at the resolution of this plot. Here, we have fixed $h=1$, $\\omega = 100$, and taken $p(T) = -6\\sin{(T)} + 1$ and $B(T) = \\sin{(T)}\/2$.}\n \\label{fig: no slip: example}\n\\end{figure}\n\n\\section{Interacting dipole rollers}\\label{sec: rollers}\n\\subsection{Model equations}\\label{sec: rollers: model eqs}\nConsider a pair of particles in the plane that are pinned in place in the laboratory frame, so that they are free to rotate in the plane but unable to translate. The particles, which we term \\emph{rollers}, are assumed to interact through dipolar flow fields, with vector dipole strengths $\\vec{p}_1$ and $\\vec{p}_2$, respectively, which are both assumed to lie in the plane containing the particles. Taking $\\{\\e_x,\\e_y\\}$ to be an orthogonal basis for the laboratory frame that spans this plane, we define the orientation of the $i$\\textsuperscript{th} particle, $i\\in\\{1,2\\}$, via the angle $\\theta_i$ between a body-fixed axis and $\\e_x$, so that the direction of the roller can be captured as $\\d_i = \\cos{\\theta_i}\\e_x + \\sin{\\theta_i}\\e_y$. Analogously to the previous section, we assume that the vector dipole strength of each particle is aligned with $\\d_i$, so that we can write $\\vec{p}_i = p_i\\d_i$, where $p_i$ is the scalar dipolar strength and may take any sign. We further assume that $p_1=p_2$, so that the dipoles are of equal strength, though we remark that retaining generality is straightforward but notationally cumbersome.\n\\begin{figure}\n \\centering\n \\includegraphics[width=0.4\\textwidth]{figs\/roller_setup\/roller_setup.png}\n \\caption{Geometry of fixed-position dipole rollers. Each roller is pinned in place at its centre, but is free to rotate, driven by the dipolar flow generated by the other. Their orientation is captured by their directors $\\d_1$ and $\\d_2$, parameterised by $\\theta_1$ and $\\theta_2$, respectively. The axis of each roller's dipolar flow field is assumed to be directed along their orientation vector.}\n \\label{fig: roller setup}\n\\end{figure}\nWithout loss of generality, we assume that the displacement of particle $1$ from particle $2$ is $r\\e_x$, where $r>0$ is the distance between them. In this setting, illustrated in \\cref{fig: roller setup} and following \\citet[p. 295]{Lauga2020}, the effects of each particle's dipolar flow on the orientation of the other particle are captured by the following dimensionless coupled system of ODEs:\n\\begin{subequations}\\label{eq: rollers: full system}\n\\begin{align}\n \\diff{\\theta_1}{t} &= f(\\theta_1,\\theta_2;p,B)\\,,\\\\\n \\diff{\\theta_2}{t} &= f(\\theta_2,\\theta_1;p,B)\\,,\n\\end{align}\n\\end{subequations}\nwhere\n\\begin{equation}\\label{eq: rollers: def f}\n f( x,y;\\alpha,\\beta) \\coloneqq -\\frac{3\\alpha}{r^3}\\left(\\sin{ y}\\cos{ y} + \\beta\\left[\\sin{ x}\\cos{ x}\\left(1 - 5\\cos^2{ y}\\right) + \\cos{ y}\\sin{(2 x- y)}\\right]\\right)\\,,\n\\end{equation}\ndistinct from the notation of the previous section. Here, $B$ is once again the shape-capturing parameter of \\citet{Bretherton1962}, and we assume that both of the particles are associated with the same shape parameter. This assumption can be relaxed at the expense of notational convenience.\n\nAs with the previous model, the standard approach would be to assume that $p$ and $B$ are constant, so that the above system is autonomous. 
Here, we take $p=p(T)$ and $B=B(T)$, interpreting the constant-parameter model as the \\textit{a priori}-averaged model, as above. Hence, we study the non-autonomous system\n\\begin{subequations}\\label{eq: rollers: non-autonomous system}\n\\begin{align}\n \\diff{\\theta_1}{t} &= f(\\theta_1,\\theta_2;p(\\omega t),B(\\omega t)) \\,,\\\\\n \\diff{\\theta_2}{t} &= f(\\theta_2,\\theta_1;p(\\omega t), B(\\omega t))\\,,\n\\end{align}\n\\end{subequations}\nwith $\\omega\\gg1$ and all other quantities being $\\bigO{1}$ as $\\omega\\to\\infty$. It should be noted that, whilst we study this system as a simple extension of an established model, these equations can be rigorously derived by considering the dynamics of shape-changing spheroids, subject to the far-field approximation and the assumption that their instantaneous deformation is both force- and torque-free. This derivation is somewhat elementary and follows the approach given in \\citet{Gaffney2022b}, from which this model can also be extended to accommodate more general deformations through the addition of uncomplicated terms. Here, seeking simplicity and clarity, we pursue the model given in \\cref{eq: rollers: non-autonomous system}.\n\n\\subsection{Multi-scale analysis}\nThe multi-scale analysis of this problem proceeds entirely analogously to that of the previous section, but is reproduced here in detail in the interest of clarity. As in \\cref{sec: no slip}, we formally introduce the fast timescale $T=\\omega t$, transforming the proper time derivative as\n\\begin{equation}\\label{eq: rollers: time transform}\n \\diff{}{t} \\mapsto \\pdiff{}{t} + \\omega \\pdiff{}{T}\\,.\n\\end{equation}\nWe seek an asymptotic expansion of $\\theta_1$ and $\\theta_2$ in inverse powers of $\\omega$, which we write as\n\\begin{equation}\\label{eq: rollers: expansions}\n \\theta_1 \\sim \\theta_{1,0}(t,T) + \\frac{1}{\\omega}\\theta_{1,1}(t,T) + \\cdots\\,, \\quad \\theta_2 \\sim \\theta_{2,0}(t,T) + \\frac{1}{\\omega}\\theta_{2,1}(t,T) + \\cdots\\,.\n\\end{equation}\nTransforming \\cref{eq: rollers: non-autonomous system} via \\cref{eq: rollers: time transform} and inserting the expansions of \\cref{eq: rollers: expansions}, at $\\bigO{\\omega}$ we have\n\\begin{equation}\n \\pdiff{\\theta_{1,0}}{T} = 0\\,, \\quad \\pdiff{\\theta_{2,0}}{T} = 0 \\quad \\implies \\quad \\theta_{1,0} = \\theta_{1,0}(t)\\,, \\quad \\theta_{2,0} = \\theta_{2,0}(t)\\,,\n\\end{equation}\nso that the leading order solutions for $\\theta_1$ and $\\theta_2$ are independent of the fast timescale and, thus, are functions only of $t$. As in \\cref{sec: no slip}, this arises due to the forcing of the ODEs being $\\bigO{1}$, so that the forcing doesn't contribute at $\\bigO{\\omega}$. At $\\bigO{1}$, we have\n\\begin{subequations}\\label{eq: rollers: first-order system}\n\\begin{align}\n \\diff{\\theta_{1,0}}{t} + \\pdiff{\\theta_{1,1}}{T} &= f(\\theta_{1,0},\\theta_{2,0};p(T), B(T)) \\,,\\label{eq: rollers: first-order system: theta1}\\\\\n \\diff{\\theta_{2,0}}{t} + \\pdiff{\\theta_{2,1}}{T} &= f(\\theta_{2,0},\\theta_{1,0};p(T), B(T))\\,,\\label{eq: rollers: first-order system: theta2}\n\\end{align}\n\\end{subequations}\nusing the fact that $\\theta_{1,0}$ and $\\theta_{2,0}$ are independent of $T$ to write their time derivatives as total derivatives. As in \\cref{sec: no slip}, imposing periodicity and averaging over a period in $T$ closes the system of PDEs at this order. 
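Here, the key observation is that the periodicity of $\\theta_{1,1}$ and $\\theta_{2,1}$ in $T$ removes the unknown first-order terms upon averaging: over one period $T_p$ of the parameter oscillation,\n\\begin{equation}\n \\frac{1}{T_p}\\int_0^{T_p} \\pdiff{\\theta_{1,1}}{T} \\mathop{}\\!\\mathrm{d}{T} = \\frac{\\theta_{1,1}(t,T_p) - \\theta_{1,1}(t,0)}{T_p} = 0\\,,\n\\end{equation}\nwith the same holding for $\\theta_{2,1}$. 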
Without loss of generality, we assume that the period of oscillations of $p$ and $B$ is $2\\pi$, and we recall the averaging operator $\\avg{\\cdot}$ from \\cref{eq: no slip: averaging operator} as\n\\begin{equation}\\label{eq: rollers: avg}\n \\avg{a} = \\frac{1}{2\\pi}\\int_0^{2\\pi} a(T)\\mathop{}\\!\\mathrm{d}{T}\\,.\n\\end{equation}\nTo compute the average of \\cref{eq: rollers: first-order system} in $T$, it is instructive to consider the dependence of $f$ on its arguments explicitly. To that end, we explicitly compute the average of \\cref{eq: rollers: first-order system: theta1} as\n\\begin{equation}\\label{eq: rollers: explicit average}\n \\diff{\\theta_{1,0}}{t} = -\\frac{3}{r^3}\\left(\\avg{p}\\sin{\\theta_{2,0}}\\cos{\\theta_{2,0}} + \\avg{pB}\\left[\\sin{\\theta_{1,0}}\\cos{\\theta_{1,0}}\\left(1 - 5\\cos^2{\\theta_{2,0}}\\right) + \\cos{\\theta_{2,0}}\\sin{(2\\theta_{1,0}-\\theta_{2,0})}\\right] \\right)\n\\end{equation}\nwith the average of \\cref{eq: rollers: first-order system: theta2} following similarly. Comparing the right-hand side of \\cref{eq: rollers: explicit average} with the definition of $f$ in \\cref{eq: rollers: def f}, we identify the systematically averaged governing equations with those of the original system:\n\\begin{subequations}\\label{eq: rollers: effective eqs}\n\\begin{align}\n \\diff{\\theta_{1,0}}{t} & = f(\\theta_{1,0}, \\theta_{2,0}; \\avg{p}, \\avg{pB} \/ \\avg{p})\\,,\\\\\n \\diff{\\theta_{2,0}}{t} & = f(\\theta_{2,0},\\theta_{1,0}; \\avg{p}, \\avg{pB} \/ \\avg{p})\\,.\n\\end{align}\n\\end{subequations}\nHence, in order to understand the leading order behaviour of the non-autonomous system of \\cref{eq: rollers: non-autonomous system}, it is sufficient to explore the autonomous system\n\\begin{subequations}\\label{eq: rollers: abstract autonomous system}\n\\begin{align}\n \\diff{x}{t} &= f(x,y;\\alpha,\\beta)\\,,\\\\\n \\diff{y}{t} &= f(y,x;\\alpha,\\beta)\\,,\n\\end{align}\n\\end{subequations}\nidentifying $x$ and $y$ with the leading order solutions for $\\theta_1$ and $\\theta_2$, respectively. \n\nAs noted in \\cref{sec: rollers: model eqs}, the commonplace model presented by \\citet{Lauga2020} makes use of constant parameters in place of $p$ and $B$ in \\cref{eq: rollers: full system}, which we interpret as the \\textit{a priori} averages of $p(T)$ and $B(T)$ for flow-generating particles. In symbols, when interpreted in this way, the model of \\citet{Lauga2020} is equivalent to taking $(\\alpha,\\beta) = (\\avg{p},\\avg{B})$ in \\cref{eq: rollers: abstract autonomous system}. In line with having taken these model-agnostic averages of the parameters, we refer to this model as the \\emph{\\textit{a priori}-averaged model}, which we state explicitly as\n\\begin{subequations}\\label{eq: rollers: naive system}\n\\begin{align}\n \\diff{\\theta_1^a}{t} &= f(\\theta_1^a,\\theta_2^a;\\avg{p},\\avg{B})\\,,\\\\\n \\diff{\\theta_2^a}{t} &= f(\\theta_2^a,\\theta_1^a;\\avg{p},\\avg{B})\\,,\n\\end{align}\n\\end{subequations}\nwith the superscript of $a$ distinguishing the solution from that of the full system of \\cref{eq: rollers: full system}. In contrast, we have seen that the true leading order behaviour actually corresponds to taking $(\\alpha,\\beta) = (\\avg{p}, \\avg{pB}\/\\avg{p})$, with $\\avg{pB}\\neq\\avg{p}\\avg{B}$ in general. 
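Though not required for the analysis, this distinction is straightforward to probe numerically. The following minimal Python sketch, in which the separation, oscillation frequency, initial condition and the particular $p(T)$ and $B(T)$ are purely illustrative assumptions rather than values adopted elsewhere in this work, integrates the full non-autonomous system of \\cref{eq: rollers: non-autonomous system} alongside the two averaged systems.\n\\begin{verbatim}\nimport numpy as np\nfrom scipy.integrate import solve_ivp\n\nr, omega = 2.0, 100.0        # separation and fast frequency (illustrative)\n\ndef f(x, y, alpha, beta):    # coupling function of the roller model\n    return -(3.0 * alpha \/ r**3) * (\n        np.sin(y) * np.cos(y)\n        + beta * (np.sin(x) * np.cos(x) * (1.0 - 5.0 * np.cos(y)**2)\n                  + np.cos(y) * np.sin(2.0 * x - y)))\n\np = lambda T: 1.0 - 2.5 * np.sin(T)   # example oscillating parameters,\nB = lambda T: 0.5 + np.sin(T)         # with <p> = 1, <B> = 0.5, <pB> = -0.75\n\ndef full(t, th):             # non-autonomous system\n    T = omega * t\n    return [f(th[0], th[1], p(T), B(T)), f(th[1], th[0], p(T), B(T))]\n\ndef averaged(t, th, a, b):   # autonomous system with parameters (a, b)\n    return [f(th[0], th[1], a, b), f(th[1], th[0], a, b)]\n\nth0, t_span = [np.pi \/ 8.0, 0.0], (0.0, 60.0)\nsol_full = solve_ivp(full, t_span, th0, max_step=2e-3)\nsol_apriori = solve_ivp(averaged, t_span, th0, args=(1.0, 0.5))    # (<p>, <B>)\nsol_system = solve_ivp(averaged, t_span, th0, args=(1.0, -0.75))   # (<p>, <pB>\/<p>)\nprint(sol_full.y[:, -1], sol_apriori.y[:, -1], sol_system.y[:, -1])\n\\end{verbatim}\nFor such choices, with $\\avg{pB}\\neq\\avg{p}\\avg{B}$, the final states of the two averaged integrations generically differ, with the systematically averaged system expected to track the full solution. 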
As we have seen throughout our analysis, this difference in parameters results directly from the employed processes of averaging: one is performed independently of the dynamical system, whilst the other systematically determines the appropriate averaged parameters for this particular dynamical system. In the next two subsections, we seek to determine if these differences in employed parameters between the systematically averaged equations and the \\textit{a priori}-averaged model can result in differences in behaviour, which we establish through an elementary exploration of the autonomous dynamical system of \\cref{eq: rollers: abstract autonomous system}.\n\n\\subsection{Exploring the autonomous dynamics}\nFirst, we identify and classify the fixed points of \\cref{eq: rollers: abstract autonomous system} in terms of $\\alpha$ and $\\beta$, before returning to consider the particular parameter combinations of the previous section. The dynamical system of \\cref{eq: rollers: abstract autonomous system} can be explored via standard methods with relative ease, so we refrain from providing a full and detailed account of the analysis. It is worth noting that, due to the periodicity of the trigonometric functions in $f$, the forcing of the dynamics is periodic with period $\\pi$ in both $x$ and $y$, so that we only need to characterise the dynamics up to multiples of $\\pi$. This periodicity, along with a visual summary of the analysis that follows, is illustrated in \\cref{fig: rollers: portraits,fig: rollers: fixed points}.\n\\begin{figure}\n \\centering\n \\begin{overpic}[width=0.8\\textwidth]{figs\/roller_configurations\/roller_configurations.png}\n \\put(0,50){(a)}\n \\put(56,50){(b)}\n \\put(0,25){(c)}\n \\put(56,25){(d)}\n \\end{overpic}\n \\caption{Fixed points of the two-roller system for $\\alpha>0$. Up to symmetry and $\\pi$-periodicity, the identified fixed points correspond to four distinct configurations whose stability depends on $\\alpha$ and $\\beta$ nontrivially. Here, we report the stability for $\\alpha>0$, corresponding to a swimmer with $\\avg{p}>0$ in \\cref{eq: rollers: effective eqs}, which we might refer to as a pusher-type particle. The dynamics of a puller-type particle, with $\\alpha<0$, correspond to reversing the direction of motion from $\\alpha>0$, swapping the stability of non-saddle fixed points. (a) Parallel alignment along the direction of separation is stable if $\\beta<-1$ and unstable for $\\beta>-1$. (b) Orthogonal alignment, with one particle pointing directly towards the other, is stable only if $\\beta>0$. (c) Parallel alignment that is perpendicular to the relative displacement is only stable if $\\beta<-1\/2$. (d) Steady parallel alignment that is neither parallel nor perpendicular to the relative displacement is unstable when it exists, which is for $\\beta < -1\/2$ or $\\beta>1\/3$.}\n \\label{fig: rollers: fixed points}\n\\end{figure}\n\nIt is simple to identify fixed points along the manifolds $x=y$ and $x=-y$, seeking solutions of\n\\begin{equation}\n \\frac{3\\alpha}{2r^3}\\sin{2x}\\left(1 + \\beta\\left[2-5\\cos^2{x}\\right]\\right) = 0 \\quad \\text{ and } \\quad \\frac{3\\alpha}{2r^3}\\sin{2x}\\left(1 + \\beta\\cos^2{x}\\right) = 0\\,,\n\\end{equation}\nrespectively. For non-zero $\\alpha$, these both admit solutions $x=n\\pi\/2$, $n\\in\\mathbb{Z}$, whilst the former admits the additional solutions of $1+\\beta\\left[2-5\\cos^2{x}\\right]=0$ whenever $\\beta \\leq -1\/2$ or $\\beta \\geq 1\/3$. 
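The quoted range of $\\beta$ follows from rearranging this last condition as\n\\begin{equation}\n \\cos^2{x} = \\frac{1+2\\beta}{5\\beta}\\,,\n\\end{equation}\nwhich admits real solutions with $\\cos^2{x}\\in[0,1]$ precisely when $\\beta \\leq -1\/2$ or $\\beta \\geq 1\/3$. 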
There are additional steady states on the $x=-y$ manifold that satisfy $1+\\beta\\cos^2{x}=0$, which exist whenever $\\beta\\leq-1$. Further, we note the existence of additional fixed points on the manifold $x + y = \\pi\/2$, which again correspond to $x=n\\pi\/2$ for $n\\in\\mathbb{Z}$. Hence, the fixed points of the system are given by $(x,y)\\in\\{(0,0),(\\pi\/2,0),(0,\\pi\/2),(\\pi\/2,\\pi\/2)\\}$, up to periodicity, in addition to solutions of $1+\\beta\\left[2-5\\cos^2{x}\\right]=0$ on the $x=y$ manifold and solutions of $1+\\beta\\cos^2{x}=0$ on the $x=-y$ manifold. The fixed points and their readily computed linear stabilities are summarised in \\cref{tab: rollers: stab} in \\cref{app: rollers: stab}, with the steady configurations interpreted in terms of the particles in \\cref{fig: rollers: fixed points}. \n\nWe illustrate the overall dynamics in various parameter regimes through phase portraits in \\cref{fig: rollers: portraits}, capturing the full range of qualitatively distinct behaviours that emerge from \\cref{eq: rollers: abstract autonomous system}. Noting that $\\alpha$ plays only a simple role in the dynamics, with changing the sign of $\\alpha$ simply reversing the direction of evolution, we fix $\\alpha>0$ in \\cref{fig: rollers: portraits}, focussing instead on the impact of varying $\\beta$. From these portraits, it is clear that changing the value of $\\beta$ can have a drastic effect on the dynamical system. For instance, $\\beta$ crossing the thresholds of $-1\/2$ and $1\/3$ modifies the character of the phase plane through the emergence or destruction of saddle points and nodes, accompanied by qualitative changes in phase-plane trajectories. Though there are multiple further bifurcations, a notable switch in stability occurs when crossing $\\beta=0$, with $\\beta=0$ corresponding to an integrable system with truly closed orbits \\footnote{Excluding heteroclinic trajectories.} that bifurcate into stable and unstable spirals either side of the bifurcation point.\n\nThese local bifurcations, though effecting changes in linear stability, also give rise to changes in the globally attracting dynamics of the system. This drastic alteration to the overall behaviour is illustrated via the sample trajectories highlighted in blue in \\cref{fig: rollers: portraits}, which either approach closed, heteroclinic connections in \\cref{fig: rollers: portraits}d or the fixed point at the centre of stable spirals in \\cref{fig: rollers: portraits}f and \\cref{fig: rollers: portraits}g, for instance.\n\n\\begin{figure}\n \\centering\n \\begin{overpic}[permil,width=\\textwidth]{figs\/all_phase_portraits\/all_phase_portraits.png}\n \\put(10,557){(a)}\n \\put(273,557){(b)}\n \\put(520,557){(c)}\n \\put(760,557){(d)}\n \\put(10,282){(e)}\n \\put(273,282){(f)}\n \\put(520,282){(g)}\n \\put(760,282){(h)}\n \\end{overpic}\n \\caption{Diverse phase portraits of the autonomous system of \\cref{eq: rollers: abstract autonomous system}. Varying $\\beta$, we illustrate the qualitatively distinct portraits of the autonomous dynamics, with stable and unstable fixed points shown as solid and empty circles, respectively. Moving from (a) to (b), we identify the transition of $(0,0)$ from a stable node to a saddle point via the coalescence of two saddles on the $x=-y$ manifold; following this transition, the parallel alignment of \\cref{fig: rollers: fixed points}c is the globally attracting state. 
Moving from (b) to (c) and from (h) to (g) transitions stable\/unstable nodes at $(0,\\pi\/2)$ and $(\\pi\/2,0)$ to spirals of the same stability. Significant qualitative changes occur through bifurcations of the saddles at $(0,0)$ and $(\\pi\/2,\\pi\/2)$, each into a node and two saddles, visible between panels (c) and (d) and panels (f) and (g). The blue trajectories in each panel, which have a common initial condition, highlight the qualitative change in behaviour that can result from changing $\\beta$, potentially altering the dynamics from the almost-heteroclinic orbits of panel (d) to the distinct stable attractors of panels (c) and (f), for instance. Here, we have fixed $\\alpha>0$, noting that $\\alpha<0$ corresponds purely to a reversal of the dynamics.}\n \\label{fig: rollers: portraits}\n\\end{figure}\n\n\\subsection{Comparing the emergent dynamics}\nFrom the above exploration of the autonomous system of \\cref{eq: rollers: abstract autonomous system}, it is clear that changes to the parameter $\\beta$ can have significant qualitative impacts on the emergent dynamics. In this subsection, we showcase how adopting $(\\alpha,\\beta) = (\\avg{p},\\avg{B})$ in the \\textit{a priori}-averaged model can give rise to predictions that are wholly different to those of the systematically motivated parameters $(\\alpha,\\beta)=(\\avg{p},\\avg{pB}\/\\avg{p})$.\n\nBefore we exemplify such cases, it is worth highlighting that certain classes of $p(T)$ and $B(T)$ trivially result in $\\avg{pB}\/\\avg{p}=\\avg{B}$, so that the dynamics associated with the \\textit{a priori} and systematically averaged models are identical to leading order. This equality trivially holds if at least one of $p(T)$ or $B(T)$ is constant, with $\\avg{pB}=\\avg{p}\\avg{B}$ in this case. Hence, if the rollers can be associated with a constant dipole strength, or if their Bretherton parameters do not change in time, then we can naively average any remaining fast-time dependencies before inserting them into the model and achieve the correct leading order behaviour. Further, if we are concerned only with the eventual configuration of the rollers, and not the details of any transient dynamics, we can identify additional $p(T)$ and $B(T)$ that can be \\textit{a priori}-averaged without consequence. For instance, if $p(T)>0$ and $B(T)>0$ for all $T$, then we have $\\avg{pB}\/\\avg{p}>0$ and $\\avg{B}>0$, so that $\\beta > 0$ in both cases and the configuration shown in \\cref{fig: rollers: fixed points}b is globally attracting, up to symmetry, as can be deduced from \\cref{fig: rollers: portraits}f-h. These particular constraints are compatible, for example, with requiring that the particle be spheroidal, always prolate, and consistently generating a dipolar flow field that can be associated with a hydrodynamic pusher.\n\nFor more general $p(T)$ and $B(T)$, however, it is clear that we cannot guarantee that the dynamics predicted by the \\textit{a priori}-averaged model of \\cref{eq: rollers: naive system} will be at all reminiscent of the leading order systematically averaged dynamics of \\cref{eq: rollers: effective eqs}. Seeking a minimal example in order to highlight this general observation, we take $p(T)=2A\\sin{T} + 1$ and $B(T) = \\sin{T} + D$. Clearly, $\\avg{p}=1$ and $\\avg{B}=D$, so that the choice of $D$ uniquely determines the panel of \\cref{fig: rollers: portraits} that corresponds to the dynamics of the \\textit{a priori}-averaged model. 
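Indeed, for these choices the relevant average is immediate:\n\\begin{equation}\n \\avg{pB} = \\frac{1}{2\\pi}\\int_0^{2\\pi} \\left(2A\\sin{T} + 1\\right)\\left(\\sin{T} + D\\right)\\mathop{}\\!\\mathrm{d}{T} = 2A\\avg{\\sin^2{T}} + D = A + D\\,,\n\\end{equation}\nsince the terms linear in $\\sin{T}$ average to zero. 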
However, computing $\\avg{pB}\/\\avg{p} = A + D$ highlights that we can choose $A$ so that the systematically averaged dynamics occupy any given panel of \\cref{fig: rollers: portraits}. \n\nTo illustrate this concretely, we take $A = -5\/4$ and $D = 1\/2$ and numerically solve \\cref{eq: rollers: naive system,eq: rollers: effective eqs,eq: rollers: full system} from the same initial condition, taking $\\omega=100$, with the solutions shown in \\cref{fig: rollers: comparison}. The \\textit{a priori}-averaged system shown in \\cref{fig: rollers: comparison}a corresponds to $\\beta=1\/2$, so follows the dynamics of \\cref{fig: rollers: portraits}g, with the numerical solution approaching the $(0,\\pi\/2)$ steady state, in which the rollers are perpendicular to one another. In contrast, the systematically averaged dynamics of \\cref{fig: rollers: comparison}b correspond to $\\beta = -3\/4$ and the portrait of \\cref{fig: rollers: portraits}c, evolving to the parallel steady state $(\\pi\/2,\\pi\/2)$. The numerical solution to the full problem of \\cref{eq: rollers: full system} is also shown in \\cref{fig: rollers: comparison}b, highlighting good agreement with the asymptotic solution and evidencing the potential for disparity between the predictions of the \\textit{a priori}-averaged model and the dynamics of temporally evolving bodies.\n\n\\begin{figure}\n \\centering\n \\begin{overpic}[width=0.8\\textwidth]{figs\/roller_comparison\/roller_comparison.png}\n \\put(3,36){(a)}\n \\put(53,36){(b)}\n \\end{overpic}\n \\caption{Comparing the results of \\textit{a priori} and systematic averaging. With fixed initial conditions of $(\\pi\/8,0)$, numerical solutions of (a) the \\textit{a priori}-averaged system of \\cref{eq: rollers: naive system}, (b) the systematically averaged dynamics of \\cref{eq: rollers: effective eqs}, and the full system of \\cref{eq: rollers: non-autonomous system} are shown as solid curves. (a) With $p(T) = -5\\sin{T}\/2 + 1$ and $B(T) = \\sin{T} + 1\/2$, the \\textit{a priori}-averaged dynamics approach the $(0,\\pi\/2)$ steady state, following the dynamics illustrated in \\cref{fig: rollers: portraits}g, as $\\beta=\\avg{B}=1\/2$. (b) The leading order, systematically derived dynamics evolve to a distinct steady state, with $\\theta_{1,0}=\\theta_{2,0} = \\pi\/2$, following the dynamics shown in \\cref{fig: rollers: portraits}c, as $\\beta = \\avg{pB}\/\\avg{p} = -3\/4$. The corresponding steady state configurations of the rollers are illustrated in the lower right corner of each panel, highlighting the qualitatively distinct behaviours predicted by the models. Here, we have taken $\\omega=100$.}\n \\label{fig: rollers: comparison}\n\\end{figure}\n\n\n\n\\section{Discussion and conclusions}\nThe use of \\textit{a priori}-averaged parameters in minimal modelling approaches can be appealing, seemingly commensurate with seeking the simplest possible model of a given setting. However, through the simple examples presented in this study, we have seen how such model-agnostic averaging can result in behavioural predictions that differ qualitatively from those of models that incorporate fast-timescale parameter oscillations, which are common to many microswimmers. Hence, the key conclusion of our simple analysis is that such a modelling approach can be unreliable even at the level of back-of-the-envelope calculations, with the intuition gained from such explorations potentially rendered invalid. 
This observation is expected to hold somewhat generically across such simple models, with our approach extending to a range of employed minimal swimmer representations.\n\nThough one might interpret the conclusion of this study as an argument against the use of minimal models of microswimming, in fact, asymptotic analysis of these models revealed that they \\emph{can} capture the leading order dynamics, but only with appropriate parameterisation. In particular, our analysis highlights that it is the use of \\textit{a priori}-averaged parameters, rather than the use of constant parameters more generally, that gives rise to inaccurate predictions. Hence, this study supports the use of minimal models in developing intuition and understanding of microswimmer systems, though only when used with systematically derived, effective parameters. \n\nFurther, we have seen how an asymptotic analysis can show that, in certain parameter regimes, one can reliably employ \\textit{a priori}-averaged parameters without qualitatively affecting the predictions of the model. However, the explorations of \\cref{sec: no slip,sec: rollers} revealed that such robust parameter regimes are far from being universal; on the contrary, we have seen that they depend strongly on the model in question. Hence, in general, a bespoke analysis is required for any given model in order to determine these regimes of serendipitous agreement.\n\nThe analysis presented in this study is simple, even elementary, relying on the commonplace method of multiple scales and the basic observation that the average of a product need not be the product of individual averages. Despite this simplicity, we have identified potential missteps in the use of the simplest models of microswimming, of which these authors have previously been guilty. Further, we have demonstrated how a straightforward, multi-timescale analysis can inform reliable, systematic parameterisation of minimal models, such that they recover their marked utility in the generation of intuition, basic understanding, and back-of-the-envelope predictions.\n\n\\section*{Acknowledgments}\nB.J.W. is supported by the Royal Commission for the Exhibition of 1851. K.I. acknowledges JSPS-KAKENHI for Young Researchers (Grant No. 18K13456), JSPS-KAKENHI for Transformative Research Areas (Grant No. 21H05309) and JST, PRESTO, Japan (Grant No. JPMJPR1921).\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction \\label{sec:intro}}\n\nThe study of $\\delta$ Scuti stars is expected to provide an\nimportant diagnostic tool for the structure and evolution of\nintermediate-mass stars at the main-sequence evolution\nand thick-shell hydrogen-burning stages. Having masses between 1.5 and\n2.5 $M_{\\odot}$, these pulsating variables develop convective\ncores during their central hydrogen-burning phases that make them\nsuitable for investigating the hydrodynamic processes occurring in\nthe core.\n\nAlthough most of the general properties of these stars are well\nunderstood, the application of seismological methods to derive their\nproperties has proved to be a difficult task. The problems\nconcerning the modelling of $\\delta$ Scuti stars have been\ndiscussed in detail in recent reviews (e.g.\\ Christensen-Dalsgaard\n\\cite{christensen}; Kjeldsen \\cite{kjeldsen}). 
The most acute seem\nto be the mode selection mechanism and the influence of rotation.\nConcerning the former, although there is fairly good agreement\nbetween the observed frequency range and that derived from\ninstability calculations (e.g.\\ Michel et al.\\ \\cite{michel1}), far\nmore modes than observed are predicted to be unstable. \nWhile it is expected that the forthcoming\nasteroseismology space mission COROT (Baglin et al.\\ \\cite{baglin})\nwill be able to disentangle the whole spectra of $\\delta$ Scuti\nstars, for the moment the\nmechanism that could cause an amplitude limitation in some\nmodes is still unknown. As a consequence, mode identification\nbased on the comparison between theoretical and observed\nfrequencies is difficult to attain in general. On the other hand,\nalthough several observational techniques for mode identification\nhave been developed in recent years (Watson \\cite{watson};\nGarrido et al.\\ \\cite{garrido}; Aerts \\cite{aerts}; Kenelly \\&\nWalker \\cite{kenelly}; Viskum et al.\\ \\cite{viskum}), in only a few\ncases have these techniques been successfully applied.\n\n\nIt is well known that most $\\delta$ Scuti stars\nare rapid rotators. This has important effects on the\nmode frequencies, both directly as a consequence of the changes that must be\nintroduced in the oscillation equations and indirectly through the changes in the\nequilibrium models.\nTherefore, successful analysis of rotating $\\delta$ Scuti stars\nrequires that rotation should be taken into account consistently at\nall levels in the analysis.\n\nDespite these problems, in recent years several \nattempts have been made to interpret the observed spectra\nof $\\delta$ Scuti stars (Goupil et al.\\ \\cite{goupil}; P\\'erez\nHern\\'andez et al.\\ \\cite{perez1}; Guzik \\& Bradley \\cite{guzik};\nPamyatnykh et al.\\ \\cite{pamyatnykh}; Bradley \\& Guzik\n\\cite{bradley}; Hern\\'andez et al.\\ \\cite{hernandez}; Su\\'arez et\nal.\\ \\cite{suarez1}). Here we address the problem of mode\nidentification for stars in an open cluster. These stars are\nexpected to have a common age and chemical composition.\nFurthermore, the distance to the cluster is usually known with\nhigh accuracy. The constraints imposed by the cluster parameters\nhave proved to be very useful when modelling an ensemble of\n$\\delta$ Scuti stars. The best examples of such studies are found\nfor the variables in the Praesepe cluster (Michel et al.\\\n\\cite{michel1}; Hern\\'andez et al.\\ \\cite{hernandez}). In a similar\nway, we consider here several $\\delta$ Scuti stars of the Pleiades\ncluster and search for a best fit solution in the sense of a set\nof stellar parameters that allows the simultaneous modelling of\nall the stars considered, and that satisfies all the observables,\nincluding the frequencies.\n\nThe paper is organized as follows. In Sect.~\\ref{sec:stars}, we\ndescribe the main characteristics of the Pleiades cluster and the\nsix $\\delta$ Scuti stars under study. The modelling, the range of\nparameters and the calculations of the eigenfrequencies are\ndiscussed in Sect.~\\ref{sec:models}. The details of the comparison\nbetween observed and theoretical frequencies are discussed in\nSect.~\\ref{sec:comp}. The results and their\ndiscussion are given in Sect.~\\ref{sec:results}. Finally, we\npresent our conclusions in Sect.~\\ref{sec:conclusions}.\n\n\n\\section{The observational material \\label{sec:stars}}\n\nThe Pleiades (M45) is a young ($\\sim 75$--100\\, Myr), Population I\ncluster. 
Because of its proximity, the observational parameters of the\nPleiades have been intensively studied. In particular, the\nmetallicity of the cluster is estimated to be between $-0.14 \\leq$\n[Fe\/H] $\\leq +0.13$ (e.g. $-$0.034 $\\pm$ 0.024 Boesgaard \\& Friel\n\\cite{boesgaard}, 0.0260 $\\pm$ 0.103 Cayrel de Strobel\n\\cite{cayrel}, $-$0.11 $\\pm$ 0.03 Grenon \\cite{grenon}),\n\nUntil recently, there was a dispute regarding the distance of the\nPleiades cluster from the Earth. While the determination of the\ncluster distance from direct parallax measurements of\nHIPPARCOS gives 116.3$\\pm^{3.3}_{3.2}$~pc\n($m_{V}-M_{V}= 5.33\\,\\pm\\, 0.06$ mag, Mermilliod et al.\\\n\\cite{mermilliod}), the previously accepted distance, which is\nbased on comparing the cluster's main sequence with that of nearby\nstars, was $\\approx$130 pc corresponding to a distance modulus of\nabout 5.60 mag (e.g.\\ 5.65 $\\pm$ 0.38 mag [Eggen \\cite{eggen}],\n5.60 $\\pm$ 0.05 mag [Pinsonneault et al.\\ \\cite{pinsonneault}],\n5.61 $\\pm$ 0.03 mag [Stello \\& Nissen \\cite{stello}]).\nAs will be discussed in Sect.~\\ref{sec:results}, the distance derived\nin the research presented here agrees with the latter value.\n\n\\begin{table*}[ht]\n \\caption{Observational properties of the target stars}\n \\begin{tabular}{lccccccc}\n \\hline\n Star & HD & ST & $m_{V}$&$B-V$ & $E(B-V)$ & $V \\sin\\, i$ & $\\beta$ \\\\\n & & & & & & $(\\mathrm{km\\, s}^{-1})$& \\\\\n \\hline\n - & 23628 & A7V & $7.66\\pm0.03 $& $+0.211\\pm0.005$ &$0.063 \\pm0.008$ & $ 150 \\pm15$& 2.884 \\\\\n V650 Tau& 23643 & A3V & $7.77\\pm0.02 $& $+0.157\\pm0.006$& $0.027\\pm0.008$ & $175\\pm18$ & 2.862 \\\\\n - & 23194 & A5V & $8.06\\pm0.02 $& $+0.202\\pm0.010$& $0.076\\pm0.008$ & $20 \\pm2$& 2.881 \\\\\n V624 Tau& 23156 & A7V & $8.23\\pm0.01$ & $+0.250\\pm0.005$& $0.046\\pm0.008$ & $ 20 \\pm2$& 2.823 \\\\\n V647 Tau& 23607 & A7V & $8.26\\pm0.01$ & $+0.255 \\pm0.001$&$0.057 \\pm0.008$ & $ 20 \\pm2$& 2.841 \\\\\n V534 Tau& 23567 & A9V & $8.28\\pm0.02$ & $+0.360\\pm0.001$ &$ 0.084\\pm0.008$ & $ 90\\pm 9$& 2.788 \\\\\n\n \\hline\n \\end{tabular}\n \\label{tab:stars}\n\\end{table*}\n\nTo date, six $\\delta$ Scuti stars are known in\nthe Pleiades. In a survey of variability in the\ncluster, Breger (\\cite{breger}) found $\\delta$ Scuti type oscillations\nin four stars. The remaining two were recently reported to be\n$\\delta$ Scuti stars by Koen (\\cite{koen1}) and Li et al.\\ (\\cite{li}).\nSome observational properties of these stars are given in\nTable~\\ref{tab:stars}. The projected rotational\nvelocities, $v\\sin i$, were obtained from Morse et al.\\\n(\\cite{morse}) and Uesugi \\& Fukuda (\\cite{uesugi}). A 10\\%\nuncertainty in these quantities has been assumed. The spectral\ntypes (ST) and $\\beta$ parameters were obtained from the SIMBAD database in\nStrasbourg (France). The errors in the photometric parameters $(B-V)$\nand $m_{V}$ are estimated from the dispersion between different\nmeasurements of these quantities. The errors in reddening are\ntaken from Breger (\\cite{breger1}). We shall use these\nuncertainties in Section 3.2.\n\nIn recent years, the $\\delta$ Scuti stars of the Pleiades cluster have been\nobserved in several campaigns of the STEPHI multi-site network\n(Michel et al.\\ \\cite{michel2}). 
The information on\nthe oscillation frequencies of the stars used here has\nbeen obtained from those campaigns and is published elsewhere\n(Belmonte \\& Michel \\cite{belmonte1}; Michel et al.\\\n\\cite{michel4}; Liu et al.\\ \\cite{liu}; Li et al.\\ \\cite{li};\nLi et al.\\ \\cite{li1}; Fox-Machado et al.\\ \\cite{fox}).\nThe frequency peaks detected\nwith a confidence level above 99\\% are summarized\nin Table~\\ref{tab:freq}. We note that the frequency resolution in a typical\nSTEPHI campaign (three weeks) is $\\Delta \\nu \\sim 0.5\\, \\mu$Hz.\n\n\\begin{table}[!t]\n\\begin{center}\n\\caption{Observed frequencies used in this work. The data were obtained from\ndifferent STEPHI campaigns as detailed in the text.}\n\\begin{tabular}{lc|lc}\n\\hline \\hline\nstar & $\\nu$ & star& $\\nu $ \\\\\n& $(\\mu {\\rm Hz})$& & $(\\mu {\\rm Hz})$ \\\\\n\\hline\nHD 23628& 191.8 & V624 Tau &242.9 \\\\\n & 201.7 &&409.0 \\\\\n & 376.6 &&413.5 \\\\\\cline{1-2}\nV650 Tau&197.2&&416.4 \\\\\n &292.7&& 451.7 \\\\\n &333.1&& 489.4 \\\\\n &377.8 &&529.1 \\\\\\hline\nHD 23194&533.6&V647 Tau&236.6 \\\\\n&574.9&&304.7 \\\\\\cline{1-2}\nV534 Tau &234.2 &&315.6 \\\\\n &252.9&&374.4 \\\\\n &307.6&&405.8 \\\\\n &377.9 &&444.1 \\\\\n &379.0 &&\\\\\n &448.1 && \\\\\n &525.0&& \\\\\\hline\n\\end{tabular}\n\\end{center}\n\\label{tab:freq}\n\\end{table}\n\nFigure~\\ref{fig:dscutis} illustrates a colour--magnitude diagram of the Pleiades\ncluster in the region where the target stars are located.\nThe filled circles correspond to the $\\delta$ Scuti stars.\nThe apparent magnitudes, $V$, colours, $(B-V)$ and\nmembership used are contained in the WEBDA database (Mermilliod\n\\cite{mermilliod1}). The Pleiades shows\ndifferential reddening with significant excess in the southwest with an\naverage value of $E(B-V) \\approx 0.04$ (e.g.\\\nvan Leeuwen \\cite{vanleeuwen}; Breger \\cite{breger1}; Hansen-Ruiz\n\\& van Leeuwen \\cite{hansen}; Pinsonneault et al.\\\n\\cite{pinsonneault}). In particular, we adopted the reddening for\nindividual stars as given by Breger (\\cite{breger1}) if present, otherwise\nthe average of $E(B-V) = 0.04$ was used. Stars known to be spectroscopic binaries\nwere rejected.\n\nGiven the young age of the Pleiades cluster, we expect all the\n$\\delta$ Scuti stars to be at an early evolutionary stage on the main sequence.\nFrom the figure it follows that\nwhile V647 Tau and V624 Tau have similar masses,\nV650 Tau, HD 23194 and HD 23628 are slightly more massive.\nV534 Tau, located near the red edge of the instability\nstrip, is the coolest star in the ensemble.\n\n\n\\begin{figure}[!t]\n\\resizebox{\\hsize}{!} {\\includegraphics{3791fig1.eps}}\n\\caption{HR diagram of the Pleiades cluster. Only the region around the instability\nstrip is shown. The target stars are represented with filled circles.\nThe blue and red edges of the instability strip are indicated by continuous lines\nand were taken from Rodr\\'{\\i}guez et al.\\\n(\\cite{rodriguez}). An isochrone of 70 Myr is also shown.}\n\\label{fig:dscutis}\n\\end{figure}\n\n\\section{The modelling \\label{sec:models}}\n\nIn this section we shall explain how we have computed the stellar models,\nthe corresponding eigenfrequencies and how we have constrained the stellar\nparameters used in the forward comparison with the observed frequencies.\n\n\\subsection{The stellar models}\n\nWe have considered stellar models\nwith input physics appropriate to the mass range covered by\n$\\delta$ Scuti stars. 
In particular, the stellar models were computed\nusing the CESAM evolutionary code (Morel \\cite{morel}).\nThe nuclear reaction rates are from Caughlan \\& Fowler\n(\\cite{caughlan}), the equation of state is from Eggleton et al.\\\n(\\cite{eggleton}), the opacities from the OPAL project (Iglesias\n\\& Rogers \\cite{iglesias}) complemented at low temperatures by\nKurucz data (\\cite{kurucz}), and the\natmosphere is computed using Hopf's $T(\\tau)$ law (Mihalas\n1978). The convection is described according\nto the classical mixing-length theory (MLT hereafter). In particular,\nwe have considered an MLT parameter of $\\alpha_{\\rm MLT} =\nl\/H_{p}=1.52$, where $H_{p}$ is the pressure scale-height.\nWe have considered a fixed value for $\\alpha_{\\rm MLT}$ because this parameter is not expected\nto have any significant influence on the global structure of the evolution models\nconsidered for our target stars.\n\nWe have considered different sets of initial chemical compositions\nin order to cover the observed [Fe\/H] range of the Pleiades (see\nSection~\\ref{sec:stars}) and different initial He abundances. The\nvalues used are given in Table~\\ref{tab:metal}. For each initial\nchemical composition we have computed models with and without core\novershooting, $\\alpha_{{\\rm ov}}=0.20$ in the former case \n(Schaller et al.\\ \\cite{schaller}).\n\n\\begin{table}[!t]\n\\begin{center}\n\\caption{Metallicity of the models. [Fe\/H] = $\\log (Z\/X)_{\\ast}- \\log (Z\/X)_{\\odot}$,\nwith $(Z\/X)_{\\odot}=0.0245$ from Grevesse \\& Noels (\\cite{grevesse}).\n\\label{tab:metal}}\n\\begin{tabular}[h]{ccccc}\n\\noalign{\\smallskip} \\hline \\hline \\noalign{\\smallskip}\n$X_{0}$&$Y_{0}$&$Z_{0}$&[Fe\/H]&$\\alpha_{\\rm ov}$\\\\\n\\noalign{\\smallskip} \\hline \\noalign{\\smallskip}\n\n0.735&0.250&0.015&$-$0.0794\\,\\,\\,\\,&0.0--0.2\\\\\n\n0.685&0.300&0.015&$-$0.0488\\,\\,\\,\\,&0.0--0.2\\\\\n\n0.700&0.280&0.020&0.0668&0.0--0.2\\\\\n\n0.725&0.250&0.025&0.1484&0.0--0.2\\\\\n\n0.675&0.300&0.025&0.1794&0.0--0.2\\\\\n\n\\noalign{\\smallskip} \\hline \\noalign{\\smallskip}\n\\end{tabular}\n\\end{center}\n\\end{table}\n\nAlthough the final comparison with observations is done\nconsidering rotating models, our procedure requires the\ncomputation of non-rotating models and the corresponding\nisochrones. Hence, sequences of non-rotating models were\ncalculated for masses between 0.8 $M_{\\odot}$ and 5.0 $M_{\\odot}$,\nfrom the ZAMS to the sub-giant branch and the corresponding\nisochrones computed for each set of parameters in\nTable~\\ref{tab:metal}. In the following analysis we shall consider\nthree ages, $A$, of 70 Myr, 100 Myr and 130 Myr. Finally, for\ncomparison with the observations three distance moduli, $d$, of\n5.50, 5.60 and 5.70 mag were considered\n except for models with [Fe\/H]=$-$0.0488, in which case \nan additional value of $d=5.39$ mag was used.\n\n\nFigure~\\ref{fig:dscutis} shows an example of such isochrones\ncomputed with the following parameters: [Fe\/H]=0.0668,\n$\\alpha_{\\rm ov} = 0.2$, $A = 70$ Myr and $d= 5.70$ mag. The\nisochrones were calibrated from [$T_{\\rm eff}, \\log\n(L\/L_{\\odot})$] to $(B-V, M_{V})$ by using the Schmidt--Kaler\n(\\cite{schmidt}) calibration for magnitudes and the relationship\nbetween $T_{\\rm eff}$ and $B-V$ of Sekiguchi \\& Fukugita\n(\\cite{sekiguchi}) for the colours. As can be seen in\nFig.~\\ref{fig:dscutis}, in the case illustrated here the isochrone\nmatches the observed colour--magnitude diagram. 
For some combinations \nof high metallicity and low\ndistance modulus the isochrone fit is not satisfactory but we have\nnot rejected those combinations to allow for an independent\ndetermination of the cluster parameters from the oscillation\nfrequencies. \nHence we have considered \nall the combinations of cluster parameters given in\nTable~\\ref{tab:metal} and the three ages given above. \n\n\n\\subsection{Photometric effect of rotation on the HR diagram}\n\nIt is known that fast rotation affects the position of a star in the\ncolour--magnitude diagram (e.g.\\ Tassoul \\cite{tassoul}).\nIn particular a ZAMS rotating model has a smaller luminosity and mean\n$T_{\\mathrm{eff}}$ than a non-rotating model with the same mass and chemical\ncomposition.\nAlso, the magnitude and the colour of a rotating star depend on\nthe aspect angle, $i$, between the line of sight and the rotation\naxis. In particular, a rotating star seen pole-on\nappears brighter and hotter than the same star seen equator-on.\n\nWe now search for the range of masses and angular rotations suitable for\neach star. To do this we shall use the\nresults of P\\'erez Hern\\'andez et al.\\ (\\cite{perez2}), in which the\ncorrections to the photometric magnitudes of non-rotating models needed to obtain\nthose of rotating models were computed for main sequence stars\nof spectral type A0 to F5.\nIn this calculation, we shall take into account the observed $v \\sin i$ for each star.\nAs an example,\nFig.~\\ref{fig:cor} illustrates the corrections due to rotation in\nthe particular case of the $\\delta$ Scuti type star V650 Tau with $v \\sin\ni=175$ km s$^{-1}$. The actual position of the star is shown with\na thick dot upon which there is a cross that gives the errors in\nthe estimation of $(M_{V})_{0}$ and $(B-V)_{0}$ calculated\naccording to the following expressions:\n\n\\begin{figure}\n\\resizebox{\\hsize}{!} {\\includegraphics[]{3791fig2.eps}}\n\\caption{Colour--magnitude diagram illustrating the photometric\ncorrections due to rotation in the case of V650 Tau\n($v\\sin i = 175\\,$km s$^{-1}$). The thick continuous line is the\nsame isochrone represented in Fig.~\\ref{fig:dscutis} ([Fe\/H] =\n0.0668, $\\alpha_{\\rm ov} = 0.2$, $A = 70$ Myr, $d=5.7$ mag).\nSome models are shown with filled circles upon the isochrone. The masses of the \nfirst and last models are indicated.\nThe small tracks represent the photometric corrections due to rotation for each of\nthe models indicated on the isochrone when $i$ runs from\n$i=90^{\\circ}$ to $i=i_{\\rm min}$. The corrections at fixed $i$ are represented by dotted lines.}\n\\label{fig:cor}\n\\end{figure}\n\n\\begin{equation}\n\\sigma[(B-V)_{0}] = \\sqrt{\\sigma^{2}(B-V) +\\sigma^{2}[E(B-V)]},\n\\end{equation}\n\n\n\\begin{equation}\n\\sigma[(M_{V})_{0}] = \\sqrt{\\sigma^{2}(d)+\\sigma^{2}(m_{V}) +\n(3.2\\sigma[E(B-V)])^{2}}\n\\, ,\n\\end{equation}\n\n\\noindent\nwhere $\\sigma(d)=0.06$ is the error in the distance modulus. The\nerrors in the magnitude $M_{V}$, colour index $(B-V)$ and reddening\n$E(B-V)$ are given in Table~\\ref{tab:stars}.\n\nThe range of masses and rotational velocities of the star depends\non the stellar parameters considered and hence needs to be computed\nfor each combination in Table~\\ref{tab:metal}. In particular, in\nthe example illustrated in Fig.~\\ref{fig:cor}, the same cluster\nparameters as in Fig.~\\ref{fig:dscutis} were considered\n([Fe\/H]=0.0668, $\\alpha_{\\rm ov} = 0.2$, $A = 70$ Myr and $d=\n5.70$ mag). 
The isochrone is represented by a thick continuous\nline and some of its models are shown by dots. For each model on\nthe isochrone there is a small track corresponding to the\nphotometric corrections due to rotation computed by using the\nprojected velocity $v \\sin i$ of the star and changing the angle\nof inclination from $i=90^{\\circ}$ to the minimum angle $i=i_{\\rm\nmin}$ corresponding to $\\sim 0.90 \\Omega_{\\rm br}$, where\n$\\Omega_{\\rm br}$ is the break-up angular rotational velocity.\nThis angular velocity is given by the balance between the\ngravitational attraction and the centrifugal force at the equator.\nThe dotted lines give the corrections at fixed angles:\n$i=90^{\\circ}$ (the closest line to the isochrone), $i=50^{\\circ}$\n(intermediate line) and $i=39^{\\circ}$ (the reddest line). It is\nevident that the actual position of the star, including the error\nbox, is associated with a domain of non-rotating models. In this\ncase we have obtained for the star considered a range\nof masses of $[1.86,1.91]M_{\\odot}$ and a range of angles of\ninclination of $[39^{\\circ},50^{\\circ}]$. Nonetheless, in order\nto take into account possible errors coming from the isochrone\ncalibration we shall consider a wider range of aspect angles\n$i=[i_{\\rm min}, 90^{\\circ}]$. The same procedure is carried out\nfor the other stars. Also one needs to consider all the\nmetallicities, ages, overshooting parameters and distance moduli for\nthe whole set of stars.\n\nHaving obtained ranges of $M$ and values of $i_{\\rm min}$, we proceed to find the\ncorresponding range of angular rotational velocities $\\Omega$, for\nwhich an estimate of the equatorial radius, $R_{\\rm eq}$, of the\nrotating model is needed. Under the assumption of uniform rotation\nand approximating the surface of the star by a Roche surface (see\ne.g. P\\'erez Hern\\'andez et al. \\cite{perez2}) one has:\n\n\\begin{equation}\nR_{\\rm eq} \\simeq R_0 \\frac{3}{\\omega} \\cos \\left\\{ \\frac{1}{3}\n\\left[ \\pi + {\\rm arccos}\\, \\omega \\right] \\right\\}\n\\; ,\n\\end{equation}\n\n\\noindent where $\\omega \\simeq \\Omega\/\\Omega_{\\rm br}$,\n$\\Omega^2_{\\rm br} \\simeq 8GM\/(27R^3_0)$ and $R_0$ is the polar\nradius. As noted in P\\'erez Hern\\'andez et al.~(\\cite{perez2}), the\npolar radius can be approximated by the radius of a spherical\nnon-rotating model with the same mass and evolutionary state\n(since our stars are close to the ZAMS, a non-rotating model of\nthe same mass and age is suitable for this error-box estimation).\n\nSince on the other hand the angular rotation is related to the equatorial radius by\n\n\\begin{equation}\n\\Omega \\simeq \\frac{(v\\sin i)_{\\rm obs}}{R_{\\rm eq}\\sin i}\n\\; ,\n\\end{equation}\n\n\\noindent it is possible to carry out a simple iterative process\nto obtain a range of possible rotations, $[\\Omega_{\\rm min},\\Omega_{\\rm max}]$, \nfrom the previously obtained range of\nangles of inclination $[i_{\\rm min},90^{\\circ}]$ and masses.\nSince, for given data parameters, the mass of the star is obtained with an \nuncertainty of $\\sim 0.02$ $M_{\\odot}$, this\niterative process was carried out for just one mass.\n\nAs an example, Figure~\\ref{fig:om} shows the ranges of $\\Omega$\nobtained for the six stars as a function of the mass of the models\nconsidering the same cluster parameters as in\nFigure~\\ref{fig:cor}. Lines of different types give the estimates\nfor the different stars as indicated in the figure. 
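This simple iteration is easily sketched; in the following illustrative Python fragment, every numerical input (mass, polar radius, inclination and $v\\sin i$) is an assumption chosen purely for demonstration and not a value adopted in our analysis.\n\\begin{verbatim}\nimport numpy as np\n\nG = 6.674e-11                      # m^3 kg^-1 s^-2\nM_sun, R_sun = 1.989e30, 6.957e8   # kg, m\n\ndef equatorial_radius(w, R_pol):\n    # Roche-surface estimate of eq. (3), with w = Omega \/ Omega_br\n    return R_pol * (3.0 \/ w) * np.cos((np.pi + np.arccos(w)) \/ 3.0)\n\ndef rotation_rate(v_sin_i, incl, M, R_pol, n_iter=50):\n    # iterate eqs. (3) and (4): Omega requires R_eq, which requires Omega\n    Omega_br = np.sqrt(8.0 * G * M \/ (27.0 * R_pol**3))\n    Omega = v_sin_i \/ (R_pol * np.sin(incl))    # first guess: spherical star\n    for _ in range(n_iter):\n        w = min(Omega \/ Omega_br, 0.99)         # keep below break-up\n        Omega = v_sin_i \/ (equatorial_radius(w, R_pol) * np.sin(incl))\n    return Omega\n\n# illustrative inputs only: a 1.86 solar-mass star with polar radius\n# 1.6 solar radii, v sin i = 175 km\/s viewed at i = 50 degrees\nprint(rotation_rate(175.0e3, np.radians(50.0), 1.86 * M_sun, 1.6 * R_sun))\n\\end{verbatim}\nFor inputs of this kind the successive substitution settles quickly, giving $\\Omega$ of the order of $10^{-4}$ rad\/s, the scale indicated by Figure~\\ref{fig:om}. 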
A similar\nprocedure needs to be considered for all sets of parameters in\nTable~\\ref{tab:metal}.\n\n\\begin{figure}[!t]\n\\resizebox{\\hsize}{!} {\\includegraphics[width=22cm]{3791fig3.eps}}\n\\caption{Range of rotation rates against masses estimated for the\nsix stars from the photometric corrections due to rotation applied\nto the same isochrone depicted in Figure~\\ref{fig:cor}. The vertical error\nbars give the estimated range of mass for each star. The crosses\ngive the values of the break-up angular rotational velocity,\n$\\Omega_{\\rm br}$. The ranges of $\\Omega$ and $M$ are associated\nwith stars as follows: dot-dashed line for V650 Tau $(v\\sin i=175$\nkm s$^{-1})$; long-dashed line for HD 23628 $(v\\sin i=150\\,$km\ns$^{-1})$; three-dots dashed line for HD 23194 $(v\\sin i=20$\nkm s$^{-1})$; dashed line for V647 Tau $(v\\sin i=20$ km s$^{-1})$;\ncontinuous line for V624 Tau $(v\\sin i=20\\,$km s$^{-1})$ and\ndotted line for V534 Tau $(v\\sin i=90\\,$km s$^{-1})$. }\n\\label{fig:om}\n\\end{figure}\n\nWith the information obtained above we compute evolutionary\nsequences of rotating models with the same input physics as the\nnon-rotating models but with appropriate initial angular\nrotational velocities in order to match as closely as possible the\nextreme values in the interval $[\\Omega_{\\rm min},\\Omega_{\\rm\nmax}]$ at the final age. Solid-body rotation and conservation of\nglobal angular momentum during the evolution were assumed in\nthe calculation. In order to have a better overview of the rotation rates of\nthe stars we have also computed models with an intermediate value\nof $\\Omega_{\\rm mid} \\approx 0.5 \\Omega_{\\rm br}$. A total\nof 1620 sequences of rotating models were finally obtained for the\nwhole ensemble of stars.\n\n\\subsection{Theoretical oscillation frequencies}\n\nThe eigenfrequencies of the rotating models previously described\nhave been calculated using the oscillation code FILOU (Tran Minh\net al.\\ \\cite{tran}; Su\\'arez \\cite{suarez2}). We have considered\nfrequency perturbations up to second order in the rotation rate.\nFrequencies of the modes are labelled in the usual way: $n, l, m$\nfor the radial order, degree and azimuthal order. The radial\norders of a given mode are assigned according to the Scuflaire\ncriterion (\\cite{scuflaire}) with $n>0$ for the $p$ modes, $n<0$\nfor the $g$ modes, $n=1$ for the fundamental radial mode of degree\n0 and 1, and $n=0$ for the fundamental mode with $l=2$.\nEigenfrequencies were computed up to $l=2$. For geometrical\nreasons higher degree modes are expected to have considerably\nsmaller amplitudes. The theoretical frequencies cover the\nfrequency range of the observed pulsation peaks (see\nTable~\\ref{tab:freq}). Coupling between modes is not considered in\nthe present work.\n\nThe estimated interval of $\\Omega_{\\rm\nrot}$ for all the stars may be as large as \n$\\Delta \\Omega_{\\rm rot} \\sim 1.6 \\times 10^{-4}$ rad\/s, \nwhich in terms of cyclic rotational frequency corresponds\nto $\\Delta \\nu_{\\rm rot} \\sim 25\\,\\mu$Hz (see for instance\nFig.~\\ref{fig:om}). After some tests, we found that a\nsatisfactory comparison between the observed and theoretical\nfrequencies cannot be achieved by using only the frequencies\ncomputed for models with three $\\Omega_{\\rm rot}$ within this\ninterval. 
To overcome this difficulty, once we had the mode\nfrequencies for each series of models (three $\\Omega_{\\rm rot}$\nfor fixed $M$, [Fe\/H], $\\alpha_{\\rm MLT}$, $\\alpha_{\\rm ov}$, $d$\nand $A$), we proceeded to interpolate the results on $\\Omega_{\\rm\nrot}$ in 21 equally spaced points covering the interval\n[$\\Omega_{\\rm min}, \\Omega_{\\rm max}$]. A quadratic spline\ninterpolation was applied. In order to\nevaluate the goodness of the interpolation we also computed the\neigenfrequencies for several randomly selected models. We have\nfound that there is a good agreement between the interpolated and\ntheoretical frequencies up to $\\nu \\sim 600\\, \\mu$Hz ($n \\sim 6$),\n the average absolute separation being $|\\nu_{\\rm int} -\\nu_{\\rm\ncal} |\\sim 0.3\\,\\mu$Hz. The disagreement found beyond this value can be\nexplained by the fact that the second order effect of rotation is\nenhanced at higher frequencies. In any case, as can be seen in\nTable~\\ref{tab:freq}, the highest frequency we are dealing with\nis $\\simeq 574\\,\\mu$Hz.\n\n\\section{Method of comparison \\label{sec:comp}}\n\nOnce we have the mode frequencies for all sets of parameters we\ncompare the observational frequencies ($\\nu_{\\rm obs}$)\nwith the theoretical ones ($\\nu_{\\rm cal}$) at each interpolated\n$\\Omega_{\\rm rot}$.\n\nLet us consider a rotating model with given parameters for just\none star. Then, for all the possible combinations between the\nobserved and computed frequencies, we compute the quantity $\\chi^{2}$ given by\n\n\\begin{equation}\n\\chi^{2}= \\frac{1}{N}\\sum^{N}_{j=1}(\\nu_{{\\rm obs},j} - \n\\nu_{{\\rm cal},j})^{2}\n\\, ,\n\\end{equation}\n\n\\noindent\nwhere $N$ is the number of observational frequencies.\nIn this computation it is understood that each theoretical frequency is\nassigned to one observed frequency at most.\n\nAs a first step, in order fully to exploit the collective behaviour\nof stars within the open cluster we consider the solutions of the six\nstars with given cluster parameters of [Fe\/H], $\\alpha_{\\rm ov}$,\n$d$ and $A$. The models corresponding to each star differ in the\nmass and the rotation rate. As an example, Fig.~\\ref{fig:ang}\nshows $\\chi^{2}$ against angular rotational velocity,\n$\\Omega$, for the six models corresponding to the same parameters\nas in Fig.~\\ref{fig:cor} but without overshooting ([Fe\/H]=0.0668,\n$\\alpha_{\\rm ov}=0.0$, $d$=5.70 mag and $A$=70 Myr). Each panel\ncorresponds to the star indicated in the figure. For clarity,\nonly solutions with $\\chi^{2}< 20 \\, (\\mu\\rm{Hz}^{2})$ are shown. Similar\nfeatures are found for other metallicities, distance moduli and ages,\nbut the lowest values of $\\chi^{2}$ may be rather high.\n\n\\begin{figure}[!ht]\n\\resizebox{\\hsize}{!} {\\includegraphics[width=22cm]{3791fig4.eps}}\n\\caption{$\\chi^{2}$ against $\\Omega$ for six models with common parameters\n [Fe\/H], $\\alpha_{\\rm ov}$, $d$ and $A$.}\n\\label{fig:ang}\n\\end{figure}\n\nSince we expect the same [Fe\/H], $d$ and $A$ for all the stars in\nthe cluster we shall assume that the best fits should happen\nsimultaneously in the six stars despite\n differences in rotation\nrate and masses of the models. 
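Before combining the six stars, it is perhaps worth sketching how the minimum of eq.~(5) is located for a single star and model. In the following illustrative Python fragment, the observed peaks are those of HD 23194 from Table~\\ref{tab:freq}, whilst the theoretical frequencies listed are hypothetical placeholders; an optimal-assignment routine is used as a shortcut that returns the same minimum as exhaustive enumeration of the combinations.\n\\begin{verbatim}\nimport numpy as np\nfrom scipy.optimize import linear_sum_assignment\n\ndef chi2_min(nu_obs, nu_cal):\n    # smallest value of eq. (5) over all assignments of observed peaks\n    # to distinct theoretical frequencies\n    cost = (nu_obs[:, None] - nu_cal[None, :])**2\n    rows, cols = linear_sum_assignment(cost)   # optimal one-to-one matching\n    return cost[rows, cols].mean()\n\nnu_obs = np.array([533.6, 574.9])                        # HD 23194, muHz\nnu_cal = np.array([520.4, 535.1, 552.8, 573.2, 590.6])   # hypothetical\nprint(chi2_min(nu_obs, nu_cal))\n\\end{verbatim}\nRepeating such an evaluation over the interpolated values of $\\Omega_{\\rm rot}$ and over the models of the six stars yields the quantities $\\chi_{{\\rm min},i}$ entering the ensemble statistic below. 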
Hence we proceed to calculate the\nmean square root of the lowest $\\chi^{2}$ found in each model for\ngiven common parameters by means of the following expression:\n\n\\begin{equation}\n\\epsilon= \\sqrt{\\frac{1}{6}\\sum^{6}_{i=1}(\\chi_{{\\rm min},i})^{2}}.\n\\end{equation}\n\n\n\\noindent Figure~\\ref{fig:sig} shows $\\epsilon$ against\n[Fe\/H] for models without overshooting. For the sake of clarity\nonly the points with $\\epsilon \\leq 3.0\\,\\mu$Hz are shown. It\ncan be seen that the solution with the smallest $\\epsilon $\ncorresponds to [Fe\/H]=0.0668. A plot of similar characteristics is\nobtained for models with overshooting.\n\n\\begin{figure}\n\\resizebox{\\hsize}{!} {\\includegraphics[width=22cm]{3791fig5.eps}}\n\\caption{Minimum values of the mean square root of the difference\nbetween observed and theoretical frequencies for models with the\nsame values of [Fe\/H], $\\alpha_{\\rm ov}$, $d$ and $A$ against\n[Fe\/H]. For clarity, only models without overshooting are shown.\nThe symbols are related to different values of $A$ and $d$ as\nfollows: asterisks ($d=5.39$ mag), squares ($d=5.50$ mag), diamonds ($d=5.60$ mag), circles\n($d=5.70$ mag), small (70 Myr), middle (100 Myr) and big (130\nMyr). } \\label{fig:sig}\n\\end{figure}\n\nSince only those identifications obtained from eq.~(5) with low\n$\\chi^{2}$ values are of interest, we require that for each star\nthe solution must have $\\chi \\leq 3.5\\,\\mu$Hz. \nIn Fig.~\\ref{fig:sig} we reject those solutions that have at least one\nmodel with $\\chi> 3.5\\,\\mu$Hz for all the parameters and\ncombinations between observed and theoretical frequencies. With\nthis restriction we found that only those solutions associated with\nthe models represented by the four lowest points in\nFig.~\\ref{fig:sig} should be considered. Similar\nsolutions were found for models with overshooting. All these models are listed\nin Table~\\ref{tab:dom} and will be analysed in detail later.\n\n\\begin{table}[]\n\\begin{center}\n\\caption{Ensembles of models with and without overshooting, which,\nafter applying a threshold of $\\chi \\leq 3.5\\,\\mu$Hz, have potential valid\nsolution (see text for details).}\n\\begin{tabular}{cccccc}\n\\noalign{\\smallskip}\n\\hline\n\\noalign{\\smallskip}\n\\multispan6 \\hfill [Fe\/H] = 0.0668 \\hfill \\\\\n\\noalign{\\smallskip}\n\\hline\n\\noalign{\\smallskip}\n\\multispan3 \\hfill $\\alpha_{\\rm ov} = 0.00$ \\hfill &\\multispan3 \\hfill\n$\\alpha_{\\rm ov} = 0.20$ \\hfill \\\\\\noalign{\\smallskip}\n\\hline\n\\noalign{\\smallskip}\n& $d$ & $A$ & & $d$ & $A$ \\\\\n\n& (mag)&(Myr)& & (mag)&(Myr) \\\\\n\\noalign{\\smallskip}\n\\hline\n\\noalign{\\smallskip}\nC1& 5.70 & \\,\\,70&D1& 5.70 & \\,\\,70 \\\\\nC2& 5.70 & 130& D2&5.60 & 100\\\\\nC3& 5.60 & \\,\\,70 &D3&5.60 &130 \\\\\nC4& 5.60 & 100&D4&5.70 &100 \\\\\n\n\\noalign{\\smallskip}\n\\hline\n\\noalign{\\smallskip}\n\\end{tabular}\n\\normalsize\n\\end{center}\n\\label{tab:dom}\n\\end{table}\n\nWe shall use a geometrical argument to reduce further the\nnumber of solutions so far available for each star. To this end,\nwe take into account that the visibility of each mode\ndepends on the angle of inclination. \nFollowing Pesnell~(\\cite{pesnell}), we illustrate in\nFig.~\\ref{fig:vis} the spatial response function\n$S_{lm}$ against $i$, for mode degree $l=0,1,2$. For\nsimplicity, limb-darkening has been neglected. 
It can be seen that for $i \\approx\n90^{\\circ}$ only even $l+m$ values can be detected, while for $i\n\\approx 0^{\\circ}$ only modes with $m = 0$ will be visible.\n\n\\begin{figure}\n\\resizebox{\\hsize}{!} {\\includegraphics[width=22cm]{3791fig6.eps}}\n\\caption{Variation of visibility amplitude with the inclination angle for radial\n$l=0$ (continuous line) and non-radial $l=1$ (dotted lines) and $l=2$ (dashed lines)\nmodes.}\\label{fig:vis}\n\\end{figure}\n\nWe introduce an hypothesis based on the visibility of the modes:\nwe reject any solution with one or more modes\nwith $S_{lm}^{\\prime} < 0.1$. After applying this constraint we\nwere able to limit the analysis to four sets in \nTable~\\ref{tab:dom}, two with $\\alpha_{\\rm ov}=0.0$: C1\n$(d=5.70,\\, A=70)$, C3 $(d=5.60,\\, A=70)$ and two with\n$\\alpha_{\\rm ov}=0.2$: D1 $(d=5.70,\\, A=70)$, D2 $(d=5.60,\\, A=100\n)$. We note that while for the stars HD 23628 and HD 23194 only\na few identifications ($<$ 8) remain, the number of identifications\nfor V624 Tau, V534 Tau, V647 Tau and V650 Tau remained larger than\n$100$ in most cases.\n\n\n\\section{Results and discussion \\label{sec:results}}\n\nIn order to discuss the results we shall introduce a parameter\n $\\Delta$ associated with each star in each identification and given\nby\n\n\\begin{equation}\n\\Delta = {\\rm max} ( |\\nu_{\\rm obs}- \\nu_{\\rm cal}| ).\n\\end{equation}\n\nTable~\\ref{tab:delta} lists the number of solutions for the six\nstars in each ensemble. Different maximum values of $\\Delta$ have\nbeen considered. Certain features can be derived from these\ngeneral results:\n\n\n\\begin{table*}[]\n\\begin{center}\n\\caption{Number of possible solutions for models in each ensemble\nfor different values of $\\Delta$ (in $\\mu$Hz). \\label{tab:delta}}\n\\begin{tabular}{lcccccccccc}\n\\noalign{\\smallskip}\n\\hline\n\\noalign{\\smallskip}\n&\\multispan{10} \\hfill [Fe\/H] = 0.0668 \\hfill \\\\\n\\noalign{\\smallskip}\n\\hline\n\\noalign{\\smallskip}\n&\\multispan5 \\hfill $\\alpha_{\\rm ov} = 0.00$ \\hfill &\\multispan5 \\hfill $\\alpha_{\\rm ov} = 0.20$ \\hfill \\\\\\noalign{\\smallskip}\n\\hline\n\\noalign{\\smallskip}\n&$\\Delta\\,(\\mu{\\rm Hz})\\, \\leq$ &1.0 & 2.0&3.0 &3.5 & $\\Delta\\,(\\mu{\\rm Hz})\\, \\leq$ &1.0 &2.0 & 3.0 & 3.5 \\\\\\noalign{\\smallskip}\n\\hline\n\\noalign{\\smallskip}\n&$\\frac{M}{M_{\\odot}}$& & & &&$\\frac{M}{M_{\\odot}}$& & & & \\\\\n\\noalign{\\smallskip}\n\\hline\n\\noalign{\\smallskip}\n&C1 & & & & &D1& & & & \\\\\nV624 Tau&1.70 &0 &0 &12 &18&1.69 &0 &0 &0 &12 \\\\\nV534 Tau&1.67 &0 &0 &0 &16&1.67 & 0& 0& 0&0 \\\\\nV647 Tau&1.70 &0 &0 &0 &2 &1.70 &0 & 0&0 &2 \\\\\nV650 Tau&1.86 &0 &0 &7 &13&1.86 &0 &2 &4 &4 \\\\\nHD 23194&1.80 &0 &1 &2 & 3&1.81 &0 &0 &3 &3 \\\\\nHD 23668&1.84 &0 &1 &2 & 4&1.84 &0 &0 &2 &3 \\\\\n\\noalign{\\smallskip}\n\\hline\n\\noalign{\\smallskip}\n&C3 & & & & &D2 & & & & \\\\\nV624 Tau&1.68 &0 &0 &6 &12&1.68 &0 &0 &0 &0\\\\\nV534 Tau&1.66 &0 &0 &0 &24&1.66 & 0& 0& 0&0 \\\\\nV647 Tau&1.69 &0 &0 &0 &2 &1.69 &0 & 0&0 &3 \\\\\nV650 Tau&1.86 &0 &0 &0 &0&1.86 &0 &0 &0 &3\\\\\nHD 23194&1.79 &1 &1 &2 &2&1.79 &1 &1 &1 &2 \\\\\nHD 23668&1.82 &0 &0 &3 &4&1.84 &0 &2 &3 &6 \\\\\n\\noalign{\\smallskip}\n\\hline\n\\noalign{\\smallskip}\n\\end{tabular}\n\\normalsize\n\\end{center}\n\\end{table*}\n\n\n\\begin{itemize}\n\\item The resulting masses for each star are similar\nfor all the solutions. 
\n\n\\item The number of solutions for a given star at fixed $\\Delta$\nassociated with models with overshooting is smaller than without it.\n\n\\item There are no solutions with $\\Delta \\leq 3.0\\,\\mu$Hz for the stars V534 Tau\nand V647 Tau, while the star HD 23194 shows solutions even with\n$\\Delta \\leq 1.0\\mu$Hz.\n\n\\item In order to find a set of models that has at least one solution simultaneously\nfor all the stars a value of $\\Delta \\simeq 3.5\\,\\mu$Hz is needed.\nThis solution is found for C1 that corresponds to the cluster\nparameters [Fe\/H]=0.0668, $\\alpha_{\\rm ov}=0.0$, $d=5.70$ and\n$A=70$, and it coincides with the minimum in Figure~\\ref{fig:sig}.\n\n\\item\nThe remaining ensembles in Table~\\ref{tab:delta} have at least\none solution simultaneously when\nthe value of $\\Delta$ is increased slightly. In particular, for\nC3 a value of $\\Delta \\simeq 4.5\\,\\mu$Hz is needed to obtain a\nsolution while for D1 and D2 a value of $\\Delta \\simeq 3.8\\,\\mu$Hz\nis required. \n\n\\end{itemize}\n\nWe have found that the\nidentifications and the range of aspect angles derived for each star are \nsimilar for all the solutions. \nThis agreement may be due to the fact that the estimated range of mass and radius,\nand hence of mean densities, is similar for all the solutions. \nAt the ages considered the overshooting has a negligible\neffect on the models.\n\nTable~\\ref{tab:results} summarizes the parameters estimated for\nthe cluster as well as for each star associated with the possible\nidentifications. The radial orders of these\nmodes are consistent with growth rate predictions\n(Su\\'arez et al.\\ \\cite{suarez}). \nWhile these identifications cannot be considered as\ndefinitive but rather as ``best fit solutions'', some\ninformation on the stars could be derived.\nOn the one hand, those stars with smaller masses\n(V624 Tau, V534 Tau and V647 Tau) have more than six frequencies and could have \nsimultaneously excited radial and non-radial oscillations although\nat most two radial modes. \nFollowing Hern\\'andez et al.\\ (\\cite{hernandez}), \nat most two radial modes were found to be excited for the $\\delta$\nScuti stars in the Praesepe cluster.\nOn the other hand, the most massive stars, V650 Tau, HD\n23628 and HD 23194, with a mean mass of 1.83 $M_{\\odot}$\npresent on average few observational frequencies ($N \\le 4$) that\nmight be associated only with non-radial modes. With the\npresent observational data it remains unclear, however, whether this\nis a general trend for $\\delta$ Scuti stars. 
\n\n\\begin{figure}\n\\resizebox{\\hsize}{!} {\\includegraphics[width=22cm]{3791fig7.eps}}\n\\caption{Comparison of observed (solid lines) and theoretical (dot lines)\nfrequencies of the best fit solutions in C1-models (Table~\\ref{tab:delta}).}\n\\label{fig:comp}\n\\end{figure}\n\n\n\\begin{center}\n\\begin{table*}[]\n\\caption{Summary of parameters derived for the Pleiades cluster as well as for\n$\\delta$ Scuti stars with the possible identification of the frequency modes.\n\\label{tab:results}}\n\\begin{tabular}{lcclcc}\n\n\\noalign{\\smallskip}\n\\hline\n\\hline\n\\noalign{\\smallskip}\n \\multispan{6}\\hfill [Fe\/H] = 0.0668, $\\alpha_{\\rm ov} = $[0.00-0.20], $m_{V}-M_{V}=$[5.60-5.70], $A=$[70-100]${\\times}10^{6}$ years \\hfill \\\\\n\\noalign{\\smallskip}\n\\hline\n\\hline\n\\noalign{\\smallskip}\n Star&$\\nu $ &Identification &Star&$\\nu $ &identification \\\\\n &($\\mu$Hz)& ($n,l,m)$& &($\\mu$Hz)& ($n,l,m)$ \\\\\n\\noalign{\\smallskip}\n\\hline\n\\hline\n\\noalign{\\smallskip}\nV624 Tau&242.9& (1,0,0) &V650 Tau& 197.2&(1,1,1) \\\\\n$M = [1.68$-1.72]\\,$M_{\\odot}$ &409.0& (3,1,1) &$M =$ [1.84-1.88]\\,$M_{\\odot}$ & 292.7&(0,2,$-$2),(2,2,2) \\\\\n $\\nu_{\\rm rot}= [3$-5]\\,$\\mu$Hz &413.5& (3,1,0) & $\\nu_{\\rm rot}= [$25-28]\\, $\\mu$Hz & 333.1&(3,1,1),(3,2,2) \\\\\n $i=[37^{\\circ}$-$67^{\\circ}]$&416.4&(3,1,$-$1) & $i=[60^{\\circ}$-$70^{\\circ}]$ & 377.8&(2,2,$-$2),(3,1,0) \\\\\n& 451.7& (3,2,$-$2),(4,0,0) & &&(3,1,$-$1),(3,2,1)\\\\\n &489.4 & (4,1,0),(4,1,1) &&&\\\\\n &529.1 & (4,2,$-$1),(4,2,$-$2)&&&\\\\\n & &(5,0,0) &&&\\\\\n\\noalign{\\smallskip}\n\\hline\n\\noalign{\\smallskip}\nV534 Tau&234.2& (1,1,1)&HD 23628 &191.8 &(0,2,2) \\\\\n$M = [1.65$-1.69]\\,$M_{\\odot}$ &252.9& (1,1,0)&$M = [1.82$-1.86]\\,$M_{\\odot}$ &201.7 & (1,1,1) \\\\\n$\\nu_{\\rm rot}= [14$-16]\\,$\\mu$Hz &307.6&(2,0,0),(2,1,1)& $\\nu_{\\rm rot}= [24$-26]\\,$\\mu$Hz &376.6&(2,2,$-$2)\\\\\n$i=[59^{\\circ}$-$79^{\\circ}]$ &377.9& (2,2$-$1),(3,0,0) &$i=[53^{\\circ}$-$59^{\\circ}]$&&\\\\\n &379.0 & (2,2$-$1),(3,0,0) & &&\\\\\n & & (3,1,1) &&\\\\\n &448.1& (3,2,$-$1),(4,0,0) &&&\\\\\n &525.0& (4,2,$-$1) &&&\\\\\n\\noalign{\\smallskip}\n\\hline\n\\noalign{\\smallskip}\nV647 Tau &236.6&(1,1,1) &HD 23194&533.6&(5,1,1),(5,1,$-$1) \\\\\n $M = [1.68$-1.72]\\,$M_{\\odot}$ &304.7&(1,2,1) &$M = [1.78$-1.82]\\,$M_{\\odot}$&574.9&(5,2,0),(5,2,1) \\\\\n $\\nu_{\\rm rot}= 10-11\\,\\mu$Hz&315.6&(2,0,0),(2,1,1)& $\\nu_{\\rm rot}= [14$-23]\\,$\\mu$Hz & & \\\\\n $i=17^{\\circ}$-18$^{\\circ}$ &374.4&(2,2,$-$1)&$i=7^{\\circ}$-$12^{\\circ}$ &&\\\\\n &405.8&(3,1,0) &&&\\\\\n &444.1&(3,2,0) &&&\\\\\n\\noalign{\\smallskip}\n\\hline\n\\hline\n\\noalign{\\smallskip}\n\\end{tabular}\n\\end{table*}\n\\end{center}\n\n\n\n\nA comparison of observed and computed frequencies is shown in\nFigure~\\ref{fig:comp}. In each panel the solid lines represent the \nobservational frequencies with their corresponding amplitudes.\nThe ``best fit'' theoretical frequencies\nare represented by dotted lines. The stars are arranged from top to bottom\naccording to their magnitude. \n\nFrom this figure it follows that\nthere is an asymmetric triplet in the observational spectrum of\nV624 Tau. 
We have found that for all the solutions the\nidentification is always a $(l=1,\\,n=3)$ mode split by rotation.\nIn turn, this implies that an independent estimate of the mean rotation\nrate can be obtained from the rotational splitting, up to second\norder, given by\n\n\\begin{equation}\n\\frac{\\nu_{-m} - \\nu_{m}}{2 m} \\sim \\frac{\\Omega_{\\rm rot}}{2 \\pi}\n(1-C_{nl}),\\;\\;\\;\\; m=1,2,\\ldots,l\n\\end{equation}\n\n\\noindent\nWe have found complete agreement between the cyclic\nrotational frequency computed from the above expression and those\nderived from the mode identification, the differences being \n$\\sim 0.03-0.04\\,\\mu$Hz. We have also found a fairly\ngood agreement between the observations and the models in the differences\n$\\nu_{-m}-\\nu_{0}$ and $\\nu_{+m}-\\nu_{0}$.\n\n\\medskip\nIf the estimated inclination angles for\nthe stars are correct, all of them could be rapid rotators ($\\nu_{\\rm rot}>10\\mu$Hz)\nexcept for V624 Tau with $\\nu_{\\rm rot}\\leq 5\\mu$Hz.\nHD 23194 and V647 Tau, whose projected rotational velocities are low,\n$v\\,\\sin\\,i = 20$ kms$^{-1}$, would have equatorial velocities as high as\n90 and 70 kms$^{-1}$ respectively.\n\n\\medskip\nIn the present work the distance is found to be\n$m_{V}-M_{V} = 5.60-5.70$ mag $(132 - 138$ pc) and agrees very well\nwith that derived recently\nby independent determinations using independent techniques and data\n(Pan et al.\\ \\cite{pan}; Munari et al.\\ \\cite{munari}; Soderblom et al.\\\n\\cite{soderblom}; Percival et al.\\ \\cite{percival}).\n However, although\nthe isochrones computed with sub-solar metallicity\n ($Z=0.015$ and $Y=0.30$) match well the observational data\n with a distance modulus of $d=5.39$ mag in agreement with\nthat of HIPPARCOS within 1-$\\sigma$ error,\nthe resulting solutions (shown by asterisks in\nFig.~\\ref{fig:sig}) do not lead to\n good fits. Moreover, the plot of $\\epsilon$ against $d$ which we do\nnot reproduce here, shows better fits for further distances.\nFor solar metallicity, in order to reproduce the HIPPARCOS MS of the Pleiades,\nunrealistically high helium abundances are required (e.g. $Y=0.34$ Belikov et al.\n\\cite{belikov}, $Y=0.37$ Pinsonneault et\n al. \\cite{pinsonneault}).\n\n\\section{Conclusions \\label{sec:conclusions}}\n\nIn this study we have performed a seismological\nanalysis of six $\\delta$ Scuti stars belonging to the Pleiades\ncluster to identify their frequency modes.\nTo the best of our knowledge this group of\nvariables constitutes the most statistically significant sample of\n$\\delta$ Scuti stars analysed in an open cluster to date.\n\nRotational effects were considered in different stages of\nthe analysis: first when determining the star positions in the HR\ndiagram and second when computing stellar models that approximately\nreproduce the evolutionary stage of the stars, and finally when\ncomputing theoretical oscillation frequencies in order to\nconstruct seismic models for target stars. The comparison between\nobservational and computed frequencies was carried out by\na least-squares fit.\nThere is a large number of possible solutions partly because\n neither the equatorial velocity nor the inclination\nangle are known a priori. In order to limit the number of possible\nsolutions we used the relationship between the\namplitude visibility, $S_{lm}$ and the aspect angle, $i$.\nAs a result we found few solutions for each star,\nsuggesting the existence of only $p$ modes of low degree, low radial order\nin all the stars. 
For the less massive stars,\nsolutions with at most two radial modes were also possible. These solutions have\nassociated ranges of stellar parameters for each star. Most of the\nstars could be rapid rotators according to the estimated angle of inclination,\n$i$.\n\nThe best fits between observational and theoretical frequencies\nare achieved for global cluster parameters of [Fe\/H] = 0.0668\n$(Z_{0}=0.02,\\, X_{0}=0.70)$, $A=70 - 100\\,$Myr and $d=5.60 -\n5.70$ mag. The derived distance modulus for the Pleiades cluster\nagrees with that of the main sequence fitting method, in spite of the fact that\nthe isochrones with sub-solar metallicity\nclosely matches the Pleiades HR diagram with the HIPPARCOS distance.\n\n\n\n\\begin{acknowledgements}\nThis work has been partially funded by grants AYA2001-1571, ESP2001-4529-PE and \nESP2004-03855-C03-03 of the Spanish national research plan. L.F.M acknowledges the partial \nfinancial support by the Spanish AECI. J.C.S. acknowledges the financial support by the Spanish Plan of Astronomy and Astrophysics, under project AYA2003-04651, by the Spanish ``Consejer\\'{\\i}a de Innovaci\\'on, Ciencia y Empresa'' from the ``Junta de Andaluc\\'{\\i}a local government, and by the Spanish Plan Nacional del Espacio under project ESP2004-03855-C03-01.\n\\end{acknowledgements}\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nThe problem of predicting network structure can be both of great practical importance as well as a case-study in understanding the usefulness of deep learning in network settings. An accurate model can be used to suggest friend recommendations, product recommendations, and even predict individual user actions. A system which solves this problem is generally referred to in the literature as a recommender system, and such systems are quite common at large Internet companies such as Amazon \\cite{Linden:2003:ARI:642462.642471}, Netflix \\cite{Zhou:2008:LPC:1424237.1424269}, and Google.\n\nThe main approaches typically taken fall into two categories - \\textit{content based} and \\textit{collaborative filtering} approaches. The first makes use of text, meta-data, and other features in order to identify potentially related items, while the latter leans more towards making use of aggregated behavior and of a large number of training samples (ie, users and businesses). Collaborative filtering approaches have proven useful in recommender systems in industry, and are typically the preferred method due to how expensive it typically is (in both computational resources and engineering effort) to extract useful features from large amounts of meta-data. However, with advances in deep learning (extracting features from videos and text that are useful for many tasks), it seems feasible that revisiting content-based approaches with additional network-level data will prove fruitful.\n\nIn this paper, we seek to explore a novel method combining both deep learning feature extraction (a \\textit{content-based} approach) with network prediction models (a quasi-\\textit{collaborative filtering} approach). We focus on a real-world, practical network - the Yelp Review Network. 
The network consists of 4.7M review (edges), 156K businesses, 200K pictures, covering over 12 metropolitan areas in the united state.\n\nSpecifically, we seek to model the problem of predicting a user's star rating of a previously unrated business by using features about the business, the user, as well as existing interactions between the user and other businesses.\n\nFrom a general view point, we hypothesize that the final star rating given by a users is a mixture of all of the above interactions. In particular, we would expect that rating at time $t$ between user $i$ and business $j$ could be modeled as:\n$$\nr_t = f(i_t, j_t, \\delta_{i,j,t}) + \\mathcal{N}(0,\\epsilon_{i,j,t})\n$$\n\nHere, we have $i_t$ is the overall user-based bias at time $t$. For example, some users simply tend to give higher or lower ratings based on previous experience -- one could argue this is inherent to the user directly. We also have $j_t$, the overall business bias at time $t$. For example, some business are objectively better across the board, by having better food, websites, or being at better locations. Finally, the term $\\delta_{i,j,t}$ which is an interaction term reflecting the interaction between this user and the business as time $t$. One might imagine for example that a user who really enjoys Mexican food will tend to give those restaurants a higher rating.\n\nIn the end, these three terms should be combined in some way (with normalization, etc.) to arrive at a final rating. As such, we essentially have four models which can be combined to give better predictive power:\n\n\\begin{itemize}\n\\item a user model, trained only on user properties\n\\item a business model, trained on business properties\n\\item interaction model trained on a mixture of both properties with additional features known only to the network (such as previous business interactions, etc).\n\\end{itemize}\n\n\\section{Related Work}\nIn general, there are three areas of interest in the literature. We have (1) work which focuses and provides techniques for predicting results based on network structures, (2) work which has applied some ML techniques to the features extracted from networks (and sometimes elements themselves), and (3) work which throws away a lot of the network structure and focuses exclusively on using the data to make predictions. All of these are supervised learning methods which varying degrees of complexity. We provide a brief overview of them, followed by a section discussing the mathematical underpinnings of the models.\n\n\\subsection{Graph-Based Approaches}\n\nLiben-Nowell and Kleinberg \\cite{TheLinkPredictionProblemForSocialNetworks} formalize the \\textit{link prediction problem} and develop a proximity-based approach to predict the formation of links in a large co-authorship network. The model focuses on the network topology alone, ignoring any additional meta-data associated with each node since its basic hypothesis is that the known network connections offer sufficient insight to accurately predict network growth over time. They formally tackle the problem of given a social graph $G = (V,E)$ where each edge represents an interaction between $u,v$ and a particular timestamps $t$, can we use a subset of the graph across time (ie, with edges only in the interval $[t,t']$ to predict a future subset of the graph $G'$). The methods presented ignore the creation of new nodes, focusing only on edge prediction.\n\nMultiple predictors $p$ are presented, each focusing on only network structure. 
For example, some intuitive predictors (there are many others studied, though not necessarily as intuitive) for the edge creation between $x$ and $y$:\n\n\\begin{enumerate}\n\\item graph distance -- (negated) length of the shortest path between $x$ and $y$\n\\item preferential attachments -- $|\\Gamma(x)| \\cdot |\\Gamma(y)|$ where $\\Gamma: V \\to 2^V$ is a map from nodes to neighbors of nodes.\n\\end{enumerate}\n\nEach of the above predictors $p$ can output a ranked list of most likely edges. The paper evaluates effectiveness by comparing calculating the percentage of edges which are correctly predicted to exists in the test data. The baseline for the paper appears to be a random predictor based on the training graph and the graph distance predictor. The predictors are evaluated over five difference co-authoring networks. =\n\nThe predictors can be classified into essentially three categories:\n\n\\begin{itemize}\n\\item Predictors based on local network structure\n\\item Predictors based on global network structure\n\\item Meta predictors based on a mixture of the above two \n\\end{itemize}\n\nAll predictors performed above the random baseline, on average. The hitting time predictors performed below the graph distance baseline, with a much narrower positive gap for the remaining predictors. Most predictors performed on-par with just a common neighbors predictors.\n\n\\subsection{Introducing ML}\n\nFurther work by Leskovec et al. \\cite{Leskovec:2010:PPN:1772690.1772756} seeks to introduce the nuance of both ``positive'' and ``negative'' relationships to the link prediction problem, addressing limitations of previous work. In concrete, it seeks to predict the sign of each edge in a graph based on the local structure of the surrounding edges. Such predictions can be helpful in determining future interactions between users, as well as determining polarization of groups and communities. \n\nLeskovec et al. introduce the ``edge sign prediction problem'' and study it in three social networks where explicit trust\/distrust is recorded as part of the graph structure, work which is later expanded by Chiang et al. \\cite{Chiang:2011:ELC:2063576.2063742}. The explicit sign of the edges is given by a vote for or a vote against, for example, in the Wikipedia election network. They find that their prediction performance degrades only slightly across these three networks, even when the model is trained on one network and evaluated against another.\n\nThey also introduces social-psychological theories of balance and status and demonstrates that these seems to agree, in some predictions, with the models explored.\n\nFurthermore, they introduces the novel idea of using a machine learning approach built on top of the network features to improve the performance of the model. Rather than rely directly on any one network features, it instead extracts these features from the network and uses them in a machine learning model, achieving great performance. 
The features selected are, roughly speaking:\n\n\\begin{itemize}\n\\item Degree features for pair $(u,v)$ - there are seven such features, which are (1) the number of incoming positive edges to $v$, (2) the number of incoming negative edges to $v$, (3) the number of outgoing positive edges from $u$, (4) the number of outgoing negative edges from $u$, (5) the total number of common neighbors between $u$ and $v$, (6) the out-degree of $u$ and the (7) in-degree of $v$.\n\\item Triad features - We consider 16 distinct triads produced by $u,v,w$ and count how many of each type of triad.\n\\end{itemize}\n\nThe above features are fed into a logistic regression model and are used to relatively successfully predict the sign of unknown edges.\n\nOverall, while previous network predictions problems have attempted to make use of machine learning, most still rely on relatively simple models and have not yet made the jump to deeper architectures.\n\n\\subsection{Content-Based Deep Learning}\nHasan et. al in \\cite{Hasan06linkprediction} introduce the very important idea of using features of the node to assist in link prediction. The paper also significantly expands on the set of possible models to use for ML, demonstrating that for their data, SVMs work the best when it comes to predicting the edge. They formulate their problem as a supervised machine learning problem. Formally, we take two snapshots of a network at different times $t$ and $t'$ where $t' > t$. The training set of generated by choosing pairs of nodes $(u,v)$ which are not connected by an edge in $G_t$, and labeling as positive if they are connected in $G_{t'}$ and negative if they are not connected in $G_{t'}$. The task then becomes a classification problem to predict whether the edges $(u,v)$ is positive or negative. \n\nIn particular,they make use of the following features:\n\n\\begin{itemize}\n\\item Proximity features - computed from the similarity between nodes.\n\\item Aggregated features - how \"prolific\" a scientists is, or other features that belong to each node.\n\\item Network topology features - (1) shortest distance among pairs of nodes, (2) number of common neighbors, (3) Jaccard's coefficient, etc.\n\\end{itemize}\n\nThe authors rigorously describes the sets of features it found the most predictive, and takes into account node-level information extractable from the network as well as some amount of ``meta''-level information (for example, how similar two nodes are to each other). The results demonstrate great success (with accuracies up to 90\\% compared to a baseline of 50\\% or so). Overall, The authors presents a novel approach of using machine learning to assist in the link prediction problem by rephrasing the problem as a supervised learning task.\n\n\\section{Methodology and Data}\nIn this section, we describe the architecture of our feature extraction networks as well as lay the ground work for our predictive models. We define our loss function and presents some additional details used for training, such as learning rate and other hyper-parameters.\n\nWe convert the original data from JSON format to CSV. The data set contains 156,639 businesses (with 101 distinct attributes), 196,278 photos (associated with businesses), 1,028,802 tips (these are between users and businesses), 135,148 check-ins (again, associated with each business), and 1,183,362 users.\n\n\\subsection{Dataset}\nOur dataset is the set released for the Yelp Data Set Challenge Round 10 \\cite{YelpDataSet} in 2017. 
The entirety of the dataset consists of the following entities:\n\\begin{itemize}\n\\item \\textbf{Businesses}: Consists of exactly 156,639 businesses. It contains data about businesses on Yelp including geographical location, attributes, and categories.\n\\item \\textbf{Reviews}: 4,736,897 reviews. It contains full review text (for NLP processing) as well as the user id that wrote the review and the business id the review is written for. It also contains the number of stars given, as well as the number of useful, funny, and cool up-votes (finally, it also contains the date).\n\\item \\textbf{Users}: 1,183,362 Yelp users. It includes the user's friend mapping and all the meta-data associated with the user. Just this single dataset consists contains 538,440,966 edges.\n\\item \\textbf{Tips}: 1,028,802 tips. Tips are associated with each business and are written by users. Tips are similar to reviews, but without rating and usually much shorter.\n\\item \\textbf{Photos:} 196,278 Photos, each associated with businesses. The photos are also associated with captions.\n\\item \\textbf{Check-ins:} 135,148 check-ins on a business (this a business only attribute).\n\\end{itemize}\n\nAs we can see from above, the dataset is relatively rich, with many possible graph structures to study on top of it. In general, given that we are trying to predict review ratings, we focus on the following bipartite graph with users and businesses:\n\n\\begin{figure}[h!]\n\\centering\n\\begin{tikzpicture}[scale=0.2]\n\\tikzstyle{every node}+=[inner sep=0pt]\n\\draw [black] (21.9,-11.9) circle (3);\n\\draw (21.9,-11.9) node {$1$};\n\\draw [black] (21.9,-20.2) circle (3);\n\\draw (21.9,-20.2) node {$2$};\n\\draw [black] (21.9,-28.1) circle (3);\n\\draw (21.9,-28.1) node {$3$};\n\\draw [black] (21.9,-36.9) circle (3);\n\\draw (21.9,-36.9) node {$4$};\n\\draw [black] (44.1,-8.1) circle (3);\n\\draw (44.1,-8.1) node {$1$};\n\\draw [black] (44.1,-8.1) circle (2.4);\n\\draw [black] (44.1,-18.7) circle (3);\n\\draw (44.1,-18.7) node {$2$};\n\\draw [black] (44.1,-18.7) circle (2.4);\n\\draw [black] (44.1,-27.4) circle (3);\n\\draw (44.1,-27.4) node {$3$};\n\\draw [black] (44.1,-27.4) circle (2.4);\n\\draw [black] (44.1,-35.9) circle (3);\n\\draw (44.1,-35.9) node {$4$};\n\\draw [black] (44.1,-35.9) circle (2.4);\n\\draw [black] (44.1,-43.8) circle (3);\n\\draw (44.1,-43.8) node {$5$};\n\\draw [black] (44.1,-43.8) circle (2.4);\n\\draw [black] (24.86,-11.39) -- (41.14,-8.61);\n\\fill [black] (41.14,-8.61) -- (40.27,-8.25) -- (40.44,-9.23);\n\\draw (34.64,-10.84) node [below] {$reviews$};\n\\draw [black] (24.36,-13.62) -- (41.64,-25.68);\n\\fill [black] (41.64,-25.68) -- (41.27,-24.81) -- (40.7,-25.63);\n\\draw [black] (23.94,-14.1) -- (42.06,-33.7);\n\\fill [black] (42.06,-33.7) -- (41.89,-32.77) -- (41.15,-33.45);\n\\draw [black] (24.89,-20) -- (41.11,-18.9);\n\\fill [black] (41.11,-18.9) -- (40.27,-18.46) -- (40.34,-19.46);\n\\draw [black] (24.75,-21.13) -- (41.25,-26.47);\n\\fill [black] (41.25,-26.47) -- (40.64,-25.75) -- (40.33,-26.7);\n\\draw [black] (24.66,-35.72) -- (41.34,-28.58);\n\\fill [black] (41.34,-28.58) -- (40.41,-28.44) -- (40.8,-29.35);\n\\draw [black] (24.35,-29.83) -- (41.65,-42.07);\n\\fill [black] (41.65,-42.07) -- (41.29,-41.2) -- (40.71,-42.01);\n\\draw [black] (24.73,-29.09) -- (41.27,-34.91);\n\\fill [black] (41.27,-34.91) -- (40.68,-34.17) -- (40.35,-35.11);\n\\draw [black] (23.73,-34.52) -- (42.27,-10.48);\n\\fill [black] (42.27,-10.48) -- (41.38,-10.8) -- (42.18,-11.41);\n\\draw [black] 
(24.13,-26.09) -- (41.87,-10.11);\n\\fill [black] (41.87,-10.11) -- (40.94,-10.27) -- (41.61,-11.01);\n\\end{tikzpicture}\n\\caption{Simplified graph model of user reviews of businesses. The graph is bipartite, with users and businesses connected by directed \"review\" edges.}\n\\label{fig:graph_structure}\n\\end{figure}\n\nand further propose making use of the friend-friend explicit graph (it is possible that it might be meaningful to see if we can find any relationship between friend reviews and user reviews) and the tip edges (without ratings, but possible meaningful information about a business). With this additional information, the structure of the graph itself becomes increasingly complex, as shown in Diagram \\ref{fig:graph_complex_structure}.\n\n\\begin{figure}[h!]\n\\centering\n\\begin{tikzpicture}[scale=0.2]\n\\tikzstyle{every node}+=[inner sep=0pt]\n\\draw [black] (28.4,-27.4) circle (3);\n\\draw (28.4,-27.4) node {$1$};\n\\draw [black] (30.9,-16.9) circle (3);\n\\draw (30.9,-16.9) node {$2$};\n\\draw [black] (20.3,-19.9) circle (3);\n\\draw (20.3,-19.9) node {$3$};\n\\draw [black] (21.9,-36.9) circle (3);\n\\draw (21.9,-36.9) node {$4$};\n\\draw [black] (44.1,-8.1) circle (3);\n\\draw (44.1,-8.1) node {$1$};\n\\draw [black] (44.1,-8.1) circle (2.4);\n\\draw [black] (44.1,-18.7) circle (3);\n\\draw (44.1,-18.7) node {$2$};\n\\draw [black] (44.1,-18.7) circle (2.4);\n\\draw [black] (44.1,-27.4) circle (3);\n\\draw (44.1,-27.4) node {$3$};\n\\draw [black] (44.1,-27.4) circle (2.4);\n\\draw [black] (44.1,-35.9) circle (3);\n\\draw (44.1,-35.9) node {$4$};\n\\draw [black] (44.1,-35.9) circle (2.4);\n\\draw [black] (44.1,-43.8) circle (3);\n\\draw (44.1,-43.8) node {$5$};\n\\draw [black] (44.1,-43.8) circle (2.4);\n\\draw [black] (33.4,-15.24) -- (41.6,-9.76);\n\\fill [black] (41.6,-9.76) -- (40.66,-9.79) -- (41.22,-10.62);\n\\draw (41.47,-13) node [below] {$5\\mbox{ }review$};\n\\draw [black] (41.115,-27.585) arc (-94.24841:-162.7529:11.037);\n\\fill [black] (41.11,-27.58) -- (40.35,-27.03) -- (40.28,-28.02);\n\\draw (33.5,-25.71) node [below] {$tip$};\n\\draw [black] (30.21,-19.82) -- (29.09,-24.48);\n\\fill [black] (29.09,-24.48) -- (29.77,-23.82) -- (28.79,-23.59);\n\\draw (28.89,-21.73) node [left] {$friend$};\n\\draw [black] (29.09,-24.48) -- (30.21,-19.82);\n\\fill [black] (30.21,-19.82) -- (29.53,-20.48) -- (30.51,-20.71);\n\\draw [black] (22.5,-21.94) -- (26.2,-25.36);\n\\fill [black] (26.2,-25.36) -- (25.95,-24.45) -- (25.27,-25.19);\n\\draw (21.45,-24.14) node [below] {$friend$};\n\\draw [black] (26.2,-25.36) -- (22.5,-21.94);\n\\fill [black] (22.5,-21.94) -- (22.75,-22.85) -- (23.43,-22.11);\n\\draw [black] (26.71,-29.88) -- (23.59,-34.42);\n\\fill [black] (23.59,-34.42) -- (24.46,-34.05) -- (23.63,-33.48);\n\\draw (24.55,-30.8) node [left] {$friend$};\n\\draw [black] (23.59,-34.42) -- (26.71,-29.88);\n\\fill [black] (26.71,-29.88) -- (25.84,-30.25) -- (26.67,-30.82);\n\\draw [black] (21.62,-33.91) -- (20.58,-22.89);\n\\fill [black] (20.58,-22.89) -- (20.16,-23.73) -- (21.15,-23.64);\n\\draw (21.73,-28.32) node [right] {$friend$};\n\\draw [black] (20.58,-22.89) -- (21.62,-33.91);\n\\fill [black] (21.62,-33.91) -- (22.04,-33.07) -- (21.05,-33.16);\n\\draw [black] (18.908,-36.812) arc (-98.51807:-309.93742:12.583);\n\\fill [black] (18.91,-36.81) -- (18.19,-36.2) -- (18.04,-37.19);\n\\draw (8.57,-18.2) node [left] {$friend$};\n\\draw [black] (18.908,-36.811) arc (-98.5328:-309.92269:12.582);\n\\fill [black] (28.85,-14.72) -- (28.56,-13.82) -- (27.92,-14.59);\n\\draw [black] 
(31.02,-25.95) -- (41.48,-20.15);\n\\fill [black] (41.48,-20.15) -- (40.53,-20.1) -- (41.02,-20.98);\n\\draw (37.8,-23.55) node [below] {$tip$};\n\\draw [black] (20.708,-16.933) arc (166.48658:66.25769:15.116);\n\\fill [black] (41.49,-6.63) -- (40.96,-5.85) -- (40.56,-6.76);\n\\draw (27.15,-6.41) node [above] {$tip$};\n\\draw [black] (24.9,-36.77) -- (41.1,-36.03);\n\\fill [black] (41.1,-36.03) -- (40.28,-35.57) -- (40.33,-36.57);\n\\draw [black] (24.76,-37.79) -- (41.24,-42.91);\n\\fill [black] (41.24,-42.91) -- (40.62,-42.19) -- (40.32,-43.15);\n\\draw (34.33,-39.78) node [above] {$tip$};\n\\draw [black] (24.66,-35.72) -- (41.34,-28.58);\n\\fill [black] (41.34,-28.58) -- (40.41,-28.44) -- (40.8,-29.35);\n\\draw (36.85,-32.7) node [below] {$4\\mbox{ }review$};\n\\end{tikzpicture}\n\\caption{Proposed Complex Graph Models Based on Users, Reviews, Businesses, User-User Interactions, and Tips}\n\\label{fig:graph_complex_structure}\n\\end{figure}\n\n\\subsection{Predictive Models}\nThe rich meta-data about the network makes it quite interested to analyze, and opens up a lot of venues for possible improvements in terms of link prediction. We have multiple networks available for explorations, including \\textit{user-user} network (based on friendships, comments, etc.), \\textit{user-business} network, based on reviews given by a specific business to a user.\n\nFurthermore, we also have the raw text of the Yelp Review as well as geographical information about the business and photos for some businesses, which opens the possibility of using moderns visual image recognition and natural language processing techniques to further extract node-level meta-data to incorporate into our model.\n\nConcretely, we focus our work on predicting the rating that a user will assign a particular business. This problem has immediate and obvious utility: it would be useful to help users discover new businesses to visit (if the predicted rating is high) and also help business determine positive and negative trends. The dataset can be broken into three sets so we can train, evaluate, and test our models. One set will have edges, reviews, and information for businesses for a certain time $[t_0, t_1)$, the second set will have the edges created from $[t_1, t_2)$ and will be used to cross-validate our models and tune hyper-parameters, and the third set will he a hold out containing edges from $[t_2, t_3)$ and will be used for testing only.\n\n\\subsection{Network-Only Predictor}\nWe first present a predictive model which focus ``only'' on the structure of the graph, and uses this information to predict the ratings. For this purposes, we focus on the smaller user\/business graph as shown in Figure \\ref{fig:graph_structure}. We therefore have an undirected, weighed graph. In later sections, we explore alternative representations as well as additional data which can be input to our learning models.\n\n\n\\subsubsection{Data Preprocessing}\nGiven this representation $G$, we define three sets - training, validation, and test. We split the graph naturally -- edges and nodes are added to the graph as time progresses. However, we make special care to only use the nodes which remained and were available in the graph for the extent of our study. We can see the distribution of the reviews (edges) in our graph over time in Figure \\ref{fig:reviews_over_time}. Given the skewed nature of the graph, we subset it to include only the latest reviews. 
Let us consider $G, G_{train}, G_{val}$ and $G_{test}$ where $G = G_{train} \\cup G_{val} \\cup G_{test}$. We first perform the following to obtain $G$:\n\n\\begin{figure}[h!]\n\\centering\n\\includegraphics[width=0.5\\textwidth]{distribution_of_reviews_over_time}\n\\caption{Number of reviews in original dataset as a measure of time. We can see readily that the number of reviews increases drastically in the later years.}\n\\label{fig:reviews_over_time}\n\\end{figure}\n\n\\begin{itemize}\n\\item Remove all reviews before ``2016-08-24''. This is primarily to (1) remove bias from early users and reviewers and instead focus on later reviews (see Figure \\ref{fig:reviews_subset_over_time} for the distribution over time, which is far more uniform) and (2) reduce the size of our graphs to a manageable data set. We then have a graph with 428,795 users, 107,138 businesses, and 1,000,277 edges. We therefore have an extremely sparse graph, as only 0.0103109977941166\\% of all possible edges even exist.\n\\end{itemize}\n\n\\begin{figure}[h!]\n\\centering\n\\includegraphics[width=0.5\\textwidth]{distribution_of_reviews_subset_over_time}\n\\caption{Number of reviews by date for $G$}\n\\label{fig:reviews_subset_over_time}\n\\end{figure}\n\nWe now proceed to split the graph into $G_{train}, G_{test}$ and $G_{val}$. We split time-wise using three split-points, $t_0, t_1,$ and $t_2$. Then we have $G_{train}$ as the subset of $G$ from $[t_0, t_1)$ and $G_{val}$ as the subset in $[t_1, t_2)$ with $G_{test}$ containing the subset to the latest date $[t_2, \\infty)$. Furthermore, we set all nodes in $G_{val}$ and $G_{test}$ to be the same set as those in $G_{train}$ to avoid running into issues with unseen nodes in the network.\n\nAfter the above, we end up with the following networks:\n\\begin{itemize}\n\\item $G_{train} = (V_{train}, E_{train})$ with $|V_{train}| = 375,149$ where we have $283,085$ users and $92,064$. We also have $|E_{train}| = 599,133$, which is an incredibly sparse graph given (only $0.00143945169412\\%$ of edges exists, even taking into account the bipartite structure of the graph).\n\\item $G_{val} = (V_{val}, E_{val})$ with $|V_{val}| = 75,466$ and $|E_{val}| = 88,079$.\n\\item $G_{test} = (V_{test}, E_{test})$ with $|V_{test}| = 67.125$ and $|E_{val}| = 73,730$.\n\\end{itemize}\n\nNote that we've split the data essentially into an $80\\%, 10\\%, 10\\%$ split. For more details on the graph structures, see Appendix \\ref{sec:graph_distributions}. Given the sparsity of the graph, we focus on predicting the star rating given that an edge is created between user $u$ and business $b$. As such, our dataset does not contain any negative examples. We leave this predictive problem for open investigation. Furthermore, given the extreme size of our data, we process and train our models using Google Compute Engine with 8 CPUs and 30GB of memory.\n\n\\subsubsection{Graph Features}\nNow that we have partitioned our data into training, validation, and testing, we move forward with calculating some rating prediction scores. We first focus on calculating multiple properties from our generated graph. In fact, we calculate the following:\n\n\\begin{itemize}\n\\item \\textbf{Number of Common Raters:} For each pair $(u,b)$ of user and business, we calculate the number of common raters. 
A common rater is an extension of neighbors, in the sense that this is someone who has also rated $b$.\n\\item \\textbf{Number of Common Business:} For each pair $(b,u)$ of user and business, we calculate the number of common businesses. A common business is an extension of neighbors, in the sense that this is a business someone who has also rated $b$.\n\\item \\textbf{Average Rating of Common Raters:} For each pair $(u,b)$ of user and business, we calculate the number of common raters. A common rater is an extension of neighbors, in the sense that this is someone who has also rated $b$.\n\\item \\textbf{Average Rating of Common Business:} For each pair $(u,b)$ of user and business, we calculate the number of common raters. A common rater is an extension of neighbors, in the sense that this is someone who has also rated $b$.\n\\item \\textbf{Preferential Attachment}: We take the product of the average star rating of businesses rated by $u$ and the average star rating of raters of business $b$. We expect this value to indicate the relative popularity.\n\\item \\textbf{Page Rank:} We treat the graph as an unweighed undirected graph and calculate the page rank value for all nodes and assign their sum as a feature.\n\\item \\textbf{Eigenvector Centrality}: We calculate the global centrality of a node (compared to its neighbors) and use the sum as a feature.\n\\item \\textbf{Adamic-Adar measure}: We look at common neighbors (as defined previously) and sum the inverse of the sum of their degrees (considering the graph to be weighed). Intuitively, this creates a measure for similarity where nodes with the same degreed neighbors are more similar.\n\\end{itemize}\n\n\nOnce calculate for our training, validation, and test data sets, all of the features are normalized to have unit mean and unit variance as is standard practice in machine learning problems.\n\n\\subsubsection{Models for Prediction}\nWe now present and describe the machine learning models used from the extracted features. Let $X$ be our training matrix, which is of shape $(n,d)$ where $n$ is the number of training examples and $d$ is the number of features extracted (in our case, $d = 9$) and $n = 599,133$. The most straight forward approach is simply to integrate our extracted features individually and directly train our models to predict the ratings, in a scale from $0$ to $5$. We now present the models we attempted.\n\n\\begin{enumerate}\n\\item \\textbf{Linear Regression}: We attempt to fit a standard linear regressors to our input feature set. That is to say, our model takes the form of $r_i = \\sum_{d=1}^{D} w_d x_{id}$ where $x_i$ is a single feature vector in our training set and $r_i$ is the corresponding rating. We train the model directly using the generated data from above and directly on the raw ratings for each edge. Linear regression is a simple model which attempts to minimize the mean square error, and can be thought of as a data generating process where we assume the ratings $r$ are generated by $r = W^TX + b + \\epsilon$ where $\\epsilon \\sim N(0,\\sigma)$ is some noise introduced into the system. The models is then able to recover the best plane of fit $W$ such that the error is minimized. 
In terms of loss functions, we can consider this as minimizing the loss function $L:$\n\\begin{align*}\nL(W,b; X,y) &= \\sum_{i=1}^{|X|} (\\hat{y}_i - y_i)^2 \\\\\n&=\\sum_{i=1}^{|X|} (w_i^Tx_i + b_i - y_i)^2 \\\\\n&=\\sum_{i=1}^{|X|} \\left(\\sum_{d = 1}^{|x_i|} w_dx_{id} + b_i - y_i\\right)^2\n\\end{align*}\nwhere $x_i$ is a the $i$-th row in our feature matrix $X$. We minimize over the parameters $W$.\n\\item \\textbf{Ridge Regression}: This is an improvement of linear regression. A possible issue with normal linear regression is that the possibility of over-training on the training set. It is possible to generate an extremely ``peaky'' set of weights such that the training error is reduced significantly yet the test error increases. The issue here is that we lack any term enforcing generalization in our loss function. The most typical method to enforce this generalization is to add a regularizer to the weights $W$. The loss function then becomes:\n\\begin{align*}\nL(W, b; X,y) &= \\sum_{i=1}^{|X|} (\\hat{y}_i - y_i)^2 + \\alpha|W|\\\\\n&=\\sum_{i=1}^{|X|} (w_i^Tx_i + b_i - y_i)^2 + \\alpha \\left|\\sum_{i,j} W_{ij}^2\\right|\\\\\n&=\\sum_{i=1}^{|X|} \\left(\\sum_{d = 1}^{|x_i|} w_dx_{id} + b_i - y_i\\right)^2 + \\alpha \\left|\\sum_{i,j} W_{ij}^2\\right|\n\\end{align*}\n\nThe above encourages the model to minimize the squared loss for the training data while still maintaining a relatively sparse matrix $W$. This further prevent values in $W$ from becoming too large. In our case, we find $\\alpha = 0.0001$ to be the optimal hyper-parameter (tuned on the validation set).\n\\item \\textbf{Bayesian Regression}: Bayesian regression is essentially equivalent to ridge regression, but it is self-regularizing -- this means we do not need to choose an optimal parameter $\\alpha.$ The theory behind Bayesian regression is to consider finding the parameters $W$ in our mode $y = WX$ which maximize the model probability. Given Bayes' rule, we have:\n\n\\begin{align*}\nP(W,b \\mid X) &= \\frac{P(X \\mid W,b)P(W,b)}{P(X)} \\\\\n&\\propto P(X \\mid W,b)P(W,b) \\\\\n\\end{align*}\nIf we consider the case where $P(W,b) \\sim N(\\mu, \\Sigma)$, then we arrive at ridge regression. We use this Bayesian model to also directly predict our ratings $r$. We optimize the above using the ADAM gradient descent optimizer where we use $\\alpha_1 = \\alpha_2 = \\lambda = \\lambda_2 = 0.000001$. The parameters are not tuned using the validation set due to lack of computational resources.\n\n\\item \\textbf{Deep Neural Networks}: The latest research has had great success using ``deep learning'' to extract more details from the data and to learn the values and result more directly. We make use of this approach by constructing a relatively shallow network consists of a fully connected layer with 200 neurons, followed by a second fully-connected layer with 40 neurons, followed by a fully connected layer of 8 neurons, and a final fully connected layer of 2 neurons.\n\nGiven the recent effectiveness in a large range of tasks of this model, we expect that it will likewise be useful for rating prediction.\n\nThis gives us a total of $200x(9 + 1) + 40x200 + 8x40 + 2x8$ with a relu nonlinearity:\n$$\nrelu(x) = \\max(0,x)\n$$\nWe use a final softmax at the end to generate the distribution of ratings.\n\nWe use Adam to perform gradient descent (with parameters $\\beta_1 = 0.9$ and $\\beta_2 = 0.999$ and $\\epsilon = 1\\times10^{-8}$) on the loss function with a regularization factor or $\\alpha = 0.0001$ and batch size of $200$. 
We maintain a constant learning rate of $0.001$, randomly shuffle the input data. The parameters are selected based on past experience with neural network training and are not optimized using cross-validation or the validation set. \n\n\\item \\textbf{Random Forest}: We make use also of a random forest estimator. The random forest is a meta estimator that fits a number of classifying decision trees on various sub-samples of the dataset and uses averaging to improve the predictive accuracy and control over-fitting. We generate the sub-samples by sampling from the original data set with replacement. We use a total of 100 estimators, where we look at all features in the data set when considering the best split. \n\\end{enumerate}\n\n\\subsection{Method Evaluation}\n\nFor all of the above approaches, we evaluate our effectiveness on the validation set and use this to tune our hyper-parameters (ie, network size, learning rate, etc.). In the end, we evaluate the results on the test set (previously unseen and untouched by our models) and make predictions for ratings in the seen edges. \n\nWe evaluate our models across three metrics:\n\n\\begin{itemize}\n\\item The root mean squared error. This evaluates how close our predictions achieve our desired ratings:\n$$\nRMSE = \\sqrt{\\frac{1}{|N|}\\sum_{i = 1}^{|X|} (\\hat{y}_i - y_i)^2}\n$$\n\\item The relative error. This is a metric that evaluates, on average, how wrong our star rating is compared to the true star rating. We take the method to be more indicate of improvements in our algorithms and with our data extraction. Formally, we define the relative error as:\n$$\nRELERROR = 100*\\frac{1}{|X|}\\sum_{i=1}^{|X|} \\frac{|\\hat{y_i} - y_i|}{\\max_{i=1}^{|X|} \\max\\{\\hat{y_i}, \\hat{y}\\}}\n$$\n\\item The last metric we use for evaluating our regression models is the $R^2$ score. This gives us a way to evaluate our models against other models in literature, as is standard across regression problems. The best possible score is 1.0. The score ranges from $(-\\infty, 1.0]$, where the worse a model is the more negative the value. Note that in the case where we have a model which simply predicts the constant expected value of the final output (disregarding any input features):\n$$\nE[Y] = \\frac{1}{|X|}\\sum_{i=1}^{|X|} y_{i}\n$$\nwe will have a score of $0.0$. The formula for computing this score is:\n$$\nR^2 = 1 - \\frac{\\sum_{i=1}^{|X|}(y_i - \\hat{y}_i)^2}{\\sum_{i=1}^{|X|} \\left(y_i - \\frac{1}{|X|}\\sum_{i=1}^{|X|}y_i\\right)^2}\n$$\n\n\\end{itemize}\n\n\\subsection{Extracting Item Features}\n\nGiven our results (see Results section) from the above models, we continue forward with our deep neural network. We begin by augmenting the data available for the business nodes. \n\n\\begin{itemize}\n\\item We make use of the pre-trained SqueezeNet network included in the PyTorch model zoo for visual image processing. We first down-sample the images to the expected 256x256x3 input (we do this simply by cropping and averaging pixel values over regions mapping into the 256x256xspace).\n\\item We can then feed these smaller images directly into the pre-trained squeezenet (see Figure \\ref{fig:squeezenet_architecture} for architecture) which has been modified to remove the final soft-max layer (and instead we produce a vector in $\\mathbb{R}^1000$.\n\\item For a business $b$, we take the $p_i^b \\in \\mathbb{R}^1000$ and compute their mean. 
We use this embedding as a representation of the business.\n\n\\item Furthermore, we make use of the pre-trained word-embedding and take the business description and generate a small 256-dimensional vector.\n\\item We concatenate the above vectors into a 1000 + 256 + 9 vector, which we take as input into a modified neural net.\n\\end{itemize}\n\n\\begin{figure}[h!]\n\\centering\n\\includegraphics[width=0.5\\textwidth]{squeezenet.jpg}\n\\caption{Original SqueezeNet Architecture. We modify it to remove the final soft-max layer and instead output a $1000$ embedding for our images.}\n\\label{fig:squeezenet_architecture}\n\\end{figure}\n\nThe meat of the model consists of a neural net which takes a input as 1265-length vector for each $(u,b)$ pair and runs through through a single layer with 200 hidden units (so number of parameters is 200x1265). We then take this and feed it into the successful network described in the previous section and evaluate it in the same way as described before.\n\n\\subsubsection{Training and Date}\n\nDue to the large size of the above networks, we subset the data significantly into a much smaller amount of only ~15k reviews. We select the businesses with the most photos as the candidates to subset by, and make sure we take the reviews which include these businesses. With the reduced data size, we are able to successfully train our specified model end-to-end and achieve a marginal improvement over our previous models.\n\n\n\\section{Results and Discussion}\nIn this section, we presents the results on our test set and provide some discussion as to the extent to which our models successfully predicted Yelp review ratings.\n\n{\\renewcommand{\\arraystretch}{2}%\n\\begin{table*}[]\n\\centering\n\\caption{Supervised Training Results on Training Set}\n\\label{table:trainint_set_results}\n\\begin{tabular}{|l|lll}\n\\hline\n\\textit{\\textbf{Model}} & \\multicolumn{1}{l|}{\\textbf{RMSE}} & \\multicolumn{1}{l|}{\\textbf{RELERROR}} & \\multicolumn{1}{l|}{\\textbf{R\\textasciicircum 2}} \\\\ \\hline\n\\textit{Baseline} & 1.50142049076 & 25.7312431633 & 0.0 \\\\ \\cline{1-1}\n\\textit{Linear Regression} & 1.29409210615 & 20.397681968 & 0.257107970487 \\\\ \\cline{1-1}\n\\textit{Ridge Regression} & 1.29409210617 & 20.3976788744 & 0.257107970462 \\\\ \\cline{1-1}\n\\textit{Bayesian Regression} & 1.29409213097 & 20.3975770983 & 0.257107941987 \\\\ \\cline{1-1}\n\\textit{Neural Network} & 1.26509191767 & 18.7831282852 & 0.290030838364 \\\\ \\cline{1-1}\n\\textit{Random Forest} & \\textbf{0.749654164334} & \\textbf{10.0184313324} & \\textbf{0.750702893173} \\\\ \\cline{1-1}\n\\textit{\\textbf{Business Features}} & 1.24943163247 & 16.2852635532 & 0.32123445344 \\\\ \\cline{1-1}\n\\end{tabular}\n\\end{table*}\n}\n\n\n{\\renewcommand{\\arraystretch}{2}%\n\\begin{table*}[t]\n\\centering\n\\caption{Supervised Training Results on Validation Set}\n{}\\label{table:validation_set_results}\n\\begin{tabular}{|l|lll}\n\\hline\n\\textit{\\textbf{Model}} & \\multicolumn{1}{l|}{\\textbf{RMSE}} & \\multicolumn{1}{l|}{\\textbf{RELERROR}} & \\multicolumn{1}{l|}{\\textbf{R\\textasciicircum 2}} \\\\ \\hline\n\\textit{Baseline} & 1.42776997765 & 24.1191750901 & 0.0 \\\\ \\cline{1-1}\n\\textit{Linear Regression} & 1.18380441761 & 18.2853412809 & 0.327892972837 \\\\ \\cline{1-1}\n\\textit{Ridge Regression} & 1.18380391943 & 18.2853539128 & 0.312515072251 \\\\ \\cline{1-1}\n\\textit{Bayesian Regression} & 1.18378759889 & 18.2857790551 & 0.312534028173 \\\\ \\cline{1-1}\n\\textit{Neural Network} & 
\\textbf{1.16442192777} & \\textbf{16.0869919882} & \\textbf{0.334842664919} \\\\ \\cline{1-1}\n\\textit{Random Forest} & 1.18801209881 & 18.6598898159 & 0.307618649842 \\\\ \\cline{1-1}\n\\textit{\\textbf{Business Features}} & 1.14952444234 & 14.8854451849 & 0.35986245424 \\\\ \\cline{1-1}\n\\end{tabular}\n\\end{table*}\n}\n{\\renewcommand{\\arraystretch}{2}%\n\\begin{table*}[t]\n\\centering\n\\caption{Supervised Training Results on Test Set}\n\\label{table:test_set_results}\n\\begin{tabular}{|l|lll}\n\\hline\n\\textit{\\textbf{Model}} & \\multicolumn{1}{l|}{\\textbf{RMSE}} & \\multicolumn{1}{l|}{\\textbf{RELERROR}} & \\multicolumn{1}{l|}{\\textbf{R\\textasciicircum 2}} \\\\ \\hline\n\\textit{Baseline} & 1.4634860104 & 24.7465614817 & -0.000855120695457 \\\\ \\cline{1-1}\n\\textit{Linear Regression} & 1.19928440313 & 18.5669587686 & 0.327892972837 \\\\ \\cline{1-1}\n\\textit{Ridge Regression} & 1.19928405085 & 18.5669755633 & 0.327893367682 \\\\ \\cline{1-1}\n\\textit{Bayesian Regression} & 1.19927256299 & 18.567542783 & 0.327906243749 \\\\ \\cline{1-1}\n\\textit{Neural Network} & \\textbf{1.1838237529} & \\textbf{16.3219078547} & \\textbf{0.34511029369} \\\\ \\cline{1-1}\n\\textit{Random Forest} & 1.19281377927 & 18.7137709175 & 0.335125985417 \\\\ \\cline{1-1}\n\\textit{\\textbf{Business Features}} & 1.1694425252 & 15.4556500245 & 0.35111454552 \\\\ \\cline{1-1}\n\\end{tabular}\n\\end{table*}\n}\n\nWe now present the final results from our models, each evaluated on the test set. We compare the different methods used, and discuss their differences and possible improvements. The main results are presented in Table \\ref{table:test_set_results}, with the validation data set results in Table \\ref{table:validation_set_results} and the training data set results in Table \\ref{table:trainint_set_results}. \n\nWe have implemented a complete end-to-end pipeline which begins with the (1) raw Yelp JSON data, (2) construct training, validation, and testing graphs over the data by our pre-determined timescales, (3) extracts training, validation, and testing sets from the graphs generated and computes a variety of network properties to be used by our machine learning models and (4) trains a variety of machine learning models on the extracted data and tunes their hyper-parameters when possible, (5) culminating in the evaluation of the models on the known results from the test set. We implements this work, wrote optimized code for feature extraction, and built our networks. Everything was implemented ourselves with the use of SNAP, Python, scikit-learn, and PyTorch -- the code base is publicly available at GitHub \\footnote{https:\/\/github.com\/kandluis\/cs224w-project}.\n\nThe major challenge faced when experimenting was the sheer size of the dataset -- even after sub-setting the data to a more manageable size in the millions and using the extremely powerful Google Compute Engine to add additional memory and processing power, more complex models such as random forest and the convolution neural networks could take in the order of days to fully train. Even extracting the word embeddings and pre-trained image feature vectors with SqueezeNet and ResNet would alone take a significant amount of time -- so much so that it proved unfeasible to do for a large portion of the dataset.\n\nAs such, as described in our Methods section, we were able to sample only approximately one years worth of data from the Yelp review network. 
However, despite this, we were nonetheless able to train the network predictors on over 500k training samples (reviews) which contained over 280k users and over 92k businesses (for more than 375k nodes), and validate and test our network models on over 88K and 73K examples respectively.\n\nFinally, to evaluate the performance of all of our models we make use of RMSE, relative error, and the score function defined in our methods. Our results can be seen in Table \ref{table:trainint_set_results}, Table \ref{table:test_set_results}, and Table \ref{table:validation_set_results}.\n\n\section{Discussion}\nWe begin the discussion by analyzing some network properties.\n\n\subsection{Summary Statistics}\n\subsubsection{Users}\nWe present an overview of the user meta-data. In Figure \ref{fig:user_characteristics}, we can see that multiple characteristics of the users follow power-law distributions -- and not just the node degrees. This is to say that the distribution can be modeled as $P(k) \propto k^{-\gamma}$ (we sketch below how such an exponent can be estimated from the observed counts). The power-law distribution is immediately evident in:\n\n\begin{itemize}\n\item Number of reviews -- we have a few users that write many reviews and many users that write few reviews.\n\item Number of friends -- this means the friendship network itself follows a true power-law distribution.\n\item useful\/funny\/cool\/fans -- this appears to demonstrate that social ranking\/status also follows a power-law distribution in the Yelp social network. This trend is further demonstrated by Figure \ref{fig:user_compliment_distribution}.\n\end{itemize} \n\n\begin{figure}[h!]\n\centering\n\includegraphics[width=0.5\textwidth]{distribution_user_characteristics}\n\caption{Frequency of countable user characteristics -- the majority exhibit power-law distributions}\n\label{fig:user_characteristics}\n\end{figure}\n\nFurthermore, we can look at the average rating given by users across the network. The results are shown in a log plot in Figure \ref{fig:user_rating_distribution}.\n\n\begin{figure}[h!]\n\centering\n\includegraphics[width=0.5\textwidth]{average_user_rating_distribution}\n\caption{Distribution of Average User Rating}\n\label{fig:user_rating_distribution}\n\end{figure}\n\nWe notice that the ratings tend to be inflated, with 3-5 stars being quite frequent while 1-2 stars are very infrequent. Presumably this is because people do not frequent poor restaurants. The other aspect that is immediately apparent is the spikes at whole-number ratings -- these are likely due to users who have rated only once, of which we have many.\n\n\begin{figure}[h!]\n\centering\n\includegraphics[width=0.5\textwidth]{compliement_type_distribution}\n\caption{Distribution of Received User Compliments}\n\label{fig:user_compliment_distribution}\n\end{figure}\n\n\subsubsection{Businesses}\nWe present an overview of the business meta-data. In Figure \ref{fig:business_review_distribution}, we can see that the power-law distribution is also respected on the business side. 
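As referenced above, the following is a minimal sketch (not part of our pipeline) of how the exponent $\gamma$ in $P(k) \propto k^{-\gamma}$ could be estimated from observed counts such as per-user review totals, using the standard continuous maximum-likelihood (Hill-type) estimator; the cutoff $k_{\min}$, the variable names, and the usage shown in the comment are illustrative assumptions rather than part of our implementation.
\begin{verbatim}
import numpy as np

def fit_power_law_exponent(counts, k_min=1):
    # Continuous MLE for gamma in P(k) ~ k^{-gamma}:
    # gamma_hat = 1 + n / sum(log(k_i / k_min)), using only counts k_i >= k_min.
    k = np.asarray([c for c in counts if c >= k_min], dtype=float)
    if len(k) == 0:
        raise ValueError("no observations above k_min")
    return 1.0 + len(k) / np.sum(np.log(k / k_min))

# Hypothetical usage on per-user review counts extracted from the graph:
# gamma_hat = fit_power_law_exponent(review_counts, k_min=5)
\end{verbatim}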
Furthermore, we can also see that businesses tend to be rated quite highly on average, with most businesses having an average rating in the range 3-5 (see Figure \ref{fig:business_star_distribution}).\n\n\begin{figure}[h!]\n\centering\n\includegraphics[width=0.5\textwidth]{business_review_count_distribution}\n\caption{Business Review Distribution}\n\label{fig:business_review_distribution}\n\end{figure}\n\n\begin{figure}[h!]\n\centering\n\includegraphics[width=0.5\textwidth]{business_star_distribution}\n\caption{Business Average Star Distribution}\n\label{fig:business_star_distribution}\n\end{figure}\n\n\subsection{Review Prediction}\nWe continue our discussion by focusing on the results of our model predictions.\n\nWe can see that extracting network-only data and using machine learning models to fit the ratings performs relatively well, even with simple, un-regularized linear regression. It appears that the features we selected, for example the nearest neighbors and the average ratings, are quite effective at capturing both network properties and user ratings. We did not need to extend the network to include further metadata information, and the results were nonetheless quite good, especially when compared to our non-trivial baseline.\n\nFurthermore, we note that our feature extraction proved extremely effective at generalizing across models. We see that, in particular, the deep neural network and the random forest models both performed extremely well. It is interesting to note that the random forest model appears to have over-fit the data by a significant margin: it appeared promising on the training set but did not pan out when we took the model to unseen data. However, we note that the neural net performed the best -- this appears to lend credence to the idea that the function learned is inherently non-linear, at least in the feature space we selected. This is somewhat counter to what we originally hypothesized, since all of the original features are approximately on the same scale as the ratings and would, intuitively, appear to predict the ratings rather directly. This idea is supported by the t-SNE embedding in Figure \ref{fig:tsne_embedding}, where we embed our test set in a lower dimensional space (given the features extracted) and color each point based on the rating given ($1,2,3,4,5$). \n\n\begin{figure}[h!]\n\centering\n\includegraphics[width=0.5\textwidth]{TSE.png}\n\caption{t-SNE embedding of $G_{test}$}\n\label{fig:tsne_embedding}\n\end{figure}\n\n\nThis intuitively gives us a good foundation for why the features we chose appear to be so well correlated with the ratings. Furthermore, we note that the embedding shows a clearly non-linear structure, which appears to corroborate the result that our neural network performed the best. This seems to imply that a neural network would be the best approach to disambiguate between the possible ratings a user might assign to a business. We found the issue of predicting the lack of edges to be somewhat more nuanced and subtle, though initial experiments with this approach proved promising.\n\nAnother promising aspect involves using the photographic and textual descriptions of businesses as input features to our predictive models. Despite making use of pre-trained word-embeddings and pre-trained image models, the computational cost of these models proved extreme for our dataset. 
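For reference, the following is a minimal sketch of the per-business photo embedding step described in our Methods section; it assumes torchvision's pre-trained SqueezeNet (squeezenet1\_1) with standard ImageNet normalization, and the helper name and preprocessing details are illustrative rather than our exact implementation.
\begin{verbatim}
import torch
from torchvision import models, transforms
from PIL import Image

# Preprocessing: resize to the 256x256x3 input used in the text, then normalize.
preprocess = transforms.Compose([
    transforms.Resize((256, 256)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

squeezenet = models.squeezenet1_1(pretrained=True)
squeezenet.eval()  # the forward pass ends in 1000 class scores; no soft-max is applied

def business_photo_embedding(photo_paths):
    """Average the 1000-dimensional SqueezeNet outputs over a business's photos."""
    vecs = []
    with torch.no_grad():
        for path in photo_paths:
            img = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
            vecs.append(squeezenet(img).squeeze(0))   # vector in R^1000
    return torch.stack(vecs).mean(dim=0)
\end{verbatim}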
Subsetting to a smaller set showed some initial results which appeared positive; however, the smaller dataset makes it difficult to determine whether the additional information was accurately fed into the models and used effectively. It does appear that the networks performed well overall, and at least learned to make use of the features from the images.\n\n\section{Conclusion}\nIn this paper, we have presented a novel approach towards review rating prediction in a subset of the Yelp review network. We have investigated the effectiveness of network-only machine learning models, identifying 9 key structural features of the network that proved effective in predicting the weight of unseen edges when used with supervised learning algorithms. We demonstrated that a deep neural net, even with this limited feature set, was the most effective and most general approach. Furthermore, we performed early experiments in making use of non-network features to improve the predictions of the neural network. We did this by creating a pipeline, building on previous work, where for each $(u,b)$ pair the business descriptions were converted into their respective word-embeddings using the popular word2vec network, followed by an RNN which outputs a fixed-size 256-dimensional feature vector for each business. Furthermore, we selected key images from the business and photo dataset provided by Yelp and ran them through the pre-trained SqueezeNet network with the final classification layer removed to generate a 1000-dimensional feature vector per image. These feature vectors were then averaged and fed as additional input into a final fully-connected neural network. These preliminary results showed marginal improvement in the accuracy of the results. This shows that our models are able not only to capture higher-order relationships within the Yelp review network, but also to make use of (and build on) features specific to each node.\n\n\n\section{Future Work}\nThe project could be continued in several directions. In particular, we could continue to follow the example set by \cite{PintrestProject} and consider some of the temporal features of the graph structure. They proposed using a sliding window approach to achieve improved accuracy in link-prediction, which could easily be modified to support review prediction in the Yelp network.\n\nFurthermore, our preliminary work incorporating deep convolutional neural nets and recurrent neural nets to extract feature embeddings for the businesses has demonstrated some capability to improve the predictive power of our models. Further work could be done in this area by, rather than extracting static embeddings, incorporating the visual and textual networks into an end-to-end model which could tweak the learned weights for visual and textual processing in order to better capture how these features relate to the ratings given to businesses by users. Furthermore, we would like to see further work on whether user features can similarly be used to improve performance -- for example, finding embeddings of users based on their features and using these embeddings as inputs to our model.\n\nLastly, there is additional work to be done to incorporate even more graph features into the predictive model. 
Given the effectiveness of the network structure itself at predicting the values of unseen ratings alone, we would like to explore further network features and models and see how this additional information can improve our models. This can include incorporating the information about tips -- we would expect someone that has given a tip to be more likely to rate the business positively (or negatively).\n\nIn any case, we believe that there is yet much work to be done in this field and many potential interesting developments in the area of combining non-network features with network features.\n\n\n\\section{Acknowledgments}\nThe author would like to thank Jure Leskovec and the rest of the CS224W staff for their continued guidance and advice during the course of the project. In particular, for emphasizing a focus on network properties.\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\\label{sec:intro}\nModeling the regression relationship between a response $Y$ and a multivariate Euclidean predictor vector $\\tbf{X}$ corresponds to specifying the form of the conditional means $h(\\tbf{x})=\\mbb{E}{(Y|\\tbf{X} = \\tbf{x})}$ and higher dimensionality of $\\tbf{X}$ can be problematic when one is interested to go beyond the standard multiple linear model. This provides strong motivation to consider regression models that provide dimension reduction, and single index models are one of the most popular approaches to achieve this, under the assumption that the influence of the predictors can be collapsed to an index, i.e., a projection on one direction complemented by a nonparametric link function. This reduces the predictors to a univariate index while still capturing relevant features of the high-dimensional data and is thus not subject to the curse of dimensionality. This model generalizes linear regression (where the link function is the identity) and is a special case of nonparametric regression \\citep{heck:86, rice:86, rupp:03}, which in its general form is subject to the curse of dimensionality. For a real-valued response $Y$ and a $p$-dimensional predictor $\\tbf{X}$, the semiparametric single index regression model is \n\\begin{equation}\\begin{gathered} \\label{model:real}\\mbb{E}{(Y|\\tbf{X}= \\tbf{x})} = m(\\tbf{x} ^{\\top} {\\boldsymbol{\\bar{\\theta}_0}}). \\end{gathered}\\end{equation} \nIn model~\\eqref{model:real},\nthe dependence between $Y$ and $\\tbf{X}$ is characterized by the conditional mean, i.e., \nthe conditional mean is a function of ${\\boldsymbol{\\bar{\\theta}_0}},$ and this reduces the dimensionality of the predictor from $p$ to essentially 1. \n\nThe function $m(\\cdot)$ is nonparametric and thus includes location and level changes, and therefore the vector $\\tbf{X}_i$ cannot include a constant that would serve as intercept. For identifiability reasons, ${\\boldsymbol{\\bar{\\theta}_0}}$ is often assumed to be a unit vector with non-negative first coordinate. A second approach that has been used is to require one component to equal one. This presupposes that the component that is set to equal 1 indeed has a non-zero coefficient \\citep{lin:07,cui:11, ichi:93}.\nModel (\\ref{model:real}) is only meaningful if the Euclidean predictor vector $\\tbf{X}_i$ is of dimension $2$ or larger. 
If $\\tbf{X}_i$ is one-dimensional, the\ncorresponding special case of the model is the one-dimensional nonparametric regression $\\mbb{E}{(Y|X =x)} = m(x)$, which does not feature any parametric component.\n\nDue to its flexibility, the interpretability of the (linear) coefficients and the nonparametric link function $m(\\cdot)$, as well as due to its wide applicability in many scientific fields, the classical single index regression model with Euclidean responses has attracted attention from the scientific community for a long time. The coefficient ${\\boldsymbol{\\bar{\\theta}_0}}$ that defines the single index $\\tbf{x} ^{\\top} {\\boldsymbol{\\bar{\\theta}_0}}$ along with the shape of the nonparametric component $m(\\cdot)$ characterizes the relationship between the response and the predictor. \nThe parametric component ${\\boldsymbol{\\bar{\\theta}_0}}$ is \n of primary interest\nin this model. \nThe problem of recovering the true direction ${\\boldsymbol{\\bar{\\theta}_0}}$ can be viewed as a subclass of sufficient dimension reduction (SDR) techniques, where identifying the central subspace of $\\tbf{X}$ that explains most of the variation in $Y$ has been a prime target \\citep{chen:98, li:89,cook:94, li:07}. \n\nIn addition to sufficient dimension reduction techniques, a multitude of related approaches to estimate ${\\boldsymbol{\\bar{\\theta}_0}}$ in \\eqref{model:real} have been studied. These include projection pursuit regression (PPR) \\citep{frie:81, hall:89}, the average derivative approach \\citep{hard:89a, stok:86, fan:95, powe:86}, sliced inverse regression (SIR) \\citep{li:91, cook:91}, conditional minimum average variance estimation (MAVE) \\citep{xia:09} and various other methods \\citep{wang:10, xia:99, xia:06, xia:07, lian:10,yu:02}.\nVarious approaches focused on nonparametric estimation of the link function to recover the index parameter in \\eqref{model:real} \\citep{hard:93, huh:02, hris:01} along with partially linear versions \\citep{carr:97,cui:11,ichi:93}, various noise models \\citep{chan:10a,wang:10}\n\n\nVarious extensions of single index regression have been considered more recently \\citep{zhao:20,kere:20}, including models with multiple indices or \nhigh- dimensional predictors \\citep{zhu:09b,zhou:08, kuch:17, kuch:20} and longitudinal and functional data as predictors \\citep{jian:11, chen:11, ferr:11, novo:19}. However, none of these extensions considers the case where responses are not in a Euclidean vector space, even though this case is increasingly important for data application. An exception is \\cite{ying:20}, who considered extending sufficient dimension reduction approaches for the case of random objects.\nThis lack of available methodology for single index models with random object responses motivates our approach.\nNon-Euclidean complex data structures arising in areas such as biological or social sciences are becoming increasingly common, due to technological advances have made it possible to record and efficiently store time courses of images \\citep{peyr:09, gonz:18}, shapes \\citep{smal:12} or networks \\citep{tsoc:04},\nin addition to sensor data and other complex data. 
For example, one might be interested in the functional connectivity quantified as correlation matrices obtained from neuroimaging studies to study the effect of predictors on brain connectivity, an application that we explore further below.\n\n\nOther examples of general metric space objects include probability distributions \\citep{deli:17}, \nsuch as age-at-death distributions as observed in demography or network objects, such as internet traffic networks. Such ``object-oriented data'' \\citep{marr:alon:14} or ``random objects'' \\citep{mull:16} can be viewed as random variables taking values in a separable metric space that is devoid of a vector space structure and where only pairwise distances between the observed data are available. Existing methodology for single index models as briefly reviewed above assumes that one has Euclidean responses, and these methods rely in a fundamental way on the vector space structure of the responses. When there is no linear structure, new methodology is needed and this paper contributes to this development. \n\nThe natural notion of a mean for random elements of a metric space is the Fr\\'echet mean \\citep{frec:48}, which is a direct generalization of the standard mean, and is defined as the element of the metric space for which the expected squared distance to all other elements, the so-called Fr\\'echet function, is minimized. Depending on the space and metric, Fr\\'echet means may or may not exist as unique minimizers of the Fr\\'echet function. Fr\\'echet regression is an extension of Fr\\'echet means to the notion of conditional Fr\\'echet means, and has been recently studied in several papers \\citep{pete:19, dube:19, pete:19b,chen:18}, including both global and local versions. \n\nIn this paper, we introduce a novel method for the single Index Fr\\'echet Regression (IFR) when the response variable is a random object lying in a general metric space and the predictor is a $p$-dimensional Euclidean vector. \nOur goal is to develop a simple and straightforward extension of the conventional estimation\nparadigm for single index models for this challenging case.\nSince there is no notion of direction or sign in a general metric space, we interpret the index parameter in the proposed index Fr\\'echet regression model (IFR) as the direction in the predictor space along which the variability of the response is maximized.\nIt turns out to be useful to view the direction as an M-estimator of an appropriate objective function, and to use empirical process theory to show consistency of the proposed estimate.\nWe also develop a \nbootstrap method to obtain inference in finite sample situations. \n\nThe paper is organized as follows: The basic set up is defined in Section~\\ref{sec:model:methods} and theory on the asymptotic behavior of the index parameter is provided in Section~\\ref{sec:theory}\nThe index vector is assumed to lie on a hypersphere, with non-negative first element to facilitate identifiability. Then it is natural to quantify the performance of the proposed estimators by the geodesic distances between the estimated and true directions. Empirical studies with different types of random object responses are conducted in Section~\\ref{sec:simul} to validate the estimation method. In Section~\\ref{sec:data:ADNI} we apply the methods to infer and analyze the effect of age, sex, total ADAS score, and the stage of propagation of Alzheimer's Disease (AD) on the brain connectivity networks of patients with dementia. 
The networks are derived from fMRI signals of certain brain regions \\citep{thom:11} and for our analysis we represent them as correlation matrices. We conclude with a brief discussion in Section~\\ref{sec:concl}. \n\n\\section{Model and estimation methods}\n\\label{sec:model:methods}\nMore formally, in all of the following $(\\Omega,d,P)$ is a totally bounded metric space with metric $d$ and probability measure $P.$ The random objects $Y$ take values in $\\Omega$. \nThis is coupled with a $p$-dimensional real-valued predictor $\\tbf{X}.$ The conditional Fr\\'echet mean of $Y|\\tbf{X}$ is a generalization of $\\mbb{E}{(Y|\\tbf{X} = \\tbf{x})}$ to metric spaces, as the minimizer of $\\mbb{E}{(d^2(Y,\\omega)|\\tbf{X} = \\tbf{x}))},$ $\\omega \\in \\Omega.$ The latter is the corresponding generalized measure of dispersion around the conditional Fr\\'echet mean and can be viewed as a conditional Fr\\'echet function. \n\nAdopting the framework of Fr\\'echet regression for random objects with Euclidean predictors, we define the Index Fr\\'echet Regression (IFR) model as\n\\begin{equation}\\begin{gathered} \\label{model:sim}\n\\mop{\\tbf{x} \\t {\\boldsymbol{\\bar{\\theta}_0}}} := \\mbb{E}_\\oplus{(Y|\\tbf{X} = \\tbf{x})} := \\underset{\\omega \\in \\Omega}{\\argmin} \\ \\mbb{E}{(d^2(Y,\\omega)|\\tbf{X} = \\tbf{x})},\n\\end{gathered}\\end{equation}\nwhere ${\\boldsymbol{\\bar{\\theta}_0}}$ is the true direction parameter of interest. The conditional Fr\\'echet mean is assumed to be a function of ${\\boldsymbol{\\bar{\\theta}_0}}$ in such a way that the distribution of $Y$ only depends on $\\tbf{X}$ through the index $\\tbf{X} \\t {\\boldsymbol{\\bar{\\theta}_0}},$ that is, $$Y \\perp \\mbb{E}{(Y|\\tbf{X})}|(\\tbf{X} ^{\\top} {\\boldsymbol{\\bar{\\theta}_0}}).$$\nModel \\eqref{model:real} emerges as a special case of model \\eqref{model:sim} for a Euclidean response, as the conditional Fr\\'echet mean coincides with the conditional expectation $\\mbb{E}{(Y|\\tbf{X})}$ for the choice of the squared distance metric for the case $\\Omega = \\mbb{R}.$\n\n\nThe identifiability condition for is rephrased following the state-of-the-art literature \\citep{carr:97, lin:07, cui:11, zhu:06}. We assume the parameter space to be $\\Theta$ rather than the entire $\\mbb{R}^p$ in order to ensure that $\\boldsymbol{\\theta}$ in the representation \\eqref{sim:obj} can be uniquely defined, where $\\Theta := \\{\\boldsymbol{\\bar{\\theta}} = (\\theta_1, \\dots, \\theta_p)^{\\top} : \\ltwoNorm{\\boldsymbol{\\bar{\\theta}}} = 1,\\ \\theta_1 \\geq 0, \\ \\boldsymbol{\\bar{\\theta}} \\in \\mbb{R}^p\\}.$ \nWe first choose an identifiable parametrization which transforms the boundary of a unit ball in $\\mbb{R}^p$ to the interior of a unit ball in in $\\mbb{R}^{(p-1)}.$ By eliminating $\\theta_1,$ the parameter space $\\Theta$ can be rearranged to a form $\\{( (1- \\sum_{r=2}^p \\theta_r^2)^{1\/2}, \\theta_2,\\dots, \\theta_p)^{\\top} : \\sum_{r=2}^p \\theta_r^2 \\leq 1\\}.$ This re-parametrization is the key to analyzing the asymptotic properties of the estimates for $\\boldsymbol{\\theta}$ and to facilitating an efficient computation algorithm. 
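To make the re-parametrization concrete, a minimal sketch of the map between the free parameter $\boldsymbol{\theta} \in \mbb{R}^{p-1}$ with $\ltwoNorm{\boldsymbol{\theta}} \leq 1$ and the full unit direction $( (1- \sum_{r=2}^p \theta_r^2)^{1\/2}, \theta_2,\dots, \theta_p)^{\top}$ is given below; the function names are illustrative.
\begin{verbatim}
import numpy as np

def to_full_direction(theta_free):
    # theta_free in R^{p-1} with ||theta_free|| <= 1 is mapped to the unit vector
    # (sqrt(1 - ||theta_free||^2), theta_free), whose first entry is >= 0.
    theta_free = np.asarray(theta_free, dtype=float)
    sq = np.sum(theta_free ** 2)
    if sq > 1.0:
        raise ValueError("free parameter must satisfy ||theta|| <= 1")
    return np.concatenate(([np.sqrt(1.0 - sq)], theta_free))

def to_free_parameter(theta_bar):
    # inverse map: drop the (non-negative) first coordinate of a unit direction
    return np.asarray(theta_bar, dtype=float)[1:]
\end{verbatim}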
\n\nThe true parameter is then partitioned into $\\boldsymbol{\\bar{\\theta}} = (\\theta_1, \\boldsymbol{\\theta})^{\\top},$ where $\\boldsymbol{\\theta} = (\\theta_2,\\dots, \\theta_p)^{\\top}.$ We need to estimate the $(p-1)-$ dimensional vector $\\boldsymbol{\\theta}$ in the single-index model, and then use the fact that $\\theta_1 = (1- \\sum_{r=2}^p \\theta_r^2)^{1\/2}$ to obtain $\\hat{\\theta}_1.$\n\\begin{prop}[Identifiability of model \\eqref{model:sim}]\n\\label{lem:iden:sim:obj}\nSuppose $h_\\oplus({\\tbf{x}}) = \\mbb{E}{(Y|\\tbf{X} = \\tbf{x})}.$ The support $S$ of $h_\\oplus({\\cdot})$ is a convex bounded set with at least one interior point and $h_\\oplus({\\cdot})$ is a non-constant continuous function on $S.$ If\n$$h_\\oplus(\\tbf{x}) = g_{1\\oplus}(\\boldsymbol{\\alpha}^{\\top} \\tbf{x}) = g_{2\\oplus}(\\boldsymbol{\\beta} ^{\\top} \\tbf{x}), \\text{ for all } \\tbf{x} \\in S, $$ for some continuous link function objects $g_{1\\oplus}$ and $g_{2\\oplus}$, and some $\\boldsymbol{\\alpha}, \\boldsymbol{\\beta}$ with positive first element such that $\\ltwoNorm{\\boldsymbol{\\alpha}} = \\ltwoNorm{\\boldsymbol{\\beta}} =1$ then $\\boldsymbol{\\alpha} = \\boldsymbol{\\beta}$ and $g_{1\\oplus} \\equiv g_{2\\oplus}$ on $\\{\\boldsymbol{\\alpha} ^{\\top} \\tbf{x}| \\tbf{x} \\in S\\}.$\n\\end{prop}\n\nThe above result can be proven using a similar argument as given in Theorem $1$ of \\cite{lin:07}. \n\nStudying the special case of the Euclidean response $Y$ in detail one may observe that the variation in $Y$ results from the variation in $\\tbf{X} \\t {\\boldsymbol{\\bar{\\theta}_0}}$ and also from the variation in the error, $\\varepsilon$ \\citep{ichi:93}. On the contour line $\\tbf{X} \\t {\\boldsymbol{\\bar{\\theta}_0}} = c$, the variability in $Y$ only results from the variability in $\\varepsilon$. Along the contour line $\\tbf{X} \\t {\\boldsymbol{\\bar{\\theta}_0}}= c,\\ \\boldsymbol{\\bar{\\theta}}\\neq {\\boldsymbol{\\bar{\\theta}_0}}$, the value of $\\tbf{X} \\t {\\boldsymbol{\\bar{\\theta}_0}}$ changes. Therefore the variability in $Y$ on the contour line $\\tbf{X} \\t {\\boldsymbol{\\bar{\\theta}_0}}= c,\\ \\boldsymbol{\\bar{\\theta}}\\neq {\\boldsymbol{\\bar{\\theta}_0}}$ comes from both the variation in $\\tbf{X} \\t {\\boldsymbol{\\bar{\\theta}_0}}$ and in $\\varepsilon$. Since ${\\rm Var}\\left(Y|\\tbf{X} \\t {\\boldsymbol{\\bar{\\theta}_0}}= c\\right)$ measures the variability in $Y$ on a contour line $\\tbf{X} \\t {\\boldsymbol{\\bar{\\theta}_0}}= c,\\ \\boldsymbol{\\bar{\\theta}}\\neq {\\boldsymbol{\\bar{\\theta}_0}}$, a sensible way to alternatively interpret ${\\boldsymbol{\\bar{\\theta}_0}}$ would be finding the minimizer of the objective function $H(\\boldsymbol{\\bar{\\theta}})$, where\n$H(\\boldsymbol{\\bar{\\theta}}) : = \\mbb{E}{\\left( {\\rm Var}\\left(Y|\\tbf{X} \\t {\\boldsymbol{\\bar{\\theta}_0}}\\right)\\right)} \\text{ and } {\\boldsymbol{\\bar{\\theta}_0}} = \\underset{\\boldsymbol{\\bar{\\theta}}:\\ \\boldsymbol{\\bar{\\theta}} ^{\\top} \\boldsymbol{\\bar{\\theta}} =1}{\\argmin}\\ H(\\boldsymbol{\\bar{\\theta}}).$\nIt is indeed important to impose the constraint $\\boldsymbol{\\bar{\\theta}} ^{\\top} \\boldsymbol{\\bar{\\theta}} = 1,$ with the first element of the index $\\theta_{01}>0$ to ensure the identifiability of the objective function. 
Under such constraint we note that $H({\\boldsymbol{\\bar{\\theta}_0}}) \\leq H(\\boldsymbol{\\bar{\\theta}}).$\n\n\n\nThe method for recovering the true direction of the single index from model~\\eqref{model:sim} can be generalized in a similar way. The conditional variance of $Y$ given $\\tbf{X} = \\tbf{x}$ for a real-valued response can be directly generalized to the conditional Fr\\'echet variance $d^2(Y,\\mop{\\tbf{x} ^{\\top} \\boldsymbol{\\bar{\\theta}}})$ for any given unit orientation vector $\\boldsymbol{\\bar{\\theta}}.$ Thus, for a general object response $Y\\in (\\Omega,d),$ ${\\boldsymbol{\\bar{\\theta}_0}}$ can alternatively be expressed as\n\\begin{equation}\\aligned\n\\label{sim:obj}\n{\\boldsymbol{\\bar{\\theta}_0}} = &\\underset{\\boldsymbol{\\bar{\\theta}} \\in\\ \\Theta}{\\argmin}\\ H(\\boldsymbol{\\bar{\\theta}}), \\ \\text{where } H(\\boldsymbol{\\bar{\\theta}})= \\mbb{E}{\\left(d^2(Y,\\mop{\\tbf{X} ^{\\top} \\boldsymbol{\\bar{\\theta}}})\\right)},\\\\\n\\mop{t} &= \\underset{\\omega\\in \\Omega}{\\argmin} \\ M(\\omega, t), \\ \\text{with } M(\\omega,t):= \\mbb{E}{\\left(d^2(Y,\\omega) |\\tbf{X} \\t {\\boldsymbol{\\bar{\\theta}_0}}= t\\right)}.\n\\endaligned\\end{equation}\n\nTo recover ${\\boldsymbol{\\bar{\\theta}_0}}$ from the representation~\\eqref{sim:obj}, one needs to also estimate the conditional Fr\\'echet mean, as in the IFR model \\eqref{model:sim}. We employ the local Fr\\'echet regression estimate \\citep{pete:19} for this. The conditional Fr\\'echet mean in \\eqref{sim:obj} can be approximated by a locally weighted Fr\\'echet mean, with weight function $S(\\cdot,\\cdot,\\cdot)$ that depends on a chosen kernel function $K(\\cdot)$ and a bandwidth parameter $b.$ For any given unit direction index $\\boldsymbol{\\bar{\\theta}},$ this intermediate localized weighted Fr\\'echet mean is defined as\n\\begin{equation}\\aligned\n\\label{intermed:local:fr}\n\\tmop{t} &= \\underset{\\omega\\in \\Omega}{\\argmin} \\ \\tilde{L}_b(\\omega, t), \\ \\text{with } \\tilde{L}_b(\\omega,t):= \\mbb{E}{\\left(S(\\tbf{X}^{\\top} \\boldsymbol{\\bar{\\theta}},\\ t,b)d^2(Y,\\omega)\\right)},\n\\endaligned\\end{equation}\nwhere\n\\begin{equation}\\aligned \n\\label{local:Fr:weight} \n&S(\\tbf{X} ^{\\top} \\boldsymbol{\\bar{\\theta}},\\ t,b) = \\frac{1}{\\sigma_0^2}K_b(\\tbf{X} ^{\\top} \\boldsymbol{\\bar{\\theta}}- t ) [\\mu_2 - \\mu_1(\\tbf{X} ^{\\top} \\boldsymbol{\\bar{\\theta}}- t)],\\\\\n&\\mu_k = \\mbb{E}{(K_b( \\tbf{X} ^{\\top} \\boldsymbol{\\bar{\\theta}} - t ) \\ ( \\tbf{X} ^{\\top} \\boldsymbol{\\bar{\\theta}}- t )^k)},\\ k = 0,1,2,\\quad \n\\sigma_0^2 = \\mu_2 \\mu_0- \\mu_1^2, \n\\endaligned\\end{equation}\nwhere $M(\\cdot,t) = \\tilde{L}_b(\\cdot,t) + O(b)$ for all $t$ \\citep{pete:19}. 
Suppose we observe a random sample of paired observations $(\\tbf{X}_i,Y_i),\\ l=1,\\dots,n$, where $\\tbf{X}_i$ is a $p-$ dimensional Euclidean predictor and $Y_i \\in (\\Omega,d)$ is an object response situated in a general metric space $(\\Omega,d).$ Using the form of the intermediate target in \\eqref{intermed:local:fr} and replacing the auxiliary parameters by their corresponding empirical estimates, the local Fr\\'echet regression estimator at a given value $t$ of the single index is defined as\n\n\\begin{equation}\\aligned\n\\label{est:local:fr}\n\\hmop{t} &= \\underset{\\omega \\in \\Omega}{\\argmin} \\ \\hat{L}_n(\\omega, t), \\ \\text{with } \\hat{L}_n(\\omega,t):= \\frac{1}{n}\\sum_{i=1}^n \\wh{S}(\\tbf{X}_i ^{\\top} \\boldsymbol{\\bar{\\theta}},\\ t,b)d^2(Y_i,\\omega),\n\\endaligned\\end{equation}\nwhere\n\\begin{equation}\\aligned \n\\label{est:local:Fr:weight} \n&\\wh{S}(\\tbf{X}_i ^{\\top} \\boldsymbol{\\bar{\\theta}},\\ t,b) = \\frac{1}{\\hat{\\sigma}_{0}^2}K_b( \\tbf{X}_i ^{\\top} \\boldsymbol{\\bar{\\theta}} - t) [\\hat{\\mu}_{2} - \\hat{\\mu}_{1}(\\tbf{X}_i ^{\\top} \\boldsymbol{\\bar{\\theta}} - t)],\\\\\n&\\hat{\\mu}_{p} = \\frac{1}{n} \\sum_{j=1}^n K_b( \\tbf{X}_i ^{\\top} \\boldsymbol{\\bar{\\theta}} - t ) \\ (\\tbf{X}_i ^{\\top} \\boldsymbol{\\bar{\\theta}} - t)^p, \\ p = 0,1,2,\\quad\n\\hat{\\sigma}_{0}^2 = \\hat{\\mu}_{2} \\hat{\\mu}_{0}- \\hat{\\mu}_{1}^2.\n\\endaligned\\end{equation}\nAssuming that the support of $T:= \\tbf{X}^{\\top} \\boldsymbol{\\bar{\\theta}}$, for any given unit direction $\\boldsymbol{\\bar{\\theta}}$ is compact and is denoted by $\\mathcal{T} = [0,1],$ we partition the interval $\\mathcal{T}$ into $M$ equal-width non-overlapping bins $\\{B_1,B_2,\\dots,B_M\\},$ such that data belonging to different bins are independent and identically distributed.\nWe define the mean observations $\\tilde{\\tbfX}_l$ and $\\tilde{Y}_l$ for the data points belonging to the $l-$th bin, where the latter are defined as the appropriate Fr\\'echet barycenters,\n\\begin{equation}\\aligned\n\\label{binned_dat}\n\\tilde{\\tbfX}_l &= \\sum_{i=1}^{n} W_{il} \\tbf{X}_i, \\ \\tilde{Y}_l = \\underset{\\omega \\in \\Omega} {\\text{argmin}}\\ \\sum_{i=1}^{n} W_{il} d^2(Y_i,\\omega), \\text{ where } W_{il} &= \\frac{\\indicator{\\tbf{X}_i^{\\top}\\boldsymbol{\\bar{\\theta}} \\in B_l} } {\\sum_{i=1}^{n} \\indicator{\\tbf{X}_i^{\\top}\\boldsymbol{\\bar{\\theta}} \\in B_l}}.\n\\endaligned\\end{equation}\n\nHere the number of bins $M$ depends on the sample size $n$. 
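A minimal illustration of the empirical weights in \eqref{est:local:Fr:weight} is sketched below, together with the resulting local Fr\'echet estimate in the special case where the responses can be represented in a linear space (Euclidean responses, or distributions represented by their quantile functions), in which case the weighted Fr\'echet mean reduces to a weighted average of the responses; the Gaussian kernel and the function names are illustrative choices. For distributional responses a further projection onto monotone quantile functions may be needed, which is omitted here.
\begin{verbatim}
import numpy as np

def gaussian_kernel(u):
    return np.exp(-0.5 * u ** 2) / np.sqrt(2.0 * np.pi)

def local_frechet_weights(index_vals, t, b, kernel=gaussian_kernel):
    # Empirical weights S_hat(X_i^T theta, t, b) from the display above:
    # S_i = K_b(u_i) * (mu2 - mu1 * u_i) / sigma0^2, with u_i = X_i^T theta - t.
    u = index_vals - t
    Kb = kernel(u / b) / b
    mu0, mu1, mu2 = [np.mean(Kb * u ** k) for k in (0, 1, 2)]
    sigma0_sq = mu2 * mu0 - mu1 ** 2
    return Kb * (mu2 - mu1 * u) / sigma0_sq

def local_frechet_fit_linear(index_vals, Y, t, b):
    # When the Y_i live in a linear space (rows of Y), the weighted Frechet mean
    # minimizing the weighted squared distances is the weighted average below.
    w = local_frechet_weights(index_vals, t, b)
    w = w / np.sum(w)
    return w @ np.asarray(Y, dtype=float)
\end{verbatim}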
The appropriate choice of $M = M(n)$ will be discussed later.\nThe proposed estimator for the true direction ${\\boldsymbol{\\bar{\\theta}_0}}$ in \\eqref{sim:obj} is then \n\\begin{equation}\\aligned \n\\label{est:sim:obj}\n\\boldsymbol{\\wh{\\bar{\\theta}}} &= \\underset{\\boldsymbol{\\bar{\\theta}} \\in\\ \\Theta}{\\argmin}\\ V_n(\\boldsymbol{\\bar{\\theta}}), \\text{ where }\nV_n(\\boldsymbol{\\bar{\\theta}}) = \\frac{1}{M} \\sum_{l=1}^{M} d^2(\\tilde{Y}_l, \\hmop{\\tilde{\\tbfX}_l^{\\top}\\boldsymbol{\\bar{\\theta}}}).\n\\endaligned\\end{equation}\nHere $\\hmop{\\tilde{\\tbfX}_l^{\\top}\\boldsymbol{\\bar{\\theta}}}, \\ l =1,\\dots,M,\\ $ is the local Fr\\'echet regression estimator, constructed based on the sample $(\\tbf{X}_i,Y_i), \\ i=1,\\dots,n$, and evaluated at each sample point of the binned sample $(\\tilde{\\tbfX}_l,\\tilde{Y}_l), \\ l = 1,\\dots, M,$ as described in \\eqref{est:local:fr} and \\eqref{est:local:Fr:weight}.\nDefine an intermediate quantity that corresponds to the empirical version of $H(\\cdot)$ in \\eqref{sim:obj} as \n\\begin{equation}\\aligned \n\\label{intermed:sim:obj}\n\\boldsymbol{\\tilde{\\bar{\\theta}}} &= \\underset{\\boldsymbol{\\bar{\\theta}} \\in\\ \\Theta}{\\argmin}\\ \\tilde{V}_n(\\boldsymbol{\\bar{\\theta}}), \\text{ where }\n\\tilde{V}_n(\\boldsymbol{\\bar{\\theta}}) = \\frac{1}{M} \\sum_{l=1}^{M} d^2(\\tilde{Y}_l, \\mop{\\tilde{\\tbfX}_l^{\\top}\\boldsymbol{\\bar{\\theta}}}).\n\\endaligned\\end{equation}\nThis auxilary quantity is used to prove the asymptotic results for $\\boldsymbol{\\wh{\\bar{\\theta}}}$ in the next section.\n\nThe bandwidth $b = b(n)$ is a tuning parameter involved in the estimation and the rate of convergence for $\\hmop{\\cdot}$ to $\\mop{\\cdot}$ is contingent on $b.$ It is important to note here that, another possible estimator for $\\mop{\\cdot}$ could be given by the global Fr\\'echet regression estimator introduced by \\cite{pete:19}. This is developed by generalizing multiple linear regression to the case of a metric-valued response by viewing the regression function as a sequence of weighted Fr\\'echet means, with weights derived from those of the corresponding standard linear regression. Using this alternative estimate for the unknown link function in the IFR model~\\eqref{model:sim} avoids the tuning parameter $b$ that is needed for local Fr\\'echet regression.\n\n\n\\section{Theory}\n\\label{sec:theory}\nThe unknown quantities that constitute the Index Fr\\'echet Regression (IFR) model consist of the nonparametric link function and the index parameter, and thus the asymptotic properties of the estimate of the true unit direction rely on those of the estimates of the link function (with local Fr\\'echet regression) and the index parameter (through an M-estimator of the criterion function $H(\\cdot)$ in \\eqref{sim:obj}). The separable metric space $(\\Omega,d)$ is assumed to be totally bounded with diameter $D$, hence separable. 
In addition, with regard to the quantities in \\eqref{sim:obj}, \\eqref{est:local:fr}, and \\eqref{est:sim:obj} we require the following assumptions.\n\\begin{enumerate}[label = (A\\arabic*), series = fregStrg, start = 1]\n\\item \\label{ass:fr:exist} The conditional and weighted Fr\\'echet means in \\eqref{sim:obj}, \\eqref{intermed:local:fr}, \\eqref{est:local:fr} and \\eqref{binned_dat} are well defined, i.e., they exist and are unique.\n\\item \\label{ass:reg:cont} The link function $\\mop$ is Lipschitz continuous, that is, that is, there exists a real constant $K > 0$, such that, for all $\\boldsymbol{\\bar{\\theta}}_1, \\boldsymbol{\\bar{\\theta}}_2 \\in \\bar{\\Theta},$\n\\[\nd(\\mop{\\tbf{x} ^{\\top}\\boldsymbol{\\bar{\\theta}}_1}, \\mop{\\tbf{x} ^{\\top}\\boldsymbol{\\bar{\\theta}}_2}) \\le\nK \\ltwoNorm{\\tbf{x} ^{\\top} (\\boldsymbol{\\bar{\\theta}}_1-\\boldsymbol{\\bar{\\theta}}_2)}.\n\\]\n\\item For any $\\varepsilon>0$ and $\\beta_1,\\beta_2>1,$ define\n\\begin{equation}\\aligned\n\\label{rate:an}\na_n = \\max\\{ b^{2\/(\\beta_1 -1)}, (nb^2)^{-1\/(2(\\beta_2 -1)+\\varepsilon)}, (nb^2(-\\log b)^{-1})^{1\/2(\\beta_2 -1)}\\}.\n\\endaligned\\end{equation}\n\\label{ass:tuning:M} \nThe number of non-overlapping bins defined in Section~\\ref{sec:model:methods}, $M$ is a function of the sample size $n,$ that is $M = M(n)$ such that $Ma_n \\rightarrow 0.$\n\\end{enumerate}\nWe note that for $\\beta_1 = \\beta_2 =2,$ $a_n$ reduces to\n\\[\na_n = \\max\\{ b^{2}, (nb^2)^{-1\/(2 +\\varepsilon)}, (nb^2(-\\log b)^{-1})^{1\/2}\\}.\n\\]\t\n\n\nThe above assumptions are commonly imposed when one studies M-estimators. Whether Fr\\'echet means are well defined depends on the nature of the space, as well as the metric considered. For example, in case of Euclidean responses Fr\\'echet means coincide with the usual means for random vectors with finite second moments. For finite-dimensional Riemannian manifolds additional regularity conditions are required \\citep{afsa:11,penn:18}. For Hadamard spaces, unique Fr\\'echet means are known to exist \\citep{bhat:03,bhat:05,patr:15,kloe:10}.\nAssumption~\\ref{ass:reg:cont} limits how fast the object $\\mop{\\cdot}$ can change, introducing a concept of smoothness in the link function for the IFR model \\eqref{model:sim}.\nAssumption~\\ref{ass:fr:exist} is satisfied for the space $(\\Omega,d_W)$ of univariate probability distributions with the 2-Wasserstein metric and also for the space $(\\Omega,d_F)$ of covariance matrices\nwith the Frobenius metric $d_F$ \\citep{dube:19,pete:19}.\nAssumption~\\ref{ass:tuning:M} serves to connect the uniform rate of convergence $a_n$ for the local Fr\\'echet regression estimator as given in ~\\eqref{rate:an} with the number of bins $M$. A basic additional assumption is that the predictors needed for the nonparametric Fr\\'echet regression are randomly distributed over the domain where the function is to be estimated, and that on average they become denser as more data are collected. \nThis requires that there is at least one continuous predictor since if all the predictors are binary then the predictor locations cannot become denser with larger sample size. 
For any given direction $\\boldsymbol{\\bar{\\theta}},$ the univariate index variable $T := \\tbf{X} ^{\\top} \\boldsymbol{\\bar{\\theta}}$ is assumed to have a density $f_T(\\cdot)$ with a compact support $\\mathcal{T}$ and that the multivariate random variable $\\tbf{X}$ is bounded.\n\nAdditional assumptions ~\\ref{ass:minUnif}-\\ref{ass:curvatureUnif}, and ~\\ref{ass:ker}-\\ref{ass:jointdtn} have been used previously in \\cite{pete:19} and are stated in the Appendix. They concern metric entropy and curvature for M estimators and are commonly used in their asymptotic analysis utilizing empirical process theory \\citep{vand:00}. They\nare specifically required to establish consistency and uniform rate of convergence for the local Fr\\'echet regression estimator in \\eqref{est:sim:obj} \\citep{chen:20}. Assumptions ~\\ref{ass:ker}-\\ref{ass:jointdtn} are commonly used in the local regression literature \\citep{silv:78,fan:96}.\n\\begin{prop}\n\\label{lem:H}\nUnder assumption~\\ref{ass:fr:exist}-\\ref{ass:reg:cont}, $H(\\cdot)$ in model \\eqref{sim:obj} is a continuous function of $\\boldsymbol{\\bar{\\theta}}$ and for any $\\boldsymbol{\\bar{\\theta}} \\in \\bar{\\Theta},$ $H({\\boldsymbol{\\bar{\\theta}_0}}) \\leq H(\\boldsymbol{\\bar{\\theta}}).$ \n\\end{prop}\n\nMost types of random objects, such as those in the Wasserstein space (the space of probability distributions equipped with the 2-Wasserstein distance) or the space of symmetric, positive semidefinite matrices endowed with the Frobenius or power metric satisfy assumptions ~\\ref{ass:minUnif}-\\ref{ass:curvatureUnif} (see Appendix) with $\\beta_1= \\beta_2 =2$. \tIf one chooses the bandwidth sequence $b$ for the local Fr\\'echet regression such that, for a given $\\varepsilon>0,$ $b\\sim n^{-(\\beta_1 -1)\/(2\\beta_1 + 4\\beta_2 - 6 +2\\varepsilon)},$ $a_n$ is of the order $n^{-\\frac{1}{(\\beta_1 +2\\beta_2 -3 +\\varepsilon)}}$ \\citep{chen:20}. For $\\beta_1 = \\beta_2 =2,$ this becomes \n$a_n \\sim n^{-\\frac{1}{3+\\varepsilon}},$ leading to a uniform convergence rate that is arbitrarily close to $O_P(n^{-1\/3}).$ Any choice $M=M(n)=n^{\\gamma}$ with \n$0 <\\gamma<\\frac{1}{3}$ will then satisfy Assumption~\\ref{ass:tuning:M}. \t\n\nWe will make use of the following known result to deal with the link function part when investigating the asymptotic convergence rates of the proposed IFR estimator.\n\\begin{lem}[\\cite{chen:20} Theorem $1$]\n\\label{lem:unif:local:fr:rate}\nUnder assumptions~\\ref{ass:minUnif}-\\ref{ass:curvatureUnif},~\\ref{ass:ker}-\\ref{ass:jointdtn}, and if $b\\to 0,$ such that $nb^2(-\\log b)^{-1} \\to \\infty$ as $n\\to \\infty,$ for any $\\varepsilon>0,$ and $\\beta_1,\\beta_2 >1$ as per assumption~\\ref{ass:curvatureUnif},\n\\begin{equation}\\begin{gathered}\n\\underset{t \\in \\mathcal{T}}{\\sup} \\ d(\\hmop{t},\\mop{t}) = O_P(a_n),\n\\end{gathered}\\end{equation}\nwhere $a_n$ is as given in equation \\eqref{rate:an} in Assumption~\\ref{ass:tuning:M}.\n\\end{lem}\n\nThe following result demonstrates the consistency of the proposed estimator for the true index direction. 
All proofs can be found in the Supplementary Material\n\\begin{thm} \\label{thm:probConv}\nUnder assumptions~\\ref{ass:fr:exist}-\\ref{ass:reg:cont}, ~\\ref{ass:minUnif}-\\ref{ass:curvatureUnif}, and ~\\ref{ass:ker}-\\ref{ass:jointdtn}\n$$\\boldsymbol{\\wh{\\bar{\\theta}}} -{\\boldsymbol{\\bar{\\theta}_0}} \\overset{P}{\\longrightarrow} 0 \\text{ on } \\bar{\\Theta}.$$\n\\end{thm}\nAny $\\boldsymbol{\\bar{\\theta}} \\in \\bar{\\Theta}$ is decomposed into $(\\theta_1,\\boldsymbol{\\theta})^{\\top},$ where $\\theta_1>0,$ and for purposes of modeling the single index, keeping identifiability in mind, can be expressed solely as a function of $\\boldsymbol{\\theta}.$\nTo this end we rewrite the criteria function and the corresponding minimizers in terms of the sub-vector $\\boldsymbol{\\theta}$ only, \n\\begin{equation}\\aligned\n\\boldsymbol{\\theta_0} = \\underset{\\boldsymbol{\\theta} : \\boldsymbol{\\theta} \\in \\Theta}{\\argmin} \\, H(\\boldsymbol{\\theta}),\\quad \n\\tilde{\\boldsymbol{\\theta}} = \\underset{\\boldsymbol{\\theta} : \\boldsymbol{\\theta} \\in \\Theta}{\\argmin} \\, \\tilde{V}_n(\\boldsymbol{\\theta}),\\quad\t\n\\hat{\\boldsymbol{\\theta}} = \\underset{\\boldsymbol{\\theta} : \\boldsymbol{\\theta} \\in \\Theta}{\\argmin} \\, V_n(\\boldsymbol{\\theta}).\n\\endaligned\\end{equation}\nWe note that $\\boldsymbol{\\theta_0},$ $\\tilde{\\boldsymbol{\\theta}},$ and $\\hat{\\boldsymbol{\\theta}}$ are the unconstrained minimizers for the criteria functions $H(\\cdot),$ $\\tilde{V}_n(\\cdot),$ and $V_n(\\cdot)$ respectively, which \nare continuous functions of $\\boldsymbol{\\theta},$ the latter two almost surely. \n\nCombining the consistency result for the direction vector from Theorem~\\ref{thm:probConv} with the uniform convergence of the local Fr\\'echet regression estimator in Lemma~\\ref{lem:unif:local:fr:rate}, the asymptotic consistency of the estimated single index regression (IFR) model follows.\n\\begin{cor}\n\\label{cor:probConv:ifr}\nUnder the conditions required for Theorem~\\ref{thm:probConv}, for any $\\tbf{x} \\in \\mbb{R}^p,$\n$$d(\\hmop{\\tbf{x} ^{\\top} \\boldsymbol{\\wh{\\bar{\\theta}}}}, \\mop{\\tbf{x} ^{\\top} {\\boldsymbol{\\bar{\\theta}_0}}}) = o_P(1).$$\n\\end{cor}\nThe above corollary justifies the effectiveness of the use of local Fr\\'echet regression in the context of IFR.\n\n\\section{Simulation studies}\n\\label{sec:simul}\nThere are two tuning parameters involved in the implementation of the single index Fr\\'echet regression (IFR) model in~\\eqref{sim:obj}, namely the bandwidth $b = b(n)$ involved in the local Fr\\'echet regression as per \\eqref{model:sim} and the number of i.i.d data points, $M = M(n),$ where the local Fr\\'echet regression is fitted as per Assumption~\\ref{ass:tuning:M}.\n\nThe pair $(b,M)$ can be chosen by leave-one-out cross validation, where the objective function to be minimized is the mean discrepancy between the local Fr\\'echet regression estimates and the observed distributions for the binned data; specifically, \n\\[b_{opt} = \\argmin_{(b,M)} \\frac{1}{Mn} \\sum_{l=1}^M \\sum_{i=1}^n d^2(\\tilde{Y}_l, \\loomop{\\tilde{\\tbfX}_l ^{\\top}\\boldsymbol{\\bar{\\theta}}}) \\]\nwhere $\\loomop{\\tilde{\\tbfX}_l ^{\\top}\\boldsymbol{\\bar{\\theta}}}$ is the local Fr\\'echet regression estimate at $\\tilde{\\tbfX}_l ^{\\top}\\boldsymbol{\\bar{\\theta}}$ obtained with bandwidth $b$ based on the sample excluding the $i-$th pair $(\\tbf{X}_i,Y_i)$, i.e., \n\\[\n\\loomop{\\tilde{\\tbfX}_l ^{\\top}\\boldsymbol{\\bar{\\theta}}} = 
\\underset{\\omega \\in \\Omega}{\\argmin} \\frac{1}{(n-1)} \\sum_{j\\neq i} \\wh{S}(\\tbf{X}_j ^{\\top} \\boldsymbol{\\bar{\\theta}},\\tilde{\\tbfX}_l ^{\\top} \\boldsymbol{\\bar{\\theta}} ,b)d^2(Y_j,\\omega),\n\\]\nIn practice, we replace leave-one-out cross validation by $5-$fold cross validation when $n > 30$.\n\nThe performance of the estimation is measured through simulations under various settings. The random objects we consider include samples of univariate distributions equipped with the Wasserstein$-2$ metric, samples of adjacency matrices from networks with the Frobenius metric and samples of multivariate data with the usual Euclidean metric. It is important to recall that true direction ${\\boldsymbol{\\bar{\\theta}_0}}$ is chosen to lie on the unit sphere in $\\mbb{R}^p$ with $\\theta_{01}>0.$ In each case, the optimal direction is estimated as the minimizer of $V_n(\\boldsymbol{\\theta}) : \\boldsymbol{\\theta} \\in \\mbb{R}^{p-1} \\text{ and } \\boldsymbol{\\theta} ^{\\top} \\boldsymbol{\\theta} \\leq 1,$ in \\eqref{est:sim:obj}. \n\nTo evaluate the accuracy of the estimate, we repeat the data generating mechanism $500$ times in each simulation setting, and for each such replication, obtain the optimal direction as $\\boldsymbol{\\wh{\\bar{\\theta}}}^{(i)} \\ i = 1,\\dots, 500.$ The intrinsic Fr\\'echet mean of these $500$ estimates on the unit sphere is computed as $\\widehat{\\bar{\\boldsymbol{\\theta}}}.$ Since each $\\boldsymbol{\\wh{\\bar{\\theta}}}^{(i)}$ lies on the manifold (the unit sphere in $\\mbb{R}^p$), the bias and deviance of the estimator is estimated as\n\\begin{align}\n\\text{bias}(\\boldsymbol{\\wh{\\bar{\\theta}}}) &= \\arccos \\langle \\widehat{\\bar{\\boldsymbol{\\theta}}},{\\boldsymbol{\\bar{\\theta}_0}} \\rangle, \\nonumber \\\\\n\\text{dev}(\\boldsymbol{\\wh{\\bar{\\theta}}}) &= {\\rm Var}\\left( \\arccos \\langle \\boldsymbol{\\wh{\\bar{\\theta}}}^{(i)}, \\widehat{\\bar{\\theta}} \\rangle\\right) \n\\label{simul:bias:var}\n\\end{align}\n\n\\noindent Essentially, we estimate the $(p-1)-$ dimensional parameter $\\boldsymbol{\\theta_0} = (\\theta_{20},\\dots,\\theta_{p0})$ freely and then estimate $\\theta_{10}$ from the relation $\\theta_{10} = \\sqrt{1- \\ltwoNorm{\\boldsymbol{\\theta_0}}^2}.$ \n\n\\subsection{Distributional responses}\n\nThe space of distributions with the Wasserstein$-2$ metric provides an ideal setting for illustrating the efficacy of the proposed methods through simulation experiments. We consider distributions on a bounded domain $\\mathcal{T}$ as the response, $Y(\\cdot)$, and they are represented by the respective quantile functions $Q(Y)(\\cdot)$. A $p-$ dimensional Euclidean predictor $\\tbf{X}$ is considered.\nThe random response is generated conditional on $\\tbf{X}$, by adding noise to the true regression quantile\n\\begin{align}\nQ(m_\\oplus(\\tbf{x}))(\\cdot) &= \\mbb{E}{\\left(Q(Y)(\\cdot)|\\tbf{X} = \\tbf{x}\\right)}\n\\label{simul:dens1}\n\\end{align}\n\n\\noindent As emphasized, the conditional distribution of $Y$ depends on $\\tbf{X}$ only through the true index parameter $\\boldsymbol{\\theta_0}.$\nTwo different simulation scenarios are examined as we generate the distribution objects from location-scale shift families (see Table~\\ref{table:sim}). 
In the first setting, the response is generated, on average, as a normal distribution with parameters that depend on $\\tbf{X}.$ For $\\tbf{X} =\\tbf{x}$, the distribution parameters $\\mu \\sim N(\\zeta(\\tbfx \\t \\true), \\nu_1)$ is independently sampled, and for a fixed parameter $\\sigma=0.1$ the corresponding distribution is given by $Q(Y)(\\cdot) = \\mu + \\sigma \\Phi^{-1}(\\cdot)$. Here, the relevant sub-parameter is chosen as $\\nu_1 = 0.1$ and three different link functions are considered, namely $\\zeta(y) = y$ $\\ \\zeta(y) = y^2,$ and $\\zeta(y) = \\exp(y),$ and $\\Phi(\\cdot)$ is the standard normal distribution function. The second setting is slightly more complicated. The distributional parameter $\\mu|\\tbf{X} = \\tbf{x}$ is sampled as before and $\\sigma = 0.1$ is assumed to be a fixed parameter. The resulting distribution is then ``transported'' in Wasserstein space via a random transport map $T$, that is uniformly sampled from the collection of maps $T_k(a) = a - \\sin (ka)\/|k|$ for $k \\in \\{\\pm1, \\pm 2, \\pm 3\\}$. The distributions thus generated are not Gaussian anymore due to the transportation. Nevertheless, one can show that the Fr\\'echet mean is exactly $ \\mu + \\sigma \\Phi^{-1}(\\cdot)$ as before.\n\n\\begin{table}[!htb]\n\t\\centering\n\t\\begin{tabular}{|l|l|}\n\t\t\\hline\n\t\tSetting I &\n\t\tSetting II \\\\ \\hline\n\t\t\\begin{tabular}[c]{@{}l@{}}$Q(Y)(\\cdot) = \\mu + \\sigma \\Phi^{-1}(\\cdot) $, \\\\ where \\\\ $\\mu \\sim N(\\zeta(\\tbfx \\t \\true), \\nu_1), $\\ $\\sigma = 0.1.$\n\t\t\\end{tabular} &\n\t\t\\begin{tabular}[c]{@{}l@{}}$Q(Y)(\\cdot) = T \\#(\\mu +\\sigma \\Phi^{-1}(\\cdot)) $, \\\\ where\\\\ $\\mu \\sim N(\\zeta(\\tbfx \\t \\true), \\nu_1), $\\ $\\sigma = 0.1,$ \\\\ $T_k(a) = a - \\sin(ka)\/|a|, k \\in \\{\\pm1,\\pm 2, \\pm 3\\}.$\\end{tabular} \\\\ \\hline\n\t\\end{tabular}\n\t\\caption{Table showing two different simulation scenarios.}\n\t\\label{table:sim}\n\\end{table}\n\nTo this end, we generate a random sample of size $n$ of density objects and multivariate Euclidean predictors from the true models, incorporating measurement error as described in the two situations above. We sample the predictors $\\tbf{X}_i$ to be of dimension $p=4,$ with the components of the vectors to be distributed independently as $Beta(1,1).$ Three different link functions are used to simulate the densities from the ``true'' model, namely, the identity link, the square link, and the exponential link. The bias and deviance of the estimated direction vectors for varying sample sizes are displayed in Tables~\\ref{tab:dens:set1} and~\\ref{tab:dens:set2}. 
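For concreteness, a small sketch of generating the Setting I responses on a quantile grid and of evaluating the Wasserstein$-2$ distance through quantile functions is given below; the grid, the identity link default, and the convention of treating $\nu_1$ as a variance are illustrative assumptions rather than the exact implementation used for the reported results.
\begin{verbatim}
import numpy as np
from scipy.stats import norm

qgrid = np.linspace(0.01, 0.99, 99)   # illustrative quantile grid

def generate_setting1(X, theta0, zeta=lambda u: u, nu1=0.1, sigma=0.1, rng=None):
    # Setting I: Q(Y)(.) = mu + sigma * Phi^{-1}(.), mu ~ N(zeta(x^T theta0), nu1),
    # with nu1 treated here as a variance.
    rng = np.random.default_rng(rng)
    index = X @ theta0
    mu = rng.normal(zeta(index), np.sqrt(nu1))
    # each row is the quantile function of one response distribution on qgrid
    return mu[:, None] + sigma * norm.ppf(qgrid)[None, :]

def wasserstein2(q1, q2):
    # Wasserstein-2 distance between two distributions via their quantile functions
    return np.sqrt(np.trapz((q1 - q2) ** 2, qgrid))
\end{verbatim}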
We note that, the bias encountered due to the local Fr\\'echet estimation is generally low and the variance of the estimates also diminish with a higher sample size.\n\\begin{table}[!htb]\n\t\\begin{center}\n\t\t\\begin{tabular}{|c|c|c|c|c|c|c|}\n\t\t\t\\hline\n\t\t\t\\multicolumn{7}{|c|}{Setting I} \\\\ \\hline\n\t\t\t& \\multicolumn{2}{c|}{link1 ($x \\mapsto x$)} & \\multicolumn{2}{c|}{link2 ($x \\mapsto x^2$)} & \\multicolumn{2}{c|}{link3 ($x \\mapsto e^x$)} \\\\ \\hline\n\t\t\t& bias & dev & bias & dev & bias & dev \\\\ \\hline\n\t\t\t$n = 100$ & 0.033 & 0.027 & 0.031 & 0.029 & 0.039 & 0.041 \\\\ \\hline\n\t\t\t$n = 1000$ & 0.011 & 0.013 & 0.017 & 0.012 & 0.020 & 0.013 \\\\ \\hline\n\t\t\\end{tabular}\n\t\\end{center}\n\t\\caption{Table showing bias and variance of $\\boldsymbol{\\wh{\\bar{\\theta}}}$ (measured in radians) based on $500$ replications when the predictor dimension is $p=4.$ We took the four components of the vectors to be distributed independently as $Beta(1,1).$ The tuning parameters $(b,M)$ are chosen by a $5-$fold cross validation method.}\n\t\\label{tab:dens:set1}\n\\end{table}\nThe performance of the fits is evaluated by computing the Mean Square Error (MSE) between the observed and the fitted distribution. Denoting the simulated true and estimated distribution objects at $(\\tilde{\\tbfX}_l,\\tilde{Y}_l),$ by $\\mop{\\tilde{\\tbfX}_l ^{\\top} {\\boldsymbol{\\bar{\\theta}_0}}}$ and $\\hmop{\\tilde{\\tbfX}_l ^{\\top} \\boldsymbol{\\wh{\\bar{\\theta}}}}$ respectively, for $l=1,\\dots,M,$ the utility of the estimation was measured quantitatively by \n\\begin{align}\n\\label{simu:dens:mse:ifr}\nMSE = \\frac{1}{M} \\sum_{l=1}^M d^2_W(\\tilde{Y}_l, \\hmop{\\tilde{\\tbfX}_l ^{\\top} \\boldsymbol{\\wh{\\bar{\\theta}}}}),\n\\end{align}\nwhere $d_W(\\cdot,\\cdot)$ is the Wasserstein-2 distance between two distributions.\nWe also compared the estimation performance of the proposed single index Fr\\'echet regression (IFR) method to a baseline Global Fr\\'echet regression (GFR) method, which can handle multivariate predictors being a generalization of global least squares regression \\citep{pete:19}. Denoting the GFR estimate of the distribution at $(\\tilde{\\tbfX}_l,\\tilde{Y}_l),$ by $\\hat{g}_\\oplus(\\tilde{\\tbfX}_l)$ for each $l=1,\\dots,M,$ we express the MSE of the fits as\n\\begin{align}\n\\label{simu:dens:mse:gfr}\nMSE = \\frac{1}{M} \\sum_{l=1}^M d^2_W(\\tilde{Y}_l, \\hat{g}_\\oplus(\\tilde{\\tbfX}_l)),\n\\end{align}\nwhere $d_W(\\cdot,\\cdot)$ is the Wasserstein-2 distance between two distributions. Figure~\\ref{fig:dens:mspe} shows the boxplots for different link functions used to generate the distribution data in the two simulation settings I and II for a sample size of $n=1000.$ We observe that, the IFR method outperforms the baseline GFR method in all the cases. Perhaps, the closest comparison between the two methods would be when an identity link function is used in the data generation mechanism. This is indeed expected since, in this case, the true model essentially reduces to a linear model.\nWe also display the fitted and the true distributions represented as densities (Figure~\\ref{fig:dens:fits}). 
The estimates from the IFR method matches the true observed densities quite well, thus the proposed IFR method can be used to validate the estimation method.\n\\begin{figure}[!htb]\n\t\\centering\n\t\\begin{subfigure}[b]{0.49\\textwidth}\n\t\t\\centering\n\t\t\\includegraphics[width=\\textwidth]{plots\/boxplot_dens_n100}\n\t\t\\caption{Simulation Setting I.}\n\t\t\\label{fig:dens_set1:mspe}\n\t\\end{subfigure}\n\t\\hfill\n\t\\begin{subfigure}[b]{0.49\\textwidth}\n\t\t\\centering\n\t\t\\includegraphics[width=\\textwidth]{plots\/boxplot_dens_set2_n100}\n\t\t\\caption{Simulation Setting II.}\n\t\t\\label{fig:dens_set2:mspe}\n\t\\end{subfigure}\n\t\\caption{Boxplot of MSPE of the fits using the single index Fr\\'echet regression model (IFR) and the Global Fr\\'echet regression (GFR) model for a sample size $n=1000.$ The left and the right panel correspond to the simulation settings I and II, respectively. The left, middle, and right columns in each of the panels correspond to the three different link functions used in the data generation mechanism, namely, identity, square, and exponential link functions, respectively, while the link functions in all cases are estimated from the data.}\n\t\\label{fig:dens:mspe}\n\\end{figure}\n\n\n\\begin{table}[!htb]\n\t\\begin{center}\n\t\t\\begin{tabular}{|c|c|c|c|c|c|c|}\n\t\t\t\\hline\n\t\t\t\\multicolumn{7}{|c|}{Setting II} \\\\ \\hline\n\t\t\t& \\multicolumn{2}{c|}{link1 ($x \\mapsto x$)} & \\multicolumn{2}{c|}{link2 ($x \\mapsto x^2$)} & \\multicolumn{2}{c|}{link3 ($x \\mapsto e^x$)} \\\\ \\hline\n\t\t\t& bias & dev & bias & dev & bias & dev \\\\ \\hline\n\t\t\t$n = 100$ & 0.029 & 0.027 & 0.022 & 0.037 & 0.028 & 0.044 \\\\ \\hline\n\t\t\t$n = 1000$ & 0.010 & 0.012 & 0.011 & 0.014 & 0.017 & 0.021 \\\\ \\hline\n\t\t\\end{tabular}\n\t\\end{center}\n\t\\caption{Table showing bias and variance of $\\boldsymbol{\\wh{\\bar{\\theta}}}$ (measured in radians) based on $500$ replications when the predictor dimension is $p=4.$ \tWe took the five components of the vectors to be distributed independently as $Beta(1,1).$ The tuning parameters $(b,M)$ are chosen by $5-$fold cross validation method.}\n\t\\label{tab:dens:set2}\n\\end{table}\n\n\\begin{figure}[!htb]\n\t\\centering\n\t\t\\includegraphics[width=\\textwidth, height= .4\\textheight]{plots\/fits_dens_set2_n100}\n\t\\caption{Figure showing the density estimates of the distribution objects generated in simulation setting I. The blue and red curves are the observed and estimated densities, respectively. The left, middle, and right panels correspond to the three different link functions used in the data generation mechanism, namely, identity, square, and exponential link functions, respectively.}\n\t\\label{fig:dens:fits}\n\\end{figure}\n\n\n\\subsection{Adjacency matrix responses}\nHere the response objects are assumed to reside in the space of adjacency matrices arising out of a weighted graph equipped with the Frobenius norm. 
The $(p,q)-$th entry of the adjacency matrix $Y$ is given by\n\\begin{align}\n\\label{simu:adj:mat}\nY_{pq} = m(\\tbf{x}^{\\top} \\boldsymbol{\\theta_0}) + \\epsilon_{pq},\n\\end{align}\nwhere $\\epsilon_{pq}$ are independently sampled errors and the link function $m(\\cdot)$ is such that $Y_{pq} \\in (0,1)$ for all $p, q.$ To ensure this, an appropriate link function in this case is the expit link function, that is, $m(\\tbf{x}^{\\top} \\boldsymbol{\\theta_0}) = 1\/(1 + \\exp(- \\tbf{x}^{\\top} \\boldsymbol{\\theta_0})).$ For a given index $\\tbf{x}^{\\top} \\boldsymbol{\\theta_0},$ $\\epsilon_{pq}$ was sampled from a Uniform distribution on $[\\max\\{0,-m(\\tbf{x}^{\\top} \\boldsymbol{\\theta_0})\\}, \\min\\{1,1-m(\\tbf{x}^{\\top} \\boldsymbol{\\theta_0})\\}]$. \n\\begin{table}[!htb]\n\t\\begin{center}\n\t\t\\begin{tabular}{|c|c|c|}\n\t\t\t\\hline\n\t\t\t& \\multicolumn{2}{c|}{link ($x \\mapsto 1\/(1+\\exp(-x))$)} \\\\ \\hline\n\t\t\t& bias & dev \\\\ \\hline\n\t\t\t$n = 100$ & 0.044 & 0.052 \\\\ \\hline\n\t\t\t$n = 1000$ & 0.021 & 0.019 \\\\ \\hline\n\t\t\\end{tabular}\n\t\\end{center}\n\t\\caption{Table showing bias and variance of $\\hat{\\theta}$ (measured in radians) based on $500$ replications for weighted adjacency matrix responses. The tuning parameters for the model-fitting are chosen by a $5-$fold cross validation method.}\n\t\\label{tab:adjacency:set1}\n\\end{table}\nWe generated samples of networks with $10$ nodes, as one might encounter in brain networks, with the weighted adjacency matrix computed as per \\eqref{simu:adj:mat}. The predictors are sampled from a $4-$dimensional multivariate normal distribution, where each of the components was truncated to lie in $[-5,5].$ While the mean vector for the multivariate normal distribution in the data generation scheme is assumed to be the zero vector, we assume the associated covariance matrix to be non-identity with ${\\rm cor}(X_1,X_2) = {\\rm cor}(X_1,X_3) = {\\rm cor}(X_2,X_3) = 0.3,$ and ${\\rm cor}(X_1,X_4) = {\\rm cor}(X_2,X_4) = -0.4.$ The variances for each of the four components are assumed equal to $0.25.$ We note here that the non-zero correlation among the components of the predictor vector does not influence the performance of the nonparametric regression fit negatively. Table~\\ref{tab:adjacency:set1} presents the bias and variance of the estimator computed based on $500$ replications of the data generating process. \n\\subsection{Euclidean responses}\nHere the object response of interest is assumed to lie in the Euclidean space.
For generating the predictor vectors we consider a $5-$dimensional vector distributed as truncated multivariate normal distributions, where each of the components is truncated to lie between $[-10,10].$ The components are assumed to be correlated such that $X_1$ correlates with $X_2$ and $X_3$ with $r = 0.5$, and $X_2$ and $X_3$ correlate with $r = 0.25.$ The variances for each of the five components are $0.1.$ \nThe consistency of the estimates is illustrated in Table~\\ref{tab:euclidean:set1} based on $500$ replications of the simulation scenario.\n\\begin{table}[!htb]\n\t\\begin{center}\n\t\t\\begin{tabular}{|c|c|c|c|c|c|c|}\n\t\t\t\\hline\n\t\t\t& \\multicolumn{2}{c|}{link1 ($x \\mapsto x$)} & \\multicolumn{2}{c|}{link2 ($x \\mapsto x^2$)} & \\multicolumn{2}{c|}{link3 ($x \\mapsto e^x$)} \\\\ \\hline\n\t\t\t& bias & dev & bias & dev & bias & dev \\\\ \\hline\n\t\t\t$n = 100$ & 0.013 & 0.061 & 0.025 & 0.048 & 0.037 & 0.029 \\\\ \\hline\n\t\t\t$n = 1000$ & 0.006 & 0.021 & 0.014 & 0.019 & 0.013 & 0.009 \\\\ \\hline\n\t\t\\end{tabular}\n\t\\end{center}\n\t\\caption{Table showing bias and variance of $\\boldsymbol{\\wh{\\bar{\\theta}}}$ (measured in radians) based on $500$ replications for a Euclidean vector response. The predictors $X_1,\\dots, X_5$ are generated from a truncated multivariate normal distribution.}\n\t\\label{tab:euclidean:set1}\n\\end{table}\n\\section{Data analysis}\n\\label{sec:data:ADNI}\nModern functional Magnetic Resonance Imaging (fMRI) methodology has made it possible to study structural elements of the brain and identify brain regions or cortical hubs that exhibit similar behavior, especially when subjects are in the resting state \\citep{alle:14, ferr:13}\nIn resting state fMRI, a time series of Blood Oxygen Level Dependent (BOLD) signal is observed for the seed voxels in selected functional hubs. For each hub, a seed voxel is identified as the voxel whose signal has the highest correlation with the signals of nearby voxels. Alzheimer's Disease has been found to have associations with anomalies in functional integration of brain regions and target regions or hubs of high connectivity in the brain \\citep{damo:12,zhan:10c}. \n\n\nData used in the preparation of this article were obtained from the Alzheimer's Disease Neuro-imaging Initiative (ADNI) database (\\url{adni.loni.usc.edu}).\nBOLD signals for $V= 11$ brain seed voxels for each subject were extracted. These Regions of Interest are aMPFC (Anterior medial prefrontal cortex), PCC (Posterior cingulate cortex), dMFPC (Dorsal medial prefrontal cortex), TPJ (Temporal parietal junction), LTC (Lateral temporal cortex), TempP (Temporal pole), vMFPC (Ventral medial prefrontal cortex), pIPL (Posterior inferior parietal lobule), Rsp (Retrosplenial cortex), PHC (Parahippocampal cortex), and HF$^+$ (Hippocampal formation) \\citep{andr:10}. The pre-processing of the BOLD signals was implemented by adopting the standard procedures of slice-timing correction, head motion correction and normalization and other standard steps. The signals for each subject were recorded over the interval $[0, 270]$ (in seconds), with $K=136$ measurements available at $2$ second intervals. From this the temporal correlations were computed to construct the connectivity correlation matrix, also referred to as the Pearson correlation matrix in the area of fMRI studies. 
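In matrix form, this construction simply takes column-wise correlations of the $K\\times V$ signal matrix of each subject; a minimal sketch (variable names are illustrative) is\n\\begin{verbatim}\nimport numpy as np\n\ndef connectivity_matrix(S):\n    # S: (K, V) array of BOLD signals, K time points for V seed voxels;\n    # returns the V x V Pearson correlation (connectivity) matrix.\n    return np.corrcoef(S, rowvar=False)\n\\end{verbatim}\nwhich coincides entrywise with the expression given below.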
\n\nThe data set in our analysis consists of $n=830$ subjects at the four stages of the disease: $372$ CN (cognitively normal), $113$ EMCI (early mild cognitive impairment), $200$ LMCI (late mild cognitive impairment), and $145$ AD subjects. The inter-hub connectivity Pearson correlation matrix for the $i$-th subject is denoted as $Y_{i},$ and has the $(q,r)-$th element \n\\begin{align}\n(Y_{i})_{qr} = \\frac{\\sum_{p=1}^{K}(s_{ipq} - \\bar{s}_{iq}) (s_{ipr} - \\bar{s}_{ir})}{\\left[\\left(\\sum_{p=1}^{K}(s_{ipq} - \\bar{s}_{iq})^2\\right) \\left(\\sum_{p=1}^{K}(s_{ipr} - \\bar{s}_{ir})^2 \\right) \\right]^{1\/2}},\n\\label{data:ADNI:pearson:corr}\n\\end{align}\nwhere $s_{ipq}$ is the $(p,q)^{\\text{th}}$ element of the signal matrix for the $i^{\\text{th}}$ subject and $\\bar{s}_{iq} := \\frac{1}{K}\\sum_{p=1}^{K} s_{ipq}$ is the mean signal strength for the $q^{\\text{th}}$ voxel. For Alzheimer's disease trials, ADAS-Cog-13 is a widely-used measure of cognitive performance. It measures impairments across several cognitive domains that are considered to be affected early and characteristically in Alzheimer's disease \\citep{rock:07, kuep:18}. It is important to note that higher scores are associated with more serious cognitive deficiency. \n\nTo demonstrate the validity of the model, we consider the out-of-sample prediction performance of the proposed IFR. For this, we first randomly split the dataset into a training set with sample size $n_{\\text{train}}$ and a test set with the remaining $n_{\\text{test}}$ subjects. The IFR method was implemented as follows: for any given unit direction $\\boldsymbol{\\bar{\\theta}} \\in \\bar{\\Theta},$ we partition the domain of the projections into $M$ equal-width non-overlapping bins and compute the mean observations $\\tilde{\\tbfX}_l$ and $\\tilde{Y}_l$ for the data points belonging to the $l-$th bin, where the latter are defined as the appropriate Fr\\'echet barycenters. Observe that $M$ is dependent on the sample size. The ``true'' index is estimated as $\\boldsymbol{\\wh{\\bar{\\theta}}}$ as per~\\eqref{est:sim:obj}. We then take the fitted objects obtained from the training set, and predict the responses in the test set using the covariates present in the test set. As a measure of the efficacy of the fitted model, we compute the root mean squared prediction error (RMPE) as\n\\begin{align}\n\\text{RMPE} = \\left[\\frac{1}{M_{n_{\\text{test}}}}\\sum_{l=1}^{M_{n_{\\text{test}}}} d_F^2\\left(\\tilde{Y}_l^{\\text{test}}, \\hmop{\\tilde{\\tbfX}_l ^{\\top} \\boldsymbol{\\wh{\\bar{\\theta}}}} \\right) \\right]^{1\/2},\n\\end{align}\nwhere $\\tilde{Y}_l^{\\text{test}}$ and $\\hmop{\\tilde{\\tbfX}_l ^{\\top} \\boldsymbol{\\wh{\\bar{\\theta}}}}$ denote, respectively, the $l^{\\text{th}}$ observed and predicted responses in the test set, evaluated at the binned average $\\tilde{\\tbfX}_l.$ We repeat this process $500$ times, and compute RMPE for each split for the subjects separately (see Table~\\ref{tab:ADNI:rmpe}).\nThe tuning parameters $(b,M)$ are chosen by a $5-$fold cross validation method for each replication of the process.\n\\begin{table}[h!]\n\t\\centering\n\t\\begin{tabular}{ ccc } \n\t\t\\hline\n\t\t$n_{\\text{train}}$ & $n_{\\text{test}}$ & RMPE for the IFR method\\\\\n\t\t\\hline\n\t\t$500$ & $330$ & $0.206$ \\\\\n\t\t\\hline\n\t\\end{tabular}\n\t\\caption{Average Root Mean Prediction Error (RMPE) over $1000$ repetitions for the subjects, as obtained from the local fits of the single Index Fr\\'echet Regression (IFR) model. 
Here, $n_{\\text{train}}$ and $n_{\\text{test}}$ denote the sample sizes for the split training and testing datasets, respectively.}\n\t\\label{tab:ADNI:rmpe}\n\\end{table}\nWe observe that the out-of-sample prediction error is quite low after fitting the IFR model. In fact it is very close to the in-sample prediction error $(0.251)$, calculated as the average distance between the observed training sample and the predicted objects based on the covariates in the training set, which supports the validity of the proposed IFR model. \n\n\nAnother aim of our study was to understand the effect of various relevant predictors, such as age, gender, total score, and stage of the disease on the inter-hub functional connectivity. In particular, the hypothesis to test is given by\n$H_0: \\mop{\\cdot} = \\mop{\\theta_1X_1} \\text{ vs. } H_1: \\mop{\\cdot} = \\mop{\\sum_{j=1}^p \\theta_jX_j},$ or \nequivalently \n$$H_0: \\boldsymbol{\\theta} = \\mathbf{0}_{(p-1) \\times 1} \\ vs. \\ H_1: \\text{ not all } \\theta_j \\text{ are }0, \\ j =2,\\dots,p $$ \nwhere $\\boldsymbol{\\bar{\\theta}} = (\\theta_1,\\boldsymbol{\\theta})^{\\top}$ and $\\boldsymbol{\\theta} = (\\theta_2,\\dots,\\theta_p).$\nHere we consider $p =10$ predictors, namely, $X_{1}=$ stage of the disease, $X_2 =$ age, $X_3 =$ sex, $X_4 =$ total ADAS score, and the pairwise interaction terms between them as\n$X_5 = X_1X_2$, $X_6 = X_1X_3$, $X_7 = X_1X_4$, $X_8 = X_2X_3$, $X_9 = X_2X_4$, and $X_{10} = X_3X_4$. \n\nWe employ a bootstrap procedure to test for $H_0$.\nBased on a standard bootstrap approach, we resample $(\\tbf{X}_i^\\ast, Y_i^\\ast)$ a large number of times, $B$. For each of these $B$ bootstrap samples, the estimated direction $\\hat{\\boldsymbol{\\theta}}^{\\ast}$ (a $(p-1)-$dimensional vector) is computed. We estimate the full $p-$dimensional vector $\\boldsymbol{\\wh{\\bar{\\theta}}}^\\ast$ for each of the bootstrap samples, where $\\boldsymbol{\\wh{\\bar{\\theta}}}^\\ast = (\\widehat{\\theta}_1^\\ast, \\hat{\\boldsymbol{\\theta}}^\\ast),$ with $\\widehat{\\theta}_1^\\ast = \\sqrt{1 - \\ltwoNorm{\\hat{\\boldsymbol{\\theta}}^\\ast}^2}.$ Denote the $p-$dimensional unit vector estimated from the original sample as $\\boldsymbol{\\wh{\\bar{\\theta}}}.$ Both $\\boldsymbol{\\wh{\\bar{\\theta}}}$ and the vectors $\\boldsymbol{\\wh{\\bar{\\theta}}}^\\ast$ obtained from the $B$ bootstrap samples lie on the unit sphere in $\\mbb{R}^p.$ We can compute the geodesic distance between two points $\\boldsymbol{\\bar{\\gamma}_1}$ and $\\boldsymbol{\\bar{\\gamma}_2}$ on the unit sphere by $d_g(\\boldsymbol{\\bar{\\gamma}_1} , \\boldsymbol{\\bar{\\gamma}_2} ) = \\arccos \\langle \\boldsymbol{\\bar{\\gamma}_1} , \\boldsymbol{\\bar{\\gamma}_2} \\rangle.$ We then conduct the bootstrap test as follows. \n\\begin{itemize}\n\t\\item[1.] Compute $d_g(\\boldsymbol{\\wh{\\bar{\\theta}}}^{\\ast (b)}, {\\boldsymbol{\\bar{\\theta}_0}}),$ where $\\boldsymbol{\\wh{\\bar{\\theta}}}^{\\ast (b)}$ is the $p-$dimensional unit vector estimated based on the $b-$th bootstrap sample, $b = 1,\\dots, B$ and ${\\boldsymbol{\\bar{\\theta}_0}} = (1,0,\\dots,0)_{p \\times 1}.$\n\t\\item[2.] The Achieved Significance Level (ASL) of the bootstrap test is given by\n\t$$ASL = \\frac{1}{B}\\sum_{b=1}^B \\indicator{d_g(\\boldsymbol{\\wh{\\bar{\\theta}}}^{\\ast (b)},{\\boldsymbol{\\bar{\\theta}_0}}) > d_g(\\boldsymbol{\\wh{\\bar{\\theta}}}, \\boldsymbol{\\wh{\\bar{\\theta}}}^{\\ast (b)})},$$\n\t\\item[3.] 
If $ASL <\\alpha$ reject the null hypothesis at level $\\alpha.$\n\\end{itemize}\nWe carried out the procedure for testing the significance of the other predictors when ``stages of the disease'' is assumed to be included in the model. The $ASL$ for the bootstrap test came out to be $0.012,$ thus giving evidence for rejecting $H_0$ at level $\\alpha = 0.05.$ That is, not all of the $(p-1)$ predictors are insignificant. \n\\begin{table}[H]\n\t\\begin{center}\n\t\t\\begin{tabular}{|l|c|c|c|c|c|c|}\n\t\t\t\\hline\n\t\t\t& \\multicolumn{2}{c|}{Step 1} & \\multicolumn{2}{c|}{Step 2} & \\multicolumn{2}{c|}{Step 3} \\\\ \\hline\n\t\t\t& Coeff. & ASL & Coeff. &ASL & Coeff & ASL \\\\ \\hline\n\t\t\tAge & -0.364 & 0.005 & -0.394 & - & -0.401 & - \\\\ \\hline\n\t\t\tGender & 0.371 & 0.122 & 0.558 & 0.161 & 0.279 & 0.113 \\\\ \\hline\n\t\t\tTotal Score & 0.198 & 0.094 & 0.207 & 0.010 & 0.173 & - \\\\ \\hline\n\t\t\\end{tabular}\n\t\t\\caption{Table shows the coefficients and p-values for the step-wise addition of predictors to quantify their relative significance.}\n\t\t\\label{tab:data:step_reg}\n\t\\end{center}\n\\end{table}\nWe further performed a sequential addition of predictors to quantify their relative significance. For this, we specify an ``alpha-to-enter'' significance level at $\\alpha = 0.05$ and fit each of the one-predictor model. To this end, we consider $X_1$ to be in the model and fit model~\\eqref{sim:obj} to estimate the direction index. In particular, we include each of $X_2,$ $X_3,$ and $X_4$ in the model along with $X_1$ and test whether the corresponding effect is significant, i.e., we test $\\theta_j=0, \\ j=2,3,4$ separately. The first predictor to be included in the step-wise model is the predictor that has the smallest $ASL$. We stop if no predictor has a test $ASL$ less than $\\alpha.$ Table~\\ref{tab:data:step_reg} illustrates the step-wise addition of predictors for all one-predictor models. The first predictor to be added at level $0.05,$ when $X_1$ (``stage of the disease'') is already in the model, is $X_2$ (age). Interestingly, a negative value of $\\hat{\\theta}_2$ $(-0.364)$ signifies a possible negative effect of the predictor on the response. This is quite expected since Alzheimer's is a disease that is known to progress with age.\nIn step 2, $X_4$ (total score) is added as a significant predictor at $0.05$ level. Here the estimate $\\hat{\\theta}_4 = 0.207$ can be interpreted as the possible association of a higher value of the total score with greater cognitive impairment. The effect of $X_3$ (gender) is deemed not significant.\n\nFinally, with $X_1,$ $X_2,$ and $X_4$ in the model, we test for significance of the pairwise interactions terms. The hypothesis of interest is $H_0 : \\theta_5 =\\theta_6 = \\dots = \\theta_{10} =0.$ The $ASL$ for the test comes out to be $0.106,$ giving evidence for no significant pairwise interaction to be included in the model. \nThus we estimate the relevant ``true'' model in this case as $E_\\oplus(Y|\\tbfX \\t \\para) = \\mop{X_1\\theta_1 + X_2\\theta_2 + X_4\\theta_4}.$\nThe estimated average Fr\\'echet error $\\frac{1}{n} \\sum_{i=1}^n d^2(Y_i,\\hmop{X_{1i}\\hat{\\theta}_1 + X_{2i}\\hat{\\theta}_2 + X_{4i}\\hat{\\theta}_4})$ is \nquite small $(0.239).$ \n\\begin{figure}[!htb]\n\t\\centering\n\t\\includegraphics[width=.9\\textwidth]{.\/plots\/obs_vs_fits_ADNI_corrplots}\n\n\n\t\\caption{The observed and fitted functional connectivity matrices for an increasing value of the single index are plotted. 
The estimated direction $\\boldsymbol{\\wh{\\bar{\\theta}}}$ is computed for the final model, and the estimated index $\\tbf{X}_i ^{\\top} \\boldsymbol{\\wh{\\bar{\\theta}}}$ are calculated. The panels in the top row, from left to right, depict the observed functional connectivity correlation matrices for the subjects, for whom the estimated index values are the closest to the $25\\%, 50\\%,$ and $75\\%$ of the estimated index respectively. The bottom row shows the fitted functional connectivity correlation matrices for the same subjects, the link functions estimated using the nonparametric Fr\\'echet regression at the given quantiles (from left to right). Positive (negative) values are drawn in red (blue) and larger circles correspond to larger absolute values. The figure illustrates the dependence of functional connectivity on the overall index effect.}\n\t\\label{fig:ADNI:obs_vs_fits}\n\\end{figure}\n\n\nTo demonstrate the validity of the IFR method, we compute the estimated indices for the final model as $X_{1}\\hat{\\theta}_1 + X_{2}\\hat{\\theta}_2 + X_{4}\\hat{\\theta}_4,$ for each subject and calculate the $25\\%, 50\\%,$ and $75\\%$ quantiles of the index. These come out to be $q_1 = 15.048,$ $q_2 = 16.430,$ and $q_3 = 18.250,$ respectively. We find the subjects who have their estimated index values closest to $q_1,$ $q_2,$ and $q_3$ respectively. Table~\\ref{tab:ADNI:fits} shows the details on the three subjects selected. We compare the observed and fitted functional connectivity correlation matrices for these three subjects, where the object link function is fitted by the local Fr\\'echet regression method at the estimated index values corresponding to each subject. This gives an intuitive idea of how the estimated link function at the estimated direction vector given by $\\hmop{\\tbf{x} ^{\\top}\\boldsymbol{\\wh{\\bar{\\theta}}}}$ changes with an increasing value of the index $\\tbf{x} ^{\\top} \\boldsymbol{\\wh{\\bar{\\theta}}},$ and thus brings about the effectiveness of the IFR model. In Figure~\\ref{fig:ADNI:obs_vs_fits} we display the observed (top row) and fitted (bottom row) correlation matrices for an increasing value of the single index at $q_1,$ $q_2,$ and $q_3$ respectively, in the columns from left to right. We observe that the fits match the general pattern of the observed matrices quite well. Indeed the Frobenius distance between the observed and the estimated matrices at $q_1,$ $q_2,$ and $q_3$ are calculated as $1.68,$ $1.10,$ and $0.79,$ respectively. A seeming tendency to have more negative correlation values in the connectivity matrices with increasing index values is an interesting finding. An overall effect of stages of the disease, age, and total score tends to influence the connectivity pattern in the brain of the Alzheimer's patients. A higher index value would imply more cognitive deficiency in this case, which matches the widely held beliefs. 
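In code, the selection of these three representative subjects amounts to a few lines (a sketch only; \\texttt{X\\_final} collects the columns $X_1, X_2, X_4$ and \\texttt{theta\\_hat} the corresponding estimated coefficients):\n\\begin{verbatim}\nidx = X_final @ theta_hat                    # estimated index per subject\nq = np.quantile(idx, [0.25, 0.50, 0.75])     # q1, q2, q3\nsubjects = [int(np.argmin(np.abs(idx - qk))) for qk in q]\n\\end{verbatim}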
\n\\begin{table}[]\n\t\\centering\n\t\\begin{tabular}{|c|c|c|c|c|c|}\n\t\t\\hline\n\t\t\\begin{tabular}[c]{@{}c@{}}Subject\\\\ number\\end{tabular} &\n\t\t\\begin{tabular}[c]{@{}c@{}}Estd.\\\\ index value\\end{tabular} &\n\t\t\\begin{tabular}[c]{@{}c@{}}Stage of the\\\\ disease\\end{tabular} &\n\t\tAge &\n\t\tGender &\n\t\tTotal score \\\\ \\hline\n\t\t726 & 15.045 & 2 & 66.10 y & M & 20.33 \\\\ \\hline\n\t\t695 & 16.430 & 2 & 78.12 y & M & 14 \\\\ \\hline\n\t\t556 & 18.252 & 1 & 72.55 y & M & 51.67 \\\\ \\hline\n\t\\end{tabular}\n\t\\caption{Table showing the details on the subjects who have their estimated index values closest to the first three quantiles of the estimated index, $q_1 (15.048),$ $q_2 (16.430),$ and $q_3 (18.250),$ respectively. The subject number $726$ has his estimated index value closest to $q_1$ and so on.}\n\t\\label{tab:ADNI:fits}\n\\end{table}\n\\section{Discussion}\n\\label{sec:concl}\nThe proposed single Index Fr\\'echet Regression (IFR) model provides a new tool for the regression analysis of random object data, which are increasingly encountered in modern data analysis. Instead of a two step procedure to estimate the link function and the index parameter separately, we discuss a direct M-estimation approach. In fact, the proposed method is a combination of multivariate M-estimation (for the index vector) and Fr\\'echet regression (for the link function) that extends the regression regime to object data responses. The index parameter is recovered as the unit direction minimizing the ``residual sum of squares'' after fitting the Fr\\'echet model. In fact, any other convex loss function can be used for this purpose. For an efficient computation, we use the \\emph{Julia} language and parallel programming to estimate the minimizer direction $\\hat{\\boldsymbol{\\theta}}$ by searching over $1000$ directions such that the Fr\\'echet variance is minimized. In a $25$ core computing system, for a sample size $n=1000,$ determining the optimal direction takes about $2$ hours, while employing a $5-$fold cross validation method to select the tuning parameters $(b,M).$\n\nIn this project, we provide the asymptotic results involving the estimation of the index parameter which could be extended for inference. In particular, single index models being a generalization of linear regression, the interpretability of the index parameter is important for testing the effect of a subset of predictors in modulating the response. The same is true in our IFR model, despite the responses being situated in a general metric space, as illustrated by the FMRI brain imaging example.\n\n\\section*{Appendix}\n\\subsection*{A.1. 
Technical assumptions regarding local Fr\\'echet regression}\n\\label{appen:assump}\nRecall, for any given direction $\\boldsymbol{\\theta},$ such that $\\tbfx \\t \\para = t,$ the conditional Fr\\'echet mean is given by \n\\begin{equation}\\aligned\n\\label{loc:fr:target}\n\\mop{t} = \\underset{\\omega \\in \\Omega}{\\argmin} \\ M(\\omega,t); \\quad M(\\omega,t) := \\mbb{E}{(d^2(Y,\\omega)|\\tbf{X}^{\\top} \\boldsymbol{\\theta} = t)}.\n\\endaligned\\end{equation}\nThe local Fr\\'echet regression estimate is given by \n\\begin{equation}\\aligned\n\\label{loc:fr::inter:target}\n\\hmop{t} = \\underset{\\omega \\in \\Omega}{\\argmin} \\ \\hat{L}_n(\\omega,t);\\quad \\hat{L}_n(\\omega,t); :=\\frac{1}{n}\\sum_{i=1}^n\\wh{S}(\\tbf{X}_i ^{\\top} \\boldsymbol{\\theta}_1, t, h)d^2(Y_i,\\omega)).\n\\endaligned\\end{equation}\nLet us define the intermediate localized weighted Fr\\'echet mean as \n\\begin{equation}\\aligned\n\\label{loc:fr:estd}\n\\tmop{t} = \\underset{\\omega \\in \\Omega}{\\argmin} \\ \\tilde{L}_b(\\omega,t); \\quad \\tilde{L}_b(\\omega,t):=\\mbb{E}{(S(\\tbfX \\t \\para_1, t, h)d^2(Y,\\omega))}.\n\\endaligned\\end{equation}\nThe marginal density $f_T$ of $T := \\tbf{X} ^{\\top} \\boldsymbol{\\theta},$ for any $\\boldsymbol{\\theta} \\in \\mbb{R}^{p-1} \\text{ and } \\boldsymbol{\\theta} ^{\\top} \\boldsymbol{\\theta} \\leq 1,$ is bounded away from zero on its support $\\mathcal{T},$ i.e., $\\inf_{t\\in\\mathcal{T}}\\ f_T(t) >0.$ We require the following assumptions for as described in \\cite{chen:20}.\n\\begin{enumerate}[label=(U\\arabic*)]\n\\item \\label{ass:minUnif} \nFor all $t \\in \\mathcal{T},$ \nthe minimizers $\\mop{t}$, $\\hmop{t}$, and $\\tmop{t}$ exist and are unique, the latter two almost surely. In addition, for any $\\varepsilon>0$, \n\\begin{equation}\\begin{gathered}\n\\inf_{t \\in \\mathcal{T}}\n\\inf_{d(\\mop{t},\\omega)>\\varepsilon} [M(\\omega,t)-M(\\mop{t},t)]>0,\\\\\n\\liminf_{b \\to 0} \\inf_{t \\in \\mathcal{T}} \n\\inf_{d(\\omega, \\tmop{t}) > \\varepsilon} [\\tilde{L}_b(\\omega,t) - \\tilde{L}_b(\\tmop{t},t)]>0,\n\\end{gathered}\\end{equation}\nand there exists $c = c(\\varepsilon)>0$ such that\n\\begin{equation}\\begin{gathered}\nP\\left(\\inf_{t \\in \\mathcal{T}} \\inf_{d(\\hmop{t},\\omega) >\\varepsilon} [\\hat{L}_n(\\omega,t) - \\hat{L}_n(\\hmop{t},t)]\\geq c \\right) \\to 1.\n\\end{gathered}\\end{equation}\n\\item \\label{ass:entropyUnif} \nLet $\\mathcal{B}_r(\\mop{t}) \\subset \\Omega$ be a ball of radius $r$ centered at $\\mop{t}$ and $\\mathcal{N}(\\varepsilon,\\mathcal{B}_r(\\mop{t}),d)$ be its covering number using balls of radius $\\epsilon.$ Then\n\\begin{equation}\\begin{gathered}\n\\underset{r\\to 0+}{\\lim} \\int_0^1 \\underset{t \\in \\mathcal{T}}{\\sup} \\sqrt{1 + \\log \\mathcal{N}(r\\varepsilon,\\mathcal{B}_r(\\mop{t}),d)} d \\epsilon = O(1).\n\\end{gathered}\\end{equation}\n\\item \\label{ass:curvatureUnif} There exists $r_1,r_2>0,$ $c_1,c_2>0,$ and $\\beta_1,\\beta_2>1$ such that\n\\begin{equation}\\begin{gathered}\n\\inf_{t \\in \\mathcal{T}} \\ \\inf_{d(\\mop{t},\\omega)0}|K'(x)| < \\infty.$ Additionally, $\\int_\\mbb{R} x^2 |K'(x)|\\sqrt{|x\\log|x||} dx <\\infty.$\n\\item \\label{ass:jointdtn} \nThe marginal density $f_T$ of $T = \\tbfX \\t \\para$ for any given unit direction $\\boldsymbol{\\theta}$ and the conditional densities $f_{T|Y}(\\cdot,y)$ of $T = \\tbfX \\t \\para$ given $Y= y$ exist and are twice continuously differentiable on the interior of $\\mathcal{T}$, the latter for all $y \\in\\Omega$. 
The marginal density $f_T$ is bounded away from zero on its support $\\mathcal{T},$ i.e., $\\inf_{t\\in\\mathcal{T}} f_T(t) >0.$\nThe second-order derivative $f_T''$ is uniformly bounded, $\\sup_{^{\\top}}|f_T''(t)|<\\infty$. \nThe second-order partial derivatives $(\\partial^2 f_{T|Y}\/\\partial t^2)(\\cdot,y)$ are uniformly bounded, $\\sup_{y,t} |(\\partial^2 f_{T|Y}\/\\partial t^2)(\\cdot,y)| < \\infty$. \nAdditionally, for any open set $U\\subset \\Omega$, $P(Y\\in U | T = t)$ is continuous as a function of $t$. \n\\end{enumerate}\n\\bibliographystyle{apalike}\n\n\\section*{\\bf \\sf \\refname}}\n\\renewcommand{\\baselinestretch}{1.6}\n\\renewcommand{\\arraystretch}{1.0}\n\\newcommand{\\renewcommand{\\baselinestretch}{1.2}\\normalsize}{\\renewcommand{\\baselinestretch}{1.2}\\normalsize}\n\\newcommand{\\renewcommand{\\baselinestretch}{1.63}\\normalsize}{\\renewcommand{\\baselinestretch}{1.63}\\normalsize}\n\n\n\n\\newtheorem{prop}{Proposition}\n\n\\newtheorem{thm}{Theorem}\n\\newtheorem{lem}{Lemma}\n\\newtheorem{cor}{Corollary}\n{\n\t\\theoremstyle{remark}\n\t\\newtheorem{rem}{Remark}\n\t\\newtheorem{exm}{Example}\n\t\\newtheorem{eg}{Example}\n\t\\newtheorem{assumption}{Assumption}\n}\n\\newtheorem{defn}{Definition}\n\\newtheorem*{lem*}{Lemma}\n\n\\newcommand{\\begin{eqnarray*}}{\\begin{eqnarray*}}\n\\newcommand{\\end{eqnarray*}}{\\end{eqnarray*}}\n\\newcommand{\\begin{eqnarray}}{\\begin{eqnarray}}\n\\newcommand{\\end{eqnarray}}{\\end{eqnarray}}\n\\newcommand{\\begin{equation}}{\\begin{equation}}\n\\newcommand{\\end{equation}}{\\end{equation}}\n\\newcommand{\\aligned}{\\aligned}\n\\newcommand{\\endaligned}{\\endaligned}\n\\newcommand{\\als}[1]{\\begin{align*}#1\\end{align*}}\n\\newcommand{\\begin{equation}\\aligned}{\\begin{equation}\\aligned}\n\\newcommand{\\endaligned\\end{equation}}{\\endaligned\\end{equation}}\n\\newcommand{\\begin{equation}\\begin{gathered}}{\\begin{equation}\\begin{gathered}}\n\\newcommand{\\end{gathered}\\end{equation}}{\\end{gathered}\\end{equation}}\n\\newcommand{\\end{document}}{\\end{document}}\n\\newcommand{\\textit{et al. }}{\\textit{et al. 
}}\n\\newcommand{\\begin{tabular}}{\\begin{tabular}}\n\\newcommand{\\end{tabular}}{\\end{tabular}}\n\\newcommand{\\newpage}{\\newpage}\n\\newcommand{\\label}{\\label}\n\\newcommand{\\begin{itemize}}{\\begin{itemize}}\n\\newcommand{\\end{itemize}}{\\end{itemize}}\n\\newcommand{\\begin{figure}}{\\begin{figure}}\n\\newcommand{\\end{figure}}{\\end{figure}}\n\\newcommand{\\begin{enumerate}}{\\begin{enumerate}}\n\\newcommand{\\end{enumerate}}{\\end{enumerate}}\n\\newcommand{\\begin{array}}{\\begin{array}}\n\\newcommand{\\end{array}}{\\end{array}}\n\\newcommand{\\boldsymbol}{\\boldsymbol}\n\\newcommand{\\boldsymbol}{\\boldsymbol}\n\\newcommand{\\nonumber}{\\nonumber}\n\\newcommand{\\vspace{-.1in}}{\\vspace{-.2cm}}\n\n\\def\\vspace{0.3cm}{\\vspace{0.3cm}}\n\\def\\vspace{.125cm}{\\vspace{.1cm}}\n\n\\def\\iffalse{\\iffalse}\n\n\\newcommand{\\noindent}{\\noindent}\n\\newcommand{\\begin{center}}{\\begin{center}}\n\\newcommand{\\end{center}}{\\end{center}}\n\n\\definecolor{DarkBlue}{rgb}{0,.08,.45}\n\\definecolor{DarkRed}{rgb}{.7,0,.4}\n\\definecolor{DarkGreen}{rgb}{0,0.4,0}\n\\def\\textcolor{black}{\\textcolor{black}}\n\\def\\textcolor{blue}{\\textcolor{blue}}\n\\def\\textcolor{red}{\\textcolor{red}}\n\\def\\textcolor{DarkRed}{\\textcolor{DarkRed}}\n\\def\\textcolor{DarkBlue}{\\textcolor{DarkBlue}}\n\\def\\textcolor{DarkGreen}{\\textcolor{DarkGreen}}\n\\def\\begin{\\red}{\\begin{\\textcolor{red}}}\n\t\\def\\end{\\red}{\\end{\\textcolor{red}}}\n\\def\\begin{\\blu}{\\begin{\\textcolor{blue}}}\n\t\\def\\end{\\blu}{\\end{\\textcolor{blue}}}\n\n\n\\newcommand{\\begin{pmatrix}}{\\begin{pmatrix}}\n\t\\newcommand{\\end{pmatrix}}{\\end{pmatrix}}\n\n\\newcommand{\\ct}{\\lincmmt\n\\newcommand{\\includegraphics}{\\includegraphics}\n\\newcommand{\\hangindent=0.5cm\\hangafter=1\\noindent}{\\hangindent=0.5cm\\hangafter=1\\noindent}\n\n\\newcommand{\\begin{split}}{\\begin{split}}\n\t\\newcommand{\\end{split}}{\\end{split}}\n\n\\newcommand{\\begin{description}}{\\begin{description}}\n\t\\newcommand{\\end{description}}{\\end{description}}\n\n\\newcommand{\\begin{assumption}}{\\begin{assumption}}\n\t\\newcommand{\\end{assumption}}{\\end{assumption}}\n\\newcommand{\\begin{thm}}{\\begin{thm}}\n\t\\newcommand{\\end{thm}}{\\end{thm}}\n\\newcommand{\\begin{lem}}{\\begin{lem}}\n\t\\newcommand{\\end{lem}}{\\end{lem}}\n\\newcommand{\\begin{cor}}{\\begin{cor}}\n\t\\newcommand{\\end{cor}}{\\end{cor}}\n\\newcommand{\\begin{proof}}{\\begin{proof}}\n\t\\newcommand{\\end{proof}}{\\end{proof}}\n\\newcommand{\\begin{prop}}{\\begin{prop}}\n\t\\newcommand{\\end{prop}}{\\end{prop}}\n\n\\def\\vspace{-.1in}{\\vspace{-.24cm}}\n\\def\\vspace{.2cm}{\\vspace{.2cm}}\n\\def\\vspace{-1cm}{\\vspace{-1cm}}\n\\def\\vspace{-.1cm}{\\vspace{-.1cm}}\n\\def\\vspace{-.12in}{\\vspace{-.12in}}\n\\def\\hspace{0.2cm}{\\hspace{0.2cm}}\n\\def\\vspace{0.3cm}{\\vspace{0.3cm}}\n\\def\\vspace{.125cm}{\\vspace{.125cm}}\n\\def\\vspace{0.1cm}{\\vspace{0.1cm}}\n\\def\\vspace{.7cm}{\\vspace{.7cm}}\n\\def\\vspace{.125cm}{\\vspace{.125cm}}\n\n\\def_{i1}{_{i1}}\n\\def_{ik}{_{ik}}\n\\def_{ki}{_{ki}}\n\\def_{ij}{_{ij}}\n\\def_{ijk}{_{ijk}}\n\\def_{ik}{_{ik}}\n\\def_{jj}{_{jj}}\n\\def_{jk}{_{jk}}\n\\def^{-1\/2}{^{-1\/2}}\n\\def^{-2}{^{-2}}\n\\def^{1\/2}{^{1\/2}}\n\\def^{-1}{^{-1}}\n\n\\def{\\rm cov}{{\\rm cov}}\n\\def{\\rm Cov}{{\\rm Cov}}\n\\def{\\rm var}{{\\rm var}}\n\\def{\\rm Var}{{\\rm Var}}\n\\def{\\rm cor}{{\\rm cor}}\n\\def{\\rm diag}{{\\rm diag}}\n\\def{\\rm trace}{{\\rm trace}}\n\\def{\\rm diam}{{\\rm 
diam}}\n\\def\\hbox{logit}{\\hbox{logit}}\n\\def\\hbox{expit}{\\hbox{expit}}\n\\def\\cite{\\cite}\n\\def\\citep{\\citep}\n\\def\\varepsilon{\\varepsilon}\n\\def\\vspace{-.1in}{\\vspace{-.1in}}\n\\def{\\rm id}{{\\rm id}}\n\\def\\hbox{det}{\\hbox{det}}\n\\def\\mathrm{d}{\\mathrm{d}}\n\n\\def\\overline{X}{\\overline{X}}\n\\def\\widehat{\\widehat}\n\\def\\widetilde{\\widetilde}\n\\def\\widecheck{\\widecheck}\n\\def\\longrightarrow{\\longrightarrow}\n\\def\\rightarrow{\\rightarrow}\n\n\\def \\mathbb {\\mathbb}\n\\def \\mathbf {\\mathbf}\n\\def \\mathcal {\\mathcal}\n\\def \\mathscr{\\mathscr}\n\\def \\rightsquigarrow{\\rightsquigarrow}\n\\def\\ell^\\infty(\\Omega){\\ell^\\infty(\\Omega)}\n\\def \\textbf{\\textbf}\n\\def \\underset{\\eps \\to 0}{\\lim}\\ {\\underset{\\varepsilon \\to 0}{\\lim}\\ }\n\\def \\underset{h \\to 0}{\\lim}\\ {\\underset{h \\to 0}{\\lim}\\ }\n\\def \\frac{1}{\\eps}{\\frac{1}{\\varepsilon}}\n\\def \\frac{1}{\\eps^2}{\\frac{1}{\\varepsilon^2}}\n\\def \\boldsymbol{\\theta}_\\mathbf{\\ast}{\\boldsymbol{\\theta}_\\mathbf{\\ast}}\n\\def \\boldsymbol{\\theta}^\\mathbf{\\ast}{\\boldsymbol{\\theta}^\\mathbf{\\ast}}\n\\def \\triangledown{\\triangledown}\n\\def \\tilde{\\Dtri}{\\tilde{\\triangledown}}\n\\def \\wh{\\Dtri}{\\widehat{\\triangledown}}\n\n\\def \\boldsymbol{\\alpha}{\\boldsymbol{\\alpha}}\n\\def \\boldsymbol{\\beta}{\\boldsymbol{\\beta}}\n\\def g_{1\\oplus} {g_{1\\oplus}}\n\\def g_{2\\oplus} {g_{2\\oplus}}\n\\def h_\\oplus {h_\\oplus}\n\\def \\wh{\\sigma}_{rs}(\\boldsymbol{\\theta_0}){\\widehat{\\sigma}_{rs}(\\boldsymbol{\\theta_0})}\n\\def \\wh{S}{\\widehat{S}}\n\\def \\wh{\\Sigma}(\\boldsymbol{\\theta_0}){\\widehat{\\Sigma}(\\boldsymbol{\\theta_0})}\n\\def \\wh{\\Lambda}(\\boldsymbol{\\theta_0}){\\widehat{\\Lambda}(\\boldsymbol{\\theta_0})}\n\n\\def \\bm{\\mathit{\\Delta}}H {\\bm{\\mathit{\\Delta}}H}\n\\def \\bm{\\mathit{\\Delta}}\\tilde{V}_n {\\bm{\\mathit{\\Delta}}\\tilde{V}_n}\n\\def V_n{V_n}\n\\def \\bm{\\mathit{\\Delta}}\\Vn {\\bm{\\mathit{\\Delta}}V_n}\n\n\\def \\bm{\\mathit{\\Delta^2}}H {\\bm{\\mathit{\\Delta^2}}H}\n\\def \\bm{\\mathit{\\Delta^2}}\\tilde{V}_n {\\bm{\\mathit{\\Delta^2}}\\tilde{V}_n}\n\\def V_n{V_n}\n\\def \\bm{\\mathit{\\Delta^2}}\\Vn {\\bm{\\mathit{\\Delta^2}}V_n}\n\\def \\overset{P}{\\rightarrow}{\\overset{P}{\\rightarrow}}\n\\def \\overset{D}{\\rightarrow}{\\overset{D}{\\rightarrow}}\n\\def \\underset{\\boldsymbol{\\theta} \\in \\Theta}{\\sup}{ \\underset{\\boldsymbol{\\theta} \\in \\Theta}{\\sup}}\n\\def \\underset{\\omega \\in \\Omega}{\\argmin}{\\underset{\\omega \\in \\Omega}{\\argmin}}\n\\def \\underset{\\boldsymbol{\\bar{\\theta}} \\in\\ \\Theta}{\\argmin}{\\underset{\\boldsymbol{\\bar{\\theta}} \\in\\ \\Theta}{\\argmin}}\n\\def \\underset{\\boldsymbol{\\theta} : \\boldsymbol{\\theta} \\in \\Theta}{\\argmin}{\\underset{\\boldsymbol{\\theta} : \\boldsymbol{\\theta} \\in \\Theta}{\\argmin}}\n\\DeclareMathOperator*{\\argmax}{argmax}\n\\DeclareMathOperator*{\\argmin}{argmin}\n\n\\def\\msr{B}{\\mathscr{B}}\n\\def\\mbb{E}{\\mathbb{E}}\n\\defP{P}\n\\def\\mbb{R}{\\mathbb{R}}\n\\def\\mbb{N}{\\mathbb{N}}\n\\def\\ntnum_+{\\mbb{N}_+}\n\\def\\mbb{Z}{\\mathbb{Z}}\n\\defO{O}\n\\defo{o}\n\\defO_P{O_P}\n\\defo_P{o_P}\n\\def\\rightarrow\\infty{\\rightarrow\\infty}\n\\def\\indicator#1{\\mathbb I\\left(#1\\right)}\n\\def\\ball#1#2{B_{#2}(#1)}\n\n\\def^{\\top}{^{\\top}}\n\\def {\\boldsymbol{\\bar{\\theta}_0}}{{\\boldsymbol{\\bar{\\theta}_0}}}\n\\def \\tbf{x} \\t {\\boldsymbol{\\bar{\\theta}_0}}{\\tbf{x} ^{\\top} {\\boldsymbol{\\bar{\\theta}_0}}}\n\\def 
\\tbf{X} \\t {\\boldsymbol{\\bar{\\theta}_0}}{\\tbf{X} ^{\\top} {\\boldsymbol{\\bar{\\theta}_0}}}\n\\def \\boldsymbol{\\tilde{\\bar{\\theta}}}{\\boldsymbol{\\tilde{\\bar{\\theta}}}}\n\\def \\boldsymbol{\\wh{\\bar{\\theta}}}{\\boldsymbol{\\widehat{\\bar{\\theta}}}}\n\\def \\boldsymbol{\\bar{\\theta}}{\\boldsymbol{\\bar{\\theta}}}\n\n\\def\\boldsymbol{\\theta_0}{\\boldsymbol{\\theta_0}}\n\\def\\tilde{\\boldsymbol{\\theta}}{\\tilde{\\boldsymbol{\\theta}}}\n\\def\\boldsymbol{\\theta}{\\boldsymbol{\\theta}}\n\\def\\boldsymbol{\\theta}^{\\mathbf{(1)}}{\\boldsymbol{\\theta}^{\\mathbf{(1)}}}\n\\def\\tbf{X}{\\textbf{X}}\n\\def\\tbf{x}{\\textbf{x}}\n\\def m(\\cdot){m(\\cdot)}\n\\def \\hat{m}(\\cdot){\\hat{m}(\\cdot)}\n\\defK{K}\n\\def \\hat{\\boldsymbol{\\theta}}{\\hat{\\boldsymbol{\\theta}}}\n\\def \\tbfX \\t \\true{\\tbf{X} ^{\\top} \\boldsymbol{\\theta_0}}\n\\def \\tbfx \\t \\true{\\tbf{x} ^{\\top} \\boldsymbol{\\theta_0}}\n\\def \\tbfX \\t \\para{\\tbf{X} ^{\\top} \\boldsymbol{\\theta}}\n\\def \\tbfx \\t \\para{\\tbf{x} ^{\\top} \\boldsymbol{\\theta}}\n\\def \\tilde{Y}_l{\\tilde{Y}_l}\n\\def \\tilde{\\tbfX}_l{\\tilde{\\tbf{X}}_l}\n\\def \\tilde{L}_b{\\tilde{L}_b}\n\\def \\hat{L}_n{\\hat{L}_n}\n\\def \\tilde{V}_n{\\tilde{V}_n}\n\n\n\\newcommand \\mop[1]{m_\\oplus(#1)}\n\\newcommand \\hmop[1]{\\hat{m}_\\oplus(#1)}\n\\newcommand \\tmop[1]{\\tilde{m}_\\oplus(#1)}\n\\newcommand \\loomop[1] {\\hat{m}_{\\oplus(-i)}(#1)}\n\n\n\\newcommand{\\ltwoNorm}[2][]{%\n\t\\ifthenelse{ \\equal{#1}{} }\n\t{\\ensuremath{\\|#2\\|}}\n\t{\\ensuremath{\\|#2\\|_{#1}}}\n} ","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\n\n\nThe study of heavy quarkonia, i.e. mesons build from heavy (with $m_Q\\gg\\Lambda_\\mathrm{QCD}$) quark-antiquark pair is a very interesting task from both theoretical and experimental points of view. The theoretical model that is usually used for theoretical description of these particles is the Nonrelativistic Quantum Chromodynamic (NRQCD) \\cite{Bodwin:1994jh}, that allows one to describe with pretty good accuracy the processes of charmonia (e.g. ${J\/\\psi}$ or $\\chi_{cJ}$ or bottomonia (e.g. $\\Upsilon$, $\\chi_{bJ}$) production on hadronic colliders as well as various decays of these particles \\cite{Aad:2011sp,Chatrchyan:2011kc,TheATLAScollaboration:2013bja,Butenschoen:2012qr,Butenschoen:2012px, Likhoded:2016zmk, Aaij:2016bqq}. It should be mentioned, however, that NRQCD predictions depend on a number of parameters (so called NRQCD matrix elements), whose numerical values are determined phenomenologically from analysis of available experimental data \\cite{Likhoded:2014kfa,Abe:1997yz,Abulencia:2007bra,Chatrchyan:2012ub,LHCb:2012ac,Aaij:2013dja}.\n\nIn addition to mentioned above processes there are also reactions that can be considered in almost model independent way. Among such processes one can name, for example, lepton pair production in $\\chi_{cJ}\\to{J\/\\psi}\\ell\\ell$ and $\\chi_{bJ}\\to\\Upsilon\\ell\\ell$ decays. In \\cite{Faessler:1999de} it was shown that the branching fractions of these decays and distributions over the invariant mass of $(\\ell\\ell)$ pair can be calculated using very general assumptions on the basis of experimentally known branching fractions of the corresponding radiative decays $\\chi_{cJ}\\to{J\/\\psi}\\gamma$ and $\\chi_{bJ}\\to\\Upsilon\\gamma$ (see also \\cite{Eichten:1979ms,Brambilla:2004wf, Barnes:2005pb,Cao:2016xqo}). 
Currently only $\\chi_{c1,2}\\to{J\/\\psi} ee$ process was studied experimentally \\cite{Ablikim:2017kia}. It could be interesting also to consider muon pair production in $\\chi_{cJ}\\to{J\/\\psi}\\mu\\mu$ and $\\chi_{bJ}\\to\\Upsilon\\mu\\mu$ decays. In the current paper we consider theoretically these reactions.\n\n\n\n\nIn the recent experimental paper \\cite{Ablikim:2017kia} BESIII Collaboration analysed electron-positron pair production in $\\chi_{c1,2}$-meson decays\n\\begin{align}\n \\label{eq:decay}\n \\chi_{cJ} &\\to{J\/\\psi} e^+ e^-.\n\\end{align}\n In this article branching fractions of the named reactions and distributions over invariant mass of the lepton pair were presented. It is interesting to note, that these results can be obtained from the branching fractions of the radiative decays $\\chi_{cJ}\\to{J\/\\psi}\\gamma$ using a very simple relation. In paper \\cite{Faessler:1999de} for example, it was shown, that $q^2=m_{ee}^2$ distribution of the (\\ref{eq:decay}) decay is equal to\n \\begin{align}\n \\label{eq:II}\n \\frac{d\\Br{J}{{\\ell\\ell}}}{dq^2} &= \n \\frac{\\alpha}{3\\pi q^2}\\frac{\\lambda(M_\\chi,M_\\psi,\\sqrt{q^2})}{\\lambda(M_\\chi,M_\\psi,0)} \n \\left(1+\\frac{2m_\\ell^2}{q^2}\\right)\\sqrt{1-\\frac{4m_\\ell^2}{q^2}}\n \\Br{J}{\\gamma},\n \\end{align}\nwhere $\\Br{J}{{\\ell\\ell}}$ and $\\Br{J}{\\gamma}$ are the branching fractions of $\\chi_{cJ}\\to{J\/\\psi}{\\ell\\ell}$ and $\\chi_{cJ}\\to{J\/\\psi}\\gamma$ decays respectively and\n\\begin{align}\n \\lambda(M,m_1,m_2) &= \\sqrt{1-\\left(\\frac{m_1+m_2}{M}\\right)^2} \\sqrt{1-\\left(\\frac{m_1-m_2}{M}\\right)^2}\n\\end{align}\nis the velocity of the final particle in $M\\to m_1m_2$ decay. It should be noted that this result is almost model independent since it is based on the gauge invariance of $\\chi_c\\to{J\/\\psi}\\gamma^*$ vertex. The only assumption that was made is that one can neglect the form factors' dependence on photon virtuality. This assumption seems pretty reasonable since according to energy conservation $q^2<(M_\\chi-M_\\psi)^2\\sim 0.025\\,\\mathrm{GeV}^2$, that is much smaller than the typical hard scale $\\sim M_\\psi^2\\sim 10\\,\\mathrm{GeV}^2$. It is clear from presented in Fig.~\\ref{fig:hQEE} figures, that obtained by BESIII Collaboration experimental data are in good agreement with theoretical predictions (\\ref{eq:II}). The same is true also for the integrated values of the branching fractions: theoretical predictions in comparison with BESIII results are\n\\begin{align}\n \\fBr{0}{ee} &= 8.1\\times 10^{-3},\\qquad \\left(\\fBr{0}{ee}\\right)_\\mathrm{exp} = (9.5\\pm1.9\\pm0.7)\\times 10^{-3},\\\\\n \\fBr{1}{ee} &= 8.6\\times 10^{-3},\\qquad \\left(\\fBr{1}{ee}\\right)_\\mathrm{exp} = (10.1\\pm0.3\\pm0.5)\\times 10^{-3},\\\\\n \\fBr{2}{ee} &= 8.7\\times 10^{-3},\\qquad \\left(\\fBr{2}{ee}\\right)_\\mathrm{exp} = (11.3\\pm0.4\\pm0.5)\\times 10^{-3}.\n\\end{align}\n\n\\begin{figure}\n \\centering\n \\includegraphics[width=\\textwidth]{hQ_EE.pdf}\n \\caption{$Q$ distribution in comparison with BESSIII data \\cite{Ablikim:2017kia, Zhang:PC}}\n \\label{fig:hQEE}\n\\end{figure}\n\n\n\nIt could be interesting to use described above formalism for muon pair production in $\\chi_{cJ}\\to{J\/\\psi}\\mu\\mu$ decays. It is clear that in this case we can also use (\\ref{eq:decay}) relation to describe $q^2$ distributions, the corresponding results are shown in figure \\ref{fig:hQ2MM}(a). 
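The integrated values quoted above can be reproduced by a one-dimensional numerical integration of relation (\\ref{eq:II}) over $4m_\\ell^2 \\le q^2 \\le (M_\\chi-M_\\psi)^2$. A short Python sketch is given below; it is only illustrative, and the rounded mass values (in GeV) are inserted here for definiteness rather than taken from the text.\n\\begin{verbatim}\nimport numpy as np\nfrom scipy.integrate import quad\n\nalpha = 1.0 / 137.036\n\ndef lam(M, m1, m2):\n    return np.sqrt(1 - ((m1 + m2) / M) ** 2) * np.sqrt(1 - ((m1 - m2) / M) ** 2)\n\ndef br_ratio(M_chi, M_psi, m_l):\n    # Br(chi -> psi l l) / Br(chi -> psi gamma), integrating in t = log(q2)\n    def integrand(t):\n        q2 = np.exp(t)\n        return (alpha / (3 * np.pi)\n                * lam(M_chi, M_psi, np.sqrt(q2)) / lam(M_chi, M_psi, 0.0)\n                * (1 + 2 * m_l**2 / q2) * np.sqrt(1 - 4 * m_l**2 / q2))\n    val, _ = quad(integrand, np.log(4 * m_l**2), np.log((M_chi - M_psi) ** 2))\n    return val\n\n# chi_c1 -> J/psi e+ e- with M_chi = 3.511, M_psi = 3.097, m_e = 0.000511 (GeV)\nprint(br_ratio(3.511, 3.097, 0.000511))   # should come out close to the 8.6e-3 quoted above\n\\end{verbatim}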
One can easily see that the form of the distributions depends strongly on the spin of the initial particle and differs significantly from the presented in figure \\ref{fig:hQEE} curves, but all these distributions are given by all universal relation (\\ref{eq:II}), only the difference in participating particles' masses is important. As for the integrated branching fractions, we have\n\\begin{align}\n\\fBr{0}{\\mu\\mu} &= 2.2\\times 10^{-4},\n\\quad \\fBr{1}{\\mu\\mu}= 5.1 \\times 10^{-4},\n\\quad\\fBr{2}{\\mu\\mu}= 6.4\\times 10^{-4}.\n\\end{align}\nUsing the same approach we have calculated also branching fractions of $\\chi_{bJ}\\to\\Upsilon\\mu\\mu$ decays:\n\\\n\\begin{align}\n \\fBrb{0}{\\mu\\mu} &= 4.7\\times 10^{-4},\n\\quad \\fBrb{1}{\\mu\\mu}= 5.7 \\times 10^{-4},\n\\quad\\fBrb{2}{\\mu\\mu}= 6.2\\times 10^{-4}.\n\\end{align}\nDistributions over leptons' invariant mass are shown in Figure \\ref{fig:hQ2MM}(b).\n\n\\begin{figure}\n \\centering\n \\includegraphics[width=\\textwidth]{Q2_MM}\n \\caption{Normalized $Q^2$ distribution for $\\chi_{cJ}\\to J\/\\psi\\mu\\mu$ (left figure) and $\\chi_{bJ}\\to\\Upsilon\\mu\\mu$ (right figure) decays}\n \\label{fig:hQ2MM}\n\\end{figure}\n\n\n\n\nIf one wants to obtain other types of distributions more detailed information on the physics of the underlying process is required. In our work we use the following expressions for $P$-wave quarkonia decay verticies \\cite{Baranov:2011ib}:\n\\begin{align}\n \\A{0} &= g_0 M_V \\left( g_{\\mu\\nu} - \\frac{p_\\mu q_\\nu}{(pq)}\\right)\\epsilon_V^\\nu \\epsilon^{(\\gamma)}_\\mu, \\\\\n \\A{1} &= g_1 e_{\\mu\\nu\\alpha\\beta} q^\\nu\\epsilon_V^\\alpha\\epsilon_\\chi^\\beta \\epsilon^{(\\gamma)}_\\mu,\\\\\n \\A{2} &= \\frac{g_2}{M_V} p^\\mu \\epsilon_V^\\alpha \\epsilon_{\\chi_{c2}}^{\\alpha\\beta}\\left[q_\\mu\\epsilon^{(\\gamma)}_\\beta-q_\\beta\\epsilon^{(\\gamma)}_\\mu\\right],\n\\end{align}\nwhere $\\epsilon^{(\\gamma)}$, $\\epsilon_V$, and $\\epsilon_\\chi$ are polarization vectors of final photon, vector quarkonia and initial $\\chi_Q$ meson respectively, while dimensionless coupling constants $g_{0,1,2}$ can be determined from experimental values of the corresponding radiative decays.\nUsing these expressions it is easy to obtain presented in Fig.~\\ref{fig:hM2PsiM} distributions over squared invariant of $\\psi\\mu$ pair.\n\nExperimentally vector quakonium is detected via its leptonic decay $V\\to\\mu^+\\mu^-$. In order to obtain the information of the polarization picture of\n\\begin{align}\n \\label{eq:dec2}\n \\chi_{QJ} &\\to V\\mu^+\\mu^- \\to (\\mu^+\\mu^-)\\mu^+\\mu^-\n\\end{align}\nit could be interesting to study also distributions over squared invariant mass of the same-signed final leptons. These distributions are shown in Fig.~\\ref{fig:hm2MM}.\n\n\n\n\n\\begin{figure}\n \\centering\n \\includegraphics[width=\\textwidth]{PsiK.pdf}\n \\caption{Normalized $m^2_{V\\mu}$ distribution for $\\chi_c$- and $\\chi_b$ meson decays (left and right panels respectively)}\n \\label{fig:hM2PsiM}\n\\end{figure}\n\n\\begin{figure}\n \\centering\n \\includegraphics[width=\\textwidth]{K1KK1.pdf}\n \\caption{Normalized \n$m_{\\mu^+\\mu^+}^2$ distribution for $\\chi_c$- and $\\chi_b$ meson decays (left and right panels respectively)}\n \\label{fig:hm2MM}\n\\end{figure}\n\nThe author would like to thank I. Belyaev for useful discussions and J. 
Zhang for providing BESIII experimental data.\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} diff --git a/data_all_eng_slimpj/shuffled/split2/finalzzgski b/data_all_eng_slimpj/shuffled/split2/finalzzgski new file mode 100644 index 0000000000000000000000000000000000000000..1812e8af2b04171a909cd2e4994fc57f2977e70c --- /dev/null +++ b/data_all_eng_slimpj/shuffled/split2/finalzzgski @@ -0,0 +1,5 @@ +{"text":"\n\\section{Introduction}\nLet $K$ be a number field, $\\mathcal{O}_K$ be its ring of integers and $\\mathfrak{A}\\subseteq\\mathcal{O}_K$ be a principal ideal.\nOur aim is to investigate the following property of $\\mathfrak{A}$:\n\n\\begin{equation}\\label{eq:property}\\hbox{ $\\mathfrak{A}$ admits a generator $x$ such that $|\\sigma(x)|\\geq 1$ for every embedding $\\sigma: K\\to\\mathbb{C}$}.\n\\end{equation}\n\nWe shall call such a generator a \\emph{large} element of $\\mathcal{O}_K$ (\\emph{strictly large} when the strict inequality holds), and we shall call \\emph{large} every ideal satisfying property \\eqref{eq:property}.\n\nWe shall see that all but finitely many principal ideals $\\mathfrak{A}$ are strictly large; in particular this happens when the \\emph{logarithmic norm} $n(\\mathfrak{A})=\\frac {N(\\mathfrak{A})}{[K:\\mathbb{Q}]}$ exceeds the \\emph{covering radius} of the lattice of units with respect to the $L_\\infty$ norm. Moreover, every non-trivial ideal becomes large in a suitable finite extension of $K$.\n\nIt is possible to relate the notion of largeness of a principal ideal to the Weil height of its generators. Therefore, lower bounds of the Weil height on $K$, as given by the Bogomolov property, may help to prove that some ideals are not large. \nTo this aim, we shall state some inequalities concerning \nthe covering radius, the regulator and the Weil height of systems of multiplicatively independent units of $K$. We shall apply the technique described above in some concrete example.\n\nAs soon as the group of units of $K$ is known, it is relatively easy to decide if a principal ideal is large, and we give the corresponding algorithm. Then, we shall present\nthe results of applying it to some particular ideal in cyclotomic fields.\n\nAs a last application, we shall define the notion of floor function for $K$ relatively to $\\mathfrak{A}$ and show that condition \\eqref{eq:property} allows to explicitly construct a bounded floor function.\nThis turns out to be a good property for the the resulting continued fractions (\\cite{CapuanoMurruTerracini2022}).\n\n\\section{The largeness property and general results}\\label{sect:theproblem}\nLet $K$ be a number field of degree $d$.\nWe denote by $\\calo_K$ the ring of integers of $K$ and $\\calo_K^\\times$ the group of units.\nThe number field $K$ has $r_1$ real embeddings $\\sigma_1,\\dots,\\sigma_{r_1}$\nand $2r_2$ complex embeddings $\\sigma_{r_1+1},\\dots,\\sigma_{r_1+2r_2}$,\nwhere $r_1+2r_2 = d$ and $\\sigma_{r_1+i}$ and $\\sigma_{r_1+r_2+i}$\nare conjugates for $1\\leq i \\leq r_2$. We denote by $\\Sigma$ the whole set of Archimedean embeddings. 
We shall denote by $|\\cdot |$ the standard complex absolute value.\nAn element $x \\in K$ therefore has\n$r_1+r_2$ Archimedean absolute values, namely $|\\sigma_1(x)|, \\dots, |\\sigma_{r_1+r_2}(x)|$,\nand we have $|\\sigma_{r_1+i}(x)| = |\\sigma_{r_1+r_2+i}(x)|$\nfor all $1\\leq i \\leq r_2$.\nWe put $s=r_1+r_2$, $r=s-1$.\n\nLet\n\\begin{align*} \\iota: K& \\longrightarrow \\mathbb{R}^{r_1}\\times \\mathbb{C}^{r_2}\\\\\n\\lambda &\\longmapsto (\\sigma_1(\\lambda),\\ldots,\\sigma_{r_{1}}(\\lambda),\\sigma_{r_1+1}(\\lambda),\\ldots,\\sigma_{r_1+r_2}(\\lambda))\n\\end{align*} be the canonical embedding of $K$, and\n$$\\ell: K^\\times \\to \\mathbb{R}^{r_1+r_2}$$ be the logarithmic embedding, i.e., the composition $\\mathcal{L}\\circ\\iota$ where \n\\begin{align*} \\mathcal{L}: (\\mathbb{R}^\\times)^{r_1}\\times (\\mathbb{C}^\\times)^{r_2}& \\longrightarrow \\mathbb{R}^{r_1}\\times \\mathbb{R}^{r_2}\\\\\n(x_1,\\ldots, x_{r_1}, y_1,\\ldots, y_{r_2}) &\\longmapsto (\\log|x_1|,\\ldots, \\log|x_{r_1}|, 2\\log|y_1|,\\ldots, 2\\log|y_{r_2}|).\n\\end{align*} \\\\\nFor $\\mathbf{x}=(x_1,\\ldots, x_{r_1},y_1,\\ldots, y_{r_2})\\in \\mathbb{R}^{r_1}\\times \\mathbb{C}^{r_2}$, let us define\n$$N(\\mathbf{x})=\\prod_{i=1}^{r_1}|x_i \n|\\cdot \\prod_{j=1}^{r_2}|y_j|^2;$$ \nthen, $N(\\iota(a))=|N_{K\/\\mathbb{Q}}(a)|$ for every $a\\in K.$\nThe {\\em absolute norm} of $x\\in K$ is equal to the absolute value of the norm of $x$.\\\\\nWe shall denote $\\Lambda_K=\\ell(\\mathcal{O}_K^\\times)$; it is a lattice of rank $r$ in $\\mathbb{R}^s$ by Dirichlet's Unit Theorem. We recall also that the \\emph{regulator} $R_K$ of $K$ is the determinant of any submatrix of order $r$ of the $r\\times (r+1)$ matrix whose rows are $\\ell(u_1),\\ldots, \\ell(u_r)$ \nfor a system $u_1,\\ldots, u_r$ of fundamental units for $K$. If $V_K$ is the volume of a fundamental domain for $\\Lambda_K$ then the relation $V_K=\\sqrt{s}R_K$ holds.\n\n\\begin{definition}\\ \\begin{itemize}\n \\item[a)] We say that $x\\in \\mathcal{O}_K$ is \\emph{large} (resp. \\emph{strictly large}) if\n$|\\sigma(x)|\\geq 1$ (resp. $|\\sigma(x)|> 1$) for every $\\sigma\\in\\Sigma$ . \n\\item[b)] An ideal $\\mathfrak{A}\\subseteq\\mathcal{O}_K$ is \\emph{large} (resp. \\emph{strictly large}) if it principal and has a large (resp. strictly large) generator.\n\\end{itemize} \n\\end{definition}\n\nFor $x \\in \\mathcal{O}_K$\nas in the above definition,\nwe observe that\n$x$ is large in $\\mathcal{O}_K$ if and only if $x$ is large in $\\mathcal{O}_L$ for any finite extension $L$ of $K$.\nFor the ideal $\\mathfrak{A}$\nit is true that if it is large in $K$, then\nthe ideal $\\mathfrak{A} \\mathcal{O}_L$ is large in $\\mathcal{O}_L$, but the converse is not true.\n\nThen the following definition makes sense and extends the notion of largeness to possibly infinite extensions:\n\\begin{definition}\nLet $L$ be an infinite extension of $K$.\nFor an element $x \\in \\mathcal{O}_K$\nand an ideal $\\mathfrak{A} \\subseteq \\mathcal{O}_K$, we say that:\n\\begin{itemize}\n\\item[a)]\n$x$ is \\emph{large} (resp. \\emph{strictly large}) in $L$ if there is a number field $K'$ with $K \\subseteq K'\\subseteq L$ such that $x$ is large (resp. strictly large) in $K'$.\n\\item[b)]\n$\\mathfrak{A}$ is \\emph{large} (resp. \\emph{strictly large}) in $L$ if there is a number field $K'$ with $K \\subseteq K'\\subseteq L$ such that $\\mathfrak{A} \\mathcal{O}_{K'}$ is large (resp. 
strictly large) in $K'$.\n\\end{itemize}\n\\end{definition}\n\nSince every unit in $\\mathcal{O}_K$ has norm $\\pm 1$, a unit $x$ is large if and only if $|\\sigma(x)|=1$ for every $\\sigma\\in \\Sigma$; by a theorem of Kronecker \\cite{Kronecker1857}, this happens if and only if $x$ is a root of unity. Therefore no unit can be strictly large. So every strictly large element $x$ in $\\mathcal{O}_K$ must satisfy $|N_{K\/\\mathbb{Q}}(x)|\\geq 2$.\\\\\nWe shall see in Proposition \\ref{prop:normalarge} that almost all principal ideals in $\\mathcal{O}_K$ are (strictly) large. In order to prove this fact we need to recall some terminology from lattice theory. \n\nLet $\\Lambda$ be a lattice in $\\mathbb{R}^n$ of rank $r$ and for a real number $p\\in [1,\\infty)\\cup\\{\\infty\\}$ let $|| \\cdot||_p$ be the norm $L_p$ in $\\mathbb{R}^n$. The \\emph{distance function} relatively to $p$ is by definition\n $$\\rho_p(\\mathbf{v},\\Lambda)=\\min_{\\mathbf{w}\\in\\Lambda} ||\\mathbf{v}-\\mathbf{w}||_p.$$\n The \\emph{covering radius} of $\\Lambda$ with respect to $|| \\cdot||_p$ is\n $$\\rho_p(\\Lambda)=\\sup_{\\mathbf{v}\\in \\mathrm{span}(\\Lambda)} \\rho_p(\\mathbf{v},\\Lambda).$$\n Balls of radius $\\rho_p(\\Lambda)$ centered around all lattice points cover the whole space $\\mathrm{span}(\\Lambda)$.\\\\\n By the well known inequality\n \\begin{equation}\\label{eq:disuglp} ||\\mathbf{v}||_p\\leq ||\\mathbf{v}||_r\\leq n^{\\frac 1 r-\\frac 1 p}||\\mathbf{v}||_p\\quad \\hbox{for } \\infty\\geq p\\geq r,\\end{equation}\n we get \n \\begin{equation}\\label{eq:disugcovrad} \\rho_p(\\Lambda)\\leq \\rho_r(\\Lambda) \\leq n^{\\frac 1 r-\\frac 1 p}\\rho_p(\\Lambda)\\quad \\hbox{ for } \\infty\\geq p\\geq r.\\end{equation} \n If $K$ is a number field we shall write $\\rho_p(K)$ instead of $\\rho_p(\\Lambda_K)$.\n \n For every algebraic number $x\\in\\overline{\\mathbb{Q}}^\\times$ we define the \\emph{logarithmic norm}\n $$n(x)=\\frac{\\log|N_{\\mathbb{Q}(x)\/\\mathbb{Q}}(x)|}{[\\mathbb{Q}(x):\\mathbb{Q}]}.$$\n Analogously, if $\\mathfrak{A}\\subseteq \\mathcal{O}_K$ is any non-zero ideal, we write\n $$n(\\mathfrak{A})=\\frac{\\log|N_{K\/\\mathbb{Q}}(\\mathfrak{A})|}{[K:\\mathbb{Q}]}.$$\n Notice that $$n(x)=\\frac{\\log|N_{K\/\\mathbb{Q}}(x)|}{[K:\\mathbb{Q}]},$$\n for every finite extension $K$ of $\\mathbb{Q}(x)$; moreover\n $$n(x)=n(ux),$$\n for every algebraic unit $u\\in\\overline{\\mathbb{Q}}$. \n Then $n(a)=n(a\\mathcal{O}_K)$\n depends only on the principal ideal generated by $a$ in the ring of integers of every number field containing $a$.\\\\\n We also observe that $n:\\overline{\\QQ}^\\times\\to \\mathbb{R}$ is a morphism; in particular $n(x^k)=kn(x)$ for every $k\\in\\mathbb{N}$.\n We also have $n(x) \\ge 0$ when $x$ is an algebraic integer.\n\\begin{proposition}\\label{prop:normalarge} \n Every principal ideal $\\mathfrak{A}$ of $\\mathcal{O}_K$ such that $n(\\mathfrak{A})>\\rho_\\infty(K)$ is strictly large.\\\\\n Therefore all but finitely many integral principal ideals of $\\mathcal{O}_K$ are strictly large.\n\\end{proposition}\n\\begin{proof}\nLet $x\\in \\mathcal{O}_K$ be a generator of $\\mathfrak{A}$ and put $N=|N_{K\/\\mathbb{Q}}(x)|$. The image of units $\\ell(\\mathcal{O}_K^\\times)$ is a lattice in $\\mathbb{R}^s$ of rank $r=s-1$; it spans the hyperplane $\\mathcal{H}$ of $\\mathbb{R}^s$ with equation $\\sum_{i=1}^{r_1} x_i+\\sum_{i=1}^{r_2}y_i=0$. The vector $$\\mathbf{y}=\\ell(x)-\\frac 1 d \\log(N)(1,\\ldots,1,2,\\ldots,2)$$ lies on $\\mathcal{H}$ . 
Let $\\rho=\\rho_\\infty(K)$ and assume $n(x)>\\rho$; by definition of covering radius, there exists $u\\in\\mathcal{O}_K^\\times$ such that $||\\mathbf{y}+\\ell(u)||_\\infty \\leq \\rho$. This means that $|\\log|\\sigma(ux)|-n(x)|\\leq \\rho$ for every Archimedean embedding $\\sigma$ of $K$, so that\n$$\\log|\\sigma(ux)|\\geq n(x)-\\rho> 0,$$\nthat is, $|\\sigma(ux)|>1$ for every $\\sigma\\in\\Sigma$; hence $ux$ is a strictly large generator of $\\mathfrak{A}$.\nThe second assertion follows from the fact that the principal ideals $\\mathfrak{A}$ with $n(\\mathfrak{A})\\le\\rho$ have absolute norm at most $e^{d\\rho}$, and there are only finitely many ideals of bounded norm.\n\\end{proof}\n\n\\begin{proposition}\\label{prop:esisteL} Every non-trivial integral ideal $\\mathfrak{A} \\subsetneq \\mathcal{O}_K$ is strictly large in $\\overline{\\mathbb{Q}}$.\n\\end{proposition}\n\\begin{proof} The statement is a consequence of \\cite[Th\\'eor\\`eme 5.1]{BergeMartinet1989}; here we present a more direct proof.\nFirst of all, by class field theory, it is well known that $\\mathfrak{A}$ becomes principal in a suitable finite extension $K'$ of $K$.\nWe have $\\mathfrak{A} \\mathcal{O}_{K'} = x \\mathcal{O}_{K'}$ for some $x \\in \\mathcal{O}_{K'}$.\nBy Proposition \\ref{prop:normalarge} there exists a positive integer $j$ such that $x^j$ is strictly large in $K'$. Let $u\\in \\mathcal{O}_{K'}^\\times$ be such that $|\\sigma(u x^j)|>1$ for every embedding $\\sigma:K'\\to\\mathbb{C}$. Let $L=K'(\\omega)$ where $\\omega^j=u$. Let $\\tau:L\\to\\mathbb{C}$ be any embedding and $\\sigma$ be the restriction of $\\tau$ to $K'$. Then\n$$|\\tau(\\omega x)|^j=|\\sigma(ux^j)|>1$$\nso that $|\\tau(\\omega x)|>1$.\n\\end{proof}\n\nBy looking at the proof of Proposition \\ref{prop:esisteL}, we see that a uniform and stronger version holds true.\nFor every number field $K$ and every positive $j\\in\\mathbb{N}$, we denote by $K_j$ the field obtained from $K$ by adding the $j$-th roots of every unit of $K$; it is a finite extension of $K$ by Dirichlet's Unit Theorem.\n\\begin{proposition}\\label{prop:esisteLuniforme}\nLet $K$ be a number field, and let $j> \\frac{\\rho_\\infty(K)[K:\\mathbb{Q}]}{\\log2}$. Every non-trivial principal ideal $\\mathfrak{A} \\subsetneq \\mathcal{O}_K$ is strictly large in $K_j$.\n \\end{proposition}\n \\begin{proof}\n Let $x \\in \\mathcal{O}_K$ be a generator of $\\mathfrak{A}$.\n We have $n(x)\\geq \\frac{\\log 2}{[K:\\mathbb{Q}]}$, so that $n(x^j)=jn(x) >\\rho_\\infty(K)$. Then one can choose $L=K_j$ in the proof of Proposition \\ref{prop:esisteL}.\n \\end{proof}\n\\section{Largeness and Weil height}\\label{sect:height}\nLet $h$ denote the logarithmic Weil height of an algebraic number (see for example \\cite[\\S 1.5.7]{BombieriGubler2006}). For $x\\in K$\n$$h(x)=\\frac 1 d\\sum_{\\sigma\\in \\Sigma} \\max\\{0,\\log|\\sigma(x)|\\}+\\log|a|$$\nwhere $a$ is the leading coefficient of a primitive equation for $x$ over $\\mathbb{Z}$; in particular\nfor an algebraic integer $x$ in $\\mathcal{O}_K$\n$$h(x)=\\frac 1 d\\sum_{\\sigma\\in \\Sigma} \\max\\{0,\\log|\\sigma(x)|\\}.$$\nIt follows that \n\\begin{equation}\\label{eq:h>n} h(x)\\geq \\frac 1 d\\log|N_{K\/\\mathbb{Q}}(x)|=n(x) \\quad \\hbox{ for every non-zero algebraic integer $x$},\\end{equation}\nand equality holds exactly when $x$ is large.\\\\\nThen we can draw necessary conditions for largeness of ideals when an explicit lower bound for the height of elements in $\\mathcal{O}_K$ is known. 
Namely, if there is a constant $c>0$ such that \n\\begin{equation}\\label{eq:maggiorazioneperinteri} h(x)>c\\hbox{ for every } x\\in\\mathcal{O}_K\\setminus\\mathcal{O}_K^\\times\\end{equation}\nand $\\mathfrak{A}$ is a non-trivial principal ideal such that $n(\\mathfrak{A})\\leq c$, then $\\mathfrak{A}$ cannot be large.\\\\\nWe are thus led to make use of the well known \\emph{Bogomolov property} $\\PB$ and an additional property $\\PS$ defined below.\\\\\nLet $\\mathcal{A}$ be a set of algebraic numbers.\n We put\n \\begin{align*}\n b(\\mathcal{A})&=\\inf\\{h(x)\\ |\\ x\\in \\mathcal{A}, x\\not=0, x\\hbox{ not a root of unity }\\};\\\\\n s(\\mathcal{A})&=\\inf\\{n(x)\\ |\\ x\\in\\mathcal{A}, x\\not=0, N_{\\mathbb{Q}(x)\/\\mathbb{Q}}(x)\\not=\\pm 1\\}.\n \\end{align*}\n\\begin{definition}\\label{def:propBS} We say that a set $\\mathcal{A}$ of algebraic numbers satisfies\n \\begin{itemize} \n \\item[a)] \\emph{property $\\PB$} if $b(\\mathcal{A})>0$;\n\\item[b)] \\emph{property $\\PS$} if $s(\\mathcal{A})>0$.\n\\end{itemize}\n\\end{definition}\n\nIn particular, if $x\\in\\mathcal{O}_L\\setminus \\mathcal{O}_L^\\times $ for an (infinite) extension $L$ with the property $\\PB$, then $h(x)\\geq c_L$ for some $c_L>0$ depending only on $L$ and thus, by \\eqref{eq:h>n}, \n$$x\\mathcal{O}_{\\mathbb{Q}(x)} \\hbox{ large} \\Longrightarrow \nn(x)\\geq c_L.\n$$\nProperty $\\PB$ is known for some special algebraic extensions, such as the compositum ${\\mathbb Q}^{\\rm tr}$ of all totally real fields; it is also known for extensions having bounded local degrees at some finite place, and for Abelian extensions of number fields (see~\\cite[Remark 5.2, p.1902]{AmorosoDavidZannier2014}). \\\\\nNote however that property $\\PB$ for the whole field $L$, and even for the ring of integers of $L$, is much stronger than condition \\eqref{eq:maggiorazioneperinteri}, which assumes a lower bound only for the height of algebraic \\emph{integers} which are not units. \n\n\\begin{examples}\\label{exs:esempi}~\n\n\\begin{itemize}\n\\item[a)] Of course $\\PS\\Rightarrow \\PB$ if $\\mathcal{A}$ is a set of algebraic integers containing only a finite number of units. \n \n\\item[b)] On the other hand, there exist sets of algebraic integers satisfying \n$\\PB$ but not $\\PS$: for example the ring $\\mathcal{O}^{\\rm ab}$ of integers of $\\mathbb{Q}^{\\rm ab}$ satisfies property $\\PB$ with $b(\\mathcal{O}^{\\rm ab})\\geq \\frac{\\log 5}{12}$ (see the main theorem in \\cite{AmorosoDvornicich2000}), but \n$$n(1-\\zeta_p)=\\frac{\\log(p)}{p-1}$$ \nfor a prime $p$ and $\\zeta_p$ a primitive $p$-th root of unity; therefore $s(\\mathcal{O}^{\\rm ab})=0$.\n\\item[c)] It is proven in \\cite[Corollary 1]{AmorosoDvornicich2000} that property $\\PS$ holds for the set $\\mathcal{A}$ of algebraic integers $x$ lying in an Abelian extension of $\\mathbb{Q}$ and such that $ x\/{\\overline{x}}$ is not a root of unity. More precisely, \n$$n(x)\\geq \\frac{\\log 5 } {12}\\quad\\hbox{ for every $x\\in\\mathcal{A}$}.$$ \n\\item[d)] Recall that ${\\mathbb Q}^{\\rm tr}(i)$ is the compositum of all CM fields, see \\cite[page 1902]{AmorosoDavidZannier2014}; therefore $x\\in {\\mathbb Q}^{\\rm tr}(i)$ if and only if $\\mathbb{Q}(x)$ is either a totally real or a CM field. 
Since complex conjugation commutes with all the embeddings of ${\\mathbb Q}^{\\rm tr}(i)$ in $\\mathbb{C}$, \nwe have $|\\sigma(x)| = 1$ for some $\\sigma \\in \\Sigma$ if and only if \n$|\\sigma(x)| = 1$ for all $\\sigma \\in \\Sigma$.\nIn this case, we just write $|x| = 1$.\\\\\nBy a result of Schinzel (apply~\\cite[Corollary 1', p. 386]{Schinzel1973} to the linear polynomial $P(z)=z-x$), if\n$\\mathcal{A}=\\{x\\in {\\mathbb Q}^{\\rm tr}(i)\\ |\\ |x|\\not=1\\}$ then \n\\begin{equation}\\label{eq:Schinzel} b(\\mathcal{A})\\geq \\frac 1 2\\log \\frac{1+\\sqrt{5}} 2.\\end{equation}\n\\end{itemize}\n\\end{examples}\n\nBy Example \\ref{exs:esempi}.d) we obtain the following\n\\begin{proposition}\\label{prop:notlargeperSchinzel}\nLet $L={\\mathbb Q}^{\\rm tr}(i)$, and let $x\\in\\mathcal{O}_L $ be a non-zero element.\nIf $n(x)< \\frac 1 2 \\log\\frac{1+\\sqrt{5}}2$ then $x \\mathcal{O}_{\\mathbb{Q}(x)}$ is not large in $L$, unless $x$ is a unit.\n\\end{proposition}\n\\begin{proof}\nIf $x$ is a unit, then $x\\mathcal{O}_{\\mathbb{Q}(x)} = \\mathcal{O}_{\\mathbb{Q}(x)}$ is trivially large. If not, we have $|x|\\not=1$, so that Schinzel's result \\eqref{eq:Schinzel} implies that $ h(x)\\geq \\frac 1 2\\log \\frac{1+\\sqrt{5}} 2>n(x)$. Then the result follows from \\eqref{eq:h>n}.\n\\end{proof}\n\n\n\n\n\\subsection{Example} Let $p$ be one of the primes for which ${\\QQ}(\\zeta_{p-1})$ has class number one. Note that $p$ splits completely in ${\\QQ}(\\zeta_{p-1})$. Recall (\\cite{MasleyMontgomery1976}) that the cyclotomic field ${\\QQ}(\\zeta_m)$ has class number one if and only if $m$ is one of the following forty-four numbers:\n\\begin{multline*}\n3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 24, 25, 26, 27, 28,\\\\ \n30, 32, 33, 34, 35, 36, 38, 40, 42, 44, 45, 48, 50, 54, 60, 66, 70, 84, 90\n\\end{multline*}\n(which correspond to twenty-nine distinct cyclotomic fields). Thus the relevant primes are\n\\begin{equation}\n\\label{list}\n5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 61, 67, 71.\n\\end{equation} \n\\begin{question}\n\\label{qu:continued}\nLet $p$ be one of the fifteen primes~\\eqref{list} and let $\\mathfrak{P}$ be a prime ideal over $p$ in the ring of integers of $\\mathbb{Q}(\\zeta_{p-1})$. Is $\\mathfrak{P}$ large?\n\\end{question}\nSince ${\\mathbb Q}^{\\rm ab}\\subseteq{\\mathbb Q}^{\\rm tr}(i)$ and\n$$\np< \\Big(\\frac{1+\\sqrt{5}}2\\Big)^{\\varphi(p-1)\/2}\\hbox{ for }p=41,67\\hbox{ and }71,\n$$\nthe answer to Question~\\ref{qu:continued} is negative for these primes, by Proposition \\ref{prop:notlargeperSchinzel}. \n\nNote that we could have tried the same strategy as in Proposition \\ref{prop:notlargeperSchinzel}\nbut using the inequality of Example \\ref{exs:esempi}.b). This would work only for the primes $p$ satisfying\n\\begin{equation}\\label{eq:valetutti}\np < 5^{\\varphi(p-1)\/12}.\n\\end{equation}\nHowever, this inequality is satisfied by none of the primes in the list~\\eqref{list}.\n\nIn the subsequent Theorem \\ref{teo:risposta}, Question \\ref{qu:continued} will receive a complete answer.\n \\subsection{Number theoretic lower bounds for the covering radius}\n In the light of Propositions \\ref{prop:normalarge} and \\ref{prop:esisteLuniforme}, it is useful to have some quantitative information on the covering radius $\\rho_\\infty(K)$ of a number field $K$ of degree $d$. \\\\\n Let $\\lambda_1,\\ldots,\\lambda_r$ be the successive minima (w.r.t. the Euclidean norm $||\\cdot ||_2$) of the lattice $\\Lambda_K$. 
\n It is well known (see for example \\cite[Theorem 7.9]{MicciancioGoldwasser2002} that \n\\begin{equation}\\label{eq:disugminsucccovrad} \\lambda_1\\leq\\ldots\\leq \\lambda_r \\leq 2\\rho_2\\leq \\sqrt{s}\\lambda_r.\\end{equation}\nMoreover by \\eqref{eq:disugcovrad} we have\n \\begin{equation}\\label{eq:disugcovrad2}\n \\rho_\\infty(K)\\leq \\rho_2(K)\\leq \\sqrt{s}\\rho_\\infty(K).\\end{equation}\n Therefore \n \\begin{align} \\rho_\\infty(K)&\\geq \\frac 1 {\\sqrt{s}}\\rho_2(K)\\quad\\hbox{from \\eqref{eq:disugcovrad2}}\\nonumber \\\\\\\n& \\geq \\frac 1 {2\\sqrt{s}}\\lambda_r\\quad\\hbox{from \\eqref{eq:disugminsucccovrad}}\\label{eq:keyminoration}\n \\end{align} \n \n \\begin{theorem}\\label{teo:regolatore}~\n \\begin{itemize}\n \\item [a)] Let $K$ be a number field such that $r\\geq 1$. Then $\\rho_\\infty(K)\\geq \\frac 1 2 R_K^{\\frac 1r}$. \n \\item[b)] \n There exists a constant $c>0$ such that $\\rho_\\infty(K)\\geq c$ for every number field $K$ such that $r\\geq1$.\n \\end{itemize}\n \\end{theorem}\n \n \\begin{proof}\n Recall that $V_K$ is the volume of a fundamental domain for $\\Lambda_K$.\n By Minkowski's Second Theorem \\cite[Theorem 1.5]{MicciancioGoldwasser2002}\n $$(\\lambda_1\\cdot\\ldots\\cdot \\lambda_r)^{\\frac 1 r}\\geq \\sqrt{r}\\cdot V_K^{\\frac 1r}= \\sqrt{r}\\cdot (\\sqrt{s}R_K)^{\\frac 1r}.$$\n Then from \\eqref{eq:keyminoration}\n \\begin{align}\\label{eq:covradreg} \\rho_\\infty(K)\\geq \\frac {\\sqrt{r}}{2\\sqrt{s}}(\\sqrt{s}R_K)^{\\frac 1r}\\geq c' R_K^{\\frac 1r},\\end{align}\n for a suitable constant $c'$. Studying the function $\\frac {\\sqrt{r}}{2\\sqrt{s}}(\\sqrt{s})^{\\frac 1 r}$ with $s=r+1$ we see that $c'=\\frac 1 2$. This proves a). Then b) follows from the well known fact that there exist constants $c_0>0$ and $c_1>1$ such that $R_K>c_0\\cdot c_1^d$ (\\cite[\\S 3]{Zimmert1981}, see also \\cite{FriedmanSkoruppa1999}. \\end{proof}\n \n The next results provide some lower bounds for the covering radius involving the Weil height on $K$.\\\\\nFor $n=1,...,r$ we put, as in \\cite[Page 9]{AmorosoDavid2021}\n$$\\mu_K(n)=\\inf_{\\atop{v_1,\\ldots, v_n\\in\\mathcal{O}_K^\\times}{{\\rm multipl. indep.}}}(h(v_1)\\cdot\\ldots\\cdot h(v_n)),$$\n \n \\begin{theorem}\n \\label{teo:eccolo}\n For $n=1,\\ldots,r$, we have\n $$\\rho_\\infty(K)\\geq \\frac d s\\mu_K(n)^{\\frac 1 n}.$$\n In particular $$\\rho_\\infty(K)\\geq \\frac d s b(\\mathcal{O}_K^\\times).$$\n \\end{theorem}\n \\begin{proof}\nFor every $u\\in\\mathcal{O}_K^\\times$, by \\eqref{eq:disuglp},\n\\begin{equation}\\label{eq:due} 2dh(u)= ||\\ell(u)||_1 \\leq \\sqrt{s}||\\ell(u)||_2.\\end{equation} \n \nThere exist multiplicatively independent $u_1,\\ldots, u_r \\in\\mathcal{O}_K^\\times$ such that $\\lambda_i=||\\ell(u_i)||_2$ (\\cite[Theorem 1.2]{MicciancioGoldwasser2002}). \nWe notice that for $n=1,\\ldots, r$\n\\begin{align*} \\lambda_n & =\\inf_{\\atop{v_1,\\ldots, v_n\\in\\mathcal{O}_K^\\times} {\\rm multipl. indep.}} \\max\\{ ||\\ell(v_1)||_2,\\ldots, ||\\ell(v_n)||_2\\}\\\\\n&\\geq \\frac {2d} {\\sqrt{s}} \\inf_{\\atop{v_1,\\ldots, v_n\\in\\mathcal{O}_K^\\times} {\\rm multipl. indep.}} \\max \\{ h(v_1),\\ldots, h(v_n)\\}\\quad\\hbox{ by \\eqref{eq:due}.}\n\\end{align*}\nIt follows from \\eqref{eq:keyminoration} that\n\\begin{align*} \\rho_\\infty(K) &\\geq \\frac 1 {2\\sqrt{s}} \\lambda_n\\geq \\frac d s\\inf_{\\atop{v_1,\\ldots, v_n\\in\\mathcal{O}_K^\\times} {\\rm multipl. 
indep.}} \\max \\{ h(v_1),\\ldots, h(v_n)\\}\\geq \\frac d s\\mu_K(n)^{\\frac 1 n}.\\end{align*}\n\\end{proof}\n\nBy inequality \\eqref{eq:covradreg}, it is possible to use known lower bounds for the regulator $R_K$ in order to deduce lower bounds for the covering radius $\\rho_\\infty(K)$. See \\cite[\\S 3.5, 15]{Narkiewicz2004} for an overview on evaluations of the regulator and \\cite[\\S 8]{Narkiewicz2004} for the special case of Abelian extensions. Moreover \\cite[Proposition 3.3]{AmorosoDavid2021} provides a tool allowing to improve the bound from below of extensions $K$ for which $\\mathcal{O}_K^\\times$ has property $\\PB$. In particular \\cite[Corollaire 3.5]{AmorosoDavid2021} deals with the case of totally real and CM field.\n\nSome remarkable results are collected below:\n \n\\begin{itemize}\n\\item[a)] Let $L$ be an infinite extension. Assume that $\\mathcal{O}_L^\\times$ satisfies property $\\PB$ and let $c_L= b(\\mathcal{O}_L^\\times)=\\inf_{u\\in\\mathcal{O}_L^\\times\\setminus\\mathcal{O}_L^{\\times,\\rm{tors}}} h(u) >0$. Then by Theorem \\ref{teo:eccolo}$$ \\rho_\\infty(K)\\geq c_L,$$ for every number field $K \\subseteq L$ such that $r(K) \\geq 1$.\n\\item[b)] In particular, if a number field $K$ is contained in $\\mathbb{Q}^{\\rm{tr}}(i)$, then by Example \\ref{exs:esempi} d),\n$$\\rho_\\infty(K)\\geq \\frac 1 2 \\log \\frac{1+\\sqrt{5}} 2.$$\n\\item[c)] Silverman's theorem \\cite{Silverman1984} allows to construct fields with fixed degree and covering radius arbitrarily large. It suffices to choose a non-CM field of discriminant large enough. \n\\end{itemize}\n\n\n\n\n\n\\section{An algorithm for largeness}\\label{sect:algorithm}\n\n We describe an algorithm that solves the following problem:\n \\begin{problem}\\label{problem1}\n Given a non-zero algebraic number $x\\in K$ and a bound $B>0$,\n find all units $u\\in \\calo_K^\\times$ such that\n $|\\sigma(xu)| \\geq B$ for all $\\sigma \\in \\Sigma$.\n\\end{problem}\nWhen $B=1$ this algorithm detects when a principal ideal is large.\n\\subsection{Basic results}\n\n\\begin{lemma}\\label{upperbound}\n If $u \\in \\calo_K^\\times$ is a solution of Problem \\ref{problem1},\n then for all $S \\subset \\Sigma$, we have\n $$B^{\\# S} \\leq \\prod_{\\sigma \\in S} |\\sigma(xu)| \\leq B^{\\# S - d} N(x)$$\n where $\\# S$ denotes the cardinality of $S$.\n\\end{lemma}\n\\begin{proof}\nLet $u \\in \\mathcal{O}_K^\\times$ be a solution of Problem \\ref{problem1}.\nFor $\\sigma \\in S$, we have the trivial inequality $B \\leq |\\sigma(xu)|$.\nMultiplying these inequalities gives the announced left inequality.\n\n\nFor $\\sigma \\in S$, we have the inequality $|\\sigma(xu)| \\leq |\\sigma(xu)|$, and\nfor $\\sigma \\not\\in S$, we have $B \\leq |\\sigma(xu)|$.\nMultiplying these inequalities gives $B^{d-\\# S} \\prod_{\\sigma \\in S} |\\sigma(xu)| \\leq N(xu)$.\nBut $u$ is a unit, hence $N(xu) = N(x)$, whence the result.\n\\end{proof}\n\n\\begin{proposition}\\label{smallnorm}\n If $N(x) < B^d$, then Problem \\ref{problem1} has no solution.\n\\end{proposition}\n\\begin{proof}\nApply Lemma \\ref{upperbound} with $S = \\Sigma$.\n\\end{proof}\n\n\\begin{proposition}\\label{finitesol}\n Given $x\\in K$ and $B>0$, Problem \\ref{problem1} has only finitely many solutions.\n\\end{proposition}\n\\begin{proof}\nBy Proposition \\ref{smallnorm}, Problem \\ref{problem1} has no solution for $x=0$.\nWe assume now that $x\\neq 0$.\n\nDividing the right inequality of Lemma \\ref{upperbound} by $\\prod_{\\sigma \\in S}|\\sigma(x)| \\neq 0$ gives\n$$ 
\\prod_{\\sigma\\in S} |\\sigma(u)| \\leqslant B^{\\# S - d} \\prod_{\\sigma \\not\\in S}|\\sigma(x)|\n\\leqslant B^{\\#S-d} \\left( \\frac {||x||_1}{d-\\# S} \\right)^{d-\\# S} \\; .$$\nLet us consider the characteristic polynomial of $u$ for the extension $K\/{\\QQ}$ and\ndenote it by $P_u$.\nSince $u$ is a unit, $P_u \\in \\mathbb{Z}[X]$ and $P_u$ is monic.\nThe roots of $P_u$ in $\\mathbb{C}$ are the real or complex numbers $\\sigma(u)$.\nUsing the above inequality and expressing the coefficient $a_k$ of $X^k$ in $P_u$ in terms of the roots of $P_u$,\nwe deduce that $|a_k|$ is bounded independently of $u$.\nFor example we have\n$|a_d| = |a_0|=1$ since $u$ is a unit\nand \n$$|a_k| \\leq \\binom{d}{k} \\left( \\frac {||x||_1}{kB} \\right)^k$$\nfor the other values of $k$.\nSince this bound does not depend on $u$, there are only finitely many\npossibilities for $P_u$, hence for $u$.\n\\end{proof}\n\n\\noindent{\\bf Remark.}\nWe could turn the proof of Proposition \\ref{finitesol} into an algorithm\nthat tests all polynomials with coefficients within some bounds depending on $B$ and $x$.\nExplicitly, using the bounds given during the proof,\nwe see that, for a given number field of fixed degree $d$,\nthe number of polynomials that need to be tested, is proportional to\n$\\displaystyle \\left( \\frac {||x||_1}{B} \\right)^\\alpha$\nwith $\\displaystyle \\alpha = \\sum_{k=1}^{d-1} k = \\frac {d(d-1)}2$.\nThe number of polynomials that need to be tested in the algorithm\nis therefore exponential in the input $x$, hence very large,\nand the resulting algorithm is very slow.\n\nWe will give another algorithm in the next section.\n\n\\subsection{An algorithm to solve Problem \\ref{problem1}}\nIf $A=(a_{i,j})$ is a matrix (or a vector) with real entries,\nwe write $A \\geq 0$ to indicate that $a_{i,j} \\geq 0$ for all $i$ and $j$.\nWe also write $A\\geq B$ if $A-B \\geq 0$.\nWe will use the fact that, if $A\\geq 0$ and $B\\geq 0$, then $AB \\geq 0$.\n\nWe recall that, for a number field $K$ of degree $d$, with $r_1$ real embeddings and $r_2$ complex embeddings, we have set $s = r_1+r_2$ and $r = s-1$.\nIf $u_1,\\dots,u_r$ are generators of $\\calo_K^\\times$ modulo torsion,\nwe define the matrix $L$ of size $r\\times s$ by\n$$ L_{i,j} = \\log | \\sigma_j(u_i)|$$\nif $\\sigma_j$ is real and \n$$ L_{i,j} = 2 \\log | \\sigma_j(u_i)|$$\nif $\\sigma_j$ is complex.\nThe $i$-th row of $L$ is equal to $\\ell(u_i)$.\nAt last, we define the column vector $V = (1,\\dots,1)^t \\in \\mathbb{Z}^s$.\n\n\\medskip\nUsing logarithms, we can reformulate our Problem \\ref{problem1} as:\n\n\\begin{problem}\\label{problem2}\n Given a non-zero algebraic number $x\\in K^\\times$ and a bound $B>0$,\n find all rows $U \\in \\mathbb{Z}^r$ such that\n $$ UL + X \\geq 0 $$\n where $X = \\ell(x)-\\mathcal{L}(B)$.\n\\end{problem}\n\nFormulated in this way, we see that Problem \\ref{problem2} can be solved by integer linear programming. 
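For illustration, the following minimal sketch (in Python with numpy; the matrix $L$ and the vector $X=\\ell(x)-\\mathcal{L}(B)$ are assumed to have been computed beforehand, e.g. with PARI\/gp, and the function name is ours) simply tests whether a candidate exponent vector $U$ satisfies the system of Problem \\ref{problem2}; a solution $U=(U_1,\\ldots,U_r)$ corresponds to the unit $u=u_1^{U_1}\\cdots u_r^{U_r}$.\n\\begin{verbatim}\nimport numpy as np\n\ndef is_solution(U, L, X):\n    # U: integer vector of length r (exponents of the fundamental units)\n    # L: r x s matrix whose i-th row is ell(u_i)\n    # X: vector ell(x) - L(B) of length s\n    # Problem 2 asks for U L + X >= 0 componentwise.\n    return bool(np.all(U @ L + X >= 0))\n\\end{verbatim}\n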
However, the situation is not generic here, and a simpler algorithm is given below.\n\n\\begin{algo}\\label{algo}~\n \n Input : $x\\in K$, $x\\neq 0$, and $B>0$.\n\n Output : all solutions $U \\in \\mathbb{Z}^r$ of Problem \\ref{problem2}.\n \n {\\tt\n \\begin{enumerate}\n \\item Compute the matrix $L$ of size $r\\times s$\n and the column vector $V$ of size $s$ as in the above definition.\n \\item Remove from $L$ its last column and call $M$ the inverse of this matrix.\n Concatenate $M$ with the row vector of size $r$ whose all entries are $0$\n and obtain a new matrix $M$ of size $s\\times r$.\n \n For the next steps, we use the notation $N_{,j}$ for the $j$-th column of a matrix $N$.\n \\item Define the matrix $N^+$ of size $s\\times r$, such that, for $1\\leq j \\leq r$, \n $N^+_{,j} = M_{,j} - \\min_i\\{M_{i,j}\\} V$.\n \\item Define the matrix $N^-$ of size $s\\times r$, such that, for $1\\leq j \\leq r$, \n $N^-_{,j} = M_{,j} - \\max_i\\{M_{i,j}\\} V$.\n \n\n \\item Compute the row vector $X = \\ell(x) - \\mathcal{L}(B)$.\n \\item For all row vector $U \\in \\mathbb{Z}^r$ in the range\n $ -XN^+ \\leq U \\leq -XN^-$, test if $UL+X \\geq 0$.\n If this is the case, output $U$.\n \\end{enumerate}\n }\n\\end{algo}\n\n\\begin{proposition}\\label{proofalgo}\n Algorithm \\ref{algo} is correct.\n\n Furthermore, when the number field $K$ is fixed,\n the number of $U$ that need to be tested during step 6\n is at most proportional to $( \\log N(x)- d\\log B +1)^r$.\n\n\\end{proposition}\n\\begin{proof}\nWe follow the algorithm step by step.\n\\begin{enumerate}\n\\item By Dirichlet's Unit Theorem, the matrix $L$ constructed in step 1 has rank $r$.\n Since the absolute norm of a unit is equal to $1$, we have $LV = 0$,\n hence $V$ is in the right kernel of $L$.\n\\item By Dirichlet's Unit Theorem,\n when we remove any column of $L$, the determinant of the remaining square matrix\n is always the same and equals the regulator of $K$, which is not $0$.\n This matrix of size $r \\times r$ is invertible.\n By construction, we have $LM = I_r$.\n\\item For all columns of $N^+$, we have $N^+_{,j} = M_{,j} - \\min_i\\{M_{i,j}\\} V$.\n Let $i(j)$ be the index such that $\\min_i\\{M_{i,j}\\} = M_{i(j),j}$.\n We have $N^+_{i,j} = M_{i,j} - M_{i(j),j} \\geq 0$ by minimality.\n Hence $N^+ \\geq 0$.\n Because $V$ is in the right kernel of $L$, we deduce that\n $LN^+ = LM = I_r$.\n\\item Using a similar argument, we can prove that\n $N^- \\leq 0$ and $LN^- = I_r$.\n\\item There is nothing to say here.\n\\item If $U$ is a solution of Problem \\ref{problem2},\n then $UL+X \\geq 0$.\n But $N^+ \\geq 0$, hence $ULN^+ + XN^+ \\geq 0$.\n By the relation $LN^+ = I_r$, we deduce $U \\geq -XN^+$.\n By $N^- \\leq 0$, we deduce $ULN^- + XN^- \\leq 0$, and $U \\leq -XN^-$.\n\\end{enumerate}\n\nIn order to bound the number of $U$ tested in step $6$,\nwe observe that $-XN^+ \\leq U \\leq -XN^-$.\nFor the $j$-th entry, this is explicitly\n$-XN^+_{,j} \\leq U_j \\leq -XN^-_{,j}$\nhence the number of $U_j$ that need to be tested is at most equal to\n$-XN^-_{,j} + XN^+_{,j}+1$. But we have\n\\begin{align*} -XN^-_{,j} + XN^+_{,j} & = X(M_{,j}-\\min_i\\{M_{i,j}\\}V - M_{,j}+\\max_i\\{M_{i,j}\\}V)\\\\\n& = (\\max\\{M_{,j}\\} - \\min\\{M_{,j}\\}) XV\n\\end{align*}\nWe also have $XV = \\log N(x)- d\\log B$.\nIf $\\log N(x) -d\\log B < 0$, we have seen in Proposition \\ref{smallnorm} that the problem has no solution. 
When\n$\\log N(x) -d\\log B \\geq 0$, we have\n$$-XN^-_{,j} + XN^+_{,j}+1\n\\leq \n(\\max\\{M_{,j}\\} - \\min\\{M_{,j}\\}+1) (\\log N(x) - d \\log B + 1)$$\nwhence a bound for the number of $U$ by\n$$ (\\log N(x) - d\\log B+1)^r \\times \\prod_j (\\max\\{M_{,j}\\} - \\min\\{M_{,j}\\}+1)$$\n\n\n\\end{proof}\n\n\\section{A complete example}\\label{sect:cyclotomic}\n\nIn this section, we shall give a detailed execution of Algorithm \\ref{algo}, which answers Question \\ref{qu:continued} for $p=17$.\nAll computations were done using PARI\/gp \\cite{PARI2}.\n\n\\medskip\nLet us consider the $16$-th cyclotomic field $K$ equal to ${\\QQ}(\\zeta) = {\\QQ}[X]\/\\Phi_{16}(X)$,\nwhere $\\Phi_{16}(X) = X^8+1$.\n\nIn this field, we consider $x = -\\zeta^7 - \\zeta^3 + \\zeta^2$.\nWe have $N(x) = 17$, hence $x$ is a generator of a principal prime ideal above $17$.\nWe are looking for another generator $x'$ of this principal ideal\nsuch that $|\\sigma(x')| \\geq 1$ for all $\\sigma\\in\\Sigma$. We need to solve Problem \\ref{problem2} with $B = 1$.\n\nWe follow here the steps of Algorithm \\ref{algo}.\n\n\\begin{enumerate}\n\\item\n For this field, we have $d=8$, $r_1 = 0$ and $r_2 = 4$.\n In this case, we have $s = r_1+r_2 = 4$ and $r = 3$.\n \n The units of $K$ are generated by\n $u_0 = \\zeta$, $u_1 = -\\zeta^6 + \\zeta^2 - 1$, $u_2 = \\zeta^2+\\zeta+1$ and $u_3=-\\zeta^6+\\zeta^3-\\zeta$,\n where $u_0$ generates the torsion part and $u_1,u_2,u_3$ generate the free part.\n The matrix $L$ is equal to\n $$L =\n \\left(\n \\begin{array}{rrrr}\n -1.76274&-1.76274& 1.76274& 1.76274\\\\\n -0.33031& 2.09306&-2.89946& 1.13671\\\\\n 1.13671&-2.89946&-0.33031& 2.09306\n \\end{array} \\right)\n $$\n We easily check that $L\\begin{pmatrix}1\\\\1\\\\1\\\\1\\end{pmatrix} = 0$.\n\\item\n We have\n $$M = \n \\left( \\begin{array}{rrrr}\n -0.46575&-0.29144& 0.07276\\\\\n -0.17430&-0.07276&-0.29144\\\\\n -0.07276&-0.36421&-0.21868\\\\\n 0 & 0 & 0\n \\end{array} \\right)\n $$\n We can check that $LM = I_3$, the identity matrix of order $3$.\n\\item We have\n $\\min(M_{,1}) = -0.46575$,\n $\\min(M_{,2}) = -0.36421$,\n $\\min(M_{,3}) = -0.29144$.\n This gives\n $$N^+ = \\left( \\begin{array}{rrr}\n 0 & 0.07276 & 0.36421 \\\\\n 0.29144 & 0.29144 & 0 \\\\\n 0.39298 & 0 & 0.07276 \\\\\n 0.46575 & 0.36421 & 0.29144 \\\\\n \\end{array}\\right)$$\n We can check that $N^+ \\geq 0$ and $LN^+ = I_3$.\n\\item We have\n $\\max(M_{,1}) = 0$,\n $\\max(M_{,2}) = 0$,\n $\\max(M_{,3}) = 0.07276$. 
This gives\n $$N^- = \\left( \\begin{array}{rrr}\n -0.46575 & -0.29144 & 0 \\\\\n -0.17430 & -0.07276 & -0.36421\\\\\n -0.07276 & -0.36421 & -0.29144\\\\\n 0 & 0 & -0.07276\\\\\n \\end{array}\\right)$$\n We can check that $N^- \\leq 0$ and $LN^- = I_3$.\n\\item Since $B=1$, we have $\\mathcal{L}(B) = 0$.\n For $x = -\\zeta^7 - \\zeta^3 + \\zeta^2$, we have\n $$X = \\ell(x) = (1.40668, 0.65107, 1.72510, -0.94965)$$\n\\item\n We compute\n $$-XN^+ = (-0.42539, 0.05376, -0.36109)$$\n $$-XN^- = ( 0.89419, 1.08567, 0.67081)$$\n In this example, the only $U \\in \\mathbb{Z}^3$ within the bounds is\n $U = (0,1,0)$.\n However, for this $U$, we have\n $$UL+X = (1.07636, 2.74414, -1.17435, 0.18706)$$\n hence this is not a solution.\n\n\n\\end{enumerate}\n This computation shows that, in this example, Problem \\ref{problem2} has no solution.\\\\\nAnalogous computations applied to all prime $p$ in the list \\eqref{list} allow to give a complete answer to Question \\ref{qu:continued}:\n\\begin{theorem}\\label{teo:risposta}\nLet $p$ be one of the primes for which $\\mathbb{Q}(\\zeta_{p-1})$ has class number 1, as listed in \\eqref{list}, and let $\\mathfrak{P}$ be a prime ideal over $p$ in the ring of integers of $\\mathbb{Q}(\\zeta_{p-1})$. Then $\\mathfrak{P}$ is large if and only if\n\\begin{equation}\\label{eq:listalarge} p\\in \\{5,7,11,13,19,31\\}.\\end{equation}\n\\end{theorem}\nMore precisely, for each primes in the list \\eqref{eq:listalarge} the following table gives (up to Galois conjugation and multiplication by a root of unity) the elements $\\pi$ of absolute norm $p$ having all the components $\\geq 1$ in the canonical embedding ($\\zeta=\\zeta_{p-1}$ in each case): \n\n\\begin{table}[h]\n\\def1.5{1.5}\n\\begin{tabular}{l|l} $p$ & $\\pi$\\\\ \\hline $5$ & $2\\zeta+1$\\\\\n $7$ & $\\zeta-3$\\\\\n $11$ & $2\\zeta^3-1$, \\quad $2\\zeta^2-\\zeta+1$\\\\\n $13$ & $-2\\zeta^3-\\zeta^2$, \\quad $\\zeta^3-\\zeta^2+2$\\\\\n $19$ & $-\\zeta^4-\\zeta^3+\\zeta^2+\\zeta+1$\\\\\n $31$ & $-\\zeta^7-\\zeta^3-\\zeta$,\\quad $ -\\zeta^6-\\zeta^5+\\zeta^3+\\zeta^2+\\zeta-1$\\\\\n &\n \\end{tabular}\n \n \\caption{Strictly large integers of absolute norm $p$ in $\\mathbb{Q}(\\zeta_{p-1})$} \\label{tavola} \n\\end{table}\n\n\n\n\n\\section{Another application: floor functions and types}\\label{sect:floorfunctions}\n\nLet $K$ be a number field of degree $d$ over $\\mathbb{Q}$, and let $\\mathcal{O}_K$ be its ring of integers. We fix an ideal $\\mathfrak{A}$ of $\\mathcal{O}_K$.\nThe aim of this section is to apply largeness (when possible) in order to define complete sets of representatives of $K\/\\mathfrak{A}$ (which will be called \\emph{types}) satisfying some integrality properties and having all the Archimedean embeddings bounded in a controlled way. \\\\ Types associated to a prime ideal $\\mathfrak{P}$ of a number field were introduced in \\cite{CapuanoMurruTerracini2022} with the aim of constructing a general notion of $\\mathfrak{P}$-adic continued fractions and studying their finiteness and periodicity properties.\n\nLet $\\mathcal{M}_K^0$ be a set of representatives for the non-Archimedean places of $K$. For every rational prime $p$ and every $v\\in\\mathcal{M}_K^0$ above $p$ let $K_{v}\\subseteq \\overline{\\mathbb{Q}}_p$ be the completion of $K$ w.r.t. the $v$-adic valuation and $\\mathcal{O}_v$ be its valuation ring; we put $d_v=[K_v:\\mathbb{Q}_p]$. Let $|\\cdot |_v=|N_{K_v\/\\mathbb{Q}_p}(\\cdot)|_p^{\\frac 1{d_v}}$ be the unique extension of $|\\cdot |_p$ to $K_v$. 
\nLet $\\widetilde{K}=\\prod_{v \\mid \\mathfrak{A}} K_v$ be the $\\mathfrak{A}$-adic completion of $K$, with $K$ diagonally embedded, and $\\widetilde{\\mathcal{O}}=\\prod_{v \\mid \\mathfrak{A}}\\mathcal{O}_v$.\\\\\nLet $S_0=\\{v\\in \\mathcal{M}_K^0 \\mid v \\mid \\mathfrak{A}\\}$.\n\\begin{definition}\nAn \\emph{$\\mathfrak{A}$-adic floor function} for $K$ is a function $s:\\widetilde K\\to K$ such that\n\\begin{itemize} \n\\item[a)] $\\alpha-s(\\alpha)\\in \\mathfrak{A}\\widetilde\\mathcal{O}$ for every $\\alpha\\in \\widetilde K$;\n\\item[b)] $|s(\\alpha)|_{v}\\leq 1$ for every $v\\in \\mathcal{M}_K^0\\setminus S_0$;\n\\item[c)] $s(0)=0$;\n\\item[d)] $s(\\alpha)=s(\\beta)$ if $\\alpha-\\beta\\in\\mathfrak{A}\\widetilde\\mathcal{O}$.\n\\end{itemize}\n\\end{definition}\n\n\n \nBy the Strong Approximation Theorem in number fields (see for example \\cite[Theorem 4.1]{Cassels}), $\\mathfrak{A}$-adic floor functions always exist, and there are infinitely many. \\\\\nWe define the ring of $S_0$-integers\n\\[\\mathcal{O}_{K,S_0}=\\{\\alpha\\in K\\ |\\ |\\alpha|_v\\leq 1\\hbox{ for every } v \\in \\mathcal{M}_K^0 \\setminus S_0\\}.\n\\]\nThen, we can regard an $\\mathfrak{A}$-adic floor function as a map $s: \\widetilde K\/\\mathfrak{A}\\widetilde\\mathcal{O}\\to \\mathcal{O}_{K,S_0}$ such that $s(\\mathfrak{A}\\widetilde\\mathcal{O})=0$ and which is a section of the projection map $\\widetilde K\\to \\widetilde K\/\\mathfrak{A}\\widetilde\\mathcal{O}$.\nTherefore the choice of an $\\mathfrak{A}$-adic floor function amounts to choose a set $\\mathcal{Y}$ of representatives of the cosets of $\\mathfrak{A}\\widetilde\\mathcal{O}$ in $\\widetilde K$ containing $0$ and contained in\n$\\mathcal{O}_{K,S_0}$.\\\\\nWe shall call the data $\\tau=(K,\\mathfrak{A},s)$ (or $(K,\\mathfrak{A},\\mathcal{Y})$) a \\emph{type}. \n\\begin{remark} The absolute Galois group $\\mathrm{Gal}(\\overline{\\QQ}\/\\mathbb{Q})$ acts on the set of types; indeed, if $\\tau=(K,\\mathfrak{A}, s)$ is a type, then $\\sigma\\in \\mathrm{Gal}(\\overline{\\QQ}\/\\mathbb{Q})$ induces a continuous map $\\widetilde K \\to \\widetilde{K^\\sigma}$, where $\\widetilde{K^\\sigma}$ is the completion of $K^\\sigma$ with respect to the ideal $\\mathfrak{A}^\\sigma$. Then $\\tau^\\sigma=(K^\\sigma,\\mathfrak{A}^\\sigma, s^\\sigma)$ is also a type, where $s^\\sigma=\\sigma\\circ s\\circ \\sigma^{-1}$. In particular, if $K\/\\mathbb{Q}$ is a Galois extension and $\\sigma$ belongs to the decomposition group \n$$D_\\mathfrak{A}=\\{\\sigma\\in\\mathrm{Gal}(\\overline{\\QQ}\/\\mathbb{Q})\\ |\\ \\mathfrak{A}^\\sigma=\\mathfrak{A}\\},$$\n then $\\tau^\\sigma=(K,\\mathfrak{A}, s^\\sigma)$ is again an $\\mathfrak{A}$-adic type.\n\\end{remark}\n\\subsection{Types arising from generators of $\\mathfrak{A}$} \\label{sec:special_type}\nIn the case $\\mathfrak{A}$ is principal, there is a natural way of defining an $\\mathfrak{A}$-adic floor function.\nIndeed, let $\\pi\\in\\mathfrak{A}$ be generator and let $\\mathcal{R}$ be a complete set of representatives of $\\mathcal{O}_K\/\\mathfrak{A}$ containing $0$. Then, every $\\alpha\\in \\widetilde K$ can be expressed uniquely as a Laurent series $\\alpha=\\sum_{j=-n}^\\infty c_j\\pi^j$, where $c_j\\in\\mathcal{R}$ for every $j$. It is possible to define an $\\mathfrak{A}$-adic floor function by\n$$s(\\alpha)=\\sum_{j=-n}^0c_j\\pi^j\\in K. 
$$\nWe shall denote the types $\\tau=(K,\\mathfrak{A},s)$ obtained in this way by $\\tau=(K,\\pi,\\mathcal{R})$, and we will usually call them \\emph{special types}.\n\n\\begin{example}[Browkin and Ruban types over $\\mathbb{Q}$]\nWhen $K=\\mathbb{Q}$ and $\\pi=p$ is an odd prime, two main special types have been studied in the literature:\n\\begin{itemize}\n\\item the \\emph{Browkin type} $\\tau_B=(\\mathbb{Q},p,\\mathcal{R}_B)$ where $\\mathcal{R}_B=\\{- \\frac {p-1} 2,\\ldots, \\frac {p-1} 2\\}$ (see \\cite{Browkin1978, Bedocchi1988, Bedocchi1989, Bedocchi1990, Browkin2000, CapuanoMurruTerracini2020});\n\\item the \\emph{Ruban type} $\\tau_R=(\\mathbb{Q},p,\\mathcal{R}_R)$ where $\\mathcal{R}_R=\\{0,\\ldots, p-1\\}$ (see \\cite{Ruban1970, Laohakosol1985, Wang1985, CapuanoVenezianoZannier2019}).\n\\end{itemize}\n\\end{example}\n\n\\subsection{Bounded types} We say that a type $\\tau=(K,\\mathfrak{A},s)$ is \\emph{bounded} if there exists a real number $C>0$ such that $|\\sigma(s(\\alpha))| 0$ is a regularization parameter and $x_1 \\in \\mathcal{H}.$ Rockafellar \\cite{Rockafella1976,Rockafellar1976} has proved that the proximal point algorithm converges weakly to an element of the solution set of the inclusion problem in the real Hilbert space framework. Further, he has introduced the inexact proximal point algorithm as follows:\n\\begin{equation}\\label{P4e1.3}\n\tx_{n+1}=J_{\\lambda_n T} (x_n+v_n),\\quad\\forall n \\in \\mathbb{N},\n\\end{equation}\nwhere $v_n$ is the error term in $\\mathcal{H}$. The sequence $\\{x_n\\}$ also converges weakly to an element of the solution set of the inclusion problem provided $\\sum_{n=1}^{\\infty} \\|v_n\\| < \\infty.$ Guler \\cite{Guler1991} has shown by an example that the sequence generated by the proximal point algorithm (\\ref{P4e1.2}) may converge weakly but not strongly. It has therefore become a matter of interest for the research community to modify the proximal point algorithm so as to obtain strong convergence. As one such modification, the Tikhonov method has been proposed, which generates iterates as follows:\n\\begin{equation}\n\tx_{n+1}=J_{\\lambda_n T} (x),\n\\end{equation}\nwhere $x \\in \\mathcal{H}$ and $\\lambda_n >0$ is such that $\\lambda_n \\to \\infty.$ A detailed study of the Tikhonov regularization method can be found in \\cite{Butnariu2008,Tikhonov1965,Tikhonov1977,Tikhonov1963,Xu2002}. Lehdili and Moudafi \\cite{Lehdili1996} have combined the ideas of the proximal algorithm and Tikhonov regularization to obtain an algorithm that converges strongly to a solution of the inclusion problem (\\ref{P4e1.1}). They solve the inclusion problem (\\ref{P4e1.1}) by solving the inclusion problem for the regularized approximation $T_n=T+\\mu_n Id$ of $T$, i.e.,\n\\begin{equation*}\n\t\\text{find } x\\in \\mathcal{H} \\text{ such that } 0 \\in T_n(x),\n\\end{equation*}\nwhere $\\mu_n>0$ is the regularization parameter.\nThe proximal-Tikhonov algorithm is given by\n$$x_{n+1}=J_{\\lambda_{n}T_n}(x_{n}).$$\nIt is the Tikhonov regularization term $\\mu_n Id$ that forces the strong convergence of the algorithm. In the absence of the Tikhonov regularization term, the proximal-Tikhonov algorithm reduces to the proximal point algorithm, which in most cases converges only weakly. Strong convergence can also be obtained by some other techniques; a few of them can be found in \\cite{Bauschke2001,Haugazeau1968}.\n\nEvaluating the resolvent is sometimes as hard as solving the original problem. One way of overcoming this difficulty is to split the operator as $T=A+B$ into two operators whose resolvents are easy to compute. 
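As an illustration of resolvents that are cheap to evaluate, here is a minimal sketch (in Python with numpy; the function names are ours and the two examples are standard ones, not taken from this paper): the resolvent of $\\lambda\\partial\\|\\cdot\\|_1$ is componentwise soft-thresholding, and the resolvent of the normal cone of a box is the metric projection onto that box.\n\\begin{verbatim}\nimport numpy as np\n\n# Resolvent of lambda*(subdifferential of the l1-norm): soft-thresholding.\ndef prox_l1(x, lam):\n    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)\n\n# Resolvent of the normal cone of the box [lo, hi]^n: projection onto the box.\ndef proj_box(x, lo, hi):\n    return np.clip(x, lo, hi)\n\\end{verbatim}\n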
For $T=A+B,$ the monotone inclusion problem (\\ref{P4e1.1}) becomes\n\\begin{equation}\\label{P4e1.4}\n\t\\text{find } x \\in \\mathcal{H} \\text{ such that }0 \\in (A+B)x,\n\\end{equation}\nwhere $A : \\mathcal{H} \\to 2^\\mathcal{H}$ is a maximally monotone operator and $B$ is an operator.\nThe forward-backward splitting algorithm and the Douglas-Rachford algorithm have been proposed to solve Problem (\\ref{P4e1.4}). The forward-backward splitting method has been proposed by Lions and Mercier \\cite{Lions1979} and Passty \\cite{Passty1979}; it is given by\n\\begin{equation}\n\tx_{n+1}=(Id+\\lambda_n A)^{-1}(Id-\\lambda_n B)x_n,\n\\end{equation}\nwhere $\\lambda_n > 0$ and $B:\\mathcal{H} \\to \\mathcal{H}$ is a cocoercive operator. Mercier \\cite{Mercier1980} and Gabay \\cite{Gabay1983} have studied the convergence behavior of the forward-backward method when $A^{-1}$ is $\\gamma$-strongly monotone with $\\gamma >0.$ They have proved that the forward-backward algorithm converges weakly to a point in the solution set provided $\\lambda_n$ is constant with $\\lambda_n < 2 \\gamma$. In addition, if $A$ is strongly monotone, then $\\{x_n\\}$ converges strongly to the unique solution of Problem (\\ref{P4e1.4}). Chen and Rockafellar \\cite{Chen1997} have also assumed the strong monotonicity of $A$ to prove the strong convergence of the forward-backward method, under conditions depending on the Lipschitz constant and the modulus of strong monotonicity. The forward-backward method has been studied extensively; a few of these works can be found in \\cite{Chen1994,Chen1997,Mouallif1991,Moudafi1997} and the references therein. \n\nThe Douglas-Rachford method has been proposed to solve Problem (\\ref{P4e1.4}) when both $A$ and $B$ are set-valued. It was originally proposed by Douglas and Rachford \\cite{Douglas1956} to solve linear equations arising in heat-conduction problems. Lions and Mercier \\cite{Lions1979} have extended the Douglas-Rachford algorithm to monotone operators. The Douglas-Rachford algorithm is given as follows$\\colon$\n\\begin{equation}\n\tx_{n+1}=R_B R_A x_n, \\quad\\forall n \\in \\mathbb{N},\n\\end{equation}\nwhere $R_B$ and $R_A$ are the reflected resolvents of the operators $B$ and $A$, respectively. Lions and Mercier \\cite{Lions1979} have proved that the Douglas-Rachford algorithm converges weakly to a fixed point of the operator $R_B R_A$, from which a solution of Problem (\\ref{P4e1.4}) can be recovered. Svaiter \\cite{Svaiter2011} has supported the results of Lions and Mercier by proving the weak convergence of the shadow sequence to a solution. Further analysis of the Douglas-Rachford algorithm can be found in \\cite{Artacho2013,Davis2015,Luke2020,Phan2016}.\n\nLet $\\mathcal{C}$ be a nonempty closed convex subset of $\\mathcal{H}$ and $\\mathcal{S}: \\mathcal{C} \\to \\mathcal{C}$ be a nonexpansive operator. There are a number of iterative methods for finding fixed points of nonexpansive operators. 
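Before recalling these fixed point methods, we give a minimal numerical sketch of the two splitting iterations described above (in Python; the callables J_A, J_B, B, the step size lam and the iteration count are placeholders for problem-specific data and are not taken from this paper; J_A and J_B stand for the resolvents of $\\lambda A$ and $\\lambda B$ for a fixed $\\lambda>0$).\n\\begin{verbatim}\ndef forward_backward(x, J_A, B, lam, n_iter=100):\n    # Forward-backward: x_{k+1} = J_{lam A}(x_k - lam * B(x_k)).\n    for _ in range(n_iter):\n        x = J_A(x - lam * B(x))\n    return x\n\ndef douglas_rachford(x, J_A, J_B, n_iter=100):\n    # Douglas-Rachford: x_{k+1} = R_B(R_A(x_k)), with R = 2*J - Id.\n    # A zero of A + B is then recovered by applying J_A to the limit point.\n    for _ in range(n_iter):\n        y = 2 * J_A(x) - x\n        x = 2 * J_B(y) - y\n    return J_A(x)\n\\end{verbatim}\n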
We recall some well known fixed point methods, which are given below$\\colon$\n\\begin{enumerate}\n\n\n\n\n\t\\item[$\\bullet$] Mann iteration method \\cite{Mann1953}:\n\t\\begin{equation*}\n\t\tx_{n+1}=(1-\\beta_n) x_n+ \\beta_n \\mathcal{S}(x_n),\\quad\\forall n \\in \\mathbb{N};\n\t\\end{equation*}\n\t\\item[$\\bullet$] S-iteration method \\cite{Agarwal2007}:\n\t\\begin{equation*}\n\t\tx_{n+1}=(1-\\alpha_n)\\mathcal{S}(x_n) +\\alpha_n \\mathcal{S}[(1-\\beta_n) x_n + \\beta_n \\mathcal{S} x_n],\\quad\\forall n \\in \\mathbb{N};\n\t\\end{equation*}\n\t\\item[$\\bullet$] Normal S-iteration method \\cite{Sahu2011}:\n\t\\begin{equation*}\n\t\tx_{n+1}=\\mathcal{S}[(1-\\beta_n)x_n+\\beta_n \\mathcal{S} x_n],\\quad\\forall n \\in \\mathbb{N};\n\t\\end{equation*}\n\\end{enumerate}\nwhere $\\alpha_n, \\beta_n \\in (0,1).$ The importance of these algorithms are not limited to solve fixed point problems, but these algorithms are also useful for solving inclusion problems of sum of a set-valued maximally monotone operator and a single-valued cocoercive operator, and inclusion problems of sum of two set-valued maximally monotone operators. The S-iteration methodology has been applied for solving\n\tvarious nonlinear problems, inclusion problems, optimization problems and\n\timage recovery problems. Recently, it has been demonstrated by Avinash et al. \\cite{Dixit2019} that the inertial normal S-iteration method has better performance compared to the inertial Mann iteration method. The S-iteration method and normal S-iteration method are also useful for finding common fixed points of nonexpansive operators. Since last few years, these properties of normal S-iteration make it popular among research community to find fixed point. Several research articles related to S-iteration and normal S-iteration can be found in \\cite{Chang2013,Chang2014,Cholamjiak2015,Sahu2020,Sahuajit2020}. The weak convergence of the fixed point algorithms has reduced its applicability in infinite dimensional spaces. To achieve the strong convergence of algorithms one assumes stronger assumptions like strong monotonicity and strong convexity, which is difficult to achieve in many applications. This situation leaves a question to research community: can we find the strongly convergent algorithms without assuming these strong assumptions? The answer to this question is replied positively by Bot et al. \\cite{Bot2019}. They have modified the Mann algorithm as follows:\n\\begin{equation}\\label{P4e1.5}\n\tx_{n+1}=e_n x_n + \\theta_n(\\mathcal{S}(e_n x_n)-e_n x_n),\n\\end{equation}\nwhere $e_n, \\theta_n$ are positive real numbers. The strong convergence of algorithm (\\ref{P4e1.5}) for nonexpansive operator $\\mathcal{S}$ has been studied by Bot et al. \\cite{Bot2019} when set of fixed points of $\\mathcal{S}$ is nonempty and parameters $\\theta_n$ and $e_n$ satisfy the following:\n\\begin{enumerate}\n\t\\item [(i)] $0< e_{n} < 1$ for all $n \\in \\mathbb{N},$ $\\lim\\limits_{n \\to \\infty} e_{n} = 1$, $\\sum_{n=1}^{\\infty}(1-e_{n})= \\infty$ and $\\sum_{n=1}^{\\infty}|e_{n}-e_{n-1}|< \\infty;$\n\t\\item [(ii)] $0<\\theta_{n}\\leq 1$ for all $n \\in \\mathbb{N},$ $0<\\liminf_{n\\to \\infty}\\theta_{n},$ $\\sum_{n=1}^{\\infty} |\\theta_{n}-\\theta_{n-1}| < \\infty.$\n\\end{enumerate}\n\n We consider the following more general problem:\n\\begin{Problem}\\label{Prob1}\n\tConsider $\\mathcal{T},\\mathcal{S}:\\mathcal{H} \\to \\mathcal{H}$ are nonexpansive operators. 
Find an element $x \\in \\mathcal{H}$ such that $x \\in \\operatorname{Fix} (\\mathcal{T})\\cap \\operatorname{Fix} (\\mathcal{S})$.\n\\end{Problem}\n\n\t\\begin{Remark} The algorithm (\\ref{P4e1.5}) proposed by Bot et al. \\cite{Bot2019} can not apply to solve inclusion problem (\\ref{P4e1.1}).\n\\end{Remark}\n\n\nIn this paper, we introduce the normal S-iteration method based fixed point algorithm to find common fixed point of nonexpansive operators $\\mathcal{T}, \\mathcal{S}:\\mathcal{H} \\to \\mathcal{H}$, which converges strongly to solutions of common fixed point problem of operators $\\mathcal{S}$ and $\\mathcal{T}$. Based on the proposed fixed point algorithm, we develop a forward-backward algorithm and a Douglas-Rachford algorithm containing Tikhonov regularization term to solve the monotone inclusion problems.\nIn many cases, monotone inclusion problems are very complex, they contain mixture of linear and parallel sum monotone operators. Recently, many researchers have proposed primal-dual algorithms to precisely solve the considered complex monotone inclusion system \\cite{Attouch1996,Bot2013,Brice2011,Combettes2012,BC2013}. We have proposed a forward-backward type primal-dual algorithm and a Doughlas-Rachford type primal-dual algorithm having Tikhonov regularization term to find the common solution of the complexly structured monotone inclusion problems. The proposed algorithms have a special property that resolvents of all the operators are evaluated separately.\\\\\n\nThe paper is organized as follows: Next section recalls some important definitions and results in nonlinear analysis. In Section \\ref{sec3}, we propose a normal S-iteration based Tikhonov regularized fixed point algorithm and study its convergence behavior. In Section \\ref{sec4}, we propose a forward-backward-type algorithm and a forward-backward-type primal-dual algorithm to solve inclusion problem and complexly structured monotone inclusion problem, respectively. In Section \\ref{sec5}, we propose Douglas-Rachford-type algorithms to solve monotone inclusion problems and complexly structured monotone inclusion problems of set-valued operators. In the last, we perform a numerical experiment to show the importance of proposed algorithms in solving image deblurring problems.\n\n\n\n\n\n\n\n\n\\section{Preliminaries}\nThis section devotes some important definitions and results from nonlinear analysis and operator theory.\nLet $\\mathbb{N}$ and $\\mathbb{R}$ denote set of natural numbers and set of real numbers, respectively and `$Id$' denotes identity operator. Consider the operator $T:\\mathcal{H}\\to 2^\\mathcal{H}.$ Let $Gr(T)$ be denote the graph of $T$, $\\operatorname{Zer}(T)$ denote set of zeros of operator $T$ and $\\operatorname{Fix} (T)$ denote set of fixed points of $T$. The symbol $m$ is used to denote a strictly positive integer throughout the paper. The set of proper convex lower semicontinuous functions from $\\mathcal{H}$ to $[-\\infty,+\\infty]$ is denoted by $\\Gamma(\\mathcal{H}).$\nLet $A:\\mathcal{H} \\to 2^\\mathcal{H}$ be an operator. Domain of $A$ is $\\operatorname{dom}$ $(A)=\\{x\\in \\mathcal{H}: Ax\\neq \\emptyset\\}$. Range of $A$ is denoted by ran $(A)$ = $\\cup_{x\\in \\mathcal{H}}$ $Ax$. 
$A$ is said to be monotone if\n$$\\langle x-y, u-v \\rangle \\geq 0, \\ \\forall (x,u), (y,v) \\in Gr (A).$$\n$A$ is said to be maximally monotone if there exists no monotone operator $B:\\mathcal{H} \\to 2^\\mathcal{H}$ such that $Gr (B)$ properly contains $Gr (A).$ $A$ is strongly monotone with constant $\\beta \\in (0,\\infty)$ if\n\\begin{equation*}\n\t\\langle x-y,u-v \\rangle \\leq \\beta \\|x-y\\|^2 \\ \\forall (x,u), (y,v) \\in Gr(A).\n\\end{equation*}\n The resolvent of $A$ is defined by $J_A= (Id+A)^{-1} $ and the reflected resolvent of $A$ is $R_A= 2J_A -Id.$\nConsider $f: \\mathcal{H} \\to [-\\infty,\\infty]$. The conjugate of $f$ is defined by $f^*:\\mathcal{H} \\to [-\\infty,\\infty],$ $$u \\mapsto \\sup_{x \\in \\mathcal{H}} \\left( \\langle x,u \\rangle -f(x)\\right). $$\nLet $f:\\mathcal{H} \\to [-\\infty,\\infty]$ be a proper function. The subdifferential of $f$ is $\\partial f:\\mathcal{H} \\to 2^\\mathcal{H}$ is defined by\n$$x \\mapsto \\{u \\in \\mathcal{H}| f(y) \\geq f(x)+\\langle y-x,u \\rangle \\ \\forall y \\in \\mathcal{H}\\}.$$\nIf $f \\in \\Gamma(\\mathcal{H}),$ then $\\partial f$ is maximally monotone. The resolvent of subdifferential of $f$ is $prox_f$, where $prox_f:\\mathcal{H}\\to \\mathcal{H}$ defined by\n\\begin{equation*}\n\tprox_f(x)=argmin_{y\\in \\mathcal{H}}\\left\\lbrace f(y)+\\frac{1}{2}\\|y-x\\|^2\\right\\rbrace .\n\\end{equation*}\n\n\\begin{Definition}\n\tLet $\\mathcal{C}$ be a nonempty subset of $\\mathcal{H}$. Then:\n\t\\item [(i)] interior of $\\mathcal{C}$ is\n\t\\begin{equation*}\n\t\t\\text{ int } \\mathcal{C}=\\{x\\in \\mathcal{C}:(\\exists \\rho > 0) B(0;\\rho) \\subset \\mathcal{C}-x\\};\n\t\\end{equation*}\n\t\\item [(ii)]strong relative interior of $\\mathcal{C}$ is\n\t\\begin{equation*}\n\t\t\\text{sri } \\mathcal{C} =\\{x\\in \\mathcal{C}: cone(\\mathcal{C}-x)=\\overline{span}(\\mathcal{C}-x) \\};\n\t\\end{equation*}\n\t\\item [(iii)]strong quasi-relative interior of $\\mathcal{C}$ is\n\t\\begin{equation*}\n\t\t\\text{sqri } \\mathcal{C} = \\{x \\in \\mathcal{C}: \\bigcup_{\\rho > 0}\\rho(\\mathcal{C}-x) \\text{is a closed linear subspace of space } \\mathcal{H}\\}.\n\t\\end{equation*}\n\\end{Definition}\nIn the case $\\mathcal{H}$ is finite dimensional, sqri and sri are equivalent.\n\n\\begin{Definition}\n\tLet $C$ be a nonempty subset of $\\mathcal{H},$ and $T:C\\to \\mathcal{H}$ be a nonexpansive operator. 
$T$ is said to be\n\t\\begin{enumerate}\n\t\t\\item[(i)] nonexpansive if\n\t\t\\begin{equation*}\n\t\t\t\\|Tx-Ty\\|\\leq \\|x-y\\| \\ \\forall x,y \\in C;\n\t\t\\end{equation*}\n\t\t\\item[(ii)] firmly nonexpansive if\n\t\t\\begin{equation*}\n\t\t\t\\|Tx-Ty\\|^2 +\\|(Id-T)x-(Id-T)y\\|^2 \\leq \\|x-y\\|^2 \\ \\forall x,y \\in C;\n\t\t\\end{equation*}\n\t\t\\item[(iii)] $\\beta$-cocoercive ($\\beta>0$) if\n\t\t\\begin{equation*}\n\t\t\t\\langle x-y,Tx-Ty \\rangle \\leq \\beta \\|Tx-Ty\\|^2 \\ \\forall x, y \\in C\n\t\t\\end{equation*}\n\t\t\\item[(iv)] $\\alpha$-averaged for $\\alpha \\in (0,1)$ if there exists a nonexpansive operator $R:C\\to \\mathcal{H}$ such that $T=(1-\\alpha)Id +\\alpha R$.\n\t\\end{enumerate}\n\\end{Definition}\nAn operator $T:\\mathcal{H}\\to 2^\\mathcal{H}$ is strongly monotone with $\\beta \\in (0, \\infty)$ implies $T^{-1}:\\mathcal{H} \\to \\mathcal{H}$ is $\\beta$-cocoerceive.\n\\begin{Definition}\\cite{Bauschke2011}\n\tLet $\\mathcal{C}$ be a nonempty subset of $\\mathcal{H}.$ Then:\n\t\n\t\\item [(i)] The indicator function $i_\\mathcal{C}:\\mathcal{H} \\to[-\\infty,+\\infty]$ is defined by\n\t\\begin{eqnarray}{\n\t\t\ti_\\mathcal{C}(x)\t=\\left\\{\n\t\t\t\\begin{array}{lc@{}c@{}r}\n\t\t\t\t0, \\ \\ if \\ x \\in \\mathcal{C}\\\\\n\t\t\t\t\\infty \\ \\ otherwise\n\t\t\t\\end{array}\\right.}.\n\t\\end{eqnarray}\n\t\\item [(ii)] The projection of a point $x \\in \\mathcal{H}$ on $\\mathcal{C}$ is defined by $\\operatorname{proj}_C(x) =\\left\\lbrace u \\in \\mathcal{C}:u=argmin_{z \\in \\mathcal{C} }\\|x-z\\|\\right\\rbrace.$\n\t\n\t\\item [(iii)] Suppose $\\mathcal{C}$ is convex, then normal cone to $\\mathcal{C}$ at $x$ is defined by\n\t\\begin{eqnarray}{\n\t\t\tN_\\mathcal{C}(x)\t=\\left\\{\n\t\t\t\\begin{array}{lc@{}c@{}r}\n\t\t\t\tu \\in \\mathcal{H}:\\sup \\langle y-x,u \\rangle \\leq 0 ~\\forall y \\in \\mathcal{C}, \\ if \\ x \\in \\mathcal{C}\\\\\n\t\t\t\t\\emptyset, \\ \\ \\ \\ otherwise\n\t\t\t\\end{array}\\right.}.\n\t\\end{eqnarray}\n\\end{Definition}\n\n\\begin{Definition}\\cite[Proposition 4.32]{Bauschke2011}\n\tThe parallel sum of two operators $T_1, T_2:\\mathcal{H}\\to 2^\\mathcal{H}$ is $T_1 \\Box T_2 :\\mathcal{H} \\to 2^\\mathcal{H}$ defined by $T_1 \\Box T_2 = (T_1 ^{-1}+T_2 ^{-1})^{-1}$.\n\\end{Definition}\nThe subdifferential of parallel sum of operators $T_1$ and $T_2$ is $\\partial (T_1 \\square T_2)=\\partial T_1 \\square \\partial T_2$.\n\\begin{Remark}\\label{P3R2.1}\n\tIf $T_1 $ and $T_2 $ are monotone then the set of zeros of their sum $\\operatorname{Zer}(T_1+T_2 )=J_{\\gamma T_2} (\\operatorname{Fix}(R_{\\gamma T_1 } R_{\\gamma T_2 }))$ $\\forall \\gamma >0 $.\n\\end{Remark}\n\n\\begin{Proposition}\\cite{Bauschke2011}\n\tConsider $T_1,T_2 : \\mathcal{H} \\to \\mathcal{H}$ be $\\alpha_1,\\alpha_2$-averaged operates, respectively. Then the averaged operator $T_1 \\circ T_2$ is $\\alpha= \\frac{\\alpha_1+\\alpha_2-2\\alpha_1\\alpha_2}{1-\\alpha_1\\alpha_2}$-averaged.\n\\end{Proposition}\n\n\\begin{Lemma}\\emph{\\cite[Corollary 4.18]{Bauschke2011}}\\label{P4L2.1}\n\tLet $T:\\mathcal{H} \\to \\mathcal{H}$ be a nonexpansive mapping. 
Let $\\{u_n\\}$ be a sequence in $\\mathcal{H}$ and $u\\in \\mathcal{H}$ such that $u_n \\rightharpoonup u$ and $u_n-Tu_n \\to 0$ as $n \\to \\infty$.\n\tThen $u \\in \\operatorname{Fix}(T)$.\n\\end{Lemma}\n\\begin{Lemma}\\cite[Lemma 2.5]{Xu2002}\\label{P4L2.2}\n\tLet $\\{a_n\\}$ be a sequence of nonnegative real numbers satisfying the inequality:\n\t$$a_{n+1}\\leq (1-\\theta_{n})a_n+\\theta_n b_n+\\epsilon_n \\ \\ \\forall n \\geq 0,$$\n\t where\n\t\\begin{enumerate}\n\t\t\\item[(i)] $ 0\\leq \\theta_n \\leq 1 $ for all $n \\geq 0$ and $\\sum_{n\\geq 0} \\theta_n = \\infty;$\n\t\t\\item[(ii)] $\\limsup_{n\\to \\infty} b_n \\leq 0$;\n\t\t\\item[(iii)] $\\epsilon_n \\geq 0 \\text{ for all } n \\geq 0 \\text{ and } \\sum_{n\\geq 0} \\epsilon_n < \\infty.$\n\t\\end{enumerate}\n\tThen the sequence $\\{a_n\\}$ converges to $0$.\n\\end{Lemma}\n\n\n\n\n\n\n\n\n\\section{Tikhonov Regularized Strongly Convergent Fixed Point Algorithm}\\label{sec3}\nThis section devotes to investigate a computational theory for finding common fixed points of nonexpansive operators. We introduce a common fixed point algorithm such that sequence generated by the algorithm strongly converges to the set of common fixed points of mappings.\n\\begin{Algorithm}\\label{P4A3.1}\n\tLet $\\mathcal{S},\\mathcal{T} :\\mathcal{H} \\to \\mathcal{H}$ be nonexpansive mappings. Select $\\{e_{n}\\}$, $\\{\\theta_{n}\\} \\subset (0,1)$ and compute the $(n+1)^{th}$ iteration as follows:\n\t\\begin{equation}\\label{P4e3.1}\n\t\ty_{n+1}= \\mathcal{S}[(1-\\theta_n)e_{n}y_n+\\theta_n \\mathcal{T}(e_{n}y_n)] \\ \\ \\ \\ \\textrm{for all} \\ n\\in\\mathbb{N}.\n\t\\end{equation}\n\\end{Algorithm}\n\n\nWe now study the convergence behavior of Algorithm \\ref{P4A3.1} for finding the common fixed point of $\\mathcal{S}$ and $\\mathcal{T}$.\n\\begin{Theorem}\\label{P4T3.1}\n\tLet $\\mathcal{S},\\mathcal{T} :\\mathcal{H} \\to \\mathcal{H}$ be nonexpansive mappings such that $\\Omega:=\\operatorname{Fix} (\\mathcal{T})\\cap \\operatorname{Fix} (\\mathcal{S})\\neq \\emptyset$. Let $\\{y_n\\}$ be sequence in $\\mathcal{H}$ defined by Algorithm \\ref{P4A3.1}, where $\\{\\theta_{n}\\}$ and $\\{e_{n}\\}$ are real sequences satisfy the following conditions:\n\t\\begin{enumerate}\n\t\t\\item [\\textbf{(i)}]\\label{P4C1}\n\t\t$0< e_{n} < 1$ for all $n \\in \\mathbb{N},$ $\\lim\\limits_{n \\to \\infty} e_{n} = 1$, $\\sum_{n=1}^{\\infty}(1-e_{n})= \\infty$ and $\\sum_{n=1}^{\\infty}|e_{n}-e_{n-1}|< \\infty;$\n\t\t\\item [\\textbf{(ii)}]\\label{P4C2}\n\t\t$0<\\underline{\\theta}\\le \\theta_{n}\\leq\\overline{\\theta}< 1$ for all $n \\in \\mathbb{N},$ and $\\sum_{n=1}^{\\infty} |\\theta_{n}-\\theta_{n-1}| < \\infty.$\n\t\\end{enumerate}\n\tThen the sequence $\\{y_n\\}$ converges strongly to $\\operatorname{proj}_{\\Omega}(0).$\n\\end{Theorem}\n\\begin{proof}\n\tIn order to prove the convergence of the sequence $\\{y_n\\}$, we proceed with following steps:\n\t\\begin{enumerate}\n\t\t\\item[Step 1.]\\label{C3.1} Sequence $\\{y_n\\}$ is bounded.\\\\\n\t\t\n\t\tLet $y\\in \\Omega$. 
Since $\\mathcal{S}$ and $\\mathcal{T}$ are nonexpansive, we have following\n\t\t\\begin{eqnarray}\\label{P4e1}\n\t\t\t\\|y_{n+1}-y\\|&=& \\|\\mathcal{S}[(1-\\theta_{n})e_{n}y_n + \\theta_{n} \\mathcal{T}(e_{n}y_n)]-y\\| \\nonumber\\\\ [6pt]\n\t\t\t&\\leq &\\|(1-\\theta_{n})e_{n}y_n + \\theta_{n} \\mathcal{T}(e_{n}y_n)-y\\| \\nonumber\\\\[6pt]\n\t\t\t&\\leq &(1-\\theta_{n})\\|e_{n}y_n -y\\|+ \\theta_{n}\\| \\mathcal{T}(e_{n}y_n)-y\\| \\nonumber\\\\[6pt]\n\t\t\t&\\leq & \\|e_{n}y_n-y\\| \\\\[6pt]\n\t\t\t&=&\\|e_{n}(y_n-y)-(1-e_{n})y\\|\\nonumber\\\\[6pt]\n\t\t\t&\\leq & e_{n}\\|(y_n-y)\\|+(1-e_{n})\\|y\\|\\nonumber\\\\[6pt]\n\t\t\t&\\leq & \\text{max}\\{\\|y_0-y\\|,\\|y\\|\\}\\nonumber.\n\t\t\\end{eqnarray}\n\t\tThus, $\\{y_n\\}$ is bounded.\\\\\n\t\t\n\t\t\\item[Step 2.]\\label{C3.2} $\\|y_{n+1}-y_n\\| \\to 0$ as $n \\to \\infty.$\\\\\n\t\t\n\t\tUsing nonexpansivity of $\\mathcal{S} \\text{ and } \\mathcal{T}$, we have\n\t\t\\begin{eqnarray}\n\t\t\t\\|y_{n+1}-y_n\\|&=&\\|\\mathcal{S}[(1-\\theta_{n})e_{n}y_n + \\theta_{n} \\mathcal{T}(e_{n}y_{n})]-\\mathcal{S}[(1-\\theta_{n-1})e_{n-1}y_{n-1} + \\theta_{n-1} \\mathcal{T}(e_{n-1}y_{n-1})]\\| \\nonumber\\\\[6pt]\n\t\t\t&\\leq&\\|(1-\\theta_{n})e_{n}y_n + \\theta_{n} \\mathcal{T}(e_{n}y_{n})-(1-\\theta_{n-1})e_{n-1}y_{n-1} - \\theta_{n-1} \\mathcal{T}(e_{n-1}y_{n-1})\\| \\nonumber\\\\[6pt]\n\t\t\t&=& \\|(1-\\theta_{n})e_{n}y_n-(1-\\theta_{n-1})e_{n-1}y_{n-1}+\\theta_{n} \\mathcal{T}(e_{n}y_{n}) - \\theta_{n-1} \\mathcal{T}(e_{n-1}y_{n-1})\\| \\nonumber\\\\[6pt]\n\t\t\t&\\leq&\\|(1-\\theta_{n})(e_{n}y_n-e_{n-1}y_{n-1})+(\\theta_{n-1}-\\theta_{n})e_{n-1}y_{n-1})\\|\\nonumber\\\\\n\t\t\t&+&\\|\\theta_{n} (\\mathcal{T}(e_{n}y_{n})- \\mathcal{T}(e_{n-1}y_{n-1}))+(\\theta_{n}-\\theta_{n-1})\\mathcal{T}(e_{n-1}y_{n-1})\\| \\nonumber\\\\[6pt]\n\t\t\t&\\leq&\\|e_{n}y_n-e_{n-1}y_{n-1}\\|+|\\theta_{n}-\\theta_{n-1}|\\mathcal{C}_1 \\nonumber\\\\[6pt]\n\t\t\t&=& \\|e_{n}(y_n-y_{n-1})+(e_{n}-e_{n-1})y_{n-1}\\|+|\\theta_{n}-\\theta_{n-1}|\\mathcal{C}_1\\nonumber\\\\[6pt]\n\t\t\t&\\leq& e_{n}\\|y_n-y_{n-1}\\|+|e_{n}-e_{n-1}| \\mathcal{C}_2 +|\\theta_{n}-\\theta_{n-1}|\\mathcal{C}_1, \\nonumber\n\t\t\\end{eqnarray}\n\t\tfor some $\\mathcal{C}_1,$ $\\mathcal{C}_2 >0$. By applying Lemma \\ref{P4L2.2} with $a_n=\\|y_n-y_{n-1}\\|, b_n=0$, $ \\epsilon_n=|e_{n}-e_{n-1}| \\mathcal{C}_2 +|\\theta_{n}-\\theta_{n-1}|\\mathcal{C}_1 \\text{ and } \\theta_{n}=1-e_{n}, \\forall n \\in \\mathbb{N}$, we obtain that $\\|y_{n+1}-y_n\\| \\to 0$.\\\\\n\t\t\n\t\t\\item[Step 3.]\\label{C3.3} $\\|y_n-\\mathcal{T}y_n\\|$ and $\\|y_n-\\mathcal{S}y_n\\| \\to 0$ as $n \\to \\infty$.\\\\\n\t\t\n\t\tLet $y\\in \\Omega$. 
Note\\begin{eqnarray}\n\t\t\t\\|y_{n+1}-y\\|^2&=& \\|\\mathcal{S}[(1-\\theta_{n})e_{n}y_n + \\theta_{n} \\mathcal{T}(e_{n}y_n)]-y\\|^2 \\nonumber\\\\ [6pt]\n\t\t\t&\\leq &\\|(1-\\theta_{n})e_{n}y_n + \\theta_{n} \\mathcal{T}(e_{n}y_n)-y\\|^2 \\nonumber\\\\[6pt]\n\t\t\t&=&(1-\\theta_{n})\\|e_{n}y_n-y\\|^2 + \\theta_{n} \\|\\mathcal{T}(e_{n}y_n)-y\\|^2 -\\theta_{n}(1-\\theta_{n}) \\|e_{n}y_n-\\mathcal{T}(e_{n}y_n)\\|^2\\nonumber\\\\ [6pt]\n\t\t\t&\\leq&(1-\\theta_{n})\\|e_{n}y_n-y\\|^2 + \\theta_{n} \\|e_{n}y_n-y\\|^2 -\\theta_{n}(1-\\theta_{n}) \\|e_{n}y_n-\\mathcal{T}(e_{n}y_n)\\|^2\\nonumber\\\\[6pt]\n\t\t\t&=& \\|e_{n}y_n-y\\|^2 - \\theta_{n}(1-\\theta_{n})\\|e_{n}y_n - \\mathcal{T}(e_{n}y_n)\\|^2,\n\t\t\\end{eqnarray}\n\t\twhich implies that\n\t\t\\begin{eqnarray}\n\t\t\t\\theta_{n}(1-\\theta_{n})\\|e_{n}y_n - \\mathcal{T}(e_{n}y_n)\\|^2 &\\leq& \\|e_{n}y_n-y\\|^2 -\\|y_{n+1}-y\\|^2.\\nonumber\\\\\n\t\t\t&\\leq& (\\|e_{n}y_n-y\\|+\\|y_{n+1}-y\\|)\\|e_{n}y_n-y_{n+1}\\| \\nonumber\\\\\n\t\t\t&\\leq& (\\|e_{n}y_n-y\\|+\\|y_{n+1}-y\\|)\\|e_{n}y_n-e_{n}y_{n+1}+e_{n}y_{n+1}-y_{n+1}\\| \\nonumber\\\\\n\t\t\t&\\leq& (\\|e_{n}y_n-y\\|+\\|y_{n+1}-y\\|)(e_{n}\\|y_n-y_{n+1}\\| +(1-e_{n})\\|y_{n+1}\\|).\\nonumber\n\t\t\\end{eqnarray}\n\t\tNote \n\t\t\\begin{eqnarray*}\n\t\t\t\\|\\beta_{n}y_n-y\\|^2 &=& \\|\\beta_{n}y_n+\\beta_{n}y-\\beta_{n}y-y\\|^2\\\\\n\t\t\t&=&\\|\\beta_{n}(y_n-y)-(1-\\beta_{n})y\\|^2\\\\\n\t\t\t&=&(1-\\beta_{n})\\|y\\|^2 +\\beta_{n}(1-\\beta_{n})\\|y_n\\|^2-\\beta_{n}\\|y_n-y\\|^2\\\\\n\t\t\t&\\leq& (1-\\beta_{n})\\|y\\|^2 +\\beta_{n}(1-\\beta_{n})\\|y_n\\|^2.\n\t\t\\end{eqnarray*}\n\t\tThus $\\|\\beta_{n}y_n-y\\|^2 \\to 0 \\text{ as }n \\to \\infty.$\n\t\tSince $\\lim\\limits_{n \\to \\infty} e_{n} = 1$ by the condition (\\textbf{i}), $0<\\underline{\\theta}\\le \\theta_{n}\\leq\\overline{\\theta}< 1$ for all $n \\in \\mathbb{N}$ by the condition (\\textbf{ii}) and $\\|y_{n+1}-y\\|\\to 0$ by step \\ref{C3.2}, we have\n\t\t$\t\\|e_{n}y_n - \\mathcal{T}(e_{n}y_n)\\| \\to 0$ as $n \\to \\infty.$\n\t\t\n\t\t\n\t\t\n\t\tNow \t\\begin{eqnarray}\n\t\t\t\\|y_n-\\mathcal{T}y_n\\| &=&\\|y_n-e_{n}y_n+e_{n}y_n-\\mathcal{T}(e_{n}y_n)+\\mathcal{T}(e_{n}y_n)-\\mathcal{T}y_n\\| \\nonumber\\\\\n\t\t\t&\\leq& \\|y_n-e_{n}y_n\\|+\\|e_{n}y_n-\\mathcal{T}(e_{n}y_n)\\|+\\|\\mathcal{T}(e_{n}y_n)-\\mathcal{T}y_n\\| \\nonumber\\\\\n\t\t\t&\\leq& 2(1-e_{n}) \\|y_n\\|+\\|e_{n}y_n-\\mathcal{T}(e_{n}y_n)\\| \\to 0 \\text{ as } n \\to \\infty. \\nonumber\n\t\t\\end{eqnarray}\n\t\tObserve that\n\t\t\\begin{eqnarray}\n\t\t\t\\|y_n-\\mathcal{S}y_n\\| &\\leq& \\|y_n-y_{n+1}\\|+\\|y_{n+1}-\\mathcal{S}y_n\\| \\nonumber\\\\\n\t\t\t&=& \\|y_n-y_{n+1}\\|+\\|\\mathcal{S}[(1-\\theta_n)e_{n}y_n + \\theta_n \\mathcal{T}(e_{n}y_n)]-\\mathcal{S}y_n\\| \\nonumber\\\\\n\t\t\t&\\leq&\\|y_n-y_{n+1}\\|+\\|(1-\\theta_n)e_{n}y_n + \\theta_n \\mathcal{T}(e_{n}y_n)-y_n\\| \\nonumber\\\\\n\t\t\t&\\leq&\\|y_n-y_{n+1}\\|+(1-\\theta_n)\\|e_{n}y_n-y_n\\| + \\theta_n \\|\\mathcal{T}(e_{n}y_n)-y_n\\| \\nonumber\\\\\n\t\t\t&\\leq&\\|y_n-y_{n+1}\\|+(1-\\theta_n)(1-e_{n})\\|y_n\\|+\\theta_n \\|\\mathcal{T}(e_{n}y_n)-\\mathcal{T}y_n+\\mathcal{T}y_n-y_n\\| \\nonumber\\\\\n\t\t\t&\\leq&\\|y_n-y_{n+1}\\|+(1-\\theta_n)(1-e_{n})\\|y_n\\|+\\theta_n\\|e_{n}y_n-y_n\\|+\\theta_{n}\\|\\mathcal{T}y_n-y_n\\| \\nonumber\\\\\n\t\t\t&=&\\|y_n-y_{n+1}\\|+(1-e_{n})\\|y_n\\|+\\theta_n \\|\\mathcal{T}y_n-y_n\\| \\to 0 \\text{ as } n \\to \\infty. 
\\nonumber\n\t\t\\end{eqnarray}\n\t\tSince $e_{n} \\to 1$ and $\\|y_n-y_{n-1}\\|$ and $\\|y_n-\\mathcal{T}y_n\\| \\to 0 \\text{ as } n \\to \\infty,$\n\t\twe have $\\|y_n-\\mathcal{S}y_n\\| \\to 0.$\n\t\t\n\t\t\\item[Step 4.] \\label{C3.4}$\\{y_n\\}$ converges strongly to $\\bar{y}=\\operatorname{proj}_{\\Omega}(0).$\\\\\n\t\t\n\t\tFrom (\\ref{P4e1}), we set\n\t\t\\begin{eqnarray}\n\t\t\t\\|y_{n+1}-\\bar{y}\\| &\\leq& \\|\\mathcal{S}[(1-\\theta_n)e_{n}y_n + \\theta_n \\mathcal{T}(e_{n}y_n)]-\\bar{y}\\|\n\t\t\t\\nonumber\\\\\n\t\t\t&=& \\|(1-\\theta_n)e_{n}y_n + \\theta_n \\mathcal{T}(e_{n}y_n)-\\bar{y}\\| \\nonumber\\\\\n\t\t\t&=& \\|(1-\\theta_n)(e_{n}y_n-\\bar{y}) + \\theta_n \\mathcal{T}((e_{n}y_n)-\\bar{y})\\| \\nonumber\\\\\n\t\t\t&\\leq& \\|e_{n}y_n -\\bar{y}\\|.\n\t\t\\end{eqnarray}\n\t\tHence\n\t\t\\begin{eqnarray}\\label{P4e3.4}\n\t\t\t\\|y_{n+1}-\\bar{y}\\|^2 &\\leq& \\|e_{n}y_n -\\bar{y}\\|^2\\nonumber\\\\\n\t\t\t&\\leq& \\|e_{n}(y_n -\\bar{y})-(1-e_{n})\\bar{y}\\|^2 \\nonumber\\\\\n\t\t\t&\\leq& e_{n}^2 \\|y_n -\\bar{y}\\|^2+2 e_n(1-e_{n}) \\langle -\\bar{y}, y_n-\\bar{y} \\rangle+(1-e_{n})^2\\|\\bar{y}\\|^2 \\nonumber\\\\\n\t\t\t&\\leq& e_{n} \\|y_n -\\bar{y}\\|^2+2e(1-e_{n}) \\langle -\\bar{y}, y_n-\\bar{y} \\rangle+(1-e_{n})^2\\|\\bar{y}\\|^2.\n\t\t\\end{eqnarray}\t\n\t\tNext we show that\n\t\t\\begin{equation}\\label{P4e3.5}\n\t\t\t\\limsup_{n\\to \\infty} \\langle -\\bar{y}, y_n-\\bar{y} \\rangle\\leq 0.\n\t\t\\end{equation}\n\t\tContrarily assume a real number $l$ and a subsequence $\\{y_{n_j}\\}$ of $\\{y_n\\}$ satisfying\n\t\t\\begin{equation}\n\t\t\t\\langle -\\bar{y}, y_{n_j}-\\bar{y}\\rangle\\geq l > 0 ~ \\forall j\\in \\mathbb{N}.\n\t\t\\end{equation}\n\t\tSince $\\{y_n\\}$ is bounded, there exists a subsequence $\\{y_{n_j}\\}$ which converges weakly to an element $y\\in \\mathcal{H}.$ Lemma \\ref{P4L2.1} alongwith Step \\ref{C3.2} implies that $y\\in \\Omega.$ By using variational characterazation of projection, we can easily derive\n\t\t\\begin{equation}\n\t\t\t\\lim_{j\\to\\infty}\\langle -\\bar{y}, y_{n_j}-\\bar{y}\\rangle = \\langle -\\bar{y}, y-\\bar{y} \\rangle \\leq 0,\n\t\t\\end{equation}\n\t\twhich is a contradiction. Thus, (\\ref{P4e3.5}) holds and\n\t\t\\begin{eqnarray}\n\t\t\t\\limsup_{n\\to \\infty} \\left( 2 e_n \\langle -\\bar{y}, y_n-\\bar{y} \\rangle+(1-e_{n})\\|\\bar{y}\\|^2 \\right) \\leq 0.\n\t\t\\end{eqnarray}\n\t\tConsider $a_n=\\|y_n-\\bar{y}\\|, b_n= 2 e_n \\langle -\\bar{y}, y_n-\\bar{y} \\rangle+(1-e_{n})\\|\\bar{y}\\|^2, \\epsilon_n= 0$ and $\\theta_{n}= 1-e_{n}$ in (\\ref{P4e3.4}) and apply Lemma \\ref{P4L2.2}, we get the desired conclusion.\n\t\\end{enumerate}\n\\end{proof}\n\n\n\\begin{Corollary}\n\t\\label{c3.1} Let $R_1, R_2:\\mathcal{H}\\to \\mathcal{H}$ be $\\alpha_1,\\alpha_2\n\t$-averaged operators respectively, such that $\\operatorname{Fix}{(R_1)}\\cap \\operatorname{Fix}(R_2) \\neq\n\t\\emptyset$. For $y_1\\in \\mathcal{H}$, let $\\{y_n\\}$ be sequence in $\\mathcal{H}$ defined by\n\t\\begin{equation}\\label{key}\n\t\ty_{n+1}=R_2\\{e_n y_n +\\theta_{n}(R_1 (e_{n}y_n)-e_{n}y_n)\\}\\ \\ \\forall n\\in\n\t\t\\mathbb{N},\n\t\\end{equation}\n\twhere $\\{\\theta_{n}\\}$ and $\\{e_{n}\\}$ are real sequences satisfy the\n\tcondition (\\textbf{i}) given in Theorem \\ref{P4T3.1} and the conditions:\n\n\t$$0<\\underline{\\Theta }\\leq \\alpha _{1}\\theta _{n}\\leq\n\t\\overline{\\Theta }<1 \\textrm{ for all } n\\in \\mathbb{N}\\textrm{ and }\\sum_{n=1}^{\\infty} |\\theta\n\t_{n}-\\theta _{n-1}|<\\infty. 
$$\n\n\tThen the sequence $\\{y_n\\}$ converges strongly to $\\operatorname{proj}_{\\operatorname{Fix}{(R_1)}\\cap\n\t\t\\operatorname{Fix}(R_2)}(0).$\n\\end{Corollary}\n\n\n\n\n\n\n\n\n\n\n\n\\section{Tikhonov Regularized Forward-Backward-type Algorithms}\\label{sec4}\nIn this section, we propose a forward-backward algorithm based on Algorithm \\ref{P4A3.1} to simultaneously solve the monotone inclusion problems of the sum of two maximally monotone operators in which one is single-valued. Further, we also propose an forward-backward-type primal-dual algorithm based on Algorithm \\ref{P4A3.1} to solve a complexly structured monotone inclusion problem containing composition with linear operators and parallel-sum operators.\n\n\\subsection{Tikhonov Regularized Forward-Backward Algorithm}\nLet $A_1,A_2: \\mathcal{H} \\to 2^\\mathcal{H}$ be maximally monotone operators and $B_1,B_2: \\mathcal{H} \\to \\mathcal{H}$ be $\\alpha_1, \\alpha_2$-cocoercive operators. We consider the monotone inclusion problem\n\\begin{equation}\\label{P4Al3.1}\n\t\\text{ find }x \\in \\mathcal{H} \\text{ such that } 0 \\in (A_1 +B_1)x \\cap (A_2+B_2)x.\n\\end{equation}\nWe propose a forward-backward algorithm to solve the monotone inclusion problem (\\ref{P4Al3.1}) such that generated sequence converges strongly to the solution set of the Problem (\\ref{P4Al3.1}).\n\\begin{Theorem}\\label{T4.1}\n\tSuppose $\\operatorname{Zer}(A_1+B_1)\\cap \\operatorname{Zer}(A_2+B_2) \\neq \\emptyset$ and $\\gamma_1 \\in (0,2\\alpha_1)\\text{ and }\\gamma_2 \\in (0,2\\alpha_2).$ For $y_1 \\in \\mathcal{H}$, consider the forward-backward algorithm defined as follows:\n\t\\begin{equation}\\label{e4.1}\n\t\ty_{n+1}=J_{\\gamma_2A_2}(Id-\\gamma_2B_2)\\left\\lbrace (1-\\theta_n)e_{n}y_n + \\theta_n J_{\\gamma_1 A_1}(e_n y_n-\\gamma_1B_1(e_{n}y_n))\\right\\rbrace \\forall n \\in \\mathbb{N}.\n\t\\end{equation}\n\twhere $\\{\\theta_{n}\\}$ and $\\{e_{n}\\}$ are real sequences satisfy the\n\tcondition (\\textbf{i}) given in Theorem \\ref{P4T3.1} and the conditions:\n\n\t$$0<\\underline{\\Theta }\\leq \\frac{2\\alpha_1}{4\\alpha_1-\\gamma_1} \\theta _{n}\\leq\n\t\\overline{\\Theta }<1 \\textrm{ for all } n\\in \\mathbb{N} \\textrm{ and } \\sum_{n=1}^{\\infty}|\\theta\n\t_{n}-\\theta _{n-1}|<\\infty .$$\n\n\tThen $\\{y_n\\}$ converges strongly to $\\operatorname{proj}_{\\operatorname{Zer}(A_1+B_1)\\cap \\operatorname{Zer}(A_2+B_2)}(0).$\n\\end{Theorem}\n\\begin{proof}\n\tSet $T_1=J_{\\gamma_1A_1}(Id-\\gamma_1B_1)$ and $T_2=J_{\\gamma_2A_2}(Id-\\gamma_2B_2),$ then algorithm (\\ref{e4.1}) can be rewritten as:\n\t\\begin{equation}\n\t\ty_{n+1}=T_2\\{(1-\\theta_{n})e_n y_n +\\alpha_1\\theta_{n}(T_1 (e_{n}y_n)-e_{n}y_n)\\} \\forall n \\in \\mathbb{N}.\n\t\\end{equation}\n\tSince $J_{\\gamma_1A_1}$ is $\\frac{1}{2}$-cocoercive \\cite[Corollary 23.8]{Bauschke2011} and $Id-\\gamma_1B_1$ is $\\frac{\\gamma_1}{2\\alpha_1}$-averaged \\cite[proposition 4.33]{Bauschke2011}, $T_1$ is $\\frac{2\\alpha_1}{4\\alpha_1 -\\gamma_1}$-averaged. 
Using the fact that $\\operatorname{Zer}(A_i+B_i)= \\operatorname{Fix}(T_i), i=1,2$ and assumption $\\operatorname{Zer}(A_1+B_1)\\cap \\operatorname{Zer}(A_2+B_2) \\neq \\emptyset$ \\cite[Proposition 25.1]{Bauschke2011}, $\\operatorname{Fix}(T_1) \\cap \\operatorname{Fix}(T_2) \\neq \\emptyset.$ Therefore, Theorem \\ref{T4.1} follows from Corollary \\ref{c3.1}.\n\\end{proof}\n\nFurther, we consider the following minimization problem and propose a proximal-point-type algorithm to solve it.\n\\begin{Problem}\\label{P4P4.1}\n\tConsider strictly positive real numbers $\\beta_1,\\beta_2.$ Let $f_1, f_2 :\\mathcal{H} \\to \\mathbb{R} \\cup \\{\\infty\\}$ be proper convex lower semicontinuous functions and $g_1,g_2: \\mathcal{H} \\to \\mathbb{R}$ be convex and Frechet-differentiable functions with $\\frac{1}{\\beta_1},\\frac{1}{\\beta_2}$-Lipschitz continuous gradients, respectively. The problem is to find a point $y\\in \\mathcal{H}$ satisfying\n\t\\begin{equation}\\label{P4.1}\n\t\tmin\\left\\lbrace (f_1+g_1) \\cap (f_2+g_2)\\right\\rbrace .\n\t\\end{equation}\n\\end{Problem}\n\\begin{Theorem}\\label{c4.1}\n\tConsider the functions $f_1, f_2, g_1 \\text{ and }g_2$ are as in Problem \\ref{P4P4.1}. Let $argmin(f_1+g_1) \\cap argmin(f_2+g_2) \\neq \\emptyset.$ For $\\gamma_1 \\in (0,2\\beta_1] \\text{ and } \\gamma_2 \\in (0,2\\beta_2],$ consider an algorithm with initial point $y_1 \\in \\mathcal{H},$\n\t\\begin{equation}\\label{e4.2}\n\t\ty_{n+1}=prox_{\\gamma_2f_2}(Id-\\gamma_2\\nabla g_2)\\{(1-\\theta_n)e_{n}y_n + \\theta_n prox_{\\gamma_1f_1}(e_n y_n-\\gamma_1\\nabla g(e_{n}y_n))\\} \\ \\ \\forall n \\in \\mathbb{N},\n\t\\end{equation}\n\twhere $\\theta_n \\in (0,1] \\text{ and }e_{n} \\in (0,\\frac{4\\beta_1-\\gamma_1}{2\\beta_1})$ satisfy the condition (\\textbf{i}) in Theorem \\ref{P4T3.1} and the conditions:\n\n\t$$0< \\underline{\\Theta} \\leq \\frac{2\\beta_1}{4\\beta_1-\\gamma_1}\\theta_{n} \\leq \\overline{\\Theta } <1 \\textrm{ and } \\sum_{n=1}^{\\infty} |\\theta_{n}-\\theta_{n-1}| < \\infty.$$\n\n\tThen $\\{y_n\\}$ converges strongly to minimal norm solution of Problem \\ref{P4P4.1}.\n\\end{Theorem}\n\\begin{proof}\n\tConsider $A_1=\\partial f_1, A_2=\\partial f_2, B_1= \\nabla g_1, B_1=\\nabla g_2$. Since $$\\operatorname{Zer}(\\partial f_i +\\nabla g_i)= argmin (f_i +g_i), \\ i=1,2$$ and $\\nabla g_1,\\nabla g_2$ are $\\beta_1,\\beta_2$-cocoercive, respectively (Ballion-Hadded Theorem \\cite[Corollary 16.18]{Bauschke2011}). Thus, by Theorem \\ref{T4.1}, $\\{y_n\\}$ converges strongly to a point in $argmin(f_1+g_1) \\cap argmin(f_2+g_2)$.\n\\end{proof}\n\n\n\\subsection{Tikhonov Regularized Forward-Backward-type Primal-Dual Algorithm}\n\n\\begin{Problem}\\label{P6.1}\n\tSuppose $\\Omega_1, \\dots,\\Omega_m$ are real Hilbert spaces. 
Consider the following operators:\n\t\\begin{enumerate}\n\t\t$\\bullet A,B:\\mathcal{H} \\to 2^\\mathcal{H}$ are maximally monotone operators,\\\\\n\t\t$\\bullet C,D:\\mathcal{H} \\to \\mathcal{H}$ are $\\mu_1,\\mu_2$-cocoercive operators, respectively, $\\mu_1,\\mu_2> 0$\\\\\n\t\t$\\bullet P_i,Q_i,R_i, S_i:\\Omega_i \\to 2^{\\Omega_i}$ are maximally monotone operators such that $Q_i, S_i$ are $\\nu_i,\\delta_i$-cocoercive, respectively, $\\nu_i,\\delta_i>0,$ $i=1, \\dots, m$,\\\\\n\t\t$\\bullet$ nonzero continuous linear operators $L_i: \\mathcal{H} \\to \\Omega_i, i=1, \\dots, m.$\n\t\\end{enumerate}\n\tThe primal inclusion problem is to find $\\bar{y} \\in \\mathcal{H}$ satisfying\n\t\\begin{eqnarray}\n\t\t& 0\\in A \\bar{y} +\\sum_{i=1}^{m} L_i^{*}(P_i \\square Q_i)(L_i \\bar{y})+C\\bar{y}\\nonumber\\\\\n\t\t&\\text{and}\\nonumber\\\\\n\t\t&0\\in B\\bar{y} +\\sum_{i=1}^{k} L_i ^*(R_i \\square S_i)(L_i \\bar{y})+D\\bar{y} \\nonumber\n\t\\end{eqnarray}\n\ttogether with dual inclusion problem\n\t\\begin{eqnarray}\\label{e6.11}{find \\ \\ \\bar{v}_1\\in \\varOmega_1, \\dots, \\bar{v}_m \\in \\varOmega_m \\text{ such that }\n\t\t\t\\left\\{\n\t\t\t\\begin{array}{lc@{}c@{}r}\n\t\t\t\t-\\sum_{i=1}^{m}L_i^* \\bar{v}_i \\in Ay+Cy\\\\\n\t\t\t\t\\bar{v}_i \\in (P_i \\square Q_i)(L_i x)\\\\\n\t\t\t\t\\text{ and }\\\\\n\t\t\t\t-\\sum_{i=1}^{m}L_i^* \\bar{v}_i \\in By+Dy\\\\\n\t\t\t\t\\bar{v}_i \\in (R_i \\square S_i)(L_i y), \\ \\ \\ i =1, \\dots m.\n\t\t\t\\end{array}\\right.}\n\t\\end{eqnarray}\n\\end{Problem}\nA point $(\\bar{y}, \\bar{v}_1,\\dots, \\bar{v}_m) \\in \\mathcal{H}\\times \\varOmega_1 \\times \\cdots \\times \\varOmega_m$ be a primal-dual solution of Problem \\ref{P6.1} if it satisfies the following:\\\\\n\\begin{eqnarray}{\n\t\t\\left\\{\n\t\t\\begin{array}{lc@{}c@{}r}\n\t\t\t-\\sum_{i=1}^{m}L_i^* \\bar{v}_i \\in A\\bar{y}+C\\bar{y},\\\\\n\t\t\t-\\sum_{i=1}^{m}L_i^* \\bar{v}_i \\in B\\bar{y}+D\\bar{y}, \\\\\n\t\t\t\\bar{v}_i \\in (P_i \\square Q_i)(L_i \\bar{y}),\\\\\n\t\t\t\\bar{v}_i \\in (R_i \\square S_i)(L_i \\bar{y}) \\ \\ \\ i =1,\\dots, m.\n\t\t\\end{array}\\right.}\n\\end{eqnarray}\n\n\n\\begin{Theorem}\\label{P4T6.1}\n\tConsider the operators as in Problem \\ref{P6.1}. 
Assume\n\t\\begin{equation}\\label{e6.4}\n\t\t0\\in ran \\left( A+\\sum_{i=1}^{m}L_i ^* \\circ (P_i \\square Q_i)\\circ L_i +C\\right) \\bigcap ran \\left( B+\\sum_{i=1}^{m}L_i ^* \\circ (R_i \\square S_i)\\circ L_i +D\\right).\n\t\\end{equation}\n\tLet $\\tau,\\sigma_1,\\dots, \\sigma_m >0 $ such that\n\t\\begin{equation*}\n\t\t2\\rho\\min\\{\\beta_1, \\beta_2\\} \\geq 1,\n\t\\end{equation*}\n\twhere\n\t\\begin{eqnarray*}\n\t\t\\rho=\\min\\left\\lbrace \\frac{1}{\\tau},\\frac{1}{\\sigma_1},\\dots, \\frac{1}{\\sigma_m}\\right\\rbrace \\left(1-\\sqrt{\\tau\\sum_{i=1}^{m}\\sigma_i\\|L_i\\|^2}\\right)\\\\\n\t\t\\beta_1=\\min\\{\\mu_1,\\nu_1,\\dots,\\nu_m\\} \\textrm{ and } \\beta_2= \\min\\{\\mu_2,\\delta_1,\\dots,\\delta_m\\}.\n\t\\end{eqnarray*}\n\t\n\tConsider the algorithm with intial point $(y_1,v_{1,1},\\dots, v_{m,1}) \\in \\mathcal{H} \\times \\varOmega_1 \\times \\cdots \\times\\varOmega_m$ and defined by\n\t\n\t\n\t\\begin{algorithm}[H]\n\t\t\\SetAlgoLined\n\t\t\n\t\t\n\t\t\\KwIn{\\begin{enumerate}\n\t\t\t\t\\item initial points $(y_1,v_{1,1},\\dots, v_{m,1}) \\in \\mathcal{H} \\times \\varOmega_1 \\times \\cdots \\times\\varOmega_m$\n\t\t\t\t\\item real numbers $\\tau, \\sigma_i > 0$, $i=1,2,...,m$ be such that $\\tau\\sum_{i=1}^{m} \\sigma_i \\|L_i\\|^2 < 4 $.\n\t\t\t\t\\item $\\theta_n \\in (0,\\frac{4\\beta_1 \\rho-1}{2\\beta_1 \\rho}], e_{n} \\in (0,1)$ .\n\t\t\t\\end{enumerate}\t\n\t\t}\n\t\t\\SetKwBlock{For}{For}{}\n\t\t\\SetKwProg{}{}{}{}\n\t\t\\For($n\\in \\mathbb{N}$; ){\n\t\t\t$p_n = J_{\\tau A}\\left[e_{n}y_n-\\tau \\left(e_{n}\\sum_{i=1}^{m}L_i^* v_{i,n}+C(e_{n}y_n)\\right)\\right]$\\\\\n\t\t\t$u_n= e_{n}y_n+\\theta_{n}(p_n-e_{n}y_n)$\\\\\n\t\t\t\n\t\t\t\\For($i=1,\\dots, m$; ){\n\t\t\t\t\n\t\t\t\t$q_{i,n}=J_{\\sigma_iP_i^{-1}}\\left[e_{n}v_{i,n}+\\sigma_i(L_i(2p_n-e_{n}y_n)-Q_i^{-1}(e_{n}v_{i,n}))\\right]$\\\\\n\t\t\t\t$u_{i,n}=e_{n}v_{i,n}+\\theta_{n}(q_{i,n}-e_{n}v_{i,n})$\\\\\n\t\t\t}\n\t\t\t\n\t\t\t$y_{n+1}=J_{\\tau B}\\left[u_n-\\tau \\left(\\sum_{i=1}^{m}L_i^* u_{i,n}+D(u_n)\\right)\\right]$\\\\\n\t\t\t$v_{i,n+1}=J_{\\sigma_iR_i^{-1}}\\left[u_{i,n}+\\sigma_i(L_i(2y_{n+1}-r_n)-S_i^{-1}(u_{i,n}))\\right]$\t\n\t\t\t\n\t\t\t\n\t\t}\n\t\t\n\t\t\n\t\t\n\t\t\n\t\t\\KwOut{$(y_{n+1},v_{1,n+1},\\dots,v_{m,n+1})$}\t\t\n\t\t\\caption{To optimize the complexly structured Problem \\ref{P6.1} }\\label{P4Al4.11}\n\t\\end{algorithm}\n\t\n\twhere, sequencences $\\{\\theta_{n}\\}$ and $\\{e_{n}\\}$ satisfy the condition (\\textbf{i}) and the conditions:\n\n\t$$0< \\underline{\\Theta} \\leq \\frac{2\\beta_1 \\rho}{4\\beta_1 \\rho-1}\\theta_{n} \\leq \\overline{\\Theta } <1 \\textrm{ and } \\sum_{n=1}^{\\infty} |\\theta_{n}-\\theta_{n-1}| < \\infty.$$\n\n\tThen there exists $(\\bar{y},\\bar{v}_1,\\dots,\\bar{v}_m)\\in \\mathcal{H}\\times \\varOmega_1 \\times \\cdots \\times \\varOmega_m $ such that sequence $\\{(y_n,v_{1,n},\\dots,v_{m,n})\\}$ converges strongly to $(\\bar{y},\\bar{v}_1,\\dots,\\bar{v}_m)$ and satisfies the Problem \\ref{P6.1}.\n\\end{Theorem}\n\n\\begin{proof}\n\tConsider the real Hilbert space $\\mathcal{K }\\equiv$ $\\mathcal{H}\\times \\varOmega_1 \\times \\cdots \\times \\varOmega_m$ endowed with innner product\n\t$$\\langle (x,u_1,\\dots, u_m),(y,v_1,\\dots,v_m)\\rangle_{\\mathcal{K}} =\\langle x,y\\rangle_\\mathcal{H} +\\sum_{i=1}^{m}\\langle u_i,v_i \\rangle_{\\varOmega_i}$$\n\tand corresponding norm\n\t$$\\|(x,u_1, \\dots,u_m)\\|_\\mathcal{K}=\\sqrt{\\|x\\|_\\mathcal{H} ^2+\\sum_{i=1}^{m}\\|u_i\\|^2 _{\\varOmega_i} },\\ \\ \\ \\forall (x,u_1,\\dots 
u_m),(y,v_1,\\dots,v_m) \\in \\mathcal{K}.$$\n\tFurther we consider following operators on real Hilbert space $\\mathcal{K}$:\n\t\n\t\\begin{enumerate}\n\t\t\\item $\\phi_1: \\mathcal{K} \\to 2^\\mathcal{K}$, defined by $(x,u_1,\\dots,u_m)\\mapsto (Ax,P_1^{-1}u_1,\\dots,P_m^{-1}u_m),$\n\t\t\\item $\\phi_2: \\mathcal{K} \\to 2^\\mathcal{K}$, defined by $(x,u_1,\\dots,u_m)\\mapsto (Bx,R_1^{-1}u_1,\\dots,R_m^{-1}u_m),$\n\t\t\\item $\\xi:\\mathcal{K} \\to \\mathcal{K}$, defined by $(x,u_1,\\dots,u_m)\\mapsto \\left(\\sum_{i=1}^{m}L_i ^*u_i,-L_1 x,\\dots, -L_m x\\right),$\n\t\t\\item $\\psi_1:\\mathcal{K} \\to \\mathcal{K}$, defined by $(x,u_1,\\dots,u_m)\\mapsto\\left(Cx,Q_1^{-1}u_1,\\dots, Q_m^{-1}u_m\\right),$\n\t\t\\item $\\psi_2:\\mathcal{K} \\to \\mathcal{K}$, defined by $(x,u_1,\\dots,u_m)\\mapsto\\left(Dx,S_1^{-1}u_1,\\dots, S_m^{-1}u_m\\right).$\n\t\\end{enumerate}\n\tThese operators are maximally monotone as $A,B,P_i,R_i,Q_i, S_i, \\ i=1,\\dots,m$ are maximally monotone and $\\xi$ is skew-symmetric, i.e.,\n\t$\\xi^*=-\\xi$ (\\cite[Proposition 20.22, 20.23 and Example 20.30]{Bauschke2011}).\n\tNow, define the continuous linear operator $\\textbf{V}:\\mathcal{K} \\to \\mathcal{K}$ by,\n\t$$(x,u_1,\\dots,u_m) \\to \\left(\\frac{x}{\\tau}-\\sum_{i=1}^{m}L_i^*u_i, \\frac{u_1}{\\sigma_1}-L_1x,\\dots,\\frac{u_m}{\\sigma_m}-L_mx\\right),$$\n\twhich is self adjoint and $\\rho$-strongly positive, i.e., $\\langle \\textbf{x},\\textbf{Vx}\\rangle_{\\mathcal{K}} \\geq \\rho \\|\\textbf{x}\\|_{\\mathcal{K}}^2 \\ \\ \\forall \\textbf{x}\\in \\mathcal{K}$ \\cite{BC2013}. Therefore inverse of operator $\\textbf{V}$ exists and satisfy $\\|\\textbf{V}^{-1}\\|\\leq \\frac{1}{\\rho}.$\\\\\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\tNow, consider the sequences\n\t\\begin{eqnarray}{\n\t\t\t\\forall n\\in \\mathbb{N}\t\\left\\{\n\t\t\t\\begin{array}{lc@{}c@{}r}\n\t\t\t\t\\textbf{y}_n$=$(y_n,v_{1,n},\\dots,v_{m,n}),\\\\ \\textbf{u}_n$=$(u_n,u_{1,n},\\dots,u_{m,n}),\\\\\n\t\t\t\t\\textbf{x}_n =(p_n,q_{1,n},\\dots,q_{m,n}).\n\t\t\t\\end{array}\\right.}\n\t\\end{eqnarray}\n\t\n\tBy taking into account the sequences $\\{\\textbf{y}_n \\}$,$\\{\\textbf{x}_n\\}$ and $\\{\\textbf{u}_n\\}$ and operator \\textbf{V}, Algorithm \\ref{P4Al4.11} can be rewritten as\n\n\n\n\t\\begin{eqnarray}\\label{e6.22}{\n\t\t\t\\forall n\\in \\mathbb{N}\t\\left\\{\n\t\t\t\\begin{array}{lc@{}c@{}r}\n\t\t\t\te_{n}\\text{\\textbf{V}}(\\text{\\textbf{y}}_n)-\\text{\\textbf{V}}(\\text{\\textbf{x}}_n)-\\text{\\textbf{$\\psi_1$}}(e_{n} \\text{\\textbf{y}}_n) \\in (\\phi_1+\\xi)(\\text{\\textbf{x}}_n)\\\\\n\t\t\t\t\\text{\\textbf{u}}_n=e_{n}\\text{\\textbf{y}}_n+\\theta_n(\\text{\\textbf{x}}_n-e_{n}\\text{\\textbf{y}}_n)\\\\\n\t\t\t\t\\text{\\textbf{V}\\textbf{$\\textbf{u}_n$}-\\textbf{V}\\textbf{y}}_{n+1}-\\psi_2 \\text{\\textbf{$\\textbf{u}_n$}} \\in (\\phi_2+\\xi)\\textbf{y}_{n+1}.\n\t\t\t\\end{array}\\right.}\n\t\\end{eqnarray}\n\tOn further analysing Algorithm $\\ref{P4Al4.11}$, we get\n\t\\begin{eqnarray}\\label{e6.22}{\n\t\t\t\\forall n\\in \\mathbb{N}\t\\left\\{\n\t\t\t\\begin{array}{lc@{}c@{}r}\n\t\t\t\t\\textbf{x}_n=J_{\\textbf{A}_1}(e_{n}\\textbf{y}_n-\\textbf{B}_1(e_{n}\\textbf{y}_n)) \\\\\n\t\t\t\t\\textbf{u}_n=e_{n}\\textbf{y}_n+\\theta_n(\\textbf{x}_n-e_{n}\\textbf{y}_n)\\\\\n\t\t\t\t\\textbf{y}_{n+1}=J_{\\textbf{A}_2}(\\textbf{u}_n-\\textbf{B}_2 \\textbf{u}_n),\n\t\t\t\\end{array}\\right.}\n\t\\end{eqnarray}\n\twhere $\\textbf{A}_1=\\textbf{V}^{-1}(\\phi_1 +\\xi)$ ,\\ $\\textbf{B}_1=\\textbf{V}^{-1}\\psi_1$,\\ 
$\\textbf{A}_2=\\textbf{V}^{-1}(\\phi_2 +\\xi)$ and $\\textbf{B}_2=\\textbf{V}^{-1}\\psi_2.$ The Algorithm (\\ref{e6.22}) is in the form of Algorithm (\\ref{P4Al3.1}) for $\\gamma=1$ and $A_i=\\textbf{A}_i$ and $B_i=\\textbf{B}_i, \\ i=1,2$.\n\tNow, we define the real Hilbert space $\\mathcal{K}_{\\textbf{V}} \\equiv \\mathcal{H}\\times \\varOmega_1 \\times \\cdots \\times \\varOmega_m$ endowed with inner product $\\langle \\textbf{x},\\textbf{y}\\rangle_{\\mathcal{K}_{\\textbf{V}}}=\\langle \\textbf{x},{\\textbf{V}}\\textbf{y}\\rangle_{\\mathcal{K}}$ and corresponding norm is given by, $\\|\\textbf{x}\\|_{\\mathcal{K}_{\\textbf{V}}}=\\sqrt{\\langle \\textbf{x},{\\textbf{V}}\\textbf{x}\\rangle_{\\mathcal{K}}} \\ \\ \\forall \\textbf{x},\\textbf{y} \\in \\mathcal{K}_{\\textbf{V}}.$ \\\\\n\tIn view of real Hilbert space $\\mathcal{K}_{\\textbf{V}}$ and Algorithm \\ref{P4Al4.11}, we observe the following:\n\t\\begin{enumerate}\n\t\t\\item[(i)] since dom$(\\xi)=\\mathcal{K}$, $\\phi_i +\\xi$ are maximally monotone on $\\mathcal{K}$ and thus maximal monotonocity of $\\textbf{A}_i$ and $\\textbf{B}_i$ are followed, for $i=1,2$ \\cite{BC2013}.\n\t\t\\item[(ii)] $\\textbf{B}_i$ are $\\beta_i \\rho$-cocoercive on $\\mathcal{K}_{\\textbf{V}}$ as $\\psi_i$ are $\\beta_i$-cocoercive in $\\mathcal{K}$ \\cite[Page 672]{BC2013}, for $i=1,2$.\n\t\t\\item[(iii)] $\\operatorname{Zer}(\\textbf{A}_i+\\textbf{B}_i$)=$\\operatorname{Zer}({\\textbf{V}^{-1}}(\\phi_i+\\xi+\\psi_i))$=$\\operatorname{Zer}(\\phi_i+\\xi+\\psi_i),\\ i=1,2$ and from condition $(\\ref{e6.4})$, we can easily obtain that $\\operatorname{Zer}(\\textbf{A}_1+\\textbf{B}_1)\\cap \\operatorname{Zer}(\\textbf{A}_2+\\textbf{B}_2) \\neq \\emptyset.$\n\t\\end{enumerate}\n\t\\par \tSince \\textbf{V} is self-adjoint and $\\rho$-strongly positive, weak convergence and strong convergence of sequences are same in both Hilbert spaces $\\mathcal{K}$ and $\\mathcal{K}_{\\textbf{V}}$. Operators $\\textbf{A}_i, \\textbf{B}_i, \\ i=1,2$ and sequences $\\{\\theta_{n}\\},\\{e_{n}\\}$ satisfy the assumptions in Theorem $\\ref{T4.1}$, therefore, according to Theorem \\ref{T4.1}, $\\{\\textbf{x}_n\\}$ converges strongly to $(\\bar{y},\\bar{v}_1,\\dots,\\bar{v}_m)\\in$ $\\operatorname{proj}_{\\operatorname{Zer}(\\textbf{A}_1+\\textbf{B}_1)\\cap \\operatorname{Zer}(\\textbf{A}_2+\\textbf{B}_2)}(0,\\dots,0)$ in the space ${\\mathcal{K}}_\\textbf{V}$ as $n \\to \\infty.$ Thus, we obtain the conclusion as $(\\bar{y},\\bar{v}_1,\\dots,\\bar{v}_m)\\in \\operatorname{Zer}(\\phi_1+\\xi+\\psi_1)\\cap \\operatorname{Zer}(\\phi_2+\\xi+\\psi_2),$ also satisfy primal-dual Problem \\ref{P6.1}.\n\t\n\\end{proof}\nNext, we define a complexly structured convex optimization problem and their Fenchel duals. Further, we propose an algorithm to solve the considered problem and study the convergence property of algorithm to find simultaneously the common solutions of optimaztion problems and common solutions of their Fenchel duals. Let $m$ is a positive integer. 
We have considered the following problem:\n\\begin{Problem}\\label{P6.2}\n\tLet $f_1,f_2\\in \\varGamma(\\mathcal{H})$ and $h_1,h_2$ be convex differentiable function with $\\mu_1^{-1},\\mu_2^{-1}$- Lipschitz continuous gradients, for some $\\mu_1,\\mu_2 >0.$ Let $\\varOmega_i$ be real Hilbert spaces and $g_i, l_i,s_i,t_i \\in \\varGamma(\\varOmega_i)$ such that $l_i,t_i$ are $\\nu_i,\\delta_i (>0)$-strongly convex, respectively, and $L_i:\\mathcal{H} \\to \\varOmega_i$ be non-zero linear continuous operator $\\forall i=1,2,\\dots,m$. The opmization problem under consideration is\n\t\\begin{equation}\n\t\t\\inf_{x\\in \\mathcal{H}}\\left\\lbrace f_1(x)+\\sum_{i=1}^{m}(g_i\\square l_i)(L_i x)+h_1(x)\\right\\rbrace \\bigcap \\inf_{x\\in \\mathcal{H}}\\left\\lbrace f_2(x)+\\sum_{i=1}^{m}(s_i\\square t_i)(L_i x)+h_2(x)\\right\\rbrace\n\t\\end{equation}\n\twith its Fenchel-dual problem\n\t\\begin{eqnarray}\n\t\t\\sup_{v_i \\in \\varOmega, i\\in 1,\\dots,m}\\left\\lbrace -(f_1^{*} \\square h_1^{*})\\left( -\\sum_{i=1}^{m}L_i^* v_i\\right) -\\sum_{i=1}^{m}\\left( g_i^*(v_i)+l_i^*(v_i)\\right) \\right\\rbrace \\nonumber\\\\\n\t\t\\bigcap \\sup_{v_i \\in \\varOmega, i\\in 1,\\dots,m}\\left\\lbrace -(f_2^{*} \\square h_2^{*})\\left( -\\sum_{i=1}^{m}L_i^* v_i\\right) -\\sum_{i=1}^{m}\\left( s_i^*(v_i)+t_i^*(v_i)\\right) \\right\\rbrace.\n\t\\end{eqnarray}\n\\end{Problem}\nIn following corollary, we propose an algorithm and study its convergence behaviour. The point of convergence will be the solution of Problem \\ref{P6.2}.\n\n\n\\begin{Corollary}\n\tAssume in Problem \\ref{P6.2}\n\t\\begin{equation}\\label{e6.5}\n\t\t0\\in ran \\left( \\partial f_1+\\sum_{i=1}^{m}L_i ^* \\circ (\\partial g_i \\square \\partial l_i)\\circ L_i +\\nabla h_1 \\right) \\bigcap ran \\left( \\partial f_2+\\sum_{i=1}^{m}L_i ^* \\circ (\\partial s_i \\square \\partial t_i)\\circ L_i +\\nabla h_2\\right).\n\t\\end{equation}\n\tConsider $\\tau, \\sigma_i >0, \\ i=1,2,\\dots,m$ such that\n\t\\begin{equation*}\n\t\t2\\rho\\min\\{\\beta_1,\\beta_2\\} \\geq 1.\n\t\\end{equation*}\n\twhere\n\t\\begin{eqnarray*}\n\t\t\\rho= \\min\\{\\tau^{-1}, \\sigma_1^{-1}, \\dots, \\sigma_m^{-1}\\}\\left(1-\\sqrt{\\tau\\sum_{i=1}^{m}\\sigma_i\\|L_i\\|^2}\\right)\\\\\n\t\t\\beta_1=\\min\\{\\mu_1,\\nu_1,\\dots,\\nu_m\\} \\textrm{ and } \\beta_2= \\min\\{\\mu_2,\\delta_1,\\dots,\\delta_m\\}.\n\t\\end{eqnarray*}\n\t\n\tConsider the iterative scheme with intial point $(x_1,v_{1,1},\\dots, v_{m,1}) \\in \\mathcal{H} \\times \\varOmega_1 \\times \\cdots \\times\\varOmega_m$ and defined by\n\t\\begin{eqnarray}\\label{e6.6}{\n\t\t\t\\forall n\\in \\mathbb{N}\t\\left\\{\n\t\t\t\\begin{array}{lc@{}c@{}r}\n\t\t\t\tp_n = prox_{\\tau f_1}\\left[e_{n}x_n-\\tau \\left(e_{n}\\sum_{i=1}^{m}L_i^* v_{i,n}+\\nabla h_1(e_{n}x_n)\\right)\\right]\\\\\n\t\t\t\tu_n= e_{n}x_n+\\theta_{n}(p_n-e_{n}x_n)\\\\\n\t\t\t\tFor \\ i=1,2,\\dots, m\\\\\n\t\t\t\t\n\t\t\t\tq_{i,n}=prox_{\\sigma_ig_i^{*}}\\left[e_{n}v_{i,n}+\\sigma_i(L_i(2p_n-e_{n}x_n)-\\nabla l_i ^{*}(e_{n}v_{i,n}))\\right]\\\\\n\t\t\t\tu_{i,n}=e_{n}v_{i,n}+\\theta_{n}(q_{i,n}-e_{n}v_{i,n})\\\\\n\t\t\t\tx_{n+1}=prox_{\\tau f_2}\\left[u_n-\\tau \\left(\\sum_{i=1}^{m}L_i^* u_{i,n}+\\nabla h_2(u_n)\\right)\\right]\\\\\n\t\t\t\tv_{i,n+1}=prox_{\\sigma_i s_i^{*}}\\left[u_{i,n} +\\sigma_i(L_i(2x_{n+1}-u_n)-\\nabla t_i^{*}(u_{i,n}))\\right],\n\t\t\t\\end{array}\\right.}\n\t\\end{eqnarray}\n\twhere sequences $\\{\\theta_{n}\\}$ and $\\{e_n\\}$ satisfy the condition (\\textbf{i}) in Theorem \\ref{P4T3.1} and the 
conditions:\n\n\t$$0< \\underline{\\Theta} \\leq \\frac{2\\beta_1 \\rho}{4\\beta_1 \\rho-1} \\theta_{n} \\leq \\overline{\\Theta } <1 \\textrm{ and } \\sum_{n=1}^{\\infty} |\\theta_{n}-\\theta_{n-1}| < \\infty.$$\n\n\t\n\tThen, there exists $(\\bar{x},\\bar{v}_1,\\dots,\\bar{v}_m)\\in \\mathcal{H}\\times \\varOmega_1 \\times \\cdots \\times \\varOmega_m $ such that sequence $(x_n,v_{1,n},\\dots,v_{m,n})$ converges strongly to $(\\bar{x},\\bar{v}_1,\\dots,\\bar{v}_m)$ as $n \\to \\infty$ and $(\\bar{x},\\bar{v}_1,\\dots,\\bar{v}_m)$ satisfies Problem \\ref{P6.2}.\n\\end{Corollary}\t\n\n\n\n\n\n\n\n\n\n\\section{Tikhonov Regularized Douglas-Rachford-type Algorithms}\\label{sec5}\nIn this section, using Algorithm \\ref{P4A3.1} we propose a new Douglas-Rachford algorithm to solve monotone inclusion problem of sum of two maximally monotone operators. Further using proposed Douglas-Rachford algorithm, we propose a Douglas-Rachford-type primal-dual algorithm to solve complexly structured monotone inclusion problem containing linearly composite and parallel-sum operators.\n\n\n\\subsection{Tikhonov Regularized Douglas-Rachford Algorithm}\nLet $A, B:\\mathcal{H} \\to 2^\\mathcal{H}$ be maximally monotone operators. In this section, we consider the following monotone inclusion problem:\n\\begin{equation}\n\t\\text{ find } x\\in \\mathcal{H} \\text{ such that } 0 \\in (A+B)x .\n\\end{equation}\nWe propose a new Douglas-Rachford algorithm based on Algorithm \\ref{P4A3.1} such that generated sequence converges strongly to a point in the solution set.\n\n\\begin{Theorem}\\label{T5.1}\n\tConsider $x_1 \\in \\mathcal{H}$ and $\\gamma >0,$ then algorithm is given by:\n\t\\begin{eqnarray}\\label{e5.1}{\n\t\t\t\\forall n\\in \\mathbb{N}\\left\\{\n\t\t\t\\begin{array}{lc@{}c@{}r}\n\t\t\t\ty_n=J_{\\gamma B}(e_{n}x_n)\\\\\n\t\t\t\tz_n=J_{\\gamma A}(2y_n-e_{n}x_n)\\\\\n\t\t\t\tu_n=e_{n}y_n +\\theta_{n}(z_n-y_n)\\\\\n\t\t\t\tx_{n+1}=(2J_{\\gamma A}-Id)(2J_{\\gamma B}-Id)u_n. \\ \\\n\t\t\t\\end{array}\\right.}\n\t\\end{eqnarray}\n\tLet $\\operatorname{Zer}(A +B) \\neq \\emptyset$ and sequences $ e_n \\in (0,1)$ and $\\theta_{n} \\in (0,2]$ satisfy the condition (\\textbf{i}) in Theorem \\ref{P4T3.1} and the conditions:\n\n\t\n\t$$0< \\underline{\\Theta} \\leq \\frac{\\theta_{n}}{2} \\leq \\overline{\\Theta } < 1 \\textrm{ and } \\sum_{n=1}^{\\infty} |\\theta_{n}-\\theta_{n-1}| < \\infty.$$\n\t\n\n\tThen the following statements are true:\n\t\\begin{enumerate}\n\t\t\\item[(i)] $\\{x_n\\}$ converges strongly to $\\bar{x}=\\operatorname{proj}_{\\operatorname{Fix}R_{\\gamma A} R_{\\gamma B}}(0)$.\n\t\t\\item[(ii)] \\label{P4i2} $\\{y_n\\}$ and $\\{z_n\\}$ converges strongly to $J_{\\gamma B}(\\bar{x})\\in \\operatorname{Zer}(A+B)$.\n\t\n\t\\end{enumerate}\n\\end{Theorem}\n\\begin{proof}\n\tConsider the operator $T\\equiv R_{\\gamma A} R_{\\gamma B}:\\mathcal{H} \\to 2^\\mathcal{H}$. From the definition of reflected resolvent, and definition of operator $T$, algorithm (\\ref{e5.1}) can be rewritten as\n\t\\begin{eqnarray}\n\t\tx_{n+1}&=&R_{\\gamma A}R_{\\gamma B}\\{e_{n}x_n+\\frac{\\theta_{n}}{2}(R_{\\gamma A}R_{\\gamma B})(e_{n}x_n)-e_{n}x_n\\}\\nonumber\\\\\n\t\t&=& T\\{e_{n}x_n +\\frac{\\theta_{n}}{2}(T(e_{n}x_n)-e_{n}x_n)\\}.\n\t\\end{eqnarray}\n\tSince resolvent operator is nonexpansive \\cite[Corollary 23.10(ii)]{Bauschke2011}, $T$ is nonexpansive. 
Suppose $x^* \\in \\operatorname{Zer}(A+B)$. By \\cite[Proposition 25.1(ii)]{Bauschke2011}, we have $\\operatorname{Zer}(A+B)=J_{\\gamma B}(\\operatorname{Fix}(T)),$ which implies that $\\operatorname{Fix} (T) \\neq \\emptyset.$ Applying Theorem \\ref{P4T3.1} with $A_1=A_2=A,B_1=B_2=B$, we conclude that $\\{x_n\\}$ converges strongly to $\\bar{x}=\\operatorname{proj}_{\\operatorname{Fix} (T)}(0)$ as $n \\to \\infty.$ \\\\\n\tThe continuity of the resolvent operator forces the sequence $\\{y_n\\} $ to converge strongly to $J_{\\gamma B}\\bar{x} \\in \\operatorname{Zer}(A+B).$ Finally, $z_n-y_n=\\frac{1}{2}(T(e_{n}x_n)-e_{n}x_n)$ converges strongly to $0$, which proves \\textbf{(ii)}. \\\\\n\\end{proof}\n\n\\begin{Problem}\\label{P4P4.11}\n\tLet $f,g:\\mathcal{H} \\to \\mathbb{R} \\cup \\{\\infty\\}$ be proper, convex and lower semicontinuous functions. Consider the minimization problem\n\t\\begin{equation}\\label{P4e4.11}\n\t\t\\min_{x\\in \\mathcal{H}} f(x) +g(x).\n\t\\end{equation}\n\\end{Problem}\nUsing the first-order optimality condition for convex functions, solving (\\ref{P4e4.11}) is equivalent to solving the inclusion problem\n\\begin{equation}\n\t\\text{find } x \\in \\mathcal{H} \\text{ such that } 0 \\in \\partial f(x) +\\partial g(x).\n\\end{equation}\n\n\n\nIn order to solve this type of problem, we propose an iterative scheme and study its convergence behaviour, which is summarized in the following corollary.\n\\begin{Corollary}\n\tLet $f,g$ be as in Problem \\ref{P4P4.11} with $argmin_{x \\in \\mathcal{H}}\\{f (x)+g (x)\\} \\neq \\emptyset$ and $0 \\in sqri (dom \\ f-dom \\ g).$ Consider the following iterative scheme with $x_1 \\in \\mathcal{H}$:\n\t\\begin{eqnarray}\\label{e5.11}{\n\t\t\t\\forall n\\in \\mathbb{N}\\left\\{\n\t\t\t\\begin{array}{lc@{}c@{}r}\n\t\t\t\ty_n=prox_{\\gamma g}(e_{n}x_n)\\\\\n\t\t\t\tz_n=prox_{\\gamma f}(2y_n-e_{n}x_n)\\\\\n\t\t\t\tu_n=e_{n}x_n +\\theta_{n}(z_n-y_n)\\\\\n\t\t\t\tx_{n+1}=(2prox_{\\gamma f}-Id)(2prox_{\\gamma g}-Id)u_n, \\ \\\n\t\t\t\t\n\t\t\t\\end{array}\\right.}\n\t\\end{eqnarray}\n\twhere $\\gamma >0$\n\tand the sequences $\\{\\theta_{n}\\} \\text{ and } \\{e_{n}\\}$ satisfy the condition (\\textbf{i}) in Theorem \\ref{P4T3.1} and the conditions:\n\n\t\n\t$$0< \\underline{\\Theta} \\leq \\frac{\\theta_{n}}{2} \\leq \\overline{\\Theta } < 1 \\textrm{ and } \\sum_{n=1}^{\\infty} |\\theta_{n}-\\theta_{n-1}| < \\infty.$$\n\tThen we have the following:\n\t\\begin{enumerate}\n\t\t\\item[(i)] $\\{x_n\\}$ converges strongly to $\\bar{x}= \\operatorname{proj}_{\\operatorname{Fix}(T)}(0)$ where $T=(2prox_{\\gamma f}-Id)(2prox_{\\gamma g}-Id)$.\n\t\t\\item[(ii)] $\\{y_n\\}$ and $\\{z_n\\}$ converge strongly to $prox_{\\gamma g}(\\bar{x})\\in argmin_{x \\in \\mathcal{H}}\\{f (x)+g (x)\\} $.\n\t\\end{enumerate}\n\t\\begin{proof}\n\t\tSince $argmin_{x \\in \\mathcal{H}}\\{f (x)+g (x)\\} \\neq \\emptyset$ and $0 \\in sqri (dom \\ f-dom \\ g)$, \\cite[Proposition 7.2]{Bauschke2011} ensures that $\\operatorname{Zer}(\\partial f+\\partial g)=argmin_{x \\in \\mathcal{H}}\\{f (x)+g (x)\\}.$ The results can be obtained by choosing $A =\\partial f, B= \\partial g $ in Theorem \\ref{T5.1}.\n\t\\end{proof}\n\\end{Corollary}\n\n\n\n\n\\subsection{Tikhonov Regularized Douglas-Rachford-type Primal-Dual Algorithm}\nIn this section, we propose a Douglas-Rachford-type primal-dual algorithm to solve a complexly structured monotone inclusion problem involving compositions with linear operators and parallel-sum operators; for orientation, an elementary implementation sketch of the scalar scheme (\\ref{e5.11}) is recorded below. 
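The scheme (\\ref{e5.11}) of the preceding subsection only requires the proximal mappings of $f$ and $g$. The following Python fragment is a minimal illustrative sketch (the callables prox\\_f and prox\\_g, the starting point and the parameter sequences are placeholders to be supplied by the user):
\\begin{verbatim}
import numpy as np

def tikhonov_dr(prox_f, prox_g, x0, e, theta, n_iter):
    # Sketch of scheme (e5.11): Tikhonov regularized Douglas-Rachford
    # iteration for min f+g; prox_f and prox_g evaluate prox_{gamma f}
    # and prox_{gamma g} for a fixed gamma > 0.
    x = np.asarray(x0, dtype=float).copy()
    y = x
    for n in range(1, n_iter + 1):
        e_n, th_n = e(n), theta(n)
        y = prox_g(e_n * x)                  # y_n
        z = prox_f(2.0 * y - e_n * x)        # z_n
        u = e_n * x + th_n * (z - y)         # u_n
        w = 2.0 * prox_g(u) - u              # (2 prox_g - Id) u_n
        x = 2.0 * prox_f(w) - w              # x_{n+1} = (2 prox_f - Id) w
    return x, y
\\end{verbatim}
The factor $e_n<1$ with $e_n\\to 1$ is the Tikhonov regularization ingredient that yields strong convergence to the minimal norm solution.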
We consider the monotone inclusion problem is as follows:\n\\begin{Problem}\\label{P4P6.2}\n\tLet $A:\\mathcal{H} \\to 2^\\mathcal{H}$ be a maximally monotone operator. Consider for each $i=1, \\dots,m$, $\\varOmega_i$ is a real Hilbert space, $P_i, Q_i: \\varOmega_i \\to 2^{\\varOmega_i}$ are maximally monotone operators and $L_i:\\mathcal{H} \\to \\varOmega_i$ are nonzero linear continuous operator.\n\tThe primal inclusion problem is to find $\\bar{x} \\in \\mathcal{H}$ satisfying\n\t\\begin{eqnarray}\n\t\t0\\in A \\bar{x} +\\sum_{i=1}^{m} L_i^{*}(P_i \\square Q_i)(L_i \\bar{x})\\nonumber\n\t\\end{eqnarray}\n\ttogether with dual inclusion problem\n\t\\begin{eqnarray}\\label{e6.1}{find \\ \\ \\bar{v}_1\\in \\varOmega_1, \\dots, \\bar{v}_m \\in \\varOmega_m \\text{ such that } (\\exists x \\in \\mathcal{H})\n\t\t\t\\left\\{\n\t\t\t\\begin{array}{lc@{}c@{}r}\n\t\t\t\t-\\sum_{i=1}^{m}L_i^* \\bar{v}_i \\in Ax\\\\\n\t\t\t\t\\bar{v}_i \\in (P_i \\square Q_i)(L_i x), \\ i=1,\\dots,m.\n\t\t\t\\end{array}\\right.}\n\t\\end{eqnarray}\n\\end{Problem}\n\nHere, operators $P_i ,Q_i, i=1,\\dots,m $ are not cocoercive, thus to solve the Problem \\ref{P4P6.2}, we have to evaluate resolvent of each operator, which makes the Douglas-Rachford algorithm based primal-dual algorithm is more appropriate to solve the problem.\n\n\\begin{Theorem}\\label{P4T5.11}\n\tIn addition to assumption in Problem \\ref{P4P6.2}, we assume that\n\t\n\t\\begin{equation}\\label{P4P6.10}\n\t\t0 \\in ran \\left( A+\\sum_{i=1}^{m}L_i ^* \\circ(P_i \\square Q_i) \\circ L_i \\right).\n\t\\end{equation}\n\tConsider the strictly positive integers $\\tau, \\sigma_i, i=1,\\dots,m$ satisfying\n\t\\begin{equation}\n\t\t\\tau \\sum_{i=1}^{m} \\sigma_i\\|L_i\\|^2 < 4.\n\t\\end{equation}\n\t\n\tConsider the initial point $(x_1,v_{1,1},\\dots,v_{m,1})\\in \\mathcal{H}\\times \\varOmega_i \\times \\cdots \\times \\varOmega_m.$ The primal- dual algorithm to solve Problem \\ref{P4P6.2} is given by\n\t\n\t\n\t\\begin{algorithm}[H]\n\t\t\\SetAlgoLined\n\t\t\n\t\t\\KwIn{\\begin{enumerate}\n\t\t\t\t\\item initial points $(x_1,v_{1,1},\\dots,v_{m,1})\\in \\mathcal{H}\\times \\varOmega_i \\times \\cdots \\times \\varOmega_m.$\n\t\t\t\t\\item strictly positive real numbers $\\tau, \\sigma_i$, $i=1,2,...,m$ be such that $\\tau\\sum_{i=1}^{m} \\sigma_i \\|L_i\\|^2 < 4 $.\n\t\t\t\t\\item sequences $e_{n} \\in (0,1),\\theta_n \\in (0,2]$\n\t\t\t\\end{enumerate}\t\n\t\t}\n\t\t\\SetKwBlock{For}{For}{}\n\t\t\\SetKwProg{}{}{}{}\n\t\t\\For($n\\in \\mathbb{N}$; ){\n\t\t\t$p_{1,n}=J_{\\tau A} (e_n x_n-\\frac{\\tau}{2}e_n \\sum_{i=1}^{m}L_i^{*}v_{i,n} ) $\\\\\n\t\t\t$w_{1,n}=2p_{1,n}-e_n x_n$\\\\\n\t\t\t\n\t\t\t\\For($i=1,\\dots, m$; ){\n\t\t\t\t\n\t\t\t\t$\tp_{2,i,n}=J_{\\sigma_i P_i ^{-1}}(e_n v_{i,n} +\\frac{\\sigma_i}{2}L_i w_{1,n})$\\\\\n\t\t\t\t$\tw_{2,i,n}=2p_{2,i,n}-e_n v_{i,n} $ \\\\\n\t\t\t}\n\t\t\t\n\t\t\t$\tz_{1,n}=w_{1,n}-\\frac{\\tau}{2}\\sum_{i=1}^{m}L_i^{*}w_{2,i,n}$\\\\\n\t\t\t$\tu_{1,n}=e_n x_n+\\theta_{n}(z_{1,n}-p_{1,n})$\\\\\n\t\t\t\n\t\t\t\\For($i=1,\\dots, m$; ){\n\t\t\t\t$z_{2,i,n}=J_{\\sigma_i Q_i ^{-1}}(w_{2,i,n} + \\frac{\\sigma_i}{2}L_i (2z_{1,n}-w_{1,n}))$\\\\\n\t\t\t\t$u_{2,i,n}=e_n v_{i,n} +\\theta_{n}(z_{2,i,n}-p_{2,i,n})$\\\\\n\t\t\t\t\n\t\t\t}\n\t\t\t\n\t\t\t$\tq_{1,n}=J_{\\tau A}(u_{1,n}-\\frac{\\tau}{2}\\sum_{i=1}^{m}L_i^{*}(u_{2,i,n}))$\\\\\n\t\t\t$\ts_{1,n}=2q_{1,n}-u_{1,n}$\\\\\n\t\t\t\n\t\t\t\\For($i=1,\\dots, m$; ){\n\t\t\t\t$\tq_{2,i,n}=J_{\\sigma_i P_i ^{-1}}(u_{2,i,n}+\\frac{\\sigma_i}{2}L_i 
s_{1,n})$\\\\\n\t\t\t\t$\ts_{2,i,n}=2q_{2,i,n}-u_{2,i,n}$\\\\\n\t\t\t}\n\t\t\t\n\t\t\t$d_{1,n}=s_{1,n}-\\frac{\\tau}{2}\\sum_{i=1}^{m}L_i^{*}(s_{2,i,n})$\\\\\n\t\t\t$x_{n+1}=2d_{1,n}-s_{1,n}$\\\\\n\t\t\t\\For($i=1,\\dots, m$; ){\n\t\t\t\t$d_{2,i,n}=J_{\\sigma_i Q_i ^{-1}}(s_{2,i,n}+\\frac{\\sigma_i}{2}L_i(x_{n+1}))$\\\\\n\t\t\t\t$v_{2,i,n}=2d_{2,i,n}-s_{2,i,n}$\n\t\t\t\t\n\t\t\t}\n\t\t\t\n\t\t}\n\t\t\n\t\t\n\t\t\\KwOut{$(x_{n+1},v_{1,n+1},\\dots,v_{m,n+1})$}\t\t\n\t\t\\caption{To optimize the complexly structured monotone inclusion Problem \\ref{P4P6.2} }\n\t\t\\label{P3Al4.1}\n\t\\end{algorithm}\t\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\t\n\twhere sequences $\\{\\theta_{n}\\}$ and $\\{e_{n}\\}$ satisfy the condition (\\textbf{i}) in Theorem \\ref{P4T3.1} and the conditions:\n\n\t\n\t$$0< \\underline{\\Theta} \\leq \\frac{\\theta_{n}}{2} \\leq \\overline{\\Theta } < 1 \\textrm{ and } \\sum_{n=1}^{\\infty} |\\theta_{n}-\\theta_{n-1}| < \\infty.$$\n\t\n\tThen there exist an element $(\\bar{x}, \\bar{v}_1,\\dots,\\bar{v}_m)\\in \\mathcal{H}\\times \\varOmega_1 \\times \\cdots \\times \\varOmega_m$ such that following statements are true:\n\t\\begin{enumerate}\n\t\t\\item[(a)] Denote\\\\\n\t\t$\\bar{p}_1=J_{\\tau A}\\left( \\bar{x}-\\frac{\\tau}{2}\\sum_{i=1}^{m}L_i ^* \\bar{v}_i\\right) $\\\\\n\t\t$\\bar{p}_{2,i}=J_{\\sigma_i P_i ^{-1}}\\left( \\bar{v}_i +\\frac{\\sigma_i}{2}L_i(2\\bar{p}_1-\\bar{x})\\right) , i=1,\\dots, m.$\n\t\tThen the point $(\\bar{p}_1, \\bar{p}_{2,1},\\dots, \\bar{p}_{2,m})\\in \\mathcal{H} \\times \\varOmega_1 \\times\\cdots \\times \\varOmega_m $ is a primal-dual solution of Problem \\ref{P4P6.2}.\n\t\t\\item[(b)] $(x_n,v_{1,n}, \\dots, v_{m,n})$ converges strongly to $(\\bar{x},\\bar{v}_1,\\dots, \\bar{v}_m)$.\n\t\t\\item[(c)] $(p_{1,n}, p_{2,1,n},\\dots, p_{2,m,n})$ and $(z_{1,n}, z_{2,1,n},\\dots, z_{2,m,n})$ converge strongly to $(\\bar{p}_1, \\bar{p}_{2,1},\\dots, \\bar{p}_{2,m})$.\n\t\\end{enumerate}\n\\end{Theorem}\n\n\\begin{proof}\n\tConsider the real Hilbert space $\\mathcal{K}$ and operators\n\t\\begin{enumerate}\n\t\t\\item[(i)] $\\phi: \\mathcal{K} \\to 2^\\mathcal{K}$, defined by $(x,u_1,\\dots,u_m)\\mapsto (Ax,P_1^{-1}u_1,\\dots,P_m^{-1}u_m),$\n\t\t\n\t\t\\item[(ii)] $\\xi:\\mathcal{K} \\to \\mathcal{K}$, defined by $(x,u_1,\\dots,u_m)\\mapsto \\left(\\sum_{i=1}^{m}L_i ^*u_i,-L_1 x,\\dots, -L_m x\\right),$\n\t\t\\item[(iii)] $\\psi: \\mathcal{K} \\to \\mathcal{K}$, defined by\n\t\t$\\psi (x,u_1,\\dots,u_m)= \\left(0,Q_1^{-1}u_1,\\dots, Q_m^{-1}u_m\\right)$\n\t\\end{enumerate}\n\t\n\tWe can observe the following:\n\t\\begin{enumerate}\n\t\t\\item[(i)] operator $\\frac{1}{2} \\xi + \\psi \\text{ and } \\frac{1}{2}\\xi+\\phi $ are maximally monotone as dom $\\xi=\\mathcal{K}$ (\\cite[Corollary 24.4(i)]{Bauschke2011}),\n\t\t\\item[(ii)] condition (\\ref{P4P6.10}) implies $\\operatorname{Zer}(\\phi+\\xi+\\psi) \\neq \\emptyset$,\n\t\t\\item[(iii)] \\label{P4R6.2}every point in $\\operatorname{Zer}(\\phi+\\xi+\\psi)$ solves Problem \\ref{P4P6.2}.\n\t\\end{enumerate}\n\tDefine the linear continuous operator $\\textbf{W}:\\mathcal{K}\\to \\mathcal{K}$, defined by\n\t$$\\textbf{W}(x,u_1,\\dots,u_m )= \\left( \\frac{x}{\\tau}-\\frac{1}{2}\\sum_{i=1}^{m}L_i ^*u_i,\\frac{u_1}{\\sigma_1}-\\frac{1}{2}L_1x, \\dots, \\frac{u_m}{\\sigma_m}-\\frac{1}{2}L_m x \\right) $$ which is self-adjoint. 
Consider $$\\rho=\\left( 1-\\frac{1}{2}\\sqrt{\\tau \\sum_{i=1}^{m}\\sigma_i\\|L_i\\|^2} \\right) \\min\\left\\lbrace \\frac{1}{\\tau}, \\frac{1}{\\sigma_1},\\dots, \\frac{1}{\\sigma_m} \\right\\rbrace >0.$$ The operator $\\textbf{W}$ is $\\rho$- strongly positive in $\\mathcal{K}_\\textbf{W}$ (\\cite{BC2013}) and satisfies the following inequality\n\t$$\\langle x, \\textbf{W}x \\rangle _\\mathcal{K} \\geq \\rho \\|x\\|_\\mathcal{K} ^2 \\ \\ \\forall x \\in \\mathcal{K}.$$ Thus the inverse of $\\textbf{W}$ exists and satisfies $\\|\\textbf{W}^{-1}\\| \\leq \\frac{1}{\\rho}.$ Consider the sequences\n\t\\begin{eqnarray}{\n\t\t\t\\forall n\\in \\mathbb{N}\t\\left\\{\n\t\t\t\\begin{array}{lc@{}c@{}r}\n\t\t\t\t\\text{\\textbf{x}}_n = (x_n, v_{1,n}, \\dots,v_{m,n})\\\\\n\t\t\t\t\\text{\\textbf{y}}_n = (p_{1,n},p_{2,1,n},\\dots,p_{2,m,n}) \\\\\n\t\t\t\t\\text{\\textbf{z}}_n = (z_{1,n},z_{2,1,n},\\dots,z_{2,m,n})\\\\\n\t\t\t\t\\text{\\textbf{u}}_n = (u_{1,n},u_{2,1,n},\\dots,u_{2,m,n})\\\\\n\t\t\t\t\\text{\\textbf{q}}_n = (q_{1,n},q_{2,1,n},\\dots,q_{2,m,n}) \\\\\n\t\t\t\t\\text{\\textbf{s}}_n = (s_{1,n},s_{2,1,n},\\dots,s_{2,m,n}) \\\\\n\t\t\t\t\\text{\\textbf{d}}_n = (d_{1,n},d_{2,1,n},\\dots,d_{2,m,n}).\n\t\t\t\\end{array}\\right.}\n\t\\end{eqnarray}\n\tUsing the definition of operators $\\phi, \\xi,\\psi$ and $\\textbf{W}$, Algorithm \\ref{P3Al4.1} can be written equivalently as\n\t\n\t\\begin{eqnarray}{\n\t\t\t\\forall n\\in \\mathbb{N}\t\\left\\{\n\t\t\t\\begin{array}{lc@{}c@{}r}\n\t\t\t\t\\textbf{W}(e_n \\textbf{x}_n-\\textbf{y}_n ) \\in (\\frac{1}{2}\\xi+\\phi)\\textbf{y}_n\\\\\n\t\t\t\t\\textbf{W}(2\\textbf{y}_n-\\textbf{x}_n-\\textbf{z}_n ) \\in (\\frac{1}{2}\\xi+\\psi)\\textbf{z}_n\\\\\n\t\t\t\t\\textbf{u}_n= e_n \\textbf{x}_n+\\theta_{n}(\\textbf{z}_n -\\textbf{y}_n)\\\\\n\t\t\t\t\\textbf{W}(\\textbf{u}_n -\\textbf{q}_n)\\in (\\frac{1}{2}\\xi+\\phi)\\textbf{q}_n\\\\\n\t\t\t\t\\textbf{W}(\\textbf{s}_n -\\textbf{d}_n)\\in (\\frac{1}{2}\\xi+\\psi)(\\textbf{d}_n)\\\\\n\t\t\t\t\\textbf{x}_{n+1}=2\\textbf{d}_n-\\textbf{s}_n,\n\t\t\t\\end{array}\\right.}\n\t\\end{eqnarray}\n\twhich is further equivalent to\n\t\\begin{eqnarray}{\n\t\t\t\\forall n\\in \\mathbb{N}\t\\left\\{\n\t\t\t\\begin{array}{lc@{}c@{}r}\n\t\t\t\t\\textbf{y}_n =(Id+\\textbf{W}^{-1}(\\frac{1}{2}\\xi+\\phi))^{-1}(\\textbf{x}_n) \\\\\n\t\t\t\t{\\textbf{z}_n=(Id+\\textbf{W}^{-1}(\\frac{1}{2}\\xi+\\psi))^{-1}(2\\textbf{y}_n-\\textbf{x}_n) }\\\\\n\t\t\t\t{\\textbf{u}_n= \\textbf{x}_n+\\theta_{n}(\\textbf{z}_n -\\textbf{y}_n)}\\\\\n\t\t\t\t\\textbf{q}_n= (Id+\\textbf{W}^{-1}(\\frac{1}{2}\\xi+\\phi))^{-1}(\\textbf{u}_n)\\\\\n\t\t\t\t\\textbf{d}_n=(Id+\\textbf{W}^{-1}(\\frac{1}{2}\\xi+\\psi))^{-1}(\\textbf{s}_n)\\\\\n\t\t\t\t\\textbf{x}_{n+1}=2\\textbf{d}_n-\\textbf{s}_n.\n\t\t\t\\end{array}\\right.}\n\t\\end{eqnarray}\n\tNow, consider the real Hilbert space $\\mathcal{K}_{\\text{\\textbf{W}}}=\\mathcal{H}\\times \\varOmega_1\\times \\cdots \\times \\varOmega_m$ with inner product and norm defined as \\\\\n\t$\\langle \\textbf{x}, \\textbf{y} \\rangle_{\\mathcal{K}_\\textbf{W}} = \\langle \\textbf{x}, \\textbf{W}\\textbf{y} \\rangle $ and $\\|\\textbf{x}\\|_{\\mathcal{K}_\\textbf{W}}= \\sqrt{\\langle \\textbf{x}, \\textbf{W}\\textbf{x} \\rangle_{\\mathcal{K}}}$ respectively.\\\\\n\tNow, define the operators $\\textbf{A} \\equiv \\textbf{W}^{-1}(\\frac{1}{2}\\xi+\\psi)$ and $\\textbf{B} \\equiv \\textbf{W}^{-1}(\\frac{1}{2}\\xi+\\phi),$ which are maximally monotone on $\\mathcal{K}_{\\textbf{W}}$ as $\\frac{1}{2}\\xi+\\phi$ and 
$\\frac{1}{2}\\xi+\\psi$ are maximally monotone on $\\mathcal{K}.$ The Algorithm \\ref{P3Al4.1} can be written in the form of Doughlas-Rachford algorithm as\n\t\\begin{eqnarray}{\n\t\t\t\\forall n\\in \\mathbb{N}\\left\\{\n\t\t\t\\begin{array}{lc@{}c@{}r}\n\t\t\t\t\\textbf{y}_n = \\textbf{J}_{\\textbf{B}}(e_n \\textbf{x}_n)\\\\\n\t\t\t\t\\textbf{u}_n = e_n\\textbf{x}_n-\\theta_{n}\\left( \\textbf{J}_{\\textbf{A}}(2\\textbf{y}_n - e_n \\textbf{x}_n)-\\textbf{y}_n\\right) \\\\\n\t\t\t\t\\textbf{x}_{n+1}=(2\\textbf{J}_{\\textbf{A}}- Id) (2\\textbf{J}_{\\textbf{B}}- Id)\\textbf{z}_n,\n\t\t\t\\end{array}\\right.}\n\t\\end{eqnarray}\n\twhich is of the form Algorithm (\\ref{e5.1}) for $\\gamma =1.$ From assumption (\\ref{P4P6.10}), we have\n\t$$\\operatorname{Zer}(\\textbf{A} +\\textbf{B})=\\operatorname{Zer}(\\textbf{W}^{-1}(\\textbf{M}+\\textbf{S}+\\textbf{Q}))=\\operatorname{Zer}(\\textbf{M}+\\textbf{S}+\\textbf{Q}).$$\n\tApplying Theorem \\ref{T5.1}, we can find $\\bar{\\textbf{x}} \\in \\operatorname{Fix}(R_{\\textbf{A}}R_{\\textbf{B}})$ such that $\\textbf{J}_{\\text{\\textbf{B}}}{\\bar{\\textbf{x}}} \\in \\operatorname{Zer}(\\textbf{A}+\\textbf{B}).$\n\\end{proof}\n\nAt the end of this section, we study iterative technique to solve following convex optimization problem:\n\\begin{Problem}\\label{P4P5.33}\n\tLet $f \\in \\Gamma(\\mathcal{H})$. Consider $\\varOmega_i$ are real Hilbert spaces, $g_i,l_i \\in \\Gamma(\\varOmega_i)$ and $L_i : \\mathcal{H} \\to \\varOmega_i$ are linear continuous operators, $i=1,\\dots,m$. The optimization problem is given by\n\t\\begin{equation}\n\t\t\\inf_{x\\in H}\\left[f(x)+\\sum_{i=1}^{m}(g_i\\square l_i)(L_i x)\\right]\n\t\\end{equation}\n\twith conjugate-dual problem is given by\n\t\\begin{equation}\n\t\t\\sup_{v_i \\in \\varOmega, i=1,2,\\dots,m}\\left\\lbrace -f^{*}\\left( -\\sum_{i=1}^{m}L_i^* v_i\\right) -\\sum_{i=1}^{m}(g_i^*(v_i)+l_i^*(v_i))\\right\\rbrace.\n\t\\end{equation}\n\\end{Problem}\n\n\nConsider stricly positive integers $\\tau, \\sigma_i, i=1,\\dots,m$ and\ninitial point $(x_1,v_{1,1},\\dots,v_{m,1})\\in \\mathcal{H}\\times \\varOmega_i \\times \\cdots \\times \\varOmega_m.$ The primal-dual algorithm to solve Problem \\ref{P4P5.33} is given by\n\n\\begin{algorithm}[H]\n\t\\SetAlgoLined\n\t\n\t\\KwIn{\\begin{enumerate}\n\t\t\t\\item initial points $(x_1,v_{1,1},\\dots,v_{m,1})\\in \\mathcal{H}\\times \\varOmega_i \\times \\cdots \\times \\varOmega_m.$\n\t\t\t\\item \\label{P4As5.32}Positive real numbers $\\tau, \\sigma_i$, $i=1,2,...,m$ be such that $\\tau\\sum_{i=1}^{m} \\sigma_i \\|L_i\\|^2 < 4 $.\n\t\t\t\\item The sequences $\\{\\theta_n\\}, \\{e_n\\}$ satisfying the assumptions in Theorem \\ref{P4T5.11}.\n\t\t\\end{enumerate}\t\n\t}\n\t\\SetKwBlock{For}{For}{}\n\t\\SetKwProg{}{}{}{}\n\t\\For($n\\in \\mathbb{N}$; ){\n\t\t$p_{1,n}=prox_{\\tau f} (e_n x_n-\\frac{\\tau}{2}e_n \\sum_{i=1}^{m}L_i^{*}v_{i,n} )$\\\\\n\t\t$w_{1,n}=2p_{1,n}-e_n x_n$\\\\\n\t\t\n\t\t\\For($i=1,\\dots, m$; ){\n\t\t\t\n\t\t\t$\tp_{2,i,n}=prox_{\\sigma_i g_i ^*}(e_n v_{i,n} +\\frac{\\sigma_i}{2}L_i w_{1,n})$\\\\\n\t\t\t$\tw_{2,i,n}=2p_{2,i,n}-e_n v_{i,n} $ \\\\\n\t\t}\n\t\t\n\t\t$z_{1,n}=w_{1,n}-\\frac{\\tau}{2}\\sum_{i=1}^{m}L_i^{*}w_{2,i,n}$\\\\\n\t\t$u_{1,n}=e_n x_n+\\theta_{n}(z_{1,n}-p_{1,n})$\\\\\n\t\t\n\t\t\\For($i=1,\\dots, m$; ){\n\t\t\t$z_{2,i,n}=prox_{\\sigma_i l_i ^ *}(w_{2,i,n} + \\frac{\\sigma_i}{2}L_i (2z_{1,n}-w_{1,n}))$\\\\\n\t\t\t$\tu_{2,i,n}=e_n v_{i,n} +\\theta_{n}(z_{2,i,n}-p_{2,i,n})$\\\\\n\t\t\t\n\t\t}\n\t\t\n\t\t$q_{1,n}=prox_{\\tau 
f}(u_{1,n}-\\frac{\\tau}{2}\\sum_{i=1}^{m}L_i^{*}(u_{2,i,n}))$\\\\\n\t\t$s_{1,n}=2q_{1,n}-u_{1,n}$\\\\\n\t\t\n\t\t\\For($i=1,\\dots, m$; ){\n\t\t\t$\tq_{2,i,n}=prox_{\\sigma_i g_i ^*}(u_{2,i,n}+\\frac{\\sigma_i}{2}L_i s_{1,n})$\\\\\n\t\t\t$\ts_{2,i,n}=2q_{2,i,n}-u_{2,i,n}$\\\\\n\t\t}\n\t\t\n\t\t$\tt_{1,n}=s_{1,n}-\\frac{\\tau}{2}\\sum_{i=1}^{m}L_i^{*}(s_{2,i,n})$\\\\\n\t\t$\tx_{n+1}=2t_{1,n}-s_{1,n}$\\\\\n\t\t\\For($i=1,\\dots, m$; ){\n\t\t\t$t_{2,i,n}=prox_{\\sigma_i l_i ^*}(s_{2,i,n}+\\frac{\\sigma_i}{2}L_i(x_{n+1}))$\\\\\n\t\t\t$\tv_{2,i,n}=2t_{2,i,n}-s_{2,i,n}$\n\t\t\t\n\t\t}\n\t\t\n\t}\n\t\n\t\n\t\\KwOut{$(x_{n+1},v_{1,n+1},\\dots,v_{m,n+1})$}\t\t\n\t\\caption{To optimize the complexly structured monotone inclusion Problem \\ref{P4P5.33} }\\label{P3Al4.3}\n\\end{algorithm}\t\nwhere $\\{\\theta_{n}\\} \\text{ and } \\{e_n\\}$ are real sequences.\n\n\\begin{Corollary}\n\tIn addition to assumptions in Problem \\ref{P4P5.33}, consider\n\t\\begin{equation}\\label{e6.}\n\t\t0\\in ran\\left( \\partial f +\\sum_{i=1}^{m}L_i ^* \\circ (\\partial g_i \\square \\partial l_i)\\circ L_i \\right) .\n\t\\end{equation}\t\n\tThen, there exists an element $(\\bar{x}, \\bar{v}_1,\\dots,\\bar{v}_m)\\in \\mathcal{H}\\times \\varOmega_1 \\times \\cdots \\times \\varOmega_m$ such that following statements are true:\n\t\\begin{enumerate}\n\t\t\\item[(a)] Denote\\\\\n\t\t$\\bar{p}_1=prox_{\\tau f}\\left( \\bar{x}-\\frac{\\tau}{2}\\sum_{i=1}^{m}L_i ^* \\bar{v}_i\\right) $\\\\\n\t\t$\\bar{p}_{2,i}=prox_{\\sigma_i g^*_i}\\left( \\bar{v}_i +\\frac{\\sigma_i}{2}L_i(2\\bar{p}_i-\\bar{x})\\right) , i=1,\\dots, m.$\n\t\tThen the point $(\\bar{p}_1, \\bar{p}_{2,1},\\dots, \\bar{p}_{2,m})\\in \\mathcal{H} \\times \\varOmega_1 \\times\\cdots \\times \\varOmega_m $ is a primal-dual solution of Problem \\ref{P4P5.33}.\n\t\t\\item[(b)] $(x_n,v_{1,n}, \\dots, v_{m,n})$ converges strongly to $(\\bar{x},\\bar{v}_1,\\dots, \\bar{v}_m)$.\n\t\t\\item[(c)] $(p_{1,n}, p_{2,1,n},\\dots, p_{2,m,n})$ and $(z_{1,n}, z_{2,1,n},\\dots, z_{2,m,n})$ converge strongly to $(\\bar{p}_1, \\bar{p}_{2,1},\\dots, \\bar{p}_{2,m})$.\n\t\\end{enumerate}\n\\end{Corollary}\n\n\n\n\n\n\n\\section{Numerical Experiment}\nIn this section, we make an experimental setup to solve the wavelet based image deblurring problem. In image deblurring, we develop mathematical methods to recover the original, sharp image from blurred image. The mathematical formulation of the blurring process can be written as linear inverse problem,\n\\begin{equation} \\label{p4n1}\n\t\\text{ find } x \\in \\mathbb{R}^d \\text{ such that }\tAx=b+w\n\\end{equation}\nwhere $A\\in \\mathbb{R}^{m\\times d}$ is blurring operator, $b\\in \\mathbb{R}^m$ is blurred image and $w$ is an unknown noise.\nA classical approach to solve Problem (\\ref{p4n1}) is to minimize the least-square term\n$\\|Ax-b\\|^2$. In the deblurring case, the problem is ill-conditioned as the norm solution has a huge norm. To remove the difficulty, the ill-conditioned problem is replaced by a nearly well-conditioned problem. In wavelet domain, most images are sparse in nature, that's why we choose $l_1$ regularization. For $l_1$ regularization, the image processing problem becomes\n\\begin{equation}\\label{p4n2}\n\t\\min_{x\\in\\mathbb{R}^2}F(x)=\t\\|Ax-b\\|^2 +\\lambda \\|x\\|_1,\n\\end{equation}\nwhere $\\lambda$ is a sparsity controlling parameter and provides a tradeoff between fidelity to the measurements and noise sensitivity. 
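The backward step associated with the term $\\lambda\\|x\\|_1$ in (\\ref{p4n2}) is the proximal mapping of $\\lambda\\|\\cdot\\|_1$, which reduces to componentwise soft-thresholding; a minimal Python helper (the name is ours and purely illustrative) reads:
\\begin{verbatim}
import numpy as np

def prox_l1(x, t):
    # prox_{t ||.||_1}(x): componentwise soft-thresholding
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)
\\end{verbatim}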
The $l_1$ regularization promotes sparsity and preserves sharp edges, since it is less sensitive to outliers.\nUsing the subdifferential characterization of the minimizers of a convex function, a point $x^*$ minimizes $F(x)$ if and only if\n\\begin{equation}\\label{p4n3}\n\t0\\in 2A^T(Ax^*-b)+\\partial (\\lambda \\|x^*\\|_1).\\nonumber\n\\end{equation}\nThus we can apply the forward-backward Algorithm (\\ref{e4.1}) to solve the deblurring problem (\\ref{p4n2}).\n\nFor the numerical experiments, we have chosen images from the public domain and assumed reflexive (Neumann) boundary conditions. We blurred the images using a Gaussian blur of size $9 \\times 9$ and standard deviation 4. We have compared the algorithm (\\ref{e4.1}) with \\cite[Algorithm 8]{Bot2019}. The operator $A=RW$, where $W$ is the three-stage Haar wavelet transform and $R$ is the blur operator. The original and corresponding blurred images are shown in Figure \\ref{Fig6.5}.\n\\begin{figure}[htb!]\n\t\\begin{subfigure}[b]{0.48\\textwidth}\n\t\n\t\t\\includegraphics[width=\\linewidth]{face}\n\t\t\\caption{Original.}\n\t\t\\label{Fig6.1}\n\t\\end{subfigure}\n\t\\hfill\n\t\\begin{subfigure}[b]{0.48\\textwidth}\n\t\n\t\t\\includegraphics[width=\\linewidth]{blurredface}\n\t\t\\caption{Blurred.}\n\t\t\\label{Fig6.2}\n\t\\end{subfigure}\n\t\\hfill\n\t\\begin{subfigure}[b]{0.48\\textwidth}\n\t\n\t\t\\includegraphics[width=\\linewidth]{crowd}\n\t\t\\caption{Original.}\n\t\t\\label{Fig6.3}\n\t\\end{subfigure}\n\t\\hfill\n\t\\begin{subfigure}[b]{0.48\\textwidth}\n\t\n\t\t\\includegraphics[width=\\linewidth]{blurredcrowd}\n\t\t\\caption{Blurred.}\n\t\t\\label{Fig6.4}\n\t\\end{subfigure}\n\t\\caption{The original and blurred images of Lenna and crowd.}\n\t\\label{Fig6.5}\n\\end{figure}\nThe regularization parameter was chosen to be $\\lambda=2\\times10^{-5}$, and the initial image was the blurred image. The optimal objective function value is denoted by $F(x^*)$ and the function value at the $n^{\\text{th}}$ iteration is denoted by $F(x_n)$. The sequences $\\{e_{n}\\}$ and $\\{\\theta_{n}\\}$ are chosen as $\\{1-\\frac{1}{n+1}\\}$ and $\\{0.9\\}$, respectively. 
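To make the link with Algorithm (\\ref{e4.1}) explicit, the iteration used for (\\ref{p4n2}) can be sketched as follows. The Python fragment below is only schematic: the blur operator $A$, its adjoint $A^T$ and the data $b$ are assumed to be supplied as callables and arrays, and the helper names are placeholders rather than the code used to produce the figures.
\\begin{verbatim}
import numpy as np

def soft(x, t):
    # prox of t*||.||_1 (componentwise soft-thresholding)
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def tikhonov_fb(A, At, b, x0, lam, gamma, n_iter=1000):
    # Sketch of Algorithm (e4.1) for problem (p4n2) with
    # A1 = A2 = subdifferential of lam*||.||_1 and
    # B1 = B2 = gradient of ||Ax-b||^2, i.e. 2*A^T(Ax-b).
    # Step size: gamma in (0, 1/||A||^2), so that gamma < 2*alpha.
    x = np.asarray(x0, dtype=float).copy()
    for n in range(1, n_iter + 1):
        e_n = 1.0 - 1.0 / (n + 1)     # Tikhonov factor, e_n -> 1
        th_n = 0.9                    # relaxation parameter theta_n
        z = e_n * x
        inner = soft(z - gamma * 2.0 * At(A(z) - b), gamma * lam)
        w = (1.0 - th_n) * z + th_n * inner
        x = soft(w - gamma * 2.0 * At(A(w) - b), gamma * lam)
    return x
\\end{verbatim}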
The images recovered by the algorithms after 1000 iterations are shown in Figure \\ref{Fig6.13}.\nThe convergence of $F(x_n) - F(x^*)$ is depicted in Figure \\ref{Fig6.8}.\n\\begin{figure}[htb!]\n\t\\begin{subfigure}[b]{0.48\\textwidth}\n\t\n\t\t\\includegraphics[width=\\linewidth]{face_plot_1000_9}\n\t\t\\caption{Lenna.}\n\t\t\\label{Fig6.6}\n\t\\end{subfigure}\n\t\\hfill\n\t\\begin{subfigure}[b]{0.48\\textwidth}\n\t\n\t\t\\includegraphics[width=\\linewidth]{crowd_plot_1000_9}\n\t\t\\caption{Crowd.}\n\t\t\\label{Fig6.7}\n\t\\end{subfigure}\n\t\\caption{The variation of $F(x_n)-F(x^*)$ with respect to the number of iterations for different images.}\n\t\\label{Fig6.8}\n\\end{figure}\nFor deblurring methods, the lower the value of $F(x_n)-F(x^*)$, the higher the quality of the recovered images.\n\\begin{figure}[htb!]\n\t\\begin{subfigure}[b]{0.48\\textwidth}\n\t\n\t\t\\includegraphics[width=\\linewidth]{face_mann_1000_9}\n\t\t\\caption{Algorithm (\\ref{e4.1}).}\n\t\t\\label{Fig6.9}\n\t\\end{subfigure}\n\t\\hfill\n\t\\begin{subfigure}[b]{0.48\\textwidth}\n\t\n\t\t\\includegraphics[width=\\linewidth]{face_normalS_1000_9}\n\t\t\\caption{\\cite[Algorithm 8]{Bot2019}.}\n\t\t\\label{Fig6.10}\n\t\\end{subfigure}\\hfill\n\t\\begin{subfigure}[b]{0.48\\textwidth}\n\t\n\t\t\\includegraphics[width=\\linewidth]{crowd_mann_1000_9}\n\t\t\\caption{Algorithm (\\ref{e4.1}).}\n\t\t\\label{Fig6.11}\n\t\\end{subfigure}\n\t\\hfill\n\t\\begin{subfigure}[b]{0.48\\textwidth}\n\t\n\t\t\\includegraphics[width=\\linewidth]{crowd_normalS_1000_9}\n\t\t\\caption{\\cite[Algorithm 8]{Bot2019}.}\n\t\t\\label{Fig6.12}\n\t\\end{subfigure}\n\t\\caption{The images recovered by the different algorithms after 1000 iterations.}\n\t\\label{Fig6.13}\n\\end{figure}\n\nIt can be observed from Figures \\ref{Fig6.8} and \\ref{Fig6.13} that the proposed Algorithm (\\ref{e4.1}) outperforms \\cite[Algorithm 8]{Bot2019}.\n\n\n\\section*{Acknowledgements}\nThe first author acknowledges the Indian Institute of Technology (BHU), Varanasi, for a Senior Research Fellowship in the form of a teaching assistantship. 
The third author is thankful to University Grant Commission for the Senior Research Fellowship.\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nSynchronization phenomena occur recurrently in \nphysical, chemical and biological systems.\nFew noteworthy examples include superconducting currents\nin Josephson junction arrays\n\\cite{strogatz_prl_1996,strogatz_pre_1998},\nemerging coherence in populations of chemical oscillators\n\\cite{kiss},\nor the accuracy of central circadian pacemakers in insects and vertebrates\n\\cite{reppert_97,winfree,sync_book}.\nThe latter serve as biomolecular time-keeping\ndevices, which most organisms have evolved to coordinate their\nphysiology and metabolic\nactivities with the geophysical light-dark and temperature cycles\n\\cite{schibler_naef}.\n\nThe present work was motivated by recent experiments in mammalian\ncell cultures in which the levels of\nproteins implicated in the circadian (with $\\sim 24$hr periods)\nclockwork were monitored using fluorescent reporters\n\\cite{nagoshi, welsh, Carr}.\nIt was demonstrated that\nindividual cellular oscillators generate self-sustained rhythms\nin protein abundance\nand that populations can be synchronized by treatment with a short\nserum shock or light pulse.\nImportantly frequencies of individual oscillators are not strictly\nconstant but drift in time\n(see for example Fig.~S2 in \\cite{Carr} showing the results in the zebrafish).\nSeveral chemical kinetics models thought to\ncapture the biochemistry responsible for generating\noscillations in living cells\nwere shown to exhibit oscillatory instabilities and limit-cycles\n\\cite{goldbeter, leibler}.\nExperimental and theoretical evidence therefore supports a\ndescription of oscillator populations in terms of phase variables.\n\nOur understanding of the onset of collective synchronization in coupled\nnonlinear oscillator models has greatly benefited from a large body of\nwork on the Kuramoto model\n\\cite{kuramoto_84,Sakaguchi_kuramoto_86,kuramoto_nishikawa_87,strogatz_prl_1992,stogatz_physicaD_2000}\n\\begin{eqnarray}\n\\label{kuramoto}\n\\dot\\phi_i=f_i+\\frac K N \\sum_{j=1}^N\\sin(\\phi_j-\\phi_i)+\\xi_i(t)~,\n\\end{eqnarray}\ndescribing the phase dynamics in a set of weakly coupled identical\nnon-linear oscillators.\nHere, $\\phi_i(t)$ represent the phase of the\n$i$-th oscillator at time $t$,\nand $\\xi_i(t)$ are white noise sources with\nexpectation and covariance\n\\begin{eqnarray*}\n{\\rm E}[\\xi_i(t)]&=&0~,\\\\\n{\\rm Cov}[\\xi_i(s),\\xi_j(t)]&=&2D\\delta _{ij}\\delta(s-t)~.\n\\end{eqnarray*}\nThe frequencies $f_i$ are static and taken from a distribution\n$g(f)$ symmetric around $\\mu_f$. Eq.~\\ref{kuramoto} is an effective\nmodel for the phase degrees of freedom in a population of\nlimit-cycle oscillators and assumes a regime where phase and amplitude\ndynamics decouple.\nThe parameter $K$ measures the strength of the all-to-all coupling.\nMost exact results are given for the coupling function\n$U(\\phi_j-\\phi_i) = \\sin(\\phi_j-\\phi_i)$. More general interactions\nlead to much greater\nanalytical complexity and were investigated in\n\\cite{crawford_prl95,crawford_physicaD_1999}.\nCritical properties of the model are conveniently studied using the complex\norder parameter $R(t)e^{i\\psi(t)}=\\frac{1}{N}\\sum_{j=1}^Ne^{i\\phi_j(t)}$\nso that collective synchronization occurs\nwhen $R_\\infty=\\lim_{T\\rightarrow\\infty}\\frac 1 T\\int_0^T\\! 
R(t)\\,dt$ remains positive\nin the infinite population limit.\nFor the sine coupling model a bifurcation occurs at\n$K_c = 2\/\\int\\frac D {D^2+f^2}g(f+\\mu_f)\\,\\!df$\nat which the incoherent desynchronized state $R_\\infty=0$ \nbecomes unstable and a macroscopic number of oscillators phase lock\nto the average phase $\\psi(t)=\\mu_f\\,t$ \\cite{crawford_jsp_1994,Sakaguchi,strogatz_jsp_1991}.\nFor $D\\rightarrow 0$ the classical Kuramoto result\n$K_c=\\frac 2{\\pi g(\\mu_f)}$ \\cite{kuramoto_book} is recovered.\nBelow the critical coupling, the incoherent state is linearly stable when $D>0$ \\cite{strogatz_jsp_1991}\nbut only neutrally stable when $D=0$ with $R(t)$ still decaying to zero \\cite{strogatz_prl_1992}.\n\n\\section{The Model}\n\nTo study the effects of the reported drifts on collective synchronization,\nwe generalize the Kuramoto model by introducing\na second time scale $1\/\\gamma$ (besides $1\/\\sigma_f$)\ncharacterizing the frequency drifts.\nThe frequency dynamics is formulated as an Ornstein-Uhlenbeck (O-U) process\nwhile the phases are coupled following the canonical\nall-to-all sine interaction. The model for $N$ oscillators reads\n\\begin{eqnarray}\n\\label{model}\n\\dot f_i(t)&=& -\\gamma(f_i(t)-\\mu_f)+\\eta_i(t)~,\\\\\n\\dot \\phi_i(t)&=& f_i(t)+\\frac K N \\sum_{j=1}^N\\sin\\bigl(\\phi_j(t)-\\phi\n_i(t)\\bigr)~,\\nonumber\n\\end{eqnarray}\nwhere $\\mu_f$ is the average frequency chosen identical\nfor each oscillator\nWe assume that the $\\eta_i$ are independent and identically\ndistributed white noise sources with\n\\begin{eqnarray*}\n{\\rm E}[\\eta_i(t)]&=&0~,\\\\\n{\\rm Cov}[\\eta_i(s),\\eta_j(t)]&=&\\eta^2\\delta _{ij}\\delta(s-t)~.\n\\end{eqnarray*}\nThe solution for $f_i$ \nis a Gaussian process with mean and covariance \n\\begin{eqnarray}\n{\\rm E}[f_i(t)]&=&\\mu_f+e^{-\\gamma t}(f_i(0)-\\mu_f)~,\\nonumber\\\\\n{\\rm Cov}[f_i(s),f_j(t)]&=&\\frac{\\eta^2}{2\\gamma}\\delta_{ij}\n\\left(e^{-\\gamma|t-s|}-e^{-\\gamma(t+s)}\\right)\\nonumber\\\\\n&\\stackrel{\\gamma t\\gg1}{\\longrightarrow}&\\frac{\\eta^2}{\\gamma^2}\\delta_{ij}\\delta(t-s)~.\n\\end{eqnarray}\n\nIn the following we use as independent parameters the\nasymptotic frequency dispersion $\\sigma_f^2=\\frac{\\eta^2}{2\\gamma}$\nand the damping $\\gamma$, which are in principle both accessible\nexperimentally.\nThen,\n$\n{\\rm Cov}[f_i(s),f_i(t)]\\rightarrow2\\sigma_f^2\/\\gamma\\,\\delta(t-s)\n$ when $\\gamma t\\gg1$~. 
\nTo remind the significance of this regime we note\nfor $K=0$ the phases $\\phi_i(t)$ also follow Gaussian processes with\n$\n{\\rm Cov}[\\phi_i(t),\\phi_i(t)]\n\\rightarrow\\frac{2\\sigma_f^2}{\\gamma}t\n$\nasymptotically for $\\gamma t\\gg 1$.\nBecause of the linear time dependence, this regime (Schmolukowski) describes\nphase diffusion with constant $D=\\frac{\\sigma_f^2}\\gamma$.\n\nAlthough it is {\\it a priori} unclear whether this model exhibits\na bifurcation, we expect that the large $\\gamma$ behavior\nreminiscent of phase diffusion will converge\nto the Kuramoto model (Eq.~\\ref{kuramoto}) with a\nfrequency distribution given by\n$g(f)=\\delta(f-\\mu_f)$ and white noise\nstrength $D=\\sigma_f^2\/\\gamma$, and thus exhibit a bifurcation at $K_c=2D$.\nHowever the small $\\gamma$ behavior is less obvious\nsince we are simultaneously concerned with long time properties.\nFor fixed $\\sigma_f$, we anticipate that synchronization should be hardest\nfor strictly static oscillators ($\\gamma=0$).\nThis case corresponds to the noiseless $D=0$ Kuramoto\nmodel with Gaussian frequency dispersion\n$g(f)=\\mathcal{N}_{\\mu_f,\\sigma_f}(f)\\equiv(\\sqrt{2\\pi}\\sigma_f)^{-1}e^{-(f-\\mu_f)^2\/2\\sigma_f^2}$, \nso that $K_c=2\\sqrt{2\/\\pi}\\,\\sigma_f$.\nAs the frequency dynamics loses stiffness (when $\\gamma$ increases),\nwe expect the synchronization threshold to be facilitated\nby the frequency drifts.\n\nWe study the infinite population model $N\\rightarrow\\infty$ by formulating\na Fokker-Planck equation for the time\ndependent joint density $p(\\phi, f, t)$.\nThe all-to-all interaction term\n\\begin{equation}\n\\label{roft}\n\\frac K N \\sum_{j=1}^N\\sin\\bigl(\\phi_j(t)-\\phi _i(t)\\bigr)\n\\,=\\,KR(t)\\sin(\\psi(t)-\\phi_i(t))\n\\end{equation}\nhas well known mean-field character and can be replaced\nfor $N\\rightarrow\\infty$\nby\n\\begin{equation}\n\\label{ergodic}\nK\\int_0^{2\\pi}\\!d\\theta\\int\\!dg\\,p(\\theta,g,t)\\sin(\\theta-\\phi_i(t))~.\n\\end{equation}\nWe obtain\n\\begin{eqnarray}\n\\label{fokker}\n\\frac{\\partial p}{\\partial t}&=&\n\\gamma\\sigma_f^2\\frac{\\partial^2p}{\\partial f^2}\n-f\\frac{\\partial p}{\\partial \\phi}\n+\\gamma(f-\\mu_f)\\frac{\\partial p}{\\partial f}+\\gamma p\\\\\n&&-K\\frac{\\partial \\bigl(c(p, \\phi)\\,p\\bigr)}{\\partial \\phi}\\nonumber~.\n\\end{eqnarray}\nThis is the known expression for an O-U process augmented by a phase\ncoupling involving $c(p,\n\\phi)=\\int_0^{2\\pi}\\!\\!d\\theta\\int\\!dg\\,p(\\theta,g,t)\\sin(\\theta-\\phi)$,\nwhich makes Eq.~\\ref{fokker} non-linear as a consequence of Eq.~\\ref{ergodic}.\n\n\\section{Stability Analysis}\n\nWe next discuss the linear stability of the incoherent\nstationary solution $p_0(\\phi, f)=\\mathcal{N}_{\\mu_f,\\sigma_f} (f)\/(2\\pi)$ in first\norder. 
For reasons that will become clear, we factorize a term \n$\\mathcal{N}^{1\/2}_{\\mu_f,\\sigma_f}(f)$\noff the perturbation and write\n$$\np(\\phi, f, t)=p_0(f)+\\mathcal{N}^{1\/2}_{\\mu_f,\\sigma_f}(f)\n\\epsilon\\bigl(\\phi-\\mu_ft,\\sigma_f^{-1}(f-\\mu_f),\\gamma t\\bigr)~,\n$$\nwhere $\\epsilon(\\tilde\\phi, \\tilde f, \\tilde t)$ is a small perturbation\nexpressed in a rotating frame using rescaled frequency and time variables.\nBy plugging this ansatz into Eq.~\\ref{fokker} we obtain the linearized problem\n$\\frac{\\partial\\epsilon}{\\partial t}=\\mathcal{L}\\epsilon+\\mathcal{O}(\\epsilon^2)$\nwhere $\\mathcal{L}$ is the linear operator\n\\begin{eqnarray*}\n\\mathcal{L}\\epsilon&=&\\frac{\\partial^2\\epsilon}{\\partial f^2}\n-\\frac{\\sigma_f}{\\gamma}f\\frac{\\partial\\epsilon}{\\partial \\phi}\n+\\left(\\frac{1}{2}-\\frac{1}{4} f^2\\right)\\epsilon\\hskip 2.5cm\\\\\n\\lefteqn{+\\frac{K}{2\\pi\\gamma}\\mathcal{N}^{1\/2}_{0,1}(f)\n\\int_0^{2\\pi}\\!\\!\\!\\!d\\theta\\!\n\\int\\!dg\\,\\mathcal{N}^{1\/2}_{0,1}(g)\\epsilon(\\theta,g,t)\\cos(\\theta-\\phi)~.}\n\\end{eqnarray*}\nDecomposing $\\epsilon$ as a Fourier series in $\\phi$, \n$\\epsilon (\\phi,f,t)\\,=\\,\\sum_{n=-\\infty}^\\infty\\epsilon_n(f,t)e^{-in\\phi}$,\nwe obtain for the coefficients $\\epsilon_n$\n\\begin{eqnarray}\n\\label{fourier}\n\\frac{\\partial\\epsilon _n}{\\partial t}\n&=&\\frac{\\partial^2\\epsilon _n}{\\partial f^2}\n+\\left(\\frac{1}{2}-\\frac{1}{4} f^2+\\frac{in\\sigma_f}{\\gamma}f\\right)\\epsilon_n\\nonumber\\\\\n&&+\\delta_{1|n|}\\frac{K}{2\\gamma}\n\\mathcal{N}^{1\/2}_{0,1}(f)\\int\\!dg\\,\\mathcal{N}^{1\/2}_{0,1}(g)\\epsilon_n(g,t)\\nonumber\\\\\n&\\equiv&\\mathcal{L}_n\\epsilon_n\n+\\delta_{1|n|}\\frac{K}{2\\gamma}\\langle\\epsilon _n,\\mathcal{N}^{1\/2}_{0,1}\\rangle\\mathcal{N}^{1\/2}_{0,1}~.\n\\end{eqnarray}\nWe notice that the first term, representing the frequency dynamics, resembles\nthe harmonic oscillator operator plus a complex linear term, which\ncan be removed by applying\nthe translation operator $U_\\theta$ defined by\n$\\bigl(U_\\theta f\\bigr)(x)=f(x-\\theta)$. We note that \n$\\mathcal{L}_n=U_{2in\\sqrt{a}}\\hat\\mathcal{L}_nU_{-2in\\sqrt{a}}$, \nwhere we have set $a=(\\sigma_f\/\\gamma)^2$ and the operator \n$\\hat\\mathcal{L}_n=\\partial_f^2+\\frac{1}{2}-n^2a-\\frac{1}{4} f^2$ is self-adjoint on\n$L^2(\\mathbb{R})$ and\nhas pure point spectrum\n$\\Sigma(\\hat\\mathcal{L}_n)=\\{\\lambda_{n\\ell}=-\\ell-n^2a\\,:\\,\\ell=0,1,2,\\dots\\}$.\nIts eigenfunctions are given in terms of the Hermite\nfunctions \\cite{reed-simon}\n\\begin{eqnarray*}\nH_0(x)&=&\\pi^{-\\frac{1}{4}}e^{-\\frac{1}{2} x^2}~\\mbox{and}~\\\\\nH_\\ell(x)&=&(2^\\ell\\ell!\\sqrt{\\pi})^{-\\frac{1}{2}}(-1)^\\ell\ne^{\\frac{1}{2} x^2}\\partial_x^\\ell e^{-x^2}~,~\\ell\\,=\\,1,2,\\dots\n\\end{eqnarray*}\nas follows:\n$$\n\\hat\\mathcal{L}_n\\Phi_\\ell\\,=\\,\\lambda_{n\\ell}\\Phi_\\ell~,\\quad\n\\Phi_\\ell(f)\\,=\\,2^{-1\/4}H_\\ell(2^{-1\/2}f)~.\n$$\nTherefore $\\{U_{2in\\sqrt{a}}\\Phi_\\ell:\\ell=0,1,2,\\dots\\}$ forms an\northonormal family which diagonalizes $\\mathcal{L}_n$. \nNotice that the largest eigenvalue for each $n$ is\n$\\lambda_{n0}=-n^2a$.\n\nLinear stability follows directly except for $|n| = 1$ and\n$K>0$. 
Indeed, \nfor $n=0$, we find $\\lambda_{00}=0$ with corresponding\neigenfunction is $\\Phi_0=\\mathcal{N}^{1\/2}_{0,1}$.\nHowever, this function lies outside the space of relevant perturbations\nbecause the normalization of $p$, \n$\\int_0^{2\\pi}d\\phi\\,\\int df\\,p(\\phi,f,t)=1$,\nrequires orthogonality of $\\mathcal{N}^{1\/2}_{0,1}$ and $\\epsilon_0(f,t)$\nthrough $\\int\\mathcal{N}^{1\/2}_{0,1}\\epsilon_0(f,t)df=0$.\nSubsequent eigenvectors have negative eigenvalues.\nFor all other $|n|\\neq 1$\nthe coupling term in Eq.~\\ref{fourier} vanishes\nand the incoherent state $p(\\phi, f, t)=p_0(f)$ \nis linearly stable as a consequence of the\nstrictly negative spectrum of $\\mathcal{L}_n$.\nThe same holds for all $n$ in the absence of coupling $K=0$.\n\\setlength{\\unitlength}{1mm}\n\\begin{figure}\n\\includegraphics[scale=0.8]{fig1}\n\\put(1,2){$\\displaystyle\\frac{\\gamma}{\\sigma_f}$}\n\\put(-75,45){$\\displaystyle\\frac{K_c}{\\sigma_f}$}\n\\caption{Behavior of $K_c$ as a function of $\\gamma$ as given by Eq.~\\ref{kcritical} with\n$a=(\\sigma_f\/\\gamma)^2$ (continuous line). \nDashed line represents the Kuramoto model with\nidentical frequencies and $K_c=2D=2\\sigma_f^2\/\\gamma$. The\n$\\gamma\\rightarrow 0$ limit reproduces the $\\gamma=0$ model with Gaussian\nfrequency dispersion and gives $K_c\/\\sigma_f=2\\sqrt{2\/\\pi}=1.5957$ (dotted \nline).}\n\\label{fig1}\n\\end{figure}\n\nFor the remaining case $n=\\pm 1$ and $K>0$,\nwe notice that the coupling term\nin Eq.~\\ref{fourier} also vanishes for all directions\northogonal to $\\mathcal{N}^{1\/2}_{0,1}$,\nleaving a one-dimensional space that could develop an instability.\nWe write the eigenvalue problem for Eq.~\\ref{fourier} implicitly as\n\\begin{eqnarray*}\n\\mathcal{L}_n\\epsilon_n\n+\\delta_{1|n|}\\frac{K}{2\\gamma}\\langle\\epsilon_n,\\mathcal{N}^{1\/2}_{0,1}\\rangle\n\\mathcal{N}^{1\/2}_{0,1}=\\lambda\\,\\epsilon_n~.\n\\end{eqnarray*}\nUsing the resolvent equation \n\\begin{eqnarray*}\n(\\lambda-\\mathcal{L}_n)^{-1}&=&(\\lambda-U_{2in\\sqrt{a}}\\hat\\mathcal{L}_nU_{-2in\\sqrt{a}})^{-1}\\\\\n&=&U_{2in\\sqrt{a}}(\\lambda-\\hat\\mathcal{L}_n)^{-1}U_{-2in\\sqrt{a}}\n\\end{eqnarray*}\nwe obtain\n\\begin{eqnarray*}\n\\epsilon_n&=&\\frac{K}{2\\gamma}\\langle\\epsilon _n,\\mathcal{N}^{1\/2}_{0,1}\\rangle\nU_{2in\\sqrt{a}}(\\lambda-\\hat\\mathcal{L}_n)^{-1}U_{-2in\\sqrt{a}}\\mathcal{N}^{1\/2}_{0,1}\\\\\n&=&\\frac{K}{2\\gamma}\\langle\\epsilon _n,\\mathcal{N}^{1\/2}_{0,1}\\rangle\n\\sum_{j=0}^\\infty\n\\frac{\\langle\\Phi_j,U_{-2in\\sqrt{a}}\\mathcal{N}^{1\/2}_{0,1}\\rangle}{\\lambda-\\lambda_{nj}}\nU_{2in\\sqrt{a}}\\Phi_j~,\n\\end{eqnarray*}\nwhere we have used the spectral decomposition\n$\\hat\\mathcal{L}_nf=\\sum_j\\lambda_j\\langle\\Phi_j,f\\rangle\\Phi_j$.\n\nTo find the critical coupling $K_c$ above which the incoherent state becomes\nlinearly unstable, we need to monitor when the\nlargest eigenvalue crosses the imaginary axis.\nAfter projecting onto $\\mathcal{N}^{1\/2}_{0,1}$, simplifying the factors \n$\\langle\\epsilon _n,\\mathcal{N}^{1\/2}_{0,1}\\rangle$ on both sides of the\nequation, and setting $\\lambda=0$ we\nfind an equation for $K_c$:\n\\begin{eqnarray}\n\\frac{2\\gamma}{K_c}&=&\n\\sum_{j=0}^\\infty\n\\frac{\\langle\\Phi_j,U_{-2i\\sqrt{a}}\\mathcal{N}^{1\/2}_{0,1}\\rangle^2}{-\\lambda_{1j}}\n\\,=\\,e^a\\sum_{j=0}^\\infty\\frac{(-a)^j}{j!(j+a)}\\nonumber\\\\\n&=&e^aa^{-a}\\!\\int_0^a\\!\\!t^{a-1}e^{-t}dt\n\\,=\\,e^aa^{-a}\\gamma(a,a)~,\n\\label{kcritical}\n\\end{eqnarray}\nwhere $\\gamma(a,x)$ is the lower 
incomplete $\\Gamma$-function.\n\nThe behavior of $K_c$ together with the Kuramoto model asymptotes for\n$\\gamma\\rightarrow\\infty$ and $\\gamma\\to0^+$ limit are shown in Fig.~\\ref{fig1}.\nIt is noticeable that we find a bifurcation for all values of $\\gamma$.\n$K_c$ strictly decreases from a finite\n$\\gamma=0$ as $\\gamma$ increases, asymptotically behaving as\n$K_c=2\\sigma_f^2\/\\gamma$. \nThe analytical result thus supports the following picture:\nfor small $\\gamma$, the dominant source of fluctuations against which\nthe coupling must work to achieve synchronization is the\n(Gaussian) frequency dispersion. As $\\gamma$ increases while $\\sigma_f$\nis kept fixed, faster frequency drifts help synchrony by preventing\nindividual oscillators with detuned frequency to stay out of tune for\ntoo long. Indeed with drifting frequencies every individual oscillators\nfluctuates around the mean frequency $\\mu_f$ with a time scale $\\gamma^{-1}$.\nIn the large $\\gamma$ regime, the effective frequency dispersion vanishes\nand the coupling force needs to synchronize noisy but otherwise identical frequency\noscillators. As predicted by the phase diffusion limit, the effective white\nnoise strength $D$ and hence $K_c$ decrease as $\\gamma^{-1}$.\n\nWe now discuss the asymptotic regimes in detail: \nthe small $\\gamma$ limit follows from\nreverting to the original variables and using the asymptotic expansion of\n$\\gamma(a,a)$ \n(using Stirling's formula and \\cite{abramowitz}:\n6.5.3, 6.5.22, and 6.5.35). We obtain in the\nlimit $\\gamma\\to0^+$\n$$\n2\\left(\\frac {K_c} \\sigma_f\\right)^{-1}\\,=\\,\n\\sqrt{\\frac{\\pi}{2}}+\n\\frac 1 3 \\frac{\\gamma}{\\sigma_f}+\\frac{\\sqrt{2\\pi}}{24}\\frac{\\gamma^2}{\\sigma_f^2}+\\mathcal{O}(\\gamma^3)~.\n$$\nThis proves that the model continuously interpolates to the noiseless ($D=0$)\nmodel and that the $\\gamma\\rightarrow 0$\nrecovers the $\\gamma=0$ transition predicted in the original\nKuramoto model at $K_c\/\\sigma_f=2\\sqrt{2\/\\pi}$.\nIn the opposite regime $\\gamma\\to\\infty$ (thus $a\\to 0$) we find\n(\\cite{abramowitz} 6.5.12, 13.1.2)\n$$\na^{-a}e^a\\gamma(a,a)\\,=\\,a^{-1}M(1,1+a,a)\\,\\sim\\,a^{-1}\\bigl(1+\\mathcal{O}(a)\\bigr)~,\n$$\nwhere $M(\\cdot,\\cdot,\\cdot)$ is the confluent hypergeometric\nfunction. 
This leads $K_c\\sim2\\sigma_f^2\/\\gamma+\\mathcal{O}(\\gamma^{-2})$\nand hence proves the convergence to the white noise model\n(Eq.~\\ref{kuramoto}) with $D=\\sigma_f^2\/\\gamma$.\n\nFinally, we mention a generalization that\nincludes a white noise source in the phase equation (as in Eq.~\\ref{kuramoto})\nin addition to the correlated frequency fluctuations.\nThis leads an additional diffusion term\n$\n-D\\frac{\\partial^2p}{\\partial \\phi^2}\n$\nin Eq.~\\ref{fokker}.\nFollowing the steps above readily extends Eq.~\\ref{kcritical} to\n$$\n\\frac{2\\gamma}{K_c}\\,=\\,\\,e^aa^{-(a+b)}\\gamma(a+b,a)~,\n$$\nwhere $b=D\/\\gamma$, with similar qualitative behavior.\nIn particular, $K_c$ asymptotes to $2(\\sigma_f^2\/\\gamma+D)$ for large\n$\\gamma$ and has finite $\\gamma\\rightarrow 0$ limit.\n\n\\section{Numerical Simulations}\n\nWe have performed numerical simulations of Eq.~\\ref{model} to explore\nthe behavior of $R(t)$ (see Eq.~\\ref{roft}) and in particular\n$R_\\infty$ in function of the reduced\ncoupling $K_r=(K-K_c)\/K_c$.\nTo verify the analytical results and study the scaling $R_\\infty=\\kappa K_r^\\beta$\nabove the bifurcation, we simulated a finite number of oscillators\nusing the exact solution for the frequency part, leading to the updates\n$f_i(t+\\!dt)=f_i(t)\\,e^{-\\gamma\\,\\!dt} + \\mu_f(1-e^{-\\gamma\\,\\!dt})+\n\\eta\\,\\sigma_f\\sqrt{1-e^{-2\\gamma\\,\\!dt}}$\nand $\\phi_i(t+\\!dt)=\\phi_i(t) + (f_i(t)+\\frac K N \\sum_j\\sin(\\phi_j-\\phi_i))\\,\\!dt$\nwhere $\\eta$ is a Gaussian random number.\nWe used Eq.~\\ref{roft} to compute $R(t)$ and transients were removed by waiting\nuntil the solutions from two different initial conditions\n$\\phi_i(t=0)=0$ and $\\phi_i(t=0)$ taken randomly converged to the same trajectory.\nThe steady state value $R_\\infty$ was subsequently estimated by averaging $R(t)$ over time.\n\nFig.~\\ref{kcfig} fully supports the analytical solution and also indicates that\nthe behavior of $K_c$ above the bifurcation depends only weakly on $\\gamma$ over the simulated range.\nTo inspect more closely whether $R_\\infty\\sim\\sqrt{K_r}$ as in the Kuramoto model,\nwe used refined spacing and larger sizes in the vicinity of $K_r=0^+$.\nAs shown in Fig.~\\ref{finer}, the simulations are compatible with an\nexponent $\\beta=0.5$, the slightly higher exponents probably reflect a\nfinite size effect.\nOn the other hand $\\kappa$ correlates negatively with $\\gamma$ which\nis visible in both Figs \\ref{kcfig} and \\ref{finer}.\n\\begin{figure}\n\\centering\n\\includegraphics[scale=0.4,angle=270]{fig2}\n\\put(-58,-1){\\includegraphics[scale=0.17,angle=270]{fig3}}\n\\put(-70,-31){$R_\\infty$}\n\\put(-39,-68){$(K-K_c)\/K_c$}\n\\put(-60,-12){\\tiny$R_\\infty$}\n\\put(-45,-31){\\tiny$1\/N$}\n\\caption{Numerical simulation of Eq.~\\ref{model}. \nEstimation of $R_{\\infty}$ was obtained using finite size\nscaling for systems of sizes $N=5000$, $10000$ and\n$20000$. Eq.~\\ref{kcritical} was used for $K_c$ to set the reduced coupling\n$K_{r}=(K-K_c)\/K_c$. Values for $\\gamma$ were $4(\\circ)$,\n$3(\\bigtriangleup)$, $2.5(+)$,\n$2(\\times)$, $1.8(\\diamond)$ and $1.6(\\bigtriangledown)$\nand $\\sigma_f=1$. In each simulation, $10^5$ time steps of size\n$dt=0.01$ were performed. We verified that the dependence in the step size was\nweak. Inset: $1\/N$ finite size\nscaling for $\\gamma=1.8$. $K_r=0$ is the solid line, smaller (resp. larger)\n$K_r$ are below (resp. 
above) $K_r=0$.\nThe extrapolated value for $1\/N=0$ is used in the main panel.}\n\\label{kcfig}\n\\end{figure}\n\\begin{figure}\n\\centering\n\\includegraphics[scale=0.32,angle=270,origin=br]{fig4}\n\\includegraphics[scale=0.21]{fig5}\n\\put(-83,29){$R_\\infty$}\n\\put(-61,-3){$(K-K_c)\/K_c$}\n\\put(-26,40){$\\kappa$}\n\\put(-11,28){$\\gamma$}\n\\put(-26,11){$\\beta$}\n\\put(-11,-2){$\\gamma$}\n\\caption{Critical behavior above the bifurcation using step $dK_r=0.01$.\nHere, $\\gamma$ is $4(\\circ)$, $3(\\bigtriangleup)$, $2(+)$,\n$1.6(\\times)$, $1.2(\\diamond)$ and $0.8(\\bigtriangledown)$.\nNotice the log-log scale to emphasize the power law. Lines are fits to\n$R_\\infty=\\kappa\\,K_r^\\beta$ Systems of sizes $N=10000$, $20000$ and\n$50000$ were simulated with the same parameters and same scaling\nprocedure as in Fig.~\\ref{kcfig}. Right panels show parameter\nestimates from the left panel.}\n\\label{finer}\n\\end{figure}\n\n\\section{Discussion}\n\nWe have extended the Kuramoto model to frequencies which can drift\nin time following Ornstein-Uhlenbeck dynamics.\nThe net effect is that the white noise source in Eq.~\\ref{kuramoto} is\nreplaced by colored noise\n(with a Cauchy distributed power spectrum), hereby\nadding a new time scale describing memory or frequency stiffness to\nthe problem.\nApart from mean field coupling among the phases which introduces a\nnon-linearity, the stochastic phase and frequency dynamics lead to a linear\nFokker-Planck operator which can be solved.\nConsequently the linear stability of the\nincoherent state can be addressed analytically using\nspectral calculus. The expression for the critical coupling above which\nmacroscopic phase coherence emerges can be resummed and expressed in\nterms of incomplete $\\Gamma$-functions. Asymptotic expansion for small\nand large $\\gamma$ shows that the full model continuously interpolates\nbetween two limits of the original Kuramoto model: one dominated by\nnoise (large $\\gamma$) and the other by the frequency dispersion\n(small $\\gamma$).\nTherefore, the coupling force must counteract different sources of\nfluctuations to induce collective synchrony in drifting frequency\noscillators, depending on the regime set by $\\gamma$.\nSpecifically, for slowly drifting (small $\\gamma$) frequencies\nthe model approaches the noiseless model (Eq.~\\ref{kuramoto} with\n$D=0$ and $g(f)=\\mathcal{N}_{\\mu_f,\\sigma_f}(f)$)\nwhere the coupling splits the population into distinct locked and\nincoherent sub-populations, depending\non the proximity of individual frequencies to the population mode.\nAs $\\gamma$ increases, (while $\\sigma_f$ is held fixed) the frequencies\nlose their stiffness which results in a reduction in the critical\ncoupling $K_c$ needed for synchrony. 
Finally, very rapidly drifting frequencies (large $\\gamma$) cancel out the frequency dispersion and\ngenerate an effective white noise source acting on the phases of\notherwise equal-frequency oscillators.\nAt the same time the locked and incoherent subgroups become indistinguishable.\nFor intermediate $\\gamma$, our numerical simulations indicate\nthat the model belongs to the same $\\beta=0.5$ universality class as the\nKuramoto model.\n\nBecause of its analytical tractability and small number of parameters, we expect this solution\nto be relevant for oscillatory systems in the presence of complex noise sources.\nSuch cases include populations of neural oscillators or biochemical oscillators\nwhere bioluminescence recordings have shown how intracellular noise sources\ngenerate frequency dispersion through drifts.\n\n\\subsection*{Acknowledgments}\nWe thank Benoit Kornmann and Ueli Schibler for initiating our interest in\nthe drifting frequency model and Olivier Hernandez for pointing out useful\nreferences. The simulations were performed on an Itanium2\ncluster from HP\/Intel at the Vital-IT facilities.\nFN acknowledges funding from the Swiss National Science Foundation\nNCCR Molecular Oncology program and an NIH administrative supplement to parent\ngrant GM54339.\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\n\n\n\\subsection{The $(\\mu,\\phi)$-$L^2$-Euler characteristic}\n\\label{subsec:The_(mu,phi)_L2-Euler_characteristic_cw-intro}\nLet $X$ be a finite connected CW-com\\-plex and let $\\mu\\colon \\pi_1(X) \\to G$ and $\\phi \\colon G \\to \\IZ$ be group\nhomomorphisms. We say that $(\\mu,\\phi)$ is an \\emph{$L^2$-acyclic Atiyah pair}\nif the $n$th $L^2$-Betti number\n$b_n^{(2)}(\\overline{X};\\caln(G))$ of the $G$-covering $\\overline{X} \\to X$ associated to\n$\\mu$ vanishes for all $n \\ge 0$, and $G$ is torsion-free and satisfies\nthe Atiyah Conjecture. (We will discuss the Atiyah Conjecture in \nSection~\\ref{sec:About_the_Atiyah_Conjecture}.) \nThen one can define,\nby twisting the cellular $\\IZ G$-chain complex with the infinite dimensional\n$G$-representation $\\phi^*\\IR[\\IZ]$, the \\emph{$(\\mu,\\phi)$-$L^2$-Euler characteristic}\n$\\chi^{(2)}(X;\\mu,\\phi)$, which is an integer. 
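As a first elementary illustration, consider the circle $X = S^1$ with $\\mu = \\id \\colon \\pi_1(S^1) \\to \\IZ$ and $\\phi = \\id_{\\IZ}$; Lemma~\\ref{lem:tori} below yields in this case\n\\[\n\\chi^{(2)}(S^1;\\id,\\id) \\,=\\, [\\IZ : \\im(\\id)] \\,=\\, 1.\n\\]\n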
We will not give the precise definition of $(\\mu,\\phi)$-$L^2$-Euler \ncharacteristic in the introduction but we refer to\nSection~\\ref{sec:Twisting_the_L2-Euler_characteristic_with_a_cocycle_in_the_first_cohomology}\nfor details and a summary of the key properties.\n\nThe $(\\mu,\\phi)$-$L^2$-Euler characteristic can be employed in many different contexts.\nFor example it is used by Funke--Kielak~\\cite{Funke-Kielak(2016)} to study descending HNN-extensions of \nfree groups and it is at least implicitly used by the authors and Tillmann~\\cite{Friedl-Lueck-Tillmann(2016)} to study one-relator groups.\n\n\n\n\\subsection{The $\\phi$-$L^2$-Euler characteristic of 3-manifolds}\n\\label{subsec:The_phi_L2-Euler_characteristic_for_universal_coverings_intro}\nIn this paper our main application of the $(\\mu,\\phi)$-$L^2$-Euler characteristic lies in the study of 3-manifolds \nto which we restrict ourselves in the remainder of the introduction.\nMore precisely, our main focus will be on the following class of $3$-manifolds.\n\n\\begin{definition}[Admissible $3$-manifold] \\label{def:admissible_3-manifold} \n A $3$-manifolds is called \\emph{admissible} if it is connected, orientable, and irreducible,\n its boundary is empty or a disjoint union of tori, and its fundamental group\n is infinite.\n\\end{definition}\n\nLet $M$ be an admissible $3$-manifold and let \n$\\phi \\colon \\pi_1(M) \\to \\IZ$ be a group homomorphism. Then \nall the conditions listed in Section~\\ref{subsec:The_(mu,phi)_L2-Euler_characteristic_cw-intro} are satisfied \nfor the triple $(X,\\id_{\\pi_1(M)},\\phi)$ and the corresponding $L^2$-Euler characteristic \n$\\chi^{(2)}(M;\\id_{\\pi_1(M)},\\phi)$ is defined. We denote by $x_M(\\phi)$ the Thurston norm of $\\phi$ \nwhich is loosely speaking defined as the minimal complexity of a surface dual to $\\phi$.\n (We recall the precise definition of the Thurston norm in Section~\\ref{subsec:Brief_review_of_the_Thurston_norm}.) \nThe following is one of our main theorems.\n\n\n\\begin{theorem}[Equality of $(\\mu,\\phi)$-$L^2$-Euler characteristic and the Thurston norm\n for universal coverings]\n \\label{the:Equality_of_(mu,phi)-L2-Euler_characteristic_and_the_Thurston_norm_universal_covering_for_universal_coverings}\n Let $M\\ne S^1\\times D^2$ be an admissible $3$-manifold.\n\n Then we get for any $\\phi\\in H^1(M;\\IZ)$\n \\[\n -\\chi^{(2)}(M;\\id_{\\pi_1(M)},\\phi) \\,\\,=\\,\\, x_M(\\phi).\n \\]\n\\end{theorem}\n\nIf $M$ is not a closed graph manifold, then\nTheorem~\\ref{the:Equality_of_(mu,phi)-L2-Euler_characteristic_and_the_Thurston_norm_universal_covering_for_universal_coverings}\nis a direct consequence of the subsequent\nTheorem~\\ref{the:Equality_of_(mu,phi)-L2-Euler_characteristic_and_the_Thurston_norm}\ntogether with the fact that in this case the fundamental groups of $M$ satisfies the\nAtiyah Conjecture, see\nTheorem~\\ref{the:Status_of_the_Atiyah_Conjecture}. If $M$ is a (closed) graph manifold\nTheorem~\\ref{the:Equality_of_(mu,phi)-L2-Euler_characteristic_and_the_Thurston_norm_universal_covering_for_universal_coverings}\nfollows from\nTheorem~\\ref{the:The_phi-L2-Euler_characteristic_and_the_Thurston_norm_for_graph_manifolds}.\n\\medskip\n\nIt is also interesting to consider group homomorphisms $\\pi_1(M)\\to G$ that are not the identity. 
\nFor example in Section~\\ref{sec:The_(mu,phi)-L2-Euler_characteristic_and_the_degree_of_higher_order_Alexander_polynomials}\nwe will see that if $G$ is a torsion-free elementary amenable group, then the\n$(\\mu,\\phi)$-$L^2$-Euler characteristic $\\chi^{(2)}(M;\\mu,\\phi)$ is basically the same as\nthe degrees of the non-commutative Alexander polynomials studied by\nCochran~\\cite{Cochran(2004)}, Harvey~\\cite{Harvey(2005)} and the first\nauthor~\\cite{Friedl(2007)}. In these three papers it was shown that the degrees of\nnon-commutative Alexander polynomials give lower bounds on the Thurston norm. The following\ntheorem can be viewed as a generalization of these results.\n\n\\begin{theorem}[The negative of the $(\\mu,\\phi)$-$L^2$-Euler characteristic is a lower bound for the Thurston norm]\n\\label{the:The_Thurston_norm_ge_the_(mu,phi)-L2-Euler_characteristic-intro}\nLet $M\\ne S^1\\times D^2$ be an admissible $3$-manifold and let $(\\mu,\\phi)$ be an $L^2$-acyclic Atiyah-pair.\n\nThen we get\n\\[\n-\\chi^{(2)}(M;\\mu,\\phi)~\\le~x_M(\\phi \\circ \\mu).\n\\]\n\\end{theorem}\n\nIn general the inequality of the above theorem is not an equality. But the following theorem shows \nthat for any `sufficiently large epimorphism', the inequality does become an equality.\n\n\\begin{theorem}[Equality of $(\\mu,\\phi)$-$L^2$-Euler characteristic and the Thurston norm]\n\\label{the:Equality_of_(mu,phi)-L2-Euler_characteristic_and_the_Thurston_norm}\nLet $M\\ne S^1\\times D^2$ be an admissible $3$-manifold which is not a closed graph manifold.\n\nThen there exists a virtually abelian group $\\Gamma$ and a factorization \n\\[\n\\pr_M \\colon \\pi_1(M) \\xrightarrow{\\alpha} \\Gamma \\xrightarrow{\\beta} H_1(M)_f := H_1(M)\/\\tors(H_1(M))\n\\]\nof the canonical projection $\\pr_M\\colon \\pi_1(M)\\to H_1(M)_f$ into epimorphisms such that the following holds:\n\n\nGiven a group $G$ satisfying the Atiyah Conjecture, a factorization of $\\alpha \\colon\n\\pi \\to \\Gamma$ into group homomorphisms $\\pi \\xrightarrow{\\mu} G \\xrightarrow{\\nu}\n\\Gamma$, and a group homomorphism $\\phi \\colon H_1(M)_f \\to \\IZ$, the pair $(\\mu, \\phi\n\\circ \\beta \\circ \\nu)$ is an $L^2$-acyclic Atiyah-pair, and we get\n\\[\n -\\chi^{(2)}(M;\\mu,\\phi \\circ \\beta \\circ \\nu)~=x_M(\\phi).\n\\]\n\\end{theorem}\n\nWith a little bit of extra effort one can use\nTheorem~\\ref{the:Equality_of_(mu,phi)-L2-Euler_characteristic_and_the_Thurston_norm} to\nshow that one can use epimorphisms onto torsion-free elementary amenable groups to detect\nthe Thurston norm. Put differently, one can show that the aforementioned non-commutative\nAlexander polynomials detect the Thurston norm. We refer to\nCorollary~\\ref{cor:existence_of_torsion-free_elementary_coverings_with_equality} for the\nprecise statement.\n\n\n\n\\subsection{Inequality of the Thurston norm}\n\\label{subsec:Inequality_of_the_THurston_norm}\n\nOne of the key motivations for developing the theory of $L^2$-Euler characteristics is the following question by \nSimon~\\cite[Problem~1.12]{Kirby(1997)}.\n\n\\begin{question}\\label{que:Simons_question}\nLet $K$ and $K'$ be two knots. If there is an epimorphism from the knot group of $K$ to\nthe knot group of $K'$, does this imply that the genus of $K$ is greater than or equal to the\ngenus of $K'$?\n\\end{question}\n\nWe propose the following conjecture. 
\n\n\\begin{conjecture}[Inequality of the Thurston norm] \\label{con:Inequality_of_the_Thurston_norm} \n Let $f \\colon M \\to N$ be a map of\n admissible $3$-manifolds which is surjective on $\\pi_1(N)$ and induces an isomorphism\n $f_* \\colon H_n(M;\\IQ) \\to H_n(N;\\IQ)$ for $n \\ge 0$.\n \n Then we get for any $\\phi \\in H^1(N;\\IR)$ that\n \\[\n x_M(f^*\\phi) \\,\\,\\ge\\,\\, x_N(\\phi).\n \\]\n\\end{conjecture}\n\n\\begin{remark}\n\\begin{enumerate}\n\\item In Section~\\ref{subsec:Brief_review_of_the_Thurston_norm} we will recall that the\n Thurston norm can be viewed as a generalization of the knot genus. In particular a proof\n of Conjecture~\\ref{con:Inequality_of_the_Thurston_norm} \n would give an affirmative answer to Simon's question.\n\\item If $M$ and $N$ are closed 3-manifolds, then the conclusion of \n Conjecture~\\ref{con:Inequality_of_the_Thurston_norm} follows from~\\cite[Corollary~6.18]{Gabai(1983)}.\n\\item The condition on the induced map on rational homology cannot be dropped. For\n example, suppose that $M=S^1\\times \\Sigma$ with $\\Sigma$ a surface of genus $g\\geq 2$\n with boundary. Let $N$ be the exterior of a torus knot. Then $\\pi_1(N)$ is generated by\n two elements, therefore there exists an epimorphism $f_*\\colon \\pi_1(S^1\\times\n \\Sigma)\\to \\pi_1(N)$ which factors through the projection $\\pi_1(S^1\\times \\Sigma)\\to\n \\pi_1(\\Sigma)$. If $\\phi$ is a generator of $H^1(N;\\IZ)$, then $x_N(\\phi)\\ne 0$ but it\n is straightforward to see that $x_M(f^*\\phi)=0$. We are grateful to Yi Liu for pointing\n out this example.\n\\end{enumerate}\n\\end{remark}\n\nBefore we state our main contribution to\nConjecture~\\ref{con:Inequality_of_the_Thurston_norm} we need to recall one more\ndefinition. A group $G$ is called \\emph{locally indicable} if any finitely generated\nnon-trivial subgroup of $G$ admits an epimorphism onto $\\IZ$. For example\nHowie~\\cite{Howie(1982)} showed that the fundamental group of any admissible 3-manifold\nwith non-trivial boundary is locally indicable.\n\nOur main result is Theorem~\\ref{the:Inequality_of_the_Thurston_norm-intro}. \n\n\n\n\\begin{theorem}[Inequality of the Thurston norm] \\label{the:Inequality_of_the_Thurston_norm-intro}\nLet $f \\colon M \\to N$ be a map of admissible $3$-manifolds which is surjective on $\\pi_1(N)$ \nand induces an isomorphism $f_* \\colon H_n(M;\\IQ) \\to H_n(N;\\IQ)$ for $n \\ge 0$. Suppose that \n$\\pi_1(N)$ is residually locally indicable elementary amenable. \n\nThen we get for any $\\phi\\in H^1(N;\\IR)$ that\n \\[\n x_M(f^*\\phi ) \\,\\,\\ge\\,\\, x_N(\\phi).\n \\]\n\\end{theorem}\n\nBy Lemma~\\ref{lem:fibered-manifolds-acapl} the fundamental group of any fibered 3-manifold\nis residually locally indicable elementary amenable. Thus we have proved\nConjecture~\\ref{con:Inequality_of_the_Thurston_norm} in particular in the case that $N$ is\nfibered. The conclusion of Conjecture~\\ref{con:Inequality_of_the_Thurston_norm} can be\nproved relatively easily for \\emph{fibered classes in $H^1(N;\\IR)$}, but it seems to us\nthat if $N$ is fibered, then there is no immediate reason why the inequality should hold\nfor \\emph{non-fibered} classes in $H^1(N;\\IR)$.\n\nWe propose the following\n\n\\begin{conjecture} \\label{con:res_loc_ind_elem_ama_always}\nThe fundamental group of any admissible $3$-manifold $M$ with $b_1(M) \\ge 1$\nis residually locally indicable elementary amenable. 
\n\\end{conjecture}\n\nA proof of Conjecture~\\ref{con:res_loc_ind_elem_ama_always} together with \nTheorem~\\ref{the:Inequality_of_the_Thurston_norm-intro} implies \nConjecture~\\ref {con:Inequality_of_the_Thurston_norm} and in particular an affirmative answer to Simon's \nQuestion~\\ref{que:Simons_question}.\n\n\n\n\\subsection{The $(\\mu,\\phi)$-$L^2$-Euler characteristic and the degree of the $L^2$-torsion \nfunction}\n\\label{subsec:The_(mu,phi)-L2-Euler_characteristic_and_the_degree_of_the_L2-torsion_function_intro}\n\nWe briefly discuss a relation of the $(\\mu,\\phi)$-$L^2$-Euler characteristic to the degree of the $L^2$-torsion \nfunction in Section~\\ref{sec:The_degree_of_the_L2-torsion_function}.\n\n\n\n\n\\subsection{Methods of proof}\n\\label{subsec:Methods_of_proof}\n\nOne key ingredient in this paper is to replace the Ore localization of a group ring $\\IZ G$,\nwhich is known to exist for torsion-free elementary amenable groups and is definitely not available,\nif $G$ contains a non-abelian free subgroup, by the division closure $\\cald(G)$ of\n$\\IZ G$ in the algebra $\\calu(G)$ of operators affiliated to the group von Neumann algebra $\\caln(G)$.\nThis is a well-defined skew field containing $\\IZ G$ if and only if $G$ is torsion-free and satisfies\nthe Atiyah Conjecture with rational coefficients. This is known to be true in many interesting cases. \n\n\nWe will also take advantage of the recent proof by Agol and others of the Virtual Fibering Conjecture.\n\n\n\n\n\n\n\n\n\n\\subsection*{Acknowledgments.}\nThe first author gratefully acknowledges the support provided by the SFB 1085 ``Higher\nInvariants'' at the University of Regensburg, funded by the Deutsche\nForschungsgemeinschaft {DFG}. The paper is financially supported by the Leibniz-Preis of the second author granted by the {DFG}\nand the ERC Advanced Grant ``KL2MG-interactions'' (no. 662400) of the second author granted by the European Research Council.\n\n\n\n\\tableofcontents\n\n\n\n\n\\typeout{------------ Section 1: Brief review of the Thurston norm -------------}\n\n\n\n\\section{Brief review of the Thurston norm}\n\\label{subsec:Brief_review_of_the_Thurston_norm}\n\nRecall the definition in~\\cite{Thurston(1986norm)} of the \\emph{Thurston norm} $x_M(\\phi)$\nof a compact connected orientable $3$-manifold $M$ with empty or non-empty boundary \nand an element $\\phi \\in H^1(M;\\IZ)=\\Hom(\\pi_1(M),\\IZ)$\n\\[\nx_M(\\phi)\\,\\,:=\\,\\,\\min \\{ \\chi_-(F)\\, | \\, F \\subset M \\mbox{ properly embedded surface dual to\n}\\phi\\},\n\\]\nwhere, given a surface $F$ with connected components $F_1, F_2, \\ldots , F_k$, we define\n\\[\\chi_-(F)\\,\\,:=\\,\\,\\smsum{i=1}{k} \\max\\{-\\chi(F_i),0\\}.\\]\n\n\nThurston~\\cite{Thurston(1986norm)} showed that this defines a seminorm on $H^1(M;\\IZ)$\nwhich can be extended to a seminorm on $H^1(M;\\IR)$ which we denote by $x_M$ again.\nIn particular we get for $r \\in \\IR$ and $\\phi \\in H^1(M;\\IR)$\n\\begin{eqnarray}\n x_M(r \\cdot \\phi) \n & = & \n |r| \\cdot x_M(\\phi).\n \\label{scaling_Thurston_norm}\n\\end{eqnarray}\n\n\nIf $K\\subset S^3$ is a knot, then we denote by $\\nu K$ an open tubular neighborhood of $K$\nand we refer to $X_K=S^3\\setminus \\nu K$ as the exterior of $K$. We refer to the minimal\ngenus of a Seifert surface of $K$ as the \\emph{genus $g(K)$ of $K$}. 
We have\n$H^1(X_K;\\IZ)\\cong \\IZ$ and an elementary exercise shows that for any generator $\\phi$ of\n$H^1(X_K;\\IZ)\\cong \\IZ$ we have\n\\begin{equation} x_{X_K}(\\phi)\\,\\,=\\,\\,\\operatorname{max}\\{\n 2g(K)-1,0\\}.\\label{equ:genus-thurston-norm} \\end{equation}\n\n\n\nIf $p \\colon M' \\to M$ is a finite covering with $n$ sheets, then Gabai~\\cite[Corollary~6.13]{Gabai(1983)} showed\n\\begin{eqnarray}\n x_{M'}(p^*\\phi) \n & = & \n n \\cdot x_M(\\phi).\n \\label{finite_coverings_Thurston_norm}\n\\end{eqnarray}\nIf $F \\to M \\xrightarrow{p} S^1$ is a fiber bundle for a compact connected orientable\n$3$-manifold $M$ and compact surface $F$ and $\\phi \\in H^1(M;\\IZ)$ is given by $H_1(p)\n\\colon H_1(M) \\to H_1(S^1)$, then\n\\begin{eqnarray}\n x_M(\\phi) & = & \n \\begin{cases}\n - \\chi(F) & \\text{if} \\;\\chi(F) \\le 0;\n \\\\\n 0 & \\text{if} \\;\\chi(F) \\ge 0.\n \\end{cases}\n \\label{fiber_bundles_Thurston_norm}\n\\end{eqnarray}\n\n\n\n\n\n\\typeout{-- Section 2: Twisting the $L^2$-Euler characteristic with a cocycle in the first cohomology -----------}\n\n\n\\section{Twisting the $L^2$-Euler characteristic with a cocycle in the first cohomology}\n\\label{sec:Twisting_the_L2-Euler_characteristic_with_a_cocycle_in_the_first_cohomology}\n\nIn this section we introduce our main invariant on the $L^2$-side, namely certain\nvariations of the $L^2$-Euler characteristic which are obtained in the special case of the\nuniversal covering $\\widetilde{X} \\to X$ with by twisting with an element $\\phi \\in H^1(X;\\IZ)$. More generally,\nwe will consider $G$-$CW$-complexes and twist with a group homomorphism $\\phi \\colon G \\to \\IZ$.\n\n\n\n\n\n \n\n\n\\subsection{Review of the $L^2$-Euler characteristic}\n\\label{subsec:Review_of_the_L2-Euler_characteristic}\n\nLet $G$ be a group. Denote by $\\caln(G)$ the group von Neumann algebra which can be\nidentified with the algebra $B(L^2(G),L^2(G)^G$ of bounded left $G$-equivariant operators\n$L^2(G) \\to L^2(G)$. Let $C_*$ be a finitely generated based free left $\\IZ G$-chain\ncomplex. Then we can consider the chain complex of finitely generated Hilbert\n$\\caln(G)$-chain complexes $L^2(G) \\otimes_{\\IZ G} C_*$. Its $L^2$-Betti numbers\n$b_n^{(2)}(L^2(G) \\otimes_{\\IZ G} C_*)$ are defined as the von Neumann dimension\nof its $L^2$-homology, see~\\cite[Section~1.1]{Lueck(2002)}.\n\nOne can also work entirely algebraically by applying $\\caln(G) \\otimes_{\\IZ G} -$ which\nyields a chain complex $\\caln(G) \\otimes_{\\IZ G} C_*$ of $\\caln(G)$-modules, where we\nconsider $\\caln(G)$ as a ring and forget the topology. There is a dimension function\ndefined for all $\\caln(G)$-modules, see~\\cite[Section~6.1]{Lueck(2002)}. So one gets\nanother definition of $L^2$-Betti numbers by taking this dimension of the\n$\\caln(G)$-module $H_n(\\caln(G) \\otimes_{\\IZ G} C_*)$. These two definitions agree\nby~\\cite[Section~6.2]{Lueck(2002)}.\n\nThe advantage of the algebraic approach is that it works and often still gives finite\n$L^2$-Betti numbers also in the case where we drop the condition that $C_*$ is a finitely\ngenerated free $\\IZ G$-chain complex and consider any $\\IZ G$-chain complex $C_*$. This is\nexplained in detail in~\\cite[Chapter~6]{Lueck(2002)}. 
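As a simple and standard illustration of this algebraic approach, take $G = \\IZ$ with generator $t$ and let $C_*$ be the cellular $\\IZ[\\IZ]$-chain complex $0 \\to \\IZ[\\IZ] \\xrightarrow{t-1} \\IZ[\\IZ] \\to 0$ of the universal covering $\\IR$ of $S^1$. Under the standard identification of $\\caln(\\IZ)$ with $L^\\infty(S^1)$ the induced differential becomes multiplication by $z-1$, which is injective; by additivity of the von Neumann dimension its cokernel has dimension zero as well, so all $L^2$-Betti numbers of this chain complex vanish.\n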
We recall that for any\nchain complex of free left $\\IZ G$-chain modules $C_*$ we can define its $n$th $L^2$-Betti number a\n\\begin{eqnarray}\nb_n^{(2)}(\\caln(G) \\otimes_{\\IZ G} C_*)\n& := &\n \\dim_{\\caln(G)}\\bigl(H_n(\\caln(G) \\otimes_{\\IZ G} C_*)\\bigr)\n \\quad \\in [0,\\infty].\n\\label{b_n(2)(caln(G)_otimes_ZGC_ast)}\n\\end{eqnarray}\nIn particular given a $G$-$CW$-complex $X$ we can define its $n$th $L^2$-Betti number as \n\\begin{eqnarray}\n b_n^{(2)}(X;\\caln(G)) & := & \n \\dim_{\\caln(G)}\\bigl(H_n(\\caln(G) \\otimes_{\\IZ G} C_*(X))\\bigr)\n \\quad \\in [0,\\infty].\n \\label{b_n(2)(X;caln(G))}\n\\end{eqnarray}\n\nWe leave the superscript in the notation $b_n^{(2)}(\\caln(G) \\otimes_{\\IZ G} C_*)$ and $b_n^{(2)}(X;\\caln(G))$, although the definition is\npurely algebraic, in order to remind the reader that it is related to the classical notion\nof $L^2$-Betti numbers.\n\n\\begin{definition}[$L^2$-Euler characteristic]\n \\label{def:L2-Euler_characteristic}\n Let $X$ be a $G$-$CW$-complex. Define\n \\begin{eqnarray*}\n h^{(2)}(X;\\caln(G))\n & := & \\lmsum{p \\ge 0}{} b_p^{(2)}(X;\\caln(G)) \\in [0,\\infty];\n \\\\\n \\chi^{(2)}(X;\\caln(G))\n & := & \n \\lmsum{p \\ge 0}{} (-1)^p \\cdot b_p^{(2)}(X;\\caln(G))\n \\in \\IR, \\quad \\mbox{if}\\; h^{(2)}(X;\\caln(G)) < \\infty.\n \\\\\n \\end{eqnarray*}\n We call $\\chi^{(2)}(X;\\caln(G))$ the \\emph{$L^2$-Euler characteristic} of $X$.\n\\end{definition}\n\nThe condition $h^{(2)}(X;\\caln(G)) < \\infty$ ensures that the sum which appears in the\ndefinition of $\\chi^{(2)}(X;\\caln(G))$ converges absolutely. In the sequel we assume that\nthe reader is familiar with the notion of the $L^2$-Euler characteristic and its basic\nproperties, as presented in~\\cite[Section~6.6.1]{Lueck(2002)}. Another approach to\n$L^2$-Betti numbers for not necessarily finite $G$-$CW$-complexes is given by\nCheeger-Gromov~\\cite{Cheeger-Gromov(1986)}.\n\n\n\n\n\n\n\\subsection{The $\\phi$-twisted $L^2$-Euler characteristic}\n\\label{subsec:The_phi-twisted-L2-Euler_characteristic}\n\nWe will be interested in the following version of an $L^2$-Euler characteristic.\n\n\n\\begin{definition}[$\\phi$-twisted $L^2$-Betti number and $L^2$-Euler characteristic]\n \\label{def:phi-twisted_L2_betti_number_and_L2-Euler_characteristic}\n Let $X$ be a $G$-$CW$-complex. Let $\\phi \\colon G \\to \\IZ$ be a group homomorphism. Let\n $\\phi^*\\IZ[\\IZ]$ be the $\\IZ G$-module obtained from $\\IZ[\\IZ]$ regarded as module over\n itself by restriction with $\\phi$. If $C_*(X)$ is the cellular $\\IZ G$-chain complex,\n denote by $C_*(X) \\otimes_{\\IZ} \\phi^*\\IZ[\\IZ]$ the $\\IZ G$-chain complex obtained by\n the diagonal $G$-action. Define\n \\begin{eqnarray*}\n b_n^{(2)}(X;\\caln(G),\\phi) & := &\n \\dim_{\\caln(G)}\\bigl(H_n(\\caln(G) \\otimes_{\\IZ G}(C_*(X) \\otimes_{\\IZ} \\phi^*\\IZ[\\IZ]))\\bigr)\n \\in [0,\\infty];\n \\\\\n h^{(2)}(X;\\caln(G),\\phi)\n & := & \\lmsum{p \\ge 0}{} b_p^{(2)}(X;\\caln(G),\\phi) \\in [0,\\infty];\n \\\\\n \\chi^{(2)}(X;\\caln(G);\\phi)\n & := & \n \\lmsum{p \\ge 0}{} (-1)^p \\cdot b_p^{(2)}(X;\\caln(G),\\phi)\n \\in \\IR, \\hspace{2mm} \\mbox{if} \\; h^{(2)}(X;\\caln(G),\\phi) < \\infty.\n \\\\\n \\end{eqnarray*}\n We say that $X$ is \\emph{$\\phi$-$L^2$-finite} if $h^{(2)}(X;\\caln(G),\\phi) < \\infty$ holds. 
If this\n the case, we call the real number $\\chi^{(2)}(X;\\caln(G),\\phi)$ the \n \\emph{$\\phi$-twisted $L^2$-Euler characteristic} of $X$.\n \\end{definition}\n\nNotice that so far we are not requiring that the $G$-$CW$-complex $X$ is free or finite. \n\n\n\n\n\nWe collect the basic properties of this invariant.\n\n\\begin{theorem}[Basic properties of the $\\phi$-twisted $L^2$-Euler characteristic]\n \\label{the:Basic_properties_of_the_phi_L2-Euler_characteristic}\n Let $X$ be a $G$-$CW$-complex. Let $\\phi \\colon G \\to \\IZ$ be a group homomorphism.\n \n \\begin{enumerate}[font=\\normalfont]\n\n \\item \\label{the:Basic_properties_of_the_phi_L2-Euler_characteristic:homotopy_invariance}\n \\emph{$G$-homotopy invariance}\\\\\n Let $X$ and $Y$ be $G$-$CW$-complexes which are $G$-homotopy equivalent.\nThen $X$ is $\\phi$-$L^2$-finite if and only if $Y$ is $\\phi$-$L^2$-finite, and in this case\n we get\n \\[\n \\chi^{(2)}(X;\\caln(G),\\phi) = \\chi^{(2)}(Y;\\caln(G),\\phi);\n \\]\n\n \\item \\label{the:Basic_properties_of_the_phi_L2-Euler_characteristic:sum_formula}\n \\emph{Sum formula}\\\\\n Consider a $G$-pushout of $G$-$CW$-complexes\n \\[\n \\xymatrix@C1.2cm@R0.5cm{ X_0 \\ar[r] \\ar[d] & X_1 \\ar[d]\n \\\\\n X_2 \\ar[r]& X }\n \\]\n where the upper horizontal arrow is cellular, the left vertical arrow is an inclusion\n of $G$-$CW$-complexes and $X$ has the obvious $G$-$CW$-structure coming from the ones\n on $X_0$, $X_1$ and $X_2$. Suppose that $X_0$, $X_1$ and $X_2$ are $\\phi$-$L^2$-finite.\n Then $X$ is $\\phi$-$L^2$-finite and we get\n \\[\n \\chi^{(2)}(X;\\caln(G),\\phi) = \\chi^{(2)}(X_1;\\caln(G),\\phi) + \\chi^{(2)}(X_2;\\caln(G),\\phi) -\n \\chi^{(2)}(X_0;\\caln(G),\\phi);\n \\]\n\n \n\n \\item \\label{the:Basic_properties_of_the_phi_L2-Euler_characteristic:induction}\n \\emph{Induction}\\\\\n Let $i \\colon H \\to G$ be the inclusion of a subgroup of $G$. \n Then $X$ is $(\\phi \\circ i)$-$L^2$-finite if and only if $G \\times_HX$ is\n $\\phi$-$L^2$-finite. If this is the case, we get\n \\[\n \\chi^{(2)}(G \\times_H X;\\caln(G),\\phi) = \\chi^{(2)}(X;\\caln(H),\\phi \\circ i);\n \\]\n \n \n \n \n\n \\item \\label{the:Basic_properties_of_the_phi_L2-Euler_characteristic:restriction}\n \\emph{Restriction}\\\\\n Let $i\\colon H \\to G$ be the inclusion of a subgroup $H$ of $G$ with $[G:H] < \\infty$.\n Let $X$ be a $G$-$CW$-complex. Denote by $i^*X$ the $H$-$CW$-complex obtained from\n $X$ by restriction with $i$. \n Then $i^* X$ is $\\phi \\circ i$-$L^2$-finite if and only if $X$ is $\\phi$-$L^2$-finite, and in\n this case we get\n \\[\n \\chi^{(2)}(i^*X;\\caln(H),\\phi \\circ i) = [G:H]\\cdot \\chi^{(2)}(X;\\caln(G),\\phi);\n \\]\n\n \\item \\label{the:Basic_properties_of_the_phi_L2-Euler_characteristic:scaling_phi} \\emph{Scaling $\\phi$}\\\\\n We get for every integer $k \\ge 1$ that $X$ is $\\phi$-$L^2$-finite if and only if $X$\n is $(k \\cdot \\phi)$-$L^2$-finite, and in this case we get\n \\[\n \\chi^{(2)}(X;\\caln(G),k \\cdot \\phi) = k\\cdot \\chi^{(2)}(X;\\caln(G),\\phi);\n \\]\n\n \\item \\label{the:Basic_properties_of_the_phi_L2-Euler_characteristic:trivial_phi} \\emph{Trivial $\\phi$}\\\\\n Suppose that $\\phi$ is trivial. Then $X$ is $\\phi$-$L^2$-finite if and only if we have \n $b_n^{(2)}(X;\\caln(G)) = 0$ for all $n \\ge 0$. 
If this is the case, then\n \\[\n \\chi^{(2)}(X;\\caln(G),\\phi) = 0.\n \\]\n\n \n \\end{enumerate}\n\\end{theorem}\n\\begin{proof}~\\eqref{the:Basic_properties_of_the_phi_L2-Euler_characteristic:homotopy_invariance}\n This follows from the homotopy invariance of $L^2$-Betti numbers.\n \\\\[1mm]~\\eqref{the:Basic_properties_of_the_phi_L2-Euler_characteristic:sum_formula} The\n proof is analogous to the one of~\\cite[Theorem~6.80~(2) on page~277]{Lueck(2002)}, just\n replace the short split exact sequence of $\\IZ G$-chain complexes $0 \\to C_*(X_0) \\to\n C_*(X_1) \\oplus C_*(X_2) \\to C_*(X) \\to 0$ by the induced short split exact\n sequence of $\\IZ G$-chain complexes\n \\begin{multline*}\n 0 \\to C_*(X_0) \\otimes_{\\IZ} \\phi^*\\IZ[\\IZ] \n \\to C_*(X_1) \\otimes_{\\IZ} \\phi^*\\IZ[\\IZ] \\oplus C_*(X_2) \\otimes_{\\IZ} \\phi^*\\IZ[\\IZ] \n \\\\\n \\to C_*(X) \\otimes_{\\IZ} \\phi^*\\IZ[\\IZ] \\to 0.\n \\end{multline*}\n \\\\[1mm]~\\eqref{the:Basic_properties_of_the_phi_L2-Euler_characteristic:induction} The\n proof is analogous to the one of~\\cite[Theorem~6.80~(8) on page~279]{Lueck(2002)} using\n the isomorphism of $\\IZ G$-chain complexes\n \\begin{multline*}\n C_*(G \\times_H X) \\otimes_{\\IZ} \\phi^*\\IZ[\\IZ] \\cong \\bigl(\\IZ G \\otimes_{\\IZ H} C_*(X)\\bigr) \\otimes_{\\IZ} \\phi^*\\IZ[\\IZ])\n \\\\\n \\cong \\IZ G \\otimes_{\\IZ H}\\bigl(C_*(X) \\otimes_{\\IZ} (\\phi \\circ i)^*\\IZ[\\IZ]\\bigr),\n \\end{multline*}\n where the second isomorphism is given by $(g \\otimes u) \\otimes v \\mapsto g \\otimes u \\otimes g^{-1}v$.\n \\\\[1mm]~\\eqref{the:Basic_properties_of_the_phi_L2-Euler_characteristic:restriction} The\n proof is analogous to the one of~\\cite[Theorem~6.80~(7) on page~279]{Lueck(2002)} using\n the obvious identification of $\\IZ H$-chain complexes $i^*\\bigl(C_*(X) \\otimes_{\\IZ} \\phi^*\\IZ[\\IZ]\\bigr)\n = C_*(i^*X) \\otimes_{\\IZ} i^*\\phi^*\\IZ[\\IZ]$.\n \\\\[1mm]~\\eqref{the:Basic_properties_of_the_phi_L2-Euler_characteristic:scaling_phi}\n Since there is an obvious isomorphism of $\\IZ G$-modules\n $(k \\cdot \\phi)^* \\IZ[\\IZ] \\cong \\bigoplus_{i = 1}^k \\phi^* \\IZ[\\IZ]$,\n we get\n\\begin{multline*}\nb_n^{(2)}\\bigl(\\caln(G) \\otimes_{\\IZ G} (C_*(X) \\otimes_{\\IZ} (k \\cdot \\phi)^*\\IZ[\\IZ]\\bigr)\n\\\\\n= k \\cdot b_n^{(2)}\\bigl(\\caln(G) \\otimes_{\\IZ G} (C_*(X) \\otimes_{\\IZ} \\phi^* \\IZ[\\IZ])\\bigr).\n\\end{multline*}\n\\\\[1mm]~\\eqref{the:Basic_properties_of_the_phi_L2-Euler_characteristic:trivial_phi} \nSince the triviality of $\\phi$ implies\nthat $C_*(X) \\otimes_{\\IZ} \\phi^*\\IZ[\\IZ]$ is $\\IZ G$-isomorphic to $\\bigoplus_{\\IZ} C_*(X)$, we get\n\\begin{eqnarray*}\nb_n^{(2)}\\bigl(\\caln(G) \\otimes_{\\IZ G} (C_*(X) \\otimes_{\\IZ} \\phi^*\\IZ[\\IZ]\\bigr)\n = \n\\begin{cases}\n0 & \\text{if} \\; b_n^{(2)}\\bigl(\\caln(G) \\otimes_{\\IZ G} C_*(X)\\bigr) = 0;\n\\\\\n\\infty & \\text{otherwise.}\n\\end{cases}\n\\end{eqnarray*}\nThis finishes the proof of Theorem~\\ref{the:Basic_properties_of_the_phi_L2-Euler_characteristic}.\n \\end{proof}\n\n\n \nWe can interpret the $\\phi$-twisted $L^2$-Euler characteristic also as an $L^2$-Euler characteristic \nfor surjective $\\phi$ as follows.\n\n\\begin{lemma} \\label{lem:phi-twisted_as_ordinary_L2_Euler_characteristic}\n Let $X$ be a $G$-$CW$-complex. Let $\\phi \\colon G \\to \\IZ$ be a surjective group homomorphism.\n Denote by $K$ the kernel of $\\phi$ and by $i \\colon K \\to G$ the inclusion. 
\n \n Then $X$ is $\\phi$-$L^2$-finite if and only if $h_n^{(2)}(i^*X;\\caln(K)) < \\infty$\n hold. If this is the case, then\n \\[\n \\chi^{(2)}(X;\\caln(G),\\phi)\\,\\, =\\,\\, \\chi^{(2)}(i^*X;\\caln(K)).\n \\]\n\n \n\\end{lemma}\n\\begin{proof} \n\nWe have the isomorphism of $\\IZ G$-chain complexes\n\\[\n\\IZ G \\otimes_{\\IZ K} i^*C_*(X) \\xrightarrow{\\cong} C_*(X) \\otimes_{\\IZ} \\phi^*\\IZ[\\IZ],\n\\quad g \\otimes x \\mapsto gx \\otimes \\phi(g).\n\\]\nThe inverse sends $y\\otimes q$ to $g \\otimes g^{-1} y$ for any choice of $g \\in\n\\phi^{-1}(q)$. Since $\\caln(G)$ is flat as an $\\caln(K)$-module\nby~\\cite[Theorem~6.29~(1) on page~253]{Lueck(2002)}, we obtain a sequence of\nobvious isomorphisms of $\\caln(G)$-modules\n\\[\n\\begin{array}{ll}\n\\phantom{\\cong }\\caln(G) \\otimes_{\\caln(K)} H_n(\\caln(K) \\otimes_{\\IZ K} C_*(i^*X))\n\\hspace{-0.3cm}& \\cong\n\\caln(G) \\otimes_{\\caln(K)} H_n(\\caln(K) \\otimes_{\\IZ K} i^*C_*(X))\n\\\\\n \\cong \nH_n\\bigl(\\caln(G) \\otimes_{\\caln(K)} \\caln(K) \\otimes_{\\IZ K} i^*C_*(X)\\bigr)\n\n\\hspace{-0.3cm}& \\cong\nH_n\\bigl(\\caln(G) \\otimes_{\\IZ K} i^*C_*(X)\\bigr)\n\\\\\n \\cong \nH_n\\bigl(\\caln(G) \\otimes_{\\IZ G} \\IZ G \\otimes_{\\IZ K} i^*C_*(X))\\bigr)\n\n\\hspace{-0.3cm}& \\cong \nH_n\\bigl(\\caln(G) \\otimes_{\\IZ G} (C_*(X) \\otimes_{\\IZ} \\phi^*\\IZ[\\IZ])\\bigr).\n\\end{array}\\]\nSince $\\dim_{\\caln(G)}(\\caln(G) \\otimes_{\\caln(K)} M) = \\dim_{\\caln(K)}(M)$ holds for\nevery $\\caln(K)$-module $M$ by~\\cite[Theorem~6.29~(2) on page~253]{Lueck(2002)}, we\nconclude for every $n \\ge 0$\n\\[\nb_n^{(2)}\\bigl(\\caln(K) \\otimes_{\\IZ K} C_*(i^*X);\\caln(K)\\bigr)\n=\nb_n^{(2)}\\bigl(\\caln(G) \\otimes_{\\IZ G} (C_*(X) \\otimes_{\\IZ} \\phi^*\\IZ[\\IZ]);\\caln(G)\\bigr).\n\\]\n\\end{proof}\n\n\n\n\n\n\n\n\n\\subsection{The $(\\mu,\\phi)$-$L^2$-Euler characteristic}\n\\label{subsec:The_(mu,phi)-L2-Euler_characteristic}\n\n We will be interested in this paper mainly in the case, where the $G$-$CW$-complex $X$ is\nfree. If we put $Y = X\/G$, then $X$ is the disjoint union of the preimages of the\ncomponents of $Y$. Therefore it suffices to study a connected $CW$-complex $Y$ and\n$G$-coverings $\\overline{Y} \\to Y$. Any such $G$-covering is obtained from the universal\ncovering $\\widetilde{Y} \\to Y$ and a group homomorphism $\\mu \\colon \\pi = \\pi_1(Y) \\to G$\nas the projection $G \\times_{\\mu} \\widetilde{Y} \\to Y$. Therefore we introduce the\nfollowing notation:\n\n \\begin{definition}[$(\\mu,\\phi)$-$L^2$-Euler characteristic]\n \\label{def:mu-phi-L2-Euler_characteristic}\n Let $X$ be a connected $CW$-complex. Let $\\mu \\colon \\pi_1(X) \\to G$ and $\\phi \\colon G \\to \\IZ$ be a group homomorphisms.\n Let $\\overline{X} \\to X$ be the $G$-covering associated to $\\mu$. We call $X$ \\emph{$(\\mu,\\phi)$-$L^2$-finite}\n if $\\overline{X}$ is $\\phi$-$L^2$-finite, and in this case we define the \\emph{$(\\mu,\\phi)$-$L^2$-Euler characteristic}\n $\\chi^{(2)}(X;\\mu,\\phi)$ to be $\\chi^{(2)}(\\overline{X};\\caln(G);\\phi)$, see \n Definition~\\ref{def:phi-twisted_L2_betti_number_and_L2-Euler_characteristic}.\n \\end{definition}\n\n\n The next lemma essentially reduces the general case $(\\mu,\\phi)$ to the special case,\n where $\\mu$ and $\\phi$ are surjective or $\\phi \\circ \\mu$ is trivial.\n\n\n \\begin{lemma} \\label{lem:reduction_to_surjective_mu} Let $X$ be a connected\n $CW$-complex. Let $\\mu \\colon \\pi_1(X) \\to G$ and $\\phi \\colon G \\to \\IZ$ be group\n homomorphisms. 
Let $G'$ be the image of $\\mu$. Let $\\mu' \\colon \\pi_1(X) \\to G'$ be the\n epimorphism induced by $\\mu$ and let $\\phi' \\colon G' \\to \\IZ$ be obtained by\n restricting $\\phi$ to $G'$.\n\n \\begin{enumerate}[font=\\normalfont]\n\n \\item \\label{lem:reduction_to_surjective_mu:mu'} \n Then $X$ is $(\\mu,\\phi)$-$L^2$-finite if and only if $X$ is $(\\mu',\\phi')$-$L^2$-finite.\n If this is the case, we get \n \\[\\chi^{(2)}(X;\\mu,\\phi)\\,\\, =\\,\\, \\chi^{(2)}(X;\\mu',\\phi');\\]\n\n\n \\item \\label{lem:reduction_to_surjective_mu:mu_circ_phi_is_not_trivial} \n\n Suppose that $\\mu \\circ \\phi \\not= 0$. Let\n $k\\ge 1 $ be the natural number such that the image of $\\phi'$ is $k \\cdot \\IZ$ and let $\\phi''\n \\colon G' \\to \\IZ$ be the epimorphism uniquely determined by $k \\cdot \\phi'' = \\phi'$. \n Then $X$ is $(\\mu,\\phi)$-$L^2$-finite, if and only if $X$ is\n $(\\mu',\\phi'')$-$L^2$-finite. If this is the case, we get\n \\[\n \\chi^{(2)}(X;\\mu,\\phi) \\,\\,=\\,\\, \\smfrac{1}{k} \\cdot \\chi^{(2)}(X;\\mu',\\phi'');\n \\]\n \\item \\label{lem:reduction_to_surjective_mu:mu_circ_phi_is_trivial} \n Suppose that $\\phi \\circ \\mu = 0$. Then\n $X$ is $(\\mu,\\phi)$-$L^2$-finite if and only if $b_n^{(2)}(\\overline{X};\\caln(G))$ vanishes \n for the $G$-covering $\\overline{X}$ associated to\n $\\mu$ and every $n \\ge 0$. If this is the case, then\n \\[\\chi^{(2)}(X;\\mu,\\phi)\\,\\, =\\,\\, 0.\n \\]\n \\end{enumerate}\n \\end{lemma}\n \n \\begin{proof} The first statement follows from Theorem~\\ref{the:Basic_properties_of_the_phi_L2-Euler_characteristic}.\n The second and third statement follow from (1) and Theorem~\\ref{the:Basic_properties_of_the_phi_L2-Euler_characteristic} (5) and (6).\n\\end{proof}\n\n\n\n\\begin{example}[Mapping torus] \\label{exa_mapping_torus} Let $Y$ be a connected finite\n $CW$-complex and $f \\colon Y \\to Y$ be a self-map. Let $T_f$ be its\n mapping torus. Consider any factorization $\\pi_1(T_f) \\xrightarrow{\\mu} G\n \\xrightarrow{\\phi} \\IZ$ of the epimorphism $\\pi_1(T_f) \\to \\pi_1(S^1) = \\IZ$ induced by\n the obvious projection $T_f \\to S^1$. Then $T_f$ is $(\\mu,\\phi)$-$L^2$-finite and we get\n \\[\n \\chi^{(2)}(T_f;\\mu,\\phi) \\,\\,= \\,\\,\\chi(Y)\n \\]\n by the following argument. Let $\\overline{T_f}$ be the $G$-covering\n associated to $\\mu \\colon \\pi_1(T_f) \\to G$. Let $K$ be the kernel of \n $\\phi$ and $i \\colon K \\to G$ be the inclusion. The\n image of the composite $\\pi_1(Y) \\to \\pi_1(T_f) \\xrightarrow{\\mu} G$ is contained in\n $K$ and we can consider the $K$-covering $\\widehat{Y} \\to Y$ associated to it. The\n $K$-$CW$-complex $i^*\\overline{T_f}$ is $K$-homotopy equivalent to the $K$-$CW$-complex\n $\\widehat{Y}$, see~\\cite[Section~2]{Lueck(1994b)}. Hence we conclude \n $\\chi^{(2)}(T_f;\\mu,\\phi) = \\chi^{(2)}(\\widehat{Y};\\caln(K))$ \n from Lemma~\\ref{lem:phi-twisted_as_ordinary_L2_Euler_characteristic}\n and the $K$-homotopy invariance of $L^2$-Betti numbers. Since $Y$ is a finite \n $CW$-complex, we have $\\chi^{(2)}(\\widehat{Y};\\caln(K)) = \\chi(Y)$. \n\n Notice that the $L^2$-Betti numbers $b_n^{(2)}(\\overline{T_f};\\caln(G))$ are all trivial\n by~\\cite[Theorem~2.1]{Lueck(1994b)} and hence the $L^2$-Euler characteristic\n $\\chi^{(2)}(\\overline{T_f};\\caln(G))$ is trivial. 
So the passage to the subgroup of\n infinite index $K$ or, equivalently, the twisting with the $\\IZ G$-module $\\phi^*\\IZ[\\IZ]$ \n which is not finitely generated as abelian group,\n ensures that we get an interesting invariant by the\n $(\\mu,\\phi)$-$L^2$-Euler characteristic.\n \\end{example}\n\n \n\n\n\\begin{lemma}\\label{lem:tori} \n Let $T^n$ be the $n$-dimensional torus for $n \\ge 1$. Consider homomorphisms $\\mu \\colon\n \\pi_1(T^n) \\to G$ and $\\phi \\colon G \\to \\IZ$ such that the image of $\\mu$ is infinite.\n \nThen $T^n$ is $(\\mu,\\phi)$-$L^2$-finite and we get\n\\[\n\\chi^{(2)}(T^n;\\mu,\\phi) = \n\\begin{cases} [\\IZ : \\im(\\phi \\circ \\mu)] \n& \n\\text{if}\\; n = 1, \\phi \\circ \\mu \\not = 0;\n\\\\\n0 \n& \n\\text{otherwise.}\n\\end{cases}\n\\]\n\\end{lemma}\n\\begin{proof}\n Because of\n Lemma~\\ref{lem:reduction_to_surjective_mu}~\\eqref{lem:reduction_to_surjective_mu:mu'} we\n can assume without loss of generality that $\\mu$ is surjective. Suppose that $\\phi \\circ \\mu$ is\n non-trivial. Because of\n Lemma~\\ref{lem:reduction_to_surjective_mu}~\\eqref{lem:reduction_to_surjective_mu:mu_circ_phi_is_not_trivial}\n it suffices to consider the case, where $\\mu$ and $\\phi$ are surjective.\n Then the claim follows from Example~\\ref{exa_mapping_torus} since there is a\n homeomorphism $h \\colon T^n \\xrightarrow{\\cong} T^{n-1} \\times S^1$ that that the composite of the map\n of $\\pi_1(T^{n-1} \\times S^1) \\to \\pi_1(S^1)$ induced by the projection onto $S^1$\n composed with $\\pi_1(h)$ is $\\phi$. Suppose that $\\phi$ is trivial. Since $\\mu$ has infinite image,\n one can show using~\\cite[Theorem~2.1]{Lueck(1994b)} that \n $b_m^{(2)}(\\overline{T^n};\\caln(G))$ vanishes for the $G$-covering\n $\\overline{T^n} \\to T^n$ associated to $\\mu$ for all $m \\ge 0$. Hence the claim follows from\n Lemma~\\ref{lem:reduction_to_surjective_mu}~\\eqref{lem:reduction_to_surjective_mu:mu_circ_phi_is_trivial}.\n\\end{proof}\n\n\n\n\n\\begin{theorem}[The $(\\mu,\\phi)$-$L^2$-Euler characteristic for $S^1$-$CW$-complexes] \n\\label{the:The_(mu,phi)-L2-Euler_characteristic_for_S1-CW-complexes}\nLet $X$ be a connected finite $S^1$-$CW$-complex. Let $\\mu \\colon \\pi_1(X) \\to G$ \nand $\\phi \\colon G \\to \\IZ$ be group homomorphisms. Suppose that for one and hence all $x \\in X$ the composite\n\\[\n\\eta \\colon \\pi_1(S^1,1) \\xrightarrow{\\pi_1(\\ev_x,1)} \\pi_1(X,x) \\xrightarrow{\\mu} G \\xrightarrow{\\phi} \\IZ\n\\]\nis injective, where $\\ev_x \\colon S^1 \\to X$ sends $z$ to $z \\cdot x$. \nDefine the $S^1$-orbifold Euler characteristic of $X$ by\n\\[\n\\chi^{S^1}_{\\orb}(X)\\,\\,=\\,\\, \\smsum{n \\ge 0}{} (-1)^n \\cdot \\smsum{e \\in I_n}{} \\smfrac{1}{|S^1_e|},\n\\]\nwhere $I_n$ is the set of open $n$-dimensional $S^1$-cells of $X$ and for \n$e \\in I_n $ we denote by $S^1_e$ the isotropy group of any point in $e$. 
\nThen $X$ is $(\\mu,\\phi)$-$L^2$-finite and we get\n\\[\n\\chi^{(2)}(X;\\mu,\\phi)\\,\\, =\\,\\, \n\\chi^{S^1}_{\\orb}(X) \\cdot [\\IZ : \\im(\\eta)].\n\\]\n\\end{theorem}\n\\begin{proof}\n The strategy of proof is the same as the one of~\\cite[Theorem~1.40 on page~43]{Lueck(2002)}, \n where one considers a more general finite $S^1$-$CW$-complex $Y$ together with an\n $S^1$-map $f \\colon Y \\to X$ and does induction over the number of $S^1$-equivariant cells.\n A basic ingredient is the additivity of the two terms appearing in the desired equation in \n Theorem~\\ref{the:The_(mu,phi)-L2-Euler_characteristic_for_S1-CW-complexes}.\n\\end{proof}\n\n \n\nWe mention that the condition about the injectivity of the map $\\eta$ appearing in\n Theorem~\\ref{the:The_(mu,phi)-L2-Euler_characteristic_for_S1-CW-complexes} is necessary.\n\n\n\n\n\nFor the reader's convenience we record the next result \nwhich we will not need in this paper and whose proof is a variation of the one of\nTheorem~\\ref{the:The_(mu,phi)-L2-Euler_characteristic_for_S1-CW-complexes}.\n\n\\begin{theorem}[The $(\\mu,\\phi)$-$L^2$-Euler characteristic for fibrations]\n \\label{the:(mu,phi)-Euler_characteristic_for_fibrations}\n Let $F \\xrightarrow{i} E \\xrightarrow{p} B$ be a fibration of connected $CW$-complexes. Suppose\n that $B$ is a finite $CW$-complex. Consider group homomorphisms\n $\\mu \\colon \\pi_1(E) \\to G$ and $\\phi \\colon G \\to \\IZ$. Suppose that\n $F$ is $(\\mu \\circ \\pi_1(i),\\phi)$-$L^2$-finite.\n\n Then $E$ is $(\\mu,\\phi)$-$L^2$-finite and we get\n\\[\n\\chi^{(2)}(E;\\mu,\\phi) =\n\\chi(B) \\cdot \\chi^{(2)}(F;\\mu \\circ \\pi_1(i), \\phi). \n\\]\n\\end{theorem}\n\n\nIf $M$ is a compact connected orientable $3$-manifold with proper $S^1$-action, then $M$\nis a compact connected orientable Seifert manifold. The converse is not true in\ngeneral. Associated to a compact connected orientable Seifert manifold is an orbifold $X$\nand $X$ has an orbifold Euler characteristic $\\chi_{\\orb}(X)$. For a basic introduction to\nthese notions we refer for instance to~\\cite{Scott(1983)}. If $M$ is a compact\n$3$-manifold with proper $S^1$-action, then $X$ is given by $M\/S^1$ and $\\chi_{\\orb}(X)$\nis the $S^1$-orbifold Euler characteristic $\\chi^{S^1}_{\\orb}(M)$. We omit the proof of\nthe next result since it is essentially a variation of the one of\nTheorem~\\ref{the:The_(mu,phi)-L2-Euler_characteristic_for_S1-CW-complexes}; the role of\nthe cells $S^1\/H \\times D^n$ in\nTheorem~\\ref{the:The_(mu,phi)-L2-Euler_characteristic_for_S1-CW-complexes} is now played\nby the typical neighborhoods of the Seifert fibers given by solid tori.\n\n\n\\begin{theorem}[The $(\\mu,\\phi)$-$L^2$-Euler characteristic for Seifert manifolds] \n\\label{the:The_(mu,phi)-L2-Euler_characteristic_for_Seifert_manifolds}\nLet $M$ be a compact connected orientable Seifert manifold. Let $\\mu \\colon \\pi_1(M) \\to G$ \nand $\\phi \\colon G \\to \\IZ$ be group homomorphisms. 
Suppose that for one (and hence all) $x \\in M$\n\\[\n\\eta \\colon \\pi_1(S^1,1) \\xrightarrow{\\pi_1(\\ev,1)} \\pi_1(M,x) \\xrightarrow{\\mu} G \\xrightarrow{\\phi} \\IZ\n\\]\nis injective, where $\\ev \\colon S^1 \\to M$ is the inclusion of a regular fiber.\nLet $X$ be the associated orbifold of $M$ and denote by $\\chi_{\\orb}(X)$ its orbifold Euler characteristic.\nThen $M$ is $(\\mu,\\phi)$-$L^2$-finite and we get\n\\[\n\\chi^{(2)}(M;\\mu,\\phi)\\,\\, = \\,\\, \n\\chi_{\\orb}(X) \\cdot [\\IZ : \\im(\\eta)].\n\\]\n\\end{theorem}\n\n\n\n\\begin{lemma} \\label{lem:JSJ_decom_Euler} Let $M$ be a $3$-manifold,\nwhich is admissible, see Definition~\\ref{def:admissible_3-manifold}. Let\n $M_1$, $M_2$, \\ldots, $M_r$ be its pieces in the Jaco-Shalen-Johannson decomposition. Consider\n group homomorphisms $\\mu \\colon \\pi_1(M) \\to G$ and $\\phi \\colon G \\to \\IZ$. Suppose that\n the composite of $\\mu$ with $\\pi_1(j) \\colon \\pi_1(T^2) \\to \\pi_1(M)$ has infinite image for the inclusion\n $j \\colon T^2 \\to M$ of any splitting torus appearing in the Jaco-Shalen-Johannson decomposition.\n Let $\\mu_i \\colon \\pi_1(M_i) \\to G$ be the composite of $\\mu$ with the map $\\pi_1(M_i) \\to \\pi_1(M)$\n induced by the inclusion $M_i \\to M$. Suppose that $M_i$ is $(\\mu_i,\\phi)$-$L^2$-finite for $i = 1,2, \\ldots , r$. \n\n Then $M$ is $(\\mu,\\phi)$-$L^2$-finite and we have\n\\[\n\\chi^{(2)}(M;\\mu,\\phi)\\,\\,=\\,\\, \\lmsum{i = 1}{r} \\chi^{(2)}\\big(M_i;\\mu_i,\\phi\\big).\n\\]\n\\end{lemma}\n\\begin{proof} \nThis follows from \nTheorem~\\ref{the:Basic_properties_of_the_phi_L2-Euler_characteristic}\n\\eqref{the:Basic_properties_of_the_phi_L2-Euler_characteristic:sum_formula}\nand Lemma~\\ref{lem:tori}.\n\\end{proof}\n\n\n\\begin{theorem}[The $(\\mu,\\phi)$-$L^2$-Euler characteristic and the Thurston norm for graph manifolds]\n\\label{the:The_(mu,phi-L2-Euler_characteristic_and_the_Thurston_norm_for_graph_manifolds}\nLet $M$ be an admissible $3$-manifold, which is a graph manifold and not homeomorphic to $S^1 \\times D^2$.\nConsider group homomorphisms $\\mu \\colon \\pi_1(M) \\to G$ and $\\phi \\colon G \\to \\IZ$. \nSuppose that for each piece $M_i$ in the\nJaco-Shalen-Johannson decomposition the map $\\pi_1(S^1 ) \\xrightarrow{\\pi_1(\\ev_i)}\n\\pi_1(M_i) \\xrightarrow{\\pi_1(j_i)} \\pi_1(M) \\xrightarrow{\\mu} G \\xrightarrow{\\phi} \\IZ$ is\ninjective, where $\\ev_i \\colon S^1 \\to M_i$ is the inclusion of the regular fiber \nand $j_i \\colon M_i \\to M$ is the inclusion.\n\nThen $M$ is $(\\mu,\\phi)$-$L^2$-finite and we get\n\\[\n- \\chi^{(2)}(M;\\mu,\\phi) \\,\\,=\\,\\, x_M(\\phi\\circ \\mu).\n\\]\n\\end{theorem}\n\\begin{proof} In the situation and notation of Lemma~\\ref{lem:JSJ_decom_Euler} \nwe conclude from~\\cite[Proposition~3.5 on page~33]{Eisenbud-Neumann(1985)}\n\\[\nx_M(\\phi \\circ \\mu)\\,\\, =\\,\\, \\lmsum{i = 1}{r} x_{M_i}(\\phi_i),\n\\]\nwhere $\\phi_i \\in H^1(M_i;\\IZ)$ is the restriction of $\\phi \\circ \\mu$ to $M_i$.\nMoreover, we get from Theorem~\\ref{the:The_(mu,phi)-L2-Euler_characteristic_for_Seifert_manifolds}\nand from~\\cite[Lemma~A]{Herrmann(2016)}\nfor $i = 1,2, \\ldots, r$ \n\\[\n\\chi^{(2)}(M_i;\\mu_i,\\phi) \\,\\,=\\,\\, -x_{M_i}(\\phi_i),\n\\]\nwhere $\\mu_i$ is the composite of $\\mu$ with the homomorphism $\\pi_1(M_i) \\to \\pi_1(M)$ induced\nby the inclusion. 
\nNow the claim follows from Lemma~\\ref{lem:JSJ_decom_Euler}\nsince any splitting torus appearing in the Jaco-Shalen-Johannson decomposition contains a regular fiber\nof one of the pieces $M_i$.\n\\end{proof}\n\n\n\n\n\n\n\n\n\\subsection{The $\\phi$-$L^2$-Euler characteristic for universal coverings}\n\\label{subsec:The_phi-L2-Euler_characteristic_for_universal_coverings}\n\n\n\n\nIn this section we consider the special case of the universal covering and of a group\nhomomorphism $\\phi \\colon \\pi_1(X) \\to \\IZ$. This is in some sense the most canonical and\nimportant covering and in this case the formulations of the main results simplify in a\nconvenient way.\n\n\\begin{definition}[The $\\phi$-$L^2$-Euler characteristic for $\\phi \\in H^1(X;\\IZ)$]\n\\label{def:The_phi-L2-Euler_characteristic_for_phi_in_H1(X;Z)}\n \n Let $X$ be a connected $CW$-complex with fundamental group $\\pi$.\n Let $\\phi$ be an element in $H^1(X;\\IZ)$, or, equivalently, \n let $\\phi \\colon \\pi \\to \\IZ$ be a group homomorphism. We say that the universal covering\n $\\widetilde{X}$ of $X$ is \\emph{$\\phi$-$L^2$-finite}, if $X$ is $(\\id_{\\pi},\\phi)$-$L^2$-finite \n in the sense of Definition~\\ref{def:mu-phi-L2-Euler_characteristic}.\n If this is the case, we define its \\emph{$\\phi$-$L^2$-Euler characteristic}\n \\[\n \\chi(\\widetilde{X};\\phi) \n \\,\\, := \\,\\,\n \\chi^{(2)}(X;\\id_{\\pi},\\phi)\n \\]\n where $\\chi^{(2)}(X;\\id_{\\pi},\\phi)$ has been introduced in Definition~\\ref{def:mu-phi-L2-Euler_characteristic}.\n \n If $X$ is a (not necessarily connected) finite $CW$-complex and $\\phi \\in H^1(X;\\IZ)$, we say that\n $\\widetilde{X}$ is $\\phi$-$L^2$-finite if for each component $C \\in X$ the universal\n covering $\\widetilde{C} \\to C$ is $\\phi|_C$-$L^2$-finite and we put\n \\[\n \\chi^{(2)}(\\widetilde{X};\\phi) \\,\\,= \\,\\,\\lmsum{C \\in \\pi_0(X)}{} \\chi^{(2)}(\\widetilde{C},\\phi|_C).\n \\]\n\\end{definition}\n\n\n\n\nFor the reader's convenience we record the basic properties of the $\\phi$-$L^2$-Euler characteristic.\n\n\n\n\\begin{theorem}[Basic properties of the $\\phi$-$L^2$-Euler characteristic for universal coverings]\n\\label{the:Basic_properties_of_the_phi_L2-Euler_characteristic_for_universal_coverings}\n\\\n\n\\begin{enumerate}[font=\\normalfont]\n\n\n\\item \\emph{Homotopy invariance}\\\\[1mm]\n\\label{the:Basic_properties_of_the_phi_L2-Euler_characteristic_for_universal_coverings:homotopy_invariance}\nLet $f \\colon X \\to Y $ be a homotopy equivalence of $CW$-complexes. Consider $\\phi \\in H^1(Y;\\IZ)$.\nLet $f^*\\phi \\in H^1(X;\\IZ)$ be its pullback with $f$.\n\nThen $\\widetilde{X}$ is $f^*\\phi$-$L^2$-finite if and only if $\\widetilde{Y}$ is $\\phi$-$L^2$-finite, and in this\n case we get\n \\[\n \\chi^{(2)}(\\widetilde{X};f^*\\phi)\\,\\, =\\,\\, \\chi^{(2)}(\\widetilde{Y};\\phi);\n \\]\n\n\\item \\label{the:Basic_properties_of_the_phi_L2-Euler_characteristic_for_universal_coverings:sum_formula}\n \\emph{Sum formula}\\\\\n Consider a pushout of $CW$-complexes\n \\[\n \\xymatrix@C1.2cm@R0.5cm{ X_0 \\ar[r] \\ar[d] & X_1 \\ar[d]\n \\\\\n X_2 \\ar[r]& X }\n \\]\n where the upper horizontal arrow is cellular, the left vertical arrow is an inclusion of\n $CW$-complexes and $X$ has the obvious $CW$-structure coming from the ones on $X_0$,\n $X_1$ and $X_2$. Consider $\\phi \\in H^1(X;\\IZ)$. 
For every $i \\in \\{0,1,2\\}$ suppose\n that for each base point $x_i \\in X_i$ the map $\\pi_1(j_i,x_i) \\colon \\pi_1(X_i,x_i) \\to \\pi_1(X,j_i(x_i))$ \n induced by the inclusion $j_i \\colon X_i \\to X$ is injective and that\n $\\widetilde{X_i}$ is $j_i^*\\phi$-$L^2$-finite.\n\n Then $\\widetilde{X}$ is $\\phi$-$L^2$-finite and we get\n \\[\n \\hspace{1cm} \\chi^{(2)}(\\widetilde{X};\\phi) \\,\\,=\\,\\, \\chi^{(2)}(\\widetilde{X_1};j_1^*\\phi) + \\chi^{(2)}(\\widetilde{X_2};j_2^*\\phi) -\n \\chi^{(2)}(\\widetilde{X_0};j_0^*\\phi);\n \\]\n \n \\item \\label{the:Basic_properties_of_the_phi_L2-Euler_characteristic_for_universal_coverings:finite_coverings}\n \\emph{Finite coverings}\\\\\n Let $p \\colon X \\to Y$ be a finite $d$-sheeted covering of connected $CW$-complexes, $\\phi$\nbe an element in $H^1(Y;\\IZ)$ and $p^*\\phi \\in H^1(X;\\IZ)$ be its pullback with $p$.\n\nThen $Y$ is $\\phi$-$L^2$-finite if and only if $X$ is $p^*\\phi$-$L^2$-finite, and in this case\n\\[\n\\chi^{(2)}(\\widetilde{X};p^*\\phi) \\,\\,:=\\,\\, d \\cdot \\chi^{(2)}(\\widetilde{Y};\\phi);\n\\]\n\n\n\\item \\label{the:Basic_properties_of_the_phi_L2-Euler_characteristic_for_universal_coverings:scaling_phi}\n \\emph{Scaling $\\phi$}\\\\\nLet $X$ be $CW$-complex and $\\phi$ be an element in $H^1(X;\\IZ)$\nConsider any integer $k \\not= 0$. \nThen $\\widetilde{X}$ is $\\phi$-$L^2$-finite if and only if $\\widetilde{X}$ is $(k \\cdot \\phi)$-$L^2$-finite, \nand in this case we get\n\\[\n\\chi^{(2)}(\\widetilde{X};k \\cdot \\phi)\\,\\, =\\,\\, |k| \\cdot \\chi^{(2)}(\\widetilde{X};\\phi);\n\\]\n\n\n\\item \\label{the:Basic_properties_of_the_phi_L2-Euler_characteristic_for_universal_coverings:trivial_phi}\n \\emph{Trivial $\\phi$}\\\\\nLet $X$ be a $CW$-complex. Let $\\phi$ be trivial.\nThen $\\widetilde{X}$ is $\\phi$-$L^2$-finite if and only if $b_n^{(2)}(\\widetilde{X};\\caln(\\pi_1(X)) = 0$\nholds for all $n \\ge 0$. If this is the case, we get\n\\[\n\\chi^{(2)}(\\widetilde{X};\\phi)\\,\\, =\\,\\, 0;\n\\]\n \n\n\n\n\\item \\label{the:Basic_properties_of_the_phi_L2-Euler_characteristic_for_universal_coverings:tori}\n \\emph{Tori}\\\\\nLet $T^n$ be the $n$-dimensional torus for $n \\ge 1$. Consider any $\\phi \\in H^1(T^n;\\IZ)$. Then\n$\\widetilde{T^n}$ is $\\phi$-$L^2$-finite and we get\n\\[\n\\chi^{(2)}(\\widetilde{T^n};\\phi) \\,\\,=\\,\\, \n\\begin{cases} [\\IZ : \\im(\\phi)] \n& \n\\text{if}\\; n = 1, \\phi \\not = 0;\n\\\\\n0 \n& \n\\text{otherwise.}\n\\end{cases}\n\\]\n\\end{enumerate}\n\\end{theorem}\n\n\\begin{proof}\n The second statement follows from\n Theorem~\\ref{the:Basic_properties_of_the_phi_L2-Euler_characteristic}\n \\eqref{the:Basic_properties_of_the_phi_L2-Euler_characteristic:sum_formula}\n and~\\eqref{the:Basic_properties_of_the_phi_L2-Euler_characteristic:induction} using the\n fact that for every $i \\in \\{0,1,2\\}$ and every base point $x_i \\in X_i$ the total space\n of the pullback of the universal covering of the component $D$ of $X$ containing\n $j_i(x_i)$ with $j_i|_C \\colon C \\to D$ for $C$ the component of $X_i$ containing $x_i$\n is $\\pi_1(D)$-homeomorphic to $\\pi_1(D) \\times_{\\pi_1(j_i)} \\widetilde{C}$ for the\n universal covering $\\pi_1(C,x_i) \\to \\widetilde{C} \\to C$ of $C$.\n\n The last statement is a special case of Lemma~\\ref{lem:tori}. 
Finally all other\n statements follow from the corresponding statements\n of~Theorem~\\ref{the:Basic_properties_of_the_phi_L2-Euler_characteristic}.\n\\end{proof}\n\n\n\n\n\n\n\n\n\\begin{lemma} \\label{lem:JSJ_decom_Euler_universal} \n Let $M$ be an admissible $3$-manifold. Let\n $M_1, M_2, \\ldots, M_r$ be its pieces in the Jaco-Shalen-Johannson decomposition. Consider\n $\\phi \\in H^1(M;\\IZ)$. Let $\\phi_i \\in H^1(M_i;\\IZ)$ be the pullback of $\\phi$ with the\n inclusion $M_i \\to M$ for $i = 1,2 , \\ldots,r$.\n\n Then $M_i$ is $\\phi_i$-$L^2$-finite for $i = 1,2, \\ldots ,r$ and $M$ is\n $\\phi$-$L^2$-finite and we have\n\\[\n\\chi^{(2)}(\\widetilde{M};\\phi)\\,\\, = \\,\\,\\lmsum{i = 1}{r} \\chi^{(2)}(\\widetilde{M_i};\\phi_i).\n\\]\n\\end{lemma}\n\\begin{proof} \nIf $M_i$ is Seifert, then it is $\\phi_i$-$L^2$-finite by \nTheorem~\\ref{the:The_(mu,phi)-L2-Euler_characteristic_for_Seifert_manifolds}.\nIf $M_i$ is hyperbolic, $b_n^{(2)}(\\widetilde{M_i})$ vanishes for all $n \\ge 0$\nby~\\cite[Theorem~0,1]{Lott-Lueck(1995)}, and hence $M_i$ is $\\phi_i$-$L^2$-finite by \nTheorem~\\ref{the:Status_of_the_Atiyah_Conjecture}\n~\\eqref{the:Status_of_the_Atiyah_Conjecture:3-manifold_not_graph}\nand Theorem~\\ref{the:Atiyah_and_(mu,phi)-L2-Euler_characteristic}. Now the claim follows from\nTheorem~\\ref{the:Basic_properties_of_the_phi_L2-Euler_characteristic_for_universal_coverings}\n\\eqref{the:Basic_properties_of_the_phi_L2-Euler_characteristic_for_universal_coverings:sum_formula}\nand~\\eqref{the:Basic_properties_of_the_phi_L2-Euler_characteristic_for_universal_coverings:tori}\nusing the fact that the splitting tori in the Jaco-Shalen-Johannson decomposition are incompressible.\n\\end{proof}\n\n\\begin{theorem}[The $\\phi$-$L^2$-Euler characteristic and the Thurston norm for graph manifolds]\n\\label{the:The_phi-L2-Euler_characteristic_and_the_Thurston_norm_for_graph_manifolds}\nLet $M$ be an admissible $3$-manifold, which is a graph manifold and not homeomorphic to $S^1 \\times D^2$. \nConsider $\\phi \\in H^1(M;\\IZ)$. 
Then $\\widetilde{M}$ is $\\phi$-$L^2$-finite and we get\n\\[\n- \\chi^{(2)}(\\widetilde{M};\\phi) \\,\\,=\\,\\, x_M(\\phi).\n\\]\n\\end{theorem}\n\\begin{proof} This follows from \nTheorem~\\ref{the:The_(mu,phi-L2-Euler_characteristic_and_the_Thurston_norm_for_graph_manifolds}.\n\\end{proof}\n\n\n\\begin{example}[$S^1\\times D^2$ and $S^1 \\times S^2$]\n\\label{exa:S1_timesD2_and_S1_times_S2}\nConsider a homomorphism $\\phi \\colon H_1(S^1 \\times D^2) \\xrightarrow{\\cong} \\IZ$.\nLet $k$ be the index $[\\IZ : \\im(\\phi)]$ if $\\phi$ is non-trivial, and let $k = 0$ if $\\phi$ is trivial.\nThen we conclude from~\\eqref{scaling_Thurston_norm},~\\eqref{fiber_bundles_Thurston_norm},\n Lemma~\\ref{lem:reduction_to_surjective_mu} and Example~\\ref{exa_mapping_torus}\n\\[\nx_{S^1 \\times D^2}(\\phi) \\,\\, = \\,\\, 0\\quad \\mbox{ and }\\quad \n-\\chi^{(2)}(\\widetilde{S^1 \\times D^2};\\phi) \n\\,\\, = \\,\\, k.\\]\nHence we have to exclude $S^1 \\times D^2$ in \nTheorem~\\ref{the:The_phi-L2-Euler_characteristic_and_the_Thurston_norm_for_graph_manifolds} and thus also in Theorem~\\ref{the:Equality_of_(mu,phi)-L2-Euler_characteristic_and_the_Thurston_norm_universal_covering_for_universal_coverings}.\nAnalogously we get \n\\[\nx_{S^1 \\times S^2}(\\phi) \\,\\, = \\,\\, 0\\quad \\mbox{ and } \\quad\n-\\chi^{(2)}(\\widetilde{S^1 \\times S^2};\\phi) \n\\,\\, = \\,\\, 2 \\cdot k,\\]\nso that we cannot replace ``irreducible'' by ``prime'' in \nTheorem~\\ref{the:Equality_of_(mu,phi)-L2-Euler_characteristic_and_the_Thurston_norm_universal_covering_for_universal_coverings}.\n\\end{example}\n\n\n\n\n\n \\typeout{-------------------------- Section 3: About the Atiyah Conjecture -----------------------}\n\n\n\n\n\n\\section{About the Atiyah Conjecture}\n\\label{sec:About_the_Atiyah_Conjecture}\n\nSo far the definition and the analysis of the $\\phi$-twisted $L^2$-Euler characteristic\nhas been performed on an abstract level. In order to ensure that the condition\n$(\\mu,\\phi)$-$L^2$-finite is satisfied and that the $(\\mu,\\phi)$-$L^2$-Euler\ncharacteristic contains interesting information, we will need further input, namely, the\nfollowing Atiyah Conjecture.\n\n\n\n\\subsection{The Atiyah Conjecture}\n\\label{subsec:The_Atiyah_Conjecture}\n\n\n\\begin{definition}[Atiyah Conjecture]\n\\label{def:Atiyah_Conjecture}\nWe say that a torsion-free group $G$ satisfies the \\emph{Atiyah Conjecture} if for any\nmatrix $A \\in M_{m,n}(\\IQ G)$ the von Neumann dimension $\\dim_{\\caln(G)}(\\ker(r_A))$\nof the kernel of the $\\caln(G)$-homomorphism\n$r_A\\colon \\caln(G)^m \\to \\caln(G)^n$ given by right multiplication with $A$ is an integer.\n\\end{definition}\n\nThe Atiyah Conjecture can also be formulated for any field $F$ with $\\IQ \\subseteq F \\subseteq \\IC$\nand matrices $A \\in M_{m,n}(FG)$ and for any group with a bound on the order of its finite subgroups. 
\nHowever, we only need and therefore consider in this paper the case, where $F = \\IQ$ and $G$ is torsion-free.\n\n\\begin{theorem}[Status of the Atiyah Conjecture]\n\\label{the:Status_of_the_Atiyah_Conjecture}\n\\begin{enumerate}[font=\\normalfont]\n\n\\item \\label{the:Status_of_the_Atiyah_Conjecture:subgroups}\nIf the torsion-free group $G$ satisfies the Atiyah Conjecture,\nthen also each of its subgroups satisfies the Atiyah Conjecture;\n\n\n\n\n\\item \\label{the:Status_of_the_Atiyah_Conjecture:Linnell} Let $\\calc$ be the smallest\n class of groups which contains all free groups and is closed under directed unions and\n extensions with elementary amenable quotients. Suppose that $G$ is a torsion-free group\n which belongs to $\\calc$.\n\nThen $G$ satisfies the Atiyah Conjecture;\n\n\\item \\label{the:Status_of_the_Atiyah_Conjecture:3-manifold_not_graph} Let $G$ be an infinite group which is the\n fundamental group of an admissible $3$-manifold $M$. Suppose that one of the following conditions is satisfied:\n\n \\begin{itemize}\n \\item $M$ is not a closed graph manifold;\n \\item $M$ is a closed graph manifold which admits a Riemannian metric of non-positive sectional curvature.\n \\end{itemize}\n \n Then $G$ is torsion-free and belongs to $\\calc$. In particular $G$ satisfies the Atiyah Conjecture.\n\n\n\n\\item \\label{the:Status_of_the_Atiyah_Conjecture:approx}\nLet $\\cald$ be the smallest class of groups such that\n\\begin{itemize}\n\n\\item The trivial group belongs to $\\cald$;\n\n\\item If $p\\colon G \\to A$ is an epimorphism of a torsion-free group $G$ onto an\nelementary amenable group $A$ and if $p^{-1}(B) \\in \\cald$ for every finite group\n$B \\subset A$, then $G \\in \\cald$;\n\n\\item $\\cald$ is closed under taking subgroups;\n\n\\item $\\cald$ is closed under colimits and inverse limits over directed systems.\n\n\\end{itemize}\n\nIf the group $G$ belongs to $\\cald$,\nthen $G$ is torsion-free and the Atiyah Conjecture holds for $G$.\n\nThe class $\\cald$ is closed under direct sums, direct products and free\nproducts. Every residually torsion-free elementary amenable group\nbelongs to $\\cald$;\n\n\n\n\\end{enumerate}\n\n\\end{theorem}\n\\begin{proof}~\\eqref{the:Status_of_the_Atiyah_Conjecture:subgroups} \n This follows from~\\cite[Theorem~6.29~(2) on page~253]{Lueck(2002)}.\n \\\\[1mm]~\\eqref{the:Status_of_the_Atiyah_Conjecture:Linnell} This is due to Linnell,\n see for instance~\\cite{Linnell(1993)} or~\\cite[Theorem~10.19 on page~378]{Lueck(2002)}.\n \\\\[1mm]~\\eqref{the:Status_of_the_Atiyah_Conjecture:3-manifold_not_graph} It suffices to\n show that $G = \\pi_1(M)$ belongs to the class $\\calc$ appearing in\n assertion~\\eqref{the:Status_of_the_Atiyah_Conjecture:Linnell}. By the proof of the Virtual Fibering Theorem due to Agol, Liu, Przytycki-Wise, and \n Wise~\\cite{Agol(2008),Agol(2013),Liu(2013),Przytycki-Wise(2012), Przytycki-Wise(2014),Wise(2012raggs),Wise(2012hierachy)}\n there exists a finite normal covering $p \\colon \\overline{M} \\to M$ and a fiber\n bundle $F \\to \\overline{M} \\to S^1$ for some compact connected orientable surface $F$. Hence it\n suffices to show that $\\pi_1(F)$ belongs to $\\calc$. If $F$ has non-empty boundary,\n this follows from the fact that $\\pi_1(F)$ is free. If $M$ is closed, the commutator\n subgroup of $\\pi_1(F)$ is free and hence $\\pi_1(F)$ belongs to $\\calc$. 
Now\n assertion~\\eqref{the:Status_of_the_Atiyah_Conjecture:3-manifold_not_graph} follows from\n assertion~\\eqref{the:Status_of_the_Atiyah_Conjecture:Linnell}.\n \\\\[1mm]~\\eqref{the:Status_of_the_Atiyah_Conjecture:approx} This result is due to Schick,\n see for instance~\\cite{Schick(2001b)} or~\\cite[Theorem~10.22 on page~379]{Lueck(2002)}.\n\\end{proof}\n\n\n\n\\subsection{$L^2$-acyclic Atiyah pair}\n\\label{subsec:L2-acyclic_Atiyah_pair}\n\n\n\\begin{definition}[$L^2$-acyclic Atiyah-pair] \n \\label{def:L2-acyclic_Atiyah-pair} \n An \\emph{$L^2$-acyclic Atiyah-pair} $(\\mu,\\phi)$ for a finite connected $CW$-complex $X$\n consists of group homomorphisms $\\mu \\colon \\pi_1(X) \\to G$ and\n $\\phi \\colon G \\to \\IZ$ such that the $G$-covering $\\overline{X} \\to X$ associated to\n $\\mu$ is $L^2$-acyclic, i.e., the $n$th $L^2$-Betti number\n $b_n^{(2)}(\\overline{X};\\caln(G))$ vanish for every $n \\ge 0$, and $G$ is torsion-free and satisfies the\n Atiyah Conjecture.\n\\end{definition}\n\n\n \nNotice that the conditions appearing in Definition~\\ref{def:L2-acyclic_Atiyah-pair} only\nconcern $G$ and $\\mu$ but not $\\phi$. The Atiyah Conjecture enters in this paper because\nof the following theorem.\n\n\\begin{theorem}[The Atiyah Conjecture and the $(\\mu,\\phi)$-$L^2$-Euler characteristic]\n\\label{the:Atiyah_and_(mu,phi)-L2-Euler_characteristic}\nLet $X$ be a connected finite $CW$-complex. \nSuppose that $(\\mu,\\phi)$ is an $L^2$-acyclic Atiyah-pair.\nThen $X$ is $(\\mu,\\phi)$-$L^2$-finite, and the $(\\mu,\\phi)$-$L^2$-Euler characteristic\n$\\chi^{(2)}(X;\\mu,\\phi)$ is an integer.\n\\end{theorem}\n\nTheorem~\\ref{the:Atiyah_and_(mu,phi)-L2-Euler_characteristic} will be a direct consequence\nof Lemma~\\ref{lem:phi-twisted_as_ordinary_L2_Euler_characteristic}, \nLemma~\\ref{lem:reduction_to_surjective_mu} and the following \nTheorem~\\ref{the:Main_properties_of_cald(G)}~\\eqref{the:Main_properties_of_cald(G):chain},\nwhose formulation requires some preparation.\n\n\n\\subsection{The division closure $\\cald(G)$ of $\\IQ G$ in $\\calu(G)$}\n\\label{subsec:The_division_closure_cald(G)_of_FG_in_calu(G)}\n\n\n\n\nLet $S$ be a ring with subring $R \\subset S$. The \\emph{division closure} $\\cald(R \\subset S)$ \nis the smallest subring of $S$ which contains $R$ and is division closed, i.e., every\nelement in $\\cald(R \\subset S)$ which is a unit in $S$ is already a unit in $\\cald(R \\subset S)$.\n\n\n\\begin{notation}[$\\cald(G)$] \\label{not:T(Sigma_subset_R)} Let $G$ be a group.\n Denote by $\\calu(G)$ the algebra of operators\n affiliated to the (complex) group von Neumann algebra $\\caln(G)$,\n see~\\cite[Section~8.2]{Lueck(2002)}. (This is the Ore localization of $\\caln(G)$ with\n respect to the set of non-zero-divisors of $\\caln(G)$, see~\\cite[Theorem~8.22 on\n page~327]{Lueck(2002)}.) Denote by $\\cald(G)$ the division closure of $\\IQ G$ considered\n as a subring of $\\calu(G)$. \n\\end{notation}\n\nThe proof of Theorem~\\ref{the:Main_properties_of_cald(G)} will be based on ideas of Peter\nLinnell from~\\cite{Linnell(1993)} which have been explained in detail and a little bit extended\nin~\\cite[Chapter~10]{Lueck(2002)} and~\\cite{Reich(2006)}. 
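\n\n\\begin{example}[The case $G = \\IZ$]\\label{exa:The_case_G_equals_Z}\nFor orientation we spell out these notions in the simplest non-trivial case $G = \\IZ$; this example is only meant as an illustration and is not needed in the sequel. The group $\\IZ$ is torsion-free and satisfies the Atiyah Conjecture by Theorem~\\ref{the:Status_of_the_Atiyah_Conjecture}~\\eqref{the:Status_of_the_Atiyah_Conjecture:Linnell}. Write $\\IQ \\IZ = \\IQ[u^{\\pm 1}]$. Under the standard Fourier identification of $\\caln(\\IZ)$ with $L^{\\infty}(S^1)$, see for instance~\\cite{Lueck(2002)}, a non-trivial Laurent polynomial vanishes in only finitely many points of $S^1$ and hence becomes invertible in $\\calu(\\IZ)$. Therefore the division closure $\\cald(\\IZ)$ of $\\IQ[u^{\\pm 1}]$ in $\\calu(\\IZ)$ is the field of fractions $\\IQ(u)$, in accordance with Theorem~\\ref{the:Main_properties_of_cald(G)}~\\eqref{the:Main_properties_of_cald(G):skew_field} below. Moreover, for a matrix $A \\in M_{m,n}(\\IQ[u^{\\pm 1}])$ one obtains\n\\[\n\\dim_{\\caln(\\IZ)}\\bigl(\\ker\\bigl(r_A \\colon \\caln(\\IZ)^m \\to \\caln(\\IZ)^n\\bigr)\\bigr) \\,\\,=\\,\\, m - \\operatorname{rank}_{\\IQ(u)}(A),\n\\]\nwhich is in particular an integer, as predicted by the Atiyah Conjecture.\n\\end{example}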
\n\n\\begin{theorem}[Main properties of $\\cald(G)$]\\label{the:Main_properties_of_cald(G)}\nLet $G$ be a torsion-free group.\n\n\\begin{enumerate}[font=\\normalfont]\n\n\\item \\label{the:Main_properties_of_cald(G):skew_field}\nThe group $G$ satisfies the Atiyah Conjecture if and only if\n$\\cald(G)$ is a skew field;\n\n\n\\item \\label{the:Main_properties_of_cald(G):dim}\nSuppose that $G$ satisfies the Atiyah Conjecture.\nLet $C_*$ be a projective $\\IQ G$-chain complex.\nThen we get for all $n \\ge 0$\n\\[\nb_n^{(2)}\\bigl(\\caln(G) \\otimes_{\\IQ G} C_*\\bigr) = \\dim_{\\cald(G)}\\bigl(H_n(\\cald(G) \\otimes_{\\IQ G} C_*)\\bigr).\n\\]\nIn particular $b_n^{(2)}\\bigl(\\caln(G) \\otimes_{\\IQ G} C_*\\bigr)$ is either infinite or an integer;\n\n\\item \\label{the:Main_properties_of_cald(G):cald(K)_and_(cald(G)} \n Suppose that $G$ satisfies the Atiyah Conjecture.\n Let $\\phi \\colon G \\to \\IZ$ be a surjective group\n homomorphism. Let $K \\subseteq G$ be the kernel of $\\phi$. Fix an element $\\gamma \\in G$\n with $\\phi(\\gamma) = 1$. If we define the semi-direct product $K \\rtimes \\IZ$ with\n respect to the conjugation automorphism $c_{\\gamma} \\colon K \\to K$ of $\\gamma$ on $K$,\n we can identify $G$ with $K \\rtimes \\IZ$ and $\\phi$ becomes the\n canonical projection $G = K \\rtimes \\IZ \\to \\IZ$. Let $\\cald(K)_{t}[u^{\\pm 1}]$\n be the ring of twisted Laurent polynomials with respect to the automorphism\n $t \\colon \\cald(K) \\xrightarrow{\\cong} \\cald(K)$ coming\n from $c_{\\gamma} \\colon K \\to K$. \n\n Then $\\cald(K)_{t}[u^{\\pm 1}]$ is a non-commutative principal ideal domain, i.e.,\n it has no non-trivial zero-divisors and every left ideal is a principal left ideal and\n every right ideal is a principal right ideal. Furthermore the set $T$ of non-zero elements in\n $\\cald(K)_{t}[u^{\\pm 1}]$ satisfies the Ore condition and there is a canonical\n isomorphism of skew fields\n \\[\n T^{-1}\\cald(K)_{t}[u^{\\pm 1}] \\xrightarrow{\\cong} \\cald(G);\n \\]\n\n \n\n\\item \\label{the:Main_properties_of_cald(G):chain} \n Let $G$ be a torsion-free group which satisfies the\n Atiyah Conjecture. Let $\\phi \\colon G \\to \\IZ$ be a surjective group\n homomorphism. Denote by $i \\colon K \\to G$ the inclusion of the kernel $K$ of $\\phi$.\n Let $C_*$ be a finitely generated projective $\\IQ G$-chain complex such that \n $b_n^{(2)}(\\caln(G) \\otimes_{\\IQ G} C_*)$ vanishes for all $n \\ge 0$. Denote by $i^*C_*$ \n the restriction of the $\\IQ G$-chain complex $C_*$ to a $\\IQ K$-chain complex. \n\n \n\n Then $H_n(\\cald(K) \\otimes_{\\IQ K} i^*C_*)$ and $H_n(\\cald(K)_{t}[u^{\\pm 1}] \\otimes_{\\IQ G} C_*)$\n are finitely generated free as $\\cald(K)$-modules, $b_n^{(2)}\\bigl(\\caln(K) \\otimes_{\\IQ K} i^*C_*\\bigr)$ is finite,\n and we have\n\\begin{eqnarray*}\nb_n^{(2)}\\bigl(\\caln(K) \\otimes_{\\IQ K} i^*C_*\\bigr) \n& = &\n\\dim_{\\cald(K)}\\bigl(H_n(\\cald(K)_{t}[u^{\\pm 1}] \\otimes_{\\IQ G} C_*)\\bigr)\n\\\\\n& = & \n\\dim_{\\cald(K)}\\bigl(H_n(\\cald(K) \\otimes_{\\IQ K} i^*C_*)\\bigr)\n\\end{eqnarray*}\nfor all $n \\ge 0$.\n\n \n\n\n\\end{enumerate}\n\\end{theorem}\n\\begin{proof}~\\eqref{the:Main_properties_of_cald(G):skew_field} This is proved in the\n case $F = \\IC$ in~\\cite[Lemma~10.39 on page~388]{Lueck(2002)}. The proof goes through\n for an arbitrary field $F$ with $\\IQ \\subseteq F \\subseteq \\IC$ without\n modifications. 
\n \\\\[1mm]~\\eqref{the:Main_properties_of_cald(G):dim} \n We have the following commutative diagram of inclusion of rings\n\\[\n\\xymatrix@C1.2cm@R0.5cm{\\IQ G \\ar[r] \\ar[d]\n& \\caln(G) \\ar[d]\n\\\\\n\\cald(G) \\ar[r] \n&\n\\calu(G).\n}\n\\]\nThere is a dimension function $\\dim_{\\calu{G}}$ for arbitrary (algebraic) $\\calu(G)$-modules\nsuch that for any $\\caln(G)$-module $M$ we have \n$\\dim_{\\calu{G}}(\\calu(G) \\otimes_{\\caln(G)} M) = \\dim_{\\caln(G)}(M)$ \nand basic features like additivity and continuity and cofinality are still satisfied, \nsee~\\cite[Theorem~8.29 on page~330]{Lueck(2002)}. Moreover, $\\calu(G)$ is flat over $\\caln(G)$,\nsee~\\cite[Theorem~8.22~(2) on page~327]{Lueck(2002)}. \nSince $\\cald(G)$ is a skew field by assertion~\\eqref{the:Main_properties_of_cald(G):skew_field}, \n$\\calu(G)$ is also flat as a $\\cald(G)$-module\nand we have for any $\\cald(G)$-module $M$ the equality\n$\\dim_{\\calu(G)}(\\calu(G) \\otimes_{\\cald(G)} M) = \\dim_{\\cald(G)}(M)$. We conclude\n\\begin{eqnarray*}\nb_n^{(2)}(\\caln(G) \\otimes_{\\IQ G} C_*) \n& = & \n\\dim_{\\caln(G)} \\bigl(H_n(\\caln(G) \\otimes_{\\IQ G} C_*)\\bigr)\n\\\\\n& = & \n\\dim_{\\calu(G)} \\bigl(\\calu(G) \\otimes_{\\caln(G)} H_n(\\caln(G) \\otimes_{\\IQ G} C_*)\\bigr)\n\\\\\n& = & \n\\dim_{\\calu(G)} \\bigl(H_n(\\calu(G) \\otimes_{\\caln(G)} \\caln(G) \\otimes_{\\IQ G} C_*)\\bigr)\n\\\\\n& = & \n\\dim_{\\calu(G)} \\bigl(H_n(\\calu(G) \\otimes_{\\IQ G} C_*)\\bigr)\n\\\\\n& = & \n\\dim_{\\calu(G)} \\bigl(H_n(\\calu(G) \\otimes_{\\cald(G)} \\cald(G) \\otimes_{\\IQ G} C_*)\\bigr)\n\\\\\n& = & \n\\dim_{\\calu(G)} \\bigl(\\calu(G) \\otimes_{\\cald(G)} H_n(\\cald(G) \\otimes_{\\IQ G} C_*)\\bigr)\n\\\\\n& = & \n\\dim_{\\cald(G)} \\bigl(H_n(\\cald(G) \\otimes_{\\IQ G} C_*)\\bigr).\n\\end{eqnarray*}\nThis finishes the proof of assertion~\\eqref{the:Main_properties_of_cald(G):dim}.\n\\\\[1mm]~\\eqref{the:Main_properties_of_cald(G):cald(K)_and_(cald(G)} Since $G\n= K \\rtimes \\IZ$ satisfies the Atiyah Conjecture by assumption, the\nsame is true for $K$ by\nTheorem~\\ref{the:Status_of_the_Atiyah_Conjecture}~\\eqref{the:Status_of_the_Atiyah_Conjecture:subgroups}.\nWe know already from assertion~\\eqref{the:Main_properties_of_cald(G):skew_field} that\n$\\cald(K)$ and $\\cald(G)$ are skew fields. The ring $\\cald(K)_{t}[u^{\\pm 1}]$ is\na non-commutative principal ideal domain, see~\\cite[2.1.1 on page~49]{Cohn(1995)}\nor~\\cite[Proposition~4.5]{Cochran(2004)}. The claim that the Ore localization\n$T^{-1}\\cald(K)_{t}[u^{\\pm 1}]$ exists and is isomorphic to $\\cald(G)$ is proved in\nthe case $F = \\IC$ in~\\cite[Lemma~10.60 on page~399]{Lueck(2002)}. The proof goes through\nfor an arbitrary field $F$ with $\\IQ \\subseteq F \\subseteq \\IC$ without\nmodifications. \n\\\\[1mm]~\\eqref{the:Main_properties_of_cald(G):chain} \nWe write the group ring $\\IQ G$ as the ring $\\IQ K_{t}[u^{\\pm 1}]$ of twisted Laurent\npolynomials with coefficients in $\\IQ K$. 
We get a commutative diagram of inclusions of rings,\nwhere $\\cald(K)_{t}[u^{\\pm 1}]$ is a (non-commutative) principal ideal domain and\n$\\cald(K)$ and $\\cald(G)$ are skew fields\n\n\\[\\xymatrix@C1.2cm@R0.5cm{\\IQ K \\ar[r] \\ar[d]\n&\n\\IQ G = \\IQ K_{t}[u^{\\pm 1}] \\ar[dd] \\ar[ldd]\n\\\\\n\\cald(K) \\ar[d]\n&\n\\\\\n\\cald(K)_{t}[u^{\\pm 1}] \\ar[r] \\ar[d]\n&\n\\cald(G) \\ar[d]^{\\id}\n\\\\\nT^{-1}\\cald(K)_{t}[u^{\\pm 1}] \\ar[r]^-{\\cong}\n&\n\\cald(G).\n}\n\\]\nSince $C_*$ is a finitely generated projective $\\IQ G$-chain complex by assumption,\nthe $\\cald(K)_{t}[u^{\\pm 1}]$-chain complex $\\cald(K)_{t}[u^{\\pm 1}] \\otimes_{\\IQ G} C_*$ \nis finitely generated projective. Since $\\cald(K)_{t}[u^{\\pm 1}]$ is a\n(non-commutative) principal ideal domain, it follows from~\\cite[p.~494]{Cohn(1985)}\nthat there exist integers $r,s \\ge 0$ and non-zero elements $p_1, p_2, \\ldots , p_s \\in \\cald(K)_{t}[u^{\\pm 1}]$ \nsuch that we get an isomorphism of $\\cald(K)_{t}[u^{\\pm 1}]$-modules\n\\[\nH_n\\bigl(\\cald(K)_{t}[u^{\\pm 1}] \\otimes_{\\IQ G} C_*\\bigr) \\cong \\cald(K)_{t}[u^{\\pm 1}]^r \\oplus \\bigoplus_{i = 1}^s \n\\cald(K)_{t}[u^{\\pm 1}]\/(p_i).\n\\]\nSince $\\cald(G) = T^{-1}\\cald(K)_{t}[u^{\\pm 1}]$ is flat over\n$\\cald(K)_{t}[u^{\\pm 1}]$, we conclude using\nassertion~\\eqref{the:Main_properties_of_cald(G):dim}\n\\begin{eqnarray*}\nr \n& = &\n\\dim_{\\cald(G)}\\bigl(\\cald(G)\\otimes_{\\cald(K)_{t}[u^{\\pm 1}]} \nH_n(\\cald(K)_{t}[u^{\\pm 1}] \\otimes_{\\IQ G} C_*)\\bigr)\n\\\\\n& = &\n\\dim_{\\cald(G)}\\bigl(\nH_n\\bigl(\\cald(G) \\otimes_{\\cald(K)_{t}[u^{\\pm 1}]} \\cald(K)_{t}[u^{\\pm 1}] \\otimes_{\\IQ G} C_*\\bigr)\\bigr)\n\\\\\n& = &\n\\dim_{\\cald(G)}\\bigl(\nH_n\\bigl(\\cald(G) \\otimes_{\\IQ G} C_*\\bigr)\\bigr)\n\\\\\n& = & \nb_n^{(2)}(\\caln(G) \\otimes_{\\IQ G} C_*).\n\\end{eqnarray*}\nSince by assumption $b_n^{(2)}(\\caln(G) \\otimes_{\\IQ G} C_*) = 0$ holds, we conclude\n\\[\nH_n(\\cald(K)_{t}[u^{\\pm 1}] \\otimes_{\\IQ G} C_*)\\,\\, \\cong \\,\\, \\bigoplus_{i = 1}^s \n\\cald(K)_{t}[u^{\\pm 1}]\/(p_i).\n\\]\nLemma~\\ref{lem:degree_and_rank} implies that $\\cald(K)_{t}[u^{\\pm 1}]\/(p_i)$ considered as\n$\\cald(K)$-module is finitely generated free. \nThis implies that $H_n\\bigl(\\cald(K)_{t}[u^{\\pm 1}] \\otimes_{\\IQ G} C_*\\bigr)$\nconsidered as $\\cald(K)$-module is finitely generated free.\nAssertion~\\eqref{the:Main_properties_of_cald(G):dim} \napplied to $K$ instead of $G$ implies\n\\[\nb_n^{(2)}\\bigl(\\caln(K) \\otimes_{\\IQ K} i^*C_*\\bigr) \n=\n\\dim_{\\cald(K)}\\bigl(H_n\\bigl(\\cald(K) \\otimes_{\\IQ K} i^* C_*\\bigr)\\bigr). \n\\]\nThere is an obvious isomorphism of $\\cald(K)$-chain complexes\n\\[\n\\cald(K) \\otimes_{\\IQ K} i^*C_* \\xrightarrow{\\cong} \\cald(K)_{t}[u^{\\pm 1}] \\otimes_{\\IQ G} C_*,\n\\quad \nx \\otimes_{\\IQ K} y \\mapsto x \\otimes_{\\IQ G} y\n\\]\nwhich induces an isomorphism of $\\cald(K)$-modules\n\\[\nH_n\\bigl(\\cald(K) \\otimes_{\\IQ K} i^*C_*\\bigr) \\xrightarrow{\\cong} H_n\\bigl(\\cald(K)_{t}[u^{\\pm 1}] \\otimes_{\\IQ G} C_*\\bigr).\n\\]\nHence we get\n\\[\n\\dim_{\\cald(K)}\\bigl(H_n\\bigl(\\cald(K) \\otimes_{\\IQ K} i^*C_*\\bigr)\\bigr) \n\\,\\,=\\,\\,\n\\dim_{\\cald(K)}\\bigl(H_n\\bigl(\\cald(K)_{t}[u^{\\pm 1}] \\otimes_{\\IQ G} C_*\\bigr)\\bigr). 
\n\\]\nThis finishes the proof of Theorem~\\ref{the:Main_properties_of_cald(G)}\nand hence also of Theorem~\\ref{the:Atiyah_and_(mu,phi)-L2-Euler_characteristic}.\n\\end{proof}\n\n\n\n\n \\typeout{-- Section 4: The $\\phi$-$L^2$-Euler characteristic is a lower bound for the Thurston norm ---------}\n\n\n\\section{The negative of the $(\\mu,\\phi)$-$L^2$-Euler characteristic is a lower bound for the Thurston norm}\n\\label{subsec:The_(mu,phi)-L2-Euler_characteristic_is_a_lower_bound_for_the_Thurston_norm}\n\n\n\n\\begin{theorem}[The negative of the $(\\mu,\\phi)$-$L^2$-Euler characteristic is a lower bound for the Thurston norm]\n\\label{the:The_Thurston_norm_ge_the_(mu,phi)-L2-Euler_characteristic}\nLet $M\\ne S^1\\times D^2$ be an admissible $3$-manifold and let $(\\mu,\\phi)$ be an $L^2$-acyclic Atiyah-pair.\nThen $M$ is $(\\mu,\\phi)$-$L^2$-finite and we get\n\\[\n-\\chi^{(2)}(M;\\mu,\\phi) \\,\\,\\le \\,\\,x_M(\\phi \\circ \\mu).\n\\]\n\\end{theorem}\n\nIts proof needs some preparation.\n\n\n\\subsection{Some preliminaries about twisted Laurent polynomials over skew fields}\n\\label{subsec:Some_preliminaries_about_twisted_Laurent_polynomials_over_skew_fields}\n\nIn this subsection we consider a skew field $\\cald$ together with an automorphism \n$t\\colon \\cald \\to \\cald$ of skew fields. Let $\\cald_{t}[u^{\\pm 1}]$ be the ring of twisted Laurent\npolynomials over $\\cald$. For a non-trivial element $x = \\sum_{i \\in \\IZ} d_i \\cdot u^i$\nin $\\cald_{t}[u^{\\pm 1}]$ we define its degree to be the natural number\n \\begin{eqnarray}\n \\deg(x) & := & n_+ - n_-\n \\label{degree_of_a_Laurent_polynomials}\n \\end{eqnarray}\n where $n_-$ the smallest integer such that $d_{n_-}$ does not vanish, and $n_+$ is largest\n integer such that $d_{n_+}$ does not vanish.\n\n \\begin{lemma} \\label{lem:degree_and_rank} \n Consider a non-trivial element $x$ in\n $\\cald_{t}[u^{\\pm 1}]$. Then the $\\cald_{t}[u^{\\pm 1}]$-homomorphism $r_x\n \\colon \\cald_{t}[u^{\\pm 1}] \\to \\cald_{t}[u^{\\pm 1}]$ given by right\n multiplication with $x$ is injective and its cokernel has finite dimension over\n $\\cald$, namely,\n \\[\\dim_{\\cald}(\\coker(r_x)) = \\deg(x).\n \\]\n \\end{lemma}\n \\begin{proof} Notice that for two non-trivial elements $x$ and $y$ we have $n_-(xy) = n_-(x) + n_-(y)$,\n $n_+(xy) = n_+(x) + n_+(y)$, and $\\deg(xy) = \\deg(x) + \\deg(y)$. Now one easily checks that\n $r_x$ is injective and that a $\\cald$-basis for the cokernel of $r_x$ is given by the image of the subset\n $\\{u^0, u^1, \\ldots , u^{\\deg(x)-1}\\}$ of $\\cald_{t}[u^{\\pm 1}]$ under the canonical projection\n $\\cald_{t}[u^{\\pm 1}] \\to \\coker(r_x)$.\n\n\\end{proof}\n\n\n\\begin{lemma} \\label{rank_estimate_for_a_plus_t_I_k} \nConsider integers $k,n$ with $0 \\le k$, $1 \\le n$ and $k \\le n$. Denote by\n $I_k^n$ the $(n,n)$-matrix whose first $k$ entries on the diagonal are $1$ and all of\n whose other entries are zero. Let $A$ be an $(n,n)$-matrix over $\\cald$. Suppose that\n the $\\cald_{t}[u^{\\pm 1}]$-map \n \\[\n r_{A + u \\cdot I_k^n} \\colon\n \\cald_{t}[u^{\\pm 1}]^n \\to \\cald_{t}[u^{\\pm 1}]^n\n \\] \n given by right multiplication with $A + u \\cdot I_k^n$ is injective. \n\n Then the dimension over $\\cald$ of its cokernel is finite and satisfies\n\\[\n\\dim_{\\cald}\\bigl(\\coker\\bigl(r_{A + u \\cdot I_k^n} \\colon \n\\cald_{t}[u^{\\pm 1}]^n \\to \\cald_{t}[u^{\\pm 1}]^n\\bigr)\\bigr) \\,\\,\\le\\,\\, k.\n\\]\n\\end{lemma}\n\\begin{proof}\n We use induction over the size $n$. 
If $ k = n$, the claim has already been proved\n in~\\cite[Proposition~9.1]{Harvey(2005)}. So we can assume in the sequel $k < n$. We\n perform certain row and column operations on matrices $B \\in\n M_{n,n}(\\cald_{t}[u^{\\pm 1}])$ and it will be obvious that they will respect the\n property that $r_B \\colon \\cald_{t}[u^{\\pm 1}]^n \\to \\cald_{t}[u^{\\pm 1}]^n$ is\n injective and will not change $\\dim_{\\cald}\\bigl(\\coker\\bigl(r_{B} \\colon\n \\cald_{t}[u^{\\pm 1}]^n \\to \\cald_{t}[u^{\\pm 1}]^n\\bigr)\\bigr)$. With the help\n of these operations we will reduce the size of $A$ by $1$ and this will finish the\n induction step. To keep the exposition easy, we explain the induction step from $(n-1)$\n to $n$ in the special case $k = 3$ and $n = 5$. The general induction step is\n completely analogous.\n\n\nSo we start with \n\\[\nA + u \\cdot I_3^5 =\\footnotesize{\\begin{pmatrix}\nu + * & * & * & * & * \n\\\\\n* & u+ * &* & * & *\n\\\\\n* & * & u+* & * & *\n\\\\\n* & * & * & * & *\n\\\\\n* & * & * & * & *\n\\end{pmatrix}}\n\\]\nwhere here and in the sequel $*$ denotes some element in $\\cald$.\nWe first treat the case, where the $(2,2)$-submatrix sitting in the right lower corner is non-trivial.\nBy interchanging rows and columns and right multiplying a row with a non-trivial element in $\\cald$, we can achieve \n\n\\[\n\\footnotesize{\\begin{pmatrix}\nu+* & * & * & * & * \n\\\\\n* & u+* &* & * & *\n\\\\\n* & * & u+* & * & *\n\\\\\n* & * & * & * & *\n\\\\\n* & * & * & * & 1\n\\end{pmatrix}}\n\\]\nBy subtracting appropriate right $\\cald$-multiplies of the lowermost row from the other rows and \nsubtracting appropriate left $\\cald$-multiplies of the rightmost column from the other columns,\nwe achieve\n\\[\n\\footnotesize{\n\\begin{pmatrix}\nu+* & * & * & * & 0\n\\\\\n* & u+* &* & * & 0\n\\\\\n* & * & u+* & * & 0\n\\\\\n* & * & * & * & 0\n\\\\\n0 & 0 & 0 & 0 & 1\n\\end{pmatrix}}\n\\]\nFor this matrix the claim follows from the induction hypothesis applied to the $(4,4)$-matrix\nobtained by deleting the lowermost row and the rightmost column. \n\nIt remains to treat the case, where the matrix looks like\n\\[\n\\footnotesize{\n\\begin{pmatrix}\nu+* & * & * & * & * \n\\\\\n* & u+* &* & * & *\n\\\\\n* & * & u+* & * & *\n\\\\\n* & * & * & 0 & 0\n\\\\\n* & * & * & 0 & 0\n\\end{pmatrix}}\n\\]\nAt least one of the entries in the lowermost row must be non-trivial since the map induced\nby right multiplication with it is assumed to be injective. 
By interchanging rows and\ncolumns and right multiplying a row with a non-trivial element in $\\cald$, we can achieve\n\\[\n\\footnotesize{\n\\begin{pmatrix}\nu+* & * & * & * & * \n\\\\\n* & u+* &* & * & *\n\\\\\n* & * & u+* & * & *\n\\\\\n* & * & * & 0 & 0\n\\\\\n1 & * & * & 0 & 0\n\\end{pmatrix}}\n\\]\nBy subtracting appropriate $\\cald$-multiples of the first column from the second and third column, we\ncan arrange\n\\[\n\\footnotesize{\n\\begin{pmatrix}\nu+* & * \\cdot u+* & *\\cdot u+* & * & * \n\\\\\n* & u+* &* & * & *\n\\\\\n* & * & u+* & * & *\n\\\\\n* & * & * & 0 & 0\n\\\\\n1 & 0 & 0 & 0 & 0\n\\end{pmatrix}}\n\\]\nBy subtracting appropriate right $\\cald$-multiples of the last row from the other rows we can achieve\n\\[\n\\footnotesize{\n\\begin{pmatrix}\n0 & * \\cdot u+* & * \\cdot u+* & * & * \n\\\\\n0 & u+* &* & * & *\n\\\\\n0 & * & u+* & * & *\n\\\\\n0 & * & * & 0 & 0\n\\\\\n1 & 0 & 0 & 0 & 0\n\\end{pmatrix}}\n\\]\nBy subtracting right $\\cald$-multiples of the second and the third row from the first row and then interchanging rows\nwe can arrange\n\\[\n\\footnotesize{\n\\begin{pmatrix}\n0 & u+* & * & * & * \n\\\\\n0 & * & u+* & * & *\n\\\\\n0 & * & * & * & *\n\\\\\n0 & * & * & 0 & 0\n\\\\\n1 & 0 & 0 & 0 & 0\n\\end{pmatrix}}\n\\]\nSince the induction hypothesis applies to the matrix obtained by deleting the first column and the lowermost row,\nwe have finished the induction step, and hence the proof of Lemma~\\ref{rank_estimate_for_a_plus_t_I_k}.\n\\end{proof}\n\n\n\n\n\n\\subsection{Proof of Theorem~\\ref{the:The_Thurston_norm_ge_the_(mu,phi)-L2-Euler_characteristic}}\n\\label{subsec:Proof_of_Theorem_ref_the:The_Thurston_norm_ge_the_(mu,phi)-L2-Euler_characteristic)}\n\n\n\n\\begin{proof}[Proof of Theorem~\\ref{the:The_Thurston_norm_ge_the_(mu,phi)-L2-Euler_characteristic}]\nWe have already shown in Theorem~\\ref{the:Atiyah_and_(mu,phi)-L2-Euler_characteristic}\nthat $M$ is $(\\mu,\\phi)$-$L^2$-finite.\n\nBecause of~\\eqref{scaling_Thurston_norm} and Lemma~\\ref{lem:reduction_to_surjective_mu} \nwe can assume without loss of generality that $\\phi$ is surjective. \nLet $K$ be the kernel of $\\phi$ and $\\gamma$ be an element in $G$ with $\\phi(\\gamma) = 1$.\n It follows easily\n from~\\cite[Section~1]{Turaev(2002)} that we can find an oriented surface $\\Sigma\\subset M$ with\n components $\\Sigma_1,\\dots,\\Sigma_l$ and non-zero $r_1,\\dots,r_l\\in \\IN$ with the following\n properties: \n \\begin{itemize}\n\\item $r_1[\\Sigma_1]+\\dots+r_l[\\Sigma_l]$ is dual to $\\phi \\circ \\mu$,\n\\item $\\sum_{i=1}^l -r_i\\chi(\\Sigma_i)\\leq x_M(\\phi \\circ \\mu)$,\n\\item $M\\setminus \\Sigma$ is connected.\n\\end{itemize}\n(Here we use that $M\\ne S^1\\times D^2$ and that $M$ is irreducible.)\nFor $i=1,\\dots,l$ we pick disjoint oriented tubular neighborhoods $\\Sigma_i\\times [0,1]$\nand we identify $\\Sigma_i$ with $\\Sigma_i\\times \\{0\\}$. We write $M':=M\\setminus\n\\cup_{i=1}^l \\Sigma_i\\times [0,1]$. We pick once and for all a base point $p$ in $M'$ and\nwe denote by $\\widetilde{M}$ the universal cover of $M$. We write $\\pi=\\pi_1(M,p)$. For\n$i=1,\\dots,l$ we also pick a curve $\\nu_i'$ based at $p$ which intersects $\\Sigma_i$\nprecisely once in a positive direction and does not intersect any other component of\n$\\Sigma$. Put $\\nu_i = \\mu(\\nu_i')$. Note that $\\phi(\\nu_i)=r_i$. 
Finally for\n$i=1,\\dots,l$ we put\n\\[\nn_i :=- \\chi(\\Sigma_i)+2.\n\\]\n\nFollowing~\\cite[Section~4]{Friedl(2014twisted)} we now pick an appropriate $CW$-structure\nfor $M$ and we pick appropriate orientations and lifts of the cells to the universal\ncover. The resulting boundary maps are described in\ndetail~\\cite[Section~4]{Friedl(2014twisted)}. In order to keep the notation manageable we\nnow restrict to the case $l=2$, the general case is completely analogous.\n\nLet $\\overline{M} \\to M$ be the $G$-covering associated to $\\mu$. It follows from the\ndiscussion in~\\cite[Section~4]{Friedl(2014twisted)} and the definitions that the cellular\n$\\IZ G$-chain complex $C_*(\\overline{M})$ of $\\overline{M} $ looks like\n\\[\n\\xymatrix{0 \\ar[r] \n& \\IZ[G]^{4}\\ar[r]^-{B_3} \n& \\IZ[G]^{4+2n_1+2n_2+s}\\ar[r]^-{B_2} \n& \\IZ[G]^{4+2n_1+2n_2+s}\\ar[r]^-{B_1} \n& \\IZ[G]^4\\ar[r] \n& 0}\n\\]\nwhere $s$ is a natural number and the matrices $B_3,B_2,B_1$ are matrices of the form\n\\[\nB_3\n\\,\\, = \\,\\, \n\\footnotesize{\\begin{array}{ccccccc|c}\nn_1 & n_2 & 1 & 1 & 1 & 1 & s+n_1+n_2 &\n\\\\\n\\hline \n * & 0 & 1 & -\\nu_1& 0 & 0 & 0 & 1\n\\\\ \n 0 & 0 & 1 & -z_1 & 0 & 0 & * & 1\n\\\\\n 0 & * & 0 & 0 & 1 & -\\nu_2 & 0 & 1\n\\\\ \n 0 & 0 & 0 & 0 & 1 & -z_2 & * &1\n\\end{array} }\n\\]\n\\[\nB_2\n\\,\\,=\\,\\, \n\\footnotesize{\n\\begin{array}{ccccccccc|c}\n1 &1 & n_1 & n_1 & n_2 & n_2 & 1 & 1 & s\n\\\\\n\\hline\n * & 0 & \\id_{n_1} & -\\nu_1\\id_{n_1} & 0 & 0 & 0 & 0 & 0 & n_1\n\\\\ \n 0 & * & 0 & 0& \\id_{n_2} &-\\nu_2\\id_{n_2} & 0 & 0 & 0 & n_2\n\\\\ \n 0 & 0 & * & 0 & 0 & 0 & 0 & 0 & 0 & 1\n\\\\ \n 0 & 0 & 0 & * & 0 & 0 & 0 & 0 & 0 & 1\n\\\\ \n 0 & 0 & 0 & 0 & * & 0 & 0 & 0 & 0 & 1\n\\\\ \n 0 & 0 & 0 & 0 & 0 & * & 0 & 0 & 0 & 1\n\\\\ \n 0 & 0 & * & * & * & * & * & * & * & s+n_1+n_2\n\\end{array} }\n\\]\n\\[\nB_1\\,\\,=\\,\\,\n\\footnotesize{\n\\begin{array}{cccc|c}\n1 & 1 & 1 & 1\n\\\\\n\\hline\n 1& -\\nu_1 & 0 & 0 & 1\n\\\\ \n 0 & 0 & 1 &-\\nu_2 & 1\n\\\\ \n * & 0 & 0 & 0 & n_1\n\\\\\n 0 & * &0 & 0 & n_1\n\\\\\n0 & 0 & * & 0 & n_2\n\\\\\n0 & 0 & 0 & * & n_2\n\\\\\n1 & -x_1 & 0 & 0 & 1 \n\\\\\n0 & 0 & 1 & -x_2 & 1 \n\\\\ \n * & * & * & * & s\n\\end{array} }\n\\]\nwhere $x_1,x_2,z_1,z_2\\in K$, all the entries of the matrices marked by $*$ lie in $\\IZ K$\nand the entries above the horizontal line and right to the vertical arrow indicate the\nsize of the blocks. (Note that in~\\cite{Friedl(2014twisted)} we view the elements of\n$\\IZ[G]^n$ as column vectors whereas we now view them as row vectors.)\n\n\nDefine submatrices $B_i'$ of $B_i$ for $i = 1,2,3$ by \n\\[\n\\begin{array}{rcl} \nB_3'\n& = &\n\\footnotesize{\n\\begin{pmatrix} \n1 & -\\nu_1 & 0 & 0 \n\\\\ \n1 & -z_1 & 0 & 0 \n\\\\ \n0 & 0 & 1 & -\\nu_2 \n\\\\ \n0 & 0 & 1 &-z_2\n\\end{pmatrix}}\n\\\\\n&\n\\\\[-0.3cm]\nB_2'\n& = &\n\\footnotesize{\n\\begin{array}{ccccc|c}\n n_1 & n_1 & n_2 & n_2 & s & \n\\\\\n\\hline\n\\id_{n_1} & -\\nu_1\\id_{n_1} & 0 & 0 & 0 & n_1\n\\\\ \n0 & 0& \\id_{n_2} &-\\nu_2\\id_{n_2} & 0 & n_2\n\\\\ \n* & * & * & * & * & s+n_1+n_2\n\\end{array} }\n\\\\\n&\n\\\\[-0.3cm]\nB_1'\n& = &\n\\footnotesize{\n\\begin{pmatrix} \n1 & -\\nu_1 & 0 & 0 \n\\\\\n0 & 0 & 1 & -\\nu_2 \n\\\\\n1 & -x_1 & 0 & 0 \n\\\\\n0 & 0 & 1 & -x_2 \n\\end{pmatrix}}\n\\end{array}\n\\]\nwhere $B_1'$ and $B_3'$ are $(4,4)$-matrices and $B_2'$ is a $(2n_1+ 2n_2 + s,2n_1+2n_2+ s)$-matrix. 
\nFor $i = 1,2,3$ let $C_*[i]$ be the $\\IZ G$-chain complex\nconcentrated in dimensions $i$ and $i - 1$ whose $i$-th differential is given by the\nmatrix $B_i'$. There is an appropriate finitely generated free $\\IZ G$-chain complex\n$C_*'$ concentrated in dimensions $2$, $1$ and $0$ of the shape\n\\[\n\\ldots \\to 0 \\to 0 \\to \\IZ G^{2n_1 +2n_2+s} \\to \\IZ G^{4 + 2n_1 +2n_2+s} \\to \\IZ G^4\n\\]\nand obvious short based exact sequences of finitely generated based free $\\IZ G$-chain\ncomplexes\n\\[\n0 \\to C_*' \\to C_*(\\overline{M}) \\to C_*[3] \\to 0\n\\]\nand\n\\[\n0 \\to C_*[1] \\to C_*' \\to C_*[2] \\to 0.\n\\]\nConsider $\\nu' \\in G$ with $\\phi(\\nu') \\not= 0$ and $x' \\in K$. The matrix\n$\\begin{pmatrix}1& - \\nu' \\\\ 1 & x'\\end{pmatrix}$ can be transformed by subtracting the\nfirst row from the second row to the matrix \n$\\begin{pmatrix}1& -\\nu' \\\\ 0 & \\nu' + x'\\end{pmatrix}$. \nThe map $r_{\\nu' + x'} \\colon \\cald(K)_{t}[u^{\\pm 1}] \\to \\cald(K)_{t}[u^{\\pm 1}]$ \nis injective and its cokernel has dimension $|\\phi(\\nu')|$\nover $\\cald(K)$ by Lemma~\\ref{lem:degree_and_rank}.\nHence the map $\\cald(K)_{t}[u^{\\pm 1}]^2 \\to \\cald(K)_{t}[u^{\\pm 1}]^2$\ngiven by right multiplication with $\\begin{pmatrix}1& -\\nu' \\\\ 0 & \\nu' + x'\\end{pmatrix}$\nis injective and its cokernel has dimension $|\\phi(\\nu')|$\nover $\\cald(K)$. We conclude from Theorem~\\ref{the:Main_properties_of_cald(G)} for $l = 1,3$,\nusing notation~\\eqref{b_n(2)(caln(G)_otimes_ZGC_ast)} and defining for a skew field $\\cald$ and \na $\\cald$-chain complex $D_*$ its Betti number $b_n(D_*)$\nto be $\\dim_{\\cald}(H_n(D_*))$, that\n\\begin{eqnarray*}\nb_n^{(2)}\\bigl(\\caln(G) \\otimes_{\\IZ G} C_*[l]\\bigr) \n& = & \n0 \\quad \\text{for} \\; n \\ge 0;\n\\\\\nb_n\\bigl(\\cald(K) \\otimes_{\\IZ K} i^*C_*[l]\\bigr) \n& = & \n\\begin{cases}\n0 & \\text{if}\\; n \\not= l-1;\\\\\nr_1 + r_2 & \\text{if}\\; n = l-1;\n\\end{cases}\n\\\\\n\\chi^{(2)}\\bigl(\\caln(K) \\otimes_{\\IZ K} i^*C_*[l]\\bigr) \n& = & \nr_1 + r_2.\n\\end{eqnarray*}\nSince $b_n^{(2)}\\bigl(\\caln(G) \\otimes_{\\IZ G} C_*(\\overline{M})\\bigr) = 0$ holds\nfor all $n \\ge 0$ by assumption, we get\n\\[\nb_n^{(2)}\\bigl(\\caln(G) \\otimes_{\\IZ G} C_*[2]\\bigr)\n = \n0 \\quad \\text{for} \\; n \\ge 0.\n\\]\nWe conclude from Theorem~\\ref{the:Main_properties_of_cald(G)}~\\eqref{the:Main_properties_of_cald(G):dim}\n\\[\n\\dim_{\\cald(G)}\\left(\\ker\\bigl(r_{B_2'} \\colon \\cald(G)^{1 +2n_1 +2n_2+s} \\to \\cald(G)^{1 +2n_1 +2n_2+s}\\bigr)\\right) = 0.\n\\]\nTheorem~\\ref{the:Main_properties_of_cald(G)}~\\eqref{the:Main_properties_of_cald(G):cald(K)_and_(cald(G)} \nimplies that \n\\[\nr_{B_2'} \\colon \\cald(K)_{t}[u^{\\pm 1}]^{1 +2n_1 +2n_2+s} \\to \\cald(K)_{t}[u^{\\pm 1}]^{1 +2n_1 +2n_2+s}\n\\]\nis injective. 
We get using Theorem~\\ref{the:Main_properties_of_cald(G)}~\\eqref{the:Main_properties_of_cald(G):dim}\n\\begin{eqnarray}\n\\label{the:The_Thurston_norm_and_the_(mu,phi)_L2-Euler_characteristic:(3)}\n&&\n\\\\\n&&\\chi^{(2)}\\bigl(\\caln(K) \\otimes_{\\IZ K} i^*C_*(\\overline{M})\\bigr) \\,\\,=\\,\\,\\lmsum{l = 1}{3} \\chi^{(2)}\\bigl(\\caln(K) \\otimes_{\\IZ K} i^*C_*[l]\\bigr)\n\\nonumber\n\\\\\n& = & \n2r_1 + 2r_2 - \\chi^{(2)}\\bigl(\\caln(K) \\otimes_{\\IZ K} i^*C_*[2]\\bigr)\n\\nonumber\n\\\\\n& = & \n2r_1 + 2r_2 - \\chi\\bigl(\\cald(K) \\otimes_{\\IZ K} i^*C_*[2]\\bigr)\n\\nonumber\\\\\n& = &\n2r_1 + 2r_2 \n\\nonumber\\\\\n& & - \\dim_{\\cald(K)}\\bigl(\\coker\\bigl(r_{B_2'} \\colon \\cald(K)_{t}[u^{\\pm 1}]^{1 +2n_1 +2n_2+s} \n\\to \\cald(K)_{t}[u^{\\pm 1}]^{1 +2n_1 +2n_2+s}\\bigr)\\bigr).\n\\nonumber\n\\end{eqnarray}\n\nSince $\\dim_{\\caln(G)}\\bigl(\\ker\\bigl(\\id_{\\caln(G)} \\otimes_{\\IZ G} r_{B_2'} \\colon \\caln(G)^{1 +2n_1 +2n_2+s}\n\\to \\caln(G)^{1 +2n_1 +2n_2+s}\\bigr)\\bigr)$ coincides with $b_n^{(2)}(\\caln(G) \\otimes_{\\IZ G} C_*')$ and $b_n^{(2)}(\\caln(G) \\otimes_{\\IZ G} C_*')$ is zero, one\nof the entries in the rightmost column of $B_2'$ is\nnon-trivial. By switching rows and multiplying a row with a non-trivial element in $\\cald(K)$, \nwe can arrange that the entry in the lower right corner is $1$. By elementary row and\ncolumn operations we can arrange that the lowermost row and the rightmost column have all\nentries zero except the element in the lower right corner which is still equal to $1$. By\niterating this process we can transform $B_2'$ by such row and column operations into a\nmatrix of the shape\n\\[\nB_2''\n\\,\\,=\\,\\,\n\\footnotesize{ \n\\begin{array}{ccccc|c}\n n_1 & n_1 & n_2 & n_2 & s & \n\\\\\n\\hline\n\\id_{n_1} & -\\nu_1\\id_{n_1} & 0 & 0 & 0 & n_1\n\\\\ \n0 & 0& \\id_{n_2} &-\\nu_2\\id_{n_2} & 0 & n_2\n\\\\ \n* & * & * & * & * & n_1+n_2\n\\\\ \n0 & 0 & 0 & 0 & \\id_s & s\n\\end{array} }\n\\]\nLet $B_2'''$ be the $(2n_1+ 2n_2)$-submatrix of $B_2''$ given by\n\\[\nB_2'''\n\\,\\, =\\,\\,\n\\footnotesize{ \n\\begin{array}{cccc|c}\n n_1 & n_1 & n_2 & n_2 & \n\\\\\n\\hline\n\\id_{n_1} & -\\nu_1\\id_{n_1} & 0 & 0 & n_1\n\\\\ \n0 & 0& \\id_{n_2} &-\\nu_2\\id_{n_2} & n_2\n\\\\ \n* & * & * & * & n_1+n_2\n\\end{array}}\n\\]\nAn inspection of the proof of~\\cite[Lemma~9.3]{Dubois-Friedl-Lueck(2014Alexander)} shows\nthat by elementary row and column operations and taking block sum with triangular matrices\nhaving $\\id$ on each diagonal entry,\nwe can transform $B_2'''$ into a matrix of the shape\n\\[\nB_2'''' = A'''' + u \\cdot I_{r_1\\cdot n_1 + r_2\\cdot n_2}^{2r_1\\cdot n_1 +2r_2 \\cdot n_2}\n\\]\nfor some matrix $A'''' \\in M_{2n_1 +2n_2,2n_1 +2n_2}(\\cald(K))$ and\n$I_{r_1\\cdot n_1 + r_2\\cdot n_2}^{2r_1\\cdot n_1 +2r_2 \\cdot n_2}$ as introduced in Lemma~\\ref{rank_estimate_for_a_plus_t_I_k}. 
\nObviously we have\n\\begin{multline*}\n\\dim_{\\cald(K)}\\bigl(\\coker\\bigl(r_{B_2'} \\colon \\cald(K)_{t}[u^{\\pm 1}]^{1 +2n_1 +2n_2+s} \n\\to \\cald(K)_{t}[u^{\\pm 1}]^{1 +2n_1 +2n_2+s}\\bigr)\\bigr)\n\\\\\n= \\dim_{\\cald(K)}\\bigl(\\coker\\bigl(r_{B_2''''} \\colon \\cald(K)_{t}[u^{\\pm 1}]^{2n_1 +2n_2} \n\\to \\cald(K)_{t}[u^{\\pm 1}]^{2n_1 +2n_2}\\bigr)\\bigr).\n\\end{multline*}\nWe conclude from Lemma~\\ref{rank_estimate_for_a_plus_t_I_k} applied to $B_2''''$\n\\begin{multline*}\n\\dim_{\\cald(K)}\\bigl(\\coker\\bigl(r_{B_2''''} \\colon \\cald(K)_{t}[u^{\\pm 1}]^{2n_1 +2n_2} \n\\to \\cald(K)_{t}[u^{\\pm 1}]^{2n_1 +2n_2}\\bigr)\\bigr)\n\\\\\n\\le r_1 \\cdot n_1 + r_2 \\cdot n_2.\n\\end{multline*}\nHence we get \n\\begin{multline}\n\\dim_{\\cald(K)}\\bigl(\\coker\\bigl(r_{B_2'} \\colon \\cald(K)_{t}[u^{\\pm 1}]^{1 +2n_1 +2n_2+s} \n\\to \\cald(K)_{t}[u^{\\pm 1}]^{1 +2n_1 +2n_2+s}\\bigr)\\bigr)\n\\label{the:The_Thurston_norm_and_the_(mu,phi)_L2-Euler_characteristic:(4)}\n\\\\\n\\le r_1 \\cdot n_1 + r_2 \\cdot n_2.\n\\end{multline}\nWe conclude from~\\eqref{the:The_Thurston_norm_and_the_(mu,phi)_L2-Euler_characteristic:(3)}\nand~\\eqref{the:The_Thurston_norm_and_the_(mu,phi)_L2-Euler_characteristic:(4)}\n\\[\n\\chi^{(2)}\\bigl(\\caln(K) \\otimes_{\\IZ K} i^*C_*(\\overline{M})\\bigr) \\ge 2r_1 + 2r_2 - r_1 \\cdot n_1 - r_2 \\cdot n_2.\n\\]\nThis together with Lemma~\\ref{lem:phi-twisted_as_ordinary_L2_Euler_characteristic} implies\n\\[ \\begin{array}{rcl}\n- \\chi^{(2)}(M;\\mu,\\phi) \n& = &\n- \\chi^{(2)}\\big(\\caln(K) \\otimes_{\\IZ K} i^*C_*(\\overline{M})\\big)\\\\\n& \\le &\n r_1 n_1 + r_2 n_2 - 2r_1 - 2r_2\n\\\\\n& = & \n r_1 (n_1 -2) + r_2 (n_2 -2)\n\\,\\, = \\,\\,\n- r_1 \\chi(\\Sigma_1) - r_2 \\chi(\\Sigma_2)\n\\,\\,\\le \\,\\,\nx_M(\\phi \\circ \\mu).\n\\end{array}\n\\]\nThis finishes the proof of Theorem~\\ref{the:The_Thurston_norm_ge_the_(mu,phi)-L2-Euler_characteristic}.\n\\end{proof}\n\n\n\n \\typeout{---- Section 5: Fox calculus and the $(\\mu,\\phi)$-$L^2$-Euler characteristic ------}\n\n\n\\section{Fox calculus and the $(\\mu,\\phi)$-$L^2$-Euler characteristic}\n\\label{sec:Fox_calculus_and_the_(mu,phi)-L2-Euler_characteristic}\n\nThe following calculation of the $(\\mu,\\phi)$-$L^2$-Euler characteristic from a\npresentation of the fundamental group is adapted from the corresponding calculation for the\n$L^2$-torsion function appearing in~\\cite[Theorem~2.1]{Friedl-Lueck(2015l2+Thurston)} and\nwill be used when we compare these two invariants and the higher order Alexander\npolynomials. For information about the Fox matrix and the Fox calculus we refer for instance \nto~\\cite[9B on page 123]{Burde-Zieschang(1985)} and~\\cite{Fox(1953)}. \n\n\n\\begin{theorem}[Calculation of the $(\\mu,\\phi)$-$L^2$-Euler characteristic \nfrom a presentation of the fundamental group]\n \\label{the:calculation_of_(mu,phi)_L2-Euler_characteristic_from_a_presentation} \nLet $M$ be an admissible $3$-manifold with fundamental group $\\pi$. Let\n\\[\n\\pi = \\langle x_1,x_2, \\ldots, x_a \\mid R_1,R_2, \\ldots, R_b \\rangle\n\\]\nbe a presentation of $\\pi$. Let the $(b,a)$-matrix over $\\IZ \\pi$\n\\[\nF = \n\\left( \\begin{array}{ccc} \\frac{\\partial R_1}{\\partial x_1}\n & \\ldots & \\frac{\\partial R_1}{\\partial x_a} \\\\\n \\vdots & \\ddots & \\vdots \\\\\n\\frac{\\partial R_b}{\\partial x_1} & \\ldots & \\frac{\\partial R_b}{\\partial x_a}\n\\end{array}\\right) \n\\]\nbe the Fox matrix of the presentation. 
\n\nLet $G$ be a torsion-free group which satisfies the Atiyah Conjecture.\nConsider two group homomorphisms \n$\\mu \\colon \\pi \\to G$ and $\\phi \\colon G \\to \\IZ$.\nSuppose that $\\mu$ is non-trivial and that $\\phi$ is surjective.\n\nLet $i \\colon K \\to G$ be the inclusion of the kernel $K = \\ker(\\phi)$ of \n$\\phi$ and let $t \\colon \\cald(K) \\xrightarrow{\\cong} \\cald(K)$ be the automorphism\nintroduced in \nTheorem~\\ref{the:Main_properties_of_cald(G)}~\\eqref{the:Main_properties_of_cald(G):cald(K)_and_(cald(G)}. \nLet $\\overline{M} \\to M$ be the $G$-covering associated to $\\mu$. \n\n\n\n\\begin{enumerate}\n\n\\item \\label{the:calculation_of_(mu,phi)_L2-Euler_characteristic_from_a_presentation:non-empty} \nSuppose that $\\partial M$ is non-empty and that $a = b + 1$. \nChoose $i \\in \\{1,2, \\ldots, a\\}$ with $|\\mu(x_i)| = \\infty$.\nLet $A$ be the $(a-1,a-1)$-matrix with entries in $\\IZ G$\nobtained from the Fox matrix $F$ by deleting the $i$-th column\nand then applying the homomorphism $M_{a-1,a-1}(\\IZ \\pi) \\to M_{a-1,a-1}(\\IZ G)$ \ninduced by $\\mu \\colon \\pi \\to G$.\n\nThen $b_n^{(2)}(\\overline{M};\\caln(G))$ vanishes for all $n \\ge 0$ if and only if we have\n$\\dim_{\\caln(G)}\\bigl(\\ker(r_A \\colon \\caln(G)^{a-1} \\to \\caln(G)^{a-1})\\bigr) = 0$.\nIf this is the case, then $(\\mu,\\phi)$ is an $L^2$-acyclic Atiyah-pair in the sense of \nDefinition~\\ref{def:L2-acyclic_Atiyah-pair} and we get \n\\begin{multline*}\n\\chi^{(2)}(M;\\mu,\\phi)\n\\\\\n= \n- \\dim_{\\cald(K)}\\bigl(\\coker\\bigl(r_A \\colon \\cald(K)_{t}[u^{\\pm 1}]^{a-1} \n\\to \\cald(K)_{t}[u^{\\pm 1}]^{a-1}\\bigr)\\bigr)\n\\\\\n+ |\\phi \\circ \\mu(x_i)|;\n\\end{multline*}\n\n\n\\item \\label{the:calculation_of_(mu,phi)_L2-Euler_characteristic_from_a_presentation:empty} \nSuppose $\\partial M$ is empty. We make the assumption\nthat the given presentation comes from a Heegaard decomposition\nas described in~\\cite[Proof of Theorem~5.1]{McMullen(2002)}.\nThen $a = b$ and there is another set of dual generators \n$\\{x_1', x_2', \\ldots, x_a'\\}$ coming from the Heegaard decomposition as described \nin~\\cite[Proof of Theorem~5.1]{McMullen(2002)}. Choose $i,j \\in \\{1,2, \\ldots, a\\}$ \nwith $|\\mu(x_i)| = \\infty$ and $|\\mu(x_j')| = \\infty$.\nLet $A$ be the $(a-1,a-1)$-matrix with entries in $\\IZ G$\nobtained from the Fox matrix $F$ by deleting the $i$th column and the $j$th row\nand then applying the homomorphism $M_{a-1,a-1}(\\IZ \\pi) \\to M_{a-1,a-1}(\\IZ G)$\ninduced by $\\mu \\colon \\pi \\to G$.\n\n\nThen $b_n^{(2)}(\\overline{M};\\caln(G))$ vanishes for all $n \\ge 0$ if and only if we have\n$\\dim_{\\caln(G)}\\bigl(\\ker(r_A \\colon \\caln(G)^{a-1} \\to \\caln(G)^{a-1})\\bigr) = 0$.\nIf this is the case, then $(\\mu,\\phi)$ is an $L^2$-acyclic Atiyah-pair and we get \n\\begin{multline*}\n\\chi^{(2)}(M;\\mu,\\phi) \n\\\\\n= \n- \\dim_{\\cald(K)}\\bigl(\\coker\\bigl(r_A \\colon \\cald(K)_{t}[u^{\\pm 1}]^{a-1} \n\\to \\cald(K)_{t}[u^{\\pm 1}]^{a-1}\\bigr)\\bigr)\n\\\\\n+ |\\phi \\circ \\mu(x_i)| + |\\phi \\circ \\mu(x_j')|.\n\\end{multline*}\n\\end{enumerate}\n\\end{theorem}\n\\begin{proof} \n We only treat the case where $\\partial M$ is empty, and leave it to the reader to\n figure out the details for the case of a non-empty boundary using the proof\n of~\\cite[Theorem~2.4]{Lueck(1994a)}. 
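\n\nThe following standard computation is included only as an illustration of the Fox matrix and is not needed in the argument below. Consider the presentation $\\langle x_1,x_2 \\mid x_1x_2x_1x_2^{-1}x_1^{-1}x_2^{-1} \\rangle$ of the trefoil knot group, so that $a = b + 1$ holds as in assertion~\\eqref{the:calculation_of_(mu,phi)_L2-Euler_characteristic_from_a_presentation:non-empty}. Using the rules of the Fox calculus, namely $\\frac{\\partial (vw)}{\\partial x_j} = \\frac{\\partial v}{\\partial x_j} + v \\cdot \\frac{\\partial w}{\\partial x_j}$, $\\frac{\\partial x_i}{\\partial x_j} = \\delta_{ij}$ and $\\frac{\\partial x_i^{-1}}{\\partial x_j} = -\\delta_{ij} \\cdot x_i^{-1}$, one obtains for the single relator $R_1 = x_1x_2x_1x_2^{-1}x_1^{-1}x_2^{-1}$ the Fox matrix\n\\[\nF \n\\,\\, = \\,\\,\n\\left(\\frac{\\partial R_1}{\\partial x_1}, \\, \\frac{\\partial R_1}{\\partial x_2}\\right)\n\\,\\, = \\,\\,\n\\bigl(1 + x_1x_2 - x_1x_2x_1x_2^{-1}x_1^{-1}, \\,\\, x_1 - x_1x_2x_1x_2^{-1} - x_1x_2x_1x_2^{-1}x_1^{-1}x_2^{-1}\\bigr).\n\\]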
\n\n \n From~\\cite[Proof of Theorem~5.1]{McMullen(2002)} we\n obtain a compact $3$-dimensional $CW$-complex $X$ together with a homotopy equivalence\n $f \\colon X \\to M$ and a set of generators $\\{x_1', \\ldots ,x_a'\\}$ such that\n the $\\IZ G$-chain complex $C_*(\\overline{X})$ of $\\overline{X}$ looks like \n\\[\n\\IZ G \n\\xrightarrow{\\prod_{i=1}^a r_{\\mu(x_i') - 1}} \n\\bigoplus_{i=1}^a \\IZ G \n\\xrightarrow{r_{\\mu(F)}} \\bigoplus_{i=1}^a \\IZ G \n\\xrightarrow{\\bigoplus_{i=1}^a r_{\\mu(x_i) - 1}} \\IZ G,\n\\]\nwhere $\\mu(F)$ is the image of $F$ under the map $M_{a,a}(\\IZ \\pi) \\to M_{a,a}(\\IZ G)$ induced \nby $\\mu$ and $\\overline{X} \\to X$ is the pullback of $\\overline{M} \\to M$ with $f$.\n\nFor $g \\in G$ with $g \\not= 1$ let $D(g)_*$ be the $1$-dimensional\n$\\IZ G$-chain complexes which has as first differential\n$r_{g -1} \\colon \\IZ G\\to \\IZ G$. Since $g$ generates an infinite cyclic subgroup of $G$,\nwe conclude $b_n^{(2)}(\\caln(G) \\otimes_{|\\IZ G} D(g)_*) = 0$\nfor $n \\ge 0$ from~\\cite[Lemma~1.24~(4) on page~30 and Lemma~1.34~(1) on page~35]{Lueck(2002)}.\nTheorem~\\ref{the:Main_properties_of_cald(G)}~\\eqref{the:Main_properties_of_cald(G):chain} \nand Lemma~\\ref{lem:degree_and_rank} imply\n\\begin{eqnarray}\n\\chi^{(2)}\\bigl(\\caln(K) \\otimes_{\\IZ K} i^*D_*(g)) & = & |\\phi(g)|.\n\\label{chi(2)(D(g)}\n\\end{eqnarray}\nThere is a surjective $\\IZ G$-chain map \n$p_* \\colon C_*(\\overline{X}) \\to \\Sigma^2 D(\\mu(x_j'))_*$ \nwhich is the identity in degree $3$ and the\nprojection onto the summand corresponding to $j$ in degree $2$, and an injective \n$\\IZ G$-chain map $i_* \\colon D(\\mu(x_i))_* \\to C_*(\\overline{X})$\nwhich is the identity in degree $0$ and the inclusion to the summand corresponding to\n$i$ in degree $1$. Let $P_*$ be the kernel of $p_*$ and let $Q_*$ be the cokernel of\nthe injective map $j_* \\colon D(\\mu(x_1))_* \\to P_*$ induced by \n$i_* \\colon D(\\mu(x_1))_* \\to C_*(\\overline{X})$. \nThen $Q_*$ is concentrated in dimensions $1$ and $2$ and its second differential is\n$r_A \\colon \\IZ G^{a-1} \\to \\IZ G^{a-1}$. We conclude from the long exact\nhomology sequence of a short exact sequence of Hilbert $\\caln(G)$-chain complexes,\nthe homotopy invariance of $L^2$-Betti numbers and the additivity of the von Neumann dimension\n\\[\nb_n^{(2)}(\\overline{X};\\caln(G)) = \n\\begin{cases}\n\\dim_{\\caln(G)}\\bigl(\\ker(r_A \\colon \\caln(G)^{a-1} \\to \\caln(G)^{a-1}))\\bigr)& \\text{if} \\; n = 1,2;\n\\\\\n0 & \\text{otherwise}.\n\\end{cases}\n\\]\nThis shows that $b_n^{(2)}(\\overline{M};\\caln(G))$ vanishes for all $n \\ge 0$ if and only if we have\n$\\dim_{\\caln(G)}\\bigl(\\ker(r_A \\colon \\caln(G)^{a-1} \\to \\caln(G)^{a-1}))\\bigr) = 0$.\n\nSuppose that this is the case. 
Then $(\\mu,\\phi)$ is an $L^2$-acyclic Atiyah-pair.\nWe conclude from Theorem~\\ref{the:Main_properties_of_cald(G)}\nthat\n$r_A \\colon \\cald(G)^{a-1} \\to \\cald(G)^{a-1}$ is injective and hence\n$r_A \\colon \\cald(K)_{t}[u^{\\pm 1}]^{a-1} \\to \\cald(K)_{t}[u^{\\pm 1}]^{a-1}$ is injective\nand \n\\begin{multline}\n\\chi^{(2)}(\\caln(K) \\otimes_{\\IZ K} i^*Q_*)\n\\\\\n = \n- \\dim_{\\cald(K)}\\bigl(\\coker\\bigl(r_A \\colon \\cald(K)_{t}[u^{\\pm 1}]^{a-1} \\to \\cald(K)_{t}[u^{\\pm 1}]^{a-1}\\bigr)\\bigr).\n\\label{chi(2)(caln(K)_otimes_iD(g)}\n\\end{multline}\nWe get from Lemma~\\ref{lem:phi-twisted_as_ordinary_L2_Euler_characteristic}\nand the chain complex version of~\\cite[Theorem~6.80~(2) on page~277]{Lueck(2002)}\napplied to the short exact sequences of $\\caln(K)$-chain complexes obtained by applying\n$\\caln(K) \\otimes_{\\IZ K} i^*-$ to the short exact sequences of $\\IZ G$-chain complexes\n$0 \\to P_* \\to C_*(\\overline{X}) \\to \\Sigma^2 D(\\mu(x_j'))_*\\to 0$\nand $0 \\to D(\\mu(x_i))_* \\to P_* \\to Q_*\\to 0$\n\\begin{eqnarray*}\n\\lefteqn{\\chi^{(2)}(M;\\mu,\\phi)}\n& & \n\\\\\n& = & \n\\chi^{(2)}(\\caln(K) \\otimes_{\\IZ K} i^*C_*(\\overline{X}))\n\\\\\n& = & \n\\chi^{(2)}(\\caln(K) \\otimes_{\\IZ K} i^*Q_*) + \\chi^{(2)}(\\caln(K) \\otimes_{\\IZ K} i^*D(\\mu(x_j'))_*) \n\\\\ \n& & \\quad \\quad \\quad \\quad + \n\\chi^{(2)}(\\caln(K) \\otimes_{\\IZ K} i^*D(\\mu(x_i))_*) \n\\\\\n& \\stackrel{\\eqref{chi(2)(D(g)} \\,\\text{and} \\,\\eqref{chi(2)(caln(K)_otimes_iD(g)}}{=}& \n- \\dim_{\\cald(K)}\\bigl(\\coker\\bigl(r_A \\colon \\cald(K)_{t}[u^{\\pm 1}]^{a-1} \\to \\cald(K)_{t}[u^{\\pm 1}]^{a-1}\\bigr)\\bigr)\n\\\\\n& & \n\\quad \\quad \\quad \\quad + |\\phi \\circ \\mu(x_i)| + |\\phi \\circ \\mu(x_j')|.\n\\end{eqnarray*}\nThis finishes the proof of Theorem~\\ref{the:calculation_of_(mu,phi)_L2-Euler_characteristic_from_a_presentation}.\n\\end{proof}\n\n\n\n\\begin{lemma} \\label{lem:b_1(2)_is_zero_implies_L2-acyclic}\n\\begin{enumerate}[font=\\normalfont]\n\\item \\label{lem:b_1(2)_is_zero_implies_L2-acyclic:b_1}\nLet $M$ be an admissible $3$-manifold. Consider an infinite group $G$ \nand a $G$-covering $\\overline{M} \\to M$. Then we get $b_n^{(2)}(\\overline{M}; \\caln(G)) = 0$ for\nall $n \\ge 0$ if $b_1^{(2)}(\\overline{M};\\caln(G)) = 0$;\n\n\n\\item \\label{lem:b_1(2)_is_zero_implies_L2-acyclic:universal}\nIf $M$ is an admissible $3$-manifold,\nthen we get $b_n^{(2)}(\\widetilde{M};\\caln(\\pi)) = 0$ for all $n \\ge 0$.\n\\end{enumerate}\n\n\\end{lemma}\n\\begin{proof}~\\eqref{lem:b_1(2)_is_zero_implies_L2-acyclic:b_1}\nSince $G$ is infinite, we have $b_0^{(2)}(\\overline{M};\\caln(G)) = 0$\nby~\\cite[Theorem~1.35~(8) on page~38]{Lueck(2002)}. Using Poincar\\'e duality in the closed case,\nsee~\\cite[Theorem~1.35~(3) on page~37]{Lueck(2002)}, we conclude\n$b_m^{(2)}(\\overline{M};\\caln(G)) = 0$ for $m \\ge 3$. Since $\\chi(M) = 0$, we get\n$b_1^{(2)}(\\overline{M};\\caln(G)) = b_2^{(2)}(\\overline{M};\\caln(G))$. Hence the\nassumption $b_1^{(2)}(\\overline{M};\\caln(G)) = 0$ implies that we have\n$b_m^{(2)}(\\overline{M};\\caln(G)) = 0$ for all $m \\ge 0$. \n\\\\[1mm]~\\eqref{lem:b_1(2)_is_zero_implies_L2-acyclic:universal}\nThis follows from~\\cite[Theorem~0.1]{Lott-Lueck(1995)}.\n\\end{proof}\n\n\n\n\\begin{theorem}[The $(\\mu,\\phi)$-$L^2$-Euler characteristic in terms of the first homology]\n \\label{the:The_(mu,phi)-L2-Euler_characteristic_in_terms_of_the_first_homology} \n Let $M$ be an admissible $3$-manifold. 
Let $G$ be a torsion-free group which satisfies the Atiyah\n Conjecture. Consider group homomorphisms $\\mu\n \\colon \\pi \\to G$ and $\\phi \\colon G \\to \\IZ$. Let $K$ be the kernel of $\\phi$ and $i\n \\colon K \\to G$ be the inclusion. Let $t \\colon \\cald(K) \\xrightarrow{\\cong} \\cald(K)$ \n be the automorphism introduced in \nTheorem~\\ref{the:Main_properties_of_cald(G)}~\\eqref{the:Main_properties_of_cald(G):cald(K)_and_(cald(G)}. \n Assume that $\\phi$ is surjective, $\\phi \\circ \\mu$ is not trivial, and the intersection $\\im(\\mu) \\cap \\ker(\\phi)$ is not trivial.\n Suppose that $b_1^{(2)}(\\overline{M};\\caln(G)) = 0$ holds for the $G$-covering \n $\\overline{M} \\to M$ associated to $\\mu$. Then:\n\n\n\\begin{enumerate} [font=\\normalfont]\n\n\\item \\label{the:The_(mu,phi)-L2-Euler_characteristic_in_terms_of_the_first_homology:acyclic}\n\nThe pair $(\\mu,\\phi)$ is an $L^2$-acyclic Atiyah-pair;\n\n\\item \\label{the:The_(mu,phi)-L2-Euler_characteristic_in_terms_of_the_first_homology:homology} \nWe have\n\\begin{multline*}\n\\hspace{12mm} \\chi^{(2)}(M;\\mu,\\phi) = - b_1^{(2)}(i^*\\overline{M};\\caln(K))\n\\\\\n= - \\dim_{\\cald(K)}\\bigl(H_1\\bigl(\\cald(K)_{t}[u^{\\pm 1}] \\otimes_{\\IZ G} C_*(\\overline{M})\\bigr)\\bigr).\n\\end{multline*}\nIn particular $\\chi^{(2)}(M;\\mu,\\phi)$ is an integer $\\ge 0$.\nMoreover, we have for $m \\not= 1$\n\\[\nH_m\\bigl(\\cald(K)_{t}[u^{\\pm 1}] \\otimes_{\\IZ G} C_*(\\overline{M})\\bigr) \\,\\, =\\,\\, 0,\n\\]\nand\n\\[\n\\hspace{1cm}\nb_m^{(2)}(i^*\\overline{M};\\caln(K))\n\\,\\,=\\,\\, \\dim_{\\cald(K)}\\bigl(H_m\\bigl(\\cald(K)_{t}[u^{\\pm 1}] \\otimes_{\\IZ G} C_*(\\overline{M})\\bigr)\\bigr) \\,\\,=\\,\\, 0.\n\\]\n\n\n\\end{enumerate}\n\\end{theorem}\n\n\n\n\\begin{proof}~\\eqref{the:The_(mu,phi)-L2-Euler_characteristic_in_terms_of_the_first_homology:acyclic}\nThis follows from \nLemma~\\ref{lem:b_1(2)_is_zero_implies_L2-acyclic}~\\eqref{lem:b_1(2)_is_zero_implies_L2-acyclic:b_1}.\n\\\\[1mm]~\\eqref{the:The_(mu,phi)-L2-Euler_characteristic_in_terms_of_the_first_homology:homology} \nConsider any presentation\n\\[\n\\pi = \\langle x_1,x_2, \\ldots, x_a \\mid R_1,R_2, \\ldots, R_b \\rangle\n\\]\nof $\\pi$. We want to modify it to another presentation \n\\[\n\\pi = \\langle \\widehat{x}_1, \\widehat{x}_2, \\ldots, \\widehat{x}_{a}\\mid \\widehat{R}_1, \\widehat{R}_2, \\ldots, \\widehat{R}_{b}' \\rangle\n\\]\nby a sequence of Nielsen transformations,\ni.e., we replace the ordered set of generators \n$x_1, x_2, \\ldots, x_a$ by $x_{\\sigma(1)}, x_{\\sigma(2)}, \\ldots, x_{\\sigma(a)}$\nfor a permutation $\\sigma \\in S_a$ or we replace\n $x_1, x_2, \\ldots, x_a$ by $x_1, x_2, x_{i-1}, x_j^kx_i, x_{i+1}, \\ldots , x_{j-1}, x_j,x_{j+1}, \\ldots, x_a$ \nfor some integers $i,j,k$ with $1 \\le i < j \\le a$, and change the relations accordingly,\nsuch that $\\mu(\\widehat{x}_1) \\not= 0$ and $\\phi(\\widehat{x}_1) = 0$\nholds, provided that $\\phi$ is surjective and $\\im(\\phi) \\cap \\ker(\\phi)$ is non-trivial. \nWe use induction over $n = |\\{i \\in \\{1,2, \\ldots, a\\} \\mid \\mu(x_i) \\not= 0\\}|$. 
The induction beginning \n$n = 0,1$ is obvious since the case $n = 0$ cannot occur, since $\\phi \\circ \\mu \\not= 0$ and\n$\\phi$ is surjective, and in the case $n = 1$ we must have $\\phi \\circ \\mu(x_i) = 0$ for the only index\n$i$ with $\\mu(x_i) \\not= 0$, since $\\im(\\mu) \\cap \\ker(\\phi) \\not= \\{0\\}$.\nThe induction step from $(n-1)$ to $n \\ge 2$ is done as follows.\nWe use subinduction over $m = \\min\\{|\\phi(x_i)| \\mid i = 1,2, \\ldots, a, \\mu(x_i)\\not= 0\\}$. \nIf $m = 0$, then $\\mu(x_i) \\not = 0$ and $\\phi(x_i) = 0$ for some $i \\in\n\\{1,2, \\ldots, a\\}$, and we can achieve our goal by enumeration. The induction step from\n$(m-1)$ to $ m \\ge 1$ is done as follows. Choose $j \\in \\{1,2, \\ldots, a\\}$ such that\n$\\mu(x_j) \\not= 0$ and $\\phi(x_j) = m$. Since $n \\ge 2$, there must be an index \n$i \\in \\{1,2, \\ldots , a\\}$ with $i \\not = j$ and $\\mu(x_i) \\not= 0$. By changing the enumeration\nwe can arrange $i < j$. Choose integers $k,l$ with\n$0 \\le l < m$ and $\\phi(x_i) = k \\cdot \\phi(x_j) + l$. If we\nreplace $x_i$ by $x_i \\cdot x_j^{-k}$ and leave the other elements in $\\{x_1, x_2, \\ldots,x_a\\}$ \nfixed, we get a new set of generators which is part of a finite presentation with\n$a$ generators and $b$ relations of $\\pi$ for which the induction hypothesis applies since\neither $\\mu(x_i \\cdot x_j^{-k}) = 0$ holds or we have $\\mu(x_i \\cdot x_j^{-k}) \\not= 0$\nand $|\\phi(x_i \\cdot x_j^{-k})| = l \\le m-1$. \n\n\nWe have the following equations in $\\IZ \\pi$ for $u\\in \\IZ \\pi$\nand integers $i,j,k$ with $1 \\le i < j \\le a$ and $k \\ge 1$:\n\\begin{multline*}\nr_{x_j^{k}x_i - 1}(u) \n= \nu(x_j^kx_i - 1) \n= \nux_j^k(x_i - 1) + u(x_j^k - 1)\n= \nux_j^k(x_i - 1) + u\\left(\\sum_{i = 0}^{k-1} x_j^i\\right) \\cdot (x_j - 1)\n\\\\\n= \nux_j^k(x_i - 1) + \\left(\\sum_{i = 0}^{k-1} u x_j^i\\right) \\cdot (x_j - 1)\n= \nr_{x_i - 1}(ux_j^{k}) - r_{x_j -1} \\left(\\sum_{i = 0}^{k-1} u x_j^{i}\\right),\n\\end{multline*}\nand \n\\begin{multline*}\nr_{x_j^{-k}x_i - 1}(u) \n= \nu(x_j^{-k}x_i - 1) \n= \nux_j^{-k}(x_i - 1) - ux_j^{-k} (x_j^k- 1)\n\\\\\n= \nux_j^{-k}(x_i - 1) - ux_j^{-k}\\left(\\sum_{i = 0}^{k-1} x_j^i\\right) \\cdot (x_j - 1)\n\\\\\n= \nux_j^{-k}(x_i - 1) - \\left(\\sum_{i = 0}^{k-1} u x_j^{i-k}\\right) \\cdot (x_j - 1)\n= \nr_{x_i - 1}(ux_j^{-k}) - r_{x_j -1} \\left(\\sum_{i = 0}^{k-1} u x_j^{i-k}\\right).\n\\end{multline*}\nThis implies that we can find $\\IZ \\pi$-isomorphisms \n$f_1 \\colon \\IZ \\pi^a\\to \\IZ \\pi^a$ and $f_2 \\colon \\IZ \\pi^a\\to \\IZ \\pi^a$ \nsuch that the following diagrams commute\n\\begin{center}\n$\\begin{array}{rcl}\n\\hspace{7mm} \\xymatrix@R0.45cm@!C=8em{\n\\IZ \\pi^a \\ar[dd]_{f_1} ^{\\cong} \\ar[rd]^{\\quad \\bigoplus_{i = 1}^a r_{x_i-1}} &\n\\\\\n& \\IZ \\pi\n\\\\\n\\IZ \\pi^a \\ar[ru]_{\\quad \\bigoplus_{i = 1}^a r_{\\widehat{x}_i-1}} &\n}\n& \\hspace{20mm} &\n\\xymatrix@R0.45cm@!C=8em{\n& \n\\IZ \\pi^a \\ar[dd]_{f_2}^{\\cong} &\n\\\\\n\\IZ \\pi \\ar[ru]^{\\prod_{i = 1}^a r_{x_i'-1}} \\ar[rd]_{\\prod_{i = 1}^a r_{\\widehat{x}_i'-1}} & \n\\\\\n& \n\\IZ \\pi^a &\n}\n\\end{array}$\n\\end{center}\n\nNow we proceed as in the proof of \nTheorem~\\ref{the:calculation_of_(mu,phi)_L2-Euler_characteristic_from_a_presentation}\n\\eqref{the:calculation_of_(mu,phi)_L2-Euler_characteristic_from_a_presentation:empty},\nexplaining only the case where the boundary is empty. 
There we have\nwe constructed a compact $3$-dimensional $CW$-complex $X$ together with a homotopy equivalence\n $f \\colon X \\to M$ and set of generators $\\{x_1, \\ldots ,x_a\\}$ and $\\{x_1', \\ldots ,x_a'\\}$ such that\n the $\\IZ G$-chain complex $C_*(\\overline{X})$ of $\\overline{X}$ looks like \n\\[\n\\IZ G \n\\xrightarrow{\\prod_{j=1}^a r_{\\mu(x_i') - 1}} \n\\bigoplus_{i=1}^a \\IZ G \n\\xrightarrow{r_{\\mu(F)}} \\bigoplus_{i=1}^a \\IZ G \n\\xrightarrow{\\bigoplus_{i=1}^a r_{\\mu(x_i) - 1}} \\IZ G.\n\\]\nApplying the construction above yields new set of generators \n$\\{\\widehat{x}_1, \\ldots ,\\widehat{x}_a\\}$ and $\\{\\widehat{x}_1', \\ldots ,\\widehat{x}_a'\\}$\nsuch that $|\\mu(\\widehat{x}_1)| = |\\mu(\\widehat{x}_1')| = \\infty$\nand $\\phi \\circ \\mu(\\widehat{x_1}) = \\phi \\circ \\mu(\\widehat{x}_1') = 0$ \nand we can find a matrix $B \\in M_{a,a}(\\IZ G)$ and isomorphisms $f_2$ and $f_1$ making the following diagram\nof $\\IZ G$-modules commute.\n\\[\n\\xymatrix@!C=8em{\n\\IZ G \\ar[r]^{\\prod_{j=1}^a r_{\\mu(x_i') - 1}} \\ar[d]^{\\id}\n&\n\\bigoplus_{i=1}^a \\IZ G \\ar[r]^{r_{\\mu(F)}} \\ar[d]^{f_2}_{\\cong}\n&\n\\bigoplus_{i=1}^a \\IZ G \n\\ar[r]^{\\bigoplus_{i=1}^a r_{\\mu(x_i) - 1}} \\ar[d]^{f_1}_{\\cong}\n&\\IZ G \\ar[d]^{\\id}\n\\\\\n\\IZ G \\ar[r]_{\\prod_{j=1}^a r_{\\mu(\\widehat{x}_i') - 1}} \n&\n\\bigoplus_{i=1}^a \\IZ G \\ar[r]_{r_{B}} \n&\n\\bigoplus_{i=1}^a \\IZ G \n\\ar[r]_{\\bigoplus_{i=1}^a r_{\\mu(\\widehat{x}_i) - 1}} \n&\\IZ G\n}\n\\]\nNow proceed as in the proof of\nTheorem~\\ref{the:calculation_of_(mu,phi)_L2-Euler_characteristic_from_a_presentation}\n\\eqref{the:calculation_of_(mu,phi)_L2-Euler_characteristic_from_a_presentation:empty}\nusing the chain complex given by the lower row instead of the chain complex \ngiven by the upper row in the diagram above. 
If $A$ is the square matrix over $\\IZ G$\nobtained from $B$ by deleting the first row and the first\ncolumn and then applying the ring homomorphism $\\IZ \\pi\\to \\IZ G$ induced by $\\mu$\nto each entry, then $A$ is invertible over $\\cald(G)$ and we get because of \n$\\phi \\circ \\mu(\\widehat{x_1}) = \\phi \\circ \\mu(\\widehat{x}_1') = 0$ and \nLemma~\\ref{the:The_Thurston_norm_ge_the_(mu,phi)-L2-Euler_characteristic}\n\\begin{multline*}\n\\chi^{(2)}(M;\\mu,\\phi) \n= \n - \\dim_{\\cald(K)}\\bigl(\\coker\\bigl(r_A \\colon \\cald(K)_{t}[u^{\\pm 1}]^{a-1} \\to \\cald(K)_{t}[u^{\\pm 1}]^{a-1}\\bigr)\\bigr)\n\\\\\n= \\dim_{\\cald(K)}\\bigl(H_1\\bigl(\\cald(K)_{t}[u^{\\pm 1}] \\otimes_{\\IZ G} C_*(\\overline{M})\\bigr)\\bigr),\n\\end{multline*}\nand \n\\begin{multline*}\n0 = \n \\dim_{\\cald(K)}\\bigl(\\ker\\bigl(r_A \\colon \\cald(K)_{t}[u^{\\pm 1}]^{a-1} \\to \\cald(K)_{t}[u^{\\pm 1}]^{a-1}\\bigr)\\bigr).\n\\\\\n= \\dim_{\\cald(K)}\\bigl(H_2\\bigl(\\cald(K)_{t}[u^{\\pm 1}] \\otimes_{\\IZ G} C_*(\\overline{M})\\bigr)\\bigr),\n\\end{multline*}\nand\n\\[\n0 = \\dim_{\\cald(K)}\\bigl(H_m\\bigl(\\cald(K)_{t}[u^{\\pm 1}] \\otimes_{\\IZ G} C_*(\\overline{M})\\bigr)\\bigr)\n\\quad\\text{for}\\; m \\not = 1,2.\n\\]\nWe conclude from Theorem~\\ref{the:Main_properties_of_cald(G)}~\\eqref{the:Main_properties_of_cald(G):chain} \nfor all $m \\ge 0$\n\\[\nb_m^{(2)}(i^*\\overline{M};\\caln(K))\n= \\dim_{\\cald(K)}\\bigl(H_m\\bigl(\\cald(K)_{t}[u^{\\pm 1}] \\otimes_{\\IZ G} C_*(\\overline{M})\\bigr)\\bigr).\n\\]\nThis finishes the proof of Theorem~\\ref{the:The_(mu,phi)-L2-Euler_characteristic_in_terms_of_the_first_homology}.\n\\end{proof} \n\nIn view of Lemma~\\ref{lem:reduction_to_surjective_mu}~\\eqref{lem:reduction_to_surjective_mu:mu_circ_phi_is_not_trivial},\nthe next example covers the case $\\im(\\mu) \\cap \\ker(\\phi) = \\{1\\}$\nwhich is not treated in Theorem~\\ref{the:The_(mu,phi)-L2-Euler_characteristic_in_terms_of_the_first_homology}.\n\n\n\\begin{example}[Infinite cyclic covering] \\label{exa:infinite_cyclic_covering}\nLet $M$ be an admissible $3$-manifold. Consider an epimorphism $\\mu \\colon \\pi \\to \\IZ$ and \na non-trivial homomorphism $\\phi \\colon \\IZ\\to \\IZ$. Let $k$ be the index of $\\im(\\phi)$ in $\\IZ$.\nLet $\\overline{M} \\to M$ be the infinite cyclic covering associated to $\\mu$.\n\nThen $(\\mu,\\phi)$ is an $L^2$-acyclic pair if and only if $b_1^{(2)}(\\overline{M};\\caln(\\IZ)) = 0$.\nThis follows from Lemma~\\ref{lem:b_1(2)_is_zero_implies_L2-acyclic} and the fact that $\\IZ$ satisfies\nthe Atiyah Conjecture by \nTheorem~\\ref{the:Status_of_the_Atiyah_Conjecture}~\\eqref{the:Status_of_the_Atiyah_Conjecture:Linnell}.\n\nSuppose $b_1^{(2)}(\\overline{M};\\caln(\\IZ)) = 0$. 
Lemma~\\ref{lem:phi-twisted_as_ordinary_L2_Euler_characteristic},\nLemma~\\ref{lem:reduction_to_surjective_mu}~\\eqref{lem:reduction_to_surjective_mu:mu_circ_phi_is_not_trivial} and \nTheorem~\\ref{the:Main_properties_of_cald(G)}~\\eqref{the:Main_properties_of_cald(G):chain} imply that\n$\\dim_{\\IQ}(H_n(\\overline{M};\\IQ)) < \\infty$ holds for $n \\ge 0$ and we get \n\\begin{equation}\n\\chi^{(2)}(M;\\mu,\\phi) \n= \nk \\cdot \\chi(\\overline{M}).\n\\label{exa:infinite_cyclic_covering:(1)}\n\\end{equation}\nNext we show\n\\begin{equation}\n\\chi(\\overline{M}) \n\\,\\,= \\,\\,\n\\begin{cases}\n1 - \\dim_{\\IQ}(H_1(\\overline{M};\\IQ)) & \\text{if}\\; \\partial M \\not= \\emptyset;\n\\\\\n2 - \\dim_{\\IQ}(H_1(\\overline{M};\\IQ)) & \\text{if}\\; \\partial M = \\emptyset.\n\\end{cases}\n\\label{exa:infinite_cyclic_covering:(2)}\n\\end{equation}\nSince $H_n(\\overline{M};\\IQ) =0$ for $n \\ge 3$ holds because $\\overline{M}$ is a non-compact $3$-manifold,\nthis will follow if we can show\n\\[\n\\dim_{\\IQ}(H_2(\\overline{M};\\IQ)) \n\\,\\,=\\,\\, \n\\begin{cases} 0\n& \\text{if}\\; \\partial M \\not= \\emptyset;\n\\\\\n1 & \\text{if}\\; \\partial M = \\emptyset.\n\\end{cases}\n\\]\nWe begin with the case that $M$ has\nnon-empty boundary. Then $M$ is homotopy equivalent to a $2$-dimensional complex $X$.\nSince $H_2(\\overline{M};\\IQ) \\cong_{\\IQ[\\IZ]} H_2(\\overline{X};\\IQ)$ is a $\\IQ[\\IZ]$-submodule of the free $\\IQ[\\IZ]$-module\n$C_2(\\overline{X}) \\otimes_{\\IZ} \\IQ$ for the pullback $\\overline{X} \\to X$ of $\\overline{M} \\to M$ with a homotopy\nequivalence $X \\to M$, the $\\IQ[\\IZ]$-module $H_2(\\overline{M};\\IQ)$ is free.\nSince $\\dim_{\\IQ}(H_2(\\overline{M};\\IQ)) < \\infty$ holds, this implies $H_2(\\overline{M};\\IQ) = 0$.\n\nNow suppose that $\\partial M$ is empty. 
Then we get a Poincar\\'e $\\IZ \\pi$-chain homotopy equivalence of\n$\\IZ \\pi$-chain complexes $\\Hom_{\\IZ \\pi}(C_{3-*}(\\widetilde{M}),\\IZ \\pi) \\to C_*(\\widetilde{M})$.\nIt induces a $\\IQ[\\IZ]$-chain homotopy equivalence $\\Hom_{\\IQ[\\IZ]}(\\IQ \\otimes_{\\IZ} C_{3-*}(\\overline{M}), \\IQ[\\IZ]) \n\\to \\IQ \\otimes_{\\IZ} C_{*}(\\overline{M})$ and hence a $\\IQ[\\IZ]$-isomorphism\n\\[H^1\\bigl(\\Hom_{\\IQ[\\IZ]}(\\IQ \\otimes_{\\IZ} C_{3-*}(\\overline{M}), \\IQ[\\IZ])\\bigr) \\xrightarrow{\\cong} H_2(\\overline{M}).\n\\]\nSince $\\IQ[\\IZ[$ is a principal ideal domain, we get from the Universal Coefficient Theorem for cohomology an exact sequence \nof $\\IQ[\\IZ]$-modules\n\\begin{multline*}\n0 \\to \\Ext^1_{\\IQ[\\IZ|}\\bigl(H_0(C_*(\\overline{M})),\\IQ[\\IZ]\\bigr)) \\to H^1\\bigl(\\Hom_{\\IQ[\\IZ]}(\\IQ \\otimes_{\\IZ} C_{3-*}(\\overline{M}), \\IQ[\\IZ])\\bigr) \n\\\\\n\\to \\Hom_{\\IQ[\\IZ]}\\bigl(H_1(\\overline{M};\\IQ),\\IQ[\\IZ]\\bigr) \\to 0.\n\\end{multline*}\nFrom $\\dim_{\\IQ}(H_1(\\overline{M};\\IQ)) < \\infty$, we conclude $\\Hom_{\\IQ[\\IZ]}\\bigl(H_1(\\overline{M};\\IQ),\\IQ[\\IZ]\\bigr) = 0$.\nSince $H_0(C_*(\\overline{M}))$ is the trivial $\\IQ[\\IZ]$-module $\\IQ$, we conclude that \n$\\Ext^1_{\\IQ[\\IZ|}\\bigl(H_0(C_*(\\overline{M})),\\IQ[\\IZ]\\bigr)$ is the trivial $\\IQ[\\IZ]$-module $\\IQ$\nand~\\eqref{exa:infinite_cyclic_covering:(2)} follows.\n\\end{example}\n\n\n\n\\typeout{------ Section 6: Equality of $(\\mu,\\phi)$-$L^2$-Euler characteristic and the Thurston norm ------}\n\n\\section{Equality of $(\\mu,\\phi)$-$L^2$-Euler characteristic and the Thurston norm}\n\\label{sec:Equality_of_(mu,phi)-L2-Euler_characteristic_and_the_Thurston_norm}\n\n\nThe section is devoted to the proof of\nTheorem~\\ref{the:Equality_of_(mu,phi)-L2-Euler_characteristic_and_the_Thurston_norm}\nwhich needs some preparation.\n\nFor the remainder of this section $G$ is a torsion-free group\nsatisfying the Atiyah Conjecture, $H$ is a finitely generated abelian\ngroup and $\\nu \\colon G \\to H$ a surjective group homomorphism.\n\n\n\\subsection{The non-commutative Newton polytope}\n\\label{subsec:The_non-commutative_Newton_polytope}\n\n\nChoose a set-theoretic section $s$ of $\\nu$, i.e., a map of sets $s \\colon H \\to G$ with $\\nu \\circ s = \\id_H$.\nNotice that we do \\emph{not} require that $s$ is a group homomorphism. Let $K_{\\nu}$ be\nthe kernel of $\\nu$. Then $K_{\\nu}$ also satisfies the Atiyah Conjecture\nby Theorem~\\ref{the:Status_of_the_Atiyah_Conjecture}~\\eqref{the:Status_of_the_Atiyah_Conjecture:subgroups},\nand $\\cald(K_{\\nu})$ and $\\cald(G)$ are skew fields by\nTheorem~\\ref{the:Main_properties_of_cald(G)}~\\eqref{the:Main_properties_of_cald(G):skew_field}.\nThere are obvious inclusions $\\IZ K_{\\nu} \\subseteq \\IZ G$ and $\\cald(K_{\\nu}) \\subseteq \\cald(G)$ \ncoming from the inclusion $j \\colon K_{\\nu} \\to G$. Let $c \\colon H \\to \\aut(\\cald(K_{\\nu}))$ be \nthe map sending $h$ to the automorphisms $c(h) \\colon \\cald(K_{\\nu}) \\xrightarrow{\\cong} \\cald(K_{\\nu})$\ninduced by the conjugation automorphism $K_{\\nu} \\to K_{\\nu}, \\; k \\mapsto s(h) \\cdot k \\cdot s(h)^{-1}$. Define\n$c \\colon H \\to \\aut(\\IZ K_{\\nu})$ analogously. Define $\\tau \\colon H \\times H \\to (\\IZ K_{\\nu})^{\\times}$ \nand $\\tau \\colon H \\times H \\to \\cald(K_{\\nu})^{\\times}$ by sending $(h,h')$\nto $s(h)\\cdot s(h')\\cdot s(hh')^{-1}$. 
Let $\\IZ K_{\\nu} \\ast_s H$ and $\\cald(K_{\\nu}) \\ast_s H$ be\nthe crossed product rings associated to the pair $(c,\\tau)$. Elements in $\\IZ K_{\\nu} \\ast_s H$\nand $\\cald(K_{\\nu}) \\ast_s H$ respectively are finite formal sums $\\sum_{h \\in H} x_h \\cdot h$ \nfor $x_h$ in $\\IZ K_{\\nu}$ and $\\cald(K_{\\nu})$ respectively. Addition is given by adding the\ncoefficients. Multiplication is given by the formula\n\\[\n\\Big(\\,\\smsum{h \\in H}{} x_{h} \\cdot h\\Big) \\cdot \\Big(\\,\\smsum{h \\in H}{} y_{h} \\cdot h\\Big)\\,\\, =\\,\\,\\smsum{h\\in H}{} \\Big(\\,\\,\\smsum{\\substack{h',h'' \\in H,\\\\h'h'' = h}}{} x_{h'} c_{h'}(y_{h''})\n \\tau(h',h'') \\Big) \\cdot h.\n\\]\nThis multiplication is uniquely determined by the properties $h\\cdot x = c(h)(x)\\cdot h$\nfor $x$ in $\\IZ K_{\\nu}$ and $\\cald(K_{\\nu})$ respectively, and $h \\cdot h' = \\tau(h,h') \\cdot\n(hh')$ for $h,h' \\in H$. We obtain an isomorphism of rings\n\n\\[\nj'_s \\colon \\IZ K_{\\nu} \\ast_s H \\,\\xrightarrow{\\cong}\\, \\IZ G, \\quad \\sum_{h \\in H} x_h \\cdot h\n\\,\\mapsto\\, \\lmsum{h \\in H}{} x_h \\cdot s(h),\n\\]\nsee~\\cite[Example~10.53 on page~396]{Lueck(2002)}, and an injective ring homomorphism\n\\[\nj_s \\colon \\cald(K_{\\nu}) \\ast_s H \\,\\to\\, \\cald(G), \\quad \\sum_{h \\in H} x_h \\cdot h\n\\,\\mapsto\\, \\lmsum{h \\in H}{} x_h \\cdot s(h).\n\\]\nMoreover, the set $T$ of non-trivial elements in $\\cald(K_{\\nu}) \\ast_s H$ satisfies the Ore\ncondition and $j_s$ induces an isomorphism of skew fields\n\\begin{equation}\n\\widehat{j_s} \\colon T^{-1} (\\cald(K_{\\nu}) \\ast_s H) \\xrightarrow{\\cong} \\cald(G).\n\\label{widehat(j_s)}\n\\end{equation}\nThis is proved for $F = \\IC$ in~\\cite[Lemma~10.68 on page~403]{Lueck(2002)}, the proof\ncarries over for any field $F$ with $\\IQ \\subseteq F \\subseteq \\IC$.\n\nFix an element $u = \\sum_{h \\in H} x_h \\cdot h$ in $\\cald(K_{\\nu}) \\ast_s H$ with $u \\not= 0$.\nNow we introduce a non-commutative analogue of the Newton polytope. \nA \\emph{polytope} in a finite dimensional real vector space is a subset which is the convex hull of a finite subset.\nAn element $p$ in a polytope is called \\emph{extreme} if the implication $p= \\frac{q_1}{2} + \\frac{q_2}{2}\n\\Longrightarrow q_1 = q_2 = p$ holds for all elements $q_1$ and $q_2$ in the polytope.\nDenote by $\\Ext(P)$ the set of extreme points of $P$. If $P$ is the convex hull of the finite set $S$,\nthen $\\Ext(P) \\subseteq S$ and $P$ is the convex hull of $\\Ext(P)$. \nThe \\emph{Minkowski sum} of two polytopes $P_1$ and $P_2$ is defined to be the polytope\n\\[\nP_1 + P_2 := \\{p_1 +p_2 \\mid p_1 \\in P_1, p \\in P_2\\}.\n\\] \nIt is the convex hull of the set $\\{p_1 + p_2 \\mid p_1 \\in \\Ext(P_1), p_2 \\in \\Ext(P_2)\\}$.\nDefine the \\emph{non-commutative Newton polytope} of $u$ \n\\begin{equation}\nP(u) \\subseteq \\IR \\otimes_{\\IZ} H\n\\label{P(u)}\n\\end{equation}\nto be the polytope given by the convex hull of the finite set $\\{1 \\otimes h \\mid x_h \\not= 0\\}$. \nThe definition of $P(u)$ is independent of the choice of the section $s$ by the following argument.\nConsider two set-theoretic sections $s$ and $s'$ of $\\nu \\colon G \\to H$. Then we get for\n$u \\in \\cald(K_{\\nu}) \\ast_s H$ and $u' \\in \\cald(K_{\\nu}) \\ast_{s'} H$ \n\\begin{eqnarray}\n\\widehat{j_s}(u) = \\widehat{j_{s'}}(u')\n& \\implies &\nP(u;s) = P(u';s')\n\\label{P(u;s)_independent_of_s}\n\\end{eqnarray} \nby the following argument. 
If we write $u =\\sum_{h \\in H} x_h \\cdot h$ and $u' = \\sum_{h \\in H} y_h \\cdot h$,\nthen $\\widehat{j_s}(u) = \\widehat{j_{s'}}(u')$ implies\n\\[\nu\\,\\, =\\,\\, \\lmsum{h \\in H}{} x_h \\cdot h \\,\\,= \\,\\, \\lmsum{h \\in H}{} \\bigl(y_h\\cdot s'(h) s(h)^{-1} \\bigr)\\cdot h,\n\\]\nand hence $x_h \\not = 0 \\Leftrightarrow y_h \\not= 0$. \n\n\nThe following lemma is well-known in the commutative setting and we explain its proof \nsince we could not find a reference for it in the literature.\n\n\\begin{lemma} \\label{lem:P(uv)_is_P(u)_plus_P(v)} \nFor $u,v \\in \\cald(K_{\\nu}) \\ast_s H$ with $u,v \\not= 0$, we have \n\\[\nP(uv) = P(u) + P(v).\n\\]\n\\end{lemma}\n\\begin{proof}\nConsider an extreme point $p \\in P(u;s) + P(v;s)$. Then we can find points \n$q_1 \\in P(u;s)$ and $q_2 \\in P(v;s)$ with $p = q_1 + q_2$. We want to show that $q_1$ and $q_2$ are extreme. \nConsider $q_1', q_1'' \\in P(u;s)$ and $q_2',q_2'' \\in P(v;s)$ with $q_1 = (q_1' +q_1'')\/2$ and $q_2 = (q_2' + q_1'')\/2$.\nThen $(q_1' + q_2')$, $(q_1'' + q_2')$, $(q_1' + q_2'')$, and $(q_1'' + q_2'')$ belong to $P(u;s) + P(v;s)$ and satisfy\n\\[\np\\,\\, =\\,\\, \\smfrac{q_1' + q_2'}{2} + \\smfrac{q_1''+ q_2''}{2}\\,\\, =\\,\\, \\smfrac{q_1' + q_2''}{2} + \\smfrac{q_1''+ q_2'}{2}.\n\\]\nSince $p \\in P(u;s) + P(v;s)$ is extreme, we conclude $q_1' + q_2' = q_1''+ q_2''$ and\n$q_1' + q_2'' = q_1''+ q_2'$. If we subtract the second equation from the first, we obtain\n$q_2' - q_2'' = q_2'' - q_2'$ and hence $q_2' = q_2''$. This implies also $q_1' = q_1''$.\nThis shows that $q_1 \\in P(u;s)$ and $q_2 \\in P(v;s)$ are extreme.\n\nSuppose that we have other points $q_1' \\in P(u;s)$ and $q_2' \\in P(v;s)$ with $p = q_1' + q_2'$.\nThen $q_1 + q_2'$ and $q_1' + q_2$ belong to $P(u;s) + P(v;s)$ and satisfy\n$p = \\frac{q_1 + q_2'}{2} + \\frac{q_1'+ q_2}{2}$. Since $p$ is extreme, this implies $p = q_1 + q_2' = q_1'+ q_2$.\nSince we also have $p = q_1 + q_2 = q_1' + q_2'$, we conclude $q_1 = q_1'$ and $q_2 = q_2'$.\n\nNow write $u = \\sum_{h \\in H} x_{h} \\cdot h$, $v = \\sum_{h \\in H} y_{h} \\cdot h$, and \n$uv = \\sum_{h \\in H} z_{h} \\cdot h$. Since $p \\in P(u;s) + P(v;s)$ is extreme, there is $h \\in H$ \nwith $p = 1 \\otimes h$. If we write $p = q_1 + q_2$ for $q_1 \\in P(u;s)$ \nand $q_2 \\in P(v;s)$, then we have already seen that $q_1$ and $q_2$ are extreme\nand hence there are $h_1,h_2 \\in H$ with $q_1 = 1 \\otimes h_1$, $q_2 = 1 \\otimes h_2$,\n$x_{h_1} \\not= 0$ and $y_{h_2} \\not= 0$. The equation $p = q_1 + q_2$ implies\n$h = h_1 + h_2$. Now consider elements $h_1', h_2' \\in H$ with $h = h_1' + h_2'$, $x_{h_1'} \\not= 0$ \nand $y_{h_2'} \\not= 0$. Put $q_1' = 1 \\otimes h_1'$ and $q_2' = 1 \\otimes h_2'$. Then $q_1' \\in P(u;s)$ \nand $q_2' \\in P(v;s)$ and we have $p = q_1' + q_2'$. We have already explained that this implies \n$q_1 = q_1'$ and $q_2 = q_2'$ and hence $h_1 = h_1'$ and $h_2 = h_2'$. Therefore we get\n$z_h = x_{h_1} c(h_1)(y_{h_2}) \\tau(h_1,h_2)$. Since $x_{h_1}$ and $y_{h_2}$ are non-trivial, we\nconclude $z_h \\not= 0$ and hence $p \\in P(uv;s)$. Hence every extreme point in \n$P(u;s) + P(v;s)$ belongs to $P(uv;s)$ which implies $P(u;s) + P(v;s) \\subseteq P(uv;s)$.\n\nOne easily checks that any point of the shape $1 \\otimes h$ for $z_h \\not= 0$ belongs\nto $P(u;s) + P(v;s)$ since $z_h \\not= 0$ implies the existence of $h_1$ \nand $h_2$ with $x_{h_1}, y_{h_2} \\not= 0$ and $h = h_1 + h_2$. 
We conclude $P(uv;s) \\subseteq P(u;s) + P(v;s)$.\nThis finishes the proof of Lemma~\\ref{lem:P(uv)_is_P(u)_plus_P(v)}.\n\\end{proof}\n\n\n\n\n\n\n\n\n\\subsection{The polytope homomorphism}\n\\label{subsec:The_polytope_homomorphism}\n\n\nWe obtain a finite dimensional real vector space $\\IR \\otimes_{\\IZ} H$. An integral polytope in \n$\\IR \\otimes_{\\IZ} H$ is a polytope such that $\\Ext(P)$ is contained in $H$, where we consider $H$ \nas a lattice in $ \\IR \\otimes_{\\IZ} H$ by the standard embedding \n$H \\to \\IR \\otimes_{\\IZ} H, \\; h \\mapsto 1 \\otimes h$. \nThe Minkowski sum of two integral polytopes is again an integral\npolytope. Hence the integral polytopes form an abelian monoid under the Minkowski sum with\nthe integral polytope $\\{0\\}$ as neutral element. \n\n\n\n\\begin{definition}[Grothendieck group of integral polytopes]\n\\label{def:The_Grothendieck_group_of_integral_polytope}\nLet $\\calp_{\\IZ}(H)$ be the abelian group given by the Grothendieck construction applied\nto the abelian monoid of integral polytopes in $\\IR \\otimes_{\\IZ} H$ under the Minkowski sum.\n\\end{definition}\n\n\n\nNotice that for polytopes $P_0$, $P_1$ and $Q$ in a finite dimensional real vector space\nwe have the implication $P_0 + Q = P_1 + Q \\Longrightarrow P_0 = P_1$, see~\\cite[Lemma~2]{Radstroem(1952)}.\nHence elements in $\\calp_{\\IZ}(H)$ are given by formal\ndifferences $[P] - [Q]$ for integral polytopes $P$ and $Q$ in $\\IR \\otimes_{\\IZ} H$ and we\nhave $[P_0] - [Q_0] = [P_1] - [Q_1] \\Longleftrightarrow P_0 + Q_1 = P_1 + Q_0$.\n\nThe polytope group $\\calp_{\\IZ}(H)$ is a free abelian group by Funke~\\cite{Funke(2016)}.\n\nIn the sequel we denote by $G_{\\abel}$ the abelianization\n$G\/[G,G]$ of a group $G$. Define the \\emph{polytope homomorphism} for a surjective homomorphism\n$\\nu \\colon G \\to H$ onto a finitely generated free abelian group $H$ \n\\begin{eqnarray}\n\\IP'_{\\nu} \\colon \\cald(G)^{\\times}_{\\abel} \\to \\calp_{\\IZ}(H).\n\\label{P_prime_colon_cald_to_p_Z}\n\\end{eqnarray}\nas follows. Choose a set-theoretic section $s$ of $\\nu$. Consider an element $z \\in \\cald(G)$ with $z \\not= 0$.\nChoose $u,v \\in \\cald(K_{\\nu}) \\ast_s H)$ such that $\\widehat{j_s}(uv^{-1}) = z$,\nwhere the isomorphism $\\widehat{j_s}$ has been introduced in~\\eqref{widehat(j_s)}.\nThen we define the image of the class $[z]$\nin $\\cald(G)^{\\times}_{\\abel}$ represented by $z$ under $\\IP'_{\\nu}$ to be $[P(u)]- [P(v)]$.\n\nWe have to show that this is independent of the choices of $s$,$u$ and $v$. \nSuppose that we have another set-theoretic section $s' \\colon H \\to G$ of $\\nu$ and \n$u',v' \\in \\cald(K_{\\nu}) \\ast_{s'} H$ with $u',v' \\not= 0$ and $z = \\widehat{j_{s'}}(u'v'^{-1})$.\nChoose $u'',v'' \\in \\cald(K_{\\nu}) \\ast_{s'}\u00a0H$ with $j_s(u) = j_{s'}(u'')$ and $j_s(v) = j_{s'}(v'')$.\nFrom $\\widehat{j_{s}}(uv^{-1}) = z = \\widehat{j_{s'}}(u'v'^{-1})$\nwe conclude $u'v'^{-1} = u''v''^{-1}$ in $T^{-1} \\cald(K_{\\nu}) \\ast_{s'} H$. 
Hence there exist\n$w',w'' \\in \\cald(K_{\\nu}) \\ast_{s'} H$ with $w',w'' \\not= 0$, $u' w' = u'' w''$ and $v'w' = v'' w''$.\nWe conclude \n\\begin{eqnarray*}\nP(u) - P(v)\n& \\stackrel{\\eqref{P(u;s)_independent_of_s}}{=}& \nP(u'') - P(v'')\n\\\\\n& = & \nP(u'') + P(w'') - P(w') - P(v'') - P(w'') + P(w') \n\\\\\n& \\stackrel{\\textup{Lemma}~\\ref{lem:P(uv)_is_P(u)_plus_P(v)}}{=} & \nP(u''w'') - P(w') - P(v''w'') + P(w') \n\\\\\n& = & \nP(u'w') - P(w') - P(v'w') + P(w') \n\\\\\n& \\stackrel{\\textup{Lemma}~\\ref{lem:P(uv)_is_P(u)_plus_P(v)}}{=} & \nP(u') + P(w') - P(w') - P(v') - P(w') + P(w') \n\\\\\n& = & \nP(u') - P(v'). \n\\end{eqnarray*}\nHence $\\IP'_{\\nu} \\colon \\cald(G)^{\\times}_{\\abel} \\to \\calp_{\\IZ}(H)$ is well-defined. \nWe conclude from Lemma~\\ref{lem:P(uv)_is_P(u)_plus_P(v)} that $\\IP'_{\\nu}$\n is a homomorphism of abelian groups.\n\n\n\n\n\n\\subsection{Semi-norms and matrices over $\\cald(K_{\\phi \\circ \\nu})_{t}[u^{\\pm 1}]$}\n\\label{subsec:Semi-norms_and_matrices_over_cald(K_phi_circ_nu)_t(u,u(-1))}\n\n\n\n\n\nLet $P \\subseteq \\IR \\otimes_{\\IZ} H$ be a polytope. It defines a seminorm on \n$\\Hom_{\\IZ}(H,\\IR) = \\Hom_{\\IR}(\\IR \\otimes_{\\IZ} H,\\IR)$ by\n\\begin{eqnarray}\n \\|\\phi \\|_P \n&:= & \\tmfrac{1}{2}\n\\sup\\{\\phi(p_0) - \\phi(p_1) \\mid p_0, p_1 \\in P\\}.\n\\label{seminorm_of_a_polytope}\n\\end{eqnarray}\nIt is compatible with the Minkowski sum, namely, for two\nintegral polytopes $P,Q \\subseteq \\IR \\otimes_{\\IZ} H$ we have\n\\begin{eqnarray}\n \\|\\phi \\|_{P+ Q} \n&= & \n \\|\\phi \\|_{P} + \\|\\phi \\|_{Q}.\n\\label{seminorm_of_a_polytope_is_additive_in_polytope}\n\\end{eqnarray}\nPut \n\\begin{multline}\n\\calsn(H) \n := \n\\{f \\colon \\Hom_{\\IZ}(H;\\IR) \\to \\IR \\mid \\text{there exist integral polytopes}\n\\\\\n\\; P \\; \\text{and} \\; Q\\; \\text{in}\\; \\IR \\otimes_{\\IZ} H \\;\\text{with}\\; f = \\|\\; \\|_P - \\|\\; \\|_Q\\}.\n\\label{calsn(H)}\n\\end{multline}\nThis becomes an abelian group by $(f-g)(\\phi) = f(\\phi) - g(\\phi)$ because \nof~\\eqref{seminorm_of_a_polytope_is_additive_in_polytope}. Again because \nof~\\eqref{seminorm_of_a_polytope_is_additive_in_polytope}\nwe obtain an epimorphism of abelian groups\n\\begin{equation}\n\\sn \\colon \\calp_{\\IZ}(H) \\to \\calsn(H) \n\\label{norm_homomorphism}\n\\end{equation}\nby sending $[P] - [Q]$ for\ntwo integral polytopes $P,Q \\subseteq \\IR \\otimes_{\\IZ} H$ to the function\n\\[\n\\Hom_{\\IZ}(H,\\IR) \\to \\IR, \\quad \\phi \\mapsto \\|\\phi \\|_P - \\|\\phi \\|_Q.\n\\]\n\n\n\n\n\nConsider any finitely generated abelian group $H$ and group homomorphisms $\\nu \\colon G \\to H$ and\n$\\phi \\colon H \\to \\IZ$ such that $\\phi$ is surjective. Define a homomorphism\n\\begin{equation}\nD_{\\nu,\\phi} \\colon \\cald(G)^{\\times}_{\\abel} \\xrightarrow{\\IP'_{\\nu} }\\calp_{\\IZ}(H)\n\\xrightarrow{\\sn} \\calsn(H) \\xrightarrow{\\ev_{\\phi}} \\IR\n\\label{D_(nu,phi)}\n\\end{equation}\nto be the composite of the homomorphism defined in~\\eqref{P_prime_colon_cald_to_p_Z}\nand~\\eqref{norm_homomorphism} and the evaluation homomorphism $\\ev_{\\phi}$.\n\n\\begin{lemma} \\label{lem:D_(mu,psi_circ_omega_is_D_(omega_circ_nu,psi)}\nConsider finitely generated free abelian groups $H$ and $H'$ and surjective group homomorphisms\n$\\nu \\colon G \\to H$, $\\omega \\colon H \\to H'$ and a group homomorphism $\\psi \\colon H' \\to \\IZ$. 
\nThen we get the following equality of homomorphisms $\\cald(G)^{\\times}_{\\abel} \\to \\IR$\n\\[\nD_{\\nu,\\psi \\circ \\omega} \n= \nD_{\\omega \\circ \\nu,\\psi}.\n\\]\n\\end{lemma}\n\\begin{proof}\nChoose a set-theoretic section $s \\colon H \\to G$ of $\\nu$ and\na set-theoretic section $t \\colon H' \\to H$ of $\\omega$. Then $s \\circ t \\colon H' \\to G$\nis a set-theoretic section of $\\omega \\circ \\nu \\colon G \\to H'$. Let $K_{\\nu} \\subseteq G$ \nbe the kernel of $\\nu$, $K_{\\omega \\circ \\nu} \\subseteq G$ be the kernel of \n$\\omega \\circ \\nu$ and $K_{\\omega} \\subseteq H$ be the kernel of $\\omega$. Let \n$k \\colon K_{\\nu} \\to K_{\\omega \\circ \\nu}$ be the inclusion. We obtain an exact sequence \n$0 \\to K_{\\nu} \\xrightarrow{k} K_{\\omega \\circ \\nu} \\xrightarrow{\\nu|_{K_{\\omega \\circ \\nu}}}\nK_{\\omega}\\to 0$ of groups such that $K_{\\omega}$ is finitely generated free abelian. The\nsection $s$ induces a section $s|_{K_{\\omega}} \\colon K_{\\omega} \\to K_{\\omega \\circ \\nu}$ \nof $\\nu|_{K_{\\omega \\circ \\nu}} \\colon K_{\\omega \\circ \\nu} \\to K_{\\omega}$. We also have the exact sequence \n$0 \\to K_{\\omega \\circ \\nu} \\xrightarrow{l} G \\xrightarrow{\\omega \\circ \\nu} H' \\to 0$ \nfor $l$ the inclusion and the set-theoretic section $s \\circ t$ of $\\omega \\circ \\nu$. \nThus we get isomorphisms of skew fields\n\\begin{eqnarray*}\n\\widehat{j_s} \\colon T^{-1} \\cald(K_{\\nu}) \\ast_s H\n& \\xrightarrow{\\cong} & \n\\cald(G);\n\\\\\n\\widehat{k_{s|_{K_{\\omega}}}} \\colon T^{-1} \\cald(K_{\\nu}) \\ast_{s|_{K_{\\omega}}} K_{\\omega}\n& \\xrightarrow{\\cong} & \n\\cald(K_{\\omega \\circ \\nu});\n\\\\\n\\widehat{l_t} \\colon T^{-1} \\cald(K_{\\omega \\circ \\nu}) \\ast_{s \\circ t} H'\n& \\xrightarrow{\\cong} & \n\\cald(G),\n\\end{eqnarray*}\nwhere $T^{-1}$ always indicates the Ore localization with respect to the non-trivial elements.\nConsider $u = \\sum_{h \\in H} x_h \\cdot h$ in $\\cald(K_{\\nu}) \\ast_s H$. 
\nFor $h' \\in H'$ define an element in $\\cald(K_{\\nu}) \\ast_{s|_{K_{\\omega}}} K_{\\omega}$ by\n\\[\nu_{h'} = \\sum_{h \\in K_{\\omega}} \\bigl(x_{h \\cdot t(h')} \\cdot s(h \\cdot t(h') ) \\cdot s \\circ t(h')^{-1} \\cdot s(h)^{-1}\\bigr) \\cdot h.\n\\]\nIt is well-defined since $s(h \\cdot t(h') ) \\cdot s \\circ t(h')^{-1} \\cdot s(h)^{-1} \\in K_{\\nu}$ holds.\nDefine an element in $\\cald(K_{\\omega \\circ \\nu}) \\ast_{s \\circ t} H'$ by\n\\[\nv = \\sum_{h' \\in H'} \\widehat{k_{s|_{\\ker_{\\omega}}}} (u_{h'}) \\cdot h'.\n\\]\nThen we compute in $\\cald(G)$\n\\begin{eqnarray*}\n\\widehat{j_s}(u) \n& = & \n\\lmsum{h \\in H}{} x_h \\cdot s(h)\n\\,\\, = \\,\\, \n\\lmsum{h' \\in H'}{} \\lmsum{h \\in \\omega^{-1}(h')}{} x_h \\cdot s(h)\n\\\\\n& = & \n\\lmsum{h' \\in H'}{} \\bigg(\\,\\,\\lmsum{h \\in \\omega^{-1}(h')}{} x_h \\cdot s(h) \\cdot s \\circ t(h')^{-1} \\bigg) \\cdot s \\circ t(h')\n\\\\\n& = & \n\\lmsum{h' \\in H'}{} \\bigg(\\,\\lmsum{h \\in K_{\\omega}}{} x_{h \\cdot t(h')} \\cdot s(h \\cdot t(h') ) \\cdot s \\circ t(h')^{-1} \\bigg) \\cdot s \\circ t(h')\n\\\\\n& = & \n\\lmsum{h' \\in H'}{} \\bigg(\\,\\lmsum{h \\in K_{\\omega}}{} \\bigl(x_{h \\cdot t(h')} \n\\cdot s(h \\cdot t(h') ) \\cdot s \\circ t(h')^{-1} \\cdot s(h)^{-1}\\bigr) \\cdot s(h) \\bigg) \\cdot s \\circ t(h')\n\\\\\n& = & \n\\lmsum{h' \\in H'}{} \\widehat{k_{s|_{\\ker(\\omega)}}}(u_{h'}) \\cdot s \\circ t(h')\n\\,\\, = \\,\\,\n\\widehat{l_t}\\bigg(\\,\\lmsum{h' \\in H'}{} \\widehat{k_{s|_{\\ker(\\omega)}}}(u_{h'}) \\cdot h'\\bigg)\n\\,\\, = \\,\\,\n\\widehat{l_t}(v).\n\\end{eqnarray*}\nObviously we get for $h' \\in H'$\n\\[u_{h'} \\not= 0 \\Leftrightarrow \\exists h \\in \\omega^{-1}(h') \\; \\text{with}\\; x_h \\not= 0.\n\\]\nThis implies\n\\begin{multline*}\n\\sup\\{\\psi(h') - \\psi(k') \\mid h',k' \\in H', u_{h'} \\not= 0, u_{k'} \\not = 0\\}\n\\\\\n=\n\\sup\\{\\psi \\circ \\omega(h) - \\psi \\circ \\omega (k) \\mid h,k \\in H, x_{h} \\not= 0, x_{k} \\not = 0\\}.\n\\end{multline*}\nHence we get for the element $z \\in \\cald(G)$ given by $z = \\widehat{j_s}(u) = \\widehat{l_t}(v)$\n\\[\nD_{\\omega \\circ \\nu,\\psi}(z) = D_{\\nu,\\psi \\circ \\omega}(z).\n\\]\nSince $D_{\\omega \\circ \\nu,\\psi}$ and $D_{\\nu,\\psi \\circ \\omega}$ are homomorphisms, \nLemma~\\ref{lem:D_(mu,psi_circ_omega_is_D_(omega_circ_nu,psi)}\nfollows.\n\\end{proof}\n\n\nRecall from Theorem~\\ref{the:Main_properties_of_cald(G)}~\\eqref{the:Main_properties_of_cald(G):cald(K)_and_(cald(G)} \nthat $\\cald(G)$ is the Ore localization of with respect to the set of non-zero\nelements of the ring $\\cald(K_{\\phi \\circ \\nu})_{t}[u^{\\pm 1}]$ of twisted Laurent polynomials in the variable $u$\nwith coefficients in the skew-field $\\cald(K_{\\phi \\circ \\nu})$. 
Hence $\\cald(K_{\\phi \\circ\n \\nu})_{t}[u^{\\pm 1}]$ is contained in $\\cald(G)$ and we can consider for any $x \\in\n\\cald(K_{\\phi \\circ \\nu})_{t}[u^{\\pm 1}]$ with $x \\not= 0$ its image $D_{\\nu,\\phi}([x])\n\\in \\IR$ under the homomorphism $D_{\\nu,\\phi}$ defined in~\\eqref{D_(nu,phi)}.\n\n\n\\begin{lemma} \\label{lem:image_of_an_element_x_under_D_nu,phi}\nConsider an element $x \\in \\cald(K_{\\phi \\circ \\nu})_{t}[u^{\\pm 1}]$ with $x \\not = 0$.\n\nThen \n\\[\nD_{\\nu,\\phi}([x])\\, =\\,\\tmfrac{1}{2} \\deg(x),\n\\]\nwhere $\\deg(x)$ has been defined to be $k_+ -k_-$ if we write\n$x = \\sum_{k = k_-}^{k_+} z_k \\cdot u^k$ with $z_{k_+},z_{k_-} \\not= 0$.\n\\end{lemma}\n\\begin{proof}\nRecall that $\\cald(K_{\\phi \\circ \\nu})_{t}[u^{\\pm 1}]$ does depend on a choice of a preimage of $1$ under $\\phi \\circ \\nu \\colon G \\to \\IZ$\nwhich is the same as a choice of a group homomorphism $t \\colon \\IZ \\to G$ with $\\phi \\circ \\nu \\circ t = \\id_{\\IZ}$.\nChoose a set-theoretic map $s \\colon H \\to G$ with $\\nu \\circ s = \\id_H$.\nOne easily checks that $\\cald(K_{\\phi \\circ \\nu})_{t}[u^{\\pm 1}]$ agrees with\n$\\cald(K_{\\phi \\circ \\nu}) \\ast_{t} \\IZ$. \nWe conclude $D_{\\nu,\\phi}(x) = D_{\\phi \\circ \\nu,\\id_{\\IZ}}(x)$ from Lemma~\\ref{lem:D_(mu,psi_circ_omega_is_D_(omega_circ_nu,psi)}. \nNow one easily checks $D_{\\phi \\circ \\nu,\\id_{\\IZ}}(x) = \\frac{1}{2} \\cdot \\deg(x)$ by inspecting the definitions, since for a polynomial\n$\\sum_{k = k_-}^{k_+} z_k u^k$ in one variable $u$ with $z_{k_-} ,z_{k_+} \\not= 0$ its Newton polytope is the interval\n$[k_-,k_+] \\subseteq \\IR$. (The factor $1\/2$ comes from the factor $1\/2$ in~\\eqref{seminorm_of_a_polytope}.)\n\\end{proof}\n\n\n\nThere is a Dieudonn\\'e determinant for invertible matrices over a skew field $D$ which takes\nvalues in the abelianization of the group of units $D^{\\times}_{\\abel}$ \nand induces an isomorphism, see~\\cite[Corollary~4.3 on page~133]{Silvester(1981)} \n\\begin{eqnarray}\n{\\det}_D \\colon K_1(D) \n& \\xrightarrow{\\cong} &\nD^{\\times}_{\\abel}\n\\label{K_1(K)_Dieudonne}\n\\end{eqnarray}\nThe inverse \n\\begin{eqnarray}\nJ_D \\colon D^{\\times}_{\\abel} & \\xrightarrow{\\cong} & K_1(D) \n\\label{K_1(K)_Dieudonne_inverse}\n\\end{eqnarray}\nsends the class of a unit in $D$ to the class of the corresponding $(1,1)$-matrix. \n\n\n\\begin{lemma} \\label{matrices_overcald(K_phi_circ_nu)_t(u,u(-1))}\nLet $A$ be an $(n,n)$-matrix over $\\cald(K_{\\phi \\circ \\nu})_{t}[u^{\\pm 1}]$ \nwhich becomes invertible over $\\cald(G)$.\nThen the composite of the homomorphisms defined in~\\eqref{D_(nu,phi)} and~\\eqref{K_1(K)_Dieudonne} \n\\[K_1(\\cald(G)) \\xrightarrow{\\det_{\\cald(G)}} \\cald(G)^{\\times}_{\\abel} \n\\xrightarrow{D_{\\nu,\\phi}} \\IR\n\\]\nsends the class $[A] \\in K_1(\\cald(G))$ of $A$ to\n\\[\n\\tmfrac{1}{2}\\dim_{\\cald(K_{\\phi \\circ \\nu})}\\bigl(\\coker\\bigl(r_A \\colon \\cald(K_{\\phi \\circ \\nu})_{t}[u^{\\pm 1}]^n \\to \\cald(K_{\\phi \\circ \\nu})_{t}[u^{\\pm 1}]^n\\bigr)\\bigr).\n\\]\n\\end{lemma}\n\n\\begin{proof}\nThe twisted polynomial ring $\\cald(K_{\\phi \\circ \\nu})_{t}[u]$ has a Euclidean function given by\nthe degree and hence there is a Euclidean algorithm with respect to it. 
This algorithm\nallows us to transform $A$ to a diagonal matrix over $\\cald(K_{\\phi \\circ \\nu})_{t}[u^{\\pm 1}]$ \nby the following operations:\n\\begin{enumerate}\n\\item \nPermute rows or columns;\n\n\\item \nMultiply a row on the right or a column on the left\nwith an element of the shape $y u^m$ for some $y \\in \\cald(K_{\\phi \\circ \\nu})$ with $y \\not = 0$ and $m \\in \\IZ$;\n\n\\item \nAdd a right $\\cald(K_{\\phi \\circ \\nu})_{t}[u^{\\pm 1}]$-multiple of a row to another row and analogously for columns;\n\\end{enumerate}\nThese operations change the class $[A]$ of $A$ in $K_1(\\cald(G))$ by adding an element of the shape $J_{\\cald(G)}([yu^m])$ for \n$y \\in \\cald(K_{\\phi \\circ \\nu})$ with $y \\not = 0$ and $m \\in \\IZ$ for the homomorphism \n$J_{\\cald(G)}$ of~\\eqref{K_1(K)_Dieudonne_inverse}. Moreover, they do not change the \nkernel or the cokernel of $r_A$ since $yu^m$ is a unit in $\\cald(K_{\\phi \\circ \\nu})_{t}[u^{\\pm 1}]$.\nSince $D_{\\nu,\\phi}([yu^m]) = 0$ follows from Lemma~\\ref{lem:image_of_an_element_x_under_D_nu,phi}, \nit suffices to treat the special case \n where $A$ is a diagonal matrix over $\\cald(K_{\\phi \\circ \\nu})_{t}[u^{\\pm 1}]$\nwith non-trivial entries on the diagonal.\n\nLet $x$ be the product of the diagonal entries of the diagonal matrix $A$. \nWe get in $\\cald(G)^{\\times}_{\\abel}$\n\\[\n{\\det}_{\\cald(G)}([A]) = [x].\n\\]\nUsing the obvious exact sequence for $\\cald(K_{\\phi \\circ \\nu})_{t}[u^{\\pm 1}]$-maps \n$f_1 \\colon M_0 \\to M_1 $ and $f_2 \\colon M_1 \\to M_2$\n\\[\n0 \\to \\ker(f_1) \\to \\ker(f_2 \\circ f_1) \\to \\ker(f_2) \\to \\coker(f_1) \\to \\coker(f_2 \\circ f_1) \\to \\coker(f_2) \\to 0,\n\\]\nwe conclude \n\\[\n\\dim_{\\cald(K_{\\phi \\circ \\nu})}(\\coker(r_A)) = \\dim_{\\cald(K_{\\phi \\circ \\nu})}\\bigl(\\coker(r_x \\colon \\cald(K_{\\phi \\circ \\nu})_{t}[u^{\\pm 1}] \\to \\cald(K_{\\phi \\circ \\nu})_{t}[u^{\\pm 1}])\\bigr).\n\\]\nWe conclude from Lemma~\\ref{lem:degree_and_rank} and Lemma~\\ref{lem:image_of_an_element_x_under_D_nu,phi}\n\\[\n\\tmfrac{1}{2}\\dim_{\\cald(K_{\\phi \\circ \\nu})}(\\coker(r_x)) \\,\\,=\\,\\,\\tmfrac{1}{2} \\deg(x)\\,\\, =\\,\\, D_{\\nu,\\phi}([x]) \\,\\,=\\,\\, D_{\\nu,\\phi} \\circ {\\det}_{\\cald(G)}([A]).\n\\]\nThis finishes the proof of Lemma~\\ref{matrices_overcald(K_phi_circ_nu)_t(u,u(-1))}.\n\\end{proof}\n\n\n\n\\begin{lemma} \\label{continuity_of(mu,phi)-L2-Euler_in_phi}\n Let $M$ be an admissible $3$-manifold. Let $G$ be a torsion-free group\n which satisfies the Atiyah Conjecture. Consider any\n factorization $\\pi \\xrightarrow{\\mu} G \\xrightarrow{\\nu} H_1(M)_f$ of the canonical\n projection $\\pi \\to H_1(M)_f$. Assume that $b_1^{(2)}(\\overline{M};\\caln(G)) = 0$ holds for the $G$-covering \n $\\overline{M} \\to M$ associated to $\\mu$.\n\n Then there exist two seminorms $s_1$ and $s_2$ on $H^1(M;\\IR)$\n such that we get for every $\\phi \\in H^1(M;\\IZ) = \\Hom_{\\IZ}(H_1(M)_f;\\IZ)$ \n \\[\n \\chi^{(2)}(M;\\mu,\\phi \\circ \\nu) \\,\\, =\\,\\, s_1(\\phi) - s_2(\\phi).\n \\]\n\\end{lemma}\n\\begin{proof}\n We treat only the case where $\\partial M$ is non-empty; the case of empty $\\partial M$ is completely analogous.\n Let $x_1, x_2, \\ldots , x_a$ be the elements in $G$ and $A$ be the $(a-1,a-1)$-matrix over $\\IZ G$ appearing in\n Theorem~\\ref{the:calculation_of_(mu,phi)_L2-Euler_characteristic_from_a_presentation}\n\\eqref{the:calculation_of_(mu,phi)_L2-Euler_characteristic_from_a_presentation:non-empty}. \n (Notice that they are independent of $\\phi$.) 
We conclude from\n Theorem~\\ref{the:calculation_of_(mu,phi)_L2-Euler_characteristic_from_a_presentation}\n\\eqref{the:calculation_of_(mu,phi)_L2-Euler_characteristic_from_a_presentation:non-empty}\nthat for any surjective group homomorphism $\\phi \\colon H_1(M)_f \\to \\IZ$ we have\n \\begin{multline*}\n \\chi^{(2)}(M;\\mu,\\phi \\circ \\nu) = |\\phi \\circ \\nu \\circ \\mu(x_i)|\n \\\\\n - \\dim_{\\cald(K_{\\phi \\circ \\nu})}\\bigl(\\coker\\bigl(r_A \\colon \\cald(K_{\\phi \\circ \\nu})_{t}[u^{\\pm 1}]^n \n \\to \\cald(K_{\\phi \\circ \\nu})_{t}[u^{\\pm 1}]^n\\bigr)\\bigr).\n\\end{multline*}\n Choose two seminorms $s_1$ and $s_2$ such that the image of the class $2\\cdot [A]$ in $K_1(\\cald(G))$ under the composite\n \\[\n K_1(\\cald(G)) \\xrightarrow{\\det_{\\cald(G)}} \\cald(G)^{\\times}_{\\abel} \n \\xrightarrow{\\IP'_{\\nu}} \\calp_{\\IZ}(H_1(M)_f) \\xrightarrow{\\sn} \\calsn(H_1(M)_f)\n \\]\n is $s_2 - s_1$. We get from the definitions that\n for any surjective group homomorphism $\\phi \\colon H_1(M)_f \\to \\IZ$ \n the image of $2\\cdot [A]$ under the composite\n $K_1(\\cald(G)) \\xrightarrow{\\det_{\\cald(G)}} \\cald(G)^{\\times}_{\\abel} \\xrightarrow{D_{\\nu,\\phi}} \\IR$ equals $s_2(\\phi) - s_1(\\phi)$.\n We conclude from Lemma~\\ref{matrices_overcald(K_phi_circ_nu)_t(u,u(-1))}\n \\[\n \\chi^{(2)}(M;\\mu,\\phi \\circ \\nu) = s_1(\\phi) + |\\phi \\circ \\nu \\circ \\mu(x_i)| - s_2(\\phi),\n \\]\n provided that $\\phi$ is surjective. \n We conclude from Lemma~\\ref{lem:reduction_to_surjective_mu} that the last equation holds for\n every group homomorphism $\\phi \\colon H_1(M)_f \\to \\IZ$.\n Since the assignment $\\phi \\mapsto |\\phi \\circ \\nu \\circ \\mu(x_i)|$ is itself a seminorm on $H^1(M;\\IR)$,\n the claim follows after replacing $s_1$ by the sum of $s_1$ and this seminorm.\n\\end{proof}\n\n\n\n\n\n\n\n\\subsection{The quasi-fibered case}\n\\label{subsec:The_quasi-fibered_case}\n\n\\begin{definition}[Fibered and quasi-fibered]\n \\label{def:fibered_and_quasi_fibered}\n Let $M$ be a $3$-manifold and consider an element $\\phi\\in H^1(M;\\IQ)$. We say that\n $\\phi$ is \\emph{fibered} if there exists a locally trivial fiber bundle $p\\colon M\\to S^1$ \n and $k \\in \\IQ$, $k > 0$ such that the induced map\n $p_*\\colon \\pi_1(M)\\to \\pi_1(S^1)=\\IZ $ coincides with $k \\cdot \\phi$. We call an\n element $\\phi \\in H^1(M;\\IR)$ \\emph{quasi-fibered} if there exists a sequence of\n fibered elements $\\phi_n \\in H^1(M;\\IQ)$ converging to $\\phi$ in $H^1(M;\\IR)$.\n\\end{definition}\n\n\n\n\n\\begin{theorem}[Equality of $(\\mu,\\phi)$-$L^2$-Euler characteristic and the\n Thurston norm in the quasi-fibered case]\n \\label{the:Equality_of_(mu,phi)-L2-Euler_characteristic_and_the_Thurston_norm_in_the_quasi-fibered_case}\n Let $M$ be an admissible $3$-manifold which is not homeomorphic to \n $S^1 \\times D^2$. Let $G$ be a torsion-free group which satisfies the\n Atiyah Conjecture. Consider any factorization \n $\\pr_M \\colon \\pi_1(M) \\xrightarrow{\\mu} G \\xrightarrow{\\nu} H_1(M)_f$ of the canonical projection \n $\\pr_M$. Let $\\phi \\colon H_1(M)_f \\to \\IZ$ be a quasi-fibered homomorphism. 
\n\n Then $(\\mu,\\phi \\circ \\nu)$ is an $L^2$-acyclic Atiyah-pair and we get\n \\[\n -\\chi^{(2)}(M;\\mu,\\phi \\circ \\nu)\\,\\, =\\,\\, x_M(\\phi).\n \\]\n\\end{theorem}\n\\begin{proof} \nChoose a sequence of fibered elements $\\phi_n \\in H^1(M;\\IQ)$ converging to $\\phi$ in $H^1(M;\\IR)$.\nFor each $n$ choose a locally trivial fiber bundle \n$F_n \\to M \\xrightarrow{p_n} S^1$ and a $k_n \\in \\IQ$, $k_n > 0$ such that the induced map\n$(p_n)_*\\colon \\pi_1(M)\\to \\pi_1(S^1)=\\IZ $ coincides with $k_n \\cdot \\phi_n$.\nSince there is a finite covering $F_n \\to F_n'$ for a connected surface $F_n'$ and\na locally trivial fiber bundle $F_n' \\to M \\to S^1$, and since \n$M$ is not homeomorphic to $S^1 \\times S^2$ or $S^1 \\times D^2$,\nwe have $\\chi(F_n') \\le 0$ and hence $\\chi(F_n) \\le 0$.\nWe conclude from~\\eqref{fiber_bundles_Thurston_norm}, Example~\\ref{exa_mapping_torus},\nTheorem~\\ref{the:Status_of_the_Atiyah_Conjecture}~\\eqref{the:Status_of_the_Atiyah_Conjecture:3-manifold_not_graph},\nand~\\cite[Theorem~2.1]{Lueck(1994b)} that \n$(\\mu,(k_n \\cdot \\phi_n) \\circ \\nu)$ is an $L^2$-acyclic Atiyah-pair for $M$ and\n\\begin{equation}\n-\\chi^{(2)}(M;\\mu,(k_n \\cdot \\phi_n) \\circ \\nu) = -\\chi(F_n) = x_{M}(k_n \\cdot \\phi_n).\n\\label{Coman}\n\\end{equation}\nLet $s_1$ and $s_2$ be the two seminorms appearing in Lemma~\\ref{continuity_of(mu,phi)-L2-Euler_in_phi}.\nRecall that we have for every $\\psi \\in H^1(M;\\IZ)$\n\\begin{equation}\n\\chi^{(2)}(M;\\mu,\\psi \\circ \\nu) \\,\\, =\\,\\, s_1(\\psi) - s_2(\\psi).\n\\label{Costa}\n\\end{equation}\nSince any seminorm on $H^1(M;\\IR)$ is continuous, we get\n\\begin{multline*}\nx_M(\\phi) \n= \n\\lim_{n \\to \\infty} x_M(\\phi_n)\n= \n\\lim_{n \\to \\infty} \\smfrac{x_M(k_n \\cdot \\phi_n)}{k_n}\n\\\\\n\\stackrel{\\eqref{Coman}}{=}\n\\lim_{n \\to \\infty} \\smfrac{-\\chi^{(2)}(M;\\mu,(k_n \\cdot \\phi_n) \\circ \\nu)}{k_n}\n\\stackrel{\\eqref{Costa}}{=}\n\\lim_{n \\to \\infty} \\smfrac{-s_1(k_n \\cdot \\phi_n) +s_2(k_n \\cdot \\phi_n)}{k_n}\n\\\\\n= \n\\lim_{n \\to \\infty} -s_1(\\phi_n) +s_2(\\phi_n)\n = \n-s_1(\\phi) +s_2(\\phi)\n\\stackrel{\\eqref{Costa}}{=} \n-\\chi^{(2)}(M;\\mu,\\phi \\circ \\nu).\\qedhere\n\\end{multline*}\n\\end{proof}\n\n\n\n\n\\subsection{Proof of Theorem~\\ref{the:Equality_of_(mu,phi)-L2-Euler_characteristic_and_the_Thurston_norm}}\n\\label{subsec:Proof_of_Theorem_ref(the:Equality_of_(mu,phi)-L2-Euler_characteristic_and_the_Thurston_norm)}\n\n\n\n\\begin{proof}[Proof of Theorem~\\ref{the:Equality_of_(mu,phi)-L2-Euler_characteristic_and_the_Thurston_norm}]\nThis is a variation of the proof of~\\cite[Theorem~5.1]{Friedl-Lueck(2015l2+Thurston)}. For the reader's convenience\nwe give some details here as well.\n\nAs explained in~\\cite[Section~10]{Dubois-Friedl-Lueck(2014Alexander)}, we conclude from\n combining~\\cite{Agol(2008),Agol(2013),Liu(2013),Przytycki-Wise(2012),Przytycki-Wise(2014),Wise(2012raggs),Wise(2012hierachy)}\n that there exists a finite regular\n covering $p \\colon N \\to M$ such that for any $\\phi \\in H^1(M;\\IR)$ its pullback\n $p^*\\phi \\in H^1(N;\\IR)$ is quasi-fibered. Let $k$ be the number of sheets of $p$.\n Let $\\pr_N \\colon \\pi_1(N) \\to H_1(N)_f$ be\n the canonical projection. Its kernel is a characteristic subgroup of $\\pi_1(N)$. 
The\n regular finite covering $p$ induces an injection $\\pi_1(p) \\colon \\pi_1(N) \\to \\pi_1(M)$\n such that the image of $\\pi_1(p)$ is a normal subgroup of $\\pi_1(M)$ of finite index.\n Let $\\Gamma$ be the quotient of $\\pi_1(M)$ by the normal subgroup\n $\\pi_1(p)(\\ker(\\pr_N))$. Let $\\alpha \\colon \\pi_1(M) \\to \\Gamma$ be the\n projection. Since $\\pi_1(p)(\\ker(\\pr_N))$ is contained in the kernel of the canonical\n projection $\\pr_M \\colon \\pi_1(M)\\to H_1(M)_f$ because of \n $H_1(p;\\IZ)_f \\circ \\pr_N = \\pr_M \\circ \\, \\pi_1(p)$, there exists precisely one epimorphism \n $\\beta \\colon \\Gamma \\to H_1(M)_f$ satisfying $\\pr_M = \\beta \\circ \\alpha$.\n Moreover, $\\alpha \\circ \\pi_1(p)$ factorizes over $\\pr_M$ into an injective homomorphism\n $j \\colon H_1(M)_f \\to \\Gamma$ with finite cokernel. \n Hence $\\Gamma$ is virtually finitely generated free abelian.\n\n\n\n Consider a factorization of $\\alpha \\colon \\pi_1(M) \\to \\Gamma$\ninto group homomorphisms $\\pi_1(M) \\xrightarrow{\\mu} G \\xrightarrow{\\nu} \\Gamma$ \nfor a group $G$ which satisfies the Atiyah Conjecture. Let $G' $ be the quotient of\n$\\pi_1(N)$ by the normal subgroup $\\pi_1(p)^{-1}(\\ker(\\mu))$ and\n$\\mu' \\colon \\pi_1(N) \\to G'$ be the projection. Since the kernels of\n$\\mu'$ and of $\\mu \\circ \\pi_1(p)$ agree, there is precisely\none injective group homomorphism $i \\colon G' \\to G$ satisfying \n$\\mu \\circ \\pi_1(p) = i \\circ \\mu'$. The kernel of $\\mu'$ is contained in the kernel of \n$\\pr_N \\colon \\pi_1(N) \\to H_1(N)_f$ since $j$ is injective and we have\n$j \\circ \\pr_N = \\nu \\circ i \\circ \\mu'$. Hence there is precisely one group homomorphism\n$\\nu' \\colon G' \\to H_1(N)_f$ satisfying $\\nu' \\circ \\mu' = \\pr_N$. 
One easily checks that \nthe following diagram commutes, that all vertical arrows are injective, that the indices \n$[\\pi_1(M): \\im(\\pi_1(p))]$ and $[\\Gamma: \\im(j)]$ are finite,\nand that $\\mu'$, $\\nu'$ and $\\beta$ are surjective.\n\\[\n\\xymatrix{\\pi_1(N) \\ar[r]^-{\\mu'} \\ar[d]^{\\pi_1(p)} \\ar@\/^{5mm}\/[rr]^{\\pr_N}\n& \nG' \\ar[r]^-{\\nu'} \\ar[d]^i\n& \nH_1(N)_f \\ar[rd]^{H_1(p)_f} \\ar[d]^j\n\\\\\n\\pi_1(M) \\ar[r]^-{\\mu} \\ar@\/_{5mm}\/[rr]_{\\alpha} \\ar@\/_{10mm}\/[rrr]_{\\pr_M}\n& \nG \\ar[r]^{\\nu}\n&\n\\Gamma \\ar[r]^-{\\beta}\n& \nH_1(M)_f \n}\n\\]\n\nSince $G$ satisfies the Atiyah Conjecture, the group $G'$ \nsatisfies the Atiyah Conjecture by \nTheorem~\\ref{the:Status_of_the_Atiyah_Conjecture}~\\eqref{the:Status_of_the_Atiyah_Conjecture:subgroups}.\n\nSince $\\ker(\\mu) \\subseteq \\ker(\\alpha) \\subseteq \\im(\\pi_1(p))$ holds, we get $[G:G'] = k$ and conclude \nfrom~\\eqref{finite_coverings_Thurston_norm} and from\nLemma~\\ref{the:Basic_properties_of_the_phi_L2-Euler_characteristic_for_universal_coverings}\n\\eqref{the:Basic_properties_of_the_phi_L2-Euler_characteristic_for_universal_coverings:finite_coverings}\n\\begin{eqnarray*}\n -\\chi^{(2)}(M;\\mu,\\phi \\circ \\beta \\circ \\nu) \n& = & \n -\\lmfrac{\\chi^{(2)}(N;\\mu',p^*\\phi \\circ \\nu')}{k};\n\\\\\nx_M(\\phi) \n& = & \n\\lmfrac{x_N(p^*\\phi)}{k}.\n\\end{eqnarray*}\nWe get from\nTheorem~\\ref{the:Equality_of_(mu,phi)-L2-Euler_characteristic_and_the_Thurston_norm_in_the_quasi-fibered_case}\napplied to $N$, $\\mu'$, $\\nu'$ and $p^*\\phi$\n\\[\n -\\chi^{(2)}(N;\\mu',p^*\\phi \\circ \\nu')\\,\\, =\\,\\, x_N(p^*\\phi).\n\\]\nHence we get \n\\[\n -\\chi^{(2)}(M;\\mu,\\phi \\circ \\beta \\circ \\nu)\\,\\,=\\,\\, x_M(\\phi).\n\\]\nThis finishes the proof of Theorem~\\ref{the:Equality_of_(mu,phi)-L2-Euler_characteristic_and_the_Thurston_norm}.\n\\end{proof}\n\n\n\n\n\n\n \\typeout{---- Section 7: Epimorphism of fundamental groups and the Thurston norm}\n\n\\section{Epimorphism of fundamental groups and the Thurston norm}\n\\label{sec:Epimorphism_of_fundamental_groups_and_the_Thurston_norm}\n\n\n\n\nOn several occasions we will use the following lemma.\n\n \\begin{lemma} \\label{lem:elementary_amenable_and_ore} \n Let $G$ be a torsion-free elementary amenable group.\n Then the Ore localization \n $T^{-1} \\IQ G$ for $T$ the set of non-trivial elements in $\\IQ G$ exists\n and agrees with the skew field $\\cald(G)$. In particular $\\cald(G)$ is flat over $\\IQ G$.\n Moreover, $G$ satisfies the Atiyah Conjecture and we get for every finitely generated free $\\IQ G$-chain complex\n \\[\n b_n^{(2)}(\\caln(G) \\otimes_{\\IQ G} C_*) \\,\\, =\\,\\, \\dim_{\\cald(G)}(H_n(\\cald(G) \\otimes_{\\IQ G} C_*)).\n \\] \n\\end{lemma}\n\n \\begin{proof} This follows from \n Theorem~\\ref{the:Status_of_the_Atiyah_Conjecture}~\\eqref{the:Status_of_the_Atiyah_Conjecture:Linnell},\n Theorem~\\ref{the:Main_properties_of_cald(G)}~\\eqref{the:Main_properties_of_cald(G):dim}\n and~\\cite[Example~8.16 on page~324 and Lemma~10.16 on page~376]{Lueck(2002)}.\n The proofs there deal only with $\\IC$, but carry over without changes to any field $F$ with $\\IQ \\subseteq F \\subseteq \\IC$.\n\\end{proof}\n\n \n\n\n\\begin{theorem} \\label{the:localization_down_to_Z_versus_up_to_U(G)_new} Let $G$ be a group\n which is residually a locally indicable elementary amenable group. 
Let $f_* \\colon C_* \\to\n D_*$ be a $\\IQ G$-chain map of finitely generated free $\\IQ G$-chain complexes.\n Suppose that $\\id_{\\IQ} \\otimes_{\\IQ G} f_* \\colon \\IQ\\otimes_{\\IQ G} C_* \\to\n \\IQ\\otimes_{\\IQ G} D_*$ induces an isomorphism on homology. Then we get for $n \\ge 0$\n\\[\nb_n^{(2)}(\\caln(G) \\otimes_{\\IQ G} C_*)\\,\\, =\\,\\, b_n^{(2)}(\\caln(G) \\otimes_{\\IQ G} D_*).\n\\]\n\\end{theorem}\n\\begin{proof}\n By considering the mapping cone, it suffices to show for a finitely generated free\n $\\IQ G$-chain complex $C_*$ that $b_n^{(2)}(\\caln(G) \\otimes_{\\IQ G} C_*)$ vanishes \nfor all $n \\ge 0$ if $H_n(\\IQ \\otimes_{\\IQ G} C_*)$ vanishes for all $n \\ge 0$. \n \nSince $G$ is residually a torsion-free locally indicable elementary amenable group,\nthere exists a sequence of epimorphisms $G\\to G_i$, $i\\in \\IN$ onto \ntorsion-free locally indicable elementary amenable groups such that the intersections of the kernels is trivial. We\nconclude from~\\cite{Clair(1999)},~\\cite[Theorem~1.14]{Schick(2001b)} \nor~\\cite[Theorem~13.3 on page~454]{Lueck(2002)}\n\\begin{eqnarray*}\nb_n^{(2)}(\\caln(G) \\otimes_{\\IQ G} C_*) & = & \n\\lim_{i \\in \\IN} \\;b_n^{(2)}(\\caln(G_i) \\otimes_{\\IQ G_i} \\IQ G_i \\otimes_{\\IQ G} C_*);\n\\\\\n\\IQ \\otimes_{\\IQ G} C_* & \\cong & \\IQ \\otimes_{\\IQ G_i} \\IQ G_i \\otimes_{\\IQ G} C.\n\\end{eqnarray*}\nIt follows that without loss\nof generality we can assume that $G$ itself is torsion-free locally indicable elementary amenable.\n\n\nThere is an involution of rings $\\IQ G \\to \\IQ G$ sending $\\sum_{g \\in G} \\lambda_g \\cdot g$ \nto $\\sum_{g \\in G} \\lambda_g \\cdot g^{-1}$. In the sequel we equip each\n$C_n$ with a $\\IQ G$-basis. With respect to this involution and $\\IQ G$-basis one can define the\ncombinatorial Laplace operator $\\Delta_n \\colon C_n \\to C_n$ which is the $\\IQ G$-linear\nmap given by $c_{n+1} \\circ c_n^* + c_{n-1}^* \\circ c_{n-1}$. Since the augmentation\nhomomorphism $\\IQ G \\to \\IQ$ sending $\\sum_{g \\in G} \\lambda_g \\cdot g $ to \n$\\sum_{g \\in G} \\lambda_g$ is compatible with the involution, \n$\\id_{\\IQ} \\otimes_{\\IQ G} \\Delta_n \\colon \\id_{\\IQ} \\otimes_{\\IQ G} C_n \\to \\id_{\\IQ} \\otimes_{\\IQ G} C_n$ \nis the combinatorial Laplace operator for $\\IQ \\otimes_{\\IQ G} C_*$. We conclude\nfrom~\\cite[Lemma~1.18 on page~24 and Theorem~6.25 on page~249]{Lueck(2002)}\n\\begin{eqnarray*}\nb_n^{(2)}(\\caln(G) \\otimes_{\\IQ G} C_*) & = & \n\\dim_{\\caln(G)}(\\ker(\\id_{\\caln(G)} \\otimes_{\\IQ G} \\Delta_n));\n\\\\\n\\dim_{\\IQ}(H_n(\\IQ \\otimes_{\\IQ G} C_*) & = & \\dim_{\\IQ}(\\ker(\\id_{\\IQ} \\otimes_{\\IQ G} \\Delta_n)).\n\\end{eqnarray*}\nSince $G$ is amenable, we conclude \nfrom~\\cite[(6.74) on page~275,]{Lueck(2002)}\n\\begin{eqnarray*}\n\\dim_{\\caln(G)}(\\ker(\\id_{\\caln(G)} \\otimes_{\\IQ G} \\Delta_n))\n& = & \n\\dim_{\\caln(G)}(\\caln(G) \\otimes_{\\IQ G} \\ker(\\Delta_n)).\n\\end{eqnarray*}\nHence $b_n^{(2)}(\\caln(G) \\otimes_{\\IQ G} C_*) $ vanishes if $\\Delta_n$ is injective. The injectivity of $\\Delta_n$ follows from the\ninjectivity of $\\id_{\\IQ} \\otimes_{\\IQ G} \\Delta_n$ by~\\cite[Theorem~1]{Howie-Schneebeli(1983)} \nor~\\cite[Theorem~4.1]{Gersten(1983)}, since $G$ is locally indicable.\n\\end{proof}\n\n\\begin{theorem} \\label{the:Inequality} Let $f \\colon M \\to N$ be a map of admissible\n $3$-manifolds. Suppose that $\\pi_1(f)$ is surjective and $f$ induces an isomorphism \n $H_n(f;\\IQ) \\colon H_n(M;\\IQ) \\to H_n(N;\\IQ)$ for $n \\ge 0$. 
Suppose that $G$ is residually\n locally indicable elementary amenable. Let $\\mu \\colon \\pi_1(N) \\to G$, \n $\\nu \\colon G \\to H_1(N)_f$ and $\\phi \\colon H_1(N)_f \\to \\IZ$ be group \n homomorphisms. Let $\\overline{N} \\to N$ be the $G$-covering associated to $\\mu$ and\n $\\overline{M} \\to M$ be the $G$-covering associated to $\\mu \\circ \\pi_1(f)$. Suppose that\n $b_n^{(2)}(\\overline{N};\\caln(G))$ vanishes for $n \\ge 0$. \n\n Then $b_n^{(2)}(\\overline{M};\\caln(G))$ vanishes \n for $n \\ge 0$, $M$ is $(\\mu \\circ \\pi_1(f),\\phi \\circ \\nu)$-$L^2$-finite, \n $N$ is $(\\mu,\\phi \\circ \\nu)$-$L^2$-finite and we get\n \\[\n - \\chi^{(2)}(M;\\mu \\circ \\pi_1(f);\\phi \\circ \\nu) \\,\\,\\ge\\,\\, - \\chi^{(2)}(N;\\mu,\\phi \\circ \\nu).\n \\]\n\\end{theorem}\n\\begin{proof}\n Since a locally indicable group is torsion-free,\n $G$ is a residually torsion-free elementary amenable group and hence satisfies the Atiyah Conjecture\n by Theorem~\\ref{the:Status_of_the_Atiyah_Conjecture}~\\eqref{the:Status_of_the_Atiyah_Conjecture:approx}.\n Because of Lemma~\\ref{lem:reduction_to_surjective_mu} and \n Theorem~\\ref{the:Status_of_the_Atiyah_Conjecture}~\\eqref{the:Status_of_the_Atiyah_Conjecture:subgroups}\n we can assume without loss of generality\n that $\\mu$ and $\\phi \\circ \\nu$ are epimorphisms.\n Theorem~\\ref{the:localization_down_to_Z_versus_up_to_U(G)_new} implies that\n $b_n^{(2)}(\\overline{M};\\caln(G))$ vanishes for $n \\ge 0$. We conclude from \n Theorem~\\ref{the:Atiyah_and_(mu,phi)-L2-Euler_characteristic}\n that $M$ is $(\\mu \\circ \\pi_1(f),\\phi \\circ \\nu)$-$L^2$-finite and $N$ is $(\\mu,\\phi\\circ \\nu)$-$L^2$-finite. \n \n Since $\\pi_1(f)$ is surjective and hence the $G$-map $\\overline{f} \\colon \\overline{M} \\to\n \\overline{N}$ induced by $f$ is $1$-connected, we get \n $b_1^{(2)}(i^*\\overline{M};\\caln(K)) \\ge b_1^{(2)}(i^* \\overline{N};\\caln(K))$ for the inclusion \n $i \\colon K = \\ker(\\phi \\circ \\nu) \\to G$. If $\\phi \\circ \\nu \\circ \\mu = 0$, we conclude\n $\\chi^{(2)}(M;\\mu \\circ \\pi_1(f);\\phi \\circ \\nu) = \\chi^{(2)}(N;\\mu,\\phi \\circ \\nu) = 0$ from \n Lemma~\\ref{lem:reduction_to_surjective_mu}\n \\eqref{lem:reduction_to_surjective_mu:mu_circ_phi_is_trivial} and the claim follows.\n Hence we can assume without loss of generality that $\\phi \\circ \\nu \\circ \\mu$ is not trivial.\n \n We begin with the case $\\im(\\mu) \\cap \\ker(\\phi \\circ \\nu) \\not= \\{1\\}$. Then also \n $\\im(\\mu \\circ \\pi_1(f)) \\cap \\ker(\\phi \\circ \\nu) \\not= \\{1\\}$. We conclude from \n Theorem~\\ref{the:The_(mu,phi)-L2-Euler_characteristic_in_terms_of_the_first_homology} \n \\begin{eqnarray*}\n -\\chi^{(2)}(N;\\mu,\\phi \\circ \\nu) & = & b_1^{(2)}(i^*\\overline{N};\\caln(K));\n \\\\\n -\\chi^{(2)}(M;\\mu \\circ \\pi_1(f);\\phi \\circ \\nu) & = & b_1^{(2)}(i^*\\overline{M};\\caln(K)).\n \\end{eqnarray*}\n and Theorem~\\ref{the:Inequality} follows. 
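For clarity, this step can be spelled out: combining the inequality $b_1^{(2)}(i^*\\overline{M};\\caln(K)) \\ge b_1^{(2)}(i^* \\overline{N};\\caln(K))$ established above with the two displayed equalities gives\n \\[\n - \\chi^{(2)}(M;\\mu \\circ \\pi_1(f),\\phi \\circ \\nu) \\,\\,=\\,\\, b_1^{(2)}(i^*\\overline{M};\\caln(K)) \\,\\,\\ge\\,\\, b_1^{(2)}(i^* \\overline{N};\\caln(K)) \\,\\,=\\,\\, - \\chi^{(2)}(N;\\mu,\\phi \\circ \\nu),\n \\]\n which is the asserted inequality in this case.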
\n\n It remains to treat the case, where $\\im(\\mu) \\cap \\ker(\\phi \\circ \\nu) = \\{1\\}$.\n Then $\\phi \\circ \\nu \\colon G \\to \\IZ$\n is an injection and hence a bijection, $K = \\{1\\}$, and we get from Example~\\ref{exa:infinite_cyclic_covering}\n \\begin{eqnarray*}\n - \\chi^{(2)}(M;\\mu \\circ \\pi_1(f),\\phi \\circ \\nu) \\,\\,=\\,\\, \n \\begin{cases} \n \\dim_{\\IQ}\\bigl(H_1(\\overline{M};\\IQ)\\bigr) - 1 & \\text{if}\\; \\partial M \\not= \\emptyset;\\\\\n \\dim_{\\IQ}\\bigl(H_1(\\overline{M};\\IQ)\\bigr) - 2 & \\text{if}\\; \\partial M = \\emptyset;\n \\end{cases}\n \\\\\n - \\chi^{(2)}(N;\\mu,\\phi \\circ \\nu)\\,\\, =\\,\\, \n \\begin{cases} \n \\dim_{\\IQ}\\bigl(H_1(\\overline{N};\\IQ)\\bigr) - 1 & \\text{if}\\; \\partial N \\not= \\emptyset;\\\\\n \\dim_{\\IQ}\\bigl(H_1(\\overline{N};\\IQ)\\bigr) - 2 & \\text{if}\\; \\partial N = \\emptyset.\n \\end{cases}\n\\end{eqnarray*}\n We already have shown $b_1^{(2)}(i^*\\overline{M};\\caln(K)) \\ge b_1^{(2)}(i^* \\overline{N};\\caln(K))$, which boils down\n in this special case to $\\dim_{\\IQ}\\bigl(H_1(\\overline{M};\\IQ)\\bigr) \\ge \\dim_{\\IQ}\\bigl(H_1(\\overline{N};\\IQ)\\bigr)$. We conclude\n from $H_3(M;\\IQ) \\cong H_3(N;\\IQ)$ that $\\partial M$ is empty if and only if $\\partial N$ is empty. This finishes the proof\n of Theorem~\\ref{the:Inequality}.\n \\end{proof}\n\n\n\n\n\n\n\n\\begin{theorem}[Inequality of the Thurston norm] \\label{the:Inequality_of_the_Thurston_norm}\nLet $f \\colon M \\to N$ be a map of admissible $3$-manifolds such that $\\pi_1(f)$ is surjective \nand $f$ induces an isomorphism \n$f_* \\colon H_n(M;\\IQ) \\to H_n(N;\\IQ)$ for $n \\ge 0$. Suppose that $\\pi_1(N)$ is residually \nlocally indicable elementary amenable. \n Then we get for any $\\phi\\in H^1(N;\\IR)$ that\n \\[\n x_M(f^*\\phi ) \\,\\,\\ge\\,\\, x_N(\\phi).\n \\]\n\\end{theorem}\n\n\\begin{proof}\nSince seminorms are continuous and homogeneous it suffices to prove the \nstatement for all primitive classes $\\phi\\in H^1(N;\\IZ)=\\Hom(\\pi_1(N),\\IZ)$. \nThe case $N = S^1 \\times D^2$ is trivial. Hence we can assume that $N\\ne S^1\\times D^2$. \n We conclude from Theorem~\\ref{the:Inequality} applied in the case $G = \\pi_1(N)$ and $\\mu= \\id_{\\pi_1(N)}$\n \\[\n -\\chi^{(2)}(M; \\pi_1(f),\\phi) \\,\\,\\ge\\,\\, -\\chi^{(2)}(\\widetilde{N};\\phi).\n \\]\n Theorem~\\ref{the:The_Thurston_norm_ge_the_(mu,phi)-L2-Euler_characteristic} implies \n \\[\n x_M(\\phi \\circ \\pi_1(f)) \\,\\,\\ge\\,\\,-\\chi^{(2)}(M; \\pi_1(f),\\phi)\n \\]\nand Theorem~\\ref{the:Equality_of_(mu,phi)-L2-Euler_characteristic_and_the_Thurston_norm_universal_covering_for_universal_coverings} implies that \n \\[\n -\\chi^{(2)}(\\widetilde{N};\\phi) \\,\\,=\\,\\, x_N(\\phi).\n \\]\n Now Theorem~\\ref{the:Inequality_of_the_Thurston_norm} follows.\n\\end{proof}\n\n\nThe following lemma shows that Theorem~\\ref{the:Inequality_of_the_Thurston_norm} applies in particular \nif the manifold $N$ is fibered. Since it is well known, we only provide a sketch of the proof.\n\n\n\\begin{lemma}\\label{lem:fibered-manifolds-acapl}\nThe fundamental group of any fibered 3-manifold is residually locally indicable elementary amenable.\n\\end{lemma}\n\n\n\\begin{proof}[Sketch of proof]\n Let $N$ be a fibered 3-manifold. Then $\\pi_1(N)\\cong \\IZ\\ltimes_\\varphi \\Gamma$ where\n $\\Gamma$ is a free group or a surface group and where $\\varphi\\colon \\Gamma\\to \\Gamma$\n is an automorphism. 
The derived series of $\\Gamma$ is defined by $\\Gamma^{(0)}=\\Gamma$\n and inductively by $\\Gamma^{(n+1)}=[\\Gamma^{(n)},\\Gamma^{(n)}]$. Since $\\Gamma$ is a\n free group or a surface group, each quotient $\\Gamma^{(n)}\/\\Gamma^{(n+1)}$ is free\n abelian and $\\bigcap_{n \\ge 1} \\Gamma^{(n)}=\\{1\\}$.\n\n The subgroups $\\Gamma^{(n)}$ are characteristic subgroups of $\\Gamma$, in particular\n they are preserved by $\\varphi$. Thus $\\varphi$ descends to an automorphism on\n $\\Gamma\/\\Gamma^{(n)}$. It is now straightforward to see that the epimorphisms\n $\\pi_1(N)=\\IZ\\ltimes \\Gamma\\to \\IZ\\ltimes \\Gamma\/\\Gamma^{(n)}$, $n\\in \\IN$ form a\n cofinal nested sequence of epimorphisms onto locally indicable elementary amenable\n groups.\n\\end{proof}\n\n\n\n\n\n \\typeout{---- Section 8: The $\\phi$-$L^2$-Euler characteristic and the degree of higher order Alexander polynomials}\n\n\\section{The $(\\mu,\\phi)$-$L^2$-Euler characteristic and the degree of non-commutative Alexander polynomials}\n\\label{sec:The_(mu,phi)-L2-Euler_characteristic_and_the_degree_of_higher_order_Alexander_polynomials}\n\n\n\n\n\n\nLet $M$ be an admissible $3$-manifold. Regard group homomorphisms $\\mu \\colon \\pi_1(M)\\to G$, \n$\\nu \\colon G \\to H_1(M)_f$ and $\\phi \\colon H_1(M)_f \\to \\IZ$ such that $\\nu\n\\circ \\mu$ is the projection $\\pi_1(M) \\to H_1(M)_f$ and $G$ is torsion-free elementary amenable.\nFor simplicity we discuss only the\ncase, where $\\phi$ is surjective. Let $\\overline{M} \\to M$ be the $G$-covering associated \nto $\\mu \\colon \\pi_1(M) \\to G$. Recall the following definition from\nHarvey~\\cite{Harvey(2005)} which extends ideas of Cochran~\\cite{Cochran(2004)}.\n(Actually they consider only certain solvable quotients $G$ of $\\pi_1(M)$ coming from the\nrational derived series, but their constructions apply directly to torsion-free elementary\namenable groups.) Let $T$ be the set of non-trivial elements in $\\IZ G$. As recorded\nalready in Lemma~\\ref{lem:elementary_amenable_and_ore}, the Ore localization $T^{-1}\\IZ G$\nis defined and is a skewfield. Define a natural number\n\\begin{eqnarray}\nr_n(M;\\mu) \n:= \n\\dim_{T^{-1}\\IZ G}\\bigl(H_n\\bigl(T^{-1}\\IZ G\\otimes_{\\IZ G} C_*(\\overline{M})\\bigr)\\bigr).\n\\label{r_n(M,mu)}\n \\end{eqnarray}\nLet $i \\colon K \\to G$ be the inclusion of the kernel of the composite $\\phi \\circ \\nu \\colon G \\to \\IZ$. \nIf $T$ is the set of non-zero elements in $\\IZ K$, we can\nagain consider its Ore localization $T^{-1}\\IZ K$ which is a skew field. 
We obtain an isomorphism\nfor an appropriate automorphism $t$ of $T^{-1}\\IZ K$, which comes from the conjugation automorphism\nof $K$ associated to a lift of a generator of $\\IZ$ to $G$, an isomorphism of skew-fields\n\\begin{equation}\n (T^{-1}\\IZ K)_{t}[u^{\\pm 1}] \\xrightarrow{\\cong} T^{-1}\\IZ G.\n\\label{Sanchez}\n\\end{equation}\nIf $r_1(M;\\mu,\\nu,\\phi)$ vanishes for all $n \\ge 0$, then we can define natural numbers\n\\begin{equation}\n\\delta_n(M;\\mu, \\nu, \\phi) \n:= \n\\dim_{T^{-1}\\IZ K}\\bigl(H_1(T^{-1}\\IZ G \\otimes_{\\IZ G} C_*(\\overline{M})\\bigr).\n\\label{delta_n(phi)_surjective_tors}\n\\end{equation}\nThis construction and the invariants above turn out to be special cases of the constructions defined in this paper.\nNamely, $K$ and $G$ satisfy the Atiyah Conjecture by \nTheorem~\\ref{the:Status_of_the_Atiyah_Conjecture}~\\eqref{the:Status_of_the_Atiyah_Conjecture:Linnell},\nand Lemma~\\ref{lem:elementary_amenable_and_ore} shows that we get identifications\n$T^{-1}\\IZ K = \\cald(K)$ and $T^{-1}\\IZ G = \\cald(G)$ under which the isomorphism~\\eqref{Sanchez}\ncorresponds to the isomorphism appearing in \nTheorem~\\ref{the:Main_properties_of_cald(G)}~\\eqref{the:Main_properties_of_cald(G):cald(K)_and_(cald(G)}.\nMoreover $r_n(M;\\nu)$ agrees with $b_n^{(2)}(\\overline{M};\\caln(G))$\nby Theorem~\\ref{the:Main_properties_of_cald(G)}~\\eqref{the:Main_properties_of_cald(G):dim}.\nHence Theorem~\\ref{the:Main_properties_of_cald(G)}~\\eqref{the:Main_properties_of_cald(G):chain} \nand Lemma~\\ref{lem:b_1(2)_is_zero_implies_L2-acyclic}~\\eqref{lem:b_1(2)_is_zero_implies_L2-acyclic:b_1}\nimply\n\n\n\\begin{theorem}[The $(\\mu,\\phi)$-$L^2$-Euler characteristic and $\\delta_n(M,\\mu,\\nu,\\phi)$]\n \\label{the:The_mu,phi_L2-Euler_characteristic_and_delta_n(phi)}\n Let $M$ be an admissible $3$-manifold. Consider group homomorphisms\n $\\mu \\colon \\pi_1(M) \\to G$, $\\nu \\colon G \\to H_1(M)_f$ and $\\phi \\colon H_1(M)_f \\to \\IZ$\nsuch that $\\nu \\circ \\mu$ is the projection $\\pi_1(M) \\to H_1(M)_f$ and $\\phi$ is surjective.\n\n Then $r_1(M;\\mu)$ vanishes if and only if $(\\mu,\\phi \\circ \\nu)$ is an $L^2$-Atiyah pair, and in this case we get\n \\[ \\chi^{(2)}(M;\\mu,\\phi \\circ \\nu)~=~-\\delta(M;\\mu,\\nu,\\phi).\\]\n\\end{theorem}\n\n\n\\begin{remark} \\label{rem:extending_Harveys_invariant} Another way of interpreting\n Theorem~\\ref{the:The_mu,phi_L2-Euler_characteristic_and_delta_n(phi)} is to say that our\n $L^2$-Euler characteristic invariant extends the original invariant due to\n Cochran, Harvey and the third author~\\cite{Cochran(2004),Harvey(2005),Friedl(2007)} to other\n coverings, in particular to the universal covering or to a $G$-covering for residually\n torsion-free elementary amenable group $G$ of an admissible $3$-manifold.\n\\end{remark}\n\n\n\n\nThe following lemma might also be of independent interest.\n\n\\begin{lemma} \\label{lem:G_can_be_arranged_to_be_torsion-free_elementary_amenable} Let\n $\\alpha \\colon \\pi_1(M) \\to \\Gamma$ be an epimorphism onto a group that is virtually torsion-free abelian. Then\n there exists a factorization of $\\alpha$ into group homomorphisms \n $\\pi \\xrightarrow{\\mu} G \\xrightarrow{\\nu} \\Gamma$ such that $G$ is torsion-free elementary amenable.\n\\end{lemma}\n\n\\begin{proof}\nLet $\\alpha \\colon \\pi_1(M) \\to \\Gamma$ be an epimorphism onto a group $\\Gamma$ that admits a finite index subgroup $F$ that is free abelian. 
After possibly going to the core of $F$ we can assume that $F$ is normal.\n\nSince $\\alpha^{-1}(F)$ is a finite-index subgroup of $\\pi_1(M)$ we conclude from~\\cite[Theorem~4.3]{Friedl-Schreve-Tillmann(2015)} that there is a\n torsion-free elementary amenable group $G'$ together with an epimorphism \n $\\mu' \\colon \\pi_1(M) \\to G'$ such that $\\ker(\\mu')\\subset \\alpha^{-1}(F)$. Define the epimorphism $\\mu \\colon \\pi_1(M) \\to G$ to be the\n projection onto the quotient $G = \\pi_1(M)\/\\bigl(\\ker(\\mu') \\cap \\ker(\\alpha)\\bigr)$. Obviously there is \n an epimorphism $\\nu \\colon G \\to \\Gamma$ such that $\\nu \\circ \\mu = \\alpha$ since $\\alpha$ is by \n construction the projection from $\\pi_1(M)$ to the quotient $\\Gamma = \\pi_1(M)\/\\ker(\\alpha)$.\n It remains to show that $G$ is torsion-free elementary amenable. We have the obvious exact sequence\n \\[\n 1 \\to \\ker(\\mu')\/\\bigl(\\ker(\\mu') \\cap \\ker(\\alpha)\\bigr) \\to G \\to G' \\to 1.\n \\]\n and obviously $\\alpha$ defines an injection\n \\[\\ker(\\mu')\/\\bigl(\\ker(\\mu') \\cap \\ker(\\alpha)\\bigr) \n\\hookrightarrow F.\n\\]\nSince $F$ and $G'$ are torsion-free elementary amenable, the same is true for $G$.\n\\end{proof}\n\n\nTheorems~\\ref{the:Equality_of_(mu,phi)-L2-Euler_characteristic_and_the_Thurston_norm} and~\\ref{the:The_mu,phi_L2-Euler_characteristic_and_delta_n(phi)}\nand Lemma~\\ref{lem:G_can_be_arranged_to_be_torsion-free_elementary_amenable} imply\nthat the non-commutative Reidemeister torsions of~\\cite{Friedl(2007)} detect the Thurston norm of most 3-manifolds.\n\n\\begin{corollary} \\label{cor:existence_of_torsion-free_elementary_coverings_with_equality} \nLet $M$ be a $3$-manifold, which is admissible, see Definition~\\ref{def:admissible_3-manifold},\nis not a closed graph manifold and is not homeomorphic to $S^1 \\times D^2$. \nThen there is a torsion-free elementary amenable group $G$\nand a factorization $\\pr_M \\colon \\pi_1(M) \\xrightarrow{\\mu} G \\xrightarrow{\\nu} H_1(M)_f$\nof the canonical projection $\\pr_M$ into epimorphisms\nsuch that for any group homomorphism $\\phi \\colon H_1(M)_f \\to \\IZ$, \nthe pair $(\\mu, \\phi \\circ \\nu)$ is an $L^2$-acyclic Atiyah-pair and we get\n\\[\n \\delta(M;\\mu,\\nu,\\phi)~=~-\\chi^{(2)}(M;\\mu,\\phi \\circ \\nu)~=~x_M(\\phi).\n\\]\n\\end{corollary}\n\n\\begin{remark}\n The invariant $\\delta(M;\\mu,\\nu), \\phi$ of~\\cite{Friedl(2007)} is\n essentially the same as the Cochran-Harvey invariant~\\cite{Cochran(2004),Harvey(2005)},\n except that Cochran--Harvey only study solvable quotients of $\\pi_1(M)$. But as pointed\n out in~\\cite[Example~2.3]{Cochran(2004)}, in general invariants coming from solvable\n quotients do not suffice to detect the knot genus respectively the Thurston norm. It is\n necessary to work with the extra flexibility given by torsion-free elementary amenable\n groups.\n\\end{remark}\n\n\n\n\n\n\n \n\n\n\n \\typeout{---- Section 9: The degree of the $L^2$-torsion function}\n\n\n\\section{The degree of the $L^2$-torsion function}\n\\label{sec:The_degree_of_the_L2-torsion_function}\n\n\nIn~\\cite{Lueck(2015twisting)} the $\\phi$-twisted $L^2$-torsion function has been\nintroduced and analyzed for $G$-coverings of compact connected manifolds in all\ndimensions. In the sequel we will consider only $G$-coverings $\\overline{M} \\to M$ of\nadmissible $3$-manifolds $M$ for countable residually finite $G$. 
Then all the necessary\nconditions such as $\\det$-$L^2$-acyclicity for $\\overline{M}$ and the $K$-theoretic\nFarrell-Jones Conjecture for $\\pi_1(M)$ are automatically satisfied and do not have to be\ndiscussed anymore. One can assign to the $L^2$-torsion function by considering its asymptotic behavior at\ninfinity a real number called its degree and denoted by $\\deg\\bigl(\\rho^{(2)}(M;\\mu,\\phi)\\bigr)$.\nIf $G = \\pi_1(M)$ and $\\mu \\,\\,=\\,\\, \\id_{\\pi_1(M)}$, i.e., for the universal covering\n$\\widetilde{M}$, the equality\n\\[\n\\deg\\bigl(\\rho^{(2)}(M;\\mu,\\phi)\\bigr) = x_M(\\phi \\circ \\mu)\n\\]\nwas proved by the authors in~\\cite[Theorem~0.1]{Friedl-Lueck(2015l2+Thurston)} and\nindependently by Liu~\\cite{Liu(2015)}. Actually many more instances of $G$-coverings are\nconsidered in~\\cite[Theorem~5.1]{Friedl-Lueck(2015l2+Thurston)}, where this equality\nholds.\n\nWe just mention without proof the following theorem.\n\n\\begin{theorem}[The $(\\mu,\\phi)$-$L^2$-Euler characteristic is a lower bound for the\n degree of the $L^2$-torsion function]\n \\label{the:The_(mu,phi)-L2-Euler_characteristic_is_an_lower_bound_for_the_degree_of_the_L2-torsion_function}\n Let $M$ be an admissible $3$-manifold. Let $\\mu \\colon \\pi \\to G$ be a\n homomorphism to a torsion-free, elementary amenable, residually finite, countable group $G$ and\n $\\phi \\colon G \\to \\IZ$ be a group homomorphism. Let $\\overline{M} \\to M$ be the $G$-covering\n associated to $\\mu$. Suppose $b_1^{(2)}(\\overline{M};\\caln(G)) = 0$.\n Then $(\\mu,\\phi)$ is an $L^2$-acyclic Atiyah pair and we get\n \\[\n \\chi^{(2)}(M;\\mu,\\phi) \\le \\deg\\bigl(\\overline{\\rho}^{(2)}(M;\\mu,\\phi)\\bigr).\n \\]\n \\end{theorem}\n\n\n\n\n\n\\typeout{-------------------------------------- References ---------------------r------------------}\n\n\n\n\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nPhysicists like to think of `true' distributions of physics\nquantities, i.e those distributions \n(graphically represented by histograms and here also referred as `spectra') \none would observe under idealized conditions\nthat seldom -- never, strictly speaking -- happen in real live\n(ideal detector, no physical or instrumental background). \nThe `observed' distribution is then considered as a \n'noisy distortion' of the true one. \nAn important task of the experimental method is \ntherefore to infer the true distribution from the observed one, i.e.\nto correct the observed spectrum for distortion and noise. \nThis can be done by different methods that follow\ndifferent approaches.\nSince this is not a review paper, \nI just outline the two different classes of strategies and then \nfocus on the specific issues of this work. \n\nIn a first kind of approach a mathematical function for \nthe true distribution\nis assumed (together with other functions to model the noise)\nand the task becomes that of \nestimating the free parameters of the function(s). Indeed, we\nfall in the so called domain of {\\it parametric inference} (`fits'), \nusually associated to names like `least-squares' or \n'maximum likelihood'.\\footnote{However one can show that it is preferable\nto base the data analysis on more \nsolid probabilistic ground,\nfrom which the mentioned methods might be derived\n(see e.g. 
Ref.~\\cite{lfits}) under special conditions \nwhich fortunately hold in the large majority of \npractical cases (but it is better one knows \nwhen the conditions of validity hold \nand what to do when they do not!).} \n\nIn parametric inference all information contained in the observed \nspectrum is, to say, `distilled' into the model parameters.\nParametric inference, especially when the \nconditions to use least-squares methods\nhold, is usually the best and fastest way to proceed, \nif we have good reasons\nto believe the hypothesized family of functions.\n\nHowever, sometimes we wish\nto interpret the data as little as possible and just public \n`something equivalent' to an experimental distribution, \nwith the bin contents fluctuating according to an underlying \nmultinomial distribution, but having possibly got rid of physical\nand instrumental distortions, as well as of background.\nIn particle physics this second approach goes \nunder the name of {\\it unfolding}\n(or deconvolution, or restoration). \n\nSeveral years ago a simple algorithm \nbased on Bayes' theorem was presented ~\\cite{BU}\nwith which it was possible\nto achieve rather `good' results (`good'\ncompared with the difficulty of the task). \nThe main improvements presented here concern the handling \nof small numbers and the evaluation of the uncertainty on the \nunfolded distribution, while the guiding ideas \nand the basic assumptions are substantially unchanged. \nTo be more clear, \nthe algorithm of Ref.~\\cite{BU} was relying on \nthe underlying hypotheses of normality and `small relative errors':\n`best estimates' were provided, with uncertainty calculated\nfrom standard `error propagation' \nformulas.\\footnote{It is worth remembering that these formulas\nrely on an hypothesis of linearity between {\\it input} and \n{\\it output quantities}, an hypothesis that holds approximately, \nin non-strictly linear problems, if the relative uncertainties of \nthe input quantities are small enough, such that the \ndependence input$\\rightarrow$output can be locally\nlinearized~\\cite{asymmetric}.} \nThe new algorithm handles better small numbers, \nin the way it will be described in Sec.~\\ref{ss:improvements},\nand performs the propagation of uncertainty by sampling,\ni.e. by Monte Carlo (MC) integration.\nThe algorithm has been implemented in a R language~\\cite{R} code available \non the author's web page~\\cite{R-code}.\n\nThe paper is organized in the following way. \nSection \\ref{sec:BayesIntro} gives a short introduction to\nBayesian inference and to the specific application subject of this paper. \nThen the algorithm of Ref.~\\cite{BU} is reminded \nand the improvements are presented.\nThe issue of iterative use of the algorithm\nis also discussed, although the program now gives also the possibility\nto use priors, provided by the user, over all possible true\nspectra. \nThis option should avoid the\nneed of iterations (but I anticipate that it might be not that easy to \nmodel such priors and then the iterative strategy\nremains a pragmatic solution).\nFinally, some results on toy models are presented. \n\nSome technical issues concerning binomial and Dirichlet\ndistribution, including the use of the latter as prior\nconjugate of the former, are reminded in Appendix A. 
\nA second appendix is dedicated to the handling of the \nzeros occurring in the evaluation of the smearing matrix\n(a cause that produces no event in some effect bins, as it \nusually happens, depending also on the statistics of the\nMonte Carlo simulation). As it happens with intermediate \nsmoothing (or other kinds of regularization), this is\na suggestion of how the problem can be approached, whose\nsolution is delegated to the user, who is supposed to\nknow the physics case under investigation.\n\n\\section{Bayesian inference and Bayesian unfolding: \\\\\nfrom first principles to real life}\n\\label{sec:BayesIntro}\nThe so called Bayesian inference is a way to learn \nabout physical quantities and their relationships \n(all things we cannot directly see) \nfrom experimental data (what we can actually observe \nwith our senses, usually mediated by more or\nless sophisticated detectors)\nusing probability theory. This game\nis {\\it conceptually} rather\nsimple, proviso we accept that the intuitive meaning \nof probability -- a scale to rank our beliefs that \nseveral events might happen, or that several hypotheses \nmight be true -- is suitable in scientific reasoning\n(see Refs.~\\cite{BR} and \\cite{rpp} and references therein\nfor an introduction). \n\n\\begin{figure}[!t]\n\\begin{center}\n\\begin{tabular}{cc}\n\\epsfig{file=bn1.eps,clip=,width=0.37\\linewidth} &\n\\epsfig{file=bn3.eps,clip=,width=0.55\\linewidth} \\\\\n{\\large {\\it a)}} & {\\large {\\it b)}}\n\\end{tabular}\n\\end{center}\n\\caption{\\sl Bayesian network describing a fit model~\\cite{lfits}. \n$x_i$ and $y_i$ \nare the experimental observations, related to $\\mu_{x_i}$ and $\\mu_{y_i}$\nby experimental errors. Instead, a deterministic `law' connects \nthe `true' values \n$\\mu_{y_i}$ to $\\mu_{x_i}$ via the model parameters $\\mvec\\theta$\n(solid arrows stand for probabilistic links, dashed for deterministic).\nNetwork {\\it a)} describes a simple model with errors only on \nthe ordinate. Network {\\it b)} takes into account errors on both\naxes, extra variability of the data points around the believed\nphysical law and systematic effects.}\n\\label{fig:bn1}\n\\end{figure}\nThe starting point of a Bayesian inference is to build up a model \nfor the deterministic \n(``{\\it B follows from A}'')\nand the probabilistic \n(``{\\it B might follow from A}'')\nconnections\nthat relate the several entities that enter the problem.\nThis model has the interesting graphical representation\nof a network, usually called {\\it Bayesian network} or \n{\\it belief network} \n(to get an idea of their meaning and their application \n{\\it Google} these keywords, or browse the \n\\href{http:\/\/en.wikipedia.org\/wiki\/Bayesian_network}{\\it Wikipedia}). \nFor example, Fig.~\\ref{fig:bn1}, taken from \nRef.~\\cite{lfits}, shows a Bayesian network\nto model two physical quantities, $\\mu_x$ and $\\mu_y$,\nconnected to each other with a `law', whose parameters \nare denoted by $\\mvec\\theta$. In the \nvery elementary case depicted in the \n left hand network of Fig.~\\ref{fig:bn1}\n the solution to the problem can be rather simple, \nunder some assumptions that, fortunately, hold\nvery often in routine cases.\nBut, in general, the solution is not \nthat simple. 
Nevertheless, as stressed in Ref.~\\cite{lfits},\nafter one has built the graphical model, one is \noften more than half the way to the solution, \nthanks to the great progresses recently made in \nBayesian network computing.\n\n\n\\begin{figure}\n\\begin{center}\n\\epsfig{file=unfnet_t.eps,clip=,width=0.75\\linewidth}\n\\end{center}\n\\caption{{\\sl Probabilistic links from causes to effects. \nThe node indicated by $T$ (`trash') stand for the \ninefficiency bin and corresponds to $E_{n_E+1}$}}\n\\label{fig:unfnet}\n\\end{figure}\nThe essence of the Bayesian unfolding of Ref.~\\cite{BU},\nthat is the starting point also of this new version,\nis to make the problem discrete and to treat the `cause'\nbins as independent degrees of freedom, i.e. without \nconstraints among each other.\\footnote{An important \ndrawback of this feature will be discussed in section \\ref{sec:iter}.}\nFor this reason the algorithm\ncan virtually handle any kind of `smearing' and it is\neasily extensible, \nat least in principle and with the only limit due to computer power, \n to multidimensional\nproblems. In fact, the core of the algorithm only knows about \n`cause-cells' and `effect-cells',\\footnote{`Bin' \nand `cell' are used here as synonyms, although `cell' refers\nto a region on the configuration space and \n`bin' to histograms representing them.} \nbut it does not know the \nlocation of the cells in the configuration space of the problem. \nFor the same reason, the treatment of background, \nand even of several independent sources of background, \ncan be easily embodied in the algorithm\nby just adding extra cause-cells, one cell per source of background. \nAs a by-product, the algorithm also provides the number of events\nto be attributed to each source of background.\n(It is worth remembering that \nbackground might have an interesting physical meaning,\nand thus the estimation of the level `noise' \nmight provides indeed a physics measurement, \nas in the analysis of Ref.~\\cite{Raso2}.)\n\nGiven the discretization of the problem,\nthe Bayesian network relating \ncauses and effects is that shown in Fig.~\\ref{fig:unfnet},\nwhere we use the same notation of Ref.~\\cite{BU}, with the addition \nof the effect bin $T$ (`trash'), equivalent to $E_{n_E+1}$, \nto describe inefficiency (the reason to introduce this \nextra bin will become clear later).\n\nRephrasing the problem in probabilistic terms, the purpose of the \nunfolding is to find the `true' number of events \nin each cause bin [$\\#(C_i)$ in Fig.~\\ref{fig:bn0},\nindicated by $x(C_i)$ in the text],\n\\begin{figure}[!t]\n\\begin{center}\n\\vspace{0.7cm}\n\\begin{tabular}{c}\n\\epsfig{file=unfnet_canc.eps,clip=,width=0.9\\linewidth}\n\\end{tabular}\n\\end{center}\n\\caption{{\\small From the `true' distribution (numbers of events in the \ncause-bins) \nto the observed distribution (numbers of events in the \neffect-bins). The number of events $\\#(\\,)$ is indicated with \n$x(\\,)$ in the text.}}\n\\label{fig:bn0}\n\\end{figure}\ngiven the observed spectrum and assuming \nsome knowledge about the smearing.\n \nSince the links cause$\\rightarrow$effects have a probabilistic nature,\nit follows that also\nthe links effect$\\rightarrow$causes will be probabilistic, \nand therefore it will be\nuncertain the number of events\nto be attributed to the cause-cells. We can only attempt to rank in \nprobability all possible spectra that might have caused the observed\none. 
\nIn other words, the realistic goal of our analysis\nis not to determine the {\\it the} true spectrum, but rather to assess\n\\begin{eqnarray}\nP(\\mvec x_C \\,|\\,\\mvec x_E,\\,\\Lambda,\\,I)\\,,\n\\label{eq:goal}\n\\end{eqnarray}\nwhere:\\footnote{In Ref.~\\cite{BU} \n$\\mvec x_C$, $\\mvec x_E$ and $\\Lambda$ were respectively indicated\nby $\\underline{n}_C$, $\\underline{n}_E$ and $\\mvec S$.\nAs in Fig.~\\ref{fig:bn0} and in Ref.~\\cite{BU}, the symbols $i$\nand $j$ are used to index causes and effects, respectively, \nno matter if this choice leads to the unusual convention of \nindexing the $\\Lambda$ rows by $j$ and the columns by $i$, as in \nEq.~(\\ref{eq:Lambda}). \nNote that, when we refers to MC simulations\nto infer the smearing matrix, $\\mvec x_E$, we\n also need include the trash bin, as it will be reminded\nat the proper place.\nLater on [see Eq.~(\\ref{eq:lambda_i})] it will be convenient\nto name $\\mvec{\\lambda}_i$ the columns of the matrix $\\Lambda$. \nIn summary\n\\begin{eqnarray}\n\\Lambda &=& \\left(\\!\\!\n \\begin{array}{cccc}\n P(E_1\\,|\\,C_1,\\,I) & P(E_1\\,|\\,C_2,\\,I) & \\ldots & P(E_1\\,|\\,C_{n_c},\\,I) \\\\\n P(E_2\\,|\\,C_1,\\,I) & P(E_2\\,|\\,C_2,\\,I) & \\ldots & P(E_2\\,|\\,C_{n_c},\\,I) \\\\ \n \\ldots & \\ldots & \\ldots & \\ldots \\\\\n P(E_{n_E+1}\\,|\\,C_1,\\,I) & P(E_{n_E+1}\\,|\\,C_2,\\,I) \n & \\ldots & P(E_{n_E+1}\\,|\\,C_{n_c},\\,I)\n \\end{array}\n \\!\\!\\right) \\nonumber \\\\\n &=& (\\mvec{\\lambda}_1,\\,\\mvec{\\lambda}_2,\n \\ldots,\\mvec{\\lambda}_{n_C})\\,. \\nonumber\n\\end{eqnarray}\n}\n\\begin{itemize}\n\\item\n$\\mvec x_C = \\{x(C_1),\\, x(C_2),\\, \\ldots,\\, x(C_{n_C}) \\}$ \nis the number of events in each bin of the true distribution,\ni.e. {\\it a} true spectrum.\\footnote{For \nthe use of the indefinite article in conjunction with \n`true values', see the ISO \n{\\it ``Guide to the expression of uncertainty\nin measurement''}\\,\\cite{ISO}, \naccording to which {\\it a} true value is\n{\\it ``a value compatible with the definition\nof a given particular quantity.''} }\n\\item\n$\\mvec x_E = \\{x(E_1),\\, x(E_2),\\, \\ldots,\\, x(E_{n_E}) \\}$\nis the observed spectrum. \n\\item\n$\\Lambda$ stands for the smearing matrix, whose elements \n$\\lambda_{ji}$ (see remarks in footnote 4 concerning notation) are \ndefined in probabilistic terms as\n\\begin{eqnarray}\n\\lambda_{ji} &\\equiv& P(E_j\\,|\\,C_i,\\,I)\\,. \n\\label{eq:Lambda}\n\\end{eqnarray}\nThe knowledge of $\\Lambda$ comes usually from MC simulation and it is\ntherefore affected by an uncertainty, described,\n in general terms, by a pdf $f(\\Lambda\\,|\\,I)$.\n\\item\n$I$ stands for the state of information \nunder which the analysis is performed\n(this underlying condition is often implicit in the pdf's).\n\\end{itemize}\nOnce we have stated clearly and in probabilistic terms our question\n({\\it ``what is $P(\\mvec x_C \\,|\\,\\mvec x_E,\\,I)$?''}),\nprobability theory provides us the answer,\\footnote{Instead, \nthe idea of inverting the smearing matrix (assuming it square\nand not singular) is wrong in principle \n(i.e. besides the ascertainment \n that such a method yields unacceptable results). \nIn fact, unfolding\nis a probabilistic problem and not a \ndeterministic linear one (or, equivalently, a geometric problem \nof rotating vectors). 
Therefore it needs to be solved by probabilistic\ntools and not by linear algebra methods (`rotations').\n It is easy to show that\n`rotation' works only when we \nhave an `infinite' number of events, such that stochastic\neffects are negligible and the observations coincides with the \nexpectations. In fact, each product $\\lambda_{ji}\\,x(C_i)$ is nothing but \nthe expected number of events in the effect-cell $E_j$ due to cause \n$C_i$ alone: \n\\begin{eqnarray*}\n\\mbox{E}[\\left.x(E_j)\\right|_{x(C_i)}] &=& P(E_j\\,|\\,C_i,\\,I)\\,x(C_i) =\n \\lambda_{ji}\\,x(C_i) \n\\end{eqnarray*}\nSumming up the contributions from all cause-cells, we get the\nexpected value in the effect-cell $E_j$\n\\begin{eqnarray*}\n\\mbox{E}[\\left.x(E_j)\\right|_{\\mvec x_C}] &=& \n \\sum_i \\lambda_{ji}\\,x(C_i)\\,,\n\\end{eqnarray*}\nthat can be written in matrix form as\n\\begin{eqnarray*}\n\\mvec \\mu_E \\equiv \\mbox{E}[\\mvec x_E] &=& \\Lambda \\, \\mvec x_C\\,.\n\\end{eqnarray*}\nThen, if $\\Lambda$ is square and not singular, we get\n\\begin{eqnarray*}\n\\mvec x_C &=& \\Lambda^{-1}\\,\\mvec \\mu_E \\,.\n\\end{eqnarray*}\nBut this might be, at most, the solution of a text book exercise in \nmathematical probability theory, and does not help\nto solve real problems. \nThis is the reason why, besides\nthe fact that the matrix inversion gives notoriously \nbad results,\n{\\it the very idea \nof inverting the smearing matrix is logically\nflawed}: we can certainly apply $\\Lambda^{-1}$ to \na vector of numbers {\\it already known} to be sums \nof expected values of binomials, but we cannot apply\nit to a vector of numbers that {\\it might be} \n(we are not even sure of this, because some counts could be\ndue to background we do not take into account!) \nsums of binomial random variables. If we do it, there is no\nguarantee that $\\Lambda^{-1}\\mvec x_E$ yields a vector\nof `valid numbers' of the $n$-parameters of binomials \n(the question that they might have the meaning of \na physical spectrum for the problem under study is \nsecondary at this level) and, in fact, even negative numbers\ncan be obtained!\nIt follows that unfolding methods which\nuse the matrix inversion as starting point and try to\ncure its bad features with some kitchen are not appealing from \na theoretical point view, although they might even provide \nacceptable results because of `mysterious' reasons\nI do not want to enter into (cooks might be extremely clever!).\n}\nat least in principle. \nIn fact, \n\\begin{enumerate}\n\\item\nBayes' theorem allows to calculate \n$P(\\mvec x_C \\,|\\,\\mvec x_E,\\,\\Lambda,\\,I)$, given the observation \n$\\mvec x_E$ and the smearing matrix $\\Lambda$, as\n\\begin{eqnarray}\nP(\\mvec x_C \\,|\\,\\mvec x_E,\\,\\Lambda,\\,I) &=& \n\\frac{P(\\mvec x_E \\,|\\,\\mvec x_C,\\,\\Lambda,\\,I) \\cdot P(\\mvec x_C\\,|\\,I)}\n {\\sum_{\\mvec x_C}P(\\mvec x_E \\,|\\,\\mvec x_C,\\,\\Lambda,\\,I) \n \\cdot P(\\mvec x_C\\,|\\,I)}\n\\label{eq:bayes1}\n\\end{eqnarray}\n(the formula will be explained in a while).\n\\item\nWe can take into account of the uncertainty about $\\Lambda$\nusing another theorem of probability theory, namely\n\\begin{eqnarray}\nP(\\mvec x_C \\,|\\,\\mvec x_E,I) &=& \\int \nP(\\mvec x_C \\,|\\,\\mvec x_E,\\Lambda,\\,I)\\,\nf(\\Lambda\\,|\\,I)\\,\\mbox{d}\\Lambda\\,.\n\\label{eq:marg_lambda}\n\\end{eqnarray}\n\\end{enumerate}\nLet us now go through the several ingredients needed to get \n$P(\\mvec x_C \\,|\\,\\mvec x_E,I)$\nand try to understand \nwere the problems arise. 
\n\\begin{itemize}\n\\item\nFirst of all, it is easy to realize that the denominator of \nEq.~(\\ref{eq:bayes1}) is just a normalization factor, and then \nwe can rewrite the Bayes' formula in a way that \nhighlights the main ingredients:\n\\begin{eqnarray}\nP(\\mvec x_C \\,|\\,\\mvec x_E,\\,\\Lambda,\\,I) & \\propto & \nP(\\mvec x_E \\,|\\,\\mvec x_C,\\,\\Lambda,\\,I) \\cdot P(\\mvec x_C\\,|\\,I)\\,,\n\\label{eq:bayes2}\n\\end{eqnarray}\nwhere $P(\\mvec x_E \\,|\\,\\mvec x_C,\\,\\Lambda,\\,I)$ is the \nso called {\\it likelihood}\n and $P(\\mvec x_C\\,|\\,I)$ the \n {\\it prior}. The left hand side of the Bayes' formula takes\nthe name of {\\it posterior}. As we see, \nin probabilistic inference the\nlikelihoods have the role of updating probabilities. \n\\item \nAlthough the presence of priors in the formula might\ncause anxiety in those who approach this kind\nof reasoning for the first time,\n it is a matter of fact that: 1) priors are logically necessary \nto get $P(\\mvec x_C \\,|\\,\\mvec x_E,\\,\\Lambda,\\,I)$\nstarting from the likelihood, i.e. to \nperform the so-called `probability inversion';\n2) they allow to plug into the model all relevant prior information,\nthat might come from previous data or from theoretical\nprejudices; \n3) priors are often so vague (or there are so many data -- that \nis the same for the relative balance of prior and likelihood \nin the Bayes' formula) that they influence negligibly the posterior,\nand the inference is often dominated by the likelihood. \n\\item \nLet us assume this last case of prior vagueness applies. \nThis is equivalent to have \n\\begin{eqnarray}\nP(\\mvec x_C\\,|\\,I) & = & \\mbox{\\it constant}\n\\label{eq:flat_prior}\n\\end{eqnarray}\nand the inference is then performed according to the rule\n\\begin{eqnarray}\nP(\\mvec x_C \\,|\\,\\mvec x_E,\\,\\Lambda,\\,I) & \\propto & \nP(\\mvec x_E \\,|\\,\\mvec x_C,\\,\\Lambda,\\,I)\\,.\n\\label{eq:bayes3}\n\\end{eqnarray}\nIt is then not a surprise that the most probable spectrum \n$\\mvec x_C$ is the one which maximizes the likelihood, \n{\\it if} we have no other good reason to believe \nthat this is not the case.\n(This is the meaning of the expression ``recovering \nmaximum likelihood estimators'' from the Bayesian approach.) \n\\item\nIf we were able to write down a closed expression for the \nlikelihood, or at least to provide a simple \nalgorithmic expression of it,\nwe could somehow scan all possible spectra $\\mvec x_C$,\nfind the most probable one (or an `average' spectrum that\nsummarizes in some sense the variety of true spectra \ncompatible with the data)\n and assess somehow the uncertainty \nabout the result. \nBut, unfortunately, this is not the case\nwith $P(\\mvec x_E \\,|\\,\\mvec x_C,\\,\\Lambda,\\,I)$ \nof our interest. \nLet us see why.\nGiven a certain number of events in a cause-bin $x(C_i)$, \nthe number of events in the effect-bins,\nincluded the `trash' one, is described by a \n{\\it multinomial distribution} (see Appendix A.1):\n\\begin{eqnarray}\nP(\\mvec x_E\\,|\\,x(C_i), \\Lambda, I) &=& \n\\frac{x(C_i)!}{\\prod_j^{n_E+1} x(E_j)!}\\, \n\\prod_j^{n_E+1} \\lambda_{ji}^{x(E_j)}\\,.\n\\end{eqnarray}\nIt follows that $P(\\mvec x_E \\,|\\,\\mvec x_C,\\,\\Lambda,\\,I)$ \nis the sum of independent multinomial distributions, \nfor which, unfortunately, a closed formula does not \nexist.\\footnote{It is well know that the sum of two Poisson variables\nis still a Poissonian. 
Similarly, the sum of two binomial variables \nhaving {\\it the same} parameter `$p$' is still a binomial. \n(This `reproductive property' applies to few other distributions). \nBut the sum of two binomial variables with different $p$ \ndoes not have a closed expression \n(this can be understood either intuitively, just thinking to the\nmeaning of a binomial, or, more formally, analyzing\nthe product of the characteristic functions of each binomial). \nThe same happens with a multinomial,\nthat is just an extension of the binomial to more than two \npossible outcomes.} This is the real serious \ntechnical problem that prevents\na straight application of the Bayes' formula.\n\\item\nThe elements of the smearing matrix $\\Lambda$ are obtained\nby MC: we generate a large number of events in each cell $C_i$ \nand count `where they end' after a realistic simulation. \nIntuitively, we expect $\\lambda_{ji}\\approx x(E_j)^{MC}\/x(C_i)^{MC}$, \nbut we also know that this is just an estimate, with some \nuncertainty. Fortunately, in this case\nwe can make direct use of the Bayes' formula\napplied to MC events, if we model the prior using \na convenient pdf. \nIn fact, if we indicate by $\\mvec \\lambda_i$ the \n$i$-th column of $\\Lambda$ (see footnote 4), i.e. \n\\begin{eqnarray}\n\\mvec \\lambda_i = \\{\\lambda_{1,i},\\,\\lambda_{2,i},\\ldots,\\, \n\\lambda_{n_E+1,i}\\}\\,, \\label{eq:lambda_i}\n\\end{eqnarray}\nwe have\n\\begin{eqnarray}\nf[\\mvec \\lambda_i\\,|\\, \\mvec x_E^{MC},\\,x(C_i)^{MC},\\, I] \n&\\propto& P[\\mvec x_E^{MC}\\,|\\,x(C_i)^{MC}, \\, \\mvec \\lambda_i,\\, I]\\cdot \nf(\\mvec \\lambda_i\\,|\\,I)\\,. \\nonumber \\\\\n&& \n\\end{eqnarray}\nSince $ P[\\mvec x_E^{MC}\\,|\\,x(C_i)^{MC}, \\, \\mvec \\lambda_i,\\, I]$\nis a multinomial,\nchoosing a Dirichlet prior\n(an extension of the Beta distribution to many dimensions \n-- see Appendix A.2),\nwe get a Dirichlet posterior (Appendix A.3), i.e.\n\\begin{eqnarray} \n\\mvec \\lambda_i & \\sim & \\mbox{Dir}(\\mvec\\alpha_{posterior_i})\\,,\n\\end{eqnarray}\nwith\n\\begin{eqnarray}\n\\mvec\\alpha_{posterior_i} &=& \\mvec\\alpha_{prior_i} + \n\\left.\\mvec x_E^{MC}\\right|_{x(C_i)}\\,.\\label{eq:update_alpha}\n\\end{eqnarray}\n(Hereafter the symbols `$\\sim$' stands for `follows the probability\ndistribution`.)\n\\item\nFinally there is the integral (\\ref{eq:marg_lambda}), which\nis in principle not a problem \n(for example, thanks to modern technologies, it can easily performed \nby MC methods). \n\\end{itemize}\nIn conclusion, the main serious problem is \n$P(\\mvec x_C \\,|\\,\\mvec x_E,\\Lambda,\\,I)$. Thus\nbeing unable to tackle the problem from the main door, \nwe need some `tricks' to circumvent the obstacle, but still under\nthe guidance of probability theory.\n\n\\section{Practical algorithm to perform an \nindependent-bin Bayesian unfolding}\nThe basic trick to avoid the mentioned difficulty is to apply\nBayes' theorem to causes and effects, instead than to \ntrue and observed spectrum. 
In practice, instead of using\nEq.~(\\ref{eq:bayes1}) we start from\n\\begin{eqnarray}\nP(C_i\\,|E_j,\\,I) &=& \\frac{P(E_j\\,|\\,C_i,\\,I)\\cdot P(C_i\\,|\\,I)}\n {\\sum_i P(E_j\\,|\\,C_i,\\,I)\\cdot P(C_i\\,|\\,I)}\\,,\n\\label{eq:bayes_CE} \n\\end{eqnarray}\nor\n\\begin{eqnarray}\n\\theta_{ij}&=& \\frac{\\lambda_{ji}\\cdot P(C_i\\,|\\,I)}\n {\\sum_i \\lambda_{ji}\\cdot P(C_i\\,|\\,I)}\\,, \n\\label{eq:bayes_CE_theta}\n\\end{eqnarray}\nhaving defined $\\theta_{ij} \\equiv P(C_i\\,|E_j,\\,I)$\nin analogy to $\\lambda_{ji} \\equiv P(E_j\\,|\\,C_i,\\,I)$\\,.\n\nAt this point a very important remark is in order. \nThe prior in Eqs.~(\\ref{eq:bayes_CE})-(\\ref{eq:bayes_CE_theta}) \nhas a different meaning from that of Eq.~(\\ref{eq:bayes1}). \nIn Eq.~(\\ref{eq:bayes1}) $P(\\mvec x_C\\,|\\,I)$ assigns different probabilities\nto all possible spectra. Instead,\nin Eqs.~(\\ref{eq:bayes_CE})-(\\ref{eq:bayes_CE_theta}) \n $P(C_i\\,|\\,I)$ stands for \n{\\it a single spectrum} (more precisely, all spectra that \ndiffer from each other just by normalization). This can be better\nunderstood by analyzing the meaning of `uniform' (or `flat') \nreferred to $P(\\mvec x_C\\,|\\,I)$ and to $P(C_i\\,|\\,I)$.\n\\begin{itemize}\n\\item \n $P(\\mvec x_C\\,|\\,I) = \\mbox{\\it constant}$ means \n{\\it all spectra} are equally likely.\n\\item\n $P(C_i\\,|\\,I) = \\mbox{\\it constant}$ means we consider \nthe cause bins equally\nlikely, i.e. the prior expresses an initial belief in a {\\it flat spectrum}. \n\\end{itemize}\nIn other words, while a flat $P(\\mvec x_C\\,|\\,I)$\nmeans indifference about all possible spectra in order \nto `let the data speak by themselves', \na flat $P(C_i\\,|\\,I)$ is a strong assumption that usually does \nnot correspond to our priors concerning the physics case.\nThis implies that we have to tune somehow the algorithm in order\nto take this gross approximation into account. \nWe will come back to this issue in Sec.~\\ref{sec:iter}.\n\nAt this point, having evaluated $P(C_i\\,|\\,E_j,\\,I)$, \nwe can use it to share the counts observed\nin each effect-bin among all cause-bins\n(see Fig.~\\ref{fig:bn2}).\n\\begin{figure}[t]\n\\begin{center}\n\\begin{tabular}{c}\n\\epsfig{file=unfnet_xi.eps,clip=,width=0.9\\linewidth}\n\\end{tabular}\n\\end{center}\n\\caption{{\\sl Sharing counts observed in effect-cells\namong cause-cells according to $\\theta_{ij} = P(C_i\\,|\\,E_j,\\,I)$.}}\n\\label{fig:bn2}\n\\end{figure}\n The estimate of the true spectrum \nis obtained repeating this sharing for all observed bins\nand taking into account inefficiency. An uncertainty on the\nunfolded spectrum is also evaluated. 
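To fix ideas, the following R code gives a minimal numerical sketch of Eq.~(\\ref{eq:bayes_CE_theta}) and of the sharing of Fig.~\\ref{fig:bn2} (three cause-bins, three effect-bins, a made-up smearing matrix, a flat prior spectrum, no background and no treatment of uncertainties; the numbers are purely illustrative and the snippet is not taken from the program of Ref.~\\cite{R-code}):\n{\\small\n\\begin{verbatim}\n## made-up smearing matrix, lambda[j,i] = P(E_j | C_i);\n## each column sums to less than 1, the rest being the inefficiency\nlambda <- matrix(c(0.70, 0.15, 0.05,\n                   0.20, 0.60, 0.20,\n                   0.05, 0.15, 0.70),\n                 nrow = 3, byrow = TRUE)\neff <- colSums(lambda)     # efficiencies epsilon_i\np0  <- rep(1/3, 3)         # flat prior spectrum P(C_i | I)\nx.E <- c(130, 190, 80)     # observed counts x(E_j), made up\n\ntheta <- matrix(0, 3, 3)   # theta[i,j] = P(C_i | E_j)\nfor (j in 1:3) {\n  w <- lambda[j, ] * p0    # numerator of Eq. (bayes_CE_theta)\n  theta[, j] <- w / sum(w)\n}\n\nx.C.assigned <- as.vector(theta %*% x.E)  # sharing of Fig. bn2\nx.C <- x.C.assigned / eff                 # inefficiency correction\nround(x.C, 1)\n\\end{verbatim}\n}\nIn the old algorithm this sharing is applied directly to the observed counts, as in this sketch, while in the improved version described below all uncertain quantities entering it are sampled.\n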
Let us see how all this\nwas done in the old algorithm and how it has been improved.\n\n\\subsection{Old algorithm (Ref.~\\cite{BU})}\nIn the old algorithm the share of observed events among the causes\nwas done only considering\nexpectations and applying an inefficiency correction,\ngoing through the following steps:\n\\begin{itemize}\n\\item\nnumber of counts in $C_i$ due to the observation in $E_j$: \n\\begin{eqnarray}\n\\left.x(C_i)\\right|_{x(E_j)} & \\approx & P(C_i\\,|E_j,\\,I) \\cdot x(E_j) = \n \\theta_{ij}\\cdot x(E_j)\\,; \\hspace{1.0cm}\n\\end{eqnarray}\n\\item\nnumber of counts in $C_i$ due to all observations: \n\\begin{eqnarray}\n\\left.x(C_i)\\right|_{\\mvec x_E} & \\approx & \n \\sum_{j=1}^{n_E} P(C_i\\,|E_j,\\,I) \\cdot x(E_j) = \n \\sum_{j=1}^{n_E}\\theta_{ij}\\cdot x(E_j)\n\\,; \\hspace{0.5cm}\n\\end{eqnarray}\n\\item\nnumber of counts in $C_i$ that also takes into account efficiency\n($\\epsilon_i$):\n\\begin{eqnarray}\nx(C_i) & \\approx &\\frac{1}{\\epsilon_i}\\, \\left.x(C_i)\\right|_{\\mvec x_E} \n = \\frac{1}{\\epsilon_i}\\, \n \\sum_{j=1}^{n_E} P(C_i\\,|E_j,\\,I) \\cdot x(E_j) \n\\label{eq:old_estimate}\n\\\\\n&& \\hspace{2.2cm}= \n \\frac{1}{\\epsilon_i} \\sum_{j=1}^{n_E}\\theta_{ij}\\cdot x(E_j)\n\\end{eqnarray}\nwhere\n\\begin{eqnarray}\n\\epsilon_i &=& \\sum_{j=1}^{n_E} P(E_j\\,|\\,C_i,\\,I) = \n \\sum_{j=1}^{n_E} \\lambda_{ji} =\n 1 - \\lambda_{n_E+1,\\,i} \\,. \\label{eq:def_eps}\n\\end{eqnarray}\n\\item\nFinally, we have to remember that the several $\\lambda_{ij}$ were\nestimated by MC simulation as \n\\begin{eqnarray}\n\\lambda_{ji}\\approx x(E_j)^{MC}\/x(C_i)^{MC}\\,,\n\\end{eqnarray}\nfrom which $\\epsilon_i $ and $\\theta_{ij}$ are calculated according\nto Eqs. (\\ref{eq:def_eps}) and (\\ref{eq:bayes_CE_theta}).\n\\end{itemize}\nEssentially, $x(C_i)$ of Eq.~(\\ref{eq:old_estimate}) was considered \nan `estimator', whose `error' was calculated by standard\n`error propagation' (this was a pragmatic compromise between `standard\nmethods' I was accustomed at that time \nand the Bayesian approach I was in the process of learning\n -- anyhow, not bad to begin, and not \nworst than other methods.)\n\nAt this point, there is still open the issue of the inappropriate priors,\nthat we have encountered \nat the beginning of this section. \nSince this question remains in the improved algorithm, \nit will be discussed later in Sec.~\\ref{sec:iter}. \n\n\n\\subsection{Improvements}\\label{ss:improvements}\nAs it is easy to understand from the previous\ndescription, weak points of the \nold algorithm are the treatment of small numbers and \nthe calculation of uncertainty by `error propagation\nformulas'. \nLet us see how we can improve them. \n\\begin{itemize}\n\\item \nInstead of just `estimate' $\\lambda_{ij}$ as \n$x(E_j)^{MC}\/x(C_i)^{MC}$, we can model their \npdf by a Dirichlet (see Appendix A): \n\\begin{eqnarray} \n\\mvec \\lambda_i & \\sim & \\mbox{Dir}[\\mvec \\alpha_{prior}\\, + \\, \n \\left.\\mvec x_E^{MC}\\right|_{x(C_i)^{MC}}]\\,,\n\\label{eq:lambda_i_dir}\n\\end{eqnarray}\nwhere \n\\begin{itemize}\n\\item \n$\\mvec \\alpha_{prior}$ is the initial set of Dirichlet \nparameters -- typically unitary for uniform priors \n (see Appendix B for details); \n\\item\n$\\left.\\mvec x_E^{MC}\\right|_{x(C_i)^{MC}}$\nstands for the numbers of MC counts generated in $C_i$ \nand which end in each of \n{\\it all} possible $n_E+1$ `effects' \n(i.e. 
we also have to consider the inefficiency bin --\nsee Fig.~\\ref{fig:unfnet} -- in order to satisfy the condition \n$\\sum_{j=1}^{n_E+1}\\lambda_{ji} = 1$\n of the Dirichlet variables).\n\\end{itemize}\n\\item\nThe uncertainty about $\\Lambda$ is taken into account by {\\it sampling}\nits values, which is equivalent to performing the integral \n(\\ref{eq:marg_lambda}).\\footnote{Note that if we are in doubt\nabout several smearing matrices, because e.g. they are obtained\nfrom different parameters of the simulation, the \n(`systematic') effect of this uncertainty can be taken\ninto account by sampling from the different $\\Lambda$'s, with \nweights depending on our confidence in each of them.} \nTherefore, for each sampling, \n \\begin{itemize}\n \\item the smearing matrix elements $\\lambda_{ji}$ are extracted \n according to a Dirichlet distribution;\n \\item the efficiencies $\\epsilon_i = \\sum_{j=1}^{n_E}\\lambda_{ji}$ \n are calculated;\n \\item the inverse probabilities $\\theta_{ij}$ are obtained \n from the Bayes formula. \n \\end{itemize}\n\\item \nThe sharing of the number of counts $x(E_j)$ observed in an effect-cell\namong all cause-bins is performed using a \nmultinomial distribution (see Appendix A.1):\n\\begin{eqnarray} \n\\left.\\mvec x_C\\right|_{x(E_j)} & \\sim & \n\\mbox{Mult}[x(E_j),\\,\\mvec \\theta_j]\\,,\n\\label{eq:extract_xC}\n\\end{eqnarray}\nwith $\\mvec \\theta_j = \\{\\theta_{1,j},\\,\n\\theta_{2,j},\\,\\ldots,\\,\\theta_{n_C,j}\\}$.\n\\item\nThe contributions of all {\\it observed} \neffect-bins are summed up, i.e. \n \\begin{eqnarray} \n\\left.\\mvec x_C\\right|_{\\mvec x_E} \n& = & \\sum_{j=1}^{n_E} \\left.\\mvec x_C\\right|_{x(E_j)}\\,.\n\\label{eq:x_C_obs}\n\\end{eqnarray}\nInefficiencies are taken into account as in the old algorithm:\n \\begin{eqnarray} \n \\mvec x_C\n & = & \\frac{\\left.\\mvec x_C\\right|_{\\mvec x_E}}\n {\\mvec \\epsilon }\\,,\n \\label{eq:x_C_mu_eps}\n \\end{eqnarray}\n where the ratio on the r.h.s. is taken component by component. \n\\end{itemize}\nHowever, we need to remember that the observed number of events\nin each effect bin, $x(E_j)$,\n comes from a Poisson distribution with unknown \nparameter $\\mu_j$, whose inference can be conveniently performed\nstarting from a conjugate gamma distribution (see e.g. Ref.~\\cite{BR}).\nIt follows that \n\\begin{eqnarray}\n\\mu_j & \\sim & \\mbox{Gamma}[c_j+x(E_j),\\, r_j+1]\\,,\n\\label{eq:mu_j_Gamma}\n\\end{eqnarray}\nwhere $c_j$ and $r_j$ are the initial parameters of the gamma. \n[The case of a flat prior corresponds to \n$c_j=1$ and $r_j\\rightarrow 0$, i.e. an exponential with \nvanishing rate.] \n\nTherefore, what should be shared among \nthe cause-bins is not $x(E_j)$ but \n$\\mu_j$, extracted in each sampling \naccording to Eq.~(\\ref{eq:mu_j_Gamma}). 
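\nTo give an idea of how these ingredients combine, here is a schematic R sketch of a {\\it single} sampling in which, for simplicity, the observed (integer) counts $x(E_j)$ are still used as multinomial trials; the little trick needed to share the sampled $\\mu_j$ instead is described next. The objects {\\tt xE}, {\\tt alphaMC}, {\\tt nC} and {\\tt prior} are assumed to be set up by the user ({\\tt alphaMC} containing, for each cause bin, the Dirichlet parameters $\\mvec \\alpha_{prior}+\\left.\\mvec x_E^{MC}\\right|_{x(C_i)^{MC}}$ over the $n_E+1$ effect bins); the sketch only illustrates the logic and is not meant to replace the published code.\n\\begin{verbatim}\n## one sampling of the unfolded spectrum (illustrative sketch only)\n## alphaMC: (nE+1) x nC matrix of Dirichlet parameters (incl. ineff. bin)\n## xE: observed counts in the nE effect bins;  prior: P(C_i|I)\nrdir1  <- function(a) { g <- rgamma(length(a), a); g/sum(g) }\nlambda <- apply(alphaMC, 2, rdir1)              # lambda[j,i], j = 1..nE+1\nnE     <- length(xE)\neff    <- colSums(lambda[1:nE, , drop = FALSE]) # efficiencies epsilon_i\nnum    <- t(lambda[1:nE, , drop = FALSE]) * prior\ntheta  <- t( t(num) / colSums(num) )            # theta[i,j] = P(C_i|E_j,I)\nxC     <- rep(0, nC)\nfor (j in 1:nE) xC <- xC + rmultinom(1, xE[j], theta[, j])[, 1]\nxC     <- xC / eff                              # inefficiency correction\n\\end{verbatim}\n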
\nHowever, $\\mu_j$ is a continuous variable and therefore\ncannot be used \nas the `trials parameter' of a multinomial distribution.\nNevertheless,\nfractional events can indeed be shared by making use of a little trick:\n \\begin{itemize}\n \\item \n $\\mu_j$ is extracted according to Eq.~(\\ref{eq:mu_j_Gamma}) and \n it is rounded to the closest {\\it positive} integer, \n which we indicate here by $m_j$: \n $m_j$ will be the number of trials used in the multinomial random\n generator.\n \\item \n $\\left.\\mvec x_C\\right|_{m_j}$ is extracted and then \n rescaled by the factor $\\mu_j\/m_j$, i.e.\n \\begin{eqnarray}\n \\left.\\mvec x_C\\right|_{m_j} & \\sim & \n \\mbox{Mult}(m_j,\\,\\mvec \\theta_j)\\,, \\\\\n \\left.\\mvec x_C\\right|_{\\mu_j} &=& \\frac{\\mu_j}{m_j}\\,\n \\left.\\mvec x_C\\right|_{m_j}\\,.\n \\end{eqnarray}\n\\end{itemize}\nThis means that Eq.~(\\ref{eq:x_C_obs}) has to be replaced by\n\\begin{eqnarray} \n\\left.\\mvec x_C\\right|_{\\mvec x_E} \n& = & \\sum_{j=1}^{n_E} \\left.\\mvec x_C\\right|_{\\mu_j}\\,;\n\\label{eq:x_C_mu}\n\\end{eqnarray}\nthen the inefficiency corrections are applied as usual. \n\nEach sampling provides a spectrum $\\mvec x_C^{(t)}$, where \n$t$ is the `time' index of the sampling. After $N$ samplings\nwe can calculate an average spectrum, the variance \nof each cause-bin, the covariances\namong bins, and any other statistical summary; or we can inspect \neach $x(C_i)$ graphically.\n\nThis algorithm has been implemented in the R language\\,\\cite{R}\nand it is freely available on the web~\\cite{R-code}. \nThe R language has been chosen\nbecause it is a powerful \nscripting language,\\footnote{I take the opportunity to make a point\nabout the teaching of scripting languages, and in particular of\n$R$, in the physics courses. Surely physics students might need to learn \nC (and perhaps C++) at some point, but it is a matter of fact that writing \nsome C code to solve little\/medium problems that occur every day,\nand that might also need some graphics,\nis a pain, not only for students. The consequence is that\nstudents usually do not write little programs in C to solve\nthe problems they meet in the general physics or \nlaboratory courses, continuing to use, instead, spread-sheets,\nlearned in high school (and forget real programming until they\nneed it for their theses, if they ever will).\nIn my opinion, teaching a scripting language\nlike R, perhaps in parallel to C, would be a good solution.\n(But if I really had\nto choose, I would opt for R as the language to begin with.)\n(Personally, since I have discovered scripting languages,\nI use C only for tough professional tasks. \nIndeed, I have even almost abandoned {\\it Mathematica},\nunless I really need symbolic calculation.)\n}\nopen source, multi-platform, oriented towards statistics\napplications, well \nmaintained and with tons of contributed extension packages.\nIn practice, a kind of programming {\\it lingua franca}. \n\nIn order to test the \nalgorithm, a simple event generator has also been included, \nwhich reproduces the toy models of Ref.~\\cite{BU}. \nSome results will be presented in Sec.~\\ref{sec:results}.\n\n\n\\section{Iterations and smoothing}\\label{sec:iter}\nLet us now go back to the issue of the priors, \nwhich we have left open since the beginning of the \nprevious section. The crucial thing to understand \n is that, as we stated above, \n{\\it instead of using a prior flat over the possible spectra, \nwe are using a particular, flat spectrum as prior}. 
Therefore, \nthe posterior [i.e. the ensemble of $\\mvec x_C^{(t)}$\nobtained by sampling] is affected by this quite \nstrong assumption, which seldom holds in real cases.\n\nObviously, the first idea a practical physicist has,\nin order to fix the problem, is to do some {\\it fine tuning}.\nIn fact, simulations on toy models show that\nthe unfolded distribution reproduces the true one rather well, \neven with a flat spectrum as prior.\nHowever, it still `remembers the flat prior'. \nThis effect can be cured by iterating the procedure, using the posterior\nas prior in a subsequent unfolding. \nEmpirically we learn that, in `normal' cases, just \ntwo or three steps are sufficient to recover the true spectrum\nquite accurately.\n\nAs has been discussed at length in Ref.~\\cite{BU},\nwe cannot repeat the iterations for a very long time,\notherwise there will be a kind of positive feedback, driven \nby fluctuations, that makes the asymptotic\nunfolded spectrum `crazy' (it simply means that it is\nfar away from all reasonable expectations). Again, \nas we understand it, it is a question of \njudging the outcome by our physics priors,\\footnote{Good \nphysicists do have priors and always use them! \n(Only the perfect idiot has no priors.)}\nwhich, although rather vague, tell us that, given \nthe kind of measurement, a spectrum with wild oscillations \nis far from our beliefs. This kind of problem also occurs with\nother kinds of unfolding algorithms\nand it is usually cured by {\\it regularization}.\n\nRegularization is based on the subjective scientific prejudice \nthat `wild' distributions are not physical\n(and, as the reader knows or might imagine, this very reasonable \nsubjective ingredient is unavoidably also\nembedded in all non-Bayesian methods).\nIn practice, regularization can be implemented by constraining\nadjacent bins not to be `too discontinuous', for example by limiting\nthe absolute value of the derivatives calculated numerically\non the unfolded spectrum.\nOr one might fit the spectrum with some orthonormal functions, \nbut damping `high frequencies'. And so on. \n\nI do not have a strong opinion on the matter. My \nrecommendation is that one should know what one is doing.\nMy preferred regularization method \nfor one-dimensional problems consists \nin smoothing the posterior before injecting it \nas prior of the next iteration \n(but not after the last step!).\nThis smoothing can be performed by a low order polynomial fit,\nas implemented in the demo scripts accompanying the program. \nI find this technique quite simple, well consistent with \nthe spirit of this unfolding method and with the reasons \nto go through the iterations. \nMoreover, it is easy to understand that the \nprocedure unfolding-smoothing-unfolding converges rapidly. \n\nIn conclusion, \nalthough the idea of iteration does not seem very\nBayesian (we cannot squeeze the same data twice!),\nit is in fact just an adaptive way to recover what \ncould have been obtained by a more reasonable uniform \nprior over the possible spectra. (Stated in other words,\nit would be non-Bayesian to stick to the \nfirst iteration, obtained from a prior that rarely\ncorresponds to physics spectra!) For this reason the \nidea of iteration has been kept, although some effort has \nbeen made to get rid of it. 
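\nSchematically, one unfolding--smoothing--unfolding cycle can be written as follows. This is a sketch only: {\\tt unfold()} stands for one pass of the sampling procedure of Sec.~\\ref{ss:improvements} (using the same illustrative objects {\\tt xE} and {\\tt alphaMC} as in the sketch given there) and is {\\it not} the name of a function of the published code; the smoothing is a low order polynomial fit of the bin contents, whose degree is of course part of the subjective regularization choice discussed above.\n\\begin{verbatim}\n## prior0: initial (typically flat) spectrum;  n.iter: number of iterations\nprior <- prior0\nfor (it in 1:n.iter) {\n  xC <- unfold(xE, alphaMC, prior)     # one pass of the sampling above\n  if (it < n.iter) {                   # smooth all steps but the last one\n    bins  <- seq_along(xC)\n    fit   <- lm(xC ~ poly(bins, 3))    # low order polynomial fit\n    prior <- pmax(fitted(fit), 0)      # smoothed posterior -> next prior\n    prior <- prior / sum(prior)\n  }\n}\n\\end{verbatim}\n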
\nIn fact, in the new code the user can,\ninstead of providing a (typically flat) spectrum as prior, \nset up a function [\\,{\\tt priorf()}\\,] \nthat returns a normalized spectrum \nchosen at random according to a probability distribution\ndescribing prior knowledge.\nIn this way, in each sampling also the prior is sampled. \nHowever, I anticipate that implementing the function {\\tt priorf()}\nmight not be that trivial, and iterative procedure still\nremains a pragmatic compromise \nto slalom among the difficulties. \n\n\\section{Results on toy models}\\label{sec:results}\nThe program has been checked with the same toy smearing \nmatrices used in Ref.~\\cite{BU}, reproduced here in \nTab.~\\ref{tab:smearing}. \n\\begin{table}\n{\\small \n\\begin{center}\n\\begin{tabular}{|c|cccccccccc|}\n\\multicolumn{11}{c}{Smearing 1} \\\\\n\\multicolumn{11}{c}{} \\\\ \n\\hline\n& $C_1$& $C_2$& $C_3$& $C_4$& $C_5$& $C_6$& $C_7$& $C_8$& $C_9$& $C_{10}$ \\\\ \n\\hline\n$E_1$& 0.010& 0& 0& 0& 0& 0& 0& 0& 0& 0 \\\\\n$E_2$& 0.025& 0& 0& 0& 0& 0& 0& 0& 0& 0 \\\\\n$E_3$& 0.040& 0& 0& 0& 0& 0& 0& 0& 0& 0 \\\\\n$E_4$& 0.025& 0.050& 0& 0& 0& 0& 0& 0& 0& 0 \\\\\n$E_5$& 0& 0.100& 0& 0& 0& 0& 0& 0& 0& 0 \\\\\n$E_6$& 0& 0.050& 0.120& 0& 0& 0& 0& 0& 0& 0 \\\\\n$E_7$& 0& 0& 0.120& 0& 0& 0& 0& 0& 0 & 0 \\\\\n$E_8$& 0& 0& 0.060& 0.160& 0.025& 0& 0& 0& 0& 0 \\\\\n$E_9$& 0& 0& 0& 0.160& 0.100& 0.120& 0.140& 0& 0& 0\\\\\n$E_{10}$ & 0& 0& 0& 0.080& 0.250& 0.360& 0.280& 0.200& 0& 0 \\\\\n$E_{11}$ & 0& 0& 0& 0& 0.125& 0.120& 0.280& 0.400& 0.225& 0 \\\\\n$E_{12}$ & 0& 0& 0& 0& 0& 0& 0& 0.200& 0.450& 0.250 \\\\\n$E_{13}$ & 0& 0& 0& 0& 0& 0& 0& 0& 0.225& 0.500 \\\\\n$E_{14}$ & 0& 0& 0& 0& 0& 0& 0& 0& 0& 0.250 \\\\\n\\hline\n$\\epsilon$ & {\\it 0.1} & {\\it 0.2} & {\\it 0.3}& {\\it 0.4} & {\\it 0.5} &\n {\\it 0.6} & {\\it 0.7} & {\\it 0.8}& {\\it 0.9} & {\\it 1}\\\\ \n\\hline\n\\multicolumn{11}{c}{} \\\\ \n\\multicolumn{11}{c}{} \\\\ \n\\multicolumn{11}{c}{} \\\\ \n\\multicolumn{11}{c}{Smearing 2} \\\\\n\\multicolumn{11}{c}{} \\\\ \n\\hline\n& $C_1$& $C_2$& $C_3$& $C_4$& $C_5$& $C_6$& $C_7$& $C_8$& $C_9$& $C_{10}$ \\\\ \n\\hline\n$E_1$& 0.045& 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0.225 \\\\\n$E_2$& 0.090& 0.054 & 0 & 0 & 0 & 0 & 0 & 0 & 0.225 & 0.405 \\\\\n$E_3$& 0.045& 0.072 & 0.045 & 0 & 0 & 0 & 0 & 0.225& 0.360& 0.225 \\\\\n$E_4$& 0 & 0.054& 0.045& 0.045& 0.225& 0.180& 0.360& 0.405 &0.225 & 0 \\\\\n$E_5$& 0& 0& 0& 0.180& 0.360& 0.450& 0.315& 0.225& 0& 0 \\\\\n$E_6$& 0& 0& 0& 0.315& 0.180& 0.180 &0.180 &0 &0.045 &0 \\\\\n$E_7$& 0& 0& 0.135& 0.270& 0.045& 0& 0& 0.045& 0& 0 \\\\\n$E_8$& 0& 0.045& 0.270& 0.045& 0& 0& 0.045& 0& 0.045& 0 \\\\\n$E_9$& 0& 0.135& 0.270& 0& 0.045& 0.045& 0& 0& 0 &0.045 \\\\\n$E_{10}$& 0& 0.360& 0.045& 0& 0.045& 0.045& 0&0& 0& 0 \\\\\n$E_{11}$ & 0.180& 0.180& 0& 0& 0& 0& 0& 0& 0& 0 \\\\\n$E_{12}$ & 0.270& 0& 0& 0.045& 0& 0& 0& 0& 0& 0 \\\\\n$E_{13}$ & 0.270& 0& 0.045& 0& 0& 0& 0& 0& 0& 0 \\\\\n$E_{14}$ & 0.090& 0& 0.045& 0& 0& 0& 0& 0& 0& 0 \\\\\n\\hline\n$\\epsilon$ & {\\it 0.9} & {\\it 0.9} & {\\it 0.9}& {\\it 0.9} & {\\it 0.9} & \n {\\it 0.9} & {\\it 0.9} & {\\it 0.9}& {\\it 0.9} & {\\it 0.9}\\\\ \n\\hline\n\\end{tabular}\n\\end{center}\n\n\\caption{{\\sl Smearing matrices $P(E_j\\,|\\,C_i)$ of the toy models}}\n\\label{tab:smearing}\n\\end{table}\nThe results of the simulations are given in Fig.~\\ref{fig:toy1}. 
\n\\begin{figure}\n\\begin{center}\n\\epsfig{file=demo3_last_loop_sm1_df1_r1.eps,bb=40 240 555 590,clip=,width=0.47\\linewidth} \n\\hspace{0.5cm}\n\\epsfig{file=demo3_last_loop_sm2_df1_r1.eps,bb=40 240 555 590,clip=,width=0.47\\linewidth} \\\\\n\\epsfig{file=demo3_last_loop_sm1_df2_r1.eps,bb=40 240 555 590,clip=,width=0.47\\linewidth} \n\\hspace{0.5cm}\n\\epsfig{file=demo3_last_loop_sm2_df2_r1.eps,bb=40 240 555 590,clip=,width=0.47\\linewidth} \\\\\n\\epsfig{file=demo3_last_loop_sm1_df3_r1.eps,bb=40 240 555 590,clip=,width=0.47\\linewidth} \n\\hspace{0.5cm}\n\\epsfig{file=demo3_last_loop_sm2_df3_r1.eps,bb=40 240 555 590,clip=,width=0.47\\linewidth} \\\\\n\\epsfig{file=demo3_last_loop_sm1_df4_r1.eps,bb=40 240 555 590,clip=,width=0.47\\linewidth} \n\\hspace{0.5cm}\n\\epsfig{file=demo3_last_loop_sm2_df4_r1.eps,bb=40 240 555 590,clip=,width=0.47\\linewidth} \n\\end{center}\n\\caption{\\sl Result of unfolding applied to four toy models \n({\\rm data.func} parameter from 1 to 4 in the demo program) and two different \nsmearing matrices (left and right figures correspond, respectively,\nto `Smearing 1' and `Smearing 2' of Tab.~\\ref{tab:smearing}.).\n}\n\\label{fig:toy1}\n\\end{figure}\nThe generated \ndistributions are shown in black.\nThe `measured' distributions (in red) \nlook completely different from the `true', \ndue to the very severe smearing matrices used. \nThe figures also show \nthe results after the first iteration\n(light blue) and the intermediate ones \n(yellow). Note that, although in all simulations\ntwenty iterations have been performed, only a few intermediate\niterations are visible, because the others overlap with the\nfinal step. This allows you to get a feeling \nabout the speed of convergence of the algorithm, depending \non the difficulty of the problem\n -- let us\nremind that, thanks to the intermediate smoothing the algorithm\n{\\it does} converge.\n\nOther independent simulations are reported in Figs.~\\ref{fig:toy2} and \n\\ref{fig:toy3}.\n\\begin{figure}\n\\begin{center}\n\\epsfig{file=demo3_last_loop_sm1_df1_r2.eps,bb=40 240 555 590,clip=,width=0.47\\linewidth} \n\\hspace{0.5cm}\n\\epsfig{file=demo3_last_loop_sm2_df1_r2.eps,bb=40 240 555 590,clip=,width=0.47\\linewidth} \\\\\n\\epsfig{file=demo3_last_loop_sm1_df2_r2.eps,bb=40 240 555 590,clip=,width=0.47\\linewidth} \n\\hspace{0.5cm}\n\\epsfig{file=demo3_last_loop_sm2_df2_r2.eps,bb=40 240 555 590,clip=,width=0.47\\linewidth} \\\\\n\\epsfig{file=demo3_last_loop_sm1_df3_r2.eps,bb=40 240 555 590,clip=,width=0.47\\linewidth} \n\\hspace{0.5cm}\n\\epsfig{file=demo3_last_loop_sm2_df3_r2.eps,bb=40 240 555 590,clip=,width=0.47\\linewidth} \\\\\n\\epsfig{file=demo3_last_loop_sm1_df4_r2.eps,bb=40 240 555 590,clip=,width=0.47\\linewidth} \n\\hspace{0.5cm}\n\\epsfig{file=demo3_last_loop_sm2_df4_r2.eps,bb=40 240 555 590,clip=,width=0.47\\linewidth} \n\\end{center}\n\\caption{{\\sl Same as Fig.~\\ref{fig:toy1}. 
Independent complete simulation.}}\n\\label{fig:toy2}\n\\end{figure}\n\\begin{figure}\n\\begin{center}\n\\epsfig{file=demo3_last_loop_sm1_df1_r3.eps,bb=40 240 555 590,clip=,width=0.47\\linewidth} \n\\hspace{0.5cm}\n\\epsfig{file=demo3_last_loop_sm2_df1_r3.eps,bb=40 240 555 590,clip=,width=0.47\\linewidth} \\\\\n\\epsfig{file=demo3_last_loop_sm1_df2_r3.eps,bb=40 240 555 590,clip=,width=0.47\\linewidth} \n\\hspace{0.5cm}\n\\epsfig{file=demo3_last_loop_sm2_df2_r3.eps,bb=40 240 555 590,clip=,width=0.47\\linewidth} \\\\\n\\epsfig{file=demo3_last_loop_sm1_df3_r3.eps,bb=40 240 555 590,clip=,width=0.47\\linewidth}\n\\hspace{0.5cm}\n\\epsfig{file=demo3_last_loop_sm2_df3_r3.eps,bb=40 240 555 590,clip=,width=0.47\\linewidth} \\\\\n\\epsfig{file=demo3_last_loop_sm1_df4_r3.eps,bb=40 240 555 590,clip=,width=0.47\\linewidth}\n\\hspace{0.5cm}\n\\epsfig{file=demo3_last_loop_sm2_df4_r3.eps,bb=40 240 555 590,clip=,width=0.47\\linewidth} \n\\end{center}\n\\caption{{\\sl Same as Fig.~\\ref{fig:toy1}. Independent complete simulation.}}\n\\label{fig:toy3}\n\\end{figure}\n\n\n\\section{Conclusions}\nThe evaluation of the uncertainties of \nthe Bayesian unfolding of Ref.~\\cite{BU} has been improved\nby considering the probability density functions of the quantities\nof interest. Simplifications are achieved by using conjugate priors\nand by performing the relevant integrations, needed to propagate \nuncertainties, by sampling. \n\nThe paper also discusses the issue of the iterations and the\nrole of the {\\it intermediate} smoothing \nto reach fast convergence. It is important to note that, \ncontrary to other algorithms,\nthe intermediate smoothing acts as a {\\it regularization on the \npriors} and not on the unfolded spectrum. For this reason\nphysical peaks should still appear in the unfolded spectrum.\n\nThe R code of the algorithm is available, together with \na little simulation script to run it on toy models. \nThis script also implements \na simple intermediate prior regularization, but \nthe reader should keep in mind that\nthis task is left to the user, who is supposed to\nunderstand well his\/her physics case. \n\n\n\\newpage\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nMany of today's proposals for quantum information applications\\cite{bouwmeester:00} rely on the controlled \nand fast manipulation of the discrete states of the corresponding devices' underlying\nstructures. Semiconductor quantum dots (QDs) are frequently discussed as building blocks\nfor such materials because they hold out the prospect of tailor-made energy spectra\nand a high integrability in a solid-state environment. The excitonic QD states\nare promising candidates to be used as qubits for quantum computing\\cite{biolatti:00, chen:01, li:03, piermarocchi:02, boyle:08, nielsen:00},\nwhile the radiative decay from the biexciton cascade offers the possibility of an on-demand creation of indistinguishable entangled photon \npairs\\cite{moreau:01, stevenson:06, mueller:14}. 
\n\nIt has been shown that both exciton and biexciton states of a QD can be prepared by\nusing ultra-fast laser pulses under a variety of excitation conditions\\cite{reiter:14,ramsay:10c}.\nThe most commonly known schemes for this purpose are resonant excitation leading to Rabi rotations\\cite{zrenner:02,machnikowski:04, vagov:07, ramsay:10},\ndifferent protocols using chirped laser pulses exploiting the adiabatic rapid passage effect \\cite{schmidgall:10,simon:11,wei:14,lueker:12, glaessl:13, gawarecki:12, debnath:12, debnath:13, mathew:14},\nand phonon-assisted off-resonant driving \\cite{glaessl:11b, reiter:12, hughes:13, glaessl:13b}.\nRecently, the latter method has also been experimentally demonstrated\\cite{ardelt:14, quilter:15, bounouar:15}. \nIndeed, there is an increased interest in this approach,\nbecause it is not only stable against\nfluctuations of the applied field intensity, but also leaves the quantum dot transition laser-free, which can be\nimportant when the emitted photons need to be spectrally separated from the laser pulse.\nFurthermore, in contrast to the other two protocols the phonon-assisted scheme makes active use of the phonon coupling\nand even works the better the stronger this coupling is. \n\nIn this paper, we examine the influence of the pulse shape on the phonon-assisted state preparation.\nWe identify three distinct processes that take place during the laser-driven evolution of the QD states.\nWhen the pulse starts the coupling to the laser-field leads to a \\emph{dressing} of the bare QD states.\nThis enables a \\emph{phonon-induced relaxation} between the dressed states\\cite{leggett:87, dekker:87, glaessl:11b}\nand finally when the pulse is switched off an \\emph{undressing} takes place.\nWhile previous studies mostly concentrated on the phonon induced relaxation and the resulting exciton and biexciton\noccupation obtained when applying an off-resonant driving pulse\\cite{glaessl:13b}, here we will explain the impact of all three\ndifferent processes and examine in detail the role of the switch-on and switch-off phase of the excitation.\nWe will show that within the relaxation process there is a trade-off situation between a sufficiently fast \npreparation and an optimal preparation fidelity and that for a high fidelity \npreparation of the bare QD states an adiabatic undressing, that can be realized by a long enough switch-off\ntime, is indispensable in short pulse protocols.\nOur analysis also shows that even though Gaussian pulses, as used in very recent experiments\\cite{ardelt:14, quilter:15, bounouar:15},\nare not the ideal choice for the phonon-assisted protocol they fulfill the requirements for fast state preparation\nsurprisingly well for a wide intensity range provided that \nthe regime of non-adiabatic dynamics is not yet reached. The latter condition sets a lower bound to the pulse\nduration.\n\nThe pulse characteristics not only allow to control the fidelity of the achieved inversion,\nbut in addition can be used to select the QD state that is targeted by the phonon-assisted process.\nFor the two-level case the sign of the pulse detuning determines whether\nthe QD is driven towards the ground-state or the exciton state. 
For the exciton-biexciton system\nalso the biexciton binding energy and the pulse length play a critical role in determining the targeted QD state.\nUnderstanding that the preparation is a three-step process gives us an intuitive answer to the important question which state is selected\nby the phonon-assisted preparation scheme.\n\n\n\\section{Model}\\label{model} \n\nWe consider a strongly confined GaAs QD driven by an external laser field and \ncoupled to a continuum of acoustic bulk phonons. Our model is defined by the Hamiltonian\n$H = H_{\\rm{dot,\\,light}} + H_{\\rm{dot-ph}}$, i.e., the electronic system coupled\nto the laser field and an additional phonon part.\nLet us first concentrate on \n\\begin{align}\n&H_{\\rm{dot,\\,light}} \\!= \\sum_\\nu \\hbar\\omega_\\nu |\\nu\\rangle\\langle\\nu|\n \\!+ \\sum_{\\nu\\nu'} \\hbar M_{\\nu\\nu'}|\\nu\\rangle\\langle\\nu'|,\n\\label{eq_H_dotlight}\n\\end{align}\nwhere $|\\nu\\rangle$ are the electronic basis states with corresponding energies $\\hbar \\omega_{\\nu}$. \nThe matrix element $M_{\\nu\\nu'}$ describes the coupling between the QD and the classical laser field using\nthe common dipole and rotating wave approximations.\nIn the first part of the paper we will restrict ourselves to a two-level system consisting of the ground-state $|0\\rangle$,\nfor which we set $\\hbar\\omega_0=0$, and a single exciton state $|X\\rangle$ with energy $\\hbar\\omega_X$ as illustrated in Fig.~\\ref{fig1}(a).\nThe two-level approximation is valid when considering circularly polarized light with a single polarization orientation\nand the exchange interaction is negligibly small. The exchange interaction strongly depends on the QD geometry\\cite{langbein:04} and \ncan be close to zero, as it is favorable for, e.g., entangled photon creation\\cite{stevenson:06}.\nIn this case the non-zero matrix elements of the light-matter coupling are given by\n\\begin{align}\n&M_{0X} = \\frac{1}{2}f(t) e^{i\\omega_L t}\\,, \\qquad M_{X0}=M_{0X}^*\\, ,\n\\label{eq_M2LS}\n\\end{align}\nwhere $f(t)$ is a real pulse envelope function, which in the following is referred to as field strength.\nThe field strength $f(t)$ is related to the electric field ${\\bf E}(t)$ with frequency $\\omega_L$ and the QD dipole matrix element\n${\\bf d}$ by $-{\\bf d}\\cdot{\\bf E}(t)=\\frac{\\hbar}{2}f(t)e^{-i\\omega_L t}$. \n\nA very useful picture for strongly driven few-level systems, which we will employ to analyze our results, is the dressed state picture. 
\nThe dressed states are the eigenstates of the coupled light-matter Hamiltonian\nin a frame rotating with the laser frequency \\cite{tannor:07}.\nFor the two-level system driven by a laser-field with a fixed field strength $f$ and a detuning $\\Delta$ they are given by the expressions\n\\begin{subequations} \\label{dressed_states}\n\\begin{eqnarray}\n |\\psi_{\\rm{up}}\\rangle &=& +\\cos(\\Theta)|0\\rangle + \\sin(\\Theta)|X\\rangle\\\\\n |\\psi_{\\rm{low}}\\rangle &=& -\\sin(\\Theta)|0\\rangle + \\cos(\\Theta)|X\\rangle\n\\end{eqnarray}\n\\end{subequations}\nwhere $\\Theta$ is the mixing angle defined by $\\tan(\\Theta)=\\frac{\\hbar f}{\\Delta+ \\hbar\\Omega}$ and $\\Omega$ is the Rabi frequency given by \n\\begin{align}\n\\hbar \\Omega = \\sqrt{(\\hbar f)^2+\\Delta^2}.\n \\label{Omega}\n\\end{align}\nThe corresponding dressed state energies read\n\\begin{align}\n E_{\\rm{up\/low}}=\\frac{1}{2}(-\\Delta\\pm\\hbar\\Omega).\n \\label{dressed_state_energies}\n\\end{align}\nIt is worth noting that the contributions of the ground and exciton states to the dressed states vary depending on the\ndetuning and the field strength.\n\nThe final part of this paper will be devoted to excitations with linearly polarized light. In this case one also has to take into account\nthe biexciton state and the system described so far needs to be extended to a three-level model consisting\nof the ground state $|0\\rangle$, the single exciton state $|X\\rangle$ and the biexciton state $|B\\rangle$. \nNote that the single exciton state $|X\\rangle$ in the three-level system has different polarization than in the two-level system described further above.\nAs is illustrated in Fig.~\\ref{fig1}(b), the biexciton state has the energy $\\hbar\\omega_B=2 \\hbar\\omega_X- \\Delta_B$,\nwhere $\\Delta_B$ is the biexciton binding energy. \nIn the exciton-biexciton system, the non-vanishing dipole matrix elements are given by\n\\begin{align}\n&M_{0X} = M_{XB}= \\frac{1}{2}f(t)e^{i\\omega_L t}, M_{X0}=M_{BX}=M_{0X}^*.\n\\label{eq_M3LS}\n\\end{align}\n\nLet us now focus on the coupling of the QD to the phonon environment. We model the electron-phonon interaction \nby a pure-dephasing Hamiltonian which, together with the free phonon Hamiltonian, has the form\n\\begin{align}\n&H_{\\rm{dot-ph}} \\!= \\! \\sum_{\\bf q} \\hbar\\omega_{\\bf q}\\,b^\\dag_{\\bf q} b_{\\bf q} \n + \\sum_{{\\bf q} \\nu} \\hbar n_\\nu \\big(\n \\gamma_{\\bf q} b_{\\bf q} + \\gamma^{*}_{\\bf q} b^\\dag_{\\bf q}\n\\big)\n|\\nu\\rangle\\langle\\nu|.\n\\label{eq_H_phonon}\n\\end{align}\nThe operators $b^{\\dagger}_{\\bf q}$ ($b_{\\bf q}$) create (annihilate) a phonon with wave vector ${\\bf q}$ \nand linear dispersion $\\hbar \\omega_{{\\bf q}}=\\hbar c_s |{\\bf q}|$, where $c_s$ denotes the longitudinal sound velocity.\n$n_\\nu$ counts the number of excitons present in the state $|\\nu \\rangle$. The coupling constants $\\gamma_{\\bf q}$ are \nchosen specific for the deformation potential coupling to longitudinal acoustic phonons, which has been shown to be dominant \nfor typical self-assembled GaAs QDs\\cite{ramsay:10, krummheuer:02}. As described in more detail in Ref.~\\onlinecite{krummheuer:02},\nthe coupling constants $\\gamma_{\\bf q}$ depend on the electronic wave functions $\\Psi_{e(h)}$ as well as on the deformation potential\nconstants $D_{e(h)}$ for electrons (e) and holes (h), respectively. 
For simplicity, we assume the wave functions to be the ground-state\nsolutions of a spherically symmetric harmonic potential, i.e., $\\Psi_{e(h)}({\\bf r}) \\propto \\exp\\big(-r^2\/(2a_{e(h)}^2)\\big)$,\nand refer to $a_e$ as the QD radius. The characteristics of the exciton-phonon coupling can be expressed by the spectral density\n$J(\\omega)=\\sum_{\\bf q}|\\gamma_{\\bf q}|^2\\delta(\\omega-\\omega_{\\bf q})$, which under the above assumptions reads\\cite{krummheuer:02}\n\\begin{align}\n&J(\\omega ) =\\frac{\\omega^{3}}{4\\pi^{2}\\rho \\hbar c_{s}^{5}} \n \\big[D_{e}e^{-\\omega^{2}a_{e}^{2}\/(4c_{s}^{2})} \n - D_{h}e^{-\\omega^{2}a_{h}^{2}\/(4c_{s}^{2})} \\big]^{2}\n\\label{eq_J}\n\\end{align}\nwith $\\rho$ being the mass density. For our present calculations we use material parameters specific for GaAs taken from the literature\\cite{krummheuer:05}\n$\\rho\\!=\\!5370$~kg\/m$^3$, $c_s\\!=\\!5110$~m\/s, $D_e=7.0$~eV, and $D_h=-3.5$~eV. For the QD size we choose $a_e=3$~nm and set $a_e\/a_h=1.15$ assuming equal potentials \nbut taking into account the different effective masses of electrons and holes for GaAs. \n\\begin{figure}[t]\n \\includegraphics[width=\\columnwidth]{fig1.eps}\n \\caption{(Color online) Energetic level diagram of (a) the exciton system with circular polarized excitation and (b) the exciton-biexciton system with linear polarized excitation.\n (c) Spectral density of the phonon coupling as a function of energy for the parameters used in our simulations (see text).}\n \\label{fig1}\n\\end{figure}\nThe spectral density for these parameters is shown in Fig.~\\ref{fig1}(c) where \nit can be seen that $J(\\omega)$ exposes a clear maximum at $\\hbar\\omega = E_J^{\\rm max} \\approx 2$~meV. \nThe electron-phonon interaction leads to a polaron-shift $\\delta_{ph}$ of the exciton and biexciton energy, such that when driving the exciton-to-ground state transition\nthe resonant excitation is on the shifted exciton energy $\\hbar \\tilde{\\omega}_X = \\hbar \\omega_X - \\delta_{ph}$. In this paper we are mostly interested\nin detuned excitations and define the detuning as the difference between the laser energy and the polaron-shifted exciton energy, i.e., \n\\[ \\Delta = \\hbar \\omega_L - \\hbar \\widetilde{\\omega}_X. \\]\n\nTo study the time evolution of the electronic QD occupations under excitation with the laser field we employ an implementation\nof a real-time path-integral approach\\cite{vagov:11}. This method allows a numerically exact treatment of the above model\ndespite the infinite number of LA phonon modes and yields the dynamics of the reduced electronic density matrix of the \nQD including arbitrary multi-phonon processes and taking into account all non-Markovian effects. \nWe assume the QD to be initially in a product state of the electronic ground-state and a thermal equilibrium of\nthe phonon modes at temperature $T=1$~K.\n\n\n\\section{Results}\\label{results} \n\nWe start by analyzing the phonon-induced relaxation using continuous excitation that is switched on \ninstantaneously. 
Then we will apply short pulses with different pulse shapes to\nanalyze the influence of the adiabaticity of the dressing and undressing process \nin view of high-fidelity state preparation.\nFor this analysis we will also visualize the system trajectory on the Bloch sphere.\nFinally, we will demonstrate the selective addressing of all three states in the exciton-biexciton\nsystem using the phonon-assisted state preparation protocol.\n\n\\subsection{Phonon-induced relaxation}\\label{phonon_induced_relaxation}\n\n\\begin{figure}[ttt]\n \\includegraphics[width=8.7cm]{fig2.eps}\n \\caption{(Color online)\n (a) Exciton occupation $C_X$ as a function of time for different\n\t field strengths $f=0.5$~ps$^{-1}$, $1.0$~ps$^{-1}$ and $1.5$~ps$^{-1}$. (b) Time $\\tau$ after which\n\t the time average of the oscillations of $C_X$ has\n\t reached 99\\% of $C_{X}^{\\infty}$ (see text) as a function of the field strength $f$.\n\t Inset of (b): $C_{X}^{\\infty}$ as a function of the field strength $f$.\n\t The detuning is $\\Delta=1.0$~meV.\n }\n \\label{fig2}\n\\end{figure}\n\nThe phonon-induced relaxation can be best analyzed by considering a continuous excitation of the QD with a \nconstant field strength. Further, the laser field shall be circularly polarized such that our model\ncan be restricted to two electronic levels, as we explained in the previous section.\nGenerally, for a weak coupling between the electronic states and the phonon environment the standard expectation\nfor the driven QD dynamics is that the Markovian approximation is \nwell justified and that in the long-time limit the relaxation leads to a thermal occupation\nof the dot-photon dressed states\\cite{leggett:87, weiss:99} [c.f. Eq.~(\\ref{dressed_states})].\nIn a very good approximation, this has been shown to hold also true for the two-level model of the QD with standard GaAs-type parameters\nconsidered here\\cite{glaessl:11b}.\nFor very low temperatures, the system ends up mainly in the lower dressed state $|\\psi_{\\rm{low}}\\rangle$,\nwhich corresponds to an exciton occupation of\n\\begin{align}\n C_{X}^{\\infty} \\approx \\cos^2(\\Theta)= \\frac{1}{2} \\left(1 + \\frac{\\Delta}{\\sqrt{\\hbar^2 f^2 + \\Delta^2}} \\right).\n \\label{CX_infty}\n\\end{align}\nFigure~\\ref{fig2}(a) shows the simulated temporal evolution of the exciton state under constant excitation for a detuning of $\\Delta = 1.0$~meV\nand three different field strengths $f$. Here, the laser field is switched on instantaneously at $t = 0$~ps.\nThe occupations perform damped Rabi oscillations around a mean value that approaches a constant value.\nFor a decreasing field strength the stationary exciton occupation rises as can also be seen in the inset of Fig.~\\ref{fig2}(b)\nwhere $C_{X}^{\\infty}$ given by Eq.~(\\ref{CX_infty}) is shown as a function of $f$ for $\\Delta = 1.0$~meV.\nMost importantly for phonon-assisted exciton preparation, the stationary exciton occupation is very close to one and only limited\nby the finite temperature for sufficiently small field strengths. Larger values of $f$, however, strongly reduce $C_{X}^{\\infty}$.\n\nOn the other hand, when we look at the time required to reach the final state, we find that for small field strengths $f$, \nthe time to reach the final state becomes longer. 
This is quantified in Fig.~\\ref{fig2}(b), where we have plotted the time $\\tau$ it\ntakes for the mean value of the oscillations to reach 99\\% of the exciton occupation expected for a thermal distribution of the\ndot-photon dressed states $C_{X}^{\\infty}$ as a function of the field strength.\nFor example, at $f = 0.5$~ps$^{-1}$ a relaxation time of several hundred ps is required, which might exceed the time until other\nrelaxation processes occurring on longer time scales, such as the radiative decay (not considered here), come into play. Therefore such a low driving strength does not yield a sufficiently fast relaxation\nfor phonon-assisted state preparation. Indeed, we find that for $f\\rightarrow 0 $ the time needed for the state preparation diverges.\n\nTo understand the varying relaxation times for different driving strengths one has to keep in mind that the pure-dephasing type\nphonon coupling does not induce direct transitions between the electronic states; the phonon-induced relaxation is only\nenabled by the laser field. Transitions between the QD states only take place due to a non-vanishing overlap of both of the photon\ndressed states with the exciton state, which in turn is coupled to the phonon environment. \nFor an efficient relaxation the Rabi splitting $\\hbar\\Omega$, i.e. the difference between the two dressed state energies, also needs to be \nclose to typical phonon energies and should ideally match the maximum of the spectral density of the phonon coupling $J(\\omega)$ \n[cf. Eq.~(\\ref{eq_J})]. Both of these properties of the relaxation rate are also captured by a simple application of Fermi's Golden Rule,\nwhich yields a relaxation time between the upper dressed state without phonons and the lower dressed state with one phonon of\n\\begin{align}\n\\tau_{\\rm relax} = \\left(\\frac{\\pi}{2}\\sin^2(2\\Theta)\\,J(\\Omega)\\right)^{-1}.\n\\label{golden_rule}\n\\end{align}\nHere it can be seen that the mixing angle $\\Theta$ as well as the spectral density evaluated at the Rabi splitting $J(\\Omega)$ enter\nthe relaxation rate. 
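\nAs a rough cross-check of this estimate, Eqs.~(\\ref{eq_J}) and (\\ref{golden_rule}) can be evaluated directly. The short R snippet below (R is used here merely as a convenient calculator; it is not part of the simulation method of this work) scans the field strength for $\\Delta=1$~meV with the GaAs parameters listed in Sec.~\\ref{model}:\n\\begin{verbatim}\n## golden-rule estimate of the relaxation time, Delta = 1 meV\nhbar <- 1.0546e-34; eV <- 1.6022e-19; meV <- 1e-3*eV   # SI units\nrho <- 5370; cs <- 5110                  # kg/m^3, m/s\nDe  <- 7.0*eV; Dh <- -3.5*eV             # deformation potentials\nae  <- 3e-9;  ah <- ae/1.15              # wave function extensions (m)\nJ   <- function(w) w^3/(4*pi^2*rho*hbar*cs^5) *\n        (De*exp(-w^2*ae^2/(4*cs^2)) - Dh*exp(-w^2*ah^2/(4*cs^2)))^2\nDelta <- 1*meV\nf     <- seq(0.2, 6, by = 0.05)*1e12     # field strength (1/s)\nOmega <- sqrt((hbar*f)^2 + Delta^2)/hbar # Rabi frequency\nTheta <- atan(hbar*f/(Delta + hbar*Omega))   # mixing angle\ntau   <- 1/(pi/2*sin(2*Theta)^2*J(Omega))    # relaxation time (s)\nf[which.min(tau)]/1e12                   # fastest relaxation (ps^-1)\n\\end{verbatim}\nThe position of the minimum of $\\tau_{\\rm relax}$ obtained in this way can be compared with the value found in the full simulations quoted below, keeping in mind that the golden-rule expression ignores the non-Markovian features captured by the path-integral treatment.\n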
According to the simulations the maximal relaxation for the detuning $\\Delta = 1.0$~meV used here occurs for a field strength around $2.7$~ps$^{-1}$,\nwhich is reflected by the position of the minimum of the time needed for an almost complete relaxation plotted in Fig.~\\ref{fig2}(b).\nIn a good approximation this is expected from the rough estimation in Eq.~(\\ref{golden_rule}).\nDespite the short relaxation time at the optimal field strength, which is below $20$~ps, such a strong driving field is not applicable for high-fidelity\nstate preparation either, because the achieved final occupation of below $0.8$ is too far from the desired fidelity of one.\n\nLooking at our results, it becomes clear that the maximum exciton occupation for a given detuning can only be realized for almost\nvanishing field strengths where on the other hand it takes arbitrarily long times to complete the relaxation process.\nTherefore, we conclude, that when only the phonon relaxation process of the preparation takes place, \nthere is a trade-off between reaching the targeted state with high fidelity and realizing fast preparation times.\n\n\\subsection{Dressing and Undressing}\\label{dressing_undressing}\n\n\\begin{figure}[t]\n \\includegraphics[width=8.7cm]{fig3.eps}\n \\caption{(Color online)\n Dynamics for an off-resonant excitation of the QD with rectangular pulses (left) having a length of $40$~ps and Gaussian pulses (right) \n with a FWHM of $20$~ps. The pulse areas are $\\alpha=10\\pi$ (green), $20\\pi$ (blue) and $30\\pi$ (red). \n \t(a),(b) Exciton occupation $C_X$,\n \t(c),(d) occupation of the energetically lower dressed state $C_{\\rm{low}}$, \n \t(e),(f) instantaneous energy of the upper and lower dressed state $E_{\\rm{up\/low}}$, \n \t(g),(h) pulse envelopes.\n \tThe detuning is $\\Delta=1$~meV.\n\t }\n \\label{fig3}\n\\end{figure}\n\nIn the previous section the switch-on of the laser field has been taken to be instantaneous. This implies that a sudden \ntransformation of the photon dressed states occurs at the beginning of the pulse.\nSimilarly, when switching off the laser field the photon dressed states are transformed back to the pure ground and exciton states.\nTo understand the importance of these \n\\emph{dressing} and \\emph{undressing} processes in terms of fast and high fidelity phonon-assisted state preparation,\nwe now look at excitations with different pulse shapes. First we compare an excitation of the two-level system with rectangular and Gaussian pulses.\nLater, we approximate the Gaussian pulse using a rectangular pulse with softened edges. \n\nFig.~\\ref{fig3} shows the temporal evolution under excitation for the rectangular (left) and Gaussian (right) pulses including (a),(b) the exciton occupation,\n(c),(d) the lower dressed state occupation, (e),(f) the instantaneous dressed state energies and (g),(h) the pulse envelopes of the electric field.\nThe length of the rectangular pulses is chosen as $40$~ps, which is twice the full width of half maximum (FWHM) of the Gaussian pulses, which is set to $20$~ps.\nAll calculations are performed for a detuning of $\\Delta = 1.0$~meV and for three different pulse areas $\\alpha=\\int_{-\\infty}^\\infty f(t)\\rm{d}t$ with $\\alpha=10\\pi$ (green curves),\n$20\\pi$ (blue curves), and $30\\pi$ (red curves). \n\nLet us start with the rectangular pulse (left panels in Fig.~\\ref{fig3}). 
The laser pulse sets in instantaneously at $t=-20$~ps and besides damped Rabi oscillations\nthere is an overall increase of the exciton occupation that depends on the strength of the pulse [Fig.~\\ref{fig3}(a) and see also Fig.~\\ref{fig2}(a)].\nIn the dressed state picture this behavior can be understood as follows:\nWhen there is no laser pulse, the dressed states are equal to the bare states where for a positive detuning the ground state corresponds to the upper dressed state $E_{\\rm{up}}$ and\nthe exciton state corresponds to the lower dressed state $E_{\\rm{low}}$. Note that in this picture the energies of the photons needed for the excitation\nare counted as part of the dressed state energies.\nAs soon as the laser pulse sets in the dressed states become a mixture of ground and exciton state [cf. Eq.~(\\ref{dressed_states})] with shifted energies shown in Fig.~\\ref{fig3}(e),\ni.e. the bare QD states become dressed by the laser field.\nDue to their overlap with the ground-state both dressed states instantly become occupied, which is reflected by Rabi oscillations in the bare states\\cite{hughes:13}.\nPhonons induce transitions between the dressed states \\cite{lueker:12, reiter:14} and\nat low temperatures transitions to the lower dressed state, that correspond to phonon emission, outweigh.\nTherefore, during the pulse the lower dressed state becomes more and more occupied as it can be seen in Fig.~\\ref{fig3}(c). This also means that the exciton occupation\nsuccessively approaches its stationary value $C_X^{\\infty}$, which depends on the exciton contribution to the lower dressed state but is above $0.5$ for all field strengths \nand positive detunings [c.f. Eq.~(\\ref{CX_infty})].\nWhen the laser pulse is switched off at $t=40$~ps the exciton occupation [Fig.~\\ref{fig3}(a)] keeps the value it has right before the switch-off. \nAt the same time the dressed states are abruptly transformed back to the bare QD states due to the sudden stop of the rectangular pulse, which means that \nthe undressing takes place instantaneously. This is reflected by a step-like drop of the occupation of the lower dressed state that can be seen in Fig.~\\ref{fig3}(c).\nFor the weakest pulse ($\\alpha=10\\pi$) the final state is not reached within the time window of $40$~ps,\nwhile for the higher pulse areas the relaxation process is mostly completed. Either way, the final exciton occupation stays below $0.9$ for all of the three rectangular pulses and\nin fact no matter which rectangular shape is assumed for pulse lengths of a few tens of ps one never achieves a final value close to one due to the trade-off\nsituation described in the previous section.\n\nThe situation is quite different for the Gaussian pulses shown in the right column of Fig.~\\ref{fig3}.\nIn this case, no oscillations of the exciton occupation are visible and instead $C_X$ smoothly rises to its final value that for pulse areas\n$\\alpha=20\\pi$ and $30\\pi$ is considerably higher than for the rectangular pulses and practically reaches $1.0$ as it can be seen in Fig.~\\ref{fig3}(b).\nLike for the rectangular pulses, a phonon relaxation process takes place from the upper to the lower dressed state yielding a steady increase of the\noccupation of the lower dressed state in Fig.~\\ref{fig3}(d). However, because the field strength for Gaussian pulses is time-dependent,\nthe bare QD state contributions of the dressed states change during the pulse. 
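\nFor definiteness, the instantaneous dressed-state quantities entering this discussion [Eqs.~(\\ref{dressed_states})--(\\ref{dressed_state_energies})] can easily be followed along the pulse. The short R sketch below does this for a Gaussian envelope parametrized by pulse area and FWHM; it is meant purely as an illustration and, e.g., approximately reproduces the maximal Rabi splitting of the $30\\pi$ pulse quoted below:\n\\begin{verbatim}\n## instantaneous dressed states for a Gaussian envelope, Delta = 1 meV\nhbarmeV <- 0.6582                        # hbar in meV ps\nalpha <- 30*pi; fwhm <- 20               # pulse area, FWHM (ps)\nsigma <- fwhm/(2*sqrt(2*log(2)))\nt     <- seq(-40, 40, by = 0.1)          # time (ps)\nf     <- alpha/(sigma*sqrt(2*pi))*exp(-t^2/(2*sigma^2))  # f(t) in 1/ps\nDelta <- 1                               # detuning (meV)\nOmega <- sqrt((hbarmeV*f)^2 + Delta^2)   # Rabi splitting hbar*Omega (meV)\nTheta <- atan(hbarmeV*f/(Delta + Omega)) # mixing angle\nEup   <- (-Delta + Omega)/2              # dressed state energies (meV)\nElow  <- (-Delta - Omega)/2\ncXlow <- cos(Theta)^2                    # exciton weight of lower branch\nmax(Omega)                               # maximal splitting during pulse\n\\end{verbatim}\n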
This is also reflected by the time dependence of the dressed state energies shown in\nFig.~\\ref{fig3}(f), which can be used to extract the Rabi splitting at a given time. \nFor the weakest Gaussian pulse shown here ($\\alpha=10\\pi$), and similar to the case of the weakest rectangular pulse, the phonon-induced relaxation is \ntoo weak to complete the relaxation process within the duration of the pulse. This is because the Rabi splitting is well below the maximum of the phonon density\nat energy $E_J^{\\rm max}\\approx2$~meV [c.f. Fig.~\\ref{fig1}(c)] even at the maximum of the pulse where the Rabi splitting reaches its maximum value of approx.\n$1.4$~meV [c.f. Fig.~\\ref{fig3}(f)].\nIn contrast, for the higher pulse areas $\\alpha=20\\pi$ and $\\alpha=30\\pi$ the relaxation is more effective and leads to a full occupation of the lower dressed state.\nAt the pulse maximum the Rabi splitting for the $30\\pi$ pulse already becomes larger ($3.0$~meV) than $E_J^{\\rm max}$ temporarily leading to \na weakening of the relaxation which is visible by a reduced gain of the lower dressed state occupation [red curve in Fig.~\\ref{fig3}(d)] around $t=0$~ps.\nMost interestingly, the phonon-induced relaxation is practically complete around $t=10$~ps while the exciton occupation obtained by the stronger pulses is still\nfar from its final value and therefore the remaining increase of $C_X$ cannot be attributed to the relaxation.\nThe final value of $C_X$ is only reached within a second phase of increase that takes place while the pulse is switched-off. \nThis is only possible because during the switch-off the dressed states are \\emph{adiabatically undressed}, i.e. adiabatically transformed back to the bare QD states.\nIn this process the ground state component of the then almost fully occupied lower dressed state is reduced to zero which yields\na drastic increase of the exciton occupation. Importantly, a necessary precondition for this increase \nis that the undressing takes place slow enough such that the system can follow the transformation of the dressed states adiabatically and\nthe occupation stays in the lower dressed state. For example this is not fulfilled for the instantaneous undressing in case of the rectangular\npulses discussed earlier where the lower dressed state occupation experiences a step-like drop at the end of the pulse [Fig.~\\ref{fig3}(c)]\nthat does not occur for Gaussian pulses [Fig.~\\ref{fig3}(d)].\n\nAs it turns out the adiabatic undressing towards the end of the pulse is in fact essential for a successful {\\em fast} exciton preparation. \nThe absence of the adiabatic ending prevents the phonon-assisted state preparation protocol from working with high fidelity for short \nrectangular pulses and any other pulse shapes with too fast switch-off times.\n\n\\begin{figure}[t]\n \\includegraphics[width=8.7cm]{fig4.eps}\n \\caption{(Color online)\n Time dependent exciton occupation $C_X$ under pulsed excitation detuned by $\\Delta=1.0$~meV for different (a) \n\t switch-on (b) and switch-off times (see key). (c),(d) corresponding\n\t occupations of the energetically lower dressed state $C_{\\rm{low}}$ and (e), (f) corresponding pulse envelopes. 
\n\t \t}\n \\label{fig4}\n\\end{figure}\n\nHaving seen that the temporal evolution of the dressing and undressing plays a crucial role, we investigate in more detail the role of the switch-on and switch-off phase.\nTo this end we choose a special pulse shape that is designed to highlight the important role of these two processes on the preparation process. \nThe thermalization of the dressed states is obviously achieved best by a constant part of the pulse that is chosen such that the\ncoupling to the phonon environment is maximal ($f = 2.7$~ps$^{-1}$ for the parameters chosen here) and that is sufficiently long\nto complete the thermalization. In our case, the duration of this part of constant driving is chosen to be $40$~ps.\nIn order to systematically study the influences of the switch-on (switch-off) characteristics we add the left (right)\nhalf of a Gaussian pulse with FWHM $\\tau_{\\rm{on}}$ ($\\tau_{\\rm{off}}$) before (after) the constant part of the pulse envelope as illustrated in Fig.~\\ref{fig4}(e)[(f)]. \nHere, we compare a strictly rectangular pulse with $\\tau_{\\rm{on\/off}} = 0$~ps (green) with pulses of different switch-on\/off times \n$\\tau_{\\rm{on\/off}} =1$~ps (black), $3$~ps (blue) and $5$~ps (red).\nThe first row [panels (a) and (b)] shows the corresponding evolution of $C_X$, while the lower dressed state occupation is plotted in the second row [panels (c) and (d)] \nand the pulse envelopes are shown in the third row [panels (e) and (f)].\n\nLooking at the left panels of Fig.~\\ref{fig4}, we find that a longer switch-on time does not alter the resulting exciton occupation after the pulse,\nwhich is about $0.75$ for all three cases. However, we find a significant reduction of the Rabi oscillations, which almost disappear for $\\tau_{\\rm{on}} = 5$~ps. \nThe insensitivity of the final occupation with respect to the switch-on dynamics is related to the irreversible nature of the relaxation process which,\nin case of a complete relaxation, leads to a complete loss of memory of the initial state. We thus find that the first phase, i.e., the dressing,\nis necessary to start the relaxation process, but for a sufficiently long relaxation phase the details of the switch-on are irrelevant for the final fidelity. \n\nIn contrast, as demonstrated in the right panels of Fig.~\\ref{fig4}, a slower switch-off gives rise to a further gain of exciton occupation\nwhen the field strength is reduced to zero. This effect is considerably less pronounced if the switch-off time becomes too short and does not occur at all\nif the pulse stops instantaneously. For example for $\\tau_{\\rm{off}} = 5$~ps (red curve) $C_X$ increases by more than $0.2$ during the switch-off, \nwhile for $\\tau_{\\rm{off}} = 1$~ps (black) the increase is below $0.05$.\nThus, we conclude, that to yield a high fidelity state preparation the transformation of the dressed states into the bare QD states needs to happen \nslow enough in order to allow for an adiabatic evolution, i.e., the undressing needs to be adiabatic.\n\nTo sum up, for an efficient and fast phonon-assisted state preparation there are basically two features of the pulse envelope that are significant:\nFirst, there has to be a phase of the pulse where the QD thermalizes in the dressed state basis, which must be long enough to complete the relaxation.\nThis phase can be chosen the shorter the stronger the phonon-induced relaxation is and therefore can be minimized when the field strength is such that\nthe strength of the phonon-coupling is maximal. 
Therefore in this phase a constant field strength corresponding to the maximal relaxation rate is optimal\nto achieve a complete thermalization with a minimal pulse length. Second, the pulse must be switched-off slowly enough to allow the QD system for \nan adiabatic undressing of the states.\n\nAlthough a Gaussian pulse shape does not meet these requirements in an optimal fashion it still works surprisingly well,\nmostly because the optimum for the ideal field strength at which\nthe dressed state relaxation is maximal is not a sharp maximum, but rather a broad peak as it can be seen from Fig.~\\ref{fig1}.\nTherefore, even though a Gaussian\npulse does not have a constant plateau, a sufficient relaxation takes place during most of the time of the pulse\nprovided a reasonable value of the pulse area is chosen. Most importantly, deviations from the Gaussian pulse shape\nthat go towards faster switch-off times, as they might be induced by a pulse-shaping setup, can be harmful to \nthe efficiency of the phonon-assisted preparation protocol.\n\n\\subsection{Interpretation on the Bloch sphere}\\label{bloch_sphere}\n\n\\begin{figure}[t]\n \\includegraphics[width=\\columnwidth]{fig5.eps}\n \\caption{(Color online)\n (a),(b) Illustration of the system trajectory on the Bloch sphere and\n (c),(d) length of the Bloch vector as a function of time for \n (a),(c) a Gaussian excitation with pulse areas $\\alpha=10\\pi$ (green), $20\\pi$ (blue) and $30\\pi$ (red)\n and (b),(d) a rectangular excitation with different switch-on\/off times\n $\\tau_{\\rm{on}}=5$~ps; $\\tau_{\\rm{off}}=0$~ps (blue) and $\\tau_{\\rm{on}}=0$~ps; $\\tau_{\\rm{off}}=5$~ps (red). \n The detuning is $\\Delta=1$~meV.\n\t }\n \\label{fig5}\n\\end{figure}\n\nAnother interesting aspect not highlighted so far is that the incoherent phonon scattering can result in a pure state, which even \ncan be transformed to a bare QD state.\nThis is best illustrated using the Bloch vector picture\\cite{allen:75}. In this picture the projection of the Bloch vector on the $z$-axis represents the inversion,\ni.e., the difference between the occupations of the upper and the lower level of the two-level system, while the in-plane component reflects the polarization.\nResonant lossless driving of a two-level system is reflected by the Bloch vector moving at the surface of the Bloch sphere taking the shortest path from one pole to the\nother corresponding to the well known coherent Rabi oscillations.\nPreparing the exciton state using a $\\pi$ pulse then means going from the lower pole corresponding to the ground level to the upper pole \ncorresponding to the excited level.\nA detuned excitation leads to a tilted oscillation, which starting from the lower pole does not reach the upper pole.\nIf the system is fully coherent, the Bloch vector stays on the surface, i.e., its length is constantly one. If decoherence takes place,\nthe length of the Bloch vector is decreased. \nClearly, the phonon-assisted preparation involves an incoherent phonon-induced relaxation, but as we have seen it can still \neventually lead to an almost perfect exciton state. \n\n\nTo analyze this in more detail we have calculated the trajectory of the Bloch vector of the driven QD system, which is shown in Fig.~\\ref{fig5}(a),(b)\nalongside the time evolution of the length of the Bloch vector presented in Fig.~\\ref{fig5} (c),(d) for an excitation with Gaussian pulses (left panel)\nand for rectangular pulses with softened switch-on or switch-off (right panel). 
The corresponding pulse shapes can be seen in Fig.~\\ref{fig3} (h) and Fig.~\\ref{fig4} (e),(f).\n\nLet us first look at the Gaussian pulses in the left column.\nWhen the pulse is switched on the Bloch vector leaves the surface of the sphere for all pulse areas in Fig.~\\ref{fig5}(a) \n[$\\alpha=10\\pi$ (green), $\\alpha=20\\pi$ (blue) and $\\alpha=30\\pi$ (red)]. The corresponding vector length\nshown in Fig.~\\ref{fig5}(c) decreases \nas it is expected as the result of the pure-dephasing phonon coupling. \nIndeed, the loss of coherence turns out to be as high as $95\\%$. \nThe time to reach the minimal Bloch vector length depends on the pulse area and is shorter for the stronger pulses where the relaxation is more\nefficient. \nThe vector length would vanish at the minimum in the ideal case of a completely adiabatic dressing process\nand provided that the phonons realize a fully incoherent occupation transfer between the dressed states. In that case the electronic density matrix would\nstay diagonal in the dressed state basis all the time and coherences between the dressed states would neither build up due to the laser driving nor \ndue to the phonons. The continuous occupation transfer from the upper to the lower dressed state would then necessarily lead to a zero of the Bloch vector length\nat some point in time. In our case, however, the minimal Bloch vector length has a small but finite value because the Gaussian pulses used in the simulations already\ninduce some small coherences between the dressed states (not shown).\nSubsequently, the coherence between $|0\\rangle$ and $|X\\rangle$ is restored to a large degree and the Bloch vector length approaches $1$ for the two stronger\npulses as the phonon-assisted transitions result in an almost complete occupation of the lower dressed state which is again a pure state lying \non the surface of the Bloch sphere. For the weaker pulse, $\\alpha=10\\pi$ (green), the trajectory ends inside the Bloch sphere and the vector\nlength stays well below $0.5$, because due to the insufficient pulse strength the relaxation does not complete.\nFor the stronger pulses, the dressed states transform from a superposition of ground and exciton state into\nthe pure exciton state during the adiabatic undressing, which corresponds to a motion of the Bloch vector on the surface towards the upper pole. \nIt thus becomes obvious that the laser-driven QD evolution describes a way {\\em through} the Bloch sphere to the upper pole, which is very\ndifferent to the motion along the surface in the case of an inversion yielded by applying a resonant $\\pi$ pulse.\n\nA similar analysis can be done for the Bloch vector trajectory for rectangular pulses where the switch-on or the switch-off\nedge is softened. The red curve in Fig.~\\ref{fig5}(b),(d) has a sharp switch-on and a smooth switch-off with $\\tau_{\\rm{off}}=5$~ps,\nwhile the blue curve corresponds to a smooth switch-on with $\\tau_{\\rm{on}}=5$~ps and a sharp switch-off. \nWhen the switch-on is instantaneous (red curve), we see that the Bloch vector trajectory exhibits a spiral movement within the Bloch sphere \nreflecting the damped Rabi oscillations. In agreement with the essentially complete relaxation, the spiral ends up on a point close to the \nsurface of the Bloch sphere. For $\\tau_{\\rm{on}}=5$~ps (blue curve), the oscillations are less pronounced and the motion goes roughly \nalong the axis of the spiral movement. 
The spiraling will eventually vanish for even larger values of $\\tau_{\\rm{on}}$, \nbut the end point after the relaxation will be the same. If the laser pulse is switched off rapidly (blue curve), \nthe z-component of the Bloch vector stays constant after the end of the pulse and performs a circular motion.\nIn contrast, for $\\tau_{\\rm{off}}=5$~ps an adiabatic undressing takes place during the switch-off and the Bloch vector approaches the upper pole.\n\n\\subsection{Selective state preparation in the exciton-biexciton system}\\label{exciton_biexciton_system}\n\n\\begin{figure}[ttt]\n \\includegraphics[width=8.7cm]{fig6.eps}\n \\caption{(Color online)\n Occupations of the electronic levels $|0\\rangle$ (green), $|X\\rangle$ (blue) and $|B\\rangle$ (red) after optically exciting the\n exciton-biexciton system with a Gaussian pulse of pulse area $\\alpha=20\\pi$ and FWHM$=20$~ps as a function of the detuning $\\Delta$\n for a (a) positive biexciton binding energy $\\Delta_B=2.0$~meV and (b) negative biexciton binding energy $\\Delta_B=-2.0$~meV. (c) and (d) :\n corresponding energies of the QD states in the rotating frame.\n\t }\n \\label{fig6}\n\\end{figure}\n\nThe clear understanding of the different phases of the phonon-assisted relaxation mechanism worked out so far turns out to be highly valuable \nto predict in an easy way the QD state that can be obtained by pulsed off-resonant excitation for arbitrary initial states and also for the \nexciton-biexciton system illustrated in Fig.~\\ref{fig1}(b).\nTo this end it is required as before that during the switch-off phase of the pulse the system evolves adiabatically.\nWhen the pulse duration is sufficiently long such that the relaxation all the way to the lowest\ndressed state is fully completed the prepared state at the end of the pulse is determined exclusively by\nthe energetic order of the dressed states in the limit of vanishing pulse strength. \nRecalling that the dressed states are defined with respect to the rotating frame, the prepared state can technically be determined by subtracting \nfrom the energies of the QD states without light coupling the energy of the corresponding number of photons needed \nfor reaching that state and looking at the resulting order of the states. More specifically, zero photons have to be\nsubtracted for the ground state, one photon for an exciton state and two photons for the biexciton state.\nThe resulting state lowest in energy will be occupied predominantly at the end of the preparation process.\n\nFor example in the two-level system subtracting the laser energy from the laser-free QD state energies yields \ndressed state energies that are in the limit of vanishing field strength separated by the detuning between the laser\nand the polaron-shifted QD transition. For a positive detuning the energy of the exciton-like dressed state will be below\nthe corresponding ground state-like dressed state energy in the rotating frame and consequently the phonon-assisted preparation protocol with adiabatic undressing prepares \nthe exciton state. 
On the other hand, a negative detuning reverses the order of the dressed state energies and at low temperatures where there is mostly phonon\nemission the protocol prepares the QD ground state independent of the initial state\\cite{glaessl:13b}.\n\nIn the case of an exciton-biexciton system subtracting the energy of the corresponding photons gives the energies of the QD states in the frame rotating\nwith the laser frequency as $E'_0=0$, $E'_X = -\\Delta$, $E'_B = -\\Delta_B - 2\\Delta$. Thus, the energetic ordering depends also on the \nbiexciton binding energy and we will consider both cases with positive and negative biexciton binding energy, which can both be realized \ndepending on the QD geometry\\cite{michler:09}. In the following we will refer to the dressed state that in the limit of vanishing field\nstrength transforms into the $|0\\rangle$ [$|X\\rangle$, $|B\\rangle$] state as $|0'\\rangle$ [$|X'\\rangle$, $|B'\\rangle$]. \n\nLet us first discuss the most commonly encountered situation of a QD with a positive biexciton binding energy $\\Delta_B = 2$~meV.\nFigure~\\ref{fig6} (a) shows the final occupation after excitation with a detuned Gaussian\npulse with a FWHM of $20$~ps and a pulse area of $\\alpha=20\\pi$ as a function of the detuning, while Fig.~\\ref{fig6} (c) shows the corresponding energies\nin the rotating frame. It can be seen that for detunings below the two-photon resonance, i.e., $\\Delta < -\\Delta_B\/2 = -1$~meV, \nthe occupation remains in the ground state (green curve). This is consistent with the energetic order of the states, since $E'_0$ is the lowest\nenergy in this parameter region. \nAt $\\Delta = -1$~meV the energetic order of the dressed states changes, because for detunings above the two-photon resonance $E'_B$ is \nthe lowest energy. This leads to a significant drop of the final ground state occupation in favor of the biexciton occupation (red curve), \nwhich approaches its maximum close to one at $\\Delta\\approx-0.5$~meV. At the one-photon resonance at $\\Delta = 0$, \nthe energetic order changes once again and for all positive detunings $|0'\\rangle$ is the highest energy dressed state, while $E'_B$ remains \nbeing the lowest energy. For sufficiently long pulses all the occupation would, of course, end up in the lowest branch resulting in\nthe preparation of the biexciton. However, for $\\Delta > 1$~meV the energetic splitting $E'_X - E'_B > 3$~meV exceeds the maximum of\nthe phonon spectral density $E_J^{\\rm max}$ lying around $2$~meV by far, which results in a very inefficient relaxation to the $|B'\\rangle$ state,\nwhich is not completed in the time window set by the pulse length.\nTherefore, instead, we observe a gain of the exciton occupation (blue curve) as soon as $E'_X$ crosses $E'_0$.\nThe maximal exciton occupation of about $0.8$ is reached around $\\Delta = 2$~meV, where the energy splitting $E'_0 - E'_X = 2$~meV \nagrees with $E_J^{\\rm max}$.\nTherefore, it turns out that for a system with more than two states like the one considered here an incomplete relaxation\ncan also be advantageous for preparation purposes if a preparation of a QD state that in the rotating frame\nis not the lowest lying state, like in our case the exciton, is intended.\nAn even higher exciton occupation is possible for a larger biexciton binding energy which favors transitions to \n$|X'\\rangle$, because the coupling to the lowest dressed state $|B'\\rangle$ in this case gets even more\nout of resonance. 
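The rule formulated above, namely to subtract the appropriate number of photon energies and to identify the lowest resulting level, is easily turned into a small numerical check. The following sketch (Python; all energies in meV, the value $E_J^{\rm max}\approx 2$~meV and the chosen detunings are illustrative only, and the relaxation criterion is deliberately crude, it does not replace the full phonon dynamics) predicts the state favored by a complete relaxation and flags detunings for which the splitting towards the lowest dressed state exceeds the maximum of the phonon spectral density:
\begin{verbatim}
def rotating_frame_energies(delta, delta_b):
    """E'_0 = 0, E'_X = -Delta, E'_B = -Delta_B - 2*Delta (in meV)."""
    return {"0": 0.0, "X": -delta, "B": -delta_b - 2.0 * delta}

def predicted_state(delta, delta_b, e_j_max=2.0):
    """State lowest in rotating-frame energy; if the splitting towards it
    exceeds the assumed maximum of the phonon spectral density (~2 meV),
    the relaxation may stay incomplete within the pulse duration and the
    second-lowest level can retain the occupation."""
    e = rotating_frame_energies(delta, delta_b)
    order = sorted(e, key=e.get)            # states from lowest to highest energy
    lowest, second = order[0], order[1]
    if e[second] - e[lowest] > e_j_max:
        return lowest + " (incomplete relaxation, " + second + " may survive)"
    return lowest

# Delta_B = +2 meV reproduces the qualitative regions of Fig. 6(a):
for delta in (-2.0, -0.5, 2.0):
    print(delta, predicted_state(delta, 2.0))
\end{verbatim}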
\nIt also follows that whether the prepared state is \nthe exciton or the biexciton can in principle be selected by suitably adjusting the pulse length for all\npositive detunings, because the final state depends on whether the relaxation completes during the pulse \nor whether the intermediate level corresponding to the exciton \nis still predominantly occupied when the pulse is switched off. \nDetunings higher than $2.0$~meV lead to a decrease of both the final \nexciton and the biexciton occupations, because the\nphonon-induced relaxation becomes weaker and weaker as the phonon environment becomes more and more out of resonance.\nFinally, for very large detunings above $\\Delta=4.0$~meV the energetic splittings between the dressed states are so large that for the given pulse length practically no relaxation\ntakes place and the QD remains in the ground-state.\n\nThe case of a negative biexciton binding energy of $\\Delta_B=-2$~meV is shown in the right column. Fig.~\\ref{fig6}(b) shows \nthe final occupations and the corresponding energies are plotted in Fig.~\\ref{fig6}(d). Similar to the case of \npositive biexciton binding energies the system remains in the ground state up to a detuning of $0$, since $E'_0$ is the lowest energy. \nBetween $\\Delta = 0$ and $2$~meV $|X'\\rangle$ is the dressed state with the lowest energy. This energetic order appears\nexclusively for negative biexciton binding energies, resulting in a broad region where a complete preparation of the exciton occurs.\nThis finding is somewhat surprising, because the two-photon resonance at $\\Delta = 1$~meV also lies within this interval,\nwhere an efficient preparation of the exciton state might be unexpected, but can easily be understood in the context described here.\nAt $\\Delta = 2$~meV the energetically lowest state changes a second time and a sharp transition of the prepared state from the exciton to\nthe biexciton occurs.\nHowever, for $\\Delta > 2.5$~meV the energy splittings between the dressed states that are connected to the excitonic states and $|0'\\rangle$ already become too large for \na complete thermalization to take place during the length of the pulse.\nThis leads to a significant reduction of the biexciton occupation for higher detunings and above $\\Delta = 5$~meV the QD does not get affected by the pulse\nanymore staying in the ground-state.\nImportantly, also in the case of a negative biexciton binding energy the targeted state can be derived from the energetic order shown in Fig.~\\ref{fig6}(d),\ndemonstrating the correctness of the description found for the dynamics of the phonon-induced preparation process also in the case of the \nexciton-biexciton system.\n\nAs we have seen, to selectively address a state in an exciton-biexciton system by phonon-assisted state preparation the biexciton binding energy\nneeds to be considered carefully, while the detuning $\\Delta$ and the pulse length can act as control parameters to choose\nthe targeted state.\n\n\n\\section{Conclusions}\\label{conclusions} \n\nWe have analyzed the dynamics of an optically driven semiconductor QD coupled to longitudinal acoustic phonons for different off-resonant\nexcitation conditions focusing on two main questions: a) what are the requirements for a fast state preparation using off-resonant\ndriving and b) how is the prepared state selected. 
We demonstrated that a fast high fidelity preparation process not only relies on\nan efficient phonon-induced relaxation between the dot-photon dressed states, but also on a successful adiabatic undressing of the states.\nTo understand the influence of the adiabatic undressing we compared the exciton occupations produced by Gaussian and rectangular pulse \nshapes and systematically studied the influence of the switch-on and switch-off times. This analysis revealed that while the switch-on \ntime plays a subordinate role it is crucial that the pulse is switched off slowly enough. Furthermore, we analyzed the coherence properties\nduring the state preparation process and revealed that the incoherent phonon scattering can also restore coherence and the system trajectory takes a way through\nthe Bloch sphere to the opposite pole.\nFinally, we showed that the concept of phonon-assisted state preparation by adiabatic undressing \nalso applies to the exciton-biexciton system, where an easy prediction of the prepared state is possible.\nWhen the pulse is long enough to support a full relaxation the prepared state is the one that is lowest in energy in the rotating frame\nin the limit of vanishing field strength.\nWe demonstrated that even pulses too short for a full relaxation can be used for preparing intermediate states that are not the lowest \nin energy in the rotating frame. \nAltogether, it is demonstrated that in an exciton-biexciton system the decisive parameters for selecting the prepared state \nare the detuning, the biexciton binding energy and the pulse length.\n\n\\section{ACKNOWLEDGMENTS}\n\nA.M.B. and V.M.A. gratefully acknowledge the financial support from Deutsche Forschungsgemeinschaft\nvia the Project No. AX 17\/7-1. D.E.R. is thankful for financial support from the German Academic Exchange Service (DAAD) within the P.R.I.M.E. programme. 
\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} diff --git a/data_all_eng_slimpj/shuffled/split2/finalzzimac b/data_all_eng_slimpj/shuffled/split2/finalzzimac new file mode 100644 index 0000000000000000000000000000000000000000..96eed0c7d8d620920b140d2a627ef54048cba16b --- /dev/null +++ b/data_all_eng_slimpj/shuffled/split2/finalzzimac @@ -0,0 +1,5 @@ +{"text":"\\section{intro}\n\nStudies to exorcise Maxwell's demon \\cite{leff2014maxwell} shed light on the relationship between information theory and statistical physics \\cite{szilard1929entropieverminderung,bennett1982thermodynamics,lloyd1989use,lloyd1997quantum,kim2011quantum,schaller2011probing}.\nResearchers reported models for the Maxwell demon, including the demon as a feedback controller \\cite{touchette2000information,touchette2004information,sagawa2008second,allahverdyan2009thermodynamic,cao2009thermodynamics,fujitani2010jarzynski,ponmurugan2010generalized,horowitz2010nonequilibrium,sagawa2010generalized,abreu2012thermodynamics,sagawa2012fluctuation,sagawa2012nonequilibrium,esposito2012stochastic,hartich2014stochastic,horowitz2014thermodynamics,horowitz2014second,shiraishi2015role,miyahara2018work,ribezzi2019large,ito2013information,sandberg2014maximum}, information reservoir such as a bit sequence \\cite{mandal2012work,mandal2013maxwell,deffner2013information,strasberg2014second,barato2014stochastic,merhav2015sequence,strasberg2015thermodynamics,chapman2015autonomous,boyd2016identifying,mcgrath2017biochemical,strasberg2017quantum,stopnitzky2019physical,boyd2017leveraging}, and the unifying approaches \\cite{horowitz2013imitating,barato2014unifying,shiraishi2016measurement}.\nIn those studies, researchers have shown second-law-like inequalities ({\\SLLI}s), establishing that the second law of thermodynamics (SLT)\n\\begin{IEEEeqnarray}{lCr}\n\\Delta\\shaent{\\Xtot}+\\Qtot\\geq 0,\n\\label{eq:SLT}\n\\end{IEEEeqnarray}\nis not violated by the demon in principle, where we denote by $\\shaent{\\Xtot}$ and $\\Qtot$ the entropy and the heat transfer of the whole system in contact with a heat bath.\nWe assume that all processes are isothermal processes with a heat bath of temperature $T$ in this Letter. 
We set $k_B T = 1$.\n\nThe {\\SLLI}s appeared in those previous studies can be tentatively classified into two types: the first one, which was reported in the studies of the feedback control model, asserting that the entropy of the system can be reduced by the amount of information gathered by the controller (demon):\n\\begin{IEEEeqnarray}{lCr}\n\\Delta \\shaent{X} + Q \\geq \\genI,\n\\IEEEeqnarraynumspace\n\\label{eq:HItype1}\n\\end{IEEEeqnarray}\nand the second one, which was reported in the studies of the information reservoir, asserting that work can be extracted in exchange for the entropy production of the information reservoir:\n\\begin{IEEEeqnarray}{lCr}\n\\Wavg \\leq \\Delta \\shaent{X},\n\\IEEEeqnarraynumspace\n\\label{eq:WHtype2}\n\\end{IEEEeqnarray}\nwhere $X$ denotes the state of the controlled system such as gases enclosed in a piston or a bit sequence;\n$Q$ denotes the heat transfer from the controlled system;\n$\\Wavg$ denotes the work extracted from the controlled system;\nFor $\\genI$ in the inequality~(\\ref{eq:HItype1}), several quantities such as transfer entropy have been employed.\nIn the following,\nwe specify it as the mutual information production: $\\genI = \\muI{\\Xsys\\prm}{\\memX}-\\muI{\\Xsys}{\\memX}$, where $\\memX$ is the memory's state.\nThen, the \\SLLI~(\\ref{eq:HItype1}) coincides with the one shown in Ref.~\\cite{sagawa2012fluctuation}, \n\n\n\nBoth types of the previous {\\SLLI}s do not imply that feedback control increases the \\textit{net maximum work}.\nHere, we call the upper limit of extracted work that accounts for the memory cost determined by the Landauer's principle \\cite{landauer1961irreversibility,sagawa2009minimal} the net maximum work. \nIn other words, the net maximum work is the upper limit of $\\Wavg$ minus the lower limit of the memory cost.\nAccording to the Landauer's principle, the work required to exploit the memory is greater than $\\genI$.\nThus, the \\SLLI~(\\ref{eq:HItype1}) does not imply that the net maximum work increases by measurements because $\\genI$ is canceled out by the memory cost.\nLikewise, the \\SLLI~(\\ref{eq:WHtype2}) does not provide the difference of the net maximum work between feedback and open-loop control because all of the terms in the \\SLLI~(\\ref{eq:WHtype2}) do not vary depending on whether the control is feedback or open-loop.\n\nOur first main result\nis that the sum of the entropy bounds for all subsystems leads to\nthe first \\SLLI\n\\begin{IEEEeqnarray}{lCr}\n\\Delta\\Htot + \\Qtot \\geq \\DelI\n\\IEEEeqnarraynumspace\n\\label{eq:HHQD5}\n\\end{IEEEeqnarray}\nimplying that feedback control increases the net maximum work.\nHere, $\\DelI$ is the positive quantity defined later.\nDue to the positivity of $\\DelI$, this \\SLLI is a stronger bound than the SLT~(\\ref{eq:SLT}).\nIn terms of extracting work, $\\DelI$ coincides with how much the system's entropy production is useless for a controller to extract the work from the system.\nAlthough $\\DelI$ can grow in proportional to the system size, the memory cost can be constant with the system size when there are correlations within the system.\nHence, the feedback control can vanish $\\DelI$ with significantly less memory cost than $\\DelI$.\nThus, the feedback control can increase the net maximum work.\n\n\nIn addition, the sum of the entropy bounds of the controlled subsystems leads to\nthe stronger inequality than {\\SLLI}~(\\ref{eq:HItype1}):\n\\begin{IEEEeqnarray}{lCr}\n\\Delta \\Hsys + \\Qsys \\geq \\genI + 
\\DelIsys,\n\\label{eq:HQID23}\n\\end{IEEEeqnarray}\nwhere $\\DelIsys$ denotes the positive quantity similar to $\\DelI$. \nThis inequality is our second main result and derived at the end of the main result part.\nLikewise, the stronger bound corresponding to \\SLLI~(\\ref{eq:WHtype2}) is immediately derived from \\SLLI~(\\ref{eq:HHQD5}) by\nregarding all subsystems as a controlled system such as a bit sequence and ignoring the change in the internal energy:\n\\begin{IEEEeqnarray}{lCr}\n\\Wavg \\leq \\Delta \\Htot - \\DelI,\n\\label{eq:WQHD4}\n\\end{IEEEeqnarray}\n\n\n\n\n\n\\textit{Main result --}\nWe study a classical system in contact with a heat bath of temperature $T$.\nWe assume that the system and the heat bath constitute a closed system.\nThe system consists of non-overlapping subsystems $1, 2,\\dots, \\Nx$.\nThe evolution is Markovian with discretized time intervals, and we focus on a single step of this discrete Markov process.\nThe initial and final states of the $k$-th subsystem are denoted by $\\Xs{k}$ and $\\Xs{k}\\prm$, respectively.\nLet $\\DepX{k}$ be the subset of $\\Xtot=\\{\\Xs{1},\\dots,\\Xs{\\Nx}\\}$ that influences the evolution of $\\Xk$ other than $\\Xk$ itself.\nIn other words, the state $\\Xs{k}$ evolves depending only on $\\{\\Xs{k}\\}\\cup\\DepX{k}$\n\\footnote{For the formal definition of the independence, see Definition~\\ref{def:dependence} in Section~\\ref{sect:Positivity of Co-dissipation}.\n}\n.\n\nWe premise the local detailed balance for each subsystem, which result in the entropy bound for a subsystem\n\\footnote{We demonstrate the inequality~(\\ref{eq:yuragi_k}) in\nSection~\\ref{ap:Fluctuation relation for a subsystem}.}\n:\n\\begin{IEEEeqnarray}{rCl}\n\\YuragiK := \\shaent{\\Xs{k}\\prm\\mid \\DepX{k}} - \\shaent{\\Xs{k}\\mid \\DepX{k}} + \\Qs{k} \\geq 0,\n\\label{eq:yuragi_k}\n\\end{IEEEeqnarray}\nwhere we denote by $\\YuragiK$ the total entropy production accompanied with the evolution of $\\Xs{k}$ and by $\\Qs{k}$ the heat transfer from the $k$-th subsystem to the heat bath.\n\nThe dependency among subsystems induces the graph $\\GcaC{0}=\\{\\VcaC{0},\\EcaC{0}\\}$ as follows:\n\\begin{IEEEeqnarray}{lCr}\n\\VcaC{0} := \\{\\{\\Xs{1}\\},\\{\\Xs{2}\\},\\dots,\\{\\Xs{\\Nx}\\}\\},\n\\label{eq:Vdef}\n\\\\\n\\EcaC{0} :=\\{\\EdgeX{j}{k}\\mid \\Xj \\in \\DepXX{\\Xk}, \\Xk\\in\\Xtot\\},\n\\label{eq:Edef}\n\\end{IEEEeqnarray}\nwhere we denote by $\\EdgeX{j}{k}$ the directed edge from the vertex $\\{\\Xs{j}\\}$ to the vertex $\\{\\Xs{k}\\}$.\nThe graph $\\GcaC{0}$ may contain cycles.\nTo eliminate dependencies causing these cycles, we consider the coarse-grained subsystems where original subsystems are merged if they are members of the same cycle in $\\Gorg$.\nHereinafter, $\\Xj$ denotes the states of the $j$-th subsystem in the coarse-grained subsystems, and $\\Nx$ denotes the number of the coarse-grained subsystems.\nWe denote by $\\Gca=(\\Vca,\\Eca)$ the graph induced from the coarse-grained subsystems by the same way shown in Eqs.~(\\ref{eq:Vdef}) and (\\ref{eq:Edef}).\nBy its definition, $\\Gca$ is a directed acyclic graph (DAG).\n\nLet $\\Gcomp=(\\Vcomp,\\Ecomp)$ be a connected component of $\\Gca$.\nThe graph sequence $\\GcompS{1}, \\GcompS{2}, \\cdots, \\GcompS{\\Ncomp}$ is constructed by the following procedure applying the edge contraction \\cite{gross2003handbook} repeatedly to $\\Gcomp$:\n\\begin{easylist}[enumerate]\n@ Initialization: let $j=1$, $\\GcompS{1}=\\Gcomp$.\n@ If the graph $\\GcompS{j}$ has exactly one vertex, terminate. 
Otherwise, go to step \\ref{list:step3}\n\\label{list:step2}\n@ Select a vertex couple $\\pcPair$ from $\\GcompS{j}$ such that every child of $\\p$ is a sink, and $\\c$ is a child of $\\p$.\n\\label{list:step3}\n@ Execute the edge contraction for $\\p$ and $\\c$, and let the result of it be $\\GcompS{j+1}$.\n\\label{list:step4}\n@ Increment $j$, and go back to step \\ref{list:step2}\n\\end{easylist}\n\\vspace{0.5em}\n\\noindent This procedure is feasible because a DAG with more than two vertices always has a vertex such that has only sink as its children.\nThe essence of this procedure is that any $\\GcompS{j}$ is a DAG, and the adjacency of $\\Gca$ preserves \\footnote{See Lemma~\\ref{lem:ReduceGraphProperties} in Section~\\ref{sect:Properties of the graph sequence}.}.\n\n\\newcommand{\\hat{V}}{\\hat{V}}\n\nFor simplicity, we introduce a convention regarding collections of subsystems $\\hat{V}$.\nSuppose a function $f$ takes an arbitrary number of subsystems as arguments.\nThen, we just write $f(\\hat{V})$ in the sense of $f(\\Xs{V_1},\\Xs{V_2}\\cdots)$, where $\\Xs{V_j}$ is a $j$-th member of $\\hat{V}$.\nLikewise, we denote by $\\hat{V}\\prm$ the collection of the subsystem's final states in $\\hat{V}$.\nFurthermore, we denote by $\\Flatten(\\hat{V})$ the set of the subsystems created by flattening the nested structure of $\\hat{V}$.\nFor example, if $\\hat{V}=\\{\\{\\Xs{1},\\Xs{2}\\},\\{\\Xs{3},\\Xs{4}\\}\\}$, then\n\\begin{IEEEeqnarray}{cCc}\n\\Prb{}{\\hat{V}}=P(\\Xs{1},\\Xs{2},\\Xs{3},\\Xs{4}),\n\\label{eq:PVPXXXX8}\n\\\\\n\\hat{V}\\prm=\\{\\{\\Xs{1}\\prm,\\Xs{2}\\prm\\},\\{\\Xs{3}\\prm,\\Xs{4}\\prm\\}\\},\n\\label{eq:VXXXX9}\n\\\\\n\\Flatten(\\hat{V})=\\{\\Xs{1},\\Xs{2},\\Xs{3},\\Xs{4}\\}.\n\\end{IEEEeqnarray}\nWe denote by $\\Qset{\\hat{V}} := \\Sum{X\\in\\Flatten(\\hat{V})}{}\\Qs{}$ the heat transfer from all the subsystems composing $\\hat{V}$.\nWe denote by $\\DepXX{\\hat{V}}:=\\bigcup_{\\Xx\\in \\Flatten(\\hat{V})}{\\DepXX{\\Xx}}$ the sets of subsystems influencing the evolution of the members of $\\hat{V}$.\n\nWe use another notation to ignore the empty set as the conditioning event:\n\\begin{IEEEeqnarray}{cCr}\nP(V\\mid\\emptyset) := P(V),\n\\quad\n\\shaent{V\\mid\\emptyset} := \\shaent{V},\n\\\\ \n\\MutualInfo{V}{W\\mid\\emptyset}{}:=\\MutualInfo{V}{W}{}.\n\\label{eq:convention_empty_condition}\n\\end{IEEEeqnarray}\n\n\\begin{figure}[t]\n\\begin{center}\n\\includegraphics[width=0.45\\textwidth]{F3_PC_graph.eps}\n\\caption{(Color online). 
The schematic diagram of the sets defined as Eqs.~(\\ref{eq:dpdpdp14}) and $\\DelITwoPC$, $\\DelIThreePC$, and $\\DelIFourPC$.\n}\n\\label{fig:pc_depend}\n\\end{center}\n\\end{figure}\n\nIn addition, \nwe prepare notations for sets of subsystems regarding the dependency of $\\p$ and $\\c$:\n\\begin{IEEEeqnarray}{rCl}\n\\aaa := \\DepXX{c} \\cap \\p&,&\n\\quad\n\\CE := \\DepXX{\\c} \\setminus \\p,\n\\nonumber\n\\\\\n\\D := \\DepXX{\\p} \\setminus \\DepXX{c}&,&\n\\label{eq:dpdpdp14}\n\\quad\n\\DCE := \\DepXX{\\p\\cup\\c} \\setminus \\p,\n\\\\\n\\C := \\DepXX{\\c} \\setminus \\DepXX{\\p} \\setminus \\vMergeP&,&\n\\quad\n\\E := \\DepXX{\\p} \\cap \\DepXX{\\c} \\setminus \\p\\,.\n\\nonumber\n\\end{IEEEeqnarray}\nAs illustrated in Fig.~\\ref{fig:pc_depend}, these sets have the following meaning: \n$\\aaa$ are the subsystems that are members of $\\p$ and influence to $\\c$; $\\D$ are the subsystems that influence $\\p$ but not $\\c$; $\\E$ are the subsystems that influence both $\\p$ and $\\c$; $\\C$ are the subsystems that influences $\\c$ but not $\\p$. We also note that $\\CE=\\C\\cup\\E$ and $\\DCE = \\C\\cup\\E\\cup\\D$. The minus symbol ``-'' put at the superscript of $\\Dep$ represents the elimination of $\\p$.\n\nThe key quantity $\\DelI$ is the sum of two modules, each caused by the independency of the evolution within or among connected components in $\\Gca$.\nThe first module $\\DelIinner{\\Gcomp}$, which is caused by the independency within a connected component, is provided by\n\\begin{IEEEeqnarray}{cCc}\n\\DelIinner{\\Gcomp}\n:= \n- \\Sum{(p,c)\\in \\Vpccc}{}\n\\left(\\DelIOne{p,c} + \\DelITwo{p,c}+\\DelIThree{p,c} +\\DelIFour{p,c}\\right),\n\\IEEEeqnarraynumspace\n\\label{eq:def_DELI_intra}\n\\end{IEEEeqnarray}\nwhere $\\Vpccc$ is the set of all $\\pcPair$ selected through constructing the graph sequence $\\GcompS{1},\\cdots,\\GcompS{\\Ncomp}$, and each term of $\\DelIinner{\\Gcomp}$ is mutual information production as follows:\n\\begin{IEEEeqnarray}{cCc}\n\\DelIOne{p,c} := \\muI{\\p\\prm}{\\c\\prm\\mid\\DCE}-\\MutualInfo{p}{c\\prm\\mid \\DCE}{},\n\\label{eq:def_delIOne}\n\\\\\n\\DelITwo{p,c} := \\MutualInfo{p\\prm}{\\C\\mid \\DE}{}- \\MutualInfo{p}{ \\C\\mid \\DE}{},\n\\label{eq:def_delITwo}\n\\\\\n\\DelIThree{p,c} := \\MutualInfo{c\\prm}{\\D\\mid p,\\CE}{}- \\MutualInfo{c}{\\D\\mid p,\\CE}{} ,\n\\label{eq:def_delIThree}\n\\\\\n\\DelIFour{p,c} :=\n\\MutualInfo{\\c\\prm}{\\p\\mid \\DepC}{}\n-\n\\MutualInfo{\\c}{\\p\\mid \\DepC}{},\n\\label{eq:def_delIFour}\n\\end{IEEEeqnarray}\nwhere we used the convention introduced in Eqs.~(\\ref{eq:PVPXXXX8}) and (\\ref{eq:VXXXX9}).\nThe independencies of the evolutions induced from the definitions~(\\ref{eq:dpdpdp14})\nlead to the negativity of $\\DelIOnePC$, $\\DelITwoPC, \\DelIThreePC$, and $\\DelIFourPC$ \n\\footnote{For the formal definition of the independent evolution, see Definition~\\ref{def:independent Markov process} in Section~\\ref{sect:Positivity of Co-dissipation}.\nIn addition, Lemma~\\ref{lem:ind_mutual_prod_nonincrease1} in this section establishes the negativity of the mutual information productions such as $\\DelIOnePC$, $\\DelITwoPC, \\DelIThreePC$ and $\\DelIFourPC$.}.\nFor example, $\\DelIOnePC$ is the mutual information production through the independent process of $\\p$ from $\\c\\prm$ conditioned on $\\DCE$.\nThis fact leads to the negativity of $\\DelIOnePC$.\nConsequently, $\\DelIinner{\\Gcomp}$ is positive.\nLikewise, the following quantity is caused by the independency among the connected components in 
$\\Gca$:\n\\begin{IEEEeqnarray}{rCl}\n\\DelIinter{\\Gca}\n :=\\Sum{j=2}{\\Ncc}\\left[\\MutualInfo{\\VcompI{j}}{\\VcompI{1:j-1}}{}\n - \\MutualInfo{{\\VcompIprm{j}}}{{\\VcompIprm{1:j-1}}}{}\n \\right],\n\\IEEEeqnarraynumspace\n\\label{eq:defDelIinter}\n\\end{IEEEeqnarray}\nwhere we denote by $\\Ncc$ the number of connected components in $\\Gca$ and by $\\VcompI{j}$ all vertices in the $j$-th connected component of $\\Gca$.\nWe use the colons to represent the union of indexed sets such as $\\VcompI{1:j-1}:=\\bigcup_{k=1}^{j-1}\\VcompI{k}$. \nSince subsystems contained in different connected components evolve mutually independently, the mutual information production $\\MutualInfo{\\VcompIprm{j}}{\\VcompIprm{1:j-1}}{}\n - \\MutualInfo{{\\VcompI{j}}}{{\\VcompI{1:j-1}}}{}$ is negative, which leads to the positivity of $\\DelIinter{\\Gca}$.\nWe define $\\DelI$ as the sum of the two contributions:\n\\begin{IEEEeqnarray}{rCl}\n\\DelI := \\Sum{\\Gcomp\\in\\Gca}{}\\DelIinner{\\Gcomp}+\n\\DelIinter{\\Gca}.\n\\IEEEeqnarraynumspace\n\\label{eq:defDelI39}\n\\end{IEEEeqnarray}\nThe first main result of this Letter is the SLLI~(\\ref{eq:HHQD5}) with $\\DelI$ provided by Eq.~(\\ref{eq:defDelI39}).\nFor a rigorous proof of this result, see Theorem~\\ref{lem:sum_all_is_bound} in Section~\\ref{subsec:Calculation of the sum of YuragiRel using Gca}, where we demonstrate that the sum $\\Sum{k=1}{\\Nx}\\YuragiK$ results in the SLLI~(\\ref{eq:HHQD5}).\nMoreover, since all of its modules are positive, $\\DelI$ takes a positive value. \nThe positivity is formally proved in Theorem~\\ref{lem:JKLMI} in Section~\\ref{sect:Positivity of Co-dissipation}.\nThus, by the positivity of $\\DelI$, the \\SLLI~(\\ref{eq:HHQD5}) is a stronger bound than the SLT~(\\ref{eq:SLT}).\nIn particular, if all subsystems influence each other, then $\\DelI$ vanishes, and \\SLLI coincides with the SLT~(\\ref{eq:SLT}).\n\nTo obtain the \\SLLI~(\\ref{eq:HQID23}), we take the sum of the entropy bounds within the controlled system.\nWe regard $\\Xs{1}$ as the state $Y$ of the memory and $\\Xs{2:\\Nx}$ as the state $\\Xsys$ of the controlled system.\nLet $\\DelIsysS$ be $\\DelI$ determined by the definition~(\\ref{eq:defDelI39}) under the setting where the subsystems $2,3,\\dots,\\Nx$ constitute the whole system.\nIn addition, let $\\DelIsys$ be the quantity provided by adding $Y$ as the conditioning event in $\\DelIOnePC, \\DelITwoPC, \\DelIThreePC$, and $\\DelIFourPC$ when calculating $\\DelIsysS$.\nThen, we can show\n\\footnote{See Corollary~\\ref{th:strong_type1} in Section~\\ref{subsec:Calculation of the sum of YuragiRel using Gca}.}\nthat the sum $\\Sum{k=2}{\\Nx}\\YuragiK$ leads to the \\SLLI~(\\ref{eq:HQID23}).\nThe positivity of $\\DelIsys$ is established by a similar way of proving the positivity of $\\DelI$.\n\n\n\n\\textit{Example 1 --}\nWe observe that \\SLLI~(\\ref{eq:HHQD5}) implies that the feedback control increases the net maximum work contrary to the SLT~(\\ref{eq:SLT}) and the {\\SLLI}s~(\\ref{eq:HItype1})(\\ref{eq:WHtype2}) by the model inspired by the Szilard engine. \nWe regard subsystem 1 as a controller with a 1-bit memory and subsystems 2 through $N$ as a controlled system.\nAs illustrated in Fig.~\\ref{fig:f1}, the controlled system consists of $N-1$ ideal gas particles in a rigid box.\nIn the initial state, the box is separated by a wall, and all particles are on the left or right side of the wall with a probability of 0.5. 
Then, subsystems of the controlled system correlate with each other: $\\muI{\\Xj}{\\Xk}=\\ln2$ for $j,k\\geq2$.\nWe assume that feasible controls consist of three actions: the controller moves the wall to the left or right edge of the box or pulls the wall out from the box.\nWe investigate only when the final state is the maximum entropy state, where the particles move around the entire box.\n\n\\begin{figure}[t]\n \\centering\n \\includegraphics[width=0.43\\textwidth]{PRL_F1.eps}\n \\caption{The initial and final state in Example 1.}\n \\label{fig:f1}\n\\end{figure}\n\nWe suppose that the feedback control consists of two steps: 1) the controller performs a binary measurement of whether a single particle stays on the left or right side without error; 2) the controller moves the wall to the opposite side of the particles.\nIn the second step, the entropy productions of the whole system and the controlled system are $\\Delta\\Htot=(N-1)\\ln2$ and $\\Delta\\Hsys=(N-2)\\ln2$. In addition, the increase in mutual information between the memory and the controlled system is $-\\ln2$, and the memory cost determined by the Landauer's principle is $\\ln2$.\n\nBesides the feedback control, we can consider three cases in terms of the measurement and the dependency.\nWith the measurement and without the influence from the memory's state on the controlled system's evolution, which we call the ``open-loop with measurement (\\OpenLoopWithMeasure),''\n$\\Delta\\Htot=(N-1)\\ln2$, $\\Delta\\Hsys=(N-2)\\ln2$ and the memory cost is $\\ln2$.\nWithout the measurement, regardless of the influence of the memory state on the controlled evolution, $\\Delta\\Htot=\\Delta\\Hsys=(N-2)\\ln2$ with the zero memory cost. \nWe call these cases the ``open-loop control without the measurement (\\OpenLoopWithoutMeasure).''\n\n\nThen, although the SLT~(\\ref{eq:SLT}) and the {\\SLLI}s~(\\ref{eq:HItype1})(\\ref{eq:WHtype2}) claim the same net maximum work $(N-2)\\ln2$ for all four cases,\nour \\SLLI~(\\ref{eq:HHQD5}) asserts a strict limit such that no work can be extracted without the feedback control:\n\\begin{IEEEeqnarray}{lCr}\n\\Wavg \\leq\n\\begin{cases}\n(N-1)\\ln2 & (\\text{feedback control})\\\\\n0 & (\\text{\\OpenLoopWithMeasure or \\OpenLoopWithoutMeasure})\n\\end{cases}\n\\label{eq:NLN205}\n\\end{IEEEeqnarray}\nbecause\nwe have \\footnote{\nSee Section~\\ref{ap:Calculation of $\\DelI$ in Example 1}.\n}\n\\begin{IEEEeqnarray}{lCr}\n\\DelI=\n\\begin{cases}\n0 & (\\text{feedback control}) \\\\\n(N-1)\\ln2 & (\\text{\\OpenLoopWithMeasure})\\\\\n(N-2)\\ln2. 
& (\\text{\\OpenLoopWithoutMeasure})\n\\end{cases}\n\\end{IEEEeqnarray}\nAccordingly, only the feedback control enjoys the positive net maximum work $(N-2)\\ln2$ despite $-\\ln2$ or zero for the \\OpenLoopWithMeasure or the \\OpenLoopWithoutMeasure case.\nIn addition, this advantage of the feedback control \nincreases with the number of particles, which means the feedback control can be significantly beneficial more at a macroscopic level.\n\n\n\n\n\\textit{Example 2 --}\nTo observe how $\\DelIOnePC$ appears,\nwe consider a two-body system with the following dependency:\n\\begin{IEEEeqnarray}{lCr}\n\\DepX{1} = \\emptyset, \\quad \\DepX{2} = \\{\\Xs{1}\\}\n\\label{eq:x1tox2depen}\n\\end{IEEEeqnarray}\nThen, $\\Ncc=1$ and $\\pcPair=(\\Xs{1},\\Xs{2})$.\nSuppose that all subsystems take binary states, and\nthe entropy of each subsystem takes the maximum amount: $\\shaent{\\Xj}=\\shaent{\\Xj\\prm}=\\ln2$ for all $j$.\nWe assume the following correlations:\n\\begin{IEEEeqnarray}{cCc}\n\\muI{\\Xs{1}}{\\Xs{2}} = \\muI{\\Xs{1}}{\\Xs{2}\\prm} = \\muI{\\Xs{2}}{\\Xs{2}\\prm} = \\ln2,\n\\\\\n\\muI{\\Xs{1}}{\\Xs{1}\\prm} = \\muI{\\Xs{2}}{\\Xs{1}\\prm} = \\muI{\\Xs{2}\\prm}{\\Xs{1}\\prm} = 0.\n\\end{IEEEeqnarray}\nThen, we have $\\Delta \\Htot=\\ln2$. Since $\\DelIOnePC=\\ln2$ and $\\DelITwoPC = \\DelIThreePC = \\DelIFourPC = 0$, we obtain $\\DelI=\\ln2$. \nAssuming that the internal energy is constant, our inequality~(\\ref{eq:WQHD4}) implies $\\Wavg \\leq 0$\nalthough the SLT~(\\ref{eq:SLT}) and the {\\SLLI}s~(\\ref{eq:HItype1}) (\\ref{eq:WHtype2}) claim that the net maximum work is $\\ln 2$.\n\n\\textit{Example 3 --}\nTo observe how $\\DelITwoPC, \\DelIThreePC$ and $\\DelIFourPC$ appear, we consider a system consisting of five subsystems with the following dependency:\n\\begin{IEEEeqnarray}{cCr}\n\\DepX{1} = \\{\\Xs{3}\\},\\quad\n\\DepX{2} = \\{\\Xs{3}, \\Xs{5}\\},\\quad\n\\DepX{3} = \\{\\Xs{4}\\},\n\\NonumberNewline\n\\DepX{4} = \\{\\Xs{5}\\},\\quad\n\\DepX{5} = \\emptyset\n\\IEEEeqnarraynumspace\n\\end{IEEEeqnarray}\nThen, $\\Ncc=1$ and the sequence of $\\pcPair$ can be $(\\Xs{3},\\Xs{1})$, $(\\{\\Xs{1},\\Xs{3}\\}, \\Xs{2})$, $(\\Xs{4}, \\{\\Xs{1},\\Xs{2}, \\Xs{3}\\})$, $(\\Xs{5}, \\{\\Xs{1}, \\Xs{2},\\Xs{3}, \\Xs{4}\\})$. \nWe suppose all subsystems take binary states except $\\Xs{1}$, which takes four states.\nWe denote by $\\Xs{1H}$ and $\\Xs{1V}$ the first and second bit of $\\Xs{1}$.\nThe entropy of each subsystem takes the maximum amount: $\\shaent{\\Xj}=\\shaent{\\Xj\\prm}=\\ln2$ for all $j$ except $\\shaent{\\Xs{1}}=\\shaent{\\Xs{1}\\prm}=2\\ln2$.\nThere are no correlations among the subsystems other than the following:\n\\begin{IEEEeqnarray}{rCl}\n\\muI{\\Xs{1H}}{\\Xs{2}} &=& \\muI{\\Xs{2}}{\\Xs{4}} \n= \\muI{\\Xs{1V}}{\\Xs{3}} \n\\NonumberNewline\n&=& \\muI{\\Xs{3}}{\\Xs{5}} = \\ln2.\n\\end{IEEEeqnarray}\nThen, we have $\\Delta \\Htot = 4\\ln2$. \nSince $\\DelIOnePC=\\DelITwoPC = \\DelIThreePC=\\DelIFourPC=0$ other than $\\DelIThree{\\Xs{3},\\Xs{1}}=\\DelITwo{\\{\\Xs{3},\\Xs{1}\\}, \\Xs{2}}=\\DelIFour{\\{\\Xs{3},\\Xs{1}\\}, \\Xs{2}}=\\ln2$, we obtain $\\DelI=3\\ln2$. \nAssuming that the internal energy is constant, our inequality~(\\ref{eq:WQHD4}) implies $\\Wavg \\leq \\ln2$, although the SLT~(\\ref{eq:SLT}) and the {\\SLLI}s~(\\ref{eq:HItype1})(\\ref{eq:WHtype2}) assert that the maximum work is $4\\ln2$. \nIndeed, when the third subsystem performs feedback control, the maximum work $\\ln2$ is achieved. 
\n\nMeanwhile, if the dependency is provided by \n$\\DepX{1} = \\emptyset,\\quad\\DepX{2} = \\DepX{3} = \\DepX{4} = \\DepX{5} = \\{\\Xs{1}\\}$,\nthen $\\DelI$ is zero, and our \\SLLI claims that the same upper limit $\\Wavg=4\\ln2$ as the SLT and the {\\SLLI}s~(\\ref{eq:HItype1})(\\ref{eq:WHtype2}).\n\n\\textit{Discussion --}\nBy the definition, $\\DelI$ is the decrease in mutual informations between subsystems evolving independently.\nPrecisely, $\\DelIinner{\\Gcomp}$ is the negative sum of $\\DelIOnePC, \\DelITwoPC, \\DelIThreePC$, and $\\DelIFourPC$ that are the mutual information productions through the independent processes conditioned on the directly influencing subsystems.\nFor example, $\\DelITwoPC$ is the mutual information production between $\\p$ and $\\CE$, where $\\p$ evolves independently from $\\CE$ conditioned on $\\D$.\nWe can refer to Fig.~\\ref{fig:pc_depend}, where the arrows represent dependencies, to understand the independencies regarding $\\DelIOnePC, \\DelITwoPC, \\DelIThreePC$, and $\\DelIFourPC$.\nLikewise, $\\DelIinter{}$ is the decrease in the mutual informations through the mutually independent processes, as mentioned earlier.\nHence, $\\DelI$ is varied by changing the dependency and the conditioning event.\n\nIn Example 1, we can interpret that the feedback control reduces $\\DelI$ by turning the independent process not conditioned on other states into the independent process conditioned on the correlated memory's state.\nWhen the controller neither performs the measurement nor influences the controlled system's evolution,\n$\\DelI$ is the decrease in the mutual informations through \\textit{the independent processes that are not conditioned on another state}.\nIn contrast, in the feedback control, the memory state influences the controlled system's evolution, and then\n$\\DelI$ becomes the mutual information decrease through the \\textit{independent process conditioned on the correlated memory state}.\nSince the measurement provides the correlation: $\\muI{\\Xs{1}}{\\Xj}$, this conditioning by the memory state causes less $\\DelI$ in the feedback control.\n\n\n\nAt last, we compare the present study with the exorcise of the Maxwell demon.\nThe latter study recovers the SLT\nfrom a situation in which the feedback control appeared to extract more work than the upper limit by the second law.\nIn contrast, this study illustrates that the feedback control improves the maximum work, which is lower than what the SLT claims in the open-loop control.\nMoreover, our {\\SLLI}s imply that the feedback control does not violate the SLT because our {\\SLLI}s are always tighter than the SLT.\n\n\n\n\n\n\n\n\n\n\n\n\\textit{Conclusion --}\nWe reported the first {\\SLLI}s~(\\ref{eq:HHQD5})-(\\ref{eq:WQHD4}) implying that the feedback control can increase the \\textit{net maximum work}. 
\nAlthough both the {\\SLLI}s (\\ref{eq:HItype1}) and (\\ref{eq:WHtype2}) do not imply that the feedback control increases the net maximum work, \nour {\\SLLI}s imply it\nbecause the open-loop control causes $\\DelI$, which leads to less net maximum work than what the SLT claims.\nBy the positivity of $\\DelI$,\nout {\\SLLI}s are stronger bounds than the SLT and the previous {\\SLLI}s.\nSince all components of $\\DelI$ are the decreases of the mutual informations among the subsystems, the advantage of the feedback control suggested in this Letter comes from the internal correlations of the controlled system.\nOur work might serve as a theoretical basis for quantifying the usefulness of information processing in physical systems.\n\n\n\n\n\n\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\n\\IEEEPARstart{I}{n} recent decades, the rapid development of the digital camera hardware has revolutionized human lives. On one hand, even mid-level mobile devices can easily produce high resolution images and videos. Besides the physical elements, the widespread use of the images and videos also reflects the importance of developing software technology for them. On the other hand, numerous registration techniques for images and video frames have been developed for a long time. The existing registration techniques work well on problems with a moderate size. However, when it comes to the current high quality images and videos, most of the current registration techniques suffer from extremely long computations. This limitation in software seriously impedes fully utilizing the state-of-the-art camera hardware.\n\nOne possible way to accelerate the computation of the registration is to introduce a much coarser grid on the images or video frames. Then, the registration can be done on the coarse grid instead of the high resolution images or video frames. Finally, the fine details can be added back to the coarse registration. It is noteworthy that the quality of the coarse grid strongly affects the quality of the final registration result. If the coarse grid cannot capture the important features of the images or video frames, the final registration result is likely to be unsatisfactory. In particular, for the conventional rectangular coarse grids, since the partitions are restricted in the vertical and horizontal directions, important features such as slant edges and irregular shapes cannot be effectively recorded. By contrast, triangulations allow more freedom in the partition directions as well as the partition sizes. Therefore, it is more desirable to make use of triangulations in simplifying the registration problems.\n\nIn this work, we propose a novel content-aware algorithm to {\\em TR}iangulate {\\em IM}ages, abbreviated as \\emph{TRIM}, that largely simplifies the registration problem for high quality images and video frames. Specifically, given a possibly very high definition image or video frame, we compute a coarse triangulation representation of it. The computation involves a series of steps including subsampling, unsharp masking, segmentation and sparse feature extraction for locating sample points on important features. Then, a high quality triangulation can be created on the set of the sample points using the Delaunay triangulation. With a coarse triangular representation of the images, the registration problem can be significantly accelerated. 
We apply a landmark-based quasi-conformal registration algorithm \\cite{Lam14} for computing the coarse registration. Finally, the fine detail of the image or video frame is computed with the aid of the coarse registration result. Our proposed framework also serves as a highly efficient and accurate initialization for many different registration approaches.\n\nThe rest of this paper is organized as follows. In Section \\ref{previous}, we review the literature on image and triangular mesh registration. In Section \\ref{contribution}, we highlight the contribution of our work. Our proposed method is explained in details in Section \\ref{main}. In Section \\ref{experiment}, we demonstrate the effectiveness of our approach with numerous real images. The paper is concluded in Section \\ref{conclusion}.\n\n\n\\section{Previous works} \\label{previous}\n\nIn this section, we describe the previous works closely related to our work.\n\nImage registration have been widely studied by different research groups. Surveys on the existing image registration approaches can be found in \\cite{Zitova03,Crum04,Klein09}. In particular, one common approach for guaranteeing the accuracy of the registration is to make use of landmark constraints. Bookstein \\cite{Bookstein78,Bookstein91,Bookstein99} proposed the unidirectional landmark thin-plate spline (UL-TPS) image registration. In \\cite{Johnson02}, Johnson and Christensen presented a landmark-based consistent thin-plate spline (CL-TPS) image registration algorithm. In \\cite{Joshi00}, Joshi et al. proposed the Large Deformation Diffeomorphic Metric Mapping (LDDMM) for registering images with a large deformation. In \\cite{Glaunes04,Glaunes08}, Glaun\\`es et al. computed large deformation diffeomorphisms of images with prescribed displacements of landmarks.\n\nA few works on image triangulations have been reported. In \\cite{Gee94}, Gee et al. introduced a probabilistic approach to the brain image matching problem and described the finite element implementation. In \\cite{Kaufmann13}, Kaufmann et al. introduced a framework for image warping using the finite element method. The triangulations are created using the Delaunay triangulation method \\cite{Shewchuk96} on a point set distributed according to variance in saliency. In \\cite{Lehner07,Lehner08}, Lehner et al. proposed a data-dependent triangulation scheme for image and video compression. Recently, a commercial triangulation image generator called DMesh \\cite{dmesh} has been designed.\n\nIn our work, we handle image registration problems with the aid of triangulations. Numerous algorithms have been proposed for the registration of triangular meshes. In particular, the landmark-driven approaches use prescribed landmark constraints to ensure the accuracy of mesh registration. In \\cite{Wang05,Lui07,Wang07}, Wang et al. proposed a combined energy for computing a landmark constrained optimized conformal mapping of triangular meshes. In \\cite{Lui10}, Lui et al. used vector fields to represent surface maps and computed landmark-based close-to-conformal mappings. Shi et al. \\cite{Shi13} proposed a hyperbolic harmonic registration algorithm with curvature-based landmark matching on triangular meshes of brains. In recent years, quasi-conformal mappings have been widely used for feature-endowed registration \\cite{Zeng11,Zeng14,Lui14,Meng15}. Choi et al. 
\\cite{Choi15} proposed the FLASH algorithm for landmark aligned harmonic mappings by improving the algorithm in \\cite{Wang05,Lui07} with the aid of quasi-conformal theories. In \\cite{Lam14}, Lam and Lui reported the quasi-conformal landmark registration (QCLR) algorithm for triangular meshes.\n\n\n\\section{Contribution} \\label{contribution}\nOur proposed \\emph{TRIM} approach for accelerating image and video frame registration is advantageous in the following aspects:\n\\begin{enumerate}\n \\item The triangulation algorithm is fully automatic. The important features of the input image are well recorded in the resulting coarse triangulation.\n \\item The algorithm is fast and robust. The coarse triangulation of a typical high resolution image can be computed within seconds.\n \\item The registration of the original images can be simplified by a mapping of the coarse triangulation. The fine details are then restored with the aid of the coarse registration result.\n \\item Our proposed \\emph{TRIM} algorithm is particularly suitable for landmark-based registration as the landmark constraints can be easily included in the coarse triangulations. By contrast, for regular grid-based approaches, accurate landmark correspondences can only be achieved on a very dense grid domain representation.\n \\item Using our proposed approach, the problem scale of the image and video frame registration is significantly reduced. Our method can serve as a fast and accurate initialization for the state-of-the-art image registration algorithms.\n\\end{enumerate}\n\n\\begin{figure}[t]\n \\centering\n \\includegraphics[height=95mm]{pl.png}\n \\caption{The pipeline of our proposed {\\em TRIM} algorithm for accelerating image registration via coarse triangulation.}\n \\label{fig:pl}\n\\end{figure}\n\n\n\n\\section{Proposed method} \\label{main}\n\nIn this section, we describe our proposed {\\em TRIM} image triangulation approach for efficient registration in details. The pipeline of our proposed framework is described in Figure \\ref{fig:pl}.\n\n\\bigbreak\n\n\\subsection{Construction of coarse triangulation on images}\nGiven two high resolution images $I_1$ and $I_2$, our goal is to compute a fast and accurate mapping $f: I_1 \\to I_2$. Note that directly working on the high resolution images can be inefficient. To accelerate the computation, the first step is to construct a coarse triangular representation of the image $I_1$. In the following, we propose an efficient image triangulation scheme called {\\em TRIM}. Our triangulation scheme is content-aware. Specifically, special objects and edges in the images are effectively captured by a segmentation step, and a suitable coarse triangulation is constructed with the preservation of these features. Our proposed {\\em TRIM} method consists of 6 steps in total.\n\n\\bigbreak\n\n\\subsubsection{Subsampling the input image without affecting the triangulation quality}\nDenote the input image by $I$. To save the computational time for triangulating the input image $I$, one simple remedy is to reduce the problem size by performing certain subsampling on $I$. For ordinary images, subsampling unavoidably creates adverse effects on the image quality. Nevertheless, it does not affect the quality of the coarse triangulation we aim to construct on images.\n\nIn our triangulation scheme, we construct triangulations based on the straight edges and special features on the images. Note that straight edges are preserved in all subsamplings of the images because of the linearity. 
Hence, our edge-based triangulation is not affected by the mentioned adverse effects. In other words, we can subsample high resolution images to a suitable size for enhancing the efficiency of the remaining steps for the construction of the triangulations. We denote the subsampled image by $\\tilde{I}$.\n\n\\bigbreak\n\n\\subsubsection{Performing unsharp masking on the subsampled image}\nAfter obtaining the subsampled image $\\tilde{I}$, we perform an unsharp masking on $\\tilde{I}$ in order to preserve the edge information in the final triangulation. More specifically, we first transform the data format of the subsampled image $\\tilde{I}$ to the CIELAB standard. Then, we apply the unsharp masking method in \\cite{Polesel00} on the intensity channel of the CIELAB representation of $\\tilde{I}$. The unsharp masking procedure is briefly described as follows.\n\nBy an abuse of notation, we denote $\\tilde{I}(x,y)$ and $\\bar{I}(x,y)$ the intensities of the input subsampled image $\\tilde{I}$ and the output image $\\bar{I}$ respectively, and $G_{\\sigma}(x,y)$ the Gaussian mean of the intensity of the pixel $(x,y)$ with standard derivation $\\sigma$. Specifically, $G_{\\sigma}(x,y)$ is given by\n\n\\begin{equation}\n G_{\\sigma}(x,y) \\triangleq \\frac{1}{\\sigma\\sqrt{2\\pi}}\\int_{(u,v) \\in \\Omega} e^{-\\frac{(u-x)^2 + (v-y)^2}{2\\sigma^2}}.\n\\end{equation}\n\nWe perform an unsharp masking on the image using the following formula\n\\begin{equation}\n \\bar{I}(x,y) = \\tilde{I}(x,y) - \\lambda\\begin{cases} G_{\\sigma}*\\tilde{I}(x,y) & \\text{ if } V_s(x,y)>\\theta, \\\\ 0 & \\text{ if } V_s(x,y) < \\theta, \\end{cases}\n\\end{equation}\nwhere\n\\begin{equation}\n V_s(x,y) \\triangleq \\sqrt{\\frac{1}{Area(M_s)} \\int_{(u,v) \\in M_s} (\\tilde{I}(u,v)-\\tilde{I}_{mean}(x,y))^2 }\n\\end{equation}\nand\n\\begin{equation}\n\\tilde{I}_{mean}(x,y) = \\frac{1}{Area(M_s)} \\int_{(u,v) \\in M_s} \\tilde{I}(u,v).\n\\end{equation}\n\nHere, $M_s$ is the disk with center $(x,y)$ and radius $s$. The effect of the unsharp masking is demonstrated in Figure \\ref{fig:unsharp}. With the aid of this step, we can highlight the edge information in the resulting image $\\bar{I}$ for the construction of the triangulation in the later steps. In our experiment, we choose $\\lambda = 0.5,~\\sigma = 2,~s = 2$, and $\\theta = 0.5$.\n\n\\begin{figure}[t]\n \\centering\n \\includegraphics[width=0.22\\textwidth]{edge_2.png}\n \\includegraphics[width=0.22\\textwidth]{edge_1.png}\n \\caption{An illustration of unsharp masking. Left: the input image. Right: the resulting image. The unsharp masking procedure helps preserving the edge information of the input image to ensure that the vertices in unclear edges can also be extracted.}\n \\label{fig:unsharp}\n\\end{figure}\n\n\\bigbreak\n\n\\subsubsection{Segmenting the image}\nAfter obtaining the image $\\bar{I}$ upon unsharp masking, we perform a segmentation in this step in order to optimally locate the mesh vertices for computing the coarse triangulation. Mathematically, our segmentation problem is described as follows.\n\nSuppose the image $\\bar{I}$ has $L$ intensity levels in each RGB channel. Denote $i$ as a specific intensity level $(i.e. ~ 0 \\leq i \\leq L-1)$. Let $C$ be a color channel of the image $(i.e. 
~ C \\in \\lbrace R,G,B \\rbrace)$, and let $h_{i}^{C}$ denote the image histogram for channel $C$, in other words, the number of pixels which correspond to its $i$-th intensity level.\n\nDefine $p_{i}^{C} \\triangleq \\frac{h_{i}^{C}}{N}$, where $N$ represents the total number of pixels in the image $\\bar{I}$. Then we have\n\\begin{equation}\n\\sum\\limits_{\\substack{i = 0, \\\\ C \\in \\lbrace R,G,B \\rbrace}}^L p_{i}^{C} = 1 \\ \\ \\text{ and } \\ \\ \\mu_{T}^{C} = \\sum\\limits_{\\substack{i = 0, \\\\ C \\in \\lbrace R,G,B \\rbrace}}^L ip_{i}^{C}.\n\\end{equation}\n\nSuppose that we want to compress the color space of the image $\\bar{I}$ to $l$ intensity levels. Equivalently, $\\bar{I}$ is to be segmented into $l$ regions $D_{1}^{C},\\cdots,D_{l}^{C}$ by the ordered threshold levels $ x_{j}^{C}, j = 1,\\cdots,l-1$. We define the best segmentation criterion to be maximizing the inter-region intensity-mean variance. More explicitly, we define the cost\n\\begin{equation}\\label{eqt:pso_cost}\n \\sigma^C \\triangleq \\sum\\limits_{\\substack{j = 1, \\\\ C \\in \\lbrace R,G,B \\rbrace}}^l w_{j}^{C}(\\mu_{j}^{C} - \\mu_{T}^{C})^2,\n\\end{equation}\nwhere the probability $w_{j}^{C}$ of occurrence and the intensity-mean $\\mu_{j}^{C}$of the region $D_{j}^{C}$ are respectively given by\n\\begin{equation}\n w_{j}^{C} =\n\\begin{cases}\n\\sum\\limits_{\\substack{i = 0, \\\\ C \\in \\lbrace R,G,B \\rbrace}}^{t_{j}^{C}} p_{i}^{C} & \\text{ if } j = 1, \\\\\n\\sum\\limits_{\\substack{i = t_{j-1}^{C} + 1, \\\\ C \\in \\lbrace R,G,B \\rbrace}}^{t_{j}^{C}} p_{i}^{C} & \\text{ if } 1 < j < l, \\\\\n\\sum\\limits_{\\substack{i = t_{j}^{C} + 1, \\\\ C \\in \\lbrace R,G,B \\rbrace}}^{L - 1} p_{i}^{C} & \\text{ if } j = l,\n\\end{cases}\n\\end{equation}\nand\n\\begin{equation}\n\\mu_{j}^{C} =\n\\begin{cases}\n\\sum\\limits_{\\substack{i = 0, \\\\ C \\in \\lbrace R,G,B \\rbrace}}^{t_{j}^{C}} \\frac{ip_{i}^{C}}{w_{j}^{C}} & \\text{ if } j = 1, \\\\\n\\sum\\limits_{\\substack{i = t_{j-1}^{C} + 1, \\\\ C \\in \\lbrace R,G,B \\rbrace}}^{t_{j}^{C}} \\frac{ip_{i}^{C}}{w_{j}^{C}}\n& \\text{ if } 1 < j < l, \\\\\n\\sum\\limits_{\\substack{i = t_{j}^{C} + 1, \\\\ C \\in \\lbrace R,G,B \\rbrace}}^{L - 1} \\frac{ip_{i}^{C}}{w_{j}^{C}}\n& \\text{ if } j = l.\n\\end{cases}\n\\end{equation}\n\nHence, we maximize three objective functions of each RGB channel\n\\begin{equation}\n\\underset{1 < x_{1}^{C} < \\cdots < x_{l-1}^{C} < L}{\\arg\\max} \\sigma^C(\\lbrace x_{j}^{C} \\rbrace_{j = 1}^{l-1}),\n\\end{equation}\nwhere $C \\in \\{R, G, B\\}$. Our goal is to find a set of $\\textbf{x} = \\lbrace x_{j}^{C} \\rbrace_{j = 1}^{l-1}$ such that above function is maximized for each RGB channel.\n\nTo solve the aforementioned segmentation optimization problem, we apply the Particle Swarm Optimization (PSO) segmentation algorithm \\cite{Ghamisi12} on the image $\\bar{I}$. The PSO method is used in this segmentation optimization problem for reducing the chance of trapping in local optimums.\n\nAn illustration of the segmentation step is provided in Figure \\ref{fig:pso}. After performing the segmentation, we extract the boundaries of the segments. Then, we can obtain a number of large patches of area in each of which the intensity information is almost the same. 
They provide us with a reasonable edge base for constructing a coarse triangulation in later steps.\n\n\\begin{figure}[t]\n \\centering\n \\includegraphics[width=0.22\\textwidth]{mnm_1.png}\n \\includegraphics[width=0.22\\textwidth]{mnm_2.png}\n \\caption{An illustration of the segmentation step for compressing the color space to achieve a sparse intensity representation. Left: the original image. Right: the segmentation result.}\n \\label{fig:pso}\n\\end{figure}\n\n\\bigbreak\n\\subsubsection{Sparse feature extraction on the segment boundaries}\nAfter computing the segment boundaries $\\mathcal{B}$ on the image $\\bar{I}$, we aim to extract sparse feature points on $\\mathcal{B}$ in this step. For the final triangulation, it is desirable that the edges of the triangles are as close as possible to the segment boundaries $\\mathcal{B}$, so as to preserve the geometric features of the original image $I$. Also, to improve the efficiency for the computations on the triangulation, the triangulation should be much coarser than the original image. To achieve the mentioned goals, we consider extracting sparse features on the segment boundaries $\\mathcal{B}$ and use them as the vertices of the ultimate triangulated mesh.\n\nConsider a rectangular grid table $G$ on the image $\\bar{I}$. Apparently, the grid table $G$ intersects the segment boundaries $\\mathcal{B}$ at a number of points. Denote $\\mathcal{P}$ as our desired set of sparse features. Conceptually, $\\mathcal{P}$ is made up of the set of points at which $\\mathcal{B}$ intersect the grid $G$, with certain exceptions.\n\nIn order to further reduce the number of feature points for a coarse triangulation, we propose a merging procedure for close points. Specifically, let $g_{i,j}$ be the vertex of the grid $G$ at the $i$-th row and the $j$-th column. We denote $\\mathcal{P}_{i,j}^1$ and $\\mathcal{P}_{i,j}^2$ respectively as the set of points at which $\\mathcal{B}$ intersect the line segment $\\displaystyle \\overline{g_{i,j} g_{i,j+1}}$ and the line segment $\\displaystyle \\overline{g_{i,j} g_{i+1,j}}$.\n\nThere are 3 possible cases for $\\mathcal{P}_{i,j}^k$:\n\\begin{enumerate}[(i)]\n \\item If $|\\mathcal{P}_{i,j}^k|=0$, then there is no intersection point between the line segment and $\\mathcal{B}$ and hence we can neglect it.\n \\item If $|\\mathcal{P}_{i,j}^k|=1$, then there is exactly one intersection point $p_{i,j}^k$ between the line segment and $\\mathcal{B}$. We include this intersection point $p_{i,j}^k$ in our desired set of sparse features $\\mathcal{P}$.\n \\item If $|\\mathcal{P}_{i,j}^k|>1$, then there are multiple intersection points between the line segment and $\\mathcal{B}$. Since these multiple intersection points lie on the same line segment, it implies that they are sufficiently close to each other. In other words, the information they contain about the segment boundaries $\\mathcal{B}$ is highly similar and redundant. Therefore, we consider merging these multiple points as one point.\n\\end{enumerate}\nMore explicitly, for the third case, we compute the centre $m_{i,j}^k$ of the points in $\\mathcal{P}_{i,j}^k$ by\n\\begin{equation}\n\\displaystyle m_{i,j}^k = mean_{\\{p | p \\in \\mathcal{P}_{i,j}^k\\}} p.\n\\end{equation}\nThe merged point $m_{i,j}^k$ is then considered as a desired feature point. 
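These three cases translate directly into a short per-edge routine. A minimal sketch (Python; it assumes that the intersection points of every horizontal and vertical grid edge with the segment boundaries $\mathcal{B}$ have already been collected) is:
\begin{verbatim}
import numpy as np

def feature_point_for_edge(intersections):
    """Cases (i)-(iii): no intersection -> no feature point,
    exactly one -> keep it, several -> merge them into their centre."""
    if len(intersections) == 0:
        return None
    if len(intersections) == 1:
        return tuple(intersections[0])
    return tuple(np.mean(intersections, axis=0))

def sparse_features(per_edge_intersections):
    """per_edge_intersections: one list of (x, y) points per edge of G."""
    pts = (feature_point_for_edge(p) for p in per_edge_intersections)
    return [p for p in pts if p is not None]
\end{verbatim}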
In summary, our desired set of sparse features is given by\n\\begin{equation}\n\\displaystyle \\mathcal{P} = \\bigcup_i \\bigcup_j \\{p_{i,j}^1, p_{i,j}^2, m_{i,j}^1, m_{i,j}^2\\}.\n\\end{equation}\n\nAn illustration of the sparse feature extraction scheme is given in Figure \\ref{fig:feature}.\n\\begin{figure}[t]\n \\centering\n \\includegraphics[width=0.28\\textwidth]{feature.png}\n \\caption{An illustration of our sparse feature extraction scheme. The chosen sparse feature points are represented by the red dots. If the segment boundary does not intersect an edge, no point is selected. If the segment boundary intersects an edge at exactly one point, the point is selected as a feature point. If the segment boundary intersects an edge at multiple points, the centre of the points is selected as a feature point.}\n \\label{fig:feature}\n\\end{figure}\nHowever, one important problem in this scheme is to determine a suitable size of the grid $G$ so that the sparse feature points are optimally computed. Note that to preserve the regularity of the extracted sparse features, it is desirable that the elements of the grid $G$ are close to perfect squares. Also, to capture the important features as completely as possible, the elements of $G$ should be small enough. Mathematically, the problem can be formulated as follows.\n\nDenote $w$ as the width of the image $\\bar{I}$, $h$ as the height of the image $\\bar{I}$, $w'$ as the number of columns in $G$, $h'$ as the number of rows in $G$, $l_{w}$ as the horizontal length of every element of $G$, and $l_{h}$ as the vertical length of every element of $G$. We further denote $p$ as the percentage of grid edges in $G$ which intersect the segment boundaries $\\mathcal{B}$, and $n$ as the desired number of sparse feature points. Given the two inputs $p$ and $n$ which respectively correspond to the desired sparse ratio and the desired number of feature points, to find a suitable grid size of $G$, we aim to minimize the cost function\n\n\\begin{equation}\nc(l_{w},l_{h}) = \\left| l_{w} - l_{h} \\right|^2\n\\end{equation}\nsubject to\n\\begin{enumerate}[(i)]\n \\item \\begin{equation}h = h'l_{h},\\end{equation}\n \\item \\begin{equation}w = w'l_{w},\\end{equation}\n \\item \\begin{equation} \\label{eqt:number} p(w'+h'+2w'h') = n.\\end{equation}\n\\end{enumerate}\n\nHere, the first and the second constraints respectively correspond to the vertical and horizontal dimensions of the grid $G$, and the third constraint corresponds to the expected total number of intersection points. To justify Equation (\\ref{eqt:number}), note that\n\\begin{equation}\n\\begin{split}\n&\\text{Total number of line segments} \\\\\n= \\ &\\text{Total number of horizontal line segments} \\\\\n& + \\text{Total number of vertical line segments}\\\\\n= \\ &h'(w'+1) + w'(h'+1)\\\\\n= \\ &w'+h'+2w'h'.\\\\\n\\end{split}\n\\end{equation}\n\nNote that this minimization problem is nonlinear. To simplify the computation, we assume that $w'$, $h'$ are very large, that is, the grid $G$ is sufficiently dense. Then, from Equation (\\ref{eqt:number}), we have\n\\begin{equation}\n\\begin{split}\n\\frac{p}{n} &= \\frac{1}{w'+h'+2w'h'} \\\\\n& \\approx \\frac{1}{2w'h'} \\\\\n&= \\frac{1}{2\\left(\\frac{w}{l_w}\\right) \\left(\\frac{h}{l_h}\\right)} \\\\\n&= \\frac{l_{w}l_{h}}{2wh}.\n\\end{split}\n\\end{equation}\nBy further assuming that the grid $G$ is sufficiently close to a square grid, we have $l_{w} \\approx l_{h}$. 
Then, it follows that\n\\begin{equation}\n\\frac{p}{n} \\approx \\frac{l_{w}^2}{2wh}\n\\end{equation}\nand hence\n\\begin{equation}\nl_{w} \\approx \\sqrt{\\frac{2pwh}{n}}.\n\\end{equation}\nSimilarly,\n\\begin{equation}\nl_{h} \\approx \\sqrt{\\frac{2pwh}{n}}.\n\\end{equation}\n\nTo satisfy the integral constraints for $w'$ and $h'$, we make use of the above approximations and set\n\\begin{equation}\nh' = h_0' := \\left\\lfloor \\frac{h}{\\sqrt{\\frac{2pwh}{n}}} \\right\\rfloor = \\left\\lfloor \\sqrt{\\frac{nh}{2pw}} \\right\\rfloor.\n\\end{equation}\nSimilarly, we set\n\\begin{equation}\nw' = w_0' := \\left\\lfloor \\frac{w}{\\sqrt{\\frac{2pwh}{n}}} \\right\\rfloor = \\left\\lfloor \\sqrt{\\frac{nw}{2ph}} \\right\\rfloor.\n\\end{equation}\nFinally, we take\n\\begin{equation}\nl_{h} = \\frac{h}{h_0'}\n \\end{equation}\nand\n\\begin{equation}\nl_{w} = \\frac{w}{w_0'}.\n \\end{equation}\n\nTo summarize, with the abovementioned strategy for the feature point extraction, we obtain a set of sparse feature points which approximates the segment boundaries $\\mathcal{B}$. Specifically, given the inputs $p$ and $n$, the rectangular grid $G$ we introduce leads to approximately $n$ regularly-extracted sparse feature points. An illustration of the sparse feature extraction scheme is shown in Figure \\ref{fig:delaunay} (left). In our experiments, $p$ is set to be 0.2. A denser triangulated representation can be achieved by increasing the value of $p$.\n\n\\bigbreak\n\n\\subsubsection{Adding landmark points to the vertex set of the desired coarse triangulation}\nThis step is only required when our {\\em TRIM} algorithm is used for landmark-constrained registration. For accurate landmark-constrained registration, it is desirable to include the landmark points in the vertex set of the coarse representations of the input image $I$. One of the most important features of our coarse triangulation approach is that it allows registration with exact landmark constraints on a coarse triangular representation. By contrast, the regular grid-based registration can only be achieved on very dense rectangular grid domains in order to reduce the numerical errors.\n\nWith the abovementioned advantage of our approach, we can freely add a set of landmark points $\\mathcal{P}_{LM}$ to the set of sparse features $\\mathcal{P}$ extracted by the previous procedure. In other words, the landmark points are now considered as a part of our coarse triangulation vertices:\n\\begin{equation}\n\\mathcal{P} = \\bigcup_i \\bigcup_j \\{p_{i,j}^1, p_{i,j}^2, m_{i,j}^1, m_{i,j}^2\\} \\cup \\mathcal{P}_{LM}.\n\\end{equation}\nThen, the landmark-constrained registration of images can be computed by the existing feature-matching techniques for triangular meshes. The existing feature detection approaches such as \\cite{harris88} and \\cite{Lowe04} can be applied for obtaining the landmark points.\n\n\\bigbreak\n\n\\subsubsection{Computing a Delaunay triangulation}\nIn the final step, we construct a triangulation based on the set $\\mathcal{P}$ of feature points. Among all triangulation schemes, the Delaunay triangulation method is chosen since the triangles created by the Delaunay triangulations are more regular. More specifically, if $\\alpha$ and $\\beta$ are two angles opposite to a common edge in a Delaunay triangulation, then they must satisfy the inequality\n\\begin{equation}\n \\alpha + \\beta \\leq \\pi.\n\\end{equation}\n\nIn other words, Delaunay triangulations avoid the formation of sharp and irregular triangles. 
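The two remaining computational ingredients, namely choosing the grid size from the inputs $p$ and $n$ and triangulating the feature point set $\\mathcal{P}$, can be prototyped as follows. This is only a sketch using NumPy and SciPy as an assumed toolchain (the experiments reported later use a MATLAB implementation), and the function names are ours.\n\\begin{verbatim}\nimport math\nimport numpy as np\nfrom scipy.spatial import Delaunay\n\ndef grid_cell_size(p, n, w, h):\n    # approximately square cells from the sparse ratio p and the\n    # desired number of feature points n, cf. the derivation above\n    h_rows = max(1, math.floor(math.sqrt(n * h / (2.0 * p * w))))\n    w_cols = max(1, math.floor(math.sqrt(n * w / (2.0 * p * h))))\n    return w / w_cols, h / h_rows          # (l_w, l_h)\n\ndef triangulate(feature_points):\n    # feature_points: list of (x, y) vertices, i.e. the set P,\n    # optionally augmented with landmark points\n    pts = np.asarray(feature_points, dtype=float)\n    tri = Delaunay(pts)                    # Delaunay triangulation of P\n    return pts, tri.simplices              # (m, 3) vertex index triples\n\\end{verbatim}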
Note that the regularity not only enhances the visual quality of the resulting triangulation but also leads to a more stable approximation of the derivatives on the triangles when applying various registration schemes. Therefore, we compute a Delaunay triangulation on the set $\\mathcal{P}$ of feature points for achieving the ultimate triangulation $\\mathcal{T}$. An illustration of the construction of the Delaunay triangulations is shown in Figure \\ref{fig:delaunay}.\n\n\\begin{figure*}[t]\n \\centering\n \\includegraphics[width=0.23\\textwidth]{mnm_sparse_point.png} \\ \\\n \\includegraphics[width=0.23\\textwidth]{mnm_mesh.png} \\ \\\n \\includegraphics[width=0.23\\textwidth]{mnm_mesh_color.png}\n \\caption{An illustration of extracting the vertices and computing a Delaunay triangulation. Left: the points obtained by the feature extraction step from Figure \\ref{fig:pso}. Middle: a Delaunay triangulation generated on the feature points. Right: the triangulation with a color approximated on each triangle.}\n \\label{fig:delaunay}\n\\end{figure*}\n\nThese 6 steps complete our {\\em TRIM} algorithm as summarized in Algorithm \\ref{algorithm:triangulation}.\n\n\\begin{algorithm}[h]\n\\KwIn{An image $I$, the desired number of image intensity levels $l$ for segmentation, the desired number of feature points $n$, the sparse ratio $p$.}\n\\KwOut{A coarse triangulation $\\mathcal{T}$ that captures the main features of the image.}\n\\BlankLine\n Subsample the input image $I$ to a suitable size and denote the result by $\\tilde{I}$\\;\n Apply an unsharp masking on the subsampled image $\\tilde{I}$ and denote the result by $\\bar{I}$\\;\n Apply the PSO segmentation for compressing the color space of $\\bar{I}$ to $l$ intensity levels, and extract boundaries $\\mathcal{B}$ of the segments\\;\n Extract a set of sparse feature points $\\mathcal{P}$ from the segment boundaries $\\mathcal{B}$ based on the parameters $n$ and $p$\\;\n Add a set of extra landmark points $\\mathcal{P}_{LM}$ to $\\mathcal{P}$ if necessary\\;\n Compute a Delaunay triangulation $\\mathcal{T}$ on the sparse feature points $\\mathcal{P}$.\n\\caption{Our proposed {\\em TRIM} algorithm for triangulating an image.}\n\\label{algorithm:triangulation}\n\\end{algorithm}\n\nIt is noteworthy that our proposed {\\em TRIM} algorithm significantly trims down high resolution images without distorting their important geometric features. Experimental results are shown in Section \\ref{experiment} to demonstrate the effectiveness of the {\\em TRIM} algorithm.\n\n\\subsection{Simplification of the registration using the coarse triangulation}\nWith the above triangulation algorithm for images, we can simplify the image registration problem to a mapping problem between triangulated surfaces. Many conventional image registration approaches are hindered by long computational times and the limited accuracy of the initial maps. With the new strategy, it is easy to obtain a highly efficient and reasonably accurate registration of images. Our registration result can serve as a high quality initial map for various algorithms.\n\nIn this work, we apply our proposed {\\em TRIM} algorithm for landmark-aligned quasi-conformal image registration. Ideally, a conformal registration is desired, as conformal mappings preserve angles and hence the local geometry of the images. However, with the presence of landmark constraints, conformal mappings may not exist. 
We turn to consider quasi-conformal mappings, a class of mappings which is closely related to conformal mappings. Mathematically, a \\emph{quasi-conformal mapping} $f: \\mathbb{C} \\to \\mathbb{C}$ satisfies the Beltrami equation\n\\begin{equation}\n\\frac{\\partial f}{\\partial \\bar{z}} = \\mu(z) \\frac{\\partial f}{\\partial z},\n\\end{equation}\nwhere $\\mu$ is a complex-valued function with sup norm less than 1. $\\mu$ is called the \\emph{Beltrami coefficient} of $f$. Intuitively, a conformal mapping maps infinitesimal circles to infinitesimal circles, while a quasi-conformal mapping maps infinitesimal circles to infinitesimal ellipses (see Figure \\ref{fig:beltrami}). Readers are referred to \\cite{Gardiner00} for more details.\n\n\\begin{figure}[t]\n \\centering\n \\includegraphics[width=0.45\\textwidth]{beltrami.png}\n \\caption{An illustration of quasi-conformal mappings. The maximal magnification and shrinkage are determined by the Beltrami coefficient $\\mu$ of the mappings.}\n \\label{fig:beltrami}\n\\end{figure}\n\nIn \\cite{Lam14}, Lam and Lui proposed the quasi-conformal landmark registration (QCLR) algorithm for triangular meshes. In this work, we apply the QCLR algorithm on our coarse triangulations of images. More explicitly, to compute a registration mapping $f:I_1 \\to I_2$ between two images $I_1$ and $I_2$ with prescribed point correspondences\n\\begin{equation} \\label{eqt:constraint}\np_i \\longleftrightarrow q_i, i = 1,2,\\cdots,n,\n\\end{equation}\nwe first apply our proposed {\\em TRIM} algorithm and obtain a coarse triangulation $\\mathcal{T}_1$ on $I_1$. Here, we include the feature points $\\{p_i\\}_{i=1}^n$ in the generation of the coarse triangulation, as described in the fifth step of the {\\em TRIM} algorithm. Then, instead of directly computing $f$, we can solve for a map $\\tilde{f}: \\mathcal{T}_1 \\to I_2$. Since the problem size is significantly reduced under the coarse triangulation, the computation for $\\tilde{f}$ is much more efficient than that for $f$.\n\nThe QCLR algorithm \\cite{Lam14} aims to compute the desired quasi-conformal mapping $\\tilde{f}$ by minimizing the energy functional\n\\begin{equation} \\label{eqt:qclr}\nE_{LM}(\\nu) = \\int_{\\mathcal{T}_1} | \\nabla \\nu|^2 + \\alpha \\int_{\\mathcal{T}_1} |\\nu|^2\n\\end{equation}\nsubject to\n\\begin{enumerate}[(i)]\n \\item $\\tilde{f}(p_i) = q_i$ for all $i = 1,2,\\cdots,n$,\n \\item $\\|\\nu\\|_{\\infty} <1$, and\n \\item $\\nu = \\mu(\\tilde{f})$.\n\\end{enumerate}\n\nTo solve the above minimization problem (\\ref{eqt:qclr}), the QCLR algorithm makes use of the penalty splitting method and minimizes\n\n\\begin{equation}\n E_{LM}^{split}(\\nu,\\tilde{f}) = \\int_{\\mathcal{T}_1} |\\nabla \\nu|^2 + \\alpha \\int_{\\mathcal{T}_1} |\\nu|^2 + \\rho \\int_{\\mathcal{T}_1} |\\nu - \\mu(\\tilde{f})|^2\n\\end{equation}\nsubject to\n\\begin{enumerate}[(i)]\n \\item $\\tilde{f}(p_i) = q_i$ for all $i = 1,2,\\cdots,n$ and\n \\item $\\|\\nu\\|_{\\infty} <1$.\n\\end{enumerate}\nThe QCLR algorithm then alternately minimizes the energy $E_{LM}^{split}$ over one of $\\nu$ and $\\tilde{f}$ while fixing the other one.\n\nSpecifically, for computing $\\tilde{f}_n$ while fixing $\\nu_n$ and the landmark constraints, the QCLR algorithm applies the Linear Beltrami Solver proposed by Lui et al.~\\cite{Lui13}. 
On the other hand, for computing $\\nu_{n+1}$ while fixing $\\tilde{f}_n$, by considering the Euler-Lagrange equation, it suffices to solve\n\\begin{equation}\n (-\\Delta + 2 \\alpha I + 2\\rho I) \\nu_{n+1} = 2\\rho \\mu(\\tilde{f}_n).\n\\end{equation}\n\nFrom $\\nu_{n+1}$, one can compute the associated quasi-conformal mapping $\\tilde{f}_{n+1}$. However, note that $\\tilde{f}_{n+1}$ may not satisfy the hard landmark constraints \\ref{eqt:constraint} anymore. To enforce the landmark constraints, instead of directly using $\\tilde{f}_{n+1}$ as the updated result, the QCLR algorithm updates $\\nu_{n+1}$ by\n\\begin{equation}\n \\nu_{n+1} \\leftarrow \\nu_{n+1} + t(\\mu(\\tilde{f}_{n+1}) - \\nu_{n+1})\n\\end{equation}\nfor some small $t$. By repeating the abovementioned procedures, we achieve a landmark-constrained quasi-conformal mapping $\\tilde{f}$. For more details of the QCLR algorithm, readers are referred to \\cite{Lam14}.\n\nAfter computing the quasi-conformal mapping $\\tilde{f}$ on the coarse triangulation, we apply an interpolation to retrieve the fine details of the registration. Since the triangulations created by our proposed {\\em TRIM} algorithm preserves the important geometric features and prominent straight lines of the input image, the details of the registration results can be accurately interpolated. Moreover, since the coarse triangulation largely simplifies the input image and reduces the problem size, the computation is largely accelerated.\n\nThe registration procedure is summarized in Algorithm \\ref{algorithm:registration}. Experimental results are illustrated in Section \\ref{experiment} to demonstrate the significance of our coarse triangulation in the registration scheme.\n\n\\begin{algorithm}[h]\n\\KwIn{Two images or video frames $I_1$, $I_2$ to be registered, with the prescribed feature correspondences.}\n\\KwOut{A feature-matching registration mapping $f:I_1 \\to I_2$.}\n\\BlankLine\nCompute a coarse triangulation $\\mathcal{T}_1$ of $I_1$ using our proposed {\\em TRIM} algorithm (Algorithm \\ref{algorithm:triangulation}). Here, we include the prescribed feature points on $I_1$ in the generation of the coarse triangulation $\\mathcal{T}_1$\\;\nSelect landmark correspondences of the coarse triangulation $\\mathcal{T}_1$ and the target image $I_2$. Denote the landmark points on $\\mathcal{T}_1$ and $I_2$ by $\\{p_i\\}_{i=1}^n$ and $\\{q_i\\}_{i=1}^n$ correspondingly\\;\nCompute a landmark based quasi-conformal mapping $\\tilde{f}: \\mathcal{T}_1 \\to \\mathbb{C}$ by the QCLR algorithm in \\cite{Lam14}\\;\nObtain $f$ by $\\tilde{f}$ with a bilinear interpolation between $\\mathcal{T}_j$ and $I_j$.\n\\caption{Feature-based registration via our proposed {\\em TRIM} algorithm.}\n\\label{algorithm:registration}\n\\end{algorithm}\n\n\n\\section{Experimental results} \\label{experiment}\n\nIn this section, we demonstrate the effectiveness of our proposed triangulation scheme. The algorithms are implemented using MATLAB. For solving the mentioned linear systems, the backslash operator ($\\backslash$) in MATLAB is used. The test images are courtesy of the RetargetMe dataset \\cite{Rubinstein10}. The bird image is courtesy of the first author. 
All experiments are performed on a PC with an Intel(R) Core(TM) i7-4500U CPU @1.80 GHz processor and 8.00 GB RAM.\n\\begin{figure*}[t]\n \\centering\n \\includegraphics[width=0.215\\textwidth]{bird.png} \\\n \\includegraphics[width=0.16\\textwidth]{book.png} \\\n \\includegraphics[width=0.32\\textwidth]{pencils.png} \\\n \\includegraphics[width=0.255\\textwidth]{bear_1.png}\n \\center{\\hspace{5mm} Bird \\hspace{27mm} Book \\hspace{35mm} Pencils \\hspace{45mm} Teddy}\n \\newline\n \\newline\n \\includegraphics[width=0.215\\textwidth]{bird_mesh.png} \\\n \\includegraphics[width=0.16\\textwidth]{book_mesh.png} \\\n \\includegraphics[width=0.32\\textwidth]{pencils_mesh.png} \\\n \\includegraphics[width=0.255\\textwidth]{bear_mesh.png}\n \\center{Triangulation of Bird \\hspace{3mm} Triangulation of Book \\hspace{9mm} Triangulation of Pencils \\hspace{19mm} Triangulation of Teddy}\n \\newline\n \\caption{Several images and the triangulations created by our {\\em TRIM} algorithm. Top: the input images. Bottom: the resulting triangulations. The key features of the original images are well represented in our triangulations, and the regions with similar color can be represented by coarse triangulations. }\n \\label{fig:triangulation}\n\\end{figure*}\n\n\\begin{figure*}[t]\n \\centering\n \\includegraphics[width=0.24\\textwidth]{bee.png}\n \\includegraphics[width=0.24\\textwidth]{bee_mesh.png}\n \\includegraphics[width=0.24\\textwidth]{bee_mesh_color.png}\n \\includegraphics[width=0.24\\textwidth]{bee_dmesh.png}\n \\caption{A bee image and the triangulations created by our {\\em TRIM} algorithm and DMesh \\cite{dmesh}. Left: The input image. Middle left: The coarse triangulation created by {\\em TRIM}. Middle right: Our {\\em TRIM} coarse triangulation with a color approximated on each triangle. Right: The triangulation by DMesh \\cite{dmesh}.}\n \\label{fig:triangulation_bee}\n\\end{figure*}\n\n\\begin{figure*}[t]\n \\centering\n \\includegraphics[width=0.24\\textwidth]{butterfly.png}\n \\includegraphics[width=0.24\\textwidth]{butterfly_mesh.png}\n \\includegraphics[width=0.24\\textwidth]{butterfly_mesh_color.png}\n \\includegraphics[width=0.24\\textwidth]{butterfly_dmesh.png}\n \\caption{An butterfly image and the triangulations created by our {\\em TRIM} algorithm and DMesh \\cite{dmesh}. Left: The input image. Middle left: The coarse triangulation created by {\\em TRIM}. Middle right: Our {\\em TRIM} coarse triangulation with a color approximated on each triangle. Right: The triangulation by DMesh \\cite{dmesh}.}\n \\label{fig:triangulation_butterfly}\n\\end{figure*}\n\n\\begin{figure*}[t]\n \\centering\n \\includegraphics[width=0.27\\textwidth]{baseball.png} \\\n \\includegraphics[width=0.27\\textwidth]{baseball_mesh_color.png} \\\n \\includegraphics[width=0.27\\textwidth]{baseball_dmesh.png}\n \\newline\n \\newline\n \\includegraphics[width=0.27\\textwidth]{eagle.png} \\\n \\includegraphics[width=0.27\\textwidth]{eagle_triangulation.png} \\\n \\includegraphics[width=0.27\\textwidth]{eagle_dmesh.png}\n \\newline\n \\caption{Two more examples created by our {\\em TRIM} algorithm and Dmesh \\cite{dmesh}. Our coarse triangulations capture the important features and closely resemble the original images. Left: The input image. Middle: The coarse triangulation created by {\\em TRIM}. 
Right: The triangulation by DMesh \\cite{dmesh}.}\n \\label{fig:triangulation_more}\n\\end{figure*}\n\n\\begin{figure*}[t]\n \\centering\n \\includegraphics[width=0.24\\textwidth]{bear_1.png} \\\n \\includegraphics[width=0.24\\textwidth]{bear_trim.png} \\\n \\includegraphics[width=0.24\\textwidth]{bear_noisy.png} \\\n \\includegraphics[width=0.24\\textwidth]{bear_noisy_trim.png}\n \\newline\n \\newline\n \\includegraphics[width=0.24\\textwidth]{cone_1.png} \\\n \\includegraphics[width=0.24\\textwidth]{cone_trim.png} \\\n \\includegraphics[width=0.24\\textwidth]{cone_noisy.png} \\\n \\includegraphics[width=0.24\\textwidth]{cone_noisy_trim.png}\n \\newline\n \\caption{Two triangulation examples by our {\\em TRIM} algorithm for noisy images. Left: The noise-free images. Middle left: The triangulations computed by {\\em TRIM} based on the noise-free images. Middle right: The noisy images. Right: The triangulations computed by {\\em TRIM} based on the noisy images. It can be observed that the important features of the images are preserved even for noisy images.}\n \\label{fig:noisy}\n\\end{figure*}\n\n\\subsection{The performance of our proposed {\\em TRIM} triangulation scheme}\nIn this subsection, we demonstrate the effectiveness of our triangulation scheme by various examples.\n\nOur proposed algorithm is highly content-aware. Specifically, regions with high similarities or changes in color on an image can be easily recognized. As a result, the triangulations created faithfully preserve the important features by a combination of coarse triangles with different sizes. Some of our triangulation results are illustrated in Figure \\ref{fig:triangulation}. For better visualizations, we color the resulting triangulations by mean of the original colors of corresponding patches. In Figure \\ref{fig:triangulation_bee}, we apply our {\\em TRIM} algorithm on a bee image. It can be observed that the regions of the green background can be effectively represented by coarser triangulations, while the region of the bee and flowers with apparent color differences is well detected and represented by a denser triangulation. Figure \\ref{fig:triangulation_butterfly} shows another example of our triangulation result. The butterfly and the flowers are well represented in our triangulation result. The above examples demonstrate the effectiveness of our triangulation scheme for representing images in a simplified but accurate way. Some more triangulation examples created by our {\\em TRIM} algorithm are shown in Figure \\ref{fig:triangulation_more}. Figure \\ref{fig:noisy} shows some triangulation examples for noisy images. It can be observed that our {\\em TRIM} algorithm can effectively compute content-aware coarse triangulations even for noisy images.\n\nWe have compared our algorithm with the DMesh triangulator \\cite{dmesh} in Figure \\ref{fig:triangulation_bee}, Figure \\ref{fig:triangulation_butterfly} and Figure \\ref{fig:triangulation_more}. It can be observed that our triangulation scheme outperforms DMesh \\cite{dmesh} in terms of the triangulation quality. Our results can better capture the important features of the images. Also, the results by DMesh \\cite{dmesh} may contain unwanted holes while our triangulation results are always perfect rectangles. The comparisons reflect the advantage of our coarse triangulation scheme.\n\nThen, we evaluate the efficiency of our triangulation scheme for various images. Table \\ref{table:trim} shows the detailed statistics. 
The relationship between the target coarse triangulation size and the computational time is illustrated in Figure \\ref{fig:triangulation_time}. Even for high resolution images, the computational time for the triangulation is only around 10 seconds. It is noteworthy that our {\\em TRIM} algorithm significantly compresses the high resolution images as coarse triangulations with only several thousand triangles.\n\n\\begin{table*}[h!]\n\\centering\n\\begin{tabular}{|c|c|c|c|c|}\n\\hline\nImage & Size & Triangulation time (s) & \\# of triangles & Compression rate \\\\ \\hline\nSurfer & 846 $\\times$ 421 & 5.78 & 1043 & 0.1536\\% \\\\ \\hline\nHelicopter & 720 $\\times$ 405 & 5.78 & 1129 & 0.1989\\%\n\\\\ \\hline\nBee & 640 $\\times$ 425 & 7.13 & 1075 & 0.2029\\% \\\\ \\hline\nBird & 1224 $\\times$ 1224 & 7.04 & 1287 & 0.0859\\% \\\\ \\hline\nButterfly & 1024 $\\times$ 700 & 8.00 & 1720 & 0.1232\\% \\\\ \\hline\nBook & 601 $\\times$ 809 & 8.38 & 1629 & 0.3350\\% \\\\ \\hline \nBaseball & 410 $\\times$ 399 & 7.85 & 2315 & 0.7201\\% \\\\ \\hline\nBear & 450 $\\times$ 375 & 7.48 & 2873 & 0.8652\\% \\\\ \\hline\nPencil & 615 $\\times$ 410 & 8.93 & 2633 & 0.5838\\% \\\\ \\hline\nTiger & 2560 $\\times$ 1600 & 13.91 & 3105 & 0.0414\\% \\\\ \\hline\nEagle & 600 $\\times$ 402 & 13.27 & 1952 & 0.4299\\% \\\\ \\hline\n\\end{tabular}\n\\bigbreak\n\\caption{Performance of our {\\em TRIM} algorithm. Here, the compression rate is defined by $\\frac{\\text{\\# of triangle nodes}}{\\text{\\# of pixels}} \\times 100\\%$.}\n\\label{table:trim}\n\\end{table*}\n\n\\begin{figure}[t]\n \\centering\n \\includegraphics[width=0.48\\textwidth]{time_graph.png}\n \\caption{The relationship of the desired coarse triangulation size and the computational time of our proposed {\\it TRIM} algorithm.}\n \\label{fig:triangulation_time}\n\\end{figure}\n\nIt is noteworthy that the combination of the steps in our {\\em TRIM} algorithm is important for achieving a coarse triangulation. More specifically, if certain steps in our algorithm are removed, the triangulation result will become unsatisfactory. Figure \\ref{fig:segmentation} shows two examples of triangulations created by our entire {\\em TRIM} algorithm and by our algorithm with the segmentation step excluded. It can be easily observed that without the segmentation step, the resulting triangulations are extremely dense and hence undesirable for simplifying further computations. By contrast, the number of triangles produced by our entire {\\em TRIM} algorithm is significantly reduced. The examples highlight the importance of our proposed combination of steps in the {\\em TRIM} algorithm for content-aware coarse triangulation.\n\n\\begin{figure}[t]\n \\centering\n \\includegraphics[width=0.22\\textwidth]{bee_seg_923.png}\n \\includegraphics[width=0.22\\textwidth]{bee_no_seg_3612.png}\n \\includegraphics[width=0.22\\textwidth]{bird_seg_1496.png}\n \\includegraphics[width=0.22\\textwidth]{bird_no_seg_8685.png}\n \\caption{The triangulations created by our entire {\\em TRIM} algorithm (left) and by the algorithm without the segmentation step (Right). It can be observed that the segmentation step is crucial for achieving a coarse triangulation. 
Number of triangles produced: 923 (top left), 3612 (top right), 1496 (bottom left), 8685 (bottom right).}\n \\label{fig:segmentation}\n\\end{figure}\n\\subsection{The improvement in efficiency of image registration by our {\\em TRIM} algorithm}\nIn this subsection, we demonstrate the effectiveness of our proposed triangulation-based method for landmark-based image registration. In our experiments, the feature points on the images are extracted using the algorithm by Harris and Stephens \\cite{harris88} as landmark constraints. For simplifying the image registration problems, one conventional approach is to make use of coarse regular grids followed by interpolation. It is natural to ask whether our proposed coarse triangulation-based method produces better results. In Figure \\ref{fig:registration_teddy}, we consider a stereo registration problem of two scenes. With the prescribed feature correspondences, we compute the feature-endowed stereo registration via the conventional grid-based approach, the DMesh triangulation approach \\cite{dmesh} and our proposed {\\it TRIM} method. For the grid-based approach and the DMesh triangulation approach \\cite{dmesh}, we take the mesh vertices nearest to the prescribed feature points on the source image as source landmarks. For our proposed {\\it TRIM} method, as the landmark vertices are automatically embedded in the content-aware coarse triangulation, the source landmarks are exactly the feature points detected by the method in \\cite{harris88}.\n\nIt can be observed that our triangulation-based approach produces a much more natural and accurate registration result when compared with both the grid-based approach and the DMesh triangulation approach. In particular, sharp features such as edges are well preserved using our proposed method. By contrast, the edges are seriously distorted in the other two methods. In addition, the geometry of the background in the scenes are well retained via our {\\em TRIM} method but not the other two methods. The higher accuracy of the registration result by our approach can also be visualized by the intensity difference plots. Our triangulation-based approach results in an intensity difference plot with more dark regions than the other two approaches. The advantage of our method over the other two methods is attributed to the geometry preserving feature of our {\\it TRIM} algorithm, in the sense that the triangulations created by {\\it TRIM} are more able to fit into complex features and have more flexibilities in size than regular grids. Also, the triangulations created by DMesh \\cite{dmesh} do not capture the geometric features and hence the registration results are unsatisfactory. They reflect the significance of our content-aware {\\it TRIM} triangulation scheme in computing image registration. Another example is illustrated in Figure \\ref{fig:registration_cone}. Again, it can be easily observed that our proposed {\\em TRIM} triangulation approach leads to a more accurate registration result.\n\nTo highlight the improvement in the efficiency by our proposed {\\em TRIM} algorithm, Table \\ref{table:registration} records the computational time and the error of the registration via the conventional grid-based approach and our {\\it TRIM} triangulation-based approach. It is noteworthy that our proposed coarse triangulation-based method significantly reduces the computational time by over 85\\% on average when compared with the traditional regular grid-based approach. 
To quantitatively assess the quality of the registration results, we define the matching accuracy by\n\\begin{equation}\n\\frac{\\text{\\# pixels s.t.} \\|\\text{final intensity - original intensity}\\|_1 < \\epsilon}{\\text{Total \\# of pixels}} \\times 100\\%.\n\\end{equation}\nThe threshold $\\epsilon$ is set to be $0.2$ in our experiments. Our triangulation-based method produces registration results with the matching accuracy higher than that of the regular grid-based method by 3\\% on average. The experimental results reflect the advantages of our {\\it TRIM} content-aware coarse triangulations for image registration\n\n\\begin{figure*}[t]\n \\centering\n \\includegraphics[width=0.27\\textwidth]{bear_1.png} \\\n \\includegraphics[width=0.27\\textwidth]{bear_2.png} \\\n \\includegraphics[width=0.27\\textwidth]{bear_registration.png}\n \\newline\n \\newline\n \\includegraphics[width=0.27\\textwidth]{bear_grid_result.png} \\\n \\includegraphics[width=0.27\\textwidth]{dmesh_bear.png} \\\n \\includegraphics[width=0.27\\textwidth]{bear_result.png}\n \\newline\n \\newline\n \\includegraphics[width=0.27\\textwidth]{bear_difference_map_grid.png} \\\n \\includegraphics[width=0.27\\textwidth]{bear_difference_map_dmesh.png} \\\n \\includegraphics[width=0.27\\textwidth]{bear_difference_map_TRIM.png}\n \\newline\n \\caption{Stereo landmark registration using different algorithms. Top left: The source image. Top middle: The target image. Top Right: The prescribed feature correspondences. Middle left: The registration result by the dense grid-based approach (4 pixels per grid). Middle: The registration result via DMesh \\cite{dmesh}. Middle right: The registration result by our proposed coarse triangulation-based method. Bottom left: The intensity difference after the registration by the dense grid-based approach. Bottom middle: The intensity difference after the registration via DMesh \\cite{dmesh}. Bottom right: The intensity difference after the registration by our proposed coarse triangulation-based method. Because of the greater flexibility of our triangulation scheme, the accuracy of registration by our method is higher than that of the dense grid-based approach.}\n \\label{fig:registration_teddy}\n\\end{figure*}\n\n\\begin{figure*}[t]\n \\centering\n \\includegraphics[width=0.27\\textwidth]{cone_1.png} \\\n \\includegraphics[width=0.27\\textwidth]{cone_2.png} \\\n \\includegraphics[width=0.27\\textwidth]{cone_registration.png}\n \\newline\n \\newline\n \\includegraphics[width=0.27\\textwidth]{cone_grid_result.png} \\\n \\includegraphics[width=0.27\\textwidth]{dmesh_cone.png} \\\n \\includegraphics[width=0.27\\textwidth]{cone_result.png}\n \\newline\n \\newline\n \\includegraphics[width=0.27\\textwidth]{cone_difference_map_grid.png} \\\n \\includegraphics[width=0.27\\textwidth]{cone_difference_map_dmesh.png} \\\n \\includegraphics[width=0.27\\textwidth]{cone_difference_map_TRIM.png}\n \\newline\n \\caption{Stereo landmark registration using different algorithms. Top left: The source image. Top middle: The target image. Top Right: The prescribed feature correspondences. Middle left: The registration result by the dense grid-based approach (4 pixels per grid). Middle: The registration result via DMesh \\cite{dmesh}. Middle right: The registration result by our proposed coarse triangulation-based method. Bottom left: The intensity difference after the registration by the dense grid-based approach. Bottom middle: The intensity difference after the registration via DMesh \\cite{dmesh}. 
Bottom right: The intensity difference after the registration by our proposed coarse triangulation-based method.}\n \\label{fig:registration_cone}\n\\end{figure*}\n\\begin{table*}[h!]\n\\centering\n\\begin{tabular}{|c|c|c|c|c|c|c|}\n\\hline\nImages & Size & \\multicolumn{4}{c|}{Registration} & Time saving rate \\\\ \\cline{3-6}\n& & \\multicolumn{2}{c|}{Via regular grids} & \\multicolumn{2}{c|}{Via {\\em TRIM}} & \\\\ \\cline{3-6}\n& & Time (s) & Matching accuracy (\\%) & Time (s) & Matching accuracy (\\%) & \\\\ \\hline\nBear & 450 $\\times$ 375 & 102.3 & 59.5 & 13.8 & 70.7 & 13.4897\\% \\\\ \\hline\nCone & 450 $\\times$ 375 & 108.7 & 51.3 & 28.2 & 61.2 & 25.9430\\%\\\\ \\hline\nCloth& 1252 $\\times$ 1110 & 931.0 & 70.7 & 36.0 & 75.4 & 3.8668\\% \\\\ \\hline\nBook & 1390 $\\times$ 1110 & 1204.5 & 59.0 & 51.0 & 63.0 & 4.2341\\% \\\\ \\hline\nBaby & 1390 $\\times$ 1110 & 94.3 & 62.3 & 11.0 & 62.3 & 11.6649\\% \\\\ \\hline\n\n\\end{tabular}\n\n\\bigbreak\n\\caption{The performance of feature-based image registration via our proposed {\\em TRIM} coarse triangulation method and the ordinary coarse grids. Here, the time saving rate is defined by $\\frac{\\text{Registration time via {\\em TRIM}}}{\\text{Registration time via regular grids}} \\times 100\\%$.\n\\label{table:registration}\n\\end{table*}\n\n\n\\section{Conclusion and future work} \\label{conclusion}\nIn this paper, we have established a new framework for accelerating image registration. Using our proposed {\\em TRIM} algorithm, a content-aware coarse triangulation is built on a high resolution image. Then, we can efficiently compute a landmark-based registration using the coarse triangulation. From the coarse registration result, we can obtain the desired fine registration result. Our proposed method is advantageous for a large variety of registration applications with a significant improvement of the computational efficiency and registration accuracy. Hence, our proposed method can serve as an effective initialization for various registration algorithms. In the future, we aim to extend our proposed {\\em TRIM} algorithm in the following two directions.\n\n\\bigbreak\n\n\\subsubsection{n-dimensional registration via {\\em TRIM}}\nOur work can be naturally extended for higher dimensional registration problems. For accelerating $n$-dimensional volumetric registration, one can consider constructing a coarse representation using $n$-simplices. Analogous to our 2-dimensional {\\em TRIM} algorithm, the $n$-dimensional coarse representation should also result in a significant improvement of the efficiency for $n$-dimensional registration problems.\n\n\\bigbreak\n\n\\subsubsection{{\\em TRIM} image compression format}\nRecall that in our proposed {\\em TRIM} algorithm, there are two important parameters, namely, the number of intensity threshold levels $l$ in the image segmentation step and the sparse ratio $p$ for controlling the fineness of the triangular representation. By increasing the values of these two parameters, an image intensity representation becomes more accurate. Hence, given a compression error threshold, we aim to develop an iterative {\\em TRIM} scheme for continuously increasing the two values until the intensity difference between the compressed image and the original image is smaller than the prescribed error threshold. This provides us with a new image compression format using {\\em TRIM}. 
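A minimal sketch of the envisioned iterative scheme is given below. The refinement strategy (incrementing $l$ by one and increasing $p$ by a fixed step) and the function signatures are hypothetical choices of ours; the compression routine and the error measure are passed in as user-supplied functions.\n\\begin{verbatim}\ndef iterative_trim(image, compress_fn, error_fn, threshold, l=4, p=0.2):\n    # compress_fn(image, l, p): returns a TRIM-style reconstruction\n    # error_fn(image, reconstruction): intensity difference measure\n    # both are hypothetical, user-supplied callables\n    while True:\n        reconstruction = compress_fn(image, l, p)\n        if error_fn(image, reconstruction) < threshold:\n            return reconstruction, l, p\n        l += 1                      # allow more intensity levels\n        p = min(1.0, p + 0.05)      # allow a denser triangulation\n\\end{verbatim}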
Since the {\\em TRIM} compression format only consists of the coarse triangular mesh vertices and the corresponding colors, the compression size is much smaller than that of the existing image compression formats.\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\\label{sec:intro}\n\nNear term quantum devices have a small number of noisy qubits that can support execution of shallow depth circuits (i.e., those with few operational cycles) only. Variational Quantum Algorithms (VQA) aim to leverage the power as well as the limitations imposed by these devices to solve problems of interest such as combinatorial optimization \\cite{farhi2014quantum,wang2018quantum,hadfield2019quantum,cook2020quantum}, quantum chemistry \\cite{mcclean2016theory, grimsley2019adaptive}, and quantum machine learning \\cite{10.1007\/978-3-030-50433-5_45,biamonte2017quantum,torlai2020machine}. VQA divides the entire computation into functional modules, and outsources some of these modules to classical computers. The general framework of VQA can be divided into four steps: (i) encode the problem into a parameterized quantum state $\\ket{\\psi(\\theta)}$ (called the ansatz), where $\\theta = \\{\\theta_1,\\theta_2, \\hdots, \\theta_k\\}$ are $k$ parameters; (ii) prepare and measure the ansatz in a quantum computer, and determine the value of some objective function $C(\\theta)$ (which depends on the problem at hand) from the measurement outcome; (iii) in a classical computer, optimize the set of parameters to find a better set $\\theta' = \\{\\theta'_1,\\theta'_2, \\hdots, \\theta'_k\\}$ such that it minimizes (or maximizes) the objective function; (iv) repeat steps (ii) and (iii) with the new set of parameters until convergence.\n\nQuantum Approximate Optimization Algorithm (QAOA) is a type of VQA that focuses on finding good approximate solutions to combinatorial optimization problems. It has been studied most widely for finding the maximum cut of a (weighted or unweighted) graph (called the Max-Cut problem) \\cite{farhi2014quantum}. For this problem, given a graph $G = (V,E)$ where $V$ is the set of vertices and $E$ is the set of edges, the objective is to partition $V = V_1 \\cup V_2$, such that $V_1 \\cap V_2 = \\phi$, and the number of edges crossing the partition is maximized. Throughout this paper, we shall consider {\\it connected graphs} with $|V| = n$ and $|E| = m$, but the results can be easily extended to disconnected graphs as well.\n\nIn the initial algorithm proposed by Farhi \\cite{farhi2014quantum} for the Max-Cut problem, a depth-$p$ QAOA consists of $p \\geq 1$ layers of alternating operators on the initial state $\\ket{\\psi_0}$ \n\\begin{equation}\n\\label{eq:ansatz}\n \\ket{\\psi(\\gamma,\\beta)} = ( \\displaystyle \\Pi_{l = 1}^{p} e^{(-i\\beta_l H_M)} e^{(-i\\gamma_l H_P)}) \\ket{\\psi_0}\n\\end{equation}\n\nwhere $H_P$ and $H_M$ are called the Problem and Mixer Hamiltonian respectively, and $\\gamma = \\{\\gamma_1, \\gamma_2, \\hdots, \\gamma_p\\}$ and $\\beta = \\{\\beta_1, \\beta_2, \\hdots, \\beta_p\\}$ are the parameters. It is to be noted that the depth $p$ of the QAOA is not related to the depth of the quantum circuit realizing the algorithm. 
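For a small number of qubits, the state in Eq.~(\\ref{eq:ansatz}) can be formed directly with dense matrix exponentials. The following NumPy and SciPy sketch is only a reference implementation of the ansatz structure for generic $H_P$ and $H_M$ supplied as matrices; it is not the circuit construction discussed below, and it scales exponentially with the number of qubits.\n\\begin{verbatim}\nimport numpy as np\nfrom scipy.linalg import expm\n\ndef qaoa_state(H_P, H_M, gammas, betas):\n    # H_P, H_M: (2**n, 2**n) Hamiltonian matrices\n    # gammas, betas: length-p parameter lists, cf. the ansatz above\n    dim = H_P.shape[0]\n    psi = np.ones(dim) / np.sqrt(dim)        # |psi_0>: uniform superposition\n    for gamma, beta in zip(gammas, betas):\n        psi = expm(-1j * gamma * H_P) @ psi  # problem layer\n        psi = expm(-1j * beta * H_M) @ psi   # mixer layer\n    return psi\n\\end{verbatim}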
The problem Hamiltonian describing the Max-Cut can be represented as in Eq.~(\\ref{eq:max_cut}), where $w_{jk}$ is the weight associated with the edge $(j,k)$.\n\\begin{equation}\n\\label{eq:max_cut}\n H_P = \\frac{1}{2}\\displaystyle \\sum_{(j,k) \\in E} w_{jk} (I - Z_j Z_k)\n\\end{equation}\n\nFurthermore, the mixer Hamiltonian should be an operator that does not commute with the Problem Hamiltonian. In the traditional QAOA, the mixer Hamiltonian is $H_M = \\displaystyle \\sum_{i} X_i$.\n\nVariations to this have been studied to improve the performance of the algorithm --- such as using other mixers \\cite{bartschi2020grover,zhu2020adaptive, yu2021quantum}, training the parameters to reduce the classical optimization cost \\cite{larkin2020evaluation}, and modifying the cost function for faster convergence \\cite{barkoutsos2020improving}. In this paper we stick to the original problem and mixer hamiltonians proposed in the algorithm by Farhi et al. \\cite{farhi2014quantum}. The applicability and effectiveness of our proposed method on the modifications of this algorithm can be looked at as a follow-up work. However, our proposed methods optimize the circuit corresponding to the problem hamiltonian. Since most of the modifications suggested in the literature aim to design more efficient mixers, our proposed optimization should be applicable on those as well.\n\nThe realization of the QAOA circuit for Max-cut requires two CNOT gates for each edge (details given in Sec.~\\ref{sec:ansatz}). Hardware realization of a CNOT gate is, in general, significantly more erroneous than a single qubit gate. Even in the higher end devices of IBM Quantum, such as \\textit{ibmq\\_montreal}, \\textit{ibmq\\_manhattan}, the probability of error for a single qubit gate and a CNOT gate are $\\mathcal{O}(10^{-4})$ and $\\mathcal{O}(10^{-2})$, respectively~\\cite{ibmquantum}. In other words, CNOT gates are $100$ times more likely to be erroneous than single qubit gates. Therefore, we focus primarily on reducing the number of CNOT gates in the design of QAOA ansatz for Max-cut.\n\n\\subsubsection*{Contributions of this paper}\n\nIn this paper, we\n\n\\begin{enumerate}[(i)]\n \\item propose two optimization methods for reducing the number of CNOT gates in the first layer of the QAOA ansatz based on (1) an Edge Coloring that can reduce upto $\\lfloor \\frac{n}{2} \\rfloor$ CNOT gates, and (2) a Depth First Search (DFS) that can reduce $n-1$ CNOT gates.\n \n \\item prove that there exists no method that can reduce more than $n-1$ CNOT gates while still maintaining a fidelity of 1 with the original QAOA ansatz \\cite{farhi2014quantum}.\n \n \\item show that while the Edge Coloring based optimization reduces the depth of the circuit, the DFS based method may increase the depth. 
We further analytically derive the criteria (involving the increase in the depth and the reduction in the number of CNOT gates) for which the DFS based optimization method still leads to a lower probability of error in the circuit, and show that the IBM Quantum Hardwares \\cite{ibmquantum} conform to that criteria.\n \n \n \n \\item simulate our proposed optimization methods in Qiskit~\\cite{Qiskit} with the \\textit{ibmq\\_manhattan} noise model and show that for graphs of different sparsity (Erdos-Renyi graphs with the probability of edge varying from 0.4 - 1)\n \\begin{enumerate}\n \\item the proposed reduction in the CNOT gate is still retained post transpilation\n \\item the DFS based method has lower error probability than the Edge Coloring method, which in its turn has lower error probability than the traditional QAOA ansatz.\n \\end{enumerate}\n\\end{enumerate}\n\nTherefore, for any graph $G = (V,E)$, our proposed method provides reduction in the number of CNOT gates, and hence lowers the error probability of the circuit. Although the DFS method provably surpasses the Edge Coloring method, both in terms of reduction in CNOT gates and lowering the error probability, the latter reduces the depth of the QAOA circuit, and is also used in the DFS based method to arrange the edges which do not form a part of the DFS tree.\n\nFor the rest of this paper, we consider \\emph{unweighted and connected graphs}, $i.e.$, $w_{jk} = 1$, $\\forall$ $(j,k) \\in E$. However, the circuit corresponding to the ansatz does not change if we have a weighted graph \\cite{hadfield2019quantum}. Therefore, every analysis in this paper holds for a weighted graph as well. Furthermore, the analysis of this paper will hold as it is, or with some minimal modification, for disconnected graphs as well.\n\nThe rest of the paper is organized as follows - Section~\\ref{sec:ansatz} briefly discusses the traditional QAOA ansatz design. In Section~\\ref{sec:thm} we provide the proposed optimization and the criteria for it. Section~\\ref{sec:edge_col} and ~\\ref{sec:dfs} describe two methods of optimization based on Edge Coloring and DFS respectively. We provide the respective algorithms and analyze the conditions under which each one reduces the probability of error. We present the results of our simulation in section~\\ref{sec:sim} and conclude in Section~\\ref{sec:con}.\n\n\\section{Traditional ansatz design for QAOA}\n\\label{sec:ansatz}\n\nThe objective function of a depth-$p$ QAOA for Max-Cut~\\cite{farhi2014quantum} can be expressed as\n\\begin{equation}\n\\label{eq:objective}\n \\max_{\\psi(\\gamma,\\beta)} \\bra{\\psi(\\gamma,\\beta)} H_P \\ket{\\psi(\\gamma,\\beta)}\n\\end{equation}\nwhere $\\gamma = \\{\\gamma_1, \\gamma_2, \\hdots, \\gamma_p\\}$ and $\\beta = \\{\\beta_1, \\beta_2, \\hdots, \\beta_p\\}$ are the parameters. The trial wavefunction $\\ket{\\psi(\\gamma,\\beta)}$ is called the ansatz. The QAOA ansatz has a fixed form as described in Eq.~(\\ref{eq:ansatz}). The initial state $\\ket{\\psi_0}$ is usually the equal superposition of $n$ qubits, where $n = |V|$. Note that the depth of the circuit required to prepare $\\ket{\\psi_0}$ is 1 (Hadamard gates acting simultaneously on all the qubits). Similarly, for each layer of QAOA, the operator $exp(-i \\beta_l H_M)$ can be realized by a depth one circuit of $R_x(\\beta_l)$ gates acting simultaneously on all the qubits.\n\nThe operator $exp(-i \\gamma_l H_P)$ has a more costly implementation. 
Note that\n\\begin{eqnarray*}\nexp(-i \\gamma_l H_P) = \\displaystyle \\Pi_{(j,k) \\in E} exp\\left(-i \\gamma_l \\left(\\frac{I-Z_j Z_k}{2}\\right)\\right).\n\\end{eqnarray*}\n\nThe operator $exp\\left(-i \\gamma_l \\left(\\frac{I-Z_j Z_k}{2}\\right)\\right)$ acts on each edge $(j,k)$, and is realized as shown below:\n\\begin{figure}[H]\n\\centering\n\t\\begin{quantikz}\n\t\t{q_{j}}&&\\ctrl{1} & \\qw & \\ctrl{1} & \\qw \\\\\n\t\t{q_{k}}&&\\targ{} & \\gate{R_z(2\\gamma_l)} & \\targ{} & \\qw\n\t\\end{quantikz}\n\t\\label{fig:z_jz_k}\n\\end{figure}\n\nHere, $q_j$ and $q_k$ represent qubit indices $j$ and $k$, respectively. Note that Max-Cut is a symmetric problem, and therefore, the selection of control and target from qubits $q_j$ and $q_k$ for the CNOT gate corresponding to the edge $(j,k)$ is irrelevant, $i.e.$ the operator $exp\\left(-i \\gamma_l \\left(\\frac{I-Z_j Z_k}{2}\\right)\\right)$ can be equivalently realized as $CNOT_{kj} (I_k \\otimes R_z(2\\gamma_l)_{j})$ $CNOT_{kj}$. In Fig.~\\ref{fig:depth_qaoa}(a) and (b), we show a 2-regular graph with four vertices and its corresponding QAOA circuit for $p = 1$, respectively.\n\n\\begin{figure}[htb]\n \\centering\n \\begin{subfigure}[b]{0.3\\textwidth}\n \\centering\n \\includegraphics[scale=0.35]{graph.png}\n \\caption{A 2-regular graph with four vertices}\n \\label{col}\n \\end{subfigure}\n \\hfill\n \\begin{subfigure}[b]{0.4\\textwidth}\n \\centering\n \\includegraphics[scale=0.4]{qaoa_circ.png}\n \\caption{Max-Cut QAOA circuit for $p=1$ corresponding to the graph}\n \\label{dfs}\n \\end{subfigure}\n \\caption{The Max-Cut QAOA circuit for $p=1$ corresponding to the 2-regular graph with four vertices; the values of $\\gamma$ and $\\beta$ can be arbitrary but those in this figure are the optimum values for this graph when $p = 1$}\n \\label{fig:depth_qaoa}\n\\end{figure}\n\n\n\\section{Methods for Optimized ansatz design}\n\\label{sec:thm}\n\nSome recent studies have proposed optimization methods for the circuit of the QAOA ansatz with respect to the underlying hardware architecture \\cite{alam2020circuit}. In this paper we propose two \\textit{hardware independent} methods to reduce the number of CNOT gates in the traditional QAOA ansatz. The intuition is that in the circuit realization of the operator $exp\\left(-i \\gamma_l \\left(\\frac{I-Z_j Z_k}{2}\\right)\\right)$ as $CNOT_{jk} (I_j \\otimes R_z(2\\gamma_l)_{k})$ $CNOT_{jk}$, the first CNOT gate can be removed whenever it does not make any contribution to the overall effect of the operator. Our proposed methods reduce the number of CNOT gates in the circuit irrespective of the hardware architecture, and hence are applicable to any quantum device.\n\nIn Theorem~\\ref{thm:equiv} we prescribe the condition under which the first CNOT gate is irrelevant to the effect of the said operator, and hence may be removed.\n\n\\begin{theorem}\n\\label{thm:equiv}\nLet $\\ket{\\psi}$ be an $n$-qubit state prepared in a uniform superposition (up to relative phase) over all basis states $\\ket{x_1, \\hdots, x_n}$ such that the relative phase on each basis state is a function of a subset $S \\subset$ $\\{1,2,...,n\\}$ of the $n$ qubits (and independent of the remaining qubits), $i.e.$\n\\begin{center}\n $\\ket{\\psi} = \\frac{1}{\\sqrt{2^n}}\\displaystyle \\sum_{x_1,...,x_n} e^{(i \\phi(x_S))}\\ket{x_1,...,x_n}$\n\\end{center}\nwhere $x_S = \\{x_i : i \\in S\\}$ and $\\phi(x_S)$ depicts the relative phase of each superposition state. 
For any two qubits $\\ket{j}$ and $\\ket{k}$, where $\\ket{k} \\notin S$, and for the two operators $U_1 = CNOT_{jk} (I_j \\otimes R_z(2\\gamma_l)_{k}) CNOT_{jk}$ and $U_2 = (I_j \\otimes R_z(2\\gamma_l)_{k}) CNOT_{jk}$, we have \n\\begin{center}\n $U_1\\ket{\\psi} = U_2\\ket{\\psi}$.\n\\end{center}\n\\end{theorem}\n\n\\begin{proof}\nLet us consider the action of the operators $U_1$ and $U_2$ on any edge $(j,k)$.\n\\begin{equation}\nU_1\\ket{\\psi} = CNOT_{jk} (I_j \\otimes R_z(2\\gamma_l)_{k}) (CNOT_{jk}) \\ket{\\psi} \\nonumber\n\\end{equation}\n\\begin{align}\n=&\\displaystyle \\sum_{x_1,...,x_n} CNOT_{jk} (I_j \\otimes R_z(2\\gamma_l)_{k}) (CNOT_{jk})\\nonumber \\\\\n& e^{i\\phi(x_S)} \\ket{x_1,...,x_n} \\\\\n=&\\displaystyle \\sum_{x_1,...,x_n} CNOT_{jk} (I_j \\otimes R_z(2\\gamma_l)_{k}) \\nonumber \\\\\n& e^{i \\phi(x_S)} \\ket{x_1,..,x_k'=x_j \\oplus x_k,.,x_n} \\\\\n=&\\displaystyle \\sum_{x_1,...,x_n} e^{i(\\phi(x_S)- \\gamma_l (x_j \\oplus x_k))} CNOT_{jk} \\nonumber \\\\\n& \\ket{x_1,..,x_k'=x_j \\oplus x_k,.,x_n} \\\\\n\\label{eq:u1}\n=& \\displaystyle \\sum_{x_1,...,x_n} e^{i(\\phi(x_S) - \\gamma_l (x_j \\oplus x_k))}\\ket{x_1,...,x_n}\n\\end{align}\nwhere $e^{i \\phi(x_S)}$ is the cumulative effect of operators acting on previous edges (= 0 if $(j,k)$ is the first). We have dropped the normalization constant for brevity.\n\nSimilarly,\n\\begin{equation}\n U_2\\ket{\\psi} = CNOT_{jk} (I_j \\otimes R_z(2\\gamma_l)_{x_k}) \\ket{\\psi} \\nonumber\n\\end{equation}\n\\begin{equation}\n = CNOT_{jk} \\displaystyle \\sum_{x_1,...,x_n} e^{i((\\phi(x_S)) - \\gamma_l x_k)}\\ket{x_1,...,x_n} \\nonumber\n\\end{equation}\n\\begin{equation}\n\\label{eq:u2_mid}\n = \\displaystyle \\sum_{x_1,...,x_n} e^{i((\\phi(x_S)) - \\gamma_l x_k)} \\ket{x_1,..,x_j \\oplus x_k,..,x_n}\n\\end{equation}\n\nwhere the qubit in $k^{\\text{th}}$ position changes to $x_j \\oplus x_k$ due to the $CNOT_{jk}$ operation. Now, substituting $x_k' = x_j \\oplus x_k$ in the above equation, we get\n\n\\begin{equation}\n U_2\\ket{\\psi} = \\displaystyle \\sum_{x_1,...,x_n} e^{i((\\phi(x_S)) - \\gamma_l x_k)} \\ket{x_1,..,x_j \\oplus x_k,..,x_n} \\nonumber\n\\end{equation}\n\\begin{equation}\n = \\displaystyle \\sum_{x_1,..,x_k',..,x_n} e^{i((\\phi(x_S)) - \\gamma_l (x_j \\oplus x_k'))} \\ket{x_1,..,x_k',..,x_n} \\nonumber\n\\end{equation}\n\\begin{equation}\n\\label{eq:u2}\n = \\displaystyle \\sum_{x_1,..,x_k,..,x_n} e^{i((\\phi(x_S)) - \\gamma_l (x_j \\oplus x_k))} \\ket{x_1,..,x_k,..,x_n}\n\\end{equation}\n\nHere since $k \\notin S$, the substitution in second last step, does not change the phase $e^{i \\phi(x_S)}$. The last step is valid since $x_k'$ is a running index and hence can be changed to $x_k$. Thus Eq.~(\\ref{eq:u1}) and Eq.~(\\ref{eq:u2}) are identical.\n\\end{proof}\n\n\\begin{corollary}\n\\label{cor:cond}\nFor a graph $G$, we can optimize the circuit for the operator $exp\\left(-i \\gamma_l \\left(\\frac{I-Z_j Z_k}{2}\\right)\\right)$ corresponding to an edge $(j,k)$ replacing $U_1$ by $U_2$, provided that the target vertex does not occur in any of the edge operators run earlier. 
In other words, the following conditions are sufficient to optimize an edge:-\n\\begin{enumerate}\n \\item if the vertex $j$ is being operated on for the first time, then it acts either as a control or a target for the CNOT gate corresponding to the operator;\n \\item the vertex $j$ does not act as a target of the CNOT gate if it occurs as a part of any other edge operators run earlier.\n\\end{enumerate}\n\\end{corollary}\n\n\\begin{proof}\n\nThe first time we consider an edge adjacent to a vertex $j$, where $j \\notin x_S$, (see Theorem~\\ref{thm:equiv}) the relative phase $\\phi(x_S)$ does not depend on $j$. Thus it satisfies the condition of Theorem~\\ref{thm:equiv} and allows optimization of the operator.\n\n On the other hand, if the vertex $j$ occurs as part of an edge operator already run, the phase on the basis state $\\phi$ can potentially depend on $S$, $i.e.$ $j \\in S$. By not allowing it to act as target, we satisfy the conditions of Theorem~\\ref{thm:equiv}.\n\\end{proof}\n\nFrom the above discussion, it follows that if we arbitrarily choose edges for applying the operator $exp\\left(-i \\gamma_l \\left(\\frac{I-Z_j Z_k}{2}\\right)\\right)$, then it cannot be guaranteed that a large number of edges will conform to Corollary~\\ref{cor:cond}. The requirement, in fact, imposes a precedence ordering among the edges. In Section~\\ref{sec:edge_col} and ~\\ref{sec:dfs}, we provide two algorithmic procedures for maximizing the number of edges that satisfy the requirement in order to reduce the number of CNOT gates in the ansatz.\n\nFor the rest of the paper, we say that {\\it an edge is optimized} if the operator $U_2$ can be operated on that edge instead of $U_1$.\n\n\\section{Edge Coloring based Ansatz Optimization }\n\\label{sec:edge_col}\nThe total error one incurs in a circuit depends on the number of operators (since a larger number of operators tend to incur more error) and the depth of the circuit (corresponding to relaxation error). In this section, we discuss how one can minimize the depth of the circuit. We also discuss the possibility of reduction in CNOT gates in the depth optimized circuit. \n\nThe operators $H_M$ act on distinct qubits and hence can be run in parallel contributing to a depth of 1 (for each step of the QAOA). On the other hand, the operators in $H_P$ can potentially contribute a lot to depth since the edge operators do not act on disjoint vertices.\nAt a given level of the circuit, we can only apply edge operators corresponding to a vertex disjoint set of edges. Thus the minimum depth of the circuit will correspond to the minimum value $k$ such that we can partition the set of edges $E$ as a disjoint union $\\cup_i E_i$ where each subset $E_i$ consists of a vertex disjoint collection of edges. This in turn corresponds to the edge coloring problem in a graph.\n\nGiven a graph $G = (V,E)$ and a set of colors $\\chi' = \\{\\chi'_1, \\chi'_2, \\hdots, \\chi'_k\\}$, an edge coloring \\cite{west2001introduction} assigns a color to each edge $e \\in E$, such that any two adjacent edges ($i.e.$, edges incident on a common vertex) must be assigned distinct colors. The edge coloring problem comprises of coloring the edges using the minimum number of colors $k$. 
Note that the operators corresponding to edges having the same color can therefore be executed in parallel.\nMoreover,\n\\begin{enumerate}\n \\item the number of colors in optimal coloring, called the chromatic index, corresponds to the minimum depth of the circuit;\n \\item edges having the same color corresponds to the operators $exp\\left(-i \\gamma_l \\left(\\frac{I-Z_j Z_k}{2}\\right)\\right)$ that can be executed simultaneously.\n\\end{enumerate}\n\nOptimal edge coloring is an NP-complete problem \\cite{west2001introduction}. But it is not practical to allocate exponential time to find the optimal edge-coloring as a pre-processing step for QAOA. Vizing's Theorem states that every simple undirected graph can be edge-colored using at most $\\Delta + 1$ colors, where $\\Delta$ is the maximum degree of the graph \\cite{vizing1964estimate}. This is within an additive factor of 1 since any edge-coloring must use at least $\\Delta$ colors. Misra and Gries algorithm \\cite{misra1992constructive} achieves the above bound constructively in $\\mathcal{O}(n\\cdot m)$ time. Therefore, we use the Misra and Gries edge coloring algorithm. Algorithm~\\ref{alg:edgecol} below computes the sets of edges having the same color using Misra and Gries algorithm as a subroutine. It returns the largest set $S_{max}$ of edges having the same color in the coloring computed by Misra and Gries algorithm.\n\n\\begin{algorithm}[H]\n\\caption{Edge Coloring based Ansatz Optimization}\n\\label{alg:edgecol}\n\\begin{algorithmic}[1]\n\\REQUIRE A graph $G = (V,E)$.\n\\ENSURE Largest set $S_{max}$ of edges having the same color.\n\\STATE Use the Misra and Gries algorithm to color the edges of the graph $G$.\n\\STATE $S_i \\leftarrow$ set of edges having the same color $i$, $1 \\leq i \\leq \\chi'$.\n\\STATE $S_{max} \\leftarrow$ $max\\{S_1, S_2, \\hdots, S_{\\chi'}\\}$.\n\\STATE Return $S_{max}$.\n\\end{algorithmic}\n\\end{algorithm}\n\nThis edge coloring approach provides the minimum depth achievable for QAOA ansatz using a polynomial time pre-processing. After reducing the depth, we now try to further reduce errors by decreasing the number of CNOT gates.\nRecall that the operators corresponding to edges with the same color can be executed in parallel. We use the operators corresponding to the edges of $S_{max}$ as the first layer of operators. The other layers can be used in any order.\n\n\\begin{lemma}\nEvery edge in the first layer can be optimized according to Corollary~\\ref{cor:cond}.\n\\end{lemma}\n\n\\begin{proof}\nFor every edge $(u,v)$ in the first layer, both the vertices are adjacent to an edge for the first time, $i.e.$, both $u, v \\notin S$. Therefore, it satisfies the criteria of Corollary~\\ref{cor:cond}, and hence can be optimized. In fact, any one of the qubits corresponding to the two vertices can be selected as the control for the CNOT operation.\n\\end{proof}\n\nSome edges in the corresponding layers may be optimized as well. Nevertheless, it is trivial to come up with examples where this is not the case (e.g., a complete graph of 4-vertices). Therefore, in the worst case scenario, only the edges in the first layer can be optimized. However, since this method does not increase the depth of the circuit, it always leads to a more efficient circuit design than the traditional QAOA circuit with lower depth (by 1 since the first layer of CNOT is absent) and fewer CNOT gates.\n\nFor general graphs, the worst case scenario is, therefore, that only the edges in the first layer can be optimized. 
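As a quick numerical sanity check of the single-edge identity underlying this optimization (Theorem~\\ref{thm:equiv}), the NumPy sketch below verifies that dropping the first CNOT of the CNOT--$R_z$--CNOT sandwich leaves the state unchanged when no earlier edge operator has touched the target qubit; the edge here acts on the uniform superposition ($S = \\emptyset$), and the remaining gates follow the time ordering used in the proof, i.e.\\ $R_z$ on the target first, then the CNOT.

\\begin{verbatim}
import numpy as np

gamma = 0.37                                   # arbitrary angle gamma_l
I2 = np.eye(2)
Rz = np.diag([np.exp(-1j * gamma), np.exp(1j * gamma)])   # R_z(2*gamma)
CNOT = np.array([[1, 0, 0, 0],                 # control = qubit j (first),
                 [0, 1, 0, 0],                 # target  = qubit k (second)
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

U1 = CNOT @ np.kron(I2, Rz) @ CNOT             # full CNOT-Rz-CNOT sandwich
U2 = CNOT @ np.kron(I2, Rz)                    # first (time-ordered) CNOT dropped

psi = np.full(4, 0.5)                          # |++>, so no phase depends on k
print(np.allclose(U1 @ psi, U2 @ psi))         # True: states agree on |psi>
print(np.allclose(U1, U2))                     # False: the operators differ
\\end{verbatim}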
In the following subsection we provide an analysis on the number of optimized edges using this method.\n\n\\subsection{Lower and upper bound on the number of optimized edges}\nLet us assume that the chromatic index of a graph $G = (V,E)$ is $\\chi'$. Using the Misra and Gries Theorem \\cite{misra1992constructive} we can find a polynomial time coloring using at most $\\Delta + 1$ colors, where $\\Delta$ is the maximum degree of the graph. Therefore, on an average, $\\lceil \\frac{m}{\\Delta + 1} \\rceil$ edges have the same color.\n\nMore precisely, two extreme cases arise: (i) the colors may be uniformly distributed, and the maximum number of edges having the same color is $\\lceil \\frac{m}{\\Delta + 1} \\rceil$; or (ii) one of the colors is used dominantly for most of the edges. Nevertheless, note that for all the edges adjacent to the same vertex, a particular color can be assigned to one of the edges only. Therefore, the dominant color can be used at most on $\\lfloor \\frac{n}{2} \\rfloor$ edges, where $n = |V|$. Therefore, the possible number of optimized edges that can be obtained via the Edge Coloring method is as shown in Eq.~(\\ref{eq:edge_col}).\n\\begin{equation}\n\\label{eq:edge_col}\n \\lceil \\frac{m}{\\Delta + 1} \\rceil \\leq ~\\# ~Optimized ~Edges \\leq \\lfloor \\frac{n}{2} \\rfloor.\n\\end{equation}\n\n\\section{Depth First Search based Ansatz Optimization}\n\\label{sec:dfs}\nAs the edge coloring based algorithm can optimize at most $\\lfloor \\frac{n}{2} \\rfloor$ edges, in this section, we present a Depth First Search (DFS) based optimization procedure which can optimize $n-1$ edges. Algorithm~\\ref{alg:dfs}, for obtaining the optimized QAOA ansatz, uses the standard DFS algorithm \\cite{cormen2009introduction}, by returning the tree edges or discovery edges forming the DFS tree.\n\nIn this method, we start from the first vertex of the DFS tree. For every edge $e = (u,v)$ in the DFS tree, the vertex $u$ is made the control and $v$ is made the target for the CNOT gate corresponding to that edge. The edges are operated on sequentially one after another, as in the set $E_{dfs}$ (the tree edges). Once every edge in the DFS tree has been operated on, the remaining edges can be executed in any order. In fact, one may opt to use the Edge Coloring method on the remaining edges to obtain the minimum depth of the circuit corresponding to these edges, although CNOT gates cannot be reduced any further.\n\n\\begin{algorithm}[H]\n\\caption{DFS based Ansatz Optimization }\n\\label{alg:dfs}\n\\begin{algorithmic}[1]\n\\REQUIRE A graph $G = (V,E)$.\n\\ENSURE A list $E_{dfs}$ of $n-1$ edges.\n\\STATE $E_{dfs} = \\{\\}$\n\\STATE $u \\leftarrow$ randomly selected vertex from $V$.\n\\STATE Start DFS from the vertex $u$. For every vertex $v$ discovered from its predecessor $v'$, $E_{dfs} = E_{dfs} \\cup (v',v)$.\n\\STATE Return $E_{dfs}$.\n\\end{algorithmic}\n\\end{algorithm}\n\n\\begin{theorem}\nEach edge in the DFS tree can be optimized according to Corollary~\\ref{cor:cond}.\n\\label{thm:dfs}\n\\end{theorem}\n\n\\begin{proof}\nWe prove this by the method of induction. Let $u$ be the vertex from which the DFS tree starts. Then $u$ is being operated on for the first time, and, hence, can act both as a control\/target for the CNOT operation corresponding to the first edge (Corollary~\\ref{cor:cond}). 
Choose $u$ to be the control.\n\n\\textbf{Base case}: If $v$ is the vertex that is discovered from $u$ via the edge $(u,v)$, then choosing $u$ as the control and $v$ as the target satisfies Corollary~\\ref{cor:cond}. Therefore, the edge $(u,v)$ can be optimized.\n\n\\textbf{Induction hypothesis}: Let the DFS tree has been constructed upto some vertex $j$, and every edge $(e_1, e_2)$ in this DFS tree so far can be optimized, $i.e.$ $e_1$ acts as the control and $e_2$ as the target.\n\n\\textbf{Induction step}: Let the next vertex in the DFS tree, that is discovered from some vertex $i$, is $k$. From DFS algorithm, the vertex $i$ must have been discovered in some previous step. Since vertex $k$ was not previously discovered, so $k \\notin x_S$ and hence the edge $(i,k)$ can be optimized if we select $i$ to be the control and $k$ as the target.\n\\end{proof}\n\nTherefore, the DFS based optimization method provides $n-1$ optimized edges, $i.e.$, a reduction in the number of CNOT gates by $n-1$. We now show in Theorem~\\ref{thm:optimal} that this is the maximum number of edges that can be optimized.\n\n\\begin{figure}[H]\n \\centering\n \\begin{subfigure}[b]{0.45\\textwidth}\n \\centering\n \\includegraphics[scale=0.4]{col.png}\n \\caption{Edge Coloring Based Optimization}\n \\label{col}\n \\end{subfigure}\n \\hfill\n \\begin{subfigure}[b]{0.45\\textwidth}\n \\centering\n \\includegraphics[scale=0.4]{dfs.png}\n \\caption{Depth First Search Based Optimization}\n \\label{dfs}\n \\end{subfigure}\n \\caption{Depth of the ansatz circuit when using (a) Edge Coloring and (b) DFS based method; edges having same color can be executed simultaneously. The depth of the spanning tree in the DFS based method is 4, compared to depth 2 for the Edge Coloring based method. However, the number of optimized edges in the Edge Coloring based method is 2, while that by the DFS based method is 3.}\n \\label{fig:depth}\n\\end{figure}\n\n\\begin{theorem}\nOptimization of ansatz for Max-Cut QAOA with p=1, by deletion of the CNOT gate in the first level for an edge of the graph, can be done for no more than $n-1$ edges.\n\\label{thm:optimal}\n\\end{theorem}\n\n\\begin{proof}\nLet us assume that there is some method by which at least $n$ edges can be optimized. Now, the connected subgraph which contains all the $n$ vertices and at least $n$ optimized edges must contain a cycle. Let $(u,v)$ be an edge of this cycle, $i.e.$, if $(u,v)$ is removed then the residual graph is a tree (in case there are $> n$ edges, the removal of edges can be performed recursively till such an edge $(u,v)$ is obtained whose removal makes the residual graph a tree). For this edge $(u,v)$, both the vertices $u$ and $v$ are endpoints of some other optimized edges as well. Therefore, from Corollary~\\ref{cor:cond} both $u$ and $v$ must act as the control for the CNOT gate corresponding to the edge $(u,v)$ in order for this edge to be optimized, which is not possible. Therefore, it is not possible to optimize more than $n-1$ edges.\n\\end{proof}\n\nTherefore, the DFS method is optimal in the number of optimized edges. However, we note that the DFS based method associates an ordering of the edges, $i.e.$, some of the edges which could have been operated on simultaneously, cannot be done so now.\nThis, in turn, can lead to an increase in the depth of the circuit. Hence, a penalty for this method producing optimal reduction in CNOT gates, is that it increases the depth of the circuit. 
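A compact Python sketch of Algorithm~\\ref{alg:dfs} is given below; it uses \\texttt{networkx} to collect the DFS tree (discovery) edges, which are exactly the $n-1$ edges optimized by this method, with the first vertex of each returned pair acting as the control of the single remaining CNOT. The starting vertex is fixed here instead of being chosen at random, and the exact edge list depends on the neighbour order of the traversal.

\\begin{verbatim}
import networkx as nx

def dfs_ansatz_edges(G, root=None):
    """Return E_dfs: DFS tree edges (v', v) with v' the control, v the target."""
    if root is None:
        root = next(iter(G.nodes))
    return list(nx.dfs_edges(G, source=root))

G = nx.cycle_graph(4)                    # 2-regular 4-vertex example graph
E_dfs = dfs_ansatz_edges(G, root=0)
print(E_dfs)                             # e.g. [(0, 1), (1, 2), (2, 3)]: n-1 edges
rest = [e for e in G.edges
        if e not in E_dfs and tuple(reversed(e)) not in E_dfs]
print(rest)                              # unoptimized edges, applied afterwards
\\end{verbatim}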
\n\n\\begin{figure}[htb]\n \\centering\n \\begin{subfigure}[b]{0.45\\textwidth}\n \\centering\n \\includegraphics[scale=0.37]{qaoa_col.png}\n \\caption{Optimized circuit by Edge Coloring based method }\n \\label{col}\n \\end{subfigure}\n \\hfill\n \\begin{subfigure}[b]{0.48\\textwidth}\n \\centering\n \\includegraphics[scale=0.37]{qaoa_dfs.png}\n \\caption{Optimized circuit by DFS based method}\n \\label{dfs}\n \\end{subfigure}\n \\caption{Max-Cut QAOA ansatz with $p=1$ corresponding to (a) Edge Coloring and (b) DFS based optimization. In (a), the first CNOT gates of the operators have been deleted. The operators corresponding to $(q_1,q_2)$ and $(q_3,q_0)$ act in parallel. In (b), the first CNOT gates of three operators have been deleted, but the depth has increased.}\n \\label{fig:opt}\n\\end{figure}\n\nIn Fig.~\\ref{fig:depth}, we show a 2-regular graph with four vertices. In Fig.~\\ref{fig:depth}(a), the depth of the circuit corresponding to the operator $exp(-i\\gamma_l H_P)$ is 2; the edges of the same color can be operated on simultaneously. If the red (or blue) edges form the first layer, then those two edges are optimized. However, if we use the DFS method, with the DFS tree starting from, say, vertex 1, then the edges $(1,2),(2,3)$ and $(3,4)$ can be optimized (Fig.~\\ref{fig:depth}(b)). Now these three edges must be operated on one after another, followed by the fourth edge. Thus the depth of the circuit corresponding to the operator $exp(-i\\gamma_l H_P)$ becomes 4. The circuits corresponding to these two scenarios are depicted in Fig~\\ref{fig:opt}(a) and (b) respectively.\n\n\nThe question, therefore, is whether this increase in depth is always acceptable, even with the increased reduction in the number of CNOT gates as, with increased depth, the circuit becomes more prone to relaxation error.\nNumerical analysis and simulation (Section~\\ref{sec:sim}) establises that although the depth of the circuit is increased, the overall error probability of the circuit is reduced further.\n\n\\subsection{When is the DFS based method useful?}\n\nIn this subsection, we formulate a relation for which the increase in the depth still leads to a lower probability of error for the reduction in the number of CNOT gates. For this analysis, we make an assumption that the error in the circuit arises only from noisy CNOT gates and the depth of the circuit ($i.e.$, the $T_1$ time). Although this assumption is idealistic, the ansatz primarily consists of layers of CNOT gates. Furthermore, in superconductors, $R_z$ gates are executed virtually \\cite{mckay2017efficient}, and hence does not lead to any gate error. Therefore, CNOT is the primary source of gate error and with increasing depth, the qubits become more prone to relaxation error. Therefore, this assumption allows for a simple but powerful model for analyzing the query at hand.\n\nLet us assume that the time duration and the error probability of each CNOT gate is $t_{cx}$ and $p_{cx}$ respectively. Let there be $N$ layers of CNOT operations. Note that although there can be multiple CNOT gates in each layer, the time duration of each layer is $t_{cx}$ only. Therefore, the probability of no error ($i.e.$, the probability that the circuit remains error free) after $N$ layers of operations, considering only relaxation error, is\n $exp(-\\frac{N t_{cx}}{T_1})$.\n\nLet there be $k$ CNOT gates in the original circuit. 
Therefore, the probability of no error after the operation of the CNOT gates, considering only CNOT gate error, is\n $(1 - p_{cx})^k$.\n\nCombining both the sources of the errors, Eq.~(\\ref{eq:no_err}) gives the probability of success ($i.e.$, the probability of no error) after a single cycle of computation of the QAOA ansatz.\n\\begin{equation}\n \\label{eq:no_err}\n P_{success} = (1 - p_{cx})^k \\cdot exp(-\\frac{N t_{cx}}{T_1})\n\\end{equation}\n\nHenceforth, $P_{success}$ will refer to the probability of success ($i.e.$, how close the noisy outcome is to the noise-free ideal outcome) of the ansatz circuit execution for a single run of the algorithm. Note that in QAOA, the ansatz is computed multiple times for multiple cycles, and the objective is to maximize the expectation value of the outcome.\n\nWe further assume that after the optimization using DFS based method, $k_1$ CNOT gates have been reduced leading to an increase in $N_1$ layers of operations. The probability that this optimized circuit remains error-free is given in Eq.~(\\ref{eq:opt_err}).\n\\begin{equation}\n \\label{eq:opt_err}\n P^{opt}_{success} = (1 - p_{cx})^{(k-k_1)} \\cdot exp(-\\frac{(N+N_1) t_{cx}}{T_1})\n\\end{equation}\n\nThe optimization is fruitful only when $P^{opt}_{success} \\geq P_{success}$. Note that\n\\begin{center}\n $P^{opt}_{success} = P_{success} \\cdot exp(-\\frac{N_1 t_{cx}}{T_1}) \/ (1 - p_{cx})^{k_1}$\n\\end{center}\n\nSince both $P^{opt}_{success}$ and $P_{success} \\leq 1$, the required inequality holds only if $exp(- \\frac{N_1 t_{cx}}{T_1}) \/ (1 - p_{cx})^{k_1} \\geq 1$. In other words,\n\\begin{eqnarray}\n\\label{eq:cond1}\nexp(-\\frac{N_1 t_{cx}}{T_1}) &\\geq& (1 - p_{cx})^{k_1} \\nonumber\\\\\n\\Rightarrow N_1 &\\leq& \\lambda \\times k_1, \\nonumber\\\\\n~where && \\lambda = \\frac{-ln(1 - p_{cx}) \\cdot T_1}{t_{cx}}.\n\\end{eqnarray}\n\nThe constant $\\lambda$ is defined in terms of parameters specific to the quantum device.\n\n\n\\subsubsection{Effect of varying $\\lambda$}\nGiven that $\\lambda = f(t_{cx},p_{cx},T_1)$, we expect the $T_1$ value to increase, and the $t_{cx}$ and $p_{cx}$ values to decrease as technology improves. The value of $\\lambda$ increases for increasing $T_1$ and\/or decreasing $t_{cx}$, whereas it decreases for decreasing $p_{cx}$. Therefore, \n\n\\begin{itemize}\n \\item If $p_{cx}$, the probability of error for CNOT gates decreases, the optimization becomes less useful since we are increasing the probability of relaxation error, but the reduction in error probability becomes less. As per this observation, for smaller $\\lambda$, Eq.~(\\ref{eq:cond1}) is satisfied when the increase in depth is reduced as well.\n \n \\item Similarly, if (i) $T_1$ increases, then the qubit can retain its conherence for a longer period of time, or (ii) $t_{cx}$ decreases, then the overall computation time of the circuit decreases as well, and the circuit can allow some relaxation in the depth even if $T_1$ remains unchanged. We observe that for both of these cases, by Eq.~(\\ref{eq:cond1}), $\\lambda$ increases, thus allowing more increase in depth for a given reduction in the number of CNOT gates.\n\\end{itemize}\n\n\\subsection{Trade-off between depth and reduction in CNOT gates}\n\nIf the DFS based method is not applied, then the number of layers of CNOT gates is equal to the number of color classes (as in Edge Coloring method). 
The maximum number of color classes is $\\Delta + 1$ (as discussed in the previous section), and hence the maximum depth of the circuit is $\\Delta + 1$ as well. Now, when the DFS based method is applied, the circuit can be divided into two disjoint sets of edges:\n\\begin{enumerate}\n \\item The set of edges belonging to the DFS tree which can be optimized. The depth of this portion of the circuit is at most $n-1$ ($i.e.$, the depth of the DFS tree). Each of the operators corresponding to these edges contains a single CNOT gate only, and hence the number of CNOT gate layers is $n-1$ as well.\n \\item The set of edges that do not belong to the DFS tree and hence are not optimized. The operators corresponding to these edges can be applied in any order, but after all the optimized edges. When removing the edges of the DFS tree, the degree of each vertex is reduced by at least 1. Therefore, the maximum degree of the remaining subgraph is at most $\\Delta - 1$. Therefore, the depth of this portion of the circuit will be at most $\\Delta$ (From Misra and Gries Algorithm). Each of the layer in this portion contains 2 CNOT gates, and hence the number of CNOT gate layers is $2\\Delta$.\n\\end{enumerate}\n\nTherefore, the maximum depth of the circuit after applying the DFS based optimization is $n-1 + \\Delta$. In other words, the increase in depth due to this method is given by Eq.~(\\ref{eq:cond2}).\n\\begin{eqnarray}\n\\label{eq:cond2}\n n-1 + \\Delta - (\\Delta + 1) = n - 2\n\\end{eqnarray}\n\nRecall that the number of CNOT gates reduced due to the DFS method is always $n-1$. Therefore, from Eq.~(\\ref{eq:cond1}) and $~(\\ref{eq:cond2})$, we get\n\\begin{eqnarray}\n\\label{eq:condition}\nn - 2 &\\leq& \\lambda \\cdot (n-1) \\nonumber \\\\\n\\Rightarrow \\lambda &\\geq& \\frac{n-2}{n-1}\n\\end{eqnarray}\n\nIn Table~\\ref{tab:lambda} we show the average value of $\\lambda$ for some IBM Quantum~\\cite{ibmquantum} devices, ranging from the comparatively more noisy \\textit{ibmq\\_melbourne} to the comparatively less noisy \\textit{ibmq\\_manhattan}.\n\n\\begin{table}[htb]\n \\centering\n \\caption{Average value of $\\lambda$ for four IBM Quantum machines ~\\cite{ibmquantum}}\n \\begin{tabular}{|c|c|}\n \\hline\n IBM Quantum devices & Avg value of $\\lambda$\\\\\n \\hline\n \\textit{ibmq\\_manhattan} & 3.6\\\\\n \\hline\n \\textit{ibmq\\_montreal} & 2.47\\\\\n \\hline\n \\textit{ibmq\\_sydney} & 3.35\\\\\n \\hline\n \\textit{ibmq\\_melbourne} & 2.03\\\\\n \\hline\n \\end{tabular}\n \\label{tab:lambda}\n\\end{table}\n\nNote that the lower bound on $\\lambda$, $\\frac{n-2}{n-1}$ (Eq.~(\\ref{eq:condition}), is always less than $1$ for all $n$. In the asymptotic limit, $\\frac{n-2}{n-1} \\rightarrow 1$. Thus, the proposed DFS based optimization method leads to a lower error probability on any quantum device for which $\\lambda \\geq 1$. 
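The condition in Eq.~(\\ref{eq:condition}) is straightforward to check numerically. The sketch below computes $\\lambda$ as in Eq.~(\\ref{eq:cond1}) and tests whether $\\lambda \\geq \\frac{n-2}{n-1}$; the device parameters used here are purely illustrative placeholders and are not calibration data of any particular machine.

\\begin{verbatim}
import math

def cnot_lambda(p_cx, t_cx, T1):
    """lambda = -ln(1 - p_cx) * T1 / t_cx (see the derivation above)."""
    return -math.log(1.0 - p_cx) * T1 / t_cx

def dfs_is_beneficial(n, p_cx, t_cx, T1):
    """DFS-based optimization pays off when lambda >= (n-2)/(n-1)."""
    return cnot_lambda(p_cx, t_cx, T1) >= (n - 2) / (n - 1)

# Illustrative values only: CNOT error, CNOT duration [s], T1 time [s].
p_cx, t_cx, T1 = 1e-2, 300e-9, 100e-6
print(cnot_lambda(p_cx, t_cx, T1))               # ~3.35
print(dfs_is_beneficial(12, p_cx, t_cx, T1))     # True, since lambda >= 10/11
\\end{verbatim}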
Table~\\ref{tab:lambda} readily shows that the IBM Quantum hardwares conform to this requirement.\n\n\n\n\n\\section{Results of simulation}\n\\label{sec:sim}\n\n\\begin{table*}[t]\n \\centering\n \\caption{Comparison of Max-Cut QAOA ansatz circuits post transpilation on \\textit{ibmq\\_manhattan}: (i) Traditional, (ii) Edge coloring and (iii) DFS based optimization}\n \\begin{tabular}{|c|c|c|c|c|}\n \\hline\n \\multirow{2}{*}{Graph Family} & \\multirow{2}{*}{\\# qubits} & \\multicolumn{3}{c|}{\\# CNOT gates in Max-Cut QAOA ansatz circuit}\\\\\n \\cline{3-5}\n & & Traditional & Edge coloring & DFS\\\\\n \\hline\n \\multirow{6}{*}{Complete graph} & 10 & 90 & 85 & 81\\\\\n \\cline{2-5}\n & 20 & 380 & 370 & 361\\\\\n \\cline{2-5}\n & 30 & 870 & 855 & 841\\\\\n \\cline{2-5}\n & 40 & 1560 & 1540 & 1521\\\\\n \\cline{2-5}\n & 50 & 2450 & 2425 & 2401\\\\\n \\cline{2-5}\n & 60 & 3540 & 3510 & 3481\\\\\n \\hline\n \\multirow{6}{*}{Erdos-Renyi ($p_{edge}$ = 0.8)} & 10 & 70 & 66 & 61\\\\\n \\cline{2-5}\n & 20 & 302 & 292 & 283\\\\\n \\cline{2-5}\n & 30 & 698 & 683 & 669\\\\\n \\cline{2-5}\n & 40 & 1216 & 1197 & 1177\\\\\n \\cline{2-5}\n & 50 & 1956 & 1931 & 1907\\\\\n \\cline{2-5}\n & 60 & 2822 & 2792 & 2763 \\\\\n \\hline\n \\multirow{6}{*}{Erdos-Renyi ($p_{edge}$ = 0.6)} & 10 & 50 & 46 & 41\\\\\n \\cline{2-5}\n & 20 & 234 & 225 & 215\\\\\n \\cline{2-5}\n & 30 & 504 & 491 & 475\\\\\n \\cline{2-5}\n & 40 & 960 & 940 & 921\\\\\n \\cline{2-5}\n & 50 & 1504 & 1479 & 1455\\\\\n \\cline{2-5}\n & 60 & 2114 & 2085 & 2055 \\\\\n \\hline\n \\multirow{6}{*}{Erdos-Renyi ($p_{edge}$ = 0.4)} & 10 & 36 & 31 & 27\\\\\n \\cline{2-5}\n & 20 & 164 & 154 & 145\\\\\n \\cline{2-5}\n & 30 & 362 & 348 & 333\\\\\n \\cline{2-5}\n & 40 & 586 & 566 & 547\\\\\n \\cline{2-5}\n & 50 & 950 & 925 & 901\\\\\n \\cline{2-5}\n & 60 & 1468 & 1440 & 1409 \\\\\n \\hline\n \\end{tabular}\n \\label{tab:cx_count}\n\\end{table*}\n\nIn this section we show the effect of our optimization methods on reducing the probability of error and the CNOT count of QAOA for Max-Cut. We first show that our proposed reduction is retained in the post transpilation circuit, which is executed on the quantum hardware. Next, we run our simulation with the noise model for \\textit{ibmq\\_manhattan} from IBM Quantum; this noise model corresponds to the actual noise in the IBM Quantum Manhattan device which has $65$ qubits and a Quantum Volume of $32$~\\cite{ibmquantum}. For our simulation purpose, we have considered Erdos-Renyi graphs, where the probability that an edge exists between two vertices, $p_{edge}$, varies respectively from 0.4 to 1 (complete graph). The choice of Erdos-Renyi graph allows us to study the performance of these proposed methods for various sparsity of graphs.\n\nThe circuit that we construct is usually not executed as it is in the IBM Quantum hardware. It undergoes a process called transpilation in which\n\n\\begin{enumerate}[(i)]\n \\item the gates of the circuit are replaced with one, or a sequence of, basis gates which are actually executed in the quantum hardware. 
The basis gates of the IBM Quantum devices are \\{$CNOT$, $SX$, $X$, $R_z$ and Identity\\} \\cite{ibmquantum},\n \n \\item the circuit is mapped to the underlying connectivity (called the coupling map) of the hardware \\cite{bhattacharjee2020survey},\n \n \\item the number of gates in the circuit is reduced using logical equivalence \\cite{burgholzer2020advanced}.\n\\end{enumerate}\n\nA natural question, therefore, is whether the reduction in CNOT gates is retained post transpilation. In Table~\\ref{tab:cx_count} we show the number of CNOT gates in the post transpilation circuit for the \\textit{ibmq\\_manhattan} device as the number of vertices is varied from $10 - 60$ for each of the graph family considered. Our results readily show that the proposed optimization in the number of CNOT gates still hold good even in the transpiled circuit. Since the \\textit{ibmq\\_manhattan} device is a 65-qubits device, we show the results upto 60 qubits, but the results show that the trend will continue for higher qubit devices as well, when they become available.\n\n\\begin{figure*}[t]\n \\centering\n \\begin{subfigure}[b]{0.45\\textwidth}\n \\centering\n \\includegraphics[scale=0.3]{erdos04.png}\n \\caption{Erdos-Renyi graphs with $p_{edge} = 0.4$}\n \\label{col}\n \\end{subfigure}\n \\hfill\n \\begin{subfigure}[b]{0.45\\textwidth}\n \\centering\n \\includegraphics[scale=0.3]{erdos06.png}\n \\caption{Erdos-Renyi graphs with $p_{edge} = 0.6$}\n \\label{dfs}\n \\end{subfigure}\n \\newline\n \\begin{subfigure}[b]{0.45\\textwidth}\n \\centering\n \\includegraphics[scale=0.3]{erdos08.png}\n \\caption{Erdos-Renyi graphs with $p_{edge} = 0.8$}\n \\label{col}\n \\end{subfigure}\n \\hfill\n \\begin{subfigure}[b]{0.45\\textwidth}\n \\centering\n \\includegraphics[scale=0.3]{complete.png}\n \\caption{Complete graphs}\n \\label{dfs}\n \\end{subfigure}\n \\caption{$|\\braket{\\psi|\\psi_e}|^2$ for graphs of various sparsity: Erdos Renyi graphs ($p_{edge} = 0.4,~0.6,~ 0.8$) and complete graphs}\n \\label{fig:prob_succ}\n\\end{figure*}\n\nLet $\\ket{\\psi}$ be the state obtained from the noise-free (ideal) computation of the QAOA circuit, and the state obtained from noisy computation be $\\ket{\\psi_e}$. The probability of success of the noisy computation, then, is defined as\n\\begin{equation}\n \\label{eq:graph_succ}\n P_{succ} = |\\braket{\\psi|\\psi_e}|^2\n\\end{equation}\nIn Fig.~\\ref{fig:prob_succ}(a) - (d) we plot $P_{succ}$ of the traditional QAOA ansatz, Edge Coloring based and the DFS based optimization method for Erdos-Renyi graphs, where $p_{edge}$, the probability that an edge exists between two vertices, varies from 0.4 to 1 (complete graph). The choice of Erdos-Renyi graph allows us to study the performance of these proposed methods for various sparsity of graphs. For each case we vary the number of vertices $n$ from 4 to 12. For each value of $n$ and $p_{edge}$, the results are averaged over 20 input graph instances, and each instance is an average of 100 randomly generated noisy circuits by the simulator model for \\textit{ibmq\\_manhattan} with noise. 
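Since Eq.~(\\ref{eq:graph_succ}) is simply a squared overlap of two statevectors, it can be evaluated with a few lines of NumPy, as sketched below; \\texttt{psi\\_ideal} and \\texttt{psi\\_noisy} are assumed to have been produced beforehand by a noise-free and a noisy simulation of the same ansatz, and those simulation calls are omitted here.

\\begin{verbatim}
import numpy as np

def success_probability(psi_ideal, psi_noisy):
    """P_succ = |<psi_ideal|psi_noisy>|^2 for two normalized statevectors."""
    return float(np.abs(np.vdot(psi_ideal, psi_noisy)) ** 2)

# Toy check: identical states give 1.0, orthogonal states give 0.0.
psi = np.array([1.0, 1.0j]) / np.sqrt(2)
print(success_probability(psi, psi))                                   # 1.0
print(success_probability(psi, np.array([1.0, -1.0j]) / np.sqrt(2)))   # 0.0
\\end{verbatim}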
Our results readily show that the DFS based method outperforms both the Edge Coloring based method and the traditional QAOA in terms of lower error probability.\n\nFrom Table~\\ref{tab:cx_count}, and our simulation results in Fig.~\\ref{fig:prob_succ}(a)-(d), we can infer that the DFS based optimization outperforms the Edge Coloring based optimization, which again, outperforms the traditional QAOA in the reduction in CNOT count, and the probability of error in the circuit in (i) the actual transpiled circuit that is executed on the quantum devices, as well as (ii) in realistic noisy scenario of quantum devices.\n\n\\section{Conclusion}\n\\label{sec:con}\n\nIn this paper we have proposed two methods to reduce the number of CNOT gates in the traditional QAOA ansatz. The Edge Coloring based method can reduce upto $\\lfloor \\frac{n}{2} \\rfloor$ CNOT gates whereas the DFS based method can reduce $n - 1$ CNOT gates. While the former method provides a depth-optimized circuit, the latter method can increase the depth of the circuit. We analytically derive the constraint for which a particular increase in depth is acceptable given the number of CNOT gates reduced, and show that every graph satisfies this constraint. Therefore, these methods can reduce the number of CNOT gates in the QAOA ansatz for any graph. Finally, we show via simulation, with the \\textit{ibmq\\_manhattan} noise model, that the DFS based method outperforms the Edge Coloring based method, which in its turn, outperforms the traditional QAOA in terms of lower error probability in the circuit. The transpiler procedure of Qiskit maps a circuit to the underlying hardware connectivity graph, and some gates are reduced in this process. This transpiled circuit is executed on the real hardware. We show, with the \\textit{ibmq\\_manhattan} coupling map, that the reduction in the number of CNOT gates still holds post transpilation. Therefore, our proposed methods provide a universal way to an improved QAOA ansatz design. On a final note, all the calculations in this paper considers connected graph, but these carry over easily to disconnected graphs as well.\n\n\\section*{Acknowledgement}\n\nWe acknowledge the use of IBM Quantum services for this work. The views expressed are those of the authors, and do not reflect the official policy or position of IBM or the IBM Quantum team. In this paper we have used the noise model of \\textit{ibmq\\_manhattan}, which is one of IBM Quantum Hummingbird r2 Processors. \n\n\\section*{Code Availability}\n\nA notebook providing the code to generate the plots of Fig.~\\ref{fig:prob_succ}(a)-(d) is available open source at \\href{https:\/\/github.com\/RitajitMajumdar\/Optimizing-Ansatz-Design-in-QAOA-for-Max-cut}{https:\/\/github.com\/RitajitMajumdar\/Optimizing-Ansatz-Design-in-QAOA-for-Max-cut}.\n\n\n\n\n\n\\bibliographystyle{unsrt}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\\label{sec:intro}\nMagnetic Resonance Fingerprinting (MRF)~\\cite{ref:ma2013} is an emerging Quantitative MRI technology for simultaneous measurement of multiple quantitative bio-properties (e.g. T1 and T2 relaxation times, and proton density (PD)) of tissues in a single and time-efficient scan. However, due to the aggressive spatiotemporal subsampling needed for short scan times, the MRF time-series data and consequently the tissue maps usually contain aliasing artefacts. 
\n\nCompressed sensing reconstruction algorithms based on using analytical image priors (e.g., sparsity, Total Variation and\/or low-rank) have been proposed to address this problem \\cite{ref:mcgivney2014, ref:asslander2018, ref:golbabaee2021-lrtv_mrfresnet}. Recent works e.g.~\\cite{ref:fang2019oct, ref:chen2020} have been focussed around deep learning approaches that utilise spatiotemporal image priors learned from data for artefact removal. These approaches which are conventionally trained end-to-end on pairs of subsampled and clean data, outperform those using analytical image priors and produce excellent results. However, unlike the traditional compressed sensing algorithms, current trained deep learning models are specific to the subsampling processes used during their training and unable to generalise and remove aliasing artefacts from other subsampling processes given at the testing time.\n\nThe contribution of this work is to propose the first deep Plug-and-Play (PnP) iterative reconstruction algorithm for MRF to address this issue, and to demonstrate that this approach effectively adapts to changing acquisition models, specifically, the MRF k-space subsampling patterns. Iterations of the PnP algorithm~\\cite{ref:venkatakrishnan2013, ref:ahmad2020} follow steps of the Alternating Direction Method of Multipliers (ADMM)~\\cite{ref:boyd2011} which is an optimisation method for model-based compressed sensing reconstruction (for imaging applications of deep learning ADMM and competing methods, see~\\cite{ref:ahmad2020, ref:liang2020, ref:romano2017, ref:sun2019, ref:sun2021}). In our work, the spatiotemporal MRF image priors are learned from data through an image denoiser i.e. a Convolutional Neural Network (CNN), that is pre-trained for removing Additive White Gaussian Noise (AWGN), and not any particular subsampling artefact. This CNN denoiser is used as a data-driven shrinkage operator within the ADMM's iterative reconstruction process. The reconstructed (de-aliased) image time-series are then fed to an MRF dictionary matching step~\\cite{ref:ma2013} for mapping tissues' quantitative parameters.\n\n\n\\section{The MRF Inverse imaging Problem}\n\\label{sec:mrf}\n\nMRF adopts a linear spatiotemporal compressive acquisition model:\n\\begin{equation}\n\t\\label{eq:standard_linear_inverse_problem}\n\t\\bm{y} = \\bm{A}\\bm{x} + \\bm{w}\n\\end{equation}\nwhere $ \\bm{y} \\in \\mathbb{C}^{\\,m\\times T}$ are the k-space measurements collected at $T$ temporal frames and corrupted by some bounded noise $\\bm{w}$. The acquisition process i.e. the linear forward operator $ \\bm{A}: \\mathbb{C}^{\\,n\\times t}\\rightarrow \\mathbb{C}^{\\,m\\times T}$ models Fourier transformations subsampled according to a set of temporally-varying k-space locations in each timeframe combined with a temporal-domain compression scheme~\\cite{ref:mcgivney2014,ref:asslander2018,ref:golbabaee2021-lrtv_mrfresnet} for (PCA subspace) dimensionality reduction i.e. $t\\ll T$. $ \\bm{x}\\in \\mathbb{C}^{\\,n\\times t} $ is the Time-Series of Magnetisation Images (TSMI) across $n$ voxels and $t$ dimension-reduced timeframes (channels). Accelerated MRF acquisition implies working with under-sampled data which makes the inversion of~\\eqref{eq:standard_linear_inverse_problem} i.e. estimating the TSMIs $\\bm{x}$ from the compressed MRF measurements $\\bm{y}$, an ill-posed problem. 
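To make the acquisition model concrete, a minimal single-coil Cartesian sketch of the operator $\\bm{A}$ is given below: the TSMI is mapped back to the $T$ acquired timeframes through a temporal subspace, Fourier transformed and subsampled frame by frame. The array shapes, the subspace matrix \\texttt{V} and the sampling masks are placeholders and do not correspond to any particular protocol.

\\begin{verbatim}
import numpy as np

def mrf_forward(x, V, masks):
    """Apply A to a dimension-reduced TSMI.

    x     : (H, W, t) complex TSMI in the t-dimensional temporal subspace
    V     : (T, t) orthonormal temporal subspace (e.g. PCA of fingerprints)
    masks : (T, H, W) boolean k-space sampling masks, one per timeframe
    """
    full = np.tensordot(x, V, axes=([2], [1]))       # (H, W, T): back to T frames
    y = []
    for f in range(V.shape[0]):
        k = np.fft.fft2(full[..., f], norm="ortho")  # per-frame 2D Fourier transform
        y.append(k[masks[f]])                        # keep the sampled locations
    return y
\\end{verbatim}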
\n\\vspace{\\baselineskip}\n\\\\\n\\textbf{The Bloch response model:} \nThe TSMIs magnetisation time-responses are the main source of tissue quantification. At each voxel $v$, the time responses i.e. the fingerprints, are related to the corresponding voxel's bio-properties, namely, the T1 and T2 relaxation times, through the solutions of the Bloch differential equations $\\mathcal{B}$ scaled by the proton density (PD)~\\cite{ref:ma2013, jiang2015mr}:\n\\begin{equation}\n\t\\label{eq:magnetisation_response}\n\t\\bm{x}_{v} \\approx \\text{PD}_{v} \\, \\mathcal{B} \\! \\left( \\text{T1}_{v}, \\text{T2}_v \\right)\n\\end{equation}\nWhile this relationship could temporally constrain the inverse problem~\\eqref{eq:standard_linear_inverse_problem}, it turns out to be inadequate to make the problem well-posed~\\cite{ref:golbabaee2021-lrtv_mrfresnet}. The model~\\eqref{eq:magnetisation_response} alone does not capture cross-voxel correlations. Spatial-domain image priors that account for this must be further added to resolve the ill-posedness problem. For this we rely on datasets of anatomical quantitative maps i.e. the T1, T2 and PD maps, to create the TSMIs via \\eqref{eq:magnetisation_response}, and train a denoiser model on them to learn the spatiotemporal structures\/priors for $\\bm{x}$. We then use this trained denoiser to iteratively solve~\\eqref{eq:standard_linear_inverse_problem} for any given forward model $\\bm{A}$. This process is detailed in the next section.\n\n\n\\section{image reconstruction Algorithm}\n\\label{sec:algorithms}\n\nWe describe our algorithm to reconstruct artefact-free TMSIs before feeding them to MRF dictionary-matching for quantitative mapping.\n\n\n\\subsection{The PnP-ADMM algorithm}\n\\label{ssec:pnp_admm}\n\nA model-based reconstruction approach to solve inverse problems like (1) would typically lead to an optimisation of the form\n\\begin{equation}\n\t\\label{eq:opt}\n\t\\arg\\min_{x} \\| \\bm{y} - \\bm{A}\\bm{x}\\|_2^2 + \\phi(\\bm{x})\n\\end{equation}\nwhich can be solved by variety of iterative shrinkage algorithms including ADMM~\\cite{ref:boyd2011} and by iterating between a step to minimise the first term and promote the k-space data consistency according to the tested acquisition process, and another shrinkage step according to a regularisation term $\\phi$ to promote certain structural priors on $\\bm{x}$ to combat the ill-posedness of (1). In the Plug-and-Play (PnP) approach~\\cite{ref:venkatakrishnan2013}, the shrinkage term is an AWGN image denoiser $\\bm{f}$ that builds an implicit regularisation for (1). The denoiser model $\\bm{f}$ could be a trained convolutional neural network (CNN) like~\\cite{ref:ahmad2020} that captures the structural image priors for $\\bm{x}$ by removing generic gaussian noise from $\\bm{x}+\\bm{n}$, where $\\bm{n} \\sim\\mathcal{N}(0,\\sigma)$, and the noise power $\\sigma$ is an experimentally-chosen hyperparameter. 
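A generic sketch of such a PnP iteration is given below, with the forward operator, its adjoint and the denoiser passed in as placeholder callables and the data-consistency subproblem solved by conjugate gradients; the precise updates adopted in this work are spelled out next.

\\begin{verbatim}
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

def pnp_admm(y, A, AH, denoiser, gamma=0.05, iters=100):
    """Generic PnP-ADMM sketch; A, AH and denoiser are placeholder callables
    acting on flattened image vectors."""
    v = AH(y)                            # initialise with the adjoint reconstruction
    u = np.zeros_like(v)
    n = v.size
    normal_op = LinearOperator((n, n), dtype=v.dtype,
                               matvec=lambda x: AH(A(x)) + gamma * x)
    for _ in range(iters):
        z = v - u
        b = AH(y) + gamma * z
        x, _ = cg(normal_op, b, x0=z, maxiter=20)   # data-consistency step
        v = denoiser(x + u)                         # denoising (shrinkage) step
        u = u + (x - v)                             # dual update
    return v
\\end{verbatim}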
\n\nWe use the following ADMM based iterations of the PnP algorithm~\\cite{ref:ahmad2020}: $\\bm{v}_0 =\\bm{A}^Hy$, $\\bm{u}_0=\\mathbf{0}$, $\\forall k, \\text{iteration number} =1,2,...\\!\\!\\!$ \n\\begin{subequations}\n\t\\begin{align}\n\t\\label{eq:pnp_admm_a}\n\t\\bm{x}_k & = \\bm{h} \\left( \\bm{v}_{k-1} - \\bm{u}_{k-1} \\right) \\\\\n\t\\label{eq:pnp_admm_b}\n\t\\bm{v}_k & = \\bm{f} \\left( \\bm{x}_{k} + \\bm{u}_{k-1} \\right) \\\\\n\t\\label{eq:pnp_admm_c}\n\t\\bm{u}_k & = \\bm{u}_{k-1} + \\left( \\bm{x}_{k} - \\bm{v}_{k} \\right)\n\t\\end{align}\n\\end{subequations}\nwhere\n\\begin{equation}\n\t\\label{eq:h}\n\t\\bm{h} \\left( \\bm{z} \\right) = \\arg\\min_{x} \\|\\bm{y}-\\bm{Ax}\\|_2^2\n\t + \\gamma \\|\\bm{x}-\\bm{z}\\|_2^2 \n\\end{equation}\nThe step \\eqref{eq:pnp_admm_a} enforces the k-space data consistency, where $ \\bm{h} $ is solved using the conjugate gradient algorithm. Here the ADMM's internal convergence parameter is set to $\\gamma=0.05$ and $ \\bm{z} = \\bm{v}_{k-1} - \\bm{u}_{k-1} $. The step \\eqref{eq:pnp_admm_b} applies the image denoising shrinkage to promote spatiotemporal TSMI priors, and the final step aggregates the two previous steps to update the next iteration. \n\n\n\\subsection{CNN denoiser}\n\\label{ssec:cnn_denoiser}\nOur PnP algorithm is combined with a pre-trained CNN denoiser which is plugged in as $ \\bm{f} $ in \\eqref{eq:pnp_admm_b} to iteratively restore the TMSI using learned image priors. The denoiser has a U-Net shape architecture following that of~\\cite{ref:zhang2021}. Here we modified the network's input and output dimensions to match the number of TSMI's multiple channels (i.e. we used real-valued TSMIs with $t=10$ in our experiments) to enable multichannel spatiotemporal image processing. A noise level map, filled with $\\sigma$ values, of the same dimensions as other channels was appended to the network's input for multi-noise level denoising, following ~\\cite{ref:zhang2021}. Other hidden layers follow exactly the same specifications as~\\cite{ref:zhang2021}. We trained this model using (multichannel) image patches extracted from $\\{(\\bm{x},\\bm{x}+\\bm{n})\\}$ pairs of clean and noise-contaminated TSMIs for various levels $\\sigma$ of AWGN noise. Further, image patches are patch-wise normalised to the [0,1] range. The clean TSMIs were obtained from a dataset of anatomical quantitative maps via (2). \n\n\n\\section{Numerical Experiments}\n\\label{sec:experiments}\n\n\\textbf{Dataset:} A dataset of quantitative T1, T2 and PD tissue maps of 2D axial brain scans of 8 healthy volunteers across 15 slices each were used in this study\\footnote{Data was obtained from a 3T GE scanner (MR750w system - GE Healthcare, Waukesha, WI) with 8-channel receive-only head RF coil, $230\\times230$ mm$^2$ FOV, $230\\times230$ image pixels, $5$ mm slice thickness, and used a FISP acquisition protocol with $T=1000$ repetitions, the same flip angles as~\\cite{jiang2015mr}, inversion time of 18 ms, repetition time of 10 ms and echo time of 1.8 ms. The groundtruth tissue maps were obtained by the LRTV algorithm~\\cite{ref:golbabaee2021-lrtv_mrfresnet}. }.\nClean (groundtruth) TSMIs were retrospectively simulated from these maps via~\\eqref{eq:magnetisation_response} using the EPG Bloch model formalism~\\cite{weigel2015extended} and a truncated (accelerated) variant of the FISP MRF protocol~\\cite{jiang2015mr} with $T=200$ repetitions. PCA was applied to obtain $t=10$ channel dimension-reduced TSMI data~\\cite{ref:mcgivney2014}. 
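A sketch of this temporal-subspace compression is shown below; \\texttt{D} stands for a matrix of simulated fingerprints and the rank $t=10$ follows the text, while variable names and shapes are otherwise illustrative.

\\begin{verbatim}
import numpy as np

def temporal_subspace(D, t=10):
    """Return a (T, t) orthonormal basis from an SVD of the fingerprint matrix D,
    where D has shape (n_atoms, T)."""
    _, _, Vh = np.linalg.svd(D, full_matrices=False)
    return Vh[:t].conj().T

def compress_tsmi(X, V):
    """Project a TSMI X of shape (H, W, T) onto the t-dimensional subspace V."""
    return np.tensordot(X, V, axes=([2], [0]))       # (H, W, t)
\\end{verbatim}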
The TSMIs were real-valued and their spatial resolution was cropped \nfrom $ 230 \\times 230 $ pixels to $ 224 \\times 224 $ pixels for the U-Net. The dataset was split into 105 slices (from 7 subjects) for training and 15 slices (for 1 held-out subject) for testing.\n\\vspace{\\baselineskip}\n\\\\\n\\textbf{Training the CNN denoiser:} The TSMI training data was augmented by random resizing of the spatial resolution. \nThe CNN is a multichannel image-patch denoiser. For this we extracted patches of size $128\\times128\\times10$ with spatial strides of $17$ from the TSMIs. We augmented the patches by flipping across image axes and used random rotations. The patches were then [0,1] normalised. Random AWGN was then added during each iteration of the training algorithm to create pairs of clean and noisy TSMI patches. The CNN was trained following~\\cite{ref:zhang2021} to 500 epochs using L1 loss, Adam optimiser, batch size $16$, initialised weights using Kaiming uniform with no bias, and the learning rate initialised at $10^{-4}$ and halved every 100,000 iterations. Randomly selected levels of AWGN noise from $\\sigma=10^{\\{-4\\,\\text{to}\\,0\\}}$ were used for training the denoiser, following~\\cite{ref:zhang2021}. For the PnP algorithm, the denoiser was tested with five levels of AWGN noise $\\sigma=10^{\\{-4,-3,-2,-1,0\\}}$ from which $\\sigma=10^{-2}$ yielded the best result. A second denoiser trained solely on the optimum noise level found was utilised for our comparisons.\n\\vspace{\\baselineskip}\n\\\\\n\\textbf{Subsampling models:} We simulated two single-coil Cartesian acquisition processes with distinct k-space subsampling patterns: (i) a spiral readout as in~\\cite{ref:chen2020} with rotating patterns across $T=200$ repetitions was used to subsample the $224\\times224$ Cartesian FFT grid across all timeframes, and (ii) k-space multiple rows subsampling pattern i.e. the multi-shot Echo Planar Imaging (EPI)~\\cite{benjamin2019multi}, with shifted readout lines across the timeframes was used for MRF subsampling. \nBoth MRF acquisitions subsampled 771 k-space locations in each timeframe from a total of $224\\times224$, leading to a compression ratio of 65, and were contaminated with AWGN noise of $30$ dB SNR.\n\\vspace{\\baselineskip}\n\\input{.\/sections\/results_fig_tissue_maps_spiral.tex}\n\\input{.\/sections\/results_fig_tissue_maps_epi.tex}\n\\\\\n\\textbf{Tested reconstruction algorithms:} \nWe compared the performance of the proposed PnP-ADMM algorithm to the SVD-MRF~\\cite{ref:mcgivney2014} and LRTV~\\cite{ref:golbabaee2021-lrtv_mrfresnet} TSMI reconstruction baselines. All algorithms incorporated subspace dimensionality reduction ($t=10$). Further they can all adapt to the MRF subsampled acquisition process at testing time. SVD-MRF is a non-iterative approach based on zero-filling reconstruction $\\hat{\\bm{x}}=\\bm{A}^Hy$. The LRTV is an iterative approach to~\\eqref{eq:opt} based on an analytical Total Variation image (prior) regularisation for $\\phi$. The PnP approach uses data-driven image priors learned by the CNN denoiser. The PnP-ADMM ran for $100$ iterations, used $\\gamma=0.05$ and its conjugate gradient steps ran to the solution tolerance $10^{-4}$. The LRTV ran for $200$ iterations and used regularisation parameter $\\lambda=4\\times10^{-5}$.\n\\vspace{\\baselineskip}\n\\\\\n\\textbf{Quantitative mapping:} Dictionary matching was used for mapping the reconstructed TSMIs to the T1, T2 and PD images. 
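Before detailing the dictionary used in this study, the matching step itself can be sketched generically as a per-voxel search for the atom with the largest normalized inner product; array names and shapes below are illustrative.

\\begin{verbatim}
import numpy as np

def dictionary_match(X, D, params):
    """X      : (n_voxels, t) reconstructed TSMI (subspace coefficients)
       D      : (n_atoms, t) dictionary atoms in the same subspace
       params : (n_atoms, 2) T1/T2 values of each atom
       Returns the matched (T1, T2) per voxel and a proton-density estimate."""
    Dn = D / np.linalg.norm(D, axis=1, keepdims=True)     # unit-norm atoms
    corr = X @ Dn.conj().T                                # inner products
    best = np.argmax(np.abs(corr), axis=1)                # best-matching atom
    rows = np.arange(X.shape[0])
    pd = np.abs(corr[rows, best]) / np.linalg.norm(D[best], axis=1)
    return params[best], pd
\\end{verbatim}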
For this an MRF dictionary of $94'777$ atoms (fingerprints) was created using the EPG Bloch response simulations for a logarithmically-spaced grid of $($T1, T2$)\\in[0.01, 6]\\times [0.004,0.6]$ (s). PCA was applied to compress this dictionary's (i.e. the Bloch responses) temporal dimension to a $t=10$ dimensional subspace~\\cite{ref:mcgivney2014}. The same subspace was used for the TSMIs dimensionality reduction and within all tested reconstruction algorithms.\n\\vspace{\\baselineskip}\n\\input{.\/sections\/results_table.tex}\n\\\\\n\\textbf{Evaluation Metrics:} For evaluation of the TSMI reconstruction performance, the Peak Signal-to-Noise-Ratio (PSNR) and Structural Similarity Index Measure (SSIM) averaged across all 10 temporal channels were used. To evaluate tissue maps, the Mean Absolute Error (MAE), PSNR and SSIM were used. Metrics were calculated for all 15 slices of the held-out test subject and averaged.\n\\vspace{\\baselineskip}\n\\\\\n\\textbf{Results and Discussion:} Fig.\\ref{fig:spiral_results}, Fig.\\ref{fig:epi_results} and Table.\\ref{tab:results_table}, shows PnP-ADMM utilising the same CNN denoiser, trained on generic AWGN, can apply to two different forward models with drastically different subsampling patterns. The output is consistent de-aliasing performance for time-series data and consequently the tissue maps (see supplementary material for a comparison of TSMIs).\n\nPnP-ADMM outperforms tested baselines subjectively (Fig.\\ref{fig:spiral_results}, Fig.\\ref{fig:epi_results}) and objectively (Table.\\ref{tab:results_table}) across all tested metrics, for both tested subsampling patterns. Recovering tissue maps from EPI subsampled data is observed to be generally more challenging than the spiral subsampling scheme (because the centre of k-space is densely sampled by spiral readouts), however, as observed the PnP-ADMM algorithm succeeds while other tested baselines fail. The key to the superior performance of PnP-ADMM lies in its ability to utilise spatiotemporal prior information related to the dataset. SVD-MRF utilises only temporal prior information through the use of PCA, while LRTV utilises generic prior information in the form of temporal priors through PCA and spatial priors through Total Variation. The use of dataset specific spatiotemporal priors, learned by a CNN denoiser, is the crux of PnP-ADMM's superior performance.\n\n\n\\section{Conclusion}\n\\label{sec:conclusion}\n\nA proof-of-concept is proposed for a PnP-ADMM approach using deep CNN image denoisers for multi-parametric tissue property quantification using MRF compressed sampled acquisitions, which may be useful for cases where the measurement acquisition scheme is unknown during training for deep learning methods and\/or may change during testing. \nThe method was validated on simulated data and consistently outperformed the tested baselines. This was possible due to the use of data-driven spatiotemporal priors learnt by a pre-trained CNN denoiser, which were critical for enhancing the reconstruction of the TSMIs. Future work will include a variety of measurement acquisition settings, the use of non-gridded sampling trajectories and prospective in-vivo scans.\n\n\n\\section{Compliance with ethical standards}\n\\label{sec:ethics}\n\nThis research study was conducted retrospectively using anonymised human subject scans made available by GE Healthcare who obtained informed consent in compliance with the German Act on Medical Devices. 
Approval was granted by the Ethics Committee of The University of Bath (Date. Sept 2021 \/ No. 6568). \n\n\n\\section{Acknowledgments}\n\\label{sec:acknowledgments}\n\nCarolin M. Pirkl and Marion I. Menzel receive funding from the European Union's Horizon 2020 research and innovation programme, grant agreement No. 952172.\n\n\\bibliographystyle{IEEEbib}\n\n\n\n\n\n\n\n\n\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\@ifstar{\\origsection*}{\\@startsection{section}{1}\\z@{.7\\linespacing\\@plus\\linespacing}{.5\\linespacing}{\\normalfont\\scshape\\centering\\S}}{\\@ifstar{\\origsection*}{\\@startsection{section}{1}\\z@{.7\\linespacing\\@plus\\linespacing}{.5\\linespacing}{\\normalfont\\scshape\\centering\\S}}}\n\\def\\@startsection{section}{1}\\z@{.7\\linespacing\\@plus\\linespacing}{.5\\linespacing}{\\normalfont\\scshape\\centering\\S}{\\@startsection{section}{1}\\z@{.7\\linespacing\\@plus\\linespacing}{.5\\linespacing}{\\normalfont\\scshape\\centering\\S}}\n\\makeatother\n\n\\usepackage{amsmath,amssymb,amsthm}\n\\usepackage{mathrsfs}\n\\usepackage{mathabx}\\changenotsign\n\\usepackage{dsfont}\n\\usepackage{bbm}\n\n\\usepackage{xcolor}\n\\usepackage[backref=section]{hyperref}\n\\usepackage[ocgcolorlinks]{ocgx2}\n\\hypersetup{\n\tcolorlinks=true,\n\tlinkcolor={red!60!black},\n\tcitecolor={green!60!black},\n\turlcolor={blue!60!black},\n}\n\n\n\\usepackage[open,openlevel=2,atend]{bookmark}\n\n\\usepackage[abbrev,msc-links,backrefs]{amsrefs}\n\\usepackage{doi}\n\\renewcommand{\\doitext}{DOI\\,}\n\n\\renewcommand{\\PrintDOI}[1]{\\doi{#1}}\n\n\\renewcommand{\\eprint}[1]{\\href{http:\/\/arxiv.org\/abs\/#1}{arXiv:#1}}\n\n\n\\usepackage[T1]{fontenc}\n\\usepackage{lmodern}\n\\usepackage[babel]{microtype}\n\\usepackage[english]{babel}\n\n\\linespread{1.3}\n\\usepackage{geometry}\n\\geometry{left=27.5mm,right=27.5mm, top=25mm, bottom=25mm}\n\n\\numberwithin{equation}{section}\n\\numberwithin{figure}{section}\n\n\\usepackage{enumitem}\n\\def\\upshape({\\itshape \\roman*\\,}){\\upshape({\\itshape \\roman*\\,})}\n\\def\\upshape(\\Roman*){\\upshape(\\Roman*)}\n\\def\\upshape({\\itshape \\alph*\\,}){\\upshape({\\itshape \\alph*\\,})}\n\\def\\upshape({\\itshape \\Alph*\\,}){\\upshape({\\itshape \\Alph*\\,})}\n\\def\\upshape({\\itshape \\arabic*\\,}){\\upshape({\\itshape \\arabic*\\,})}\n\n\\let\\polishlcross=\\ifmmode\\ell\\else\\polishlcross\\fi\n\\def\\ifmmode\\ell\\else\\polishlcross\\fi{\\ifmmode\\ell\\else\\polishlcross\\fi}\n\n\\def\\ \\text{and}\\ {\\ \\text{and}\\ }\n\\def\\quad\\text{and}\\quad{\\quad\\text{and}\\quad}\n\\def\\qquad\\text{and}\\qquad{\\qquad\\text{and}\\qquad}\n\n\\let\\emptyset=\\varnothing\n\\let\\setminus=\\smallsetminus\n\\let\\backslash=\\smallsetminus\n\n\\makeatletter\n\\def\\mathpalette\\mov@rlay{\\mathpalette\\mov@rlay}\n\\def\\mov@rlay#1#2{\\leavevmode\\vtop{ \\baselineskip\\z@skip \\lineskiplimit-\\maxdimen\n\t\t\\ialign{\\hfil$\\m@th#1##$\\hfil\\cr#2\\crcr}}}\n\\newcommand{\\charfusion}[3][\\mathord]{\n\t#1{\\ifx#1\\mathop\\vphantom{#2}\\fi\n\t\t\\mathpalette\\mov@rlay{#2\\cr#3}\n\t}\n\t\\ifx#1\\mathop\\expandafter\\displaylimits\\fi}\n\\makeatother\n\n\\newcommand{\\charfusion[\\mathbin]{\\cup}{\\cdot}}{\\charfusion[\\mathbin]{\\cup}{\\cdot}}\n\\newcommand{\\charfusion[\\mathop]{\\bigcup}{\\cdot}}{\\charfusion[\\mathop]{\\bigcup}{\\cdot}}\n\n\n\\DeclareFontFamily{U} {MnSymbolC}{}\n\\DeclareSymbolFont{MnSyC} {U} {MnSymbolC}{m}{n}\n\\DeclareFontShape{U}{MnSymbolC}{m}{n}{\n\t<-6> MnSymbolC5\n\t<6-7> MnSymbolC6\n\t<7-8> MnSymbolC7\n\t<8-9> 
MnSymbolC8\n\t<9-10> MnSymbolC9\n\t<10-12> MnSymbolC10\n\t<12-> MnSymbolC12}{}\n\\DeclareMathSymbol{\\powerset}{\\mathord}{MnSyC}{180}\n\\DeclareMathSymbol{\\YY}{\\mathord}{MnSyC}{42}\n\n\n\\usepackage{tikz}\n\\usetikzlibrary{calc,decorations.pathmorphing}\n\\usetikzlibrary{arrows,decorations.pathreplacing}\n\\pgfdeclarelayer{background}\n\\pgfdeclarelayer{foreground}\n\\pgfdeclarelayer{front}\n\\pgfsetlayers{background,main,foreground,front}\n\n\\usepackage{multicol}\n\\usepackage{subcaption}\n\\captionsetup[subfigure]{labelfont=rm}\n\n\\let\\epsilon=\\varepsilon\n\\let\\eps=\\epsilon\n\\let\\phi=\\varphi\n\\let\\rho=\\varrho\n\\let\\theta=\\vartheta\n\\let\\wh=\\widehat\n\n\\def{\\mathds E}{{\\mathds E}}\n\\def{\\mathds N}{{\\mathds N}}\n\\def{\\mathds Q}{{\\mathds Q}}\n\\def{\\mathds G}{{\\mathds G}}\n\\def{\\mathds Z}{{\\mathds Z}}\n\\def{\\mathds P}{{\\mathds P}}\n\\def{\\mathds R}{{\\mathds R}}\n\\def{\\mathds C}{{\\mathds C}}\n\\def{\\mathds 1}{{\\mathds 1}}\n\\def<_{\\mathrm{lex}}{<_{\\mathrm{lex}}}\n\n\\newcommand{\\mathcal{A}}{\\mathcal{A}}\n\\newcommand{\\mathcal{B}}{\\mathcal{B}}\n\\newcommand{\\mathcal{C}}{\\mathcal{C}}\n\\newcommand{\\mathcal{E}}{\\mathcal{E}}\n\\newcommand{\\mathcal{F}}{\\mathcal{F}}\n\\newcommand{\\mathcal{G}}{\\mathcal{G}}\n\\newcommand{\\mathcal{H}}{\\mathcal{H}}\n\\newcommand{\\mathcal{I}}{\\mathcal{I}}\n\\newcommand{\\mathcal{J}}{\\mathcal{J}}\n\\newcommand{\\mathcal{K}}{\\mathcal{K}}\n\\newcommand{\\mathcal{L}}{\\mathcal{L}}\n\\newcommand{\\mathcal{N}}{\\mathcal{N}}\n\\newcommand{\\mathcal{P}}{\\mathcal{P}}\n\\newcommand{\\mathcal{Q}}{\\mathcal{Q}}\n\\newcommand{\\mathcal{R}}{\\mathcal{R}}\n\\newcommand{\\mathcal{S}}{\\mathcal{S}}\n\\newcommand{\\mathcal{T}}{\\mathcal{T}}\n\\newcommand{\\mathcal{X}}{\\mathcal{X}}\n\n\\newcommand{\\mathscr{A}}{\\mathscr{A}}\n\\newcommand{\\mathscr{B}}{\\mathscr{B}}\n\\newcommand{\\mathscr{S}}{\\mathscr{S}}\n\\newcommand{\\mathscr{C}}{\\mathscr{C}}\n\\newcommand{\\mathscr{P}}{\\mathscr{P}}\n\\newcommand{\\mathscr{U}}{\\mathscr{U}}\n\\newcommand{\\mathscr{W}}{\\mathscr{W}}\n\\newcommand{\\mathscr{X}}{\\mathscr{X}}\n\\newcommand{\\mathscr{Z}}{\\mathscr{Z}}\n\n\\newcommand{\\mathfrak{S}}{\\mathfrak{S}}\n\\newcommand{\\mathfrak{B}}{\\mathfrak{B}}\n\\newcommand{\\mathfrak{U}}{\\mathfrak{U}}\n\\newcommand{\\mathfrak{A}}{\\mathfrak{A}}\n\\newcommand{\\mathfrak{E}}{\\mathfrak{E}}\n\\newcommand{\\mathfrak{X}}{\\mathfrak{X}}\n\\newcommand{\\mathfrak{P}}{\\mathfrak{P}}\n\n\\newtheoremstyle{note} {4pt} {4pt} {\\sl} {} {\\bfseries} {.} {.5em} {}\n\\newtheoremstyle{introthms} {3pt} {3pt} {\\itshape} {} {\\bfseries} {.} {.5em} {\\thmnote{#3}}\n\\newtheoremstyle{remark} {2pt} {2pt} {\\rm} {} {\\bfseries} {.} {.3em} {}\n\n\\theoremstyle{plain}\n\\newtheorem{theorem}{Theorem}[section]\n\\newtheorem{lemma}[theorem]{Lemma}\n\\newtheorem{prop}[theorem]{Proposition}\n\\newtheorem{proposition}[theorem]{Proposition}\n\\newtheorem{constr}{Construction}\n\\newtheorem{conj}[theorem]{Conjecture}\n\\newtheorem{cor}[theorem]{Corollary}\n\\newtheorem{fact}[theorem]{Fact}\n\\newtheorem{claim}[theorem]{Claim}\n\n\n\\theoremstyle{note}\n\\newtheorem{dfn}[theorem]{Definition}\n\\newtheorem{definition}[theorem]{Definition}\n\\newtheorem{setup}[theorem]{Setup}\n\n\\theoremstyle{remark}\n\\newtheorem{remark}[theorem]{Remark}\n\\newtheorem{notation}[theorem]{Notation}\n\\newtheorem{exmpl}[theorem]{Example}\n\\newtheorem{question}[theorem]{Question}\n\n\n\\usepackage{lineno}\n\\newcommand*\\patchAmsMathEnvironmentForLineno[1]{\n\t\\expandafter\\let\\csname 
old#1\\expandafter\\endcsname\\csname #1\\endcsname\n\t\\expandafter\\let\\csname oldend#1\\expandafter\\endcsname\\csname end#1\\endcsname\n\t\\renewenvironment{#1}\n\t{\\linenomath\\csname old#1\\endcsname}\n\t{\\csname oldend#1\\endcsname\\endlinenomath}}\n\\newcommand*\\patchBothAmsMathEnvironmentsForLineno[1]{\n\t\\patchAmsMathEnvironmentForLineno{#1}\n\t\\patchAmsMathEnvironmentForLineno{#1*}}\n\\AtBeginDocument{\n\t\\patchBothAmsMathEnvironmentsForLineno{equation}\n\t\\patchBothAmsMathEnvironmentsForLineno{align}\n\t\\patchBothAmsMathEnvironmentsForLineno{flalign}\n\t\\patchBothAmsMathEnvironmentsForLineno{alignat}\n\t\\patchBothAmsMathEnvironmentsForLineno{gather}\n\t\\patchBothAmsMathEnvironmentsForLineno{multline}\n}\n\n\n\\usepackage{scalerel}\n\n\\makeatletter\n\\newcommand{\\overrighharpoonup}[1]{\\ThisStyle{%\n\t\t\\vbox {\\m@th\\ialign{##\\crcr\n\t\t\t\t\\rightharpoonupfill \\crcr\n\t\t\t\t\\noalign{\\kern-\\p@\\nointerlineskip}\n\t\t\t\t$\\hfil\\SavedStyle#1\\hfil$\\crcr}}}}\n\n\\def\\rightharpoonupfill{%\n\t$\\SavedStyle\\m@th\\mkern+0.8mu\\cleaders\\hbox{$\\shortbar\\mkern-4mu$}\\hfill\\rightharpoonuptip\\mkern+0.8mu$}\n\n\\def\\rightharpoonuptip{%\n\t\\raisebox{\\z@}[2pt][1pt]{\\scalebox{0.55}{$\\SavedStyle\\rightharpoonup$}}}\n\n\\def\\shortbar{%\n\t\\smash{\\scalebox{0.55}{$\\SavedStyle\\relbar$}}}\n\\makeatother\n\n\\let\\seq=\\overrighharpoonup\n\n\\makeatletter\n\\newcommand{\\overlefharpoonup}[1]{\\ThisStyle{%\n\t\t\\vbox {\\m@th\\ialign{##\\crcr\n\t\t\t\t\\leftharpoonupfill \\crcr\n\t\t\t\t\\noalign{\\kern-\\p@\\nointerlineskip}\n\t\t\t\t$\\hfil\\SavedStyle#1\\hfil$\\crcr}}}}\n\n\\def\\leftharpoonupfill{%\n\t$\\SavedStyle\\m@th\\mkern+0.8mu\\cleaders\\hbox{$\\shortbar\\mkern-4mu$}\\hfill\\leftharpoonuptip\\mkern+0.8mu$}\n\n\\def\\leftharpoonuptip{%\n\t\\raisebox{\\z@}[2pt][1pt]{\\scalebox{0.55}{$\\SavedStyle\\leftharpoonup$}}}\n\n\\let\\seqq=\\overlefharpoonup\n\n\\let\\lra=\\longrightarrow\n\\def\\mathrm{Red}{\\mathrm{Red}}\n\\def\\mathrm{Green}{\\mathrm{Green}}\n\\def\\mathrm{left}{\\mathrm{left}}\n\\def\\mathrm{right}{\\mathrm{right}}\n\\def\\mathrm{L}{\\mathrm{L}}\n\\def\\Ra\\mathrm{R}\n\n\n\\makeatletter\n\\newsavebox\\myboxA\n\\newsavebox\\myboxB\n\\newlength\\mylenA\n\n\\newcommand*\\xoverline[2][0.75]{%\n\t\\sbox{\\myboxA}{$\\m@th#2$}%\n\t\\setbox\\myboxB\\nul\n\t\\ht\\myboxB=\\ht\\myboxA%\n\t\\dp\\myboxB=\\dp\\myboxA%\n\t\\wd\\myboxB=#1\\wd\\mybox\n\t\\sbox\\myboxB{$\\m@th\\overline{\\copy\\myboxB}$\n\t\\setlength\\mylenA{\\the\\wd\\myboxA\n\t\\addtolength\\mylenA{-\\the\\wd\\myboxB}%\n\t\\ifdim\\wd\\myboxB<\\wd\\myboxA%\n\t\\rlap{\\hskip 0.5\\mylenA\\usebox\\myboxB}{\\usebox\\myboxA}%\n\t\\else\n\t\\hskip -0.5\\mylenA\\rlap{\\usebox\\myboxA}{\\hskip 
0.5\\mylenA\\usebox\\myboxB}%\n\t\\fi}\n\\makeatother\n\n\\DeclareSymbolFont{symbolsC}{U}{txsyc}{m}{n}\n\\SetSymbolFont{symbolsC}{bold}{U}{txsyc}{bx}{n}\n\\DeclareFontSubstitution{U}{txsyc}{m}{n}\n\\DeclareMathSymbol{\\strictif}{\\mathrel}{symbolsC}{74}\n\n\n\\DeclareSymbolFont{stmry}{U}{stmry}{m}{n}\n\\DeclareMathSymbol\\arrownot\\mathrel{stmry}{\"58}\n\\DeclareMathSymbol\\Arrownot\\mathrel{stmry}{\"59}\n\\def\\mathrel{\\mkern5.5mu\\arrownot\\mkern-5.5mu}{\\mathrel{\\mkern5.5mu\\arrownot\\mkern-5.5mu}}\n\\def\\mathrel{\\mkern5.5mu\\Arrownot\\mkern-5.5mu}{\\mathrel{\\mkern5.5mu\\Arrownot\\mkern-5.5mu}}\n\\def\\longarrownot\\longrightarrow{\\mathrel{\\mkern5.5mu\\arrownot\\mkern-5.5mu}\\longrightarrow}\n\\def\\Longarrownot\\Longrightarrow{\\mathrel{\\mkern5.5mu\\Arrownot\\mkern-5.5mu}\\Longrightarrow}\n\\DeclareMathOperator{\\ER}{ER}\n\n\n\\begin{document}\n\\dedicatory{Dedicated to the memory of Ronald Graham}\n\\title{On quantitative aspects of a canonisation theorem for edge-orderings}\n\n\\author[Chr.~Reiher]{Christian Reiher}\n\\address{Fachbereich Mathematik, Universit\\\"at Hamburg, Hamburg, Germany}\n\\email{Christian.Reiher@uni-hamburg.de}\n\\email{schacht@math.uni-hamburg.de}\n\\email{kevin.sames@uni-hamburg.de} \n\n\\author[V.~R\\\"{o}dl]{Vojt\\v{e}ch R\\\"{o}dl}\n\\address{Department of Mathematics,\nEmory University, Atlanta, USA}\n\\email{vrodl@emory.edu\n\\email{marcelo.tadeu.sales@emory.edu}\n\n\\thanks{The second author is supported by NSF grant DMS 1764385.}\n\n\\author[M.~Sales]{Marcelo Sales}\n\\thanks{The third author is partially supported by NSF grant DMS 1764385.}\n\n\\author[K.~Sames]{Kevin Sames}\n\n\n\n\n\\author[M.~Schacht]{Mathias Schacht}\n\\thanks{The fifth author is supported by the ERC (PEPCo 724903)}\n\n\\begin{abstract}\n\tFor integers $k\\ge 2$ and $N\\ge 2k+1$ there are $k!2^k$ {\\it canonical orderings} of \n\tthe edges of the complete $k$-uniform hypergraph with vertex set $[N] = \\{1,2,\\dots, N\\}$.\n\tThese are exactly the orderings with the property that any two subsets $A, B\\subseteq [N]$ \n\tof the same size induce isomorphic suborderings. We study the associated {\\it canonisation} problem \n\tto estimate, given $k$ and $n$, the least integer $N$ such that no matter how the $k$-subsets \n\tof $[N]$ are ordered there always exists an $n$-element set $X\\subseteq [N]$ whose $k$-subsets \n\tare ordered canonically. For fixed $k$ we prove lower and upper bounds on these numbers \n\tthat are $k$ times iterated exponential \n\tin a polynomial of $n$. \n\\end{abstract}\n\n\\maketitle\n\n\\@ifstar{\\origsection*}{\\@startsection{section}{1}\\z@{.7\\linespacing\\@plus\\linespacing}{.5\\linespacing}{\\normalfont\\scshape\\centering\\S}}{Introduction}\\label{sec:intro}\n\nWe consider numerical aspects of an unpublished result of Klaus Leeb from the early seventies\n(see~\\cites{NPRV85, NR17}). For a natural number $N$ we set $[N]=\\{1,2,\\dots,N\\}$. \nGiven a set $X$ and a nonnegative integer $k$ we write $X^{(k)}=\\{e\\subseteq X\\colon |e|=k\\}$ \nfor the set of all $k$-element subsets of $X$. \n\n\\subsection{Ramsey theory}\nRecall that for any integers $k, n, r\\ge 1$, Ramsey's theorem~\\cite{Ramsey30} informs us \nthat every sufficiently large integer $N$ satisfies the partition relation \n\\begin{equation}\\label{eq:1500}\n\tN\\longrightarrow (n)^k_r\\,,\n\\end{equation}\nmeaning that for every colouring of $[N]^{(k)}$ with $r$ colours there exists a set $X\\subseteq [N]$\nof size~$n$ such that~$X^{(k)}$ is monochromatic. 
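To illustrate the arrow notation in the smallest nontrivial case: every colouring of the pairs from $[6]$ with two colours produces a monochromatic triangle, so that $6\\longrightarrow (3)^2_2$, whereas colouring the pairs of $[5]$ according to whether or not they lie on a fixed $5$-cycle yields no monochromatic triangle, so $N=5$ does not have this property.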
The least number~$N$ validating~\\eqref{eq:1500}\nis denoted by~$\\mathrm{R}^{(k)}(n, r)$. The {\\it negative} partition relation $N\\longarrownot\\longrightarrow (n)^k_r$\nexpresses the fact that~\\eqref{eq:1500} fails. \n\n\nFor colourings $f\\colon [N]^{(k)}\\longrightarrow {\\mathds N}$ with infinitely many colours, however, \none can, in general, not even hope to obtain a monochromatic set of size $k+1$. \nFor instance, it may happen \nthat every $k$-element subset of $[N]$ has its own private colour. \nNevertheless, Erd\\H{o}s and Rado~\\cites{ER52, ER50}\nestablished a meaningful generalisation of Ramsey's theorem to colourings with infinitely many \ncolours, even if the ground sets stay finite. In this context it is convenient to regard such \ncolourings as equivalence relations, two $k$-sets being in the same equivalence class if and only \nif they are of the same colour. \nNow Erd\\H{o}s and Rado~\\cites{ER52, ER50} call an equivalence relation \non $[N]^{(k)}$ {\\it canonical} if for any two subsets $A, B\\subseteq [N]$ of the same size \nthe equivalence relations induced on~$A^{(k)}$ and~$B^{(k)}$ correspond to each other via \nthe order preserving map between $A$ and $B$. So, roughly speaking, an equivalence relation is \ncanonical if you cannot change its essential properties by restricting your attention to a subset. \nIt turns out that for~$N\\ge 2k+1$ there are always exactly $2^k$ canonical equivalence relations \non $[N]^{(k)}$ that can be parametrised by subsets~$I\\subseteq [k]$ in the following natural way. \nLet $x, y\\in [N]^{(k)}$ be two $k$-sets and let $x=\\{x_1, \\dots, x_k\\}$ \nas well as $y=\\{y_1, \\dots, y_k\\}$ enumerate their elements in increasing order. \nWe write $x\\equiv_I y$ if and only if $x_i=y_i$ holds for all~$i\\in I$.\nNow $\\{\\equiv_I\\colon I\\subseteq [k]\\}$ is the collection of all canonical equivalence \nrelations on~$[N]^{(k)}$. The {\\it canonical Ramsey theorem} of Erd\\H{o}s and Rado \nasserts that given two integers~$k$ and~$n$ there exists an integer $\\ER^{(k)}(n)$ such that \nfor every $N\\ge\\ER^{(k)}(n)$ and every equivalence relation~$\\equiv$ on~$[N]^{(k)}$ there exists a \nset $X\\subseteq [N]$ of size $n$ with the property that $\\equiv$ is canonical on~$X^{(k)}$. \n\nThis result sparked the development of {\\it canonical Ramsey theory} in the seventies. \nLet us commemorate some contributions to the area due to {\\sc Ronald Graham}, \nwho was among the main protagonists of Ramsey theory in the $20^{\\mathrm{th}}$ century: Together \nwith Erd\\H{o}s~\\cite{EG80}, he established a canonical version of van der Waerden's theorem, \nstating that if for $N\\ge \\mathrm{EG}(k)$ we colour $[N]$ with arbitrarily many colours, then there \nexists an arithmetic progression of length $k$ which is either monochromatic or receives~$k$ \ndistinct colours. With Deuber, Pr\\\"omel, \nand Voigt~\\cite{DGPV} he later obtained a canonical version of the Gallai-Witt theorem, \nwhich is more difficult to state --- in $d$ dimensions the canonical colourings \nfor a given configuration $F\\subseteq {\\mathds Z}^d$ are now parametrised by vector \nsubspaces $U\\subseteq {\\mathds Q}^d$ having a basis of vectors of the form $x-y$ with $x, y\\in F$.\nInterestingly, {\\sc Ronald Graham} was also among the small number of co-authors of Klaus \nLeeb~\\cite{GLR, GLK2} and in joint work with Rothschild they settled a famous conjecture of Rota \non Ramsey properties of finite vector spaces.\n\nThe canonisation discussed here concerns linear orderings. 
We say that a linear \norder $([N]^{(k)}, \\strictif)$ is {\\it canonical} if for any two sets $A, B\\subseteq [N]$\nof the same size the order preserving map from $A$ to $B$ induces an order preserving \nmap from $(A^{(k)}, \\strictif)$ to $(B^{(k)}, \\strictif)$. It turns out that for $N\\ge 2k+1$ \nthere are exactly $k!2^k$ canonical orderings of $[N]^{(k)}$ that can uniformly be parametrised by \npairs $(\\eps, \\sigma)$ consisting of a {\\it sign vector} $\\eps\\in \\{-1,+1\\}^k$ and a \npermutation $\\sigma\\in \\mathfrak{S}_k$, i.e., a bijective map $\\sigma\\colon [k]\\lra [k]$. \n\n\\begin{definition}\\label{def:canonical}\n\tLet $N, k\\ge 1$ be integers, and let $(\\eps, \\sigma)\\in \\{+1, -1\\}^k\\times \\mathfrak{S}_k$ be a pair \n\tconsisting of a sign vector $\\eps=(\\eps_1, \\dots, \\eps_k)$ and a permutation $\\sigma$ of $[k]$. \n\tThe ordering $\\strictif$ on~$[N]^{(k)}$ \n\t{\\it associated with $(\\eps, \\sigma)$} is the unique ordering with the property that \n\tif \n\t%\n\t\\[\n\t\t1\\le a_1< \\dots 2$, where $c$ is a positive constant. \nThey also proved $\\mathrm{R}^{(k)}(n, 4)\\ge t_{k-1}(c_k n)$ for all $k\\ge 2$, \nwhere $c_k>0$ is an absolute constant. In the other direction, it is known \nthat $\\mathrm{R}^{(k)}(n, r)\\le t_{k-1}(C_{k, r}n)$.\n\nEstimates on the canonical Ramsey numbers $\\ER^{(k)}(n)$ were studied in \\cites{LR95, Sh96}. \nNotably, Lefmann and R\\\"{o}dl proved the lower bound $\\ER^{(k)}(n)\\ge t_{k-1}(c_kn^2)$,\nwhile Shelah obtained the complementary upper bound $\\ER^{(k)}(n)\\le t_{k-1}(C_kn^{8(2k-1)})$.\n\nLet us now turn our attention to the Leeb numbers $\\mathrm{L}^{(k)}(n)$. For $k=1$, there exist\nonly two canonical orderings of $[N]^{(1)}$, namely the ``increasing'' and the ``decreasing''\none corresponding to the sign vectors $\\eps=(+1)$ and $\\eps=(-1)$, respectively. \nThus the well-known Erd\\H{o}s-Szekeres theorem~\\cite{ES35} yields the exact \nvalue $\\mathrm{L}^{(1)}(n)=(n-1)^2+1$. Accordingly, Leeb's theorem \ncan be viewed as a multidimensional version of the Erd\\H{o}s-Szekeres theorem and we refer \nto~\\cite{Lang} for further variations on this theme. Our own results can be summarised as follows.\n\n\\begin{theorem}\\label{thm:lower}\n\tIf $n\\ge 4$ and $R\\longarrownot\\longrightarrow (n-1)^2_2$, then $\\mathrm{L}^{(2)}(n)> 2^{R}$. Moreover, if $n\\ge k\\ge 3$, \n\tthen $\\mathrm{L}^{(k)}(4n+k) > 2^{\\mathrm{L}^{(k-1)}(n)-1}$. \n\\end{theorem}\n\nDue to the known lower bounds on diagonal Ramsey numbers this implies \n\\[\n\t\\mathrm{L}^{(2)}(n)\\ge t_2(n\/2)\n\t\\quad \\text {as well as } \\quad \n\t\\mathrm{L}^{(k)}(n)\\ge t_{k}(c_kn) \\text{ for } k\\ge 3\\,. \n\\]\nWe offer an upper bound with the same number of exponentiations.\n\n\\begin{theorem}\\label{thm:upper}\n\tFor every $k\\ge 2$ there exists a constant $C_k$ such that $\\mathrm{L}^{(k)}(n)\\le t_{k}(n^{C_k})$\n\tholds for every $n\\ge 2k$.\n\\end{theorem}\n\nThe case $k=2$ of Theorems~\\ref{thm:lower} and~\\ref{thm:upper} was obtained independently by Conlon, \nFox, and Sudakov in unpublished work~\\cite{Fox}.\n\n\\subsection*{Organisation} We prove Theorem~\\ref{thm:lower} in Section~\\ref{sec:comb}.\nThe upper bound, Theorem~\\ref{thm:upper}, is established in Section~\\ref{sec:upper}. \nLemma~\\ref{lem:2150} from this proof turns out to be applicable to Erd\\H{o}s-Rado \nnumbers as well and Section~\\ref{sec:ERS} illustrates this by showing a variant of \nShelah's result, namely $\\ER^{(k)}(n)\\le t_{k-1}(C_kn^{6k})$. 
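\n\nLet us also make our notation for iterated exponentials explicit: throughout, $t_j$ denotes the tower function, normalised here by $t_0(x)=x$ and $t_{j+1}(x)=2^{t_j(x)}$, so that $t_1(x)=2^x$ and $t_2(x)=2^{2^x}$; this is the normalisation consistent with the bounds quoted above, such as $\\mathrm{L}^{(2)}(n)\\ge t_2(n\/2)$.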
\n\n\\@ifstar{\\origsection*}{\\@startsection{section}{1}\\z@{.7\\linespacing\\@plus\\linespacing}{.5\\linespacing}{\\normalfont\\scshape\\centering\\S}}{Lower bounds: Constructions}\\label{sec:comb}\n\n\\subsection{Trees and combs}\nOur lower bound constructions take inspiration from the negative stepping-up lemma due\nto Erd\\H{o}s, Hajnal, and Rado \\cite{EHR65}. Notice that in order to prove an inequality \nof the form $\\mathrm{L}^{(k)}(n)>N$ one needs to exhibit a special ordering $([N]^{(k)}, \\strictif)$\nfor which there is no set $Z\\subseteq [N]$ of size $n$ such that $\\strictif$ orders $Z^{(k)}$ \ncanonically. For us, $N=2^m$ will always be a power of two and for reasons of visualisability \nwe identify the numbers in $[N]$ with the leaves of a binary tree of height $m$. \nWe label the levels of our tree from the bottom to the top with the numbers in $[m]$. \nSo the root is at level $1$ and the leaves are immediately above \nlevel $m$ (see Figure~\\ref{fig:optimalline}).\n\n\\begin{figure}[h]\n\\centering\n{\\hfil \\begin{tikzpicture}[scale=1.0\n \\coordinate (A) at (0,0);\n \\coordinate (B) at (-2,1);\n \\coordinate (C) at (2,1);\n \\coordinate (D) at (-3,2);\n \\coordinate (E) at (-1,2);\n\t\\coordinate (F) at (1,2);\n \\coordinate (G) at (3,2);\n \\coordinate [label=above:$1$] (H1) at (-3.5,3);\n \\coordinate [label=above:$2$] (H2) at (-2.5,3);\n \\coordinate [label=above:$3$] (H3) at (-1.5,3);\n \\coordinate [label=above:$4$] (H4) at (-0.5,3);\n \\coordinate [label=above:$5$] (H5) at (0.5,3);\n \\coordinate [label=above:$6$] (H6) at (1.5,3);\n \\coordinate [label=above:$7$] (H7) at (2.5,3);\n \\coordinate [label=above:$8$] (H8) at (3.5,3);\n \n \\coordinate [label=left:$1$] (L1) at (-5,0);\n \\coordinate [label=left:$2$] (L2) at (-5,1);\n \\coordinate [label=left:$3$] (L3) at (-5,2);\n \\coordinate (R1) at (5,0);\n \\coordinate (R2) at (5,1);\n \\coordinate (R3) at (5,2);\n\n \\draw[line width=0.5] (C)--(A)--(B);\n \\draw[line width=0.5] (D)--(B)--(E);\n \\draw[line width=0.5] (F)--(C)--(G);\n \\draw[line width=0.5] (H1)--(D)--(H2);\n \\draw[line width=0.5] (H3)--(E)--(H4);\n \\draw[line width=0.5] (H5)--(F)--(H6);\n \\draw[line width=0.5] (H7)--(G)--(H8);\n \\draw[line width=0.5] (L1)--(L2)--(L3);\n \\draw[dashed] (L1)--(R1);\n \\draw[dashed] (L2)--(R2);\n \\draw[dashed] (L3)--(R3);\n \\draw (L1) circle [radius=0.05];\n \t\\draw (L2) circle [radius=0.05];\n \t\\draw (L3) circle [radius=0.05];\n \n \\end{tikzpicture}\\hfil}\n\\caption{The binary tree and its levels for $m=3$.}\n\\label{fig:optimalline}\n\\end{figure}\n\nAlternatively, we can identify $[N]$ with the set $\\{+1, -1\\}^m$ of all $\\pm1$-vectors of length~$m$\nsuch that the standard ordering on $[N]$ corresponds to the lexicographic one \non~$\\{+1, -1\\}^m$. The literature often works with the number $0$ instead of $-1$ here, but for the \napplication we have in mind our choice turns out to be advantageous. 
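For example, when $m=3$ the leaves $1$ and $2$ of Figure~\\ref{fig:optimalline} hang below a common vertex at level $3$, so the corresponding vectors in $\\{+1, -1\\}^3$ agree in their first two coordinates and differ only in the last one, whereas the vectors corresponding to the leaves $4$ and $5$ already differ in their first coordinate. 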
\n\nNow every set $X\\subseteq [N]$ generates a tree $T_X$, which is the subtree of our binary tree \ngiven by the union of the paths from the leaves in $X$ to the root (see Figure~\\ref{fig:2057}).\n\n\n\\begin{figure}[h]\n\\centering\n{\\hfil \\begin{tikzpicture}[scale=1.0\n \\coordinate (A) at (0,0);\n \\coordinate (B) at (-2,1);\n \\coordinate (C) at (2,1);\n \\coordinate (D) at (-3,2);\n \\coordinate (E) at (-1,2);\n\t\\coordinate (F) at (1,2);\n \\coordinate (G) at (3,2);\n \\coordinate [label=above:$1$] (H1) at (-3.5,3);\n \\coordinate [label=above:$2$] (H2) at (-2.5,3);\n \\coordinate [label=above:$3$] (H3) at (-1.5,3);\n \\coordinate [label=above:$4$] (H4) at (-0.5,3);\n \\coordinate [label=above:$5$] (H5) at (0.5,3);\n \\coordinate [label=above:$6$] (H6) at (1.5,3);\n \\coordinate [label=above:$7$] (H7) at (2.5,3);\n \\coordinate [label=above:$8$] (H8) at (3.5,3);\n \n \\coordinate [label=left:$1$] (L1) at (-5,0);\n \\coordinate [label=left:$2$] (L2) at (-5,1);\n \\coordinate [label=left:$3$] (L3) at (-5,2);\n \\coordinate (R1) at (5,0);\n \\coordinate (R2) at (5,1);\n \\coordinate (R3) at (5,2);\n\n \\draw[line width=2.5] (C)--(A)--(B);\n \\draw[line width=2.5] (D)--(B)--(E);\n \\draw[line width=0.5] (F)--(C);\n \\draw[line width=2.5] (G)--(C);\n \\draw[line width=0.5] (H1)--(D);\n \\draw[line width=2.5] (H2)--(D);\n \\draw[line width=2.5] (H3)--(E);\n \\draw[line width=0.5] (H4)--(E);\n \\draw[line width=0.5] (H5)--(F)--(H6);\n \\draw[line width=2.5] (H7)--(G);\n \\draw[line width=0.5] (H8)--(G);\n \\draw[line width=0.5] (L1)--(L2)--(L3);\n \\draw[dashed] (L1)--(R1);\n \\draw[dashed] (L2)--(R2);\n \\draw[dashed] (L3)--(R3);\n \\draw (L1) circle [radius=0.05];\n \t\\draw (L2) circle [radius=0.05];\n \t\\draw (L3) circle [radius=0.05];\n \t\\draw[fill] (H2) circle [radius=0.1];\n \t\\draw[fill] (H3) circle [radius=0.1];\n \t\\draw[fill] (H7) circle [radius=0.1];\n \t\\draw[fill] (B) circle [radius=0.1];\n \t\\draw[fill] (A) circle [radius=0.1];\n \n \\end{tikzpicture}\\hfil}\n\\caption{The auxiliary tree $T_X$ for $X=\\{2,3,7\\}$.}\n\\label{fig:2057}\n\\end{figure}\n\nAn essential observation on which both the double-exponential lower bound construction \nfor the four-colour Ramsey number of triples and our lower bound on $\\mathrm{L}^{(2)}(n)$ rely \nis that there are two essentially different kinds of triples: those engendering a ``left tree'' \nand those engendering a ``right tree''. Roughly speaking the difference is that when descending\nfrom the leaves to the root, the two elements concurring first are the two smaller \nones for left trees and the two larger ones for right trees. For instance, Figure~\\ref{fig:2057} \ndisplays a left tree.\n\nTo make these notions more precise we introduce the following notation. Given at least two distinct \nvectors $x_1, \\dots, x_t\\in \\{-1, +1\\}^m$ with coordinates $x_i=(x_{i1}, \\dots, x_{im})$ we write \n\\[\n\t\\delta(x_1, \\dots, x_t)=\\min\\{\\mu\\in [m]\\colon x_{1\\mu}= \\dots = x_{t\\mu} \\text{ is not the case}\\}\\,.\n\\]\nfor the first level where two of them differ. \n\nNow let $xyz\\in [N]^{(3)}$ be an arbitrary triple with $x\\delta(y, z)$ and a right tree whenever $\\delta(x, y)<\\delta(y, z)$. \n\nWhen the triples are coloured with $\\{\\mathrm{left}, \\mathrm{right}\\}$ depending on whether they form left\ntrees or right trees, the monochromatic sets are called {\\it combs}. 
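As an illustration, consider once more the set $X=\\{2,3,7\\}$ from Figure~\\ref{fig:2057}: the leaves $2$ and $3$ separate only at level $2$, while $3$ and $7$ (and likewise $2$ and $7$) separate already at the root, so that $\\delta(2,3)=2>1=\\delta(3,7)$; the two smaller elements concur first, the triple generates a left tree, and $X$ is a comb. 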
\nMore explicitly, for \n\\[\t\n\t1\\le x_1<\\dots < x_t\\le N\n\\]\nthe set $X=\\{x_1, \\dots, x_t\\}$ is a {\\it left comb} if \n$\\delta(x_1, x_2)>\\delta(x_2, x_3)>\\dots >\\delta(x_{t-1}, x_t)$\nand a {\\it right comb} if \n$\\delta(x_1, x_2)<\\delta(x_2, x_3)<\\dots <\\delta(x_{t-1}, x_t)$ (see Figure~\\ref{fig:0001}). \nFor instance, the empty set, every singleton, and every pair are both a left comb and a \nright comb. Since every \ntriple is either a left tree or a right tree, triples are combs in exactly one of the two \ndirections. Some quadruples fail to be combs. \n\n\\begin{figure}[h]\n\\centering\n{\\hfil \\begin{tikzpicture}[scale=0.7\n \\coordinate (R1) at (-3,0);\n \\coordinate (R2) at (-3.5,1);\n \\coordinate (R3) at (-4,2);\n \\coordinate (R4) at (-4.5,3);\n \\coordinate (R5) at (-5,4);\n\t\\coordinate (R6) at (-4,4);\n \\coordinate (R7) at (-3,4);\n \\coordinate (R8) at (-2,4);\n \\coordinate (R9) at (-1,4);\n \\coordinate (H1) at (3,0);\n \\coordinate (H2) at (3.5,1);\n \\coordinate (H3) at (4,2);\n \\coordinate (H4) at (4.5,3);\n \\coordinate (H5) at (5,4);\n \\coordinate (H6) at (4,4);\n \\coordinate (H7) at (3,4);\n \\coordinate (H8) at (2,4);\n \\coordinate (H9) at (1,4);\n \n\n \\draw[line width=1] (R1)--(R2)--(R3)--(R4)--(R5);\n \\draw[line width=1] (R1)--(R9);\n \\draw[line width=1] (R2)--(R8);\n \\draw[line width=1] (R3)--(R7);\n \\draw[line width=1] (R4)--(R6);\n \n \\draw[line width=1] (H1)--(H2)--(H3)--(H4)--(H5);\n \\draw[line width=1] (H1)--(H9);\n \\draw[line width=1] (H2)--(H8);\n \\draw[line width=1] (H3)--(H7);\n \\draw[line width=1] (H4)--(H6);\n \n \\draw[fill] (R1) circle [radius=0.1];\n \t\\draw[fill] (R2) circle [radius=0.1];\n \t\\draw[fill] (R3) circle [radius=0.1];\n \\draw[fill] (R4) circle [radius=0.1];\n \t\\draw[fill] (R5) circle [radius=0.1];\n \t\\draw[fill] (R6) circle [radius=0.1];\n \\draw[fill] (R7) circle [radius=0.1];\n \t\\draw[fill] (R8) circle [radius=0.1];\n \t\\draw[fill] (R9) circle [radius=0.1];\n \\draw[fill] (H1) circle [radius=0.1];\n \t\\draw[fill] (H2) circle [radius=0.1];\n \t\\draw[fill] (H3) circle [radius=0.1];\n \\draw[fill] (H4) circle [radius=0.1];\n \t\\draw[fill] (H5) circle [radius=0.1];\n \t\\draw[fill] (H6) circle [radius=0.1];\n \t\\draw[fill] (H7) circle [radius=0.1];\n \t\\draw[fill] (H8) circle [radius=0.1];\n \t\\draw[fill] (H9) circle [radius=0.1];\n \t\n \n \\end{tikzpicture}\\hfil}\n\\caption{A left comb and a right comb.}\n\\label{fig:0001}\n\\end{figure}\n\n\\subsection{Ordering pairs}\nLet us recall that for Ramsey numbers the negative stepping-up lemma shows, e.g., that if \n$m\\longarrownot\\longrightarrow (n-1)^2_2$, then $2^m\\longarrownot\\longrightarrow (n)^3_4$. As we shall reuse the argument, we provide a brief\nsketch. Let $f\\colon [2^m]^{(3)}\\lra \\{\\mathrm{left}, \\mathrm{right}\\}$ be the colouring telling\nus whether a triple generates a left tree or a right tree and \nlet $g\\colon [m]^{(2)}\\lra \\{+1, -1\\}$ be a colouring exemplifying the negative partition \nrelation $m\\longarrownot\\longrightarrow (n-1)^2_2$. Define a \ncolouring \n\\[\n\th\\colon [2^m]^{(3)}\\lra \\{\\mathrm{left}, \\mathrm{right}\\}\\times \\{+1, -1\\}\n\\]\nof the triples with four colours by \n\\[\n\th(xyz)=\\bigl(f(xyz), g\\bigl(\\delta(xy), \\delta(yz)\\bigr)\\bigr)\n\\]\nwhenever $1\\le x2^m$ will be complete. 
\n\nLet us introduce one final piece of notation.\nGiven a vector $x=(x_1, \\dots, x_m)\\in \\{-1, +1\\}^m$ and~$\\delta\\in [m]$ we \ndefine the vector $x\\ltimes\\delta\\in\\{-1, +1\\}^{m-\\delta}$ by\n\\[\n\tx\\ltimes\\delta=\\bigl(x_{\\delta+1}g(\\delta, \\delta+1), \\dots, x_mg(\\delta, m)\\bigr)\\,.\n\\]\n\nNow let $\\strictif$ be an arbitrary ordering with the property that for any two distinct \npairs $xy$ and $x'y'$ with $xz_{r-1}$ \n\t\t\t\tis in $Z_\\nu\\cup A_\\nu$, then $f(z_1, \\dots, z_r)=f'(z_1, \\dots, z_{r-1})$.\n\t\t\\item\\label{it:52} If $i\\in I$, the numbers $z_1<\\dotsz_{s-1}$ is in $Z_\\nu\\cup A_\\nu$, \n\t\t\t\tthen $g_i(z_1, \\dots, z_{s})=g'_i(z_1, \\dots, z_{s-1})$.\n\t\t\\item\\label{it:53} If $i\\in I$, the numbers $z_1<\\dots0$ is always valid.\nLet us check that, conversely, in the permutation-definite case we can define a permutation $\\sigma$ \nhaving this property. \n\n\\begin{lemma}\\label{lem:1518}\n\tSuppose that $N\\ge 2k$. If the ordering $([N]^{(k)}, \\strictif)$ is both sign-definite \n\tand permutation-definite, then there exists a permutation $\\sigma\\in \\mathfrak{S}_k$ such that \n\t%\n\t\\[\n\t\t\\sigma_{ij}\\cdot \\bigl(\\sigma^{-1}(j)-\\sigma^{-1}(i)\\bigr)>0\n\t\\]\n\n\tholds whenever $1\\le i0$ holds.\n\\end{proof}\n\nThe remainder of this section deals with the question how much a sign-definite and \npermutation-definite ordering $([N]^{(k)}, \\strictif)$ with sign-vector $\\eps$ and \npermutation $\\sigma$ (obtained by means of Lemma~\\ref{lem:1518}) needs to have in \ncommon with the canonical ordering associated \nwith $(\\eps, \\sigma)$ in the sense of Definition~\\ref{def:canonical}. For instance, Lemma~\\ref{lem:1458} below asserts that under these circumstances there\nexists a dense subset $W\\subseteq [N]$ such that $(W^{(k)}, \\strictif)$ is ordered in this \ncanonical way.\n\nThe strategy we use for this purpose is the following. Suppose that $x, y\\in [N]^{(k)}$ \nare written in the form $x=\\{x_1, \\dots, x_k\\}$ and $y=\\{y_1, \\dots, y_k\\}$, \nwhere $x_1<\\dots\\max\\{x_{\\sigma(i)-1}, y_{\\sigma(i)-1}\\}$\n\t\\end{enumerate} \n\t%\n\thold, then the pair $(x, y)$ is $(\\eps, \\sigma)$-sound. \n\\end{lemma}\n\n\\begin{proof}\n\tWithout loss of generality we may assume $\\eps_{\\sigma(i)}=+1$. We are to prove \n\tthat $x\\strictif y$ and in all cases this will be accomplished by finding a $k$-set \n\t$z\\subseteq L$ such that the pairs~$(x, z)$ and~$(z, y)$ are $(\\eps, \\sigma)$-sound\n\tand $x\\strictif z\\strictif y$ holds. \n\t\n\tSuppose first that $\\min(I)<\\sigma(i)<\\max(I)$. \n\tDue to $a\\in (x_{\\sigma(i)}, y_{\\sigma(i)})\\subseteq (x_{\\sigma(i)-1}, y_{\\sigma(i)+1})$\n\tthe set $z=\\{x_1, \\dots, x_{\\sigma(i)-1}, a, y_{\\sigma(i)+1}, \\dots, y_k\\}$ has $k$ \n\telements that have just been enumerated in increasing order. Moreover $\\sigma(i)\\ne \\min(I)$\n\tyields $|\\Delta(x, z)|\\le t$ and, therefore, $(x, z)$ is indeed $(\\eps, \\sigma)$-sound, which\n\timplies $x\\strictif z$. Similarly, $\\sigma(i)\\ne \\max(I)$ leads to $z\\strictif y$.\n\t\n\tNext we suppose that $\\sigma(i)=\\min(I)$, which owing to the second bullet entails \n\t%\n\t\\[\n\t\ta<\\min\\{x_{\\sigma(i)+1}, y_{\\sigma(i)+1}\\}\\,.\n\t\\]\n\n\tLet $j\\in [k]$ be the second smallest index with $\\sigma(j)\\in I$. 
\n\t\n\t\\smallskip\n \t%\n\t\\begin{center}\n\t\\begin{tabular}{c|c|c}\n\n\tIf & and & then we set $z=$\\\\ \\hline \n\t\\rule{0pt}{3ex} $x_{\\sigma(j)}< y_{\\sigma(j)}$ & $\\sigma(j)<\\max(I)$ & \n\t\t$\\{x_1, \\dots, x_{\\sigma(i)-1}, a, x_{\\sigma(i)+1}, \\dots, x_{\\sigma(j)}, \n\t\t\ty_{\\sigma(j)+1}, \\dots, y_k\\}$,\\\\\n\t\\rule{0pt}{3ex} $x_{\\sigma(j)}< y_{\\sigma(j)}$ & $\\sigma(j)=\\max(I)$ & \n\t\t$\\{x_1, \\dots, x_{\\sigma(i)-1}, a, x_{\\sigma(i)+1}, \\dots, x_{\\sigma(j)-1}, \n\t\t\ty_{\\sigma(j)}, \\dots, y_k\\}$,\\\\\n\t\\rule{0pt}{3ex} $x_{\\sigma(j)}> y_{\\sigma(j)}$ & $\\sigma(j)<\\max(I)$ & \n\t\t$\\{x_1, \\dots, x_{\\sigma(i)-1}, a, y_{\\sigma(i)+1}, \\dots, y_{\\sigma(j)}, \n\t\t\tx_{\\sigma(j)+1}, \\dots, x_k\\}$,\\\\\n\t\\rule{0pt}{3ex} $x_{\\sigma(j)}> y_{\\sigma(j)}$ & $\\sigma(j)=\\max(I)$ &\n\t\t$\\{x_1, \\dots, x_{\\sigma(i)-1}, a, y_{\\sigma(i)+1}, \\dots, y_{\\sigma(j)-1}, \n\t\t\tx_{\\sigma(j)}, \\dots, x_k\\}$.\n\t\\end{tabular}\n\t\\end{center}\n\\medskip\n\nIn all four cases we have listed the elements of $z$ in increasing order. Furthermore, \nin almost all cases we have $|\\Delta(x, z)|\\le t$ and, consequently, $x\\strictif z$. \nIn fact, the only exception occurs if we are in the second case and $t=2$. But if this happens, \nthen \n\\[\n\tP=\\bigl(\\{x_1, \\dots, x_{\\sigma(i)-1}\\}, \\{x_{\\sigma(i)}, a\\}, \n\t\t\\{x_{\\sigma(i)+1}, \\dots, x_{\\sigma(j)-1}\\}, \\{x_{\\sigma(j)}, y_{\\sigma(j)}\\}, \n\t\t\\{y_{\\sigma(j)+1}, \\dots, y_k\\}\\bigr)\n\\]\nis an $\\sigma(i)\\sigma(j)$-decider. \nSince the ordering $\\strictif$ is permutation-definite, we know that it orders the four-element \nset $\\langle P\\rangle$ ``correctly'' and, as $x$, $z$ belong to this set, $x\\strictif z$ holds \nin this case as well. \n\nSimilarly, in almost all cases we have $\\Delta(y, z)\\le t$ and $y\\strictif z$, the only \nexception occurring if we are in the fourth case and $t=2$. Under these circumstances \n\\[\n\tQ=\\bigl(\\{x_1, \\dots, x_{\\sigma(i)-1}\\}, \\{a, y_{\\sigma(i)}\\}, \n\t\t\\{y_{\\sigma(i)+1}, \\dots, y_{\\sigma(j)-1}\\}, \\{x_{\\sigma(j)}, y_{\\sigma(j)}\\}, \n\t\t\\{x_{\\sigma(j)+1}, \\dots, x_k\\}\\bigr)\n\\]\nis an $\\sigma(i)\\sigma(j)$-decider and, as before, $y, z\\in \\langle Q\\rangle$ implies $y\\strictif z$.\n\nThis concludes our discussion of the case $\\sigma(i)=\\min(I)$ and it remains to deal with \nthe case $\\sigma(i)=\\max(I)$. Here one can either perform a similar argument, or one argues\nthat this case reduces to the previous one by reversion the ordering of the ground set. That\nis, one considers the ordering $\\bigl((-L)^{(k)}, \\strictif^\\star\\bigr)$ defined by \n\\[\n\tx\\strictif^\\star y \n\t\\,\\,\\, \\Longleftrightarrow \\,\\,\\,\n\t(-x) \\strictif (-y)\\,,\n\\]\nwhere $-L$ means $\\{-\\ell\\colon \\ell\\in L\\}$ and $-x$, $-y$ are defined similarly. \nIf one replaces $\\strictif$ by $\\strictif^\\star$, \nthen $I^\\star=(k+1)-I$ assumes the r\\^{o}le of $I$, minima correspond to maxima and the second and \nthird bullet in the assumption of our lemma are exchanged. \n\\end{proof}\n\nWe proceed with the existence of ``dense'' canonical subsets promised earlier. 
\n\n\\begin{lemma}\\label{lem:1458}\n\tIf $N\\ge 2k$ and the ordering $([N]^{(k)}, \\strictif)$ is both sign-definite \n\tand permutation-definite, then there exists a set $W\\subseteq [N]$ of size $|W|\\ge 2^{1-k}N$\n\tsuch that $\\strictif$ is canonical on~$W^{(k)}$.\n\\end{lemma}\n\n\\begin{proof}\n\tFor every $t\\in [k]$ we set \n\t%\n\t\\[\n\t\tW_t=\\bigl\\{n\\in [N]\\colon n\\equiv 1\\pmod{2^{t-1}}\\bigr\\}\\,.\n\t\\]\n\t%\n\tClearly $|W_k|\\ge 2^{1-k}N$, so it suffices to prove that $\\strictif$ is canonical on~$W_k^{(k)}$.\n\t\n\tDenote the sign-vector of $\\strictif$ by $\\eps=(\\eps_1, \\dots, \\eps_k)$ and let $\\sigma\\in \\mathfrak{S}_k$\n\tbe the permutation obtained from $\\strictif$ by means of Lemma~\\ref{lem:1518}. \n\tLet us prove by induction on $t\\in [k]$ that if two $k$-element subsets $x, y\\subseteq W_t$\n\tsatisfy $|\\Delta(x, y)|\\le t$, then $(x, y)$ is $(\\eps, \\sigma)$-sound. \n\t\n\tIn the base case $t=1$ we have $W_1=[N]$ and everything follows from our assumption \n\tthat $\\strictif$ be sign-definite. In the induction step from $t\\in [k-1]$ to $t+1$ \n\twe appeal to Lemma~\\ref{lem:1610}. Since no two elements of $W_{t+1}$ occur in consecutive \n\tpositions of $W_t$, the bulleted assumptions are satisfied and thus there are no problems \n\twith the induction step. \n\t\n\tSince $|\\Delta(x, y)|\\le k$ holds for all $k$-element subsets of $W_k$, the case $t=k$ of \n\tour claim proves the lemma. \n\\end{proof}\n\nNext we characterise the canonical orderings on $([N]^{(k)}, \\strictif)$\nfor $N\\ge 2k+1$. \n\n\\begin{lemma}\\label{lem:canonical}\n\tIf $N\\ge 2k+1$ and the ordering $([N]^{(k)}, \\strictif)$ is canonical, then there exists\n\ta pair $(\\eps, \\sigma)\\in \\{-1, +1\\}^k\\times \\mathfrak{S}_k$ such that $\\strictif$ is the ordering \n\tassociated with $(\\eps, \\sigma)$. \n\\end{lemma}\n\n\\begin{proof}\n\tLet $\\eps$ and $\\sigma$ denote the sign vector and the permutation of $\\strictif$ as introduced\n\tin \\S\\ref{subsec:32}. \n\tWe shall show that $\\strictif$ coincides with the canonical ordering associated to $(\\eps, \\sigma)$.\n\t\n\tOtherwise there exists a pair $(x, y)$ of subsets of $[N]$ that fails to be $(\\eps, \\sigma)$-sound.\n\tAssume that such a pair $(x, y)$ is chosen with $t+1=|\\Delta(x, y)|$ minimum. Since $\\strictif$ \n\tis sign-definite we know that $t\\in [k-1]$. Set $I=\\Delta(x, y)$ and let $i\\in [k]$ be the \n\tsmallest index with $\\sigma(i)\\in I$. By symmetry we may assume that $x_{\\sigma(i)} x_{\\sigma(i)}$. Now by Lemma~\\ref{lem:1610}\n\tthe pair $\\bigl(\\phi[x], \\phi[y]\\bigr)$ is $(\\eps, \\sigma)$-sound. Moreover, the canonicity of\n\t$\\strictif$ tells us that $\\big((x\\cup y)^{(k)}, \\strictif\\bigr)$ \n\tand $\\big(\\phi[x\\cup y]^{(k)}, \\strictif\\bigr)$ are isomorphic via $\\phi$. For these reasons,\n\tthe pair $(x, y)$ is $(\\eps, \\sigma)$-sound as well. \n\n\t\\smallskip\n\n\t{\\it \\hskip2em Second Case: $\\max(x\\cup y)=N$}\n\n\t\\smallskip\n \n \tLet $\\eta\\colon x\\cup y\\lra [|x\\cup y|]$ be order-preserving. In view of $|x\\cup y|\\le 2k0$ there exists a largest \n\tinteger $i(\\star) \\in [k]$ such that $a_{i(\\star)} \\ne b_{i(\\star)}$. \n\tApplying the assumption that $\\equiv$ be $i(\\star)$-purged to \n\t%\n\t\\begin{enumerate}\n\t\t\\item[$\\bullet$] $a_1<\\dots\\frac{\\sin(\\omega t_d)}{\\omega}$, respectively. This provides that the value of $b=\\frac{\\sin(\\omega t_d)}{\\omega}$ is the bifurcating point having the center solution. 
Similarly, for the case of non-zero value of $a$ i.e., in presence of nonlinearity, if $b<\\frac{\\sin(\\omega t_d)}{\\omega}$ then the system has a non-zero radius providing the stable limit cycle solution and if $b>\\frac{\\sin(\\omega t_d)}{\\omega}$ having imaginary radius giving the unstable limit cycle (asymptotically stable fixed point). So, $b=\\frac{\\sin(\\omega t_d)}{\\omega}$ is the Hopf bifurcating parameter, defining the energy transfer zone between stable and unstable focus connecting a center and slowly decaying center-type solution. Numerical simulation is given below for better understanding.\n\nFig.~\\ref{fig1} shows the corresponding numerical simulations of Eq.~\\eqref{eq} owing to four situations, where (a) is the case for the delay with no nonlinearity and damping, i.e, $a=0,b=0$, which shows a simple feedback oscillator with continuously increasing energy in the system. Further, (b), (c) and (d) shows a center with $a=0,b=\\frac{\\sin(\\omega t_d)}{\\omega}$, a limit cycle(LC) with $a=1,b=0.2<\\frac{\\sin(\\omega t_d)}{\\omega}$ and a slowly decaying center-type orbit or focus with $a=1,b=\\frac{\\sin(\\omega t_d)}{\\omega}$, respectively. Here the finite delay, $t_d$ and $\\epsilon$ are kept fixed at $0.623$ and $0.05$, respectively along with unit frequency.\n \n\\begin{figure}[!ht]\n\\begin{center}\n\\includegraphics*[width=0.75\\linewidth]{f1-phase_protrait.eps}\n\\caption{\\textbf{Time-delayed system:} Phase space plots of Eq.~\\eqref{eq} are shown using approximate amplitude and phase equations with time delay, $t_d=0.623$ where (a) $a=0,b=0$ for a feedback system with increasing phase space area, (b) center with $a=0,b=\\frac{\\sin(\\omega t_d)}{\\omega}$, (c) LC with $a=1,b=0.2<\\frac{\\sin(\\omega t_d)}{\\omega}$ and (d) center-type orbit with $a=1,b=\\frac{\\sin(\\omega t_d)}{\\omega}$ with $\\epsilon=0.05$ and $\\omega=1$.} \\label{fig1}\n\\end{center}\n\\end{figure}\n\n\\section{Parametrically Excited Time Delayed Nonlinear Feedback Oscillator} \\label{sec3}\nAs an application of the approximate analytical solution and bifurcation situation, in this section, we have investigated the time delayed nonlinear oscillator under parametric excitation. The resonance and antiresonance behaviours for the limit cycle, center and center-type orbits are described both analytically and numerically for weak delayed situation. The stability regions at the resonances of the system under excitation are explored.\n\n\\subsection{Periodic Solutions for Resonance and Antiresonance Cases}\t\\label{subsec1}\nMaking the system parametrically excited~\\cite{penvo,momeni} by a periodic force $\\cos(\\Omega t)$ with a weighted constant, $\\gamma$, the basic equations become:\n\\begin{align}\n\\dot{x}(t) =& y(t), \t\\nonumber\\\\\n\\dot{y}(t) =& - \\epsilon \\lbrace x(t-t_d)+[1+\\gamma \\cos(\\Omega t)] (a x^2(t)+b) \\dot{x}(t) \\rbrace-\\omega^2 x(t); \\quad \\gamma \\neq 0, \\Omega \\in \\mathbb{Z}_{\\neq 0}, \\label{7}\n\\end{align}\nwhere $0 < \\epsilon \\ll 1$, $a,b$ are the system parameters and $t_d(0 0$. For $\\alpha=0$, the eigenvalues are imaginary with close orbital path will be center or slowly decaying center-type situation. The zero critical value of the parameter, $\\alpha$ determines a Hopf bifurcation point, where one encounters a significant qualitative change in the system's dynamical profile. 
In our case, one physically assumes $\\alpha > 0$ with an unstable equilibrium state.\n\nFor unit frequency, the eigenvalues for resonance and antiresonance are $$\\lambda_{1,2} \\approx \\pm 0.03125 \\epsilon \\sqrt{64. b^2 \\gamma ^2-168.847}-0.5 b \\epsilon +0.291737 \\epsilon$$ and $$\\sigma_{1,2} \\approx -0.5 b \\epsilon +(0.291737\\, \\pm 0.406066 i) \\epsilon$$ (the eigenvalues of all $\\Omega \\neq 2$) respectively, where the values of $a$ and $t_d$ are $1$ and $0.623$ respectively. The respective eigenvalues give the nature of the above autonomous system as well as the original parametrically excited system. From the graphs, we find that $\\Omega = 2$ is the resonant point and at the non-zero singular point $\\Omega = 4$, there exist an oscillating antiresonance. Since, we have chosen $\\omega=1$, the resonance came at $\\Omega = 2$ as a parametric resonance appears at twice the eigen frequency.\n\\begin{figure}\n\\centering\n\\subfigure[]{\n\\resizebox*{2.7in}{!}{\\includegraphics*{f6a-EV_Space_CO_2.eps}} \\label{fig7.1a}} \n\\quad\n\\subfigure[]{\n\\resizebox*{2.7in}{!}{\\includegraphics*{f6b-EV_Space_CO_4.eps}} \\label{fig7.2b}}\n\\caption{Stability diagram for parametrically excited time delayed system for $b$ with time delay ($t_d$) is obtained from linear stability analysis for the cases (a) $\\Omega=2$ and (b) $\\Omega=4$.} \\label{fig es}\n\\end{figure}\nIn fig.~\\ref{fig es} stability diagram for parametrically excited time delayed system for $b$ with time delay ($t_d$) are shown from linear stability analysis for the limiting case (a) $\\Omega=2$ and (b) $\\Omega=4$.\n\n\\section{Stability Analysis and Hopf Bifurcation} \\label{sec4}\nHere we have proposed stability analysis and Hopf bifurcation of the system for the full range of parameter space. We shall perform local stability analysis of the trivial fixed point, $E_0(0,0)$ and the existence of Hopf bifurcation of the model system \\eqref{eq}. The linearized system at the origin is written as\n\n\\begin{align}\n\\left\\{ \\begin{array}{l}\n\\dot{x}(t) = y(t),\t\\\\\n\\dot{y}(t) = -\\epsilon x(t - t_d) - \\epsilon b \\dot{x}(t) - {\\omega^2} x(t).\n\\end{array} \\right. \\label{15}\n\\end{align}\n \nThe corresponding characteristic equation of system \\eqref{15} is given by\n\\begin{align*}\n\\lambda^2 + b \\epsilon \\lambda + \\omega^2 + \\epsilon e^{-\\lambda t_d} = 0.\n\\end{align*} \nAbove equation can be rewritten as\n\\begin{align}\n(\\lambda^2 + b \\epsilon \\lambda + \\omega^2) e^{\\lambda t_d} + \\epsilon = 0. \\label{17}\n\\end{align} \n \nNow, we have the following cases:\n\n\\textbf{Case 1:} When $t_d =0$, Eq.~\\eqref{17} becomes\n\\begin{align}\n\\lambda^2 + b \\epsilon \\lambda + \\omega^2 + \\epsilon = 0. \\label{18}\n\\end{align} \n\nWe can see that the conditions for all the roots of Eq.~\\eqref{18} are negative (as all the coefficients are positive) or to have negative real part is given by Routh-Hurwitz criterion as $b \\epsilon>0$ and $\\omega^2 +\\epsilon>0$. Then, the trivial equilibrium point $E_0$ is locally asymptotically stable.\n\n\\textbf{Case 2:} When $t_d \\neq 0$, $\\lambda= i s$ be the root of Eq.~\\eqref{17}, we have\n\\begin{align}\n(-{s^2} + i s b \\epsilon +\\omega^2)(\\cos(t_d s) + i \\sin(t_d s)) +\\epsilon = 0.\t\\label{17a}\n\\end{align}\n\nEquating real and imaginary parts, we have\n\\begin{align}\n(-s^2 + \\omega^2) \\cos(t_d s) -s b \\epsilon \\sin(t_d s)+ \\epsilon =& 0, \\notag \\\\\ns b \\epsilon \\cos(t_d s) +(-s^2 + \\omega^2) \\sin(t_d s) =& 0. 
\\notag\n\\end{align}\nWe have from above equation\n\\begin{align}\n\\cos(t_d s) =& \\frac{\\epsilon (s^2 -\\omega^2)}{(-s^2 + \\omega^2)^2 +(s b \\epsilon)^2}=\\frac{s^2-\\omega^2}{\\epsilon}, \\notag \\\\\n\\sin(t_d s) =& \\frac{s b \\epsilon^2}{(-s^2 + \\omega^2)^2 +(s b \\epsilon)^2}=sb. \t\\label{19}\n\\end{align}\nNow, from the trigonometrical relation $\\sin^2(t_d s) +\\cos^2(t_d s) =1$, we have\n\\begin{align}\ns^4 +(b^2 \\epsilon^2 -2 \\omega^2) s^2 + \\omega^4 -\\epsilon^2 = 0. \\label{20}\n\\end{align}\nEq.~\\eqref{20} has the following roots $s_j$, which is given by\n\\begin{align}\ns_j = \\pm \\sqrt{ \\frac{ 2 \\omega^2 -b^2 \\epsilon^2 \\pm \\epsilon \\sqrt{b^2 (b^2 \\epsilon^2 - 4 \\omega^2) +4}}{2}}; \\quad j=1,2,3,4. \\label{21}\n\\end{align}\nThe above Eq.~\\eqref{21} has at least one positive root $s_0$ if the condition $ b < \\frac{\\sqrt{2}}{\\epsilon} \\sqrt{\\omega^2 + \\sqrt{\\omega^4 -\\epsilon^2}}$ satisfies.\n\nWe obtain the corresponding critical value of time delay $t_{d_l}$ for $s_0$ as\n\\begin{align}\nt_{d_l} = \\left\\lbrace \\begin{array}{ll}\n\\frac{1}{s_0} \\left( \\sin^{-1} {b s_0} +2\\pi l \\right), & \\text{if } b < \\frac{\\sqrt{2}}{\\epsilon} \\sqrt{\\omega^2 + \\sqrt{\\omega^4 -\\epsilon^2}}, \\\\\n\\frac{1}{s_0} \\left(\\pi -\\sin^{-1} {b s_0} +2\\pi l \\right), & \\text{if } b > \\frac{\\sqrt{2}}{\\epsilon} \\sqrt{\\omega^2 - \\sqrt{\\omega^4 -\\epsilon^2}},\n\\end{array} \\right. \\quad l= 0, 1, 2, \\cdots.\t\\label{22}\n\\end{align}\n\nDefine $t_d^* = \\min \\lbrace t_{d_l} \\rbrace$, i.e., $t_d^*$ is the smallest positive value of $t_{d_l}$; $l=0,1,2,\\cdots$, given by the above Eq.~\\eqref{22}. Now, we determine whether the roots of Eq.~\\eqref{17} cross the imaginary axis of the complex plane as $t_d$ varies. Let $\\lambda(t_d) = \\xi(t_d) + i s(t_d)$ be a root of the Eq.~\\eqref{17} such that these two conditions $\\lambda(t_d) = \\xi(t_{d_l}) =0$ and $s(t_{d_l})=s_0$ satisfies. \n\n\\begin{lemma}\nTransversality condition satisfies if the following holds\n $$\\left[ \\Re \\left(\\frac{d\\lambda}{dt_d}\\right)^{-1} \\right]_{t_d =t_d^*} \\neq 0.$$\n\\end{lemma}\n\\begin{proof} \nDifferentiating Eq.~\\eqref{17} with respect to $t_d$, we obtain\n\\begin{align}\n\\left( \\frac{d\\lambda}{dt_d} \\right)^{-1} = \\frac{b \\varepsilon + i 2 s}{s (b \\varepsilon s+ i (s^2-\\omega^2))} +i \\frac{t_d}{s}.\t\\notag\n\\end{align}\nNow collecting the real parts at $t_d = t_d^*$, we have\n\\begin{align}\n\\left[ \\Re \\left( \\frac{d\\lambda}{dt_d} \\right)^{-1} \\right]_{t_d = t_d^*} = \\frac{ (b \\varepsilon)^2 +2 (s^2 - \\omega^2) }{(b \\varepsilon s)^2 + (s^2 - \\omega^2)^2},\t\\label{22a}\n\\end{align}\nwhich shows that \n\\begin{align}\n\\left[ \\Re \\left( \\frac{d\\lambda}{dt_d} \\right)^{-1} \\right]_{t_d = t_d^*} >0, \\text{ if } b < \\frac{\\sqrt{2}}{\\epsilon} \\sqrt{\\omega^2 + \\sqrt{\\omega^4-\\epsilon^2}}. \\notag\n\\end{align}\nTherefore, the transversality condition is satisfied for each $t_d = t_d^*$ and hence Hopf bifurcation occurs at $t_d = t_d^*$. This completes the proof. \n\\end{proof}\nIf $ \\Re \\left( \\frac{d\\lambda}{dt_d} \\right)>0,$ then all those roots that crosses the imaginary axis with non-zero speed at $(\\lambda=) i s$ from left to right as $t_d$ increases. Now, we state the following result:\n\\begin{theorem}\nIf trivial equilibrium point $E_0$ exists then that point of the model system \\eqref{eq} is locally asymptotically stable when $t_d \\in [0, t_d^*) $ and unstable for $t_d>t_d^* $. 
Furthermore, the system undergoes Hopf bifurcation at $E_0$ when $t_d = t_d^*,$ provided $b < \\frac{\\sqrt{2}}{\\epsilon} \\sqrt{\\omega^2 + \\sqrt{\\omega^4-\\epsilon^2}}$.\n\\end{theorem}\n\n\\textbf{Note:} For this section and the next section, we have considered $b$ analytically as a constant parameter only.\n\n\\subsection{Direction and Stability of Hopf Bifurcation} \\label{subsec5} \nIn this subsection, we will discuss the direction, stability and period of the bifurcating periodic solutions using normal form and center manifold theory, introduced by Hassard et al.~\\cite{hassard}. We assume that the system \\eqref{eq} about the fixed point $E_0$ undergoes Hopf bifurcation at the critical point $t_{d} =t_d^*$. Then $\\pm is$ are corresponding purely imaginary roots of the characteristic equation at the critical point throughout this subsection. \\\\\nLet $t_d =t_d^* +\\mu$, where $\\mu \\in \\mathbb{R}$. Define the space of continuous real valued functions as $\\mathbb{C}= \\mathbb{C}([-1,0], \\mathbb{R}^2)$. Let $u_{1}(t)=x(t),~ u_{2}(t)=y(t)$ and ${u_i}(t)=u_{i}(t_d t)$ for $i=1,2$; the delay system \\eqref{eq} then converts into functional differential equation in $\\mathbb{C}$ as\n\\begin{align} \n\\dot{u}(t) = L_{\\mu} (u_t) + F(\\mu, u_t), \\label{23}\n\\end{align} \nwhere $u(t)=(u_1(t)$, $u_2(t))^\\top \\in \\mathbb{C}$, $u_t(\\theta) = u(t+\\theta) = (u_1(t+\\theta)$, $u_2(t+\\theta))^\\top \\in \\mathbb{C}$ and $L_{\\mu}: \\mathbb{C} \\to \\mathbb{R}^2$, $F: \\mathbb{R}\\times \\mathbb{C} \\to \\mathbb{R}^2$ are given respectively by \n\\begin{align}\nL_{\\mu}(\\phi)&=(t_d^* +\\mu) \\begin{pmatrix} \n0 & 1 \\\\ \n-\\omega^2 & -b \\epsilon\n\\end{pmatrix} \\phi(0)+(t_d^*+\\mu) \\begin{pmatrix}\n0 & 0 \\\\ \n-\\varepsilon & 0\n\\end{pmatrix} \\phi(-1), \t\\label{24} \\\\\nF(\\mu, \\phi) &= (t_d^* +\\mu) \\begin{pmatrix}\n0 \\\\ -a \\epsilon \\phi_1^2(0) \\phi_2(0) \n\\end{pmatrix},\t\t\\label{25}\n\\end{align} \nwhere $\\phi = (\\phi_1, \\phi_2)^\\top \\in \\mathbb{C}$. \\\\\nBy Riesz representation theorem, there exists a matrix $\\eta(\\theta,\\mu)$, $\\theta \\in [0,1]$, whose components are of bounded variation functions such that \n\\begin{align}\nL_{\\mu} \\phi = \\int_{-1}^0 d \\eta(\\theta,\\mu) \\phi(\\theta), \\quad \\text{for } \\phi \\in \\mathbb{C}.\t\\notag\n\\end{align}\nBy considering Eq.~\\eqref{24}, one can choose \n\\begin{align} \n\\eta(\\theta, \\mu) = (t_d^* +\\mu) \\begin{pmatrix} \n0 & 1 \\\\ \n-\\omega^2 & -b \\epsilon\n\\end{pmatrix} \\delta(\\theta)+(t_d^*+\\mu) \\begin{pmatrix}\n0 & 0 \\\\ \n-\\epsilon & 0\n\\end{pmatrix} \\delta(\\theta+1), \\notag\n\\end{align} \nwhere $\\delta$ is Dirac delta function. \nFor $\\phi \\in \\mathbb{C}$, define\n\\begin{align}\nA(\\mu)\\phi(\\theta) =\\left\\{\\begin{array}{ll}\\displaystyle{\\frac{d \\phi(\\theta)}{d \\theta}},& \\theta \\in [-1,0),\\\\\n\\\\ \\int_{-1}^0 d\\eta(\\theta,\\mu) \\phi(\\theta), & \\theta = 0, \\end{array}\\right. \\label{26} \\\\\nR(\\mu) \\phi(\\theta) = \\left\\{\\begin{array}{ll}\n\\begin{pmatrix}\n 0 \\\\ 0 \\end{pmatrix}, & \\theta \\in [-1,0), \\\\\nF(\\mu, \\phi),& \\theta=0.\n\\end{array}\\right. \t\\notag \n\\end{align}\nIn order to convenient study of Hopf bifurcation problem, we transform system \\eqref{23} into an operator equation of the form \n\\begin{align}\n\\dot{u}(t) = A(\\mu) u_t + R(\\mu) u_t,\t\\label{27}\n\\end{align}\nwhere $u_{t}(\\theta)=u(t+\\theta)$, $\\theta \\in [-1,0]$. 
\\\\\nFor $\\psi \\in \\mathbb{C} ([0,1], (\\mathbb{R}^2)^*)$, the adjoint operator $A^*$ of $A$ is defined by \n\\begin{align}\nA^*(\\mu) \\psi(m) = \\left\\{ \\begin{array}{ll} -\\frac{d\\psi(m)}{dm}, & m\\in(0,1], \\\\ \\int_{-1}^0 \\psi(-t) d\\eta(t,0), & m=0. \\end{array}\\right. \\notag \n\\end{align} \nFor $\\phi \\in \\mathbb{C} ([-1,0],\\mathbb{R}^2)$ and $\\psi \\in \\mathbb{C} ([0,1],(\\mathbb{R}^2)^*)$, define the bilinear inner product in order to normalize the eigenvectors of operator $A$ and adjoint operator $A^*$. \n\\begin{align}\n\\left\\langle \\psi(m), \\phi(\\theta) \\right\\rangle &= \\bar{\\psi}(0).\\phi(0)-\\int_{-1}^0 \\int_{\\xi = 0}^{\\theta} \\bar{\\psi}^{\\top} (\\xi -\\theta) d\\eta(\\theta) \\phi(\\xi) d\\xi,\t \\label{28} \n\\end{align}\nwhere $\\eta(\\theta)= \\eta(\\theta,0)$. Then $A$ and $A^*$ are adjoint operators. We know that $\\pm i s_{0} t_d^*$ are eigenvalues of $A$. Therefore, they are also eigenvalues of $A^*$. Next we calculate the eigenvector $q(\\theta)$ of $A(0)$ belonging to the eigenvalue $i s_{0} t_d^*$ and eigenvector $q^* (\\theta)$ of $A^*(0)$ belonging to the eigenvalue $-i s_{0} t_d^*$. \\\\\nThen we have $ A(0) q(\\theta) = i s_{0} t_d^* q(\\theta)$ and $A^*(0) q^*(\\theta) = -i s_{0} t_d^* q^*(\\theta)$. Let $q(\\theta) = (1,\\alpha)^{\\top} e^{i s_{0} t_d^* \\theta}$ and $q^*(\\theta) = P(1,\\beta)^{\\top} e^{-i s_{0} t_d^* \\theta}$.\nThus, we can obtain\n\\begin{align} \n\\alpha =i s_0, \\quad \\beta =\\frac{i s_0}{\\omega^2 +\\epsilon e^{i s_{0} t_d^*}}. \\label{29} \n\\end{align} \nFrom \\eqref{28}, we have\n\\begin{align}\n\\langle q^*&(m),q(\\theta) \\rangle = \\bar{q^*}(0).q(0)-\\int_{-1}^{0} \\int_{\\xi=0}^{\\theta} \\bar{q^*}^{\\top} (\\xi-\\theta) d\\eta(\\theta) q(\\xi) d\\xi, \\notag \\\\\n&= \\bar{P} (1+\\alpha \\bar{\\beta}) -\\int_{-1}^{0} \\int_{\\xi=0}^{\\theta} \\bar{P} \\begin{pmatrix} 1 & \\bar{\\beta} \\end{pmatrix}\\times e^{-i t_d^* s_0 (\\xi-\\theta)} d\\eta(\\theta) \\begin{pmatrix} 1 \\\\ \\alpha \\end{pmatrix} e^{i t_d^* s_0 \\xi} d\\xi, \\notag \\\\\n&= \\bar{P} \\bigg[ 1+\\alpha \\bar{\\beta} \\bigg. \\left. -\\int_{-1}^0 \\begin{pmatrix} 1 & \\bar{\\beta} \\end{pmatrix} \\theta e^{i t_d^* s_0 \\theta} \\begin{pmatrix} 1 \\\\ \\alpha \\end{pmatrix} d\\eta(\\theta) \\right],\t\\notag \\\\ \n&= \\bar{P} \\bigg[ 1+\\alpha \\bar{\\beta} \\bigg. \\left. +t_d^* \\begin{pmatrix} 1 & \\bar{\\beta} \\end{pmatrix} \\begin{pmatrix} 0 & 0 \\\\ -\\epsilon & 0 \\end{pmatrix}\\begin{pmatrix} 1 \\\\ \\alpha \\end{pmatrix} e^{-i t_d^* s_0} \\right], \\notag \\\\\n&= \\bar{P} [ 1+\\alpha \\bar{\\beta} - \\bar{\\beta} \\epsilon t_d^* e^{-i t_d^* s_0} ].\n\\end{align}\nThus, we can choose $\\bar{P}$ as \n\\begin{align} \n\\bar{P} =& \\frac{1}{ 1+\\alpha \\bar{\\beta} -\\bar{\\beta} \\epsilon t_d^* e^{-i t_d^* s_0}}, \\label{30}\n\\end{align} \nthen $ \\langle q^*(m), q(\\theta) \\rangle = 1 $. Furthermore, $ \\langle q^*(m), \\bar{q}(\\theta) \\rangle = 0$. Now we obtain $q$ and $q^{*}$. \\\\\nNext, we use the same notations as in~\\cite{hassard} and we first compute the coordinates to describe the center manifold $C_{0}$ at $\\mu=0$. Here, $u_{t}$ be the solution of \\eqref{23} at $\\mu =0$. Define\n\\begin{align} \nz(t)= \\langle q^{*}, u_{t} \\rangle, \\label{31} \n\\end{align} \nand then define \n\\begin{align}\nW(t,\\theta) =& u_t (\\theta) -z(t) q(\\theta)-\\bar{z}(t) \\bar{q}(\\theta), \\notag \\\\\n=& u_t (\\theta) -2 \\Re \\{ z(t) q(\\theta) \\}. 
\\label{32} \n\\end{align}\nOn center manifold $C_{0}$, we have $W(t, \\theta) = W (z(t), \\bar{z}(t),\\theta)$, where \n\\begin{align} \nW(z(t),\\bar{z}(t),\\theta) = W_{20} (\\theta) \\frac{z^2}{2} +W_{11} (\\theta) z \\bar{z} + W_{02} (\\theta) \\frac{\\bar{z}^2}{2} + \\cdots, \\label{33}\n\\end{align} \n$z$ and $\\bar{z}$ are the local coordinates for center manifold $C_{0}$ in the direction of $q$ and $q^{*}$ respectively. Note that $W$ is real if $u_t$ is real. We consider only real solutions.\\\\\nFor the solution $u_t \\in C_{0}$ of \\eqref{23}, since $\\mu =0$, we have\n\\begin{align}\n\\dot{z}(t) =& \\langle q^*, \\dot{u}_t \\rangle,\t\\notag \\\\\n=& \\langle q^*, A(0) u_t +R(0)u_t \\rangle,\t\\notag \\\\\n=& i t_d^* s_0 z + \\bar{q^*}(0).F(0, W(z,\\bar{z},0) +2 \\Re\\{z(t)q(0)\\}), \\notag \\\\\n\\overset{\\Delta}{=} & i t_d^* s_0 z+ \\bar{q^*}(0).F_0(z,\\bar{z}). \\notag\n\\end{align}\nRewrite this equation as\n\\begin{align} \n\\dot{z}(t) = i t_d^* s_0 z + g(z,\\bar{z}), \t \\label{34}\n\\end{align} \nwhere $g(z,\\bar{z}) = \\bar{q^*}(0).F_{0}(z,\\bar{z})$ and expand $g(z,\\bar{z})$ in powers of $z$ and $\\bar{z}$, that is \n\\begin{align} \ng(z, \\bar{z}) = g_{20} \\frac{z^2}{2} + g_{11} z \\bar{z} + g_{02} \\frac{\\bar{z}^2}{2} + g_{21} \\frac{z^2 \\bar{z}}{2}+ \\cdots. \\label{35}\n\\end{align} \nWe have\n\\begin{align}\ng(z,\\bar{z}) =& \\bar{q^*}(0).F_{0}(z,\\bar{z}),\t\\notag \\\\\n=& \\bar{P} t_d^* \\begin{pmatrix} 1 & \\bar{\\beta} \\end{pmatrix} \\begin{pmatrix} 0 \\\\ -a \\epsilon u_{1t}^2(0) u_{2t}(0) \\end{pmatrix}, \\notag \n\\end{align} \nwhere $u_{t}(\\theta) = (u_{1t}(\\theta), u_{2t}(\\theta))^\\top = W(t,\\theta)+z(t) q(\\theta) +\\bar{z}(t) \\bar{q}(\\theta)$ and $q(\\theta) = (1,\\alpha)^\\top e^{i s_0 t_d^* \\theta}$, then we have\n\n\\begin{align}\n\\begin{pmatrix} u_{1t}(\\theta) \\\\ u_{2t}(\\theta) \\end{pmatrix} = \\begin{pmatrix} W_{20}^{(1)}(\\theta) \\frac{z^2}{2} + W_{11}^{(1)}(\\theta) z \\bar{z}+W_{02}^{(1)}(\\theta) \\frac{\\bar{z}^2}{2} + O(|z, \\bar{z}|^3) \\\\ \nW_{20}^{(2)}(\\theta) \\frac{z^2}{2} + W_{11}^{(2)}(\\theta) z \\bar{z} + W_{02}^{(2)}(\\theta) \\frac{\\bar{z}^2}{2} + O(|z,\\bar{z}|^3) \\end{pmatrix} + z \\begin{pmatrix} 1 \\\\ \n\\alpha \\end{pmatrix} e^{i t_d^* s_0 \\theta} + \\bar{z} \\begin{pmatrix} 1 \\\\ \n\\bar{\\alpha} \\end{pmatrix} e^{-i t_d^* s_0 \\theta}, \\notag\n\\end{align} \nFor $\\theta=0$, we have\n\\begin{align}\nu_{1t}(0) =& z + \\bar{z} + W_{20}^{(1)}(0) \\frac{z^2}{2} + W_{11}^{(1)}(0) z \\bar{z} + W_{02}^{(1)}(0) \\frac{\\bar{z}^2}{2} +O(|z,\\bar{z}|^3), \\notag \\\\\nu_{2t}(0) =& \\alpha z +\\bar{\\alpha} \\bar{z} +W_{20}^{(2)}(0) \\frac{z^2}{2} +W_{11}^{(2)}(0) z \\bar{z} +W_{02}^{(2)}(0) \\frac{\\bar{z}^2}{2} +O(|z,\\bar{z}|^3). \\notag \n\\end{align}\nIt follows that\n\\begin{align}\ng(z,\\bar{z}) =& \\bar{P} t_d^* \\begin{pmatrix} 1, & \\bar{\\beta} \\end{pmatrix} \\begin{pmatrix} 0, & -a \\epsilon \\left(z + \\bar{z} + W_{20}^{(1)}(0)\\frac{z^2}{2} +\\cdots \\right)^2 \\left(\\alpha z + \\bar{\\alpha} \\bar{z} + W_{20}^{(2)}(0) \\frac{z^2}{2} +\\cdots \\right) \\end{pmatrix}^\\top, \\notag \\\\\n=& - a \\epsilon \\bar{\\beta} \\bar{P} t_d^* (2\\alpha +\\bar{\\alpha}) z^{2} \\bar{z} + \\cdots. \\notag\n\\end{align}\n\nComparing the coefficients with \\eqref{35}, we have\n\\begin{align}\ng_{20} =& g_{11} = g_{02} = 0, \\notag \\\\ \ng_{21} =& -2 a \\epsilon \\bar{\\beta} \\bar{P} t_d^* (2 \\alpha + \\bar{\\alpha}). 
\\label{36}\n\\end{align} \nThus, we can compute the following quantities:\n\\begin{align}\nC_1(0) =& \\frac{i}{2s_0 t_d^*} \\left(g_{20} g_{11} - 2 |g_{11}|^2 - \\frac{|g_{02}|^2}{3} \\right) + \\frac{g_{21}}{2}, \\notag \\\\\n\\mu_2 =& -\\frac{ \\Re \\{C_1(0) \\}}{\\Re\\{ \\lambda'_{0}(t_d^*) \\}}, \\notag \\\\\n\\beta_2 =& 2 \\Re \\{C_1(0)\\}, \\notag \\\\\nT_2 =& -\\frac{ \\Im \\{C_1(0) \\} + \\mu_2 \\Im \\{ \\lambda'_0(t_d^*) \\}}{s_0 t_d^*}. \\label{37} \n\\end{align}\nAbove formulae give a description of Hopf bifurcation periodic solutions of \\eqref{23} at $t_d = t_d^*$ on the center manifold. Notations $\\mu_2$, $\\beta_2$ and $T_2$ determine respectively the direction of Hopf bifurcation, stability and period of bifurcating periodic solutions~\\cite{hassard}. We summarize the following theorem.\n\\begin{theorem} \\label{4.3}\nFor expressions given in \\eqref{37}, following results hold\n\\begin{enumerate}\n\\item[(i)] If $\\mu_2 > 0, \\, (\\mu_2<0)$, then Hopf bifurcation is supercritical, (subcritical) and the bifurcating periodic solutions exist for $t_d > t_d^*, \\, (t_d 0$.\n\\item[(iii)] The period of the bifurcating periodic solutions increases if $T_2 > 0$ and decreases if $T_2 < 0$.\n\\end{enumerate}\n \\end{theorem}\n\n\\begin{example}\nFor the following model\n\\begin{align} \n\\dot{x}(t) =& y(t), \\notag \\\\ \n\\dot{y}(t) =& -0.05 (x(t-0.623)+(x^{2}(t) +0.58347)y(t))-x(t). \\notag\n\\end{align} \n By taking $a=1, \\omega =1, \\epsilon =0.05, b=0.583474$ for the model system \\eqref{7}. \nWe have from Eq.~\\eqref{37},\n\\begin{align*}\n\\mu_2 =& 0.7563576 > 0, \\\\\n\\beta_2 =& -0.0308725 < 0, \\\\\nT_2 =& 0.0174092 >0.\n\\end{align*} \nTherefore, from Theorem \\ref{4.3}, we conclude that Hopf bifurcation is supercritical and bifurcating periodic solution is stable with increasing period.\n\\end{example}\n\n\\section{Bifurcation Analysis: Exact Numerical Simulation} \t\\label{sec5}\nTo investigate the effects of nonlinearity ($a$) and damping term ($b$), we carried out detailed bifurcation analysis of the model system \\eqref{7}. Our main objective is to detect the existence of complex system dynamics in the presence of nonlinear damping. The system \\eqref{7} is integrated using Matlab software for different cases of nonlinearity and damping with resonance ($\\Omega=2$) and antiresonance ($\\Omega=4$). For analysing the exact range of stability in details, we have represented bifurcation plots for both the state variables throughout the simulation. System dynamics show symmetric property throughout the simulation for both resonance and antiresonance cases. Time span is $[0,220]$ for fig.~\\ref{fig9} and $[0,500]$ for figs.~\\ref{fig8}, \\ref{fig10} and \\ref{fig11}.\n\n\n\\begin{figure}[!ht]\n\\centering\n\\subfigure[]{\n\\resizebox*{2.2in}{!}{\\includegraphics{f7a-x_I.eps}} \\label{fig8.1a}} \n\\quad\n\\subfigure[]{\n\\resizebox*{2.2in}{!}{\\includegraphics{f7b-y_I.eps}} \\label{fig8.2b}} \n\\caption{Bifurcation diagram for the model system \\eqref{7} with the parameter values $\\epsilon =0.05, \\omega=1$, $a = 0, b = 0$. 
\\textbf{(Feedback system with increasing phase space area)}} \\label{fig8}\n\\end{figure}\n\\begin{figure}\n\\centering\n\\subfigure[]{\n\\resizebox*{1.35in}{!}{\\includegraphics{f8a-II_x_2.eps}} \\label{fig9.1a}} \n\\quad\n\\subfigure[]{\n\\resizebox*{1.35in}{!}{\\includegraphics{f8b-II_y_2.eps}} \\label{fig9.2b}} \n\\quad\n\\subfigure[]{\n\\resizebox*{1.35in}{!}{\\includegraphics{f8c-II_x_4.eps}} \\label{fig9.3c}} \n\\quad\n\\subfigure[]{\n\\resizebox*{1.35in}{!}{\\includegraphics{f8d-II_y_4.eps}} \\label{fig9.4d}} \n\\caption{Bifurcation diagram for the model system \\eqref{7} with the parameter values $\\epsilon =0.05, \\gamma = 2, \\omega=1$, $a=0, b = \\frac{\\sin(\\omega t_d)}{\\omega}$. For figures (a) and (b) $\\Omega=2$, and (c) and (d) $\\Omega=4$. \\textbf{(Center)}} \\label{fig9}\n\\end{figure}\n\n\\begin{figure}\n\\centering\n\\subfigure[]{\n\\resizebox*{1.35in}{!}{\\includegraphics{f9a-III_x_2.eps}} \\label{fig10.1a}} \n\\quad\n\\subfigure[]{\n\\resizebox*{1.35in}{!}{\\includegraphics{f9b-III_y_2.eps}} \\label{fig10.2b}} \n\\quad\n\\subfigure[]{\n\\resizebox*{1.35in}{!}{\\includegraphics{f9c-III_x_4.eps}} \\label{fig10.3c}} \n\\quad\n\\subfigure[]{\n\\resizebox*{1.35in}{!}{\\includegraphics{f9d-III_y_4.eps}} \\label{fig10.4d}} \n\\caption{Bifurcation diagram for the model system \\eqref{7} with the parameter values $\\epsilon =0.05, \\gamma = 2, b= \\frac{\\sin(\\omega t_d)}{2 \\omega}, \\omega=1$, $a=1, b < \\frac{\\sin(\\omega t_d)}{\\omega}$. For figures (a) and (b) $\\Omega=2$ and (c) and (d) $\\Omega=4$. \\textbf{(Limit cycle)}} \\label{fig10}\n\\end{figure}\n\n\\begin{figure}\n\\centering\n\\subfigure[]{\n\\resizebox*{1.35in}{!}{\\includegraphics{f10a-IV_x_2.eps}} \\label{fig11.1a}} \n\\quad\n\\subfigure[]{\n\\resizebox*{1.35in}{!}{\\includegraphics{f10b-IV_y_2.eps}} \\label{fig11.2b}} \n\\quad\n\\subfigure[]{\n\\resizebox*{1.35in}{!}{\\includegraphics{f10c_IV_x_4.eps}} \\label{fig11.3c}} \n\\quad\n\\subfigure[]{\n\\resizebox*{1.35in}{!}{\\includegraphics{f10d_IV_y_4.eps}} \\label{fig11.4d}}\n\\caption{Bifurcation diagram for the model system \\eqref{7} with the parameter values $\\epsilon =0.05, \\gamma = 2, \\omega=1$, $a=1, b = \\frac{\\sin(\\omega t_d)}{\\omega}$. For figures (a) and (b) $\\Omega=2$, and (c) and (d) $\\Omega=4$. \\textbf{(Center-type orbit)}} \\label{fig11}\n\\end{figure}\n\n\\begin{figure}\n\\centering\n\\subfigure[]{\n\\resizebox*{1.35in}{!}{\\includegraphics{f11a-x_b_td.eps}} \\label{fig12.1a}} \n\\quad\n\\subfigure[]{\n\\resizebox*{1.35in}{!}{\\includegraphics{f11b-y_b_td.eps}} \\label{fig12.2b}} \n\\quad\n\\subfigure[]{\n\\resizebox*{1.35in}{!}{\\includegraphics{f11c-x_b_td_4.eps}} \\label{fig12.3c}} \n\\quad\n\\subfigure[]{\n\\resizebox*{1.35in}{!}{\\includegraphics{f11d-y_b_td_4.eps}} \\label{fig12.4d}} \n\\caption{Bifurcation diagram of $b$ with the parameter values $\\epsilon =0.05, \\gamma = 1, a=1, \\omega=1, t_d=0.623$. For figures (a) and (b) $\\Omega=2$, and (c) and (d) $\\Omega=4$.} \\label{fig12}\n\\end{figure}\n\nIn fig.~\\ref{fig8}, first we consider the case when $a=0$, $b=0$ i.e., no nonlinearity and damping terms present in the system. In this case, bifurcation depends only on the feedback controller $\\epsilon$ as the frequency $\\omega$ is equal to $1$. Feedback system lies in the range $[-200,200]$, can be observed from fig.~\\ref{fig8}. 
Initially the system is stable, and it becomes unstable as the delay increases.\n\nWe have investigated the effect of the damping term $b = \\frac{\\sin(\\omega t_d)}{\\omega}$ in the absence of nonlinearity, for the resonance case in figs.~\\ref{fig9.1a}-\\ref{fig9.2b} and the antiresonance case in figs.~\\ref{fig9.3c}-\\ref{fig9.4d}. The system dynamics exhibit a gap-dependent bifurcation in both cases. The state variables lie in the domains $[-400,400]$ and $[-20,20]$, with $t_d \\in (0,3]$ and $t_d \\in (0,20]$, for the resonance and antiresonance cases respectively. The phase space area is larger in the resonance case than in the antiresonance case. We observe that a center solution exists for the system \\eqref{7}, as it depends strongly on the damping term.\n\nIn fig.~\\ref{fig10} we consider the combined effect of the nonlinear term $a=1$ and the damping term $b = \\frac{\\sin(\\omega t_d)}{2 \\omega}$. Bifurcation diagrams for the resonance and antiresonance cases are shown in figs.~\\ref{fig10.1a}-\\ref{fig10.2b} and figs.~\\ref{fig10.3c}-\\ref{fig10.4d}, respectively. The state variables lie in the ranges $[-4,4]$ and $[-20,20]$, with $t_d \\in (0,20]$, and fig.~\\ref{fig10} shows a number of stability-switching scenarios. Symmetric sequences of period doubling of order 2 and 4 can be seen for resonance in figs.~\\ref{fig10.1a}-\\ref{fig10.2b}, whereas for antiresonance repeated period doubling and inverse period doubling of order 2, 6 and 12 are observed in figs.~\\ref{fig10.3c}-\\ref{fig10.4d}. This rich period-doubling and period-halving scenario confirms the presence of limit cycles. From the bifurcation diagrams we observe oscillatory dynamics which converge to a steady state for the system \\eqref{7}. \n\nIn fig.~\\ref{fig11} the nonlinearity and damping terms are $a=1$ and $b = \\frac{\\sin(\\omega t_d)}{\\omega}$ respectively. The resonance case is considered in figs.~\\ref{fig11.1a}-\\ref{fig11.2b} and the antiresonance case in figs.~\\ref{fig11.3c}-\\ref{fig11.4d}. The state variables lie in $[-4,4]$ with $t_d \\in [0,10]$, and the bifurcation plots show a center-type solution in both cases. In the non-resonance case $x$ and $y$ increase as $t_d$ increases. In the resonance case, however, $x$ and $y$ do not keep increasing with $t_d$; they attain a maximum saturated peak and then repeat it. Sequences of stability switches take place, with period-doubling and period-halving patterns appearing and disappearing. As $t_d$ increases, the period-doubling and period-halving points accumulate and the plot becomes denser, tending towards a chaotic scenario.\n\nIn fig.~\\ref{fig12}, with $a=1$ and $\\gamma=1$, the system initially shows limit cycle behaviour in the range $b \\in [0,0.583]$ and is stable for $b>0.583$. Hopf bifurcation occurs for both the resonance and non-resonance cases, shown in figs.~\\ref{fig12.1a}-\\ref{fig12.2b} and figs.~\\ref{fig12.3c}-\\ref{fig12.4d}, respectively. Hence the dynamics of the system bifurcates at $b=0.583$.\n\n\\section{Discussions and Conclusions} \\label{sec7}\n\nA delay model of a damped quartic nonlinear oscillator is solved by the multiscale perturbation method to obtain various periodic orbits, namely a limit cycle, a center and a slowly decaying center, with reference to a van der Pol oscillator having a limit cycle. This delay-induced periodicity and bifurcation are probed through parametric resonance and antiresonance. 
{\\color{black}The calculation of the response function due to a parametric excitation of an arbitrary periodic orbit is carried out here through the K-B approach, which is much handier than the RG method, especially in the context of the results obtained for the direction of the Hopf bifurcation and the stability of the bifurcating periodic solutions.} The effects of control parameters such as the damping and nonlinear terms are investigated via bifurcation analysis using normal form and center manifold theory.\n\\begin{enumerate}\n\n\\item We have found the characteristics of the resonances due to parametric excitation. The nature of the resonances at $\\Omega=2\\omega$ and $\\Omega=4\\omega$ is investigated for the limit cycle, center and center-type cases with $\\omega=1$.\n\n\\item Stability criteria of the parametrically excited system for the $\\Omega=2 \\omega$ and $4\\omega$ resonances are investigated with $\\omega=1$, and they correspond to the approximate solution obtained using the K-B method.\n\n\\item Linear stability analysis and the bifurcation scenario over the full range of the parameter space of the system \\eqref{eq} are carried out for the trivial fixed point. The occurrence of a Hopf bifurcation of the fixed point ${E_0}(0,0)$ at the critical point $t_d^*=0.623$, at which the trivial point loses its stability, has been shown. The values $\\mu_2 = 0.7563576, \\beta_2 = -0.0308725$ and $T_2 = 0.0174092$ indicate that the Hopf bifurcation is supercritical and that the bifurcating periodic solutions are stable with increasing periods. \n\n\\item The stability and direction of the Hopf bifurcation have been investigated using center manifold and normal form theory. We have also concluded from the bifurcation diagram that the system is initially unstable only for limit cycle solutions and is stable for all the other solutions at the trivial point. \n\n\\item When the periodic orbit of our system is a limit cycle, it can be clearly identified in the weakly delayed van der Pol case, where the sign of $b$ and the coefficient of $x(t-t_d)$ are both $<0$. \n\\end{enumerate}\n\nThus, we find that damping is one of the most effective parameters for stabilizing the system, while the delay plays a crucial role in driving the system unstable. The presence of both damping and nonlinearity together with delay stabilizes the system. The possibility of stabilizing a delayed feedback system with parametric excitation has a great effect on the control of periodic flows and on the stabilization of high-speed milling in material formation and cutting processes via spindle speed variation~\\cite{stepan}.\n\n\\section*{Acknowledgements}\nSandip Saha acknowledges RGNF, UGC, India for the partial financial support. This work is supported by the Council of Scientific and Industrial Research (CSIR), Govt. of India under grant no. 25(0277)\/17\/EMR-II to R.K. Upadhyay. SS and GG are thankful to Dr. Sagar Chakraborty for useful comments.\n\n\n\\section*{References}\n\n\\bibliographystyle{unsrt} \n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nLet $G$ be a connected semisimple Lie group with finite center. Let $G = NAK$ be a choice of Iwasawa decomposition. Define $\\mathfrak a = \\operatorname{Lie}(A)$, and recall Harish-Chandra's formula for the spherical function of parameter $i\\lambda \\in i\\mathfrak a^*$:\n\\begin{equation}\\label{harishchandraformula}\n\\varphi_{i\\lambda}(g) = \\int_K e^{(\\rho+i\\lambda)(H(kg))} dk \\,,\n\\end{equation}\nwhere $H : G \\to \\mathfrak a$ is the Iwasawa projection and $\\rho \\in \\mathfrak a^*$ the half-sum of positive roots. 
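For instance, when $G = \\operatorname{SL}_2(\\mathbb R)$ with the standard choices of $N$, $A$ and $K$ recalled in \\S\\ref{secroots} below, a direct computation shows that a matrix $g$ with bottom row $(\\gamma, \\delta)$ has $A$-component $\\operatorname{diag}(r^{-1}, r)$ in the decomposition $G = NAK$, where $r = (\\gamma^{2}+\\delta^{2})^{1\/2}$; in particular the exponent in \\eqref{harishchandraformula} depends only on the bottom row of $kg$. 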
In \\cite{Michels2022} we are interested in spectral asymptotics in a relative trace formula for maximal flat submanifolds of an associated locally symmetric space. In the analysis one requires asymptotics for the integral\n\\[ \\int_A \\varphi_{i\\lambda}(a) b(a) da \\]\n(with a smooth cutoff $b(a)$)\\,, as well as bounds for twisted versions thereof, as $\\lambda$ grows in $\\mathfrak a^*$. The problems are relative analogues of the problem of bounding spherical functions considered in \\cite{duistermaat1983}.\n\nIn view of \\eqref{harishchandraformula}, one is naturally led to study the behavior of the Iwasawa $\\mathfrak a$-projection along sets of the form $kA$. More specifically, we are interested in the critical points of\n\\[ \\lambda(H(ka)) \\]\nas a function of $a \\in A$.\nThe maximal flat submanifolds of the symmetric space $G\/K$ are the images of the sets $gA$. When $G = \\operatorname{PSL}_2(\\mathbb R)$, then $G\/K$ is the hyperbolic plane, the maximal flats are the geodesics, and the motivating problem of bounding orbital integrals is solved in \\cite{Marshall2016}. It takes as input classical facts about the geometry of geodesics in the hyperbolic plane. For general semisimple $G$ we require analogous results for maximal flat submanifolds, which to our knowledge are not established. This article is concerned with them.\n\nThe article is divided into three parts, which are about the Iwasawa $N$-, $A$- and $K$- projections of the sets $gA$, in each of which we generalize geometric facts that are apparent in the case of $\\operatorname{PSL}_2(\\mathbb R)$.\n\n\\subsection{Statement of results}\n\nConsider the hyperbolic plane $\\mathbb H = \\{x+iy \\in \\mathbb C : y > 0\\}$ with its hyperbolic metric.\nThe group $\\operatorname{PSL}_2(\\mathbb R)$ acts on $\\mathbb H$ by homographies. We make the standard choice of Iwasawa decomposition: Let $N$ be the subgroup of unipotent upper triangular matrices, $A$ the subgroup of diagonal matrices and $K = \\operatorname{PSO}_2(\\mathbb R)$. Then $\\operatorname{PSL}_2(\\mathbb R) \\cong N \\times A \\times K$, the diffeomorphism being given by multiplication. The upper half-plane model naturally lends itself to describe the Iwasawa projections geometrically. The element\n\\[ g = \\begin{pmatrix}\n1 & x \\\\\n0 & 1\n\\end{pmatrix}\\begin{pmatrix}\n\\sqrt y & 0 \\\\\n0 & 1\/\\sqrt{y}\n\\end{pmatrix} \\in NA\\] sends $i$ to $x+iy$. That is, the real part of $gi$ can be identified with the $N$-projection of $g$, and the imaginary part with the $A$-projection.\n\nBy a (maximal) geodesic in $\\mathbf H$ we mean the $1$-dimensional submanifold defined by it, although we will sometimes informally speak of geodesics with an orientation. The geodesics in $\\mathbb H$ are the semicircles with centers on the horizontal axis, together with the vertical lines. The action of $\\operatorname{PSL}_2(\\mathbb R)$ on $i$ induces a diffeomorphism between $A$ and the vertical geodesic through $i$. Every geodesic is of the form $gAi$ with $g \\in \\operatorname{PSL}_2(\\mathbb R)$.\n\nObserve that the real part of every geodesic is a bounded set. We generalize this observation to semisimple Lie groups $G$, as follows.\n\n\\begin{theorem}\\label{npartbounded}Let $G$ be a connected semisimple Lie group. 
For all $g \\in G$, the $N$-projection of $gA$ is a relatively compact set.\n\\end{theorem}\n\nTheorem~\\ref{npartbounded} is proved in \\S\\ref{secNprojection} by diving into the mechanics of the Gram--Schmidt process, and showing that the orthogonalization part can be done with uniformly bounded operations. In fact, we will prove a stronger version with some uniformity, which is Theorem~\\ref{npartboundeduniform}. The uniform version requires to partition the set of all maximal flats, which is naturally identified with $G\/N_G(A)$, into a Zariski open `generic' set and several lower-dimensional `exceptional' sets. This partition generalizes the distinction between semicircles and vertical geodesics in the upper half plane $\\mathbb H$, in which case the semicircles form the generic set. The partition is not dependent on any choice of model for the symmetric space $G\/K$, but inherent to the choice of Iwasawa decomposition of $G$. Some of the lower-dimensional sets come from semistandard Levi subgroups, and it is no surprise that they are exceptional. But in general there are other exceptional sets, and the partition remains quite mysterious.\n\nThe second group of results concerns the Iwasawa $A$-projection. In the case of $G = \\operatorname{PSL}_2(\\mathbb R)$, define the height of a point in $\\mathbb H$ to be its imaginary part. The heights of the points of a geodesic $gAi$ form a bounded-from-above set precisely when the geodesic is a semicircle. In that case, the height is maximized at a unique critical point, which is the midpoint of the semicircle, and the height tends to $0$ at infinity on the geodesic. Such a critical point exists if and only if $gAi$ is not vertical, meaning that $g \\notin N \\cdot N_G(A)$.\n\nFor a general connected semisimple Lie group $G$, we prove the following.\n\n\\begin{theorem}\\label{aresultsallintro}Let $\\lambda \\in \\mathfrak a^*$ be an element that is positive with respect to the choice of Iwasawa decomposition, non-singular and which does not lie in a proper subspace spanned by roots. \n\\begin{enumerate}[label = (\\roman*)]\n\\item For all $g \\in G$ the ``height function''\n\\begin{align*}\nh_{\\lambda, g} : A & \\to \\mathbb R \\\\\na &\\mapsto \\lambda(H(ga))\n\\end{align*}\nhas at most one critical point. If it exists, it is non-degenerate and maximizes $h_{\\lambda, g}$. \n\\item The set of $g \\in G$ for which a given $a \\in A$ is a critical point of $h_{\\lambda, g}$, is a non-empty smooth submanifold of codimension $\\dim(A)$.\nThe set of $g \\in G$ for which $h_{\\lambda, g}$ has a critical point, is open.\n\\end{enumerate}\n\\end{theorem}\n\nWe prove Theorem~\\ref{aresultsallintro} in \\S\\ref{secAprojection}. The proof is quite technical and occupies the larger part of this article. Regarding the second part, note that it is by no means obvious that $h_{\\lambda, g}$ has a critical point for even a single $g$, because the domain $A$ is noncompact. The way in which we prove existence is by varying $g \\in K$, and realizing the elements $g$ with a given critical point as the minima of a certain smooth function on the compact group $K$. It is likely that the set of $g \\in G$ for which $h_{\\lambda, g}$ has a critical point is in fact dense and that this can be proved using a very different argument, which is related to Theorem~\\ref{npartbounded}; see Remark~\\ref{atonegativechamber}.\n\n\\begin{remark}\nThe critical points in Theorem~\\ref{aresultsallintro} can be thought of as giving the midpoints of the flat $gA \\subset S$. 
It is in general too much to hope that $H(ga)$ has a critical point as a function of $a$. That is, the critical points can depend on $\\lambda \\in \\mathfrak a^*$. (See Example~\\ref{examplemidpointdepends}.) We will not use the term ``midpoint'', all the more because we have found no way to generalize the observation that for $\\operatorname{PSL}_2(\\mathbb R)$, the critical point corresponds to the center of the semicircle $gAi \\subset \\mathbb H$.\\end{remark}\n\n\\begin{remark}In Theorem~\\ref{aresultsallintro}, many things break down when $\\lambda$ is singular, lies in a proper subspace spanned by roots or is nonpositive: the nondegeneracy, uniqueness, and existence of critical points. Regarding non-degeneracy, see Remark~\\ref{hessiancriticalsometimesdef}. For non-uniqueness and non-existence, see \\S\\ref{sectionexistence}.\\end{remark}\n\n\nFinally, we turn our attention to the Iwasawa $K$-projection. The Lie group $\\operatorname{PSL}_2(\\mathbb R)$ is naturally identified with the unit tangent bundle $T^1(\\mathbb H)$ via its action on the vertical vector at the base point, $(i, (0, 1))$. The Iwasawa $K$-projection of an element $g$ corresponds to a choice of direction at the point $gi$. The elements of the Weyl group $N_K(A)$ correspond to the vertical directions, up and down. As we approach infinity along a geodesic, the tangent line to the geodesic tends to a vertical one.\n\nWe generalize this observation as follows.\n\n\\begin{theorem}\\label{KparttoMcentralizer}Let $G$ be a connected semisimple Lie group. Let $g \\in G$ and $H \\in \\mathfrak{a}$. Then the $K$-projection of $g e^{tH}$ tends to $N_K(A) Z_{K}(H)$ as $t \\to + \\infty$.\n\\end{theorem}\n\nTheorem~\\ref{KparttoMcentralizer} is proved by projecting the $K$-projection of the geodesic flow down to the Lie algebra in a specific way and realizing it as the flow of a vector field. The resulting dynamical system is quite mysterious, but the asymptotic behavior of individual orbits can be well understood. It might seem that Theorem~\\ref{KparttoMcentralizer} is a statement about individual geodesics rather than maximal flats, but it is possible to formulate statements with uniformity in the variables $g$ and $H$; see Remark~\\ref{remarkKvsNproofs}.\n\nIn the end the results on the $N$- and $K$-projections were not needed for the analysis in \\cite{Michels2022}. But they complete the picture nicely, might be useful elsewhere, and they barely fail to provide alternative proofs of parts of Theorem~\\ref{aresultsallintro}; see Remark~\\ref{atonegativechamber}. A number of things remain mysterious, in particular the apparent relation between the partition in Theorem~\\ref{npartboundeduniform} on the $N$-projection, and the dynamical system used in the proof of Theorem~\\ref{KparttoMcentralizer}; see Remark~\\ref{remarkKvsNproofs}.\n\n\\section{Preliminaries}\n\n\\label{preliminariesLiegroups}\n\n\\subsection{Lie groups and Lie algebras}\n\n\\label{notationliegroups}\n\nLet $G$ be a reductive Lie group in the sense of Harish-Chandra \\cite{harishchandra1975}; see also \\cite[Chapter VII]{Knapp2002}. Most of the time $G$ will be connected semisimple with finite center and we will then simply say $G$ is semisimple. Let $K \\subset G$ be a maximal compact subgroup and $\\theta$ an involution of $G$ whose fixed point set is $K$. It induces an involution $\\theta$ of the Lie algebra $\\mathfrak g$, whose $+1$ and $-1$ eigenspaces we denote by $\\mathfrak k$ and $\\mathfrak p$ respectively. 
We denote the exponential of $X \\in \\mathfrak g$ by $\\exp(X)$. Define the semisimple part $\\mathfrak{g}_{ss} = [\\mathfrak g, \\mathfrak g]$. We extend the Killing form on $\\mathfrak{g}_{ss}$ to a nondegenerate symmetric bilinear form $\\langle \\cdot, \\cdot \\rangle$ on $\\mathfrak{g}$ that is positive definite on $\\mathfrak{k}$ and negative definite on $\\mathfrak{p}$, and with respect to which the center $\\mathfrak{z}(\\mathfrak{g})$ is orthogonal to $\\mathfrak{g}_{ss}$. Define $\\langle \\cdot, \\cdot \\rangle_{\\theta} = \\langle \\cdot, - \\theta(\\cdot) \\rangle$, a positive definite symmetric bilinear form. All statements on $\\mathfrak{g}$ involving norms, orthogonality and adjoints will be with respect to $\\langle \\cdot, \\cdot \\rangle_{\\theta}$. Let $\\mathfrak a \\subset \\mathfrak p$ be a maximal abelian subalgebra and $A = \\exp(\\mathfrak a)$. The choices of $\\mathfrak a$ are all conjugate under $K$. Define $P = \\exp(\\mathfrak p)$. The multiplication map $P \\times K \\to G$ is a diffeomorphism, known as the Cartan decomposition.\n\n\n\\subsection{Symmetric spaces and maximal flats}\n\n\\label{propsmaximalflatssection}\n\nReferences for the following facts about symmetric spaces are \\cite{Eberlein1996, Helgason1978}. \n\nAssume here that $G$ is semisimple. Then $\\theta$ is a Cartan involution. The quotient $S = G\/K$ carries a left-$G$-invariant Riemannian metric induced by the Killing form on $\\mathfrak p$. It is a symmetric space of non-compact type, and every such space arises in this way.\n\nThe maximal flat submanifolds of $S$ are of the form $gAK$ with $g\\in G$. Such $g$ is uniquely determined by the submanifold up to multiplication on the right by $N_G(A)$. When $\\dim(A) = 1$, the maximal flats are precisely the geodesics. The rank of $G$ is defined to be $\\dim(A)$.\n\nLet $p \\in P$. The tangent space $T_{pK}S$ is identified with $\\mathfrak p$, using left multiplication by $p^{-1}$. Take $X \\in \\mathfrak p$ with norm $1$. The geodesic through $pK \\in S$ with tangent vector $X$ has equation $t \\mapsto p e^{tX}K$. A geodesic is called regular when a nonzero tangent vector at any (and hence every) point is a regular element of $\\mathfrak g$. It is called singular otherwise. A geodesic is regular if and only if it lies in a unique maximal flat.\n\n\n\\subsection{Iwasawa decomposition}\n\n\\label{secroots}\n\nLet $\\Sigma$ be the set of restricted roots of $\\mathfrak{a}$ in $\\mathfrak{g}$. By convention, $0 \\notin \\Sigma$. We denote by $\\mathfrak{g}_{\\alpha}$ the root space of a root $\\alpha \\in \\Sigma$, by $m(\\alpha) = \\dim(\\mathfrak g_\\alpha)$ its multiplicity and by $H_\\alpha \\in \\mathfrak a$ the element corresponding to $\\alpha$ under the isomorphism $\\mathfrak a \\cong \\mathfrak a^*$ given by $\\langle \\cdot, \\cdot \\rangle$. Fix a set of positive roots $\\Sigma^{+}$ with basis $\\Pi$. Let $\\mathfrak n \\oplus \\mathfrak a \\oplus \\mathfrak k$ and $N \\times A \\times K$ be the corresponding Iwasawa decompositions of $\\mathfrak g$ and $G$. Define $M = Z_{K}(A)$ and $M' = N_{K}(A)$ and denote by $\\mathfrak{m}$ the Lie algebra of $M$.\n\nDenote the Lie algebra Iwasawa projections by $E_{\\mathfrak{n}}$, $E_{\\mathfrak{a}}$ and $E_{\\mathfrak{k}}$. 
We have the orthogonal restricted root space decomposition\n\\begin{equation}\\label{rootspacedecomp}\n\\mathfrak{g} = \\mathfrak{a} \\oplus \\mathfrak{m} \\oplus \\bigoplus_{\\alpha \\in \\Sigma} \\mathfrak{g}_{\\alpha} \\,.\n\\end{equation}\nDenote the projection onto $\\mathfrak g_{\\alpha}$ by $R_\\alpha$.\n\nDenote the Iwasawa projections from $G$ onto $N$, $A$ and $K$ by $n$, $a$ and $\\kappa$. Define the height $H(g) = \\log(a(g)) \\in \\mathfrak{a}$, the logarithm being the Lie logarithm on $A$.\n\nAll choices of the data $(K, A, N)$ are conjugate by an element of $G$. When $G = \\operatorname{GL}_n(\\mathbb R)$ or $\\operatorname{SL}_n(\\mathbb R)$ we make all the standard choices: $K = \\operatorname{O}_n(\\mathbb R)$ respectively $\\operatorname{SO}_n(\\mathbb R)$, $A$ is the connected component of the diagonal subgroup and $N$ is the upper triangular unipotent subgroup.\n\n\n\n\\subsection{Centralizers}\n\\label{defgenericset}\n\n\\label{definitionlevis}\nDenote by $\\mathcal{L}$ the set of centralizers in $G$ of subgroups of $A$. They are the standard Levi subgroups of semistandard parabolic subgroups of $G$. We will denote such a centralizer typically by $L$. It is again reductive and inherits all the data as in the beginning of \\S\\ref{notationliegroups} and \\S\\ref{secroots} from $G$ in the natural way. We allow $G$ to be reductive because we will occasionally need to apply results to Levis $L \\in \\mathcal{L}$. When $L \\in \\mathcal{L}$ with Lie algebra $\\mathfrak{l}$, define $\\mathfrak{a}^{L} = \\mathfrak{l}_{ss} \\cap \\mathfrak{a}$ and $\\mathfrak{a}_{L} = \\mathfrak{z}(\\mathfrak{l}) \\cap \\mathfrak{a}$. Then $\\mathfrak{a} = \\mathfrak{a}^{L} \\oplus \\mathfrak{a}_{L}$ orthogonally.\nThe set $\\mathcal L$ contains $A$ and $G$, and when $G$ is semisimple we have $\\mathfrak a^A = \\mathfrak a_G = 0$ and $\\mathfrak a_A = \\mathfrak a^G = \\mathfrak a$.\n\n\nDefine the positive Weyl chamber $\\mathfrak{a}^{+} = \\{ H \\in \\mathfrak{a} : \\forall \\alpha \\in \\Sigma^{+} : \\alpha(H) > 0 \\}$, the regular set\n\\[ \\mathfrak{a}^{\\operatorname{reg}} = \\mathfrak{a} - \\bigcup_{L \\neq M} \\mathfrak{a}_{L} = \\mathfrak{a} - \\bigcup_{\\alpha \\in \\Sigma} \\ker(\\alpha) \\,, \\]\nand the generic set\n\\[ \\mathfrak{a}^{\\operatorname{gen}} = \\mathfrak{a}^{\\operatorname{reg}} - \\bigcup_{L \\neq G} \\mathfrak{a}^{L} \\,. \\]\nCombined superscripts correspond to intersections: $\\mathfrak{a}^{\\operatorname{gen}, +} = \\mathfrak{a}^{\\operatorname{gen}} \\cap \\mathfrak{a}^{+}$.\n\nWe also define $(\\mathfrak a^*)^{\\operatorname{reg}}$, $(\\mathfrak a^*)^{\\operatorname{gen}}$ and $(\\mathfrak a^*)^{+}$ to be the corresponding subsets under the isomorphism $\\mathfrak a \\cong \\mathfrak a^*$ defined by the Killing form. When $H \\in \\mathfrak a$ corresponds to $\\lambda \\in \\mathfrak a^*$ under the isomorphism given by the Killing form, then $H \\in \\mathfrak a^{\\operatorname{reg}}$ if and only if $\\lambda$ is not orthogonal to any roots, and $H \\in \\mathfrak a^{\\operatorname{gen}}$ if and only if $\\lambda$ is in addition not contained in a proper subspace spanned by roots.\n\nWe will frequently use the following lemma, so we take care to properly reference it.\nWe will frequently use the following lemmas, so we take care to properly reference them.\n\n\\begin{lemma} \\label{conjugatetoAimplieslevi}Let $g \\in G$ and $H \\in \\mathfrak{a}$. 
If $\\operatorname{Ad}_{g}(H) \\in \\mathfrak{a}$, then $g \\in M' Z_{G}(H)$.\\end{lemma}\n\n\\begin{proof}This is stated in \\cite[\\S 5, Lemma~1]{harishchandra1975}. When $g \\in K$, the statement gives precisely the degree of uniqueness in the $KAK$ decomposition of $G$, and a proof can be found in \\cite[Lemma 7.38]{Knapp2002}. The general case can be reduced to $g \\in K$ as follows. Write $g = kp$ in the Cartan decomposition. Then $\\operatorname{Ad}_{p}(H) \\in \\operatorname{Ad}_{k^{-1}}(\\mathfrak{a}) \\subset \\mathfrak{p}$, and \\cite[\\S V.24.C, Proposition 1]{borel1991} implies that $p \\in Z_{G}(H)$. Then $\\operatorname{Ad}_{k}(H) \\in \\mathfrak{a}$, and the conclusion follows from the $g \\in K$ case.\n\\end{proof}\n\n\\begin{lemma}\\label{truelemma}Let $g \\in G$ and $H \\in \\mathfrak{a}$. If $\\operatorname{Ad}_{g}(H) \\in \\mathfrak m \\oplus \\mathfrak{a}$, then $g \\in M' Z_{G}(H)$.\\end{lemma}\n\n\\begin{proof}If we can show that $\\operatorname{Ad}_g(H) \\in \\mathfrak a$, the claim follows from Lemma~\\ref{conjugatetoAimplieslevi}.\n\nConsider the adjoint embedding $\\operatorname{ad} : \\mathfrak{g} \\to \\mathfrak{sl}(\\mathfrak{g})$. Equip $\\mathfrak g$ with any orthonormal basis compatible with the restricted root space decomposition \\eqref{rootspacedecomp}. In such a basis, $\\operatorname{ad}$ sends elements of $\\mathfrak{a}$ to diagonal matrices and elements of $\\mathfrak{k}$ to antisymmetric matrices.\n\nWrite $\\operatorname{Ad}_g(H) = X + H'$ with $X \\in \\mathfrak{m}$ and $H' \\in \\mathfrak a$. In the chosen basis, $\\operatorname{ad}_{ H'}$ is diagonal with real eigenvalues, and the antisymmetric matrix $\\operatorname{ad}_{X}$ is diagonalizable with purely imaginary eigenvalues. Because $[H, X] = 0$, the elements $\\operatorname{ad}_{ H'}$ and $\\operatorname{ad}_{X}$ are simultaneously diagonalizable, so that the eigenvalues of $\\operatorname{ad}_{ H'}+\\operatorname{ad}_{X}$ are those of $\\operatorname{ad}_{ H'}$ plus those of $\\operatorname{ad}_{X}$, in a suitable ordering. If these eigenvalues are real, it must be that $X = 0$.\n\nThis proves that $\\operatorname{Ad}_g(H) \\in \\mathfrak a$, and the lemma follows.\n\\end{proof}\n\n\\begin{lemma}We have $Z_G(A) = MA$ and $N_G(A) = M' A$.\\end{lemma}\n\n\\begin{proof}\nThe first statement follows from \\cite[Proposition 7.25]{Knapp2002}; the second statement follows by combining it with Lemma~\\ref{conjugatetoAimplieslevi}.\n\\end{proof}\n\n\n\\subsection{Derivatives}\n\n\\label{sectionderivatives}\n\nWhen $G$ is any Lie group with Lie algebra $\\mathfrak g$ and $b$ is an element of the universal enveloping algebra $U(\\mathfrak{g})$, we denote by $L_{b}$ the corresponding left invariant differential operator on $C^{\\infty}(G)$\n. When $X \\in \\mathfrak g \\subset U(\\mathfrak g)$, by definition\n\\[ (L_Xf)(g) = \\left.\\frac d{dt}\\right\\rvert_{t=0} f( ge^{tX}) \\,.\\]\nWhen $f : M \\to N$ is a differentiable map between differentiable manifolds, denote its differential at $m \\in M$ by $(Df)_m$.\nUsing left translation we identify all tangent spaces $T_{g} G$ with $\\mathfrak{g}$. When $g \\in G$, denote by $L_{g}$ and $R_{g}$ the left and right multiplication by $g$ on $G$. With our convention on tangent spaces, we then have for all $g, h \\in G$ that\n\\begin{align}\n(D L_{g})_{h} & = \\operatorname{id} \\,, \\nonumber \\\\\n(D R_{g})_{h} & = \\operatorname{Ad}_{g^{-1}} \\,. 
\n\\end{align}\nWhen $X, Y \\in \\mathfrak{g}$ we have\n\\begin{align}\nL_{X} \\operatorname{Ad}_{g}(Y) & = \\operatorname{Ad}_{g}([X, Y]) \\,, \\label{derivativeAd} \\\\\nL_{X} \\operatorname{Ad}_{g^{-1}}(Y) & = - [X, \\operatorname{Ad}_{g^{-1}}(Y)] \\label{derivativeAdinverse} \\,.\n\\end{align}\n\nAssume now that $G$ is reductive as in the beginning of \\S \\ref{notationliegroups}.\n\n\n\\begin{lemma} \\label{computationAfirstderivative} \\label{differentialkpart}The differentials of $n$, $a$ and $\\kappa$ at $g \\in G$ are as follows:\n\\begin{align*}\n(D n)_{g} & = \\operatorname{Ad}_{a(g)} \\circ E_{\\mathfrak{n}} \\circ \\operatorname{Ad}_{\\kappa(g)} \\,, \\\\\n(D a)_{g} & = E_{\\mathfrak{a}} \\circ \\operatorname{Ad}_{\\kappa(g)} \\,, \\\\\n(D \\kappa)_{g} & = \\operatorname{Ad}_{\\kappa(g)^{-1}} \\circ E_{\\mathfrak{k}} \\circ \\operatorname{Ad}_{\\kappa(g)} \\,.\n\\end{align*}\n\\end{lemma}\n\n\\begin{proof}Write $g = nak$ and take $X \\in \\mathfrak{g}$. For the $N$-projection, write\n\\begin{align*}\nn(ge^{X}) & = n \\cdot n(ake^{X}) \\\\\n& = n \\cdot n(a k e^{X}k^{-1}) \\\\\n& = n \\cdot (a \\cdot n(e^{\\operatorname{Ad}_{k}(X)}) \\cdot a^{-1}) \\,.\n\\end{align*}\nIn the last step, we have used that $n(ah) = a n(h) a^{-1}$ for $a \\in A$ and $h \\in G$. Therefore \n\\[ (Dn)_g = (D L_n)_e \\circ (D (\\left.\\operatorname{Ad}_a\\right\\rvert_N))_e \\circ (Dn)_e \\circ \\operatorname{Ad}_k \\,. \\] The first statement now follows from the fact that $(Dn)_{e} = E_{\\mathfrak{n}}$. The other statements are proved similarly.\nFor the $A$-projection we write\n\\[\na(ge^{X}) = a \\cdot a(ke^{X}) = a \\cdot a(e^{\\operatorname{Ad}_{k}(X)}) \n\\]\nand use that $(D a)_{e} = E_{\\mathfrak{a}}$.\nFor the $K$-projection we write\n\\[\n\\kappa(ge^{X}) = \\kappa(k e^{X}) \\\\\n= \\kappa(e^{\\operatorname{Ad}_{k}(X)}) \\cdot k\n\\]\nand use that $(D \\kappa)_{e} = E_{\\mathfrak{k}}$.\n\\end{proof}\n\nThe differential of the $A$-projection is also computed in \\cite[Lemma~6.1]{marshall2015}. Even though the proof there is for $G = \\operatorname{SL}_3(\\mathbb R)$, the argument works in general. Compare also \\cite[Corollary~5.2]{duistermaat1983}, but note that the Iwasawa decomposition used there is $KAN$ instead of $NAK$. \n\n\\begin{lemma} \\label{computationAsecondderivative} For $X, Y \\in \\mathfrak{g}$ and $g \\in G$ we have\n\\begin{align*}\n(L_{X}L_{Y} a)_g & = E_{\\mathfrak{a}}( [E_{\\mathfrak{k}}(\\operatorname{Ad}_{\\kappa(g)}(X)), \\operatorname{Ad}_{\\kappa(g)}(Y)] ) \\\\\n& = E_{\\mathfrak{a}}( [ E_{\\mathfrak{k}}(\\operatorname{Ad}_{\\kappa(g)}(X)), E_{\\mathfrak{n}}( \\operatorname{Ad}_{\\kappa(g)}(Y) ) ] ) \\\\\n& = E_{\\mathfrak{a}}( [ \\operatorname{Ad}_{\\kappa(g)}(X), E_{\\mathfrak{n}}( \\operatorname{Ad}_{\\kappa(g)}(Y) ) ] ) \\\\\n&= \\sum_{\\alpha \\in \\Sigma^+} \\langle\\theta R_{-\\alpha}(\\operatorname{Ad}_{\\kappa(g)}(X)), (R_{\\alpha} - \\theta R_{-\\alpha}) ( \\operatorname{Ad}_{\\kappa(g)}(Y) ) \\rangle_\\theta \\cdot H_\\alpha \\,.\n\\end{align*}\n\\end{lemma}\n\n\\begin{proof}\nSimilar to the proof of \\cite[Lemma~6.1]{duistermaat1983}. Alternatively, by Lemma~\\ref{computationAfirstderivative} we find\n\\[ L_{Y} a(g) = E_{\\mathfrak{a}} (\\operatorname{Ad}_{\\kappa(g)}(Y) ) \\,. 
\\]\nUsing the chain rule, \\eqref{derivativeAd} and Lemma~\\ref{differentialkpart} to compute $(D \\kappa)_{g}$ we have\n\\begin{align*}\n L_{X} L_{Y} a(g) & = E_{\\mathfrak{a}} (\\operatorname{Ad}_{\\kappa(g)}([(D \\kappa)_{g} (X), Y]) ) \\\\\n& = E_{\\mathfrak{a}}( [E_{\\mathfrak{k}}(\\operatorname{Ad}_{\\kappa(g)}(X)), \\operatorname{Ad}_{\\kappa(g)}(Y)] ) \\,.\n\\end{align*}\nThis is the first equality. The other equalities follow as in \\cite[Lemma~6.1]{duistermaat1983}.\n\\end{proof}\n\n\\section{The \\texorpdfstring{$N$}{N}-projection and the Gram--Schmidt process}\n\\label{secNprojection}\n\nUnless otherwise specified, $G$ is a semisimple Lie group as in \\S\\ref{notationliegroups}. In this subsection we prove Theorem~\\ref{npartbounded} and the stronger Theorem~\\ref{npartboundeduniform}.\n\n\nRecall that the Iwasawa decomposition for $\\operatorname{SL}_n(\\mathbb R)$ is nothing but the Gram--Schmidt process: Take $g \\in \\operatorname{SL}_n(\\mathbb R)$. There is a unique $n \\in N$ such that the rows of $n^{-1}g$ are orthogonal for the Euclidean inner product on $\\mathbb R^{n}$. There is a unique $a \\in A$ such that the rows of $k := a^{-1}n^{-1}g$ have norm $1$. The Iwasawa decomposition of $g$ is then $nak$.\n\n\\subsection{Intro: \\texorpdfstring{$\\operatorname{SL}_2(\\mathbb R)$}{SL2(R)}}\n\n\\label{npartsltwo}\n\nWe first prove Theorem~\\ref{npartbounded} when $G = \\operatorname{SL}_2(\\mathbb R)$. This is quite trivial but already gives a good idea of what is happening.\n\n\n\\begin{proof}[Proof of Theorem~\\ref{npartbounded} when $G = \\operatorname{SL}_2(\\mathbb R)$] Write $g = \\begin{psmallmatrix}\nv \\\\w\n\\end{psmallmatrix}$, and view $v, w \\in \\mathbb R^2$ as row vectors. Let $y > 0$. Multiplication on the right by $a = \\operatorname{diag}(y, y^{-1}) \\in A$ corresponds to letting the matrix $a$ act on $v$ and $w$. The projection $n(ga)$ is the matrix $\\begin{psmallmatrix}\n1 & x \\\\\n0 & 1\n\\end{psmallmatrix}$ where\n\\[ x = \\frac{\\langle va, wa \\rangle}{\\left\\langle wa, wa \\right\\rangle} \\,. \\]\n It is the unique matrix $n \\in N$ for which the rows of $n^{-1}ga$ are orthogonal. We must show that $x$ is bounded. In the generic case where both coordinates of $w$ in the standard basis are nonzero, the denominator $\\left\\langle wa, wa \\right\\rangle$ is $\\asymp \\max(y^2, y^{-2})$, because $a$ acts by $y$ in one coordinate and by $y^{-1}$ in the other. The numerator is always $\\ll \\max(y^2, y^{-2})$, for the same reason. Therefore in this generic case, $x$ is bounded and the claim follows. When a coordinate of $w$ vanishes, the corresponding term in the denominator $\\left\\langle wa, wa \\right\\rangle$ is nonzero, but the same is then true in the numerator. So the same bounds hold, with $\\max(y^2, y^{-2})$ replaced by $y^{2}$ or $y^{-2}$, depending on which coordinate of $w$ vanishes. The conclusion follows.\n\\end{proof}\n\n\\subsection{\\texorpdfstring{$\\operatorname{SL}_n(\\mathbb R)$}{SLn(R)}, exterior powers}\n\n\\label{npartsln}\n\nFor $\\operatorname{SL}_n(\\mathbb R)$ the same phenomenon occurs, where if a term in a denominator of a certain fraction vanishes, then so does the corresponding term in the numerator. The denominators will here be norms of orthogonal projections onto the orthocomplement of previous vectors, and they are most conveniently expressed using norms on exterior powers\n\nWhen $(V, b)$ is a bilinear space of finite dimension over a field, $b$ can be identified with a linear map $V \\to V^*$. 
It induces a linear map $\\bigwedge^k V \\to \\bigwedge^k V^* \\cong (\\bigwedge^k V)^*$ for all $k \\geq 0$, where the identification $\\bigwedge^k V^* \\cong (\\bigwedge^k V)^*$ comes from the natural pairing between $\\bigwedge^k V$ and $\\bigwedge^k V^*$. We get a natural bilinear form $\\bigwedge^k b$ on $\\bigwedge^k V$. Assume now that the field is $\\mathbb R$, and that $b$ is symmetric positive definite. Then so is $\\bigwedge^k b$. We denote all induced bilinear forms by $\\langle \\cdot, \\cdot \\rangle$ for convenience. If $(e_i)$ is an orthonormal basis of $V$, then an induced basis of $\\bigwedge^k V$ consisting of elements of the form $e_{j_1} \\wedge \\cdots \\wedge e_{j_k}$ is also orthonormal. As a consequence, if $v, w_1, \\ldots, w_k \\in V$ with the $w_i$ linearly independent, $W = \\operatorname{span}(w_i)$ and $\\operatorname{pr}_{W^\\perp}$ denotes the orthogonal projection onto $W^\\perp$, then\n\\begin{equation}\\label{normprojwedgesingle}\n\\lVert \\operatorname{pr}_{W^\\perp}(v) \\rVert = \\frac{\\left\\lVert v \\wedge \\left(\\bigwedge_{i=1}^k w_i\\right)\\right\\rVert}{\\left\\lVert \\bigwedge_{i=1}^k w_i \\right\\rVert} \\,,\n\\end{equation}\nwhere the norms are those induced on $V$, $\\bigwedge^{k+1} V$ and $\\bigwedge^{k} V$. More generally, for $v_1, v_2 \\in V$ we have\n\\begin{equation}\\label{orthprojectionnormintermsofwedge}\n\\left\\langle \\operatorname{pr}_{W^\\perp}(v_1), \\operatorname{pr}_{W^\\perp}(v_2) \\right\\rangle = \\frac{\\left\\langle v_1 \\wedge \\left(\\bigwedge_{i=1}^k w_i\\right), v_2 \\wedge \\left(\\bigwedge_{i=1}^k w_i\\right) \\right\\rangle}{\\left\\lVert \\bigwedge_{i=1}^k w_i \\right\\rVert^2} \\,.\n\\end{equation}\nWe also have the $i$th coefficient of $\\operatorname{pr}_W(v)$ in the basis $(w_i)$ is given by\n\\begin{equation}\\label{orthoprojectioncoeffintermsofwedge}\n\\frac{\\left \\langle w_1 \\wedge \\cdots \\wedge \\widehat{w_i} \\wedge v\\wedge \\cdots \\wedge w_{k}, \\bigwedge_{i=1}^k w_i \\right \\rangle}{\\left\\lVert \\bigwedge_{i=1}^k w_i \\right\\rVert^2} \\,,\n\\end{equation}\nwhere the hat denotes omission. \n\n\\begin{proof}[Proof of Theorem~\\ref{npartbounded} when $G = \\operatorname{SL}_n(\\mathbb R)$] Write $g$ as a column of row vectors: $g = (v_1, \\ldots, v_n)^T$ with the $v_i \\in \\mathbb R^n$, and view $a \\in A$ as acting on $\\mathbb R^n$ diagonally. Define $V_i = \\operatorname{span}(v_{i}, \\ldots, v_n)$, so that $V_{n+1} = \\{0\\}$. Define $w_i^a$ to be the $i$th row of $n(ga)^{-1}ga$. Equip $\\mathbb R^n$ with the Euclidean inner product. The first step in the Gram--Schmidt process applied to $ga$ is the recurrence relation\n\\begin{align*}\nw_{n}^a &= v_n a\\,, \\\\\nw_{i-1}^a &= \\operatorname{pr}_{(V_{i}a)^\\perp} (v_{i-1}a) \\\\\n&= v_{i-1}a - \\sum_{j=i}^n \\frac{\\langle v_{i-1}a, w_j^a \\rangle}{\\langle w_j^a, w_j^a \\rangle }w_j^a \\qquad (i = n, \\ldots, 2) \\,.\n\\end{align*}\nIn the above sum, the coefficient in the term with index $j$ is the $(i-1,j)$th entry of $n(ga)$. We must show that those coefficients are bounded independently of $a$. 
For such a coefficient, with $i \\leq j$, we have\n\\begin{align*}\n\\frac{\\langle v_{i-1}a, w_j^a \\rangle}{\\langle w_j^a, w_j^a \\rangle }\n&= \\frac{\\left\\langle \\operatorname{pr}_{(V_{j+1}a)^\\perp}(v_{i-1}a), \\operatorname{pr}_{(V_{j+1}a)^\\perp}(v_ja)\\right\\rangle}{\\left\\lVert \\operatorname{pr}_{(V_{j+1}a)^\\perp}(v_ja)\\right\\rVert^2} \\\\\n&= \\frac{\\left\\langle v_{i-1}a \\wedge \\left(\\bigwedge_{k = j+1}^n v_k a\\right), \\bigwedge_{k = j}^n v_k a \\right\\rangle}{\\left\\lVert \\bigwedge_{k = j}^n v_k a \\right\\rVert^2} \\,,\n\\end{align*}\nwhere we have used \\eqref{orthprojectionnormintermsofwedge}.\nThe action of $A$ on $\\bigwedge^{n-j+1} \\mathbb R^n$ is still diagonal in an induced basis. Fix such a basis and write $a = \\operatorname{diag}(a_1, \\ldots, a_K)$ in that basis. If $c_1, \\ldots, c_K$ denote the coordinates of $v_{i-1} \\wedge \\left(\\bigwedge_{k = j+1}^n v_k \\right)$ and $d_1, \\ldots, d_K$ those of $\\bigwedge_{k = j}^n v_k$, then by the triangle inequality\n\\begin{align}\\label{Nprojtriangleinequality}\n\\begin{split}\n\\left\\lvert\\frac{\\langle v_{i-1}a, w_j^a \\rangle}{\\langle w_j^a, w_j^a \\rangle }\\right\\rvert &= \\left\\lvert\\frac{\\sum_{k=1}^K a_k^2 c_k d_k}{\\sum_{k=1}^K a_k^2 d_k^2} \\right\\rvert \\\\\n&\\leq \\max_{d_k \\neq 0} \\left\\lvert \\frac{c_kd_k}{d_k^2}\\right\\rvert \\,.\n\\end{split}\n\\end{align}\nThis bound does not depend on $a$, so that the entries of $n(ga)$ are bounded.\n\\end{proof}\n\n\\subsection{Semisimple groups}\n\n\\label{Npartsemisimplegroup}\n\nNow let $G$ be a semisimple Lie group. There is no real reason to assume that $G$ has finite center, and Theorem~\\ref{npartbounded} is insensitive to central extensions in any case, but we do so because our setup in \\S\\ref{notationliegroups} is under this assumption. Let $\\rho : G \\to \\operatorname{GL}(V)$ be any finite-dimensional representation with discrete kernel, such as the adjoint representation $\\operatorname{Ad} : G \\to \\operatorname{GL}(\\mathfrak g)$, or the standard representation $\\operatorname{Std}$ if $G$ is already linear. The fact that $\\rho$ can be arbitrary is not essential, but it leads to questions about uniformity.\n\n\\begin{remark}Necessarily $\\rho(G) \\subset \\operatorname{SL}(V)$, because $\\mathfrak g = [\\mathfrak g, \\mathfrak g]$ consists of commutators. Note that $\\rho(G)$ is automatically closed, by \\cite[Proposition 7.9]{Knapp2002} or alternatively by properties of Malcev closure \\cite[\\S1.4.2, Theorem 3]{onishchik1980}.\\end{remark}\n\n\\begin{lemma}\\label{compatibleiwasawa}In a suitable basis of $V$, the groups $\\rho(N)$, $\\rho(A)$ and $\\rho(K)$ are contained in the standard Iwasawa components of $\\operatorname{GL}(V)$.\\end{lemma}\n\n\\begin{proof}As in the proof of \\cite[Proposition 7.9]{Knapp2002}, there is a basis of $V$ such that $\\rho(K) \\subset \\operatorname{SO}_n(\\mathbb R)$ and $\\rho(P)$ consists of symmetric matrices. Because $A \\subset P$ is commutative, by acting on the basis by a suitable element of $\\operatorname{SO}_n(\\mathbb R)$ we may assume that $\\rho(A)$ is diagonal. Then the basis consists of restricted weight vectors. Sorting them by non-increasing weight ensures that $\\rho(N)$ is upper triangular unipotent.\n\\end{proof}\n\nEquip $V$ with an Iwasawa-compatible basis given by Lemma~\\ref{compatibleiwasawa} and denote the corresponding Iwasawa decomposition of $\\operatorname{SL}(V)$ by $N' A' K'$. Denote the $N'$-projection on $\\operatorname{SL}(V)$ by $n'$. 
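To fix ideas, consider $\\rho = \\operatorname{Ad}$ for $G = \\operatorname{SL}_2(\\mathbb R)$: the standard $\\mathfrak{sl}_2$-triple $e, h, f$ consists of restricted weight vectors of weights $2, 0$ and $-2$ (in the usual normalization), and after ordering these by non-increasing weight and rescaling them to be orthonormal for $\\langle \\cdot, \\cdot \\rangle_{\\theta}$, one obtains a basis as in Lemma~\\ref{compatibleiwasawa}: in it, $\\operatorname{Ad}(N)$ is upper triangular unipotent, $\\operatorname{Ad}(A)$ is diagonal and $\\operatorname{Ad}(K)$ consists of orthogonal matrices.\n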
\n\n\\begin{proof}[Proof of Theorem~\\ref{npartbounded}] By the case $G = \\operatorname{SL}_n(\\mathbb R)$ proved in \\S\\ref{npartsln}, the projection $n'(\\rho(gA)) \\subset n'(\\rho(g) A')$ is relatively compact in $N'$. It is contained in the closed set $\\rho(N)$, and therefore relatively compact in $\\rho(N)$. Because $\\rho$ has central kernel contained in $K$, it induces an isomorphism $N \\to \\rho(N)$. Therefore $n(gA) \\cong n'(\\rho(gA))$ is relatively compact in $N$.\n\\end{proof}\n\n\\subsection{Uniformity}\n\n\\label{npartpartitionnoncanonical}\n\nFor $G = \\operatorname{SL}_2(\\mathbb R)$ it is apparent from the proof in \\S\\ref{npartsltwo} that uniformity holds in Theorem~\\ref{npartbounded} when the entries of $g$ are bounded and both entries on the second row of $g \\in G$ are bounded away from $0$. We generalize this and partition any semisimple group $G$ into subsets for which uniformity holds on compact subsets. We briefly describe one such partition here and state a result about a more optimal partition in \\S\\ref{npartcanonicalpartitions}.\n\nWe use the notation from \\S\\ref{Npartsemisimplegroup}. Let $d = \\dim(V)$ and let $\\langle \\cdot, \\cdot \\rangle$ be the Euclidean inner product for the chosen basis of $V$. For $\\lambda \\in \\mathfrak a^*$ denote by $V_\\lambda$ the restricted weight space. To prove Theorem~\\ref{npartbounded}, instead of reducing to the case of $\\operatorname{SL}_n(\\mathbb R)$ as in \\S\\ref{Npartsemisimplegroup}, one can also directly use the argument in \\S\\ref{npartsln}, but restricted to elements $g \\in \\rho(G)$, and this leads to a statement with uniformity. For every tuple $\\mathcal S = (\\mathcal S_1, \\ldots,\\mathcal S_d)$ ($\\mathcal S$ for support) of sets of integral weights of $\\mathfrak g$, we may consider the subset $\\Omega_{\\mathcal S}$ of $G$ that consists of the elements $g$ with the following property: For every $j \\in \\{1, \\ldots, d \\}$, the wedge product of every last $j$ rows of $\\rho(g)$, as an element of $\\bigwedge^j V$, has a nonzero component precisely along the weight spaces $(\\bigwedge^j V)_\\lambda$ with $\\lambda \\in \\mathcal S_j$. It is then apparent that the upper bound analogous to \\eqref{Nprojtriangleinequality} is uniform for $g$ in compact subsets of $\\Omega_{\\mathcal S}$, because the coefficients $d_k$ (which are now norms of weight space projections) are either zero or bounded away from zero.\n\n\\subsection{Canonical partitions}\n\n\\label{npartcanonicalpartitions}\n\nThe sets $\\Omega_{\\mathcal S}$ in \\S\\ref{npartpartitionnoncanonical} depend on the choice of basis of weight vectors of $V$. Moreover, they may form a partition of $G$ that is unnecessarily fine for the uniformity statement to be true; this can happen when the weight spaces of $V$ are not 1-dimensional. It is desirable to construct a basis-independent partition, which is what we do now, and which we expect to be the coarsest possible.\n\nWe continue to use the notation from \\S\\ref{Npartsemisimplegroup} and consider the right action of $\\operatorname{GL}(V)$ on $V$ by transposition: $vg := g^T v$. That is, in the chosen basis of $V$, the rows of $g \\in \\operatorname{GL}(V)$ are the images of the basis elements under the right action of $g$. (This awkward definition is an artifact of our decision to write Iwasawa decompositions as $NAK$ rather than $KAN$.) 
Note that for $g \\in G$ we have $\\rho(g)^T = \\rho(\\theta(g))^{-1}$, because $\\rho(A)$ consists of diagonal matrices and $\\rho(K) \\subset \\operatorname{SO}_n(\\mathbb R)$. That is, while the right action of $\\operatorname{GL}(V)$ by transposition depends on the choice of basis, the restriction to $G$ depends only on the choice of Iwasawa decomposition of $G$ (and in fact, only on $A$ and $K$).\n\nWe will prove Theorem~\\ref{npartboundeduniform} below in a way similar to the argument sketched in \\S\\ref{npartpartitionnoncanonical}, but using a block-by-block rather than a row-by-row orthogonalization process, where we transform a matrix so that a block of rows (corresponding to a weight space) is orthogonal to previous blocks of rows. That the Iwasawa decomposition corresponds to such a process, is the content of the following lemma.\n\nFor integral weights $\\lambda, \\mu$ of $\\mathfrak g$, we write $\\mu < \\lambda$ if $\\lambda - \\mu$ is nonzero and is a nonnegative integer linear combination of positive roots.\n\n\\begin{lemma}\\label{nprojectionintermsoforthogonalweight}Let $g \\in G$ and $n' = n'(\\rho(g))$. Then for distinct weights $\\lambda, \\mu$ of $V$ we have $V_\\lambda n'^{-1} \\rho(g)\\perp V_\\mu n'^{-1} \\rho(g)$ and $V_\\lambda (n'^{-1}-1) \\perp V_\\lambda$.\\end{lemma}\n\n\\begin{proof}We have that $n'^{-1} \\rho(g) \\in A' K'$. The first statement follows from the facts that $V_\\lambda A' = V_\\lambda$, that $V_\\lambda \\perp V_\\mu$ and that $K'$ preserves orthogonality. For the second statement, we have for $X \\in \\operatorname{Lie}(N')$ that $X^T V_\\lambda \\in \\bigoplus_{\\mu < \\lambda} V_\\mu$, and exponentiating gives that $V_\\lambda(N' -1) \\in \\bigoplus_{\\mu < \\lambda} V_\\mu$.\n\\end{proof}\n\nLet $\\Lambda$ be the sets of weights of $\\mathfrak a$ in $V$, denote by $m_\\lambda$ the multiplicity of $\\lambda \\in \\Lambda$, and define $s_\\lambda = \\sum_{\\mu < \\lambda} m_\\mu$. For any tuple $\\mathcal S = (\\mathcal S_\\lambda)_{\\lambda \\in \\Lambda}$ of sets of integral weights of $\\mathfrak g$, consider the subset $\\Omega_{\\mathcal S}$ of $G$ that consists of the elements $g$ with the following property: For every $\\lambda \\in \\Lambda$, the line $\\bigwedge^{s_\\lambda} \\bigoplus_{\\mu < \\lambda} V_\\mu \\rho(g) \\subset \\bigwedge^{s_\\lambda} V$ has nonzero orthogonal projection on the weight-$\\alpha$ spaces $(\\bigwedge^{s_\\lambda} V)_\\alpha$ for $\\alpha \\in S_\\lambda$, and zero projection when $\\alpha \\notin \\mathcal S_\\lambda$.\n\n\\begin{theorem}\\label{npartboundeduniform} Let $\\mathcal S$ be any tuple as above, and $D \\subset \\Omega_{\\mathcal S}$ compact. Then the $N$-projection $n(DA)$ is relatively compact.\n\\end{theorem}\n\nBefore proving Theorem~\\ref{npartboundeduniform}, we state some basic properties of the sets $\\Omega_{\\mathcal S}$.\n\n\\begin{proposition}\\label{coordOmegaspartitionG}The sets of the form $\\Omega_{\\mathcal S}$ (or those that are non-empty) partition $G$. They are stable on the left by $NAM$. \\end{proposition}\n\n\\begin{proof}\nThat the $\\Omega_{\\mathcal S}$ partition $G$ is clear from their definition. 
Acting on the left by $\\rho(NAM)$ on $g \\in \\operatorname{SL}(V)$ does not change the lines $\\bigwedge^{s_\\lambda} \\bigoplus_{\\mu < \\lambda} V_\\mu g$, so that the individual conditions defining $\\Omega_{\\mathcal S}$ are left-invariant under $NAM$.\n\\end{proof}\n\nFor $\\lambda \\in \\Lambda$, define $\\mathcal S^0_\\lambda$ to be the set of weights $\\alpha$ of $\\mathfrak g$ such that the line\n\\[ \\bigwedge^{s_\\lambda} \\bigoplus_{\\mu < \\lambda} V_\\mu \\rho(g) \\subset \\bigwedge^{s_\\lambda} V \\]\nhas nonzero projection onto the weight-$\\alpha$ subspace for at least one $g \\in G$. Define $\\mathcal S^0 = (\\mathcal S^0_\\lambda)_{\\lambda \\in \\Lambda}$. Then $\\Omega_{\\mathcal S^0}$ contains ``most'' elements of $G$.\n\n\\begin{proposition}\\label{omegaopendense}The set $\\Omega_{\\mathcal S^0}$ is open and dense in $G$. \\end{proposition}\n\n\\begin{proof}\n The image $\\rho(G)$ is the identity component of the real points of a real algebraic closed subgroup of $\\operatorname{SL}(V)$ \\cite[\\S3.3.3]{onishchik1980}. The set $\\rho(\\Omega_{\\mathcal S^0})$ is defined by the non-vanishing of finitely many rational functions (corresponding to projections onto weight spaces) and therefore Zariski open in $\\rho(G)$. (There are no vanishing conditions when $\\mathcal S = \\mathcal S^0$, or rather, they are satisfied on all of $\\rho(G)$.) It follows that $\\Omega_{\\mathcal S^0}$ is open and dense in $G$.\n\\end{proof}\n\n\\begin{example}Take $G = \\operatorname{SL}_2(\\mathbb R)$ and $\\rho = \\operatorname{Std}$ the standard representation. Then $\\Omega_{\\mathcal S^0} = G - NAM'$, because the only defining condition is that both entries on the bottom row are nonzero. The only other non-empty sets of the form $\\Omega_{\\mathcal S}$ are $NAM$ and $NAM \\cdot\\begin{psmallmatrix}\n0 & 1 \\\\\n-1 & 0\n\\end{psmallmatrix}$. The geodesics in $G\/K$ corresponding to elements of $\\Omega_{\\mathcal S^0}$ are semicircles. The other $\\Omega_{\\mathcal S}$ give rise to the vertical geodesics, with both orientations. One can check using an explicit computation that $\\rho = \\operatorname{Ad}$ gives the same partition of $\\operatorname{SL}_2(\\mathbb R)$.\n\\end{example}\n\n\\begin{example}\\label{omegaslthree}Take $G = \\operatorname{SL}_3(\\mathbb R)$ and $\\rho = \\operatorname{Std}$. The sets $\\Omega_{\\mathcal S}$ are determined by their $K$-projection, in view of Lemma~\\ref{coordOmegaspartitionG}. The sets $\\kappa(\\Omega_{S})$ come in all possible dimensions: The $0$-dimensional ones, which are the six right cosets of $M$ in $M'$. The $1$-dimensional ones: the three right $M'$-translates of $G \\cap (\\operatorname{O}(2) \\times \\operatorname{O}(1)) - M'$ and the three right $M'$-translates of $G \\cap (\\operatorname{O}(1) \\times \\operatorname{O}(2)) - M'$. The $2$-dimensional ones: the three right $M'$-translates of the product $(G \\cap (\\operatorname{O}(2) \\times \\operatorname{O}(1)) - M' )(G \\cap (\\operatorname{O}(1) \\times \\operatorname{O}(2)) - M')$ and the three right $M'$-translates of that product with the two factors interchanged. And finally, the dense open set $\\kappa(\\Omega_{S^0})$.\\end{example}\n\n\\begin{remark}For any $G$, the set $\\kappa(\\Omega_{S^0})$ is disjoint from the sets $LM'$ with $L \\in \\mathcal L$ a standard Levi subgroup of a semistandard parabolic; this follows already from the constraint corresponding to the minimal weights $\\lambda$. 
But in general it is smaller than just the complement of the sets of the form $K \\cap L M'$, as illustrated in Example~\\ref{omegaslthree}.\n\nIt remains unclear to us whether the partition of $G$ into the $\\Omega_{\\mathcal S}$ depends on the choice of representation $\\rho$, and if not, whether the partition is the coarsest possible (up to splitting into connected components) for uniformity to hold. It would also be desirable to have a concrete description of the partition in general, as we do have when $G = \\operatorname{SL}_3(\\mathbb R)$.\\end{remark}\n\n\n\\begin{proof}[Proof of Theorem~\\ref{npartboundeduniform}] Let $g \\in G$ and $a \\in A$. We view $n'(\\rho(ga))^{-1}$ as a block matrix: For weights $\\lambda \\geq \\mu$ of $V$, define\n\\begin{align*}\nT_{\\lambda, \\mu} : V_\\lambda &\\to V_\\mu \\\\\nv & \\mapsto \\operatorname{pr}_{V_\\mu} \\left( v \\cdot n'(\\rho(ga))^{-1} \\right)\n\\end{align*}\nand define $T_\\lambda = \\sum_{\\mu < \\lambda} T_{\\lambda, \\mu} : V_\\lambda \\to \\bigoplus_{\\mu < \\lambda} V_\\mu$. We must show that each $T_\\lambda$ is a bounded operator, uniformly in $a \\in A$ and $g$ in compact subsets of each $\\Omega_{\\mathcal S}$. By Lemma~\\ref{nprojectionintermsoforthogonalweight}, the map $T_\\lambda$ satisfies\n\\[ ((1+ T_{\\lambda}) V_\\lambda) \\rho(ga) \\perp \\bigoplus_{\\mu < \\lambda} V_\\mu \\rho(ga) \\,. \\]\nThat is, $(T_{\\lambda} v) \\rho(ga) = - \\operatorname{pr}_{\\bigoplus_{\\mu < \\lambda} V_\\mu \\rho(ga)} (v \\rho(ga) )$. (These two identities express the block-by-block orthogonalization process.) To show that $T_\\lambda$ is bounded, we may fix any basis $(b_i)_{1 \\leq i \\leq s_\\lambda}$ of $\\bigoplus_{\\mu < \\lambda} V_\\mu $ and we must show that for every $v \\in V_\\lambda$ the coordinates of the orthogonal projection of $v \\rho(ga)$ onto $\\bigoplus_{\\mu < \\lambda} V_\\mu \\rho(ga)$ in the basis $(b_i \\rho(ga))$ are bounded. By \\eqref{orthoprojectioncoeffintermsofwedge}, the $i$th coordinate is equal to\n\\begin{align*}\n\\frac{\\langle (b_1 \\wedge \\cdots \\wedge \\widehat{b_i} \\wedge v\\wedge \\cdots \\wedge b_{s_\\lambda}) \\rho(ga), (b_1 \\wedge \\cdots \\wedge b_{s_\\lambda}) \\rho(ga) \\rangle}{\\left\\lVert (b_1 \\wedge \\cdots \\wedge b_{s_\\lambda})\\rho(ga) \\right\\rVert^2} \\,,\n\\end{align*}\nwhere the hat denotes omission. We may bound this using an inequality similar to \\eqref{Nprojtriangleinequality}. Concretely, call $d_\\mu$ the projection of $(b_1 \\wedge \\cdots \\wedge b_{s_\\lambda})\\rho(g)$ onto the weight-$\\mu$ subspace of $\\bigwedge ^{s_\\lambda}V$, and $c_\\mu$ that of $(b_1 \\wedge \\cdots \\wedge \\widehat{b_i} \\wedge v\\wedge \\cdots \\wedge b_{s_\\lambda}) \\rho(g)$. Then the fraction above equals\n\\[\n\\frac{\\sum_{\\mu} \\mu(a)^2 \\langle c_\\mu, d_\\mu \\rangle}{\\sum_{\\mu} \\mu(a)^2 \\lVert d_\\mu \\rVert^2}\\,, \\]\nwhere we define the weights on $A$ using the exponential map. By the triangle inequality, this is at most\n\\[ \\max_{d_\\mu \\neq 0} \\frac{\\lvert\\langle c_\\mu, d_\\mu \\rangle \\rvert}{\\lVert d_\\mu \\rVert^2} \\,,\\]\nwhich gives a uniform upper bound for $g$ in compact subsets of each $\\Omega_{\\mathcal S}$, because the $d_\\mu$ are then either zero or are bounded away from zero.\n\\end{proof}\n\n\\section{The \\texorpdfstring{$A$}{A}-projection and extreme points}\n\\label{secAprojection}\n\nIn this subsection we prove Theorem~\\ref{aresultsallintro}. Unless otherwise stated, $G$ is a semisimple Lie group as in \\S\\ref{notationliegroups}. 
For $H_0 \\in \\mathfrak a$ and $g \\in G$ define\n\\begin{align*}\nh_{H_{0}, g} : A & \\to \\mathbb{R} \\\\\na & \\mapsto \\langle H_{0}, H(g a) \\rangle \\,.\n\\end{align*}\nWhen $H_0$ corresponds to $\\lambda \\in \\mathfrak a^*$ under the isomorphism given by the Killing form, this is precisely the function $h_{\\lambda , g}$ in Theorem~\\ref{aresultsallintro}. Recall from \\S\\ref{defgenericset} that $\\lambda$ is nonsingular if and only if $H_0$ is, $\\lambda$ lies in the positive chamber of $\\mathfrak a^*$ if and only if $H_0 \\in \\mathfrak a^{+}$, and $\\lambda$ does not lie in a proper subspace spanned by roots if and only if $H_0 \\notin \\bigcup_{L \\in \\mathcal L - \\{ G\\}} \\mathfrak a^L$. Thus the condition on $\\lambda$ in Theorem~\\ref{aresultsallintro} is equivalent to $H_0 \\in \\mathfrak a^{\\operatorname{gen}, +}$.\n\nThe first part of Theorem~\\ref{aresultsallintro}, concerning uniqueness and nondegeneracy, is proved in \\S\\ref{sectionuniqueness}; see Corollary~\\ref{criticalpointshnondegenerate} and Proposition~\\ref{uniquenesscriticalpointcartan}. The second part of Theorem~\\ref{aresultsallintro}, concerning existence, is proved in \\S\\ref{sectionstructurelevelsets} (although the hard work is done in \\S\\ref{sectionexistence}); see Corollary~\\ref{levelsetGcodim} and Proposition~\\ref{regularsetGopen}.\n\n\n\\subsection{Critical points and level sets}\n\nUsing Lemma~\\ref{computationAfirstderivative} we find that the differential of $h_{H_{0}, g}$ at $a \\in A$ is\n\\begin{align} \\label{cartanorbitdifferential}\n\\begin{split}\n(Dh_{H_{0}, g})_{a} : \\mathfrak{a} & \\to \\mathbb{R} \\\\\nH & \\mapsto \\langle H_{0}, \\operatorname{Ad}_{\\kappa(ga)}(H) \\rangle \\,.\n\\end{split}\n\\end{align}\nWhen $G$ is any reductive group with all associated data as in \\S \\ref{notationliegroups}, and $H_{0} \\in \\mathfrak{a}$, define the set $\\mathcal C(G, H_0)$ as follows:\n\\begin{equation}\\label{definitionCreductive}\n\\mathcal{C}(G, H_{0}) = \\{ k \\in K : \\operatorname{Ad}_{k^{-1}} (H_{0}) \\perp \\mathfrak{a} \\} \\,.\n\\end{equation}\nWe allow $G$ to be reductive because we will occasionally need to work with the sets $\\mathcal C(L, H_0)$ for $L \\in \\mathcal L$.\n\nNow let $G$ again be semisimple. The following two lemmas are clear from \\eqref{cartanorbitdifferential} and the definition of $\\mathcal{C}(G, H_{0})$.\n\n\\begin{lemma} \\label{criticalpointsintermsofC}Let $H_{0} \\in \\mathfrak{a}$ and $g \\in G$. Then $a \\in A$ is a critical point of $h_{H_{0}, g}$ if and only if $\\kappa(ga) \\in \\mathcal{C}(G, H_{0})$. \\hfill $\\square$\\end{lemma}\n\n\\begin{lemma}\\label{levelsetszerocritpoint}The set of $k \\in K$ for which $1$ is a critical point of $h_{H_0, k}$ equals $\\mathcal C(G, H_0)$. More generally, the set of $k \\in K$ for which $a \\in A$ is a critical point of $h_{H_0, k}$ equals $\\kappa(\\mathcal C(G, H_0)a^{-1})$.\\hfill $\\square$\\end{lemma}\n\nIn view of Lemma~\\ref{levelsetszerocritpoint}, we refer to the sets $\\kappa(\\mathcal C(G, H_0)a^{-1})$ as level sets. This will be fully justified after we show uniqueness of critical points in Proposition~\\ref{uniquenesscriticalpointcartan}.\n\nBy decomposing $g$ in the Iwasawa decomposition we see that\n\\begin{equation}\\label{heightfunctionreducetokpart}\nh_{H_{0}, g}(a) = \\langle H_{0}, H(g) \\rangle + h_{H_{0}, \\kappa(g)}(a) \\,. 
\n\\end{equation}\nThe first term in the right hand side being a mere constant, we see that the critical points of $h_{H_0, g}$ coincide with those of $h_{H_0, \\kappa(g)}$. It therefore suffices to understand the critical points of $h_{H_{0}, k}$ for $k \\in K$.\n\n\n\\begin{example} When $G = \\operatorname{PSL}_2(\\mathbb R)$ it is clear from the geometric picture in the introduction that a critical point of $h_{H_{0}, g}$ is a point $a \\in A$ such that the tangent line to $gAi$ at $gai$ is horizontal. This is the content of the above Lemma~\\ref{criticalpointsintermsofC} in this case. The set $\\mathcal{C}(G, H_{0})$ consists of the two elements\n\\begin{align*}\nc_{\\pm} = \\begin{pmatrix}\n\\cos(\\pi \/4) & \\pm \\sin(\\pi \/4) \\\\\n\\mp \\sin(\\pi \/4) & \\cos(\\pi \/4)\n\\end{pmatrix}\n\\end{align*}\ncorresponding to the two horizontal directions. See Figure~\\ref{sltwocpicture} for a picture.\n\\end{example}\n\n\\begin{figure}[h!]\n\\begin{center}\n\\captionsetup{justification=centering}\n\\includegraphics[scale=1,angle=0]{sl2_c.pdf}\n\\caption{Geodesics corresponding to an element $k_1 \\in \\mathcal C(G, H_0)$, and to an element $k_2$ that does not lie in $\\mathcal C(G, H_0)$.}\n\\label{sltwocpicture}\n\\end{center}\n\\end{figure}\n\n\\begin{example}\\label{examplemidpointdepends} Let $G = \\operatorname{SL}_3(\\mathbb R)$. Take elements $H_0, H_0' \\in \\mathfrak a$ that are not proportional, and take $k\\in K$. We claim that $a \\in A$ can never be a critical point of both $h_{H_0, k}$ and $h_{H_0', k}$. Indeed, replacing $k$ by $\\kappa(ka)$ we may assume that $a = 0$, and that $k \\in \\mathcal C(G, H_0) \\cap \\mathcal C(G, H_0')$. That is, the symmetric matrices $\\operatorname{Ad}_{k^{-1}}(H_0)$ and $\\operatorname{Ad}_{k^{-1}}(H_0')$ have zeros on the diagonal. Because $\\mathfrak a$ is $2$-dimensional the same is then true for any $H_0'' \\in \\mathfrak a$. Take now $H_0'' = H_0^2$. The matrix $\\operatorname{Ad}_{k^{-1}}(H_0'') = \\operatorname{Ad}_{k^{-1}}(H_0)\\operatorname{Ad}_{k^{-1}}(H_0)^T$ has zeros on the diagonal on the one hand, but its trace is the sum of squares of the entries of $H_0$ on the other hand. This is a contradiction.\\end{example}\n\n\\subsection{Existence of critical points}\n\n\\label{sectionexistence}\n\nWe want to show that there exists $k \\in K$ with the property that $h_{H_0, k}$ has a critical point. Equivalently, in view of Lemma~\\ref{levelsetszerocritpoint}, that the set $\\mathcal C(G, H_0)$ is nonempty. Before proving that this is indeed the case for generic $H_0 \\in\\mathfrak a$, we need a negative result.\n\nWhen $k \\in K$ centralizes a nonzero subspace $V \\subset \\mathfrak a$, then $a \\mapsto H(ka)$ grows linearly in the directions of $V$. In particular, for $h_{H_0, k}$ to have a critical point, $H_0$ must be orthogonal to $V$. Those critical points behave badly, and this is the reason to impose that $H_0 \\notin \\bigcup_{L \\in \\mathcal L - \\{ G\\}} \\mathfrak a^{L}$ in Theorem~\\ref{aresultsallintro}. We make this more precise in Lemma~\\ref{mLnocriticalpoints}.\n\n\\begin{lemma} \\label{kpartofmL}When $L \\in \\mathcal{L}$ and $m \\in M'$, we have $\\kappa(mL) \\subset mL$.\\end{lemma}\n\n\\begin{proof}\nIt follows from \\cite[Proposition 7.25, Proposition 7.31]{Knapp2002} that when $L' \\subset G$ is a semistandard Levi subgroup, one has $\\kappa(L') \\subset L'$. 
We may apply this to $L' = m L m^{-1}$.\n\\end{proof}\n\n\\begin{proposition} \\label{mLnocriticalpoints}When $H_{0} \\notin \\bigcup_{L \\in \\mathcal L - \\{ G\\}}\\mathfrak{a}^{L}$ and $k \\in \\bigcup_{L \\in \\mathcal L - \\{ G\\}} M' L$, the function $h_{H_{0}, k}$ has no critical point.\n\\end{proposition}\n\n\\begin{proof}Assume $a \\in A$ is a critical point of $h_{H_{0}, k}$, and let $m \\in M'$ and $L \\in \\mathcal L$ be such that $k \\in mL$. By Lemma~\\ref{kpartofmL} we also have $\\kappa(ka) \\in mL$. From \\eqref{cartanorbitdifferential}, letting $H$ vary in $\\mathfrak a_L$, we see that $H_0 \\perp \\operatorname{Ad}_m(\\mathfrak a_L) = \\mathfrak a_{\\operatorname{Ad}_m(L)}$. That is, $H_0 \\in \\mathfrak a^{\\operatorname{Ad}_m(L)}$, which contradicts our assumption.\n\\end{proof}\n\nAn equivalent formulation of Proposition~\\ref{mLnocriticalpoints} is the following.\n\n\\begin{lemma}\\label{CdoesnotmeetML} When $H_{0} \\notin \\bigcup_{L \\in \\mathcal L - \\{ G\\} }\\mathfrak{a}^{L}$, the set $\\mathcal{C}(G, H_{0})$ does not meet the set $\\bigcup_{L \\in \\mathcal L - \\{ G\\}} M' L$.\\end{lemma}\n\n\\begin{proof}\nThe elements $k \\in \\mathcal{C}(G, H_{0})$ have the property that $h_{H_0, k}$ has a critical point, namely $1$, so by Proposition~\\ref{mLnocriticalpoints} no such $k$ can lie in $\\bigcup_{L \\in \\mathcal L - \\{ G\\}} M' L$.\n\\end{proof}\n\nWhen $G$ is reductive, the set $\\mathcal C(G, H_0)$ is trivially empty when $H_{0}$ has a component along $Z(\\mathfrak{g})$. That is, when $H_0 \\notin \\mathfrak g_{ss}$. In particular, when $L$ is a semistandard Levi subgroup of $G$, the set $\\mathcal{C}(L, H_{0})$ is empty if $H_{0} \\notin (\\mathfrak{a} \\cap \\mathfrak{l}_{ss}) = \\mathfrak{a}^{L}$. To study the sets $\\mathcal{C}(G, H_{0})$ with $G$ reductive and $H_{0} \\in \\mathfrak{a} \\cap \\mathfrak{g}_{ss}$, consider the map\n\\begin{align}\\label{definitionfmapC}\n\\begin{split}\nf_{G, H_{0}} : K & \\to \\mathfrak{a} \\cap \\mathfrak{g}_{ss} \\\\\nk & \\mapsto E_{\\mathfrak{a}} (\\operatorname{Ad}_{k^{-1}}(H_{0})) \\,.\n\\end{split}\n\\end{align}\nBy definition, $\\mathcal{C}(G, H_{0}) = f_{G, H_{0}}^{-1}(0)$. By \\eqref{derivativeAdinverse}, the differential of $f_{G, H_{0}}$ at $k \\in K$ is given by\n\\begin{align} \\label{differentialf}\n\\begin{split}\n(Df_{G, H_{0}})_{k} : \\mathfrak{k} & \\to \\mathfrak{a} \\cap \\mathfrak{g}_{ss} \\\\\nX & \\mapsto - E_{\\mathfrak{a}} ([X, \\operatorname{Ad}_{k^{-1}}(H_{0})] ) \\,.\n\\end{split}\n\\end{align}\nFor $G$ semisimple, we want to prove that the sets $\\mathcal{C}(G, H_{0})$ are generically nonempty. To this end, define\n\\begin{align*}\ng_{G, H_{0}} : K & \\to \\mathbb{R}_{\\geq 0} \\\\\nk & \\mapsto \\lVert f_{G, H_{0}}(k) \\rVert^{2} \\,.\n\\end{align*}\nWe have $\\mathcal{C}(G, H_{0}) = g_{G, H_{0}}^{-1}(0)$. Because $g_{G, H_{0}}$ is a continuous function on a compact set, it attains a minimum. Our aim is to show that the minima of $g_{G, H_{0}}$ satisfy $g_{G, H_{0}}(k) = 0$; it will then follow that $\\mathcal{C}(G, H_{0})$ is nonempty.\n\nUsing Leibniz's rule and \\eqref{differentialf}, the differential of $g_{G, H_0}$ at $k \\in K$ is given by\n\\begin{align}\n(Dg_{G, H_{0}})_{k}(X) & = 2 \\langle (Df_{G, H_{0}})_{k}(X), f_{G, H_{0}}(k) \\rangle \\nonumber \\\\\n& = -2 \\langle [X, \\operatorname{Ad}_{k^{-1}}(H_{0})], f_{G, H_{0}}(k) \\rangle \\label{differentialg} \\,.\n\\end{align}\n\n\\begin{lemma} \\label{criticalsetsubmersion} Let $G$ be reductive and $H_{0} \\in \\mathfrak{a}^{\\operatorname{reg}} \\cap \\mathfrak{g}_{ss}$. 
Then $f_{G, H_{0}}$ is a submersion at points $k \\notin \\bigcup_{L \\in \\mathcal L - \\{ G\\}} M' L$.\n\\end{lemma}\n\n\\begin{proof}Suppose $(Df_{G, H_{0}})_{k}$ is not surjective for some $k \\in K$. Then there exists a nonzero $H \\in \\mathfrak{a} \\cap \\mathfrak{g}_{ss}$ such that $\\langle [X, \\operatorname{Ad}_{k^{-1}}(H_{0})] , H \\rangle = 0$ for all $X \\in \\mathfrak{k}$. Using associativity of the Killing form, this is equivalent to\n\\[ [\\operatorname{Ad}_{k^{-1}}(H_{0}), H] \\perp \\mathfrak{k} \\,. \\]\nBut $[\\operatorname{Ad}_{k^{-1}}(H_{0}), H] \\in [\\mathfrak{p}, \\mathfrak{p}] \\subset \\mathfrak{k}$, so that $[\\operatorname{Ad}_{k^{-1}}(H_{0}), H] = 0$. That is, $\\operatorname{Ad}_{k}(H) \\in Z_{\\mathfrak{p}}(H_{0})$. Because $H_{0}$ is regular, this implies $\\operatorname{Ad}_{k}(H) \\in \\mathfrak{a}$ (see \\cite[Lemma~6.50]{Knapp2002}), and Lemma~\\ref{conjugatetoAimplieslevi} then implies that $k \\in M' L$ with $L = Z_{G}(H)$. Because $H \\notin Z(\\mathfrak{g})$, this is a proper semistandard Levi subgroup. This proves the first statement.\n\\end{proof}\n\n\\begin{corollary} \\label{maintermgenericsubmersionatcriticalsetreductive} Let $G$ be reductive and $H_{0} \\in \\mathfrak{a}^{\\operatorname{gen}} \\cap \\mathfrak{g}_{ss}$. Then $f_{G, H_0}$ is a submersion at the points of $\\mathcal{C}(G, H_{0})$.\\end{corollary}\n\n\\begin{proof}The set $\\mathcal C(G, H_0)$ does not meet any of the sets $M' L$ by Lemma~\\ref{CdoesnotmeetML}, and therefore $f_{G, H_0}$ is a submersion at points of $\\mathcal C(G, H_0)$, by Lemma~\\ref{criticalsetsubmersion}.\n\\end{proof}\n\n\\begin{lemma} \\label{criticalpointgequations}Let $H_{0} \\in \\mathfrak{a}^{\\operatorname{reg}}$. Define $\\mathcal{D}_{H_{0}} = \\{ k \\in K : k \\in Z_{G}(f_{G, H_{0}}(k)) \\}$. The following hold:\n\\begin{enumerate}[label = (\\roman*)]\n\\item The function $g_{G, H_{0}}$ is right invariant under $M'$.\n\\item The set of critical points of $g_{G, H_{0}}$ is $\\mathcal{D}_{H_{0}} M'$.\n\\item Let $k \\in \\mathcal{D}_{H_{0}}$ and define $L = Z_{G}(f_{G, H_{0}}(k))$. Write $H_{0} = H_{L} + H^{L}$ with $H_{L} \\in \\mathfrak{a}_{L}$ and $H^{L} \\in \\mathfrak{a}^{L}$. Then $f_{G, H_{0}}(k) = H_{L}$ and $k \\in \\mathcal{C}(L, H^{L})$.\n\\end{enumerate}\n\\end{lemma}\n\n\\begin{proof}\n\\begin{enumerate}[label = (\\roman*)]\n\\item We show that $f_{G, H_{0}}(km) = \\operatorname{Ad}_{m^{-1}}(f_{G, H_{0}}(k))$ for $m \\in M'$. Because the adjoint action of $M'$ on $\\mathfrak{a}$ is isometric, it will then follow that $g_{G, H_{0}}$ is right invariant under $M'$.\n\nBecause $M'$ normalizes $\\mathfrak{a}$, it normalizes the orthogonal complement $\\mathfrak{a}^{\\perp}$, so that the adjoint action of $M'$ commutes with $E_{\\mathfrak{a}}$. This implies\n\\begin{align*}\nf_{G, H_{0}}(km) & = E_{\\mathfrak{a}}(\\operatorname{Ad}_{m^{-1}k^{-1}}(H_{0})) \\\\\n& = \\operatorname{Ad}_{m^{-1}} (E_{\\mathfrak{a}}( \\operatorname{Ad}_{k^{-1}}(H_{0}))) \\\\\n& = \\operatorname{Ad}_{m^{-1}}(f_{G, H_{0}}(k)) \\,.\n\\end{align*}\n\\item Let $k$ be a critical point of $g$. Using \\eqref{differentialg} we have for all $X \\in \\mathfrak{k}$,\n\\begin{align*}\n0 = (Dg_{G, H_{0}})_{k} & = -2 \\langle E_{\\mathfrak{a}}([X, \\operatorname{Ad}_{k^{-1}}(H_{0})]), f_{G, H_{0}}(k) \\rangle \\,.\n\\end{align*}\nThat is,\n\\[ \\langle \\mathfrak{k}, [\\operatorname{Ad}_{k^{-1}}(H_{0}), f_{G, H_{0}}(k)] \\rangle = 0 \\,. 
\\]\nBut $[\\operatorname{Ad}_{k^{-1}}(H_{0}), f_{G, H_{0}}(k)] \\in [\\mathfrak{p}, \\mathfrak{p}] = \\mathfrak{k}$, so that this must be zero. Because $H_{0} \\in \\mathfrak{a}^{\\operatorname{reg}}$, this implies that $f_{G, H_{0}}(k) \\in \\operatorname{Ad}_{k^{-1}}(\\mathfrak{a})$. By Lemma~\\ref{conjugatetoAimplieslevi} it follows that $k \\in M' Z_{G}(f_{G, H_{0}}(k))$. Replacing $k$ by an appropriate right translate under $M'$, this becomes $k \\in Z_{G}(f_{G, H_{0}}(k))$. That is, $k \\in \\mathcal{D}_{H_{0}}$.\n\\item Take $k \\in \\mathcal{D}_{H_{0}}$. By definition of $L$ we have $f_{G, H_{0}}(k) \\in \\mathfrak{a}_{L}$. Writing $H_{0} = H_{L} + H^{L}$ and using that $k \\in L$, we have $f_{G, H_{0}}(k) = E_{\\mathfrak{a}}(\\operatorname{Ad}_{k^{-1}}(H_{0})) = H_{L} + E_{\\mathfrak{a}}(\\operatorname{Ad}_{k^{-1}}(H^{L}))$. If we prove that $E_{\\mathfrak{a}}(\\operatorname{Ad}_{k^{-1}}(H^{L})) = 0$, the two statements follow. On the one hand $f_{G, H_{0}}(k) \\in \\mathfrak{a}_{L}$ implies $E_{\\mathfrak{a}}(\\operatorname{Ad}_{k^{-1}}(H^{L})) = f_{G, H_{0}}(k) - H_{L} \\in \\mathfrak{a}_{L}$. On the other hand, $H^{L} \\perp \\mathfrak{a}_{L}$ and $k \\in L$ implies $\\operatorname{Ad}_{k^{-1}}(H^{L}) \\perp \\mathfrak{a}_{L}$, and hence $E_{\\mathfrak{a}}(\\operatorname{Ad}_{k^{-1}}(H^{L})) \\perp \\mathfrak{a}_{L}$. It follows that $E_{\\mathfrak{a}}(\\operatorname{Ad}_{k^{-1}}(H^{L})) \\in \\mathfrak{a}_{L} \\cap \\mathfrak{a}^{L} = \\{ 0 \\}$. \\qedhere\n\\end{enumerate}\n\\end{proof}\n\n\\begin{lemma} Let $H_{0} \\in \\mathfrak{a}^{\\operatorname{reg}}$, $\\mathcal{D}_{H_{0}}$ be as in Lemma~\\ref{criticalpointgequations} and $k \\in \\mathcal{D}_{H_{0}}$ a critical point of $g_{G, H_{0}}$. Define $L$, $H_{L}$ and $H^{L}$ as in the same lemma. Then the Hessian of $g_{G, H_{0}}$ at $k$ satisfies\n\\begin{align}\n\\label{existenceHessiansimplified}\n\\begin{split}\n\\frac12 (\\operatorname{Hess}_{k} g_{G, H_0}) (X, X) & = - \\left\\lVert [X, H_{L}] \\right\\rVert^{2} - \\langle [X, \\operatorname{Ad}_{k^{-1}}(H^{L})], [X, H_{L}] \\rangle \\\\\n& \\mathrel{\\phantom{=}} + \\left\\lVert E_{\\mathfrak{a}}([ X, \\operatorname{Ad}_{k^{-1}}(H^{L})]) \\right\\rVert^{2}\n\\end{split}\n\\end{align}\nfor all $X \\in \\mathfrak{k}$.\n\\end{lemma}\n\n\\begin{proof}\nStarting from \\eqref{differentialg} and using Leibniz's rule and \\eqref{derivativeAdinverse}, we have that the Hessian of $g$ at the critical point $k$ takes the form\n\\begin{align*}\n\\operatorname{Hess}_{k} g : \\mathfrak{k} \\times \\mathfrak{k} & \\to \\mathbb{R} \\\\\n(X, Y) & \\mapsto 2 \\left \\langle E_{\\mathfrak{a}}( [ Y, [ X, \\operatorname{Ad}_{k^{-1}}(H_{0})]]) , f_{G, H_{0}}(k) \\right\\rangle \\\\\n& \\mathrel{\\phantom{\\mapsto}} + 2 \\left \\langle E_{\\mathfrak{a}}( [ Y, \\operatorname{Ad}_{k^{-1}}(H_{0})]) , E_{\\mathfrak{a}}([ X, \\operatorname{Ad}_{k^{-1}}(H_{0})]) \\right\\rangle \\,.\n\\end{align*}\nAs $f_{G, H_{0}}(k) \\in \\mathfrak{a}$, we may drop the projection $E_{\\mathfrak{a}}$ in the first term. 
Replacing $f_{G, H_{0}}(k)$ by $H_{L}$ and using associativity of the Killing form we obtain\n\\begin{align*}\n& -2 \\left \\langle [ X, \\operatorname{Ad}_{k^{-1}}(H_{0})] , [Y, H_{L}] \\right\\rangle \\\\\n& \\qquad + 2 \\left \\langle E_{\\mathfrak{a}}( [ Y, \\operatorname{Ad}_{k^{-1}}(H_{0})]), E_{\\mathfrak{a}}([ X, \\operatorname{Ad}_{k^{-1}}(H_{0})]) \\right\\rangle \\,.\n\\end{align*}\nThe associated quadratic form on $\\mathfrak{k}$ sends\n\\begin{align*}\nX \\mapsto -2 \\left \\langle [ X, \\operatorname{Ad}_{k^{-1}}(H_{0})] , [X, H_{L}] \\right \\rangle + 2 \\left\\lVert E_{\\mathfrak{a}}([ X, \\operatorname{Ad}_{k^{-1}}(H_{0})]) \\right\\rVert^{2} \\,.\n\\end{align*}\nWriting $H_{0} = H_{L} + H^{L}$ and recalling that $k \\in L$ centralizes $H_{L}$, this becomes\n\\begin{align*}\n& - 2 \\left\\lVert [X, H_{L}] \\right\\rVert^{2} -2 \\langle [X, \\operatorname{Ad}_{k^{-1}}(H^{L})], [X, H_{L}] \\rangle \\\\\n& \\quad + 2 \\left\\lVert E_{\\mathfrak{a}}([ X, H_{L} + \\operatorname{Ad}_{k^{-1}}(H^{L})]) \\right\\rVert^{2} \\,.\n\\end{align*}\nBecause $H_{L} \\in \\mathfrak{a}$, associativity of the Killing form implies $E_{\\mathfrak{a}}([X, H_{L}]) = 0$, so that the third term simplifies and we obtain \\eqref{existenceHessiansimplified}.\n\\end{proof}\n\n\\begin{lemma} \\label{criticalpointgminimum}Let $H_{0} \\in \\mathfrak{a}^{\\operatorname{gen}}$ and $k \\in K$ be a critical point of $g_{G, H_{0}}$ with positive semidefinite Hessian. Then $g_{G, H_{0}}(k) = 0$. \\end{lemma}\n\n\\begin{proof}The proof is by contradiction. Assuming that $f_{G, H_{0}}(k) \\neq 0$, we will construct a direction in which the Hessian of $g_{G, H_{0}}$ is negative definite at $k$.\n\nLet $\\mathcal{D}_{H_{0}}$ be as in Lemma~\\ref{criticalpointgequations}. By Lemma~\\ref{criticalpointgequations} and right invariance of $g$ under $M'$, we may assume that $k \\in \\mathcal{D}_{H_{0}}$. Let $L = Z_{G}(f_{G, H_{0}}(k))$ and let $H_{L}$ and $H^{L}$ be as in Lemma~\\ref{criticalpointgequations}, so that $f_{G, H_{0}}(k) = H_{L}$ and $k \\in \\mathcal{C}(L, H^{L})$. \nSuppose that $H_L \\neq 0$. We will construct $X \\in \\mathfrak{k}$ for which the third term in \\eqref{existenceHessiansimplified} is zero, and for which the other terms are nonpositive and not both zero.\n\nSuppose first that $H^{L} \\notin \\mathfrak{a}^{\\operatorname{reg}}$. Then there exists a nonzero element $X \\in \\mathfrak{k}$ with $[X, \\operatorname{Ad}_{k^{-1}}(H^{L})] = 0$. Indeed, if $\\alpha \\in \\Sigma$ is such that $\\alpha(H^{L}) = 0$, take a nonzero $X' \\in (\\mathfrak{g}_{\\alpha} + \\mathfrak{g}_{- \\alpha}) \\cap \\mathfrak{k}$. Then $[X', H^{L}] = 0$, and we can take $X = \\operatorname{Ad}_{k^{-1}}(X')$.\n\nWith this choice of $X$, the second and third terms in \\eqref{existenceHessiansimplified} vanish. We have $[X, H_{L}] = [X, \\operatorname{Ad}_{k^{-1}}(H_{0})] \\neq 0$ by regularity of $H_{0}$, so that\n\\[ (\\operatorname{Hess}_{k} g_{G, H_0}) (X, X) = -2 \\cdot \\left\\lVert [X, H_{L}] \\right\\rVert^{2} < 0 \\,. \\]\nThis is a contradiction.\n\nSuppose now that $H^{L} \\in \\mathfrak{a}^{\\operatorname{reg}}$.\nWe first show that there exists $X \\in \\mathfrak{k}$ for which the second term in \\eqref{existenceHessiansimplified} is strictly negative. 
After that, we will modify $X$ in such a way that the second term stays the same and such that the third becomes zero.\n\nThe second term in \\eqref{existenceHessiansimplified} equals\n\\[ - \\langle [\\operatorname{Ad}_{k}(X), H^{L}], [\\operatorname{Ad}_{k}(X), H_{L}] \\rangle \\,. \\]\nWrite $\\operatorname{Ad}_{k}(X)$ in the restricted root space decomposition \\eqref{rootspacedecomp} as\n\\[ X = \\sum_{\\alpha \\in \\Sigma} X_{\\alpha} \\,. \\]\nThen the above equals\n\\begin{equation} \\label{secondtermnonemptyproofroots}\n- \\sum_{\\alpha \\in \\Sigma} \\alpha(H^{L}) \\alpha(H_{L}) \\lVert X_{\\alpha} \\rVert_{\\theta}^{2} \\,.\n\\end{equation}\nBecause $H_{L} \\neq 0$ by assumption, there exists $\\beta \\in \\Sigma$ such that $\\beta(H_{L}) \\neq 0$. Because $H^{L} \\in \\mathfrak{a}^{\\operatorname{reg}}$, we have $\\beta(H^{L}) \\beta(H_{L}) \\neq 0$. Using \\cite[Corollary 2.24]{Knapp2002} we have\n\\[ 0 = \\langle H^{L}, H_{L} \\rangle = \\sum_{\\alpha \\in \\Sigma} \\alpha(H^{L}) \\alpha(H_{L}) \\,, \\]\nso that there is at least one strictly positive term in this sum. Let $\\alpha \\in \\Sigma$ be such that $\\alpha(H^{L}) \\alpha(H_{L}) > 0$. Take now a nonzero $X \\in \\mathfrak{k} \\cap (\\mathfrak{g}_{\\alpha} + \\mathfrak{g}_{- \\alpha})$, then \\eqref{secondtermnonemptyproofroots} is strictly negative. That is, the second term in \\eqref{existenceHessiansimplified} is strictly negative.\n\nObserve that when $Y \\in \\mathfrak{k} \\cap \\mathfrak{l}$, adding $Y$ to $X$ does not change the first two terms of \\eqref{existenceHessiansimplified}, because $[Y, H_{L}] = 0$. Thus in order to make the third term in \\eqref{existenceHessiansimplified} zero and obtain a contradiction, it suffices to find $Y \\in \\mathfrak{k} \\cap \\mathfrak{l}$ with\n\\[ E_{\\mathfrak{a}}([Y, \\operatorname{Ad}_{k^{-1}}(H^{L})]) = E_{\\mathfrak{a}}([X, \\operatorname{Ad}_{k^{-1}}(H^{L})]) \\,, \\]\nand it will follow that\n\\[ (\\operatorname{Hess}_{k} g_{G, H_0}) (X - Y, X - Y) = -2 \\left\\lVert [X, H_{L}] \\right\\rVert^{2} - 2 \\langle [X, \\operatorname{Ad}_{k^{-1}}(H^{L})], [X, H_{L}] \\rangle < 0 \\,. \\]\nLet $L' \\subset L$ be the smallest semistandard Levi subgroup with the property that $k \\in L' M'$, say $k = \\ell'm$. From $k \\in \\mathcal{C}(L, H^{L})$ it follows that $\\operatorname{Ad}_{k^{-1}} (H^{L}) \\perp \\mathfrak{a}$, and therefore $H^{L} \\in \\mathfrak{a}^{L'}$. We also have that $[\\mathfrak{k}, \\operatorname{Ad}_{k^{-1}}(H^{L})] \\perp \\operatorname{Ad}_{k^{-1}}(\\mathfrak{a}) \\supset \\operatorname{Ad}_{m}^{-1}(\\mathfrak{a}_{L'})$, so that $E_{\\mathfrak{a}}([\\mathfrak{k}, \\operatorname{Ad}_{k^{-1}}(H^{L})]) \\subset \\operatorname{Ad}_{m}^{-1}(\\mathfrak{a}^{L'})$. Hence it suffices to show that the map\n\\begin{align} \\label{nonemptycmapsurjective}\n\\begin{split}\n\\mathfrak{k} \\cap \\mathfrak{l} & \\to \\operatorname{Ad}_{m}^{-1}(\\mathfrak{a}^{L'}) \\\\\nY & \\mapsto E_{\\mathfrak{a}}([Y, \\operatorname{Ad}_{k^{-1}}(H^{L})])\n\\end{split}\n\\end{align}\nis surjective. Its restriction to $\\mathfrak{k} \\cap \\mathfrak{l}'$ is precisely $-\\operatorname{Ad}_{m}^{-1} \\circ (Df_{L', H^{L}})_{\\ell'}$ (compare \\eqref{differentialf}). We seek to apply Lemma~\\ref{criticalsetsubmersion} to $L'$.\n\nBy assumption we have $H^{L} \\in \\mathfrak{a}^{\\operatorname{reg}}$, so that $H^{L} \\in (\\mathfrak{a}^{L'})^{\\operatorname{reg}}$. By minimality of $L'$, we have $\\ell' \\notin \\bigcup_{L'' \\subsetneq L'} M' L''$. 
Therefore Lemma~\\ref{criticalsetsubmersion} shows that $(Df_{L', H^{L}})_{\\ell'}$ is surjective, so that the map \\eqref{nonemptycmapsurjective} is surjective and the conclusion follows.\n\\end{proof}\n\n\\begin{corollary} \\label{Cnonempty}When $H_{0} \\in \\mathfrak{a}^{\\operatorname{gen}}$ we have $\\mathcal{C}(G, H_{0}) \\neq \\varnothing$.\\end{corollary}\n\n\\begin{proof}\nThe continuous function $g_{G, H_{0}} : K \\to \\mathbb{R}_{\\geq 0}$ attains a minimum, which must be $0$ by Lemma~\\ref{criticalpointgminimum}.\n\\end{proof}\n\n\n\n\\subsection{Uniqueness of critical points}\n\n\\label{sectionuniqueness}\n\n\\begin{lemma} \\label{hessianquadformnegativedefinite} Let $H_{0} \\in \\mathfrak{a}^{+}$ and $k \\in K - \\bigcup_{L\\in \\mathcal L - \\{ G\\}} M' L$. Then the quadratic form on $\\mathfrak{a}$ defined by\n\\[ H \\mapsto \\langle H_0, [ \\operatorname{Ad}_{k}(H), E_{\\mathfrak n} ( \\operatorname{Ad}_{k}(H) ) ] \\rangle \\]\nis negative definite. In particular, it is nondegenerate.\n\\end{lemma}\n\n\\begin{proof}Take a nonzero $H \\in \\mathfrak{a}$, and write the element $\\operatorname{Ad}_{k}(H) \\in \\mathfrak{p}$ in the restricted root space decomposition \\eqref{rootspacedecomp} as $\\sum_{\\alpha \\in \\Sigma \\cup \\{ 0 \\} } X_{\\alpha}$. Then $\\theta X_{-\\alpha} = -X_\\alpha$. By Lemma~\\ref{computationAsecondderivative}, the quadratic form evaluated at $H$ equals\n\\[ \\sum_{\\alpha \\in \\Sigma^{+}} \\langle -X_{\\alpha}, 2 X_{\\alpha} \\rangle_{\\theta} \\langle H_\\alpha, H_0 \\rangle = -2 \\sum_{\\alpha \\in \\Sigma^{+}} \\lVert X_{\\alpha} \\rVert_{\\theta}^2 \\cdot \\alpha(H_0) \\,, \\]\nwhich is nonpositive because $H_{0}$ lies in the positive Weyl chamber. Therefore the quadratic form is negative semidefinite. The fact that $k \\notin \\bigcup_{L \\in \\mathcal L - \\{ G\\}} M' L$ implies $\\operatorname{Ad}_{k}(H) \\notin \\mathfrak{a}$ (Lemma~\\ref{conjugatetoAimplieslevi}). So at least one $X_{\\alpha}$ is nonzero, and the statement follows.\n\\end{proof}\n\n\\begin{remark}\\label{hessiancriticalsometimesdef}When $H_{0}$ lies in a mixed Weyl chamber, the quadratic form in Lemma~\\ref{hessianquadformnegativedefinite} can sometimes be degenerate. This happens already for $G = \\operatorname{SL}_{3}(\\mathbb{R})$, even for completely generic values of $H_0$, and there appears to be no structure in the bad pairs $(H_0, k)$.\\end{remark}\n\n\\begin{proposition} \\label{surjectivityorbitalintegralH} Let $H_{0} \\in \\mathfrak{a}^{+}$ and $k \\in K - \\bigcup_{L \\in \\mathcal L - \\{ G\\}} M'L$. Then the Hessian of $h_{H_{0}, k}$ is everywhere negative definite.\\end{proposition}\n\n\\begin{proof}Take $a \\in A$ and define $k_{1} = \\kappa(k a)$. From Lemma~\\ref{kpartofmL} it follows that $k_{1} \\notin \\bigcup_{L\\in \\mathcal L - \\{ G\\}} M' L$. 
Using Lemma~\\ref{computationAsecondderivative} we have for $H \\in \\mathfrak{a}$ that\n\\begin{align*}\n(\\operatorname{Hess}_{a} h_{H_{0}, k})(H) & = \\langle H_{0}, [\\operatorname{Ad}_{k_{1}}(H), E_{\\mathfrak{n}}(\\operatorname{Ad}_{k_{1}}(H))] \\rangle \\\\\n& = \\langle [H_{0}, \\operatorname{Ad}_{k_{1}}(H)], E_{\\mathfrak n}(\\operatorname{Ad}_{k_{1}}(H)) \\rangle \\,.\n\\end{align*}\nBy Lemma~\\ref{hessianquadformnegativedefinite}, this quadratic form is negative definite.\n\\end{proof}\n\n\\begin{corollary} \\label{criticalpointshnondegenerate}When $H_{0} \\in \\mathfrak{a}^{\\operatorname{gen}, +}$ and $k \\in K$, the critical points of $h_{H_{0}, k}$ are nondegenerate.\\end{corollary}\n\n\\begin{proof}If $h_{H_{0}, k}$ has a critical point, then $k \\notin \\bigcup_{L \\in \\mathcal L - \\{ G\\}} M'L$ by Proposition~\\ref{mLnocriticalpoints}. The Hessian of $h_{H_{0}, k}$ is then everywhere nondegenerate by Proposition~\\ref{surjectivityorbitalintegralH}. In particular, its critical points are nondegenerate.\n\\end{proof}\n\n\\begin{proposition} \\label{uniquenesscriticalpointcartan}When $H_{0} \\in \\mathfrak{a}^{\\operatorname{gen}, +}$ and $k \\in K$, the function $h_{H_{0}, k}$ has at most one critical point, which maximizes $h_{H_{0}, k}$.\\end{proposition}\n\n\\begin{proof}By Proposition~\\ref{mLnocriticalpoints}, the existence of one critical point implies that $k \\notin \\bigcup_{L \\in \\mathcal L - \\{ G\\}} M'L$. Proposition~\\ref{surjectivityorbitalintegralH} then implies that the Hessian of $h_{H_{0}, k}$ is everywhere negative definite, so that there can be no other critical point and any critical point maximizes $h_{H_{0}, k}$.\n\\end{proof}\n\n\\subsection{Structure of the level sets}\n\n\\label{sectionstructurelevelsets}\n\nWe have all the necessary information to describe the set $\\mathcal C(G, H_0)$.\n\n\\begin{proposition} \\label{genericcriticalsetstructuresemisimple} When $H_{0} \\in \\mathfrak{a}^{\\operatorname{gen}}$, the set $\\mathcal{C}(G, H_{0})$ is a smooth manifold of dimension $\\dim K - \\dim A$ that varies smoothly with $H_{0}$.\n\\end{proposition}\n\n\\begin{proof} By Corollary~\\ref{Cnonempty}, this set is nonempty, and by Lemma~\\ref{maintermgenericsubmersionatcriticalsetreductive}, $f_{G, H_{0}}$ is a submersion at the points of $\\mathcal{C}(G, H_{0})$. Thus $\\mathcal{C}(G, H_{0})$ is a smooth manifold of dimension $\\dim K - \\dim A$, and from the local normal form for submersions it follows that it admits local parametrizations that depend smoothly on $H_{0}$.\n\\end{proof}\n\n\\begin{lemma}\\label{CinvariantbyM} When $H_0 \\in \\mathfrak a$, the set $\\mathcal{C}(G, H_{0})$ is invariant on both sides by $M$.\n\\end{lemma}\n\n\\begin{proof}This is clear from the definition \\eqref{definitionCreductive}, using $\\operatorname{Ad}_G$-invariance of the Killing form.\n\\end{proof}\n\n\\begin{remark}Many arguments simplify when $\\mathcal C(G, H_0)$ is but a finite union of cosets of $M$ in $K$, effectively reducing arguments to the case of $G = \\operatorname{SL}_2(\\mathbb R)$. However, the set $\\mathcal C(G, H_0)$ is usually much bigger; see Proposition~\\ref{Klargecharacterization}.\\end{remark}\n\n\\begin{corollary}\\label{levelsetGcodim}When $H_{0} \\in \\mathfrak{a}^{\\operatorname{gen}}$ and $a \\in A$, the set of $g \\in G$ for which $a$ is a critical point of $h_{H_0, g}$ is a smooth submanifold of codimension $\\dim(A)$. 
It is stable on the left by $NAM$ and on the right by $M$.\\end{corollary}\n\n\\begin{proof}From \\eqref{heightfunctionreducetokpart} we saw that $h_{H_0, g}$ and $h_{H_0, \\kappa(g)}$ have the same critical points. Therefore by Lemma~\\ref{levelsetszerocritpoint} the set in question is equal to $NA \\cdot \\kappa(\\mathcal C(G, H_0)a^{-1})$, which has codimension $\\dim(A)$ in $G = NAK$ by Proposition~\\ref{genericcriticalsetstructuresemisimple}. It is stable on the left by $NA$. It is stable on the left by $M$ because $M$ normalizes $NA$, which implies that $\\kappa(\\cdot)$ commutes with left multiplication by $M$, and $\\mathcal C(G, H_0)$ is stable on the left by $M$ by Lemma~\\ref{CinvariantbyM}. Similarly, invariance on the right follows from Lemma~\\ref{CinvariantbyM}, using that $\\kappa(\\cdot)$ commutes with right multiplication by $K$.\n\\end{proof}\n\nWhen $H_0 \\in \\mathfrak a$, define $\\mathcal{R}_{H_{0}} \\subset G$ to be the set of elements $g$ for which the function $h_{H_{0}, g}$ has a critical point.\n\n\\begin{proposition} \\label{regularsetGopen} When $H_{0} \\in \\mathfrak{a}^{\\operatorname{gen}, +}$, the set $\\mathcal{R}_{H_0} \\subset G$ is open and stable on the left by $NAM$.\\end{proposition}\n\n\\begin{proof} Take $g_0 \\in \\mathcal{R}_{H_0}$. By Proposition~\\ref{criticalpointshnondegenerate}, the critical point of $h_{H_{0}, g_0}$ is nondegenerate, call it $\\xi$. We may reformulate this by saying that the map\n\\begin{align*}\nA & \\to \\mathfrak{a}^{*} \\\\\na & \\mapsto (D h_{H_{0}, g_0})_{a}\n\\end{align*}\nhas invertible differential at the level set above $0$, which is the singleton $\\{\\xi \\}$. By the implicit function theorem applied to this smooth map with parameter $g \\in G$, it follows that it has a zero for all $g$ in a neighborhood of $g_0$, which is to say that $\\mathcal{R}_{H_0}$ is open in $G$. The stability under $NAM$ follows from Corollary~\\ref{levelsetGcodim}.\n\\end{proof}\n\n\\begin{remark} \\label{atonegativechamber} Let $\\rho : G \\to \\operatorname{SL}(V)$ be any representation with finite kernel, and let $\\Omega_{\\mathcal S^0} \\subset G$ be the set defined in \\S\\ref{npartcanonicalpartitions}. It is open and dense by Proposition~\\ref{omegaopendense}. It is reasonable to expect that for $g \\in \\Omega_{\\mathcal S^0}$ and $\\lambda \\in (\\mathfrak a^*)^+$ we have $\\lambda(H(ga)) \\to - \\infty$ as $a \\to \\infty$ in $A$. This would then imply that when $H_0 \\in \\mathfrak a^+$ the function $h_{H_0, g}$ has a critical point for all $g \\in \\Omega_{S^0}$, in particular, for all $g$ in an open dense subset of $G$ that does not depend on $H_0$. In fact, when $H_0 \\in \\mathfrak a^{\\operatorname{gen}, +}$ it is reasonable to expect that $\\mathcal R_{H_0} = \\Omega_{\\mathcal S^0}$.\n\nThe statement about the limit and the corollary that $\\mathcal R_{H_0} \\supset \\Omega_{\\mathcal S^0}$ are certainly true when $G = \\operatorname{SL}_n(\\mathbb R)$ and $\\rho = \\operatorname{Std}$. Then the entries of $H(ga)$ can be expressed in terms of quotients of sums of subdeterminants (using \\eqref{normprojwedgesingle}), and when $g \\in \\Omega_{\\mathcal S^0}$ those entries are bounded from above and below by constants times the entries of $a$, but arranged in decreasing order. The general case would require a more careful analysis of the weights of the exterior powers of $V$. 
This would in particular give an alternative proof of Corollary~\\ref{Cnonempty}, whose only proof we have now is very technical and little intuitive.\\end{remark}\n\n\\subsection{Some dimension bounds}\n\n\\begin{lemma}\\label{complexrootsbigger}Let $\\mathfrak g$ be a complex simple Lie algebra of rank $r$, $\\mathfrak h$ a Cartan subalgebra with roots $\\Sigma$ and a choice of simple roots $\\Pi$. Let $\\alpha \\in \\Pi$. Then there exist at least $r$ roots $\\beta \\in \\Sigma$ with the property that $\\beta \\geq \\alpha$.\\end{lemma}\n\n\\begin{proof}We will show by induction the following statement: for every $1 \\leq m \\leq r$, there exists a set of simple roots $S$ with $|S| = m$ whose span contains $m$ linearly independent roots $\\beta \\geq \\alpha$. For $m = 1$ we take $S = \\{\\alpha\\}$. Assume we have found $S$ of cardinality $m < r$. Because $\\mathfrak g$ is simple, $\\Sigma$ is irreducible, so there exists a simple root $\\gamma \\in \\Pi - S$ which is not orthogonal to all roots in $S$. Thus there exists a positive root $\\beta \\in \\operatorname{span}(S)$ with $\\beta \\geq \\alpha$ which is not orthogonal to $\\gamma$. Because $\\langle \\beta, \\gamma \\rangle_\\theta \\leq 0$ by \\cite[Lemma 2.51]{Knapp2002}, we must have $\\langle \\beta, \\gamma \\rangle_\\theta < 0$. By \\cite[Proposition 2.48]{Knapp2002} this implies that $\\beta + \\gamma$ is a root. We may now take $S' = S \\cup \\{\\gamma\\}$, whose span contains $m$ linearly independent roots $\\geq \\alpha$ in $\\operatorname{span}(S)$ together with the root $\\beta + \\gamma \\geq \\alpha$. This completes the induction.\n\\end{proof}\n\n\\begin{lemma}\\label{simplewithfewroots}Let $\\mathfrak g$ be a real simple Lie algebra with Cartan decomposition $\\mathfrak k \\oplus \\mathfrak p$, maximal abelian subalgebra $\\mathfrak a \\subset \\mathfrak p$ with restricted roots $\\Sigma \\subset \\mathfrak a^*$ and choice of positive roots $\\Sigma^+$. The following are equivalent:\n\\begin{enumerate}\n\\item $\\mathfrak g$ is compact or isomorphic to $\\mathfrak{sl}_2(\\mathbb R)$.\n\\item $\\sum_{\\alpha \\in \\Sigma^+} m(\\alpha) = \\dim(\\mathfrak a)$.\n\\end{enumerate}\n\\end{lemma}\n\n\\begin{proof}Write $r = \\dim(\\mathfrak a)$.\nNote that $\\Sigma$ spans $\\mathfrak a^*$ so that we always have\n\\[ \\sum_{\\alpha \\in \\Sigma^+} m(\\alpha) \\geq \\#\\Sigma^+ \\geq r \\,. \\]\nIt is clear that equality holds if $\\mathfrak g$ is compact or isomorphic to $\\mathfrak{sl}_2(\\mathbb R)$.\n\nNow let $\\mathfrak g$ be any simple Lie algebra. We will show that equality only holds for the examples stated. There are two cases, coming from the classification of simple real Lie algebras.\n\nSuppose first that the complexification $\\mathfrak g_{\\mathbb C}$ is not simple. By \\cite[Theorem 6.94]{Knapp2002} $\\mathfrak g$ is the restriction of scalars of a simple complex Lie algebra. Let $\\mathfrak h \\supset \\mathfrak a$ be a Cartan subalgebra. It follows as in \\cite[\\S VI.11, p.425]{Knapp2002} that all roots of $\\mathfrak h$ have nonzero restriction to $\\mathfrak a$ (and all restrictions are different) and all restricted roots have multiplicity $2$. It follows that\n\\[ \\sum_{\\alpha \\in \\Sigma^+} m(\\alpha) = 2\\#\\Sigma^+ \\geq 2r \\geq r + 1 \\,, \\]\nbecause $r \\geq 1$, meaning that equality does not hold in this case.\n\nSuppose now that the complexification $\\mathfrak g_{\\mathbb C}$ is simple. Let $\\Pi$ be a system of simple restricted roots of $\\mathfrak a$. 
Let $\\theta$ be a Cartan involution corresponding to $\\mathfrak k \\oplus \\mathfrak p$ and $\\mathfrak h \\subset \\mathfrak g$ a $\\theta$-stable Cartan subalgebra containing $\\mathfrak a$. We may choose an ordering on $\\mathfrak h_{\\mathbb C}^*$ such that the restrictions of positive roots of $\\mathfrak h_{\\mathbb C}$ to $\\mathfrak a$ are either zero or positive. As in \\cite[\\S VI.12, Problem 7]{Knapp2002}, all simple $\\alpha \\in \\Pi$ are restrictions of simple roots of $\\mathfrak h_{\\mathbb C}$. Let $\\alpha_1, \\ldots, \\alpha_r$ be simple lifts to $\\mathfrak h_{\\mathbb C}$ of the simple restricted roots of $\\mathfrak a$. If $\\mathfrak g$ is noncompact, then $r \\geq 1$. Because $\\mathfrak g_{\\mathbb C}$ is simple, by Lemma~\\ref{complexrootsbigger} there are at least $\\dim_{\\mathbb C}(\\mathfrak h_{\\mathbb C}) - 1$ positive roots $\\beta$ of $\\mathfrak h_{\\mathbb C}$ with $\\beta > \\alpha_1$. The restrictions of all these roots are positive. Though the restrictions may coincide, they give the following lower bound for multiplicities:\n\\[ \\sum_{\\alpha \\in \\Sigma^+} m(\\alpha) \\geq r + \\dim_{\\mathbb C}(\\mathfrak h_{\\mathbb C}) - 1 \\,. \\]\nif this is equal to $r$, then $\\dim_{\\mathbb C}(\\mathfrak h_{\\mathbb C}) = 1$. That is, $\\mathfrak g_{\\mathbb C} \\cong \\mathfrak{sl}_2(\\mathbb C)$. Because we are assuming $\\mathfrak g$ is noncompact, it must be the unique noncompact form of $\\mathfrak{sl}_2(\\mathbb C)$, which is $\\mathfrak{sl}_2(\\mathbb R)$.\n\\end{proof}\n\n\\begin{proposition}\\label{Klargecharacterization}Let $G$ be a semisimple Lie group. The following are equivalent:\n\\begin{enumerate}\n\\item $\\dim(K) - \\dim(A) = \\dim(M)$.\n\\item All simple factors of $\\mathfrak g$ are either compact or $\\mathfrak{sl}_2(\\mathbb R)$.\n\\end{enumerate}\n\\end{proposition}\n\nNote that we always have $\\dim(K) \\geq \\dim(M) + \\dim(A)$.\n\n\\begin{proof}\nThe difference $\\dim(K) - \\dim(M)$ is equal to $\\sum_{\\alpha \\in \\Sigma^+} m(\\alpha)$. If this is an equality, then we must have the same equality for all simple factors of $\\mathfrak g$. By Lemma~\\ref{simplewithfewroots}, this is equivalent to saying that all simple factors of $\\mathfrak g$ are either compact or $\\mathfrak{sl}_2(\\mathbb R)$.\n\\end{proof}\n\n\n\\subsection{Proofs specific to \\texorpdfstring{$\\operatorname{SL}_3(\\mathbb{R})$}{SL3(R)}}\n\nSome of the lemmas that go into the proof of Proposition~\\ref{genericcriticalsetstructuresemisimple} admit more direct and computational proofs when $G = \\operatorname{SL}_{3}(\\mathbb{R})$. We include one of those here, because it features a rather curious inequality.\n\n\\begin{proof}[Direct proof of Corollary~\\ref{Cnonempty} when $G = \\operatorname{SL}(3, \\mathbb{R})$] We use the standard choice of Iwasawa decomposition. Write $H_{0} = \\operatorname{diag}(a, b, c)$. Proving that the set $\\mathcal{C}(G, H_{0})$ is nonempty is equivalent to showing that there exists a symmetric matrix $X = \\operatorname{Ad}_{k^{-1}}(H_{0})$ with zeros on the diagonal, which is isospectral with $H_{0}$. Write $X = \\begin{psmallmatrix}\n0 & x & y \\\\\nx & 0 & z \\\\\ny & z & 0\n\\end{psmallmatrix}$. 
Then $H_{0}$ and $X$ have the same characteristic polynomial when\n\\begin{align} \\label{XHnoughtequations}\n\\begin{cases}\nx^{2} + y^{2} + z^{2} = - (ab + bc + ca) = \\frac{1}{2}(a^{2} + b^{2} + c^{2}) \\\\\n2 xyz = abc \\,.\n\\end{cases}\n\\end{align}\nBecause the first equation does not see the signs of $x, y, z$, squaring the second does not affect solvability. We may now view \\eqref{XHnoughtequations} as prescribing the arithmetic and geometric mean of $x^{2}, y^{2}, z^{2}$, namely $\\frac{1}{6}(a^{2} + b^{2} + c^{2})$ respectively $(\\frac{1}{4} a^{2}b^{2}c^{2})^{1\/3}$. It is well known that a necessary and sufficient condition for this to have a solution in nonnegative numbers $x^{2}, y^{2}, z^{2}$, is that the arithmetic mean is at least the geometric mean, meaning that\n\\[ \\frac{1}{6}(a^{2} + b^{2} + c^{2}) \\geq \\left( \\frac{1}{4} a^{2}b^{2}c^{2} \\right)^{1\/3} \\,. \\]\nThat this condition is satisfied is Lemma~\\ref{AMGMlemma} below, and it does not require the hypothesis that $H_{0} \\in \\mathfrak{a}^{\\operatorname{gen}}$.\n\\end{proof}\n\n\\begin{lemma}[AM--\\texorpdfstring{$\\sqrt[3]{2}$}{sqrt 3 2}GM] \\label{AMGMlemma}Let $a, b, c \\in \\mathbb{R}$ with $a + b + c = 0$. Then\n\\[ \\left( \\frac{a^{2} + b^{2} + c^{2}}{3} \\right)^{3} \\geq 2 \\cdot a^{2}b^{2}c^{2} \\,. \\]\n\\end{lemma}\n\nThe proof of Lemma~\\ref{AMGMlemma} is an amusing exercise using the AM--GM inequality. Equality holds if and only if $(a, b, c)$ is proportional to $(1, 1, -2)$ or a permutation thereof. With the notations as in Corollary~\\ref{Cnonempty}, this corresponds to the case where $H_{0}$ lies in $\\mathfrak{a}_{L}$ for some semistandard Levi subgroup $L \\in \\mathcal L - \\{ G\\}$.\n\n\n\n\n\\section{The \\texorpdfstring{$K$}{K}-projection and the geodesic flow}\n\\label{secKprojection}\n\nIn this subsection we prove Theorem~\\ref{KparttoMcentralizer}. For the possibility of uniformity, see Remark~\\ref{remarkKvsNproofs}. For $H \\in \\mathfrak{a}$ and $k \\in K$, define a map\n\\begin{align}\n\\label{definitionp}\n\\begin{split}\np_{H, k} : \\mathbb{R} & \\to \\mathfrak{p} \\\\\nt & \\mapsto \\operatorname{Ad}_{\\kappa(k e^{tH})}(H) \\,.\n\\end{split}\n\\end{align}\nUsing \\eqref{derivativeAd} and Lemma~\\ref{differentialkpart} to differentiate $\\kappa$, we find that\n\\[ p_{H, k}' (t) = [E_{\\mathfrak{k}}(p_{H, k}(t)), p_{H, k}(t)] \\,.\\]\nTherefore $p_{H, k}(t)$ is a solution to the homogeneous quadratic differential equation in $\\mathfrak{p}$ given by\n\\begin{equation} \\label{diffequationpt}\nX' = [E_{\\mathfrak{k}}X, X] \n\\end{equation}\nwith initial value $\\operatorname{Ad}_{k}(H)$. Its flow\n\\begin{align} \\label{dynamicalsystem}\n\\begin{split}\np : \\mathbb{R} \\times \\mathfrak{p} & \\to \\mathfrak{p} \\\\\n(t, X) & \\mapsto p_{t}(X) \\,.\n\\end{split}\n\\end{align}\nis given explicitly by $p_{t}(X) = p_{H, k}(t)$ when $X = \\operatorname{Ad}_{k}(H)$, and is in particular defined for all $t \\in \\mathbb{R}$. We will prove Theorem~\\ref{KparttoMcentralizer} by gathering enough information about the dynamical system \\eqref{dynamicalsystem}.\n\n\\begin{example}Let $G = \\operatorname{PSL}_{2}(\\mathbb{R})$ and identify $\\mathfrak{p}$ with $\\mathbb{R}^{2}$ via an isometry that sends $\\mathfrak{a}$ to the horizontal axis $\\mathbb{R} \\times \\{ 0 \\}$. It is clear from \\eqref{diffequationpt} that the points of $\\mathfrak{a}$ are stationary, and it is clear from \\eqref{definitionp} that the norm of $X \\in \\mathfrak{p}$ is invariant under the flow. 
It is then not hard to see that the phase portrait of the dynamical system \\eqref{dynamicalsystem} is as follows: the points of $\\mathfrak{a}^{+}$ are unstable equilibria, the points of $\\mathfrak{a}^{-}$ are stable equilibria, and apart from the equilibrium at $0$ every other orbit is heteroclinic and describes a Euclidean half circle with endpoints on the horizontal axis, starting at $\\mathfrak{a}^{+}$ and ending at $\\mathfrak{a}^{-}$.\\end{example}\n\nIt is clear from \\eqref{diffequationpt} that all elements of $\\mathfrak{a}$ are equilibria. The key in the proof of Theorem~\\ref{KparttoMcentralizer} is the existence of functions that are monotonic along orbits. For a root $\\alpha \\in \\Sigma$, let $H_{\\alpha} \\in \\mathfrak{a}$ be as in \\S\\ref{secroots}. The set $\\{ H_{\\alpha} : \\alpha \\in \\Pi \\}$ is a basis of $\\mathfrak{a}$. When $\\alpha \\in \\Pi$ and $H \\in \\mathfrak{a}$, define $c_{\\alpha}(H)$ to be the $\\alpha$th coordinate of $H$ in this basis. More generally, define $c_{\\alpha}(X) = c_{\\alpha}(E_{\\mathfrak{a}}(X))$ for $X \\in \\mathfrak{g}$.\n\n\\begin{lemma} \\label{aptderivative}Let $X \\in \\mathfrak{p} - \\mathfrak{a}$. Then $c_{\\alpha}([E_{\\mathfrak{k}}X, X]) \\leq 0$ for all $\\alpha \\in \\Pi$, and at least one is strictly negative. That is, $E_{\\mathfrak{a}}[E_{\\mathfrak{k}}X, X]$ is nonzero and lies in the cone spanned by $\\{ -H_{\\alpha} : \\alpha \\in \\Pi \\}$.\\end{lemma}\n\n\\begin{proof}Write $X = \\sum_{\\alpha \\in \\Sigma \\cup \\{ 0 \\}} X_{\\alpha}$ in the restricted root space decomposition \\eqref{rootspacedecomp}. Using that $X_{- \\alpha} = - \\theta(X_{\\alpha})$ and $E_{\\mathfrak{k}} X = \\sum_{\\alpha \\in \\Sigma^{+}} (X_{- \\alpha} - X_{\\alpha})$, we find that\n\\begin{align*}\nE_{\\mathfrak{a}}[E_{\\mathfrak{k}}X, X] & = \\sum_{\\alpha \\in \\Sigma^{+}} [X_{- \\alpha} - X_{\\alpha}, X_{- \\alpha} + X_{\\alpha}] \\\\\n& = - 2 \\sum_{\\alpha \\in \\Sigma^{+}} [X_{\\alpha}, X_{- \\alpha}] \\\\\n& = -2 \\sum_{\\alpha \\in \\Sigma^{+}} \\langle X_{\\alpha}, X_{- \\alpha} \\rangle \\cdot H_{\\alpha} \\\\\n& = -2 \\sum_{\\alpha \\in \\Sigma^{+}} \\lVert X_{\\alpha} \\rVert_{\\theta}^{2} \\cdot H_{\\alpha} \\\\\n& = -2 \\sum_{\\alpha \\in \\Pi} \\sum_{\\beta \\in \\Sigma^{+}} n_{\\alpha}(\\beta)\\, \\lVert X_{\\beta} \\rVert_{\\theta}^{2} \\cdot H_{\\alpha}\n\\end{align*}\nwhere in the third equality we have used \\cite[\\S II, Lemma 2.18]{Knapp2002}, and in the last equality we have written each positive root as $\\beta = \\sum_{\\alpha \\in \\Pi} n_{\\alpha}(\\beta)\\, \\alpha$ with integers $n_{\\alpha}(\\beta) \\geq 0$, so that $H_{\\beta} = \\sum_{\\alpha \\in \\Pi} n_{\\alpha}(\\beta)\\, H_{\\alpha}$. Therefore the coefficients of $E_{\\mathfrak{a}}[E_{\\mathfrak{k}}X, X]$ in the basis $\\{ H_{\\alpha} : \\alpha \\in \\Pi \\}$ are nonpositive, and because $X \\notin \\mathfrak{a}$ at least one of them is strictly negative. (This is essentially the same proof as that of Lemma~\\ref{hessianquadformnegativedefinite}.)\n\\end{proof}\n\n\\begin{lemma} \\label{apartdecreasing} Let $X \\in \\mathfrak{p} - \\mathfrak{a}$ and $t > 0$. Then $c_{\\alpha}(p_{t}(X)) \\leq c_{\\alpha}(X)$ for all $\\alpha \\in \\Pi$, and at least one inequality is strict.\n\\end{lemma}\n\n\\begin{proof}Note first that $p_{t}(X) \\notin \\mathfrak{a}$ for all $t$, because the points of $\\mathfrak{a}$ are equilibria and solutions of \\eqref{diffequationpt} are unique. By Lemma~\\ref{aptderivative}, every function $t \\mapsto c_{\\alpha}(p_{t}(X))$ is therefore non-increasing, and near every $t \\in \\mathbb{R}$ at least one of them is strictly decreasing.\n\\end{proof}\n\n\\begin{lemma} \\label{nonwanderinga}The only non-wandering points for the dynamical system \\eqref{dynamicalsystem} are those of $\\mathfrak{a}$.\n\\end{lemma}\n\n\\begin{proof}Let $X \\in \\mathfrak{p} - \\mathfrak{a}$. By Lemma~\\ref{apartdecreasing} there exists $\\alpha \\in \\Pi$ for which $c_{\\alpha}(p_{1}(X)) < c_{\\alpha}(X)$. 
By continuity of the flow, there exists a neighborhood $U$ of $X$ such that $c_{\\alpha}(p_{1}(U)) < c_{\\alpha}(U)$, meaning that every value of $c_{\\alpha}$ on $p_{1}(U)$ is smaller than every value of $c_{\\alpha}$ on $U$. Because $c_{\\alpha}$ is non-increasing along orbits, we have $p_{t}(U) \\cap U = \\varnothing$ for all $t \\geq 1$. Thus $X$ is wandering.\n\\end{proof}\n\n\\begin{lemma} \\label{dynamicallimita}Let $X \\in \\mathfrak{p}$. There exists $H \\in \\mathfrak{a}$ such that $\\lim_{t \\to \\infty }p_{t}(X) = H$. \\end{lemma}\n\n\\begin{proof}Write $X = \\operatorname{Ad}_{k}(H')$ with $k \\in K$ and $H' \\in \\mathfrak{a}$. Consider its $\\omega$-limit $\\omega(X)$. In view of the explicit expression for $p_{t}(X)$, the forward orbit of $X$ stays in the compact set $\\operatorname{Ad}_{K}(H')$, so $\\omega(X)$ is nonempty, connected and contained in $\\operatorname{Ad}_{K}(H')$. By Lemma~\\ref{nonwanderinga} we also have $\\omega(X) \\subset \\mathfrak{a}$. Thus $\\omega(X) \\subset \\mathfrak{a} \\cap \\operatorname{Ad}_{K}(H')$. By Lemma~\\ref{conjugatetoAimplieslevi}, this intersection is equal to the Weyl group orbit of $H'$. In particular, it is discrete. Because $\\omega(X)$ is connected, it must consist of a single point of $\\mathfrak{a}$.\n\\end{proof} \n\n\\begin{proof}[Proof of Theorem~\\ref{KparttoMcentralizer}] The $K$-projection of $ge^{tH}$ does not depend on the triangular part of $g$, so we may assume that $g \\in K$. The orbit map $k \\mapsto \\operatorname{Ad}_{k}(H)$ induces a diffeomorphism\n\\[ K \/ Z_{K}(H) \\to \\operatorname{Ad}_{K}(H) \\,. \\]\nTake $k \\in K$. When $t \\to + \\infty$, the image of $\\kappa(k e^{tH})$ under this map tends to an element $\\operatorname{Ad}_{m}(H) \\in \\operatorname{Ad}_{M'}(H)$ by Lemma~\\ref{dynamicallimita}. Therefore $\\kappa(k e^{tH})$ tends to the point $m Z_{K}(H)$ in the quotient $K \/ Z_{K}(H)$. Equivalently, $\\kappa(k e^{tH})$ tends to the set $m Z_{K}(H)$ in $K$.\n\\end{proof}\n\n\\begin{remark}\\label{remarkKvsNproofs}In general the convergence in Lemma~\\ref{dynamicallimita} is not uniform. Already for $G = \\operatorname{SL}_3(\\mathbb R)$ there are heteroclinic orbits that have nearby orbits with totally different limit points, even some that emanate from equilibria in the positive Weyl chamber. Some of these orbits, which come in one-dimensional families, can be seen to lie in Levi subalgebras (Lie algebras of $L \\in \\mathcal L$). But not all of them do, and certainly not those that come in two-dimensional families. In fact, it seems likely that these badly behaved orbits correspond exactly to the partition of $K$ into the sets $\\kappa(\\Omega_{\\mathcal S})$ defined in \\S\\ref{npartcanonicalpartitions}; compare Example~\\ref{omegaslthree}.\n\nFurther evidence for this is the following observation. Let $\\rho : G \\to \\operatorname{SL}(V)$ be any representation with finite kernel, and let $\\Omega_{\\mathcal S^0}$ be the set defined in \\S\\ref{npartcanonicalpartitions}. As explained in Remark~\\ref{atonegativechamber} we expect that for $g \\in \\Omega_{\\mathcal S^0}$ and $\\lambda \\in (\\mathfrak a^*)^+$ we have $\\lambda(H(ga)) \\to - \\infty$ as $a \\to \\infty$ in $A$, and we sketched a proof of this when $G =\\operatorname{SL}_n(\\mathbb R)$ and $\\rho = \\operatorname{Std}$. Moreover, when $\\lambda \\in \\Sigma^+$ is merely a positive root, it is still reasonable to expect that $\\lambda(H(ga)) \\to - \\infty$ as $a \\to \\infty$ in $A$ uniformly along regular directions; this is also apparent for $G =\\operatorname{SL}_n(\\mathbb R)$. Now writing $ga = n'a'k'$, we have for $H \\in \\mathfrak a$ that $\\operatorname{Ad}_{k'}(H) = \\operatorname{Ad}_{a'^{-1} n'^{-1}g}(H)$. 
When $g \\in \\Omega_{\\mathcal S^0}$ stays in a compact set, then so does $n'$, by Theorem~\\ref{npartboundeduniform}, and if $\\lambda(a') \\to - \\infty$ for all positive roots $\\lambda$, then the positive root space components of $\\operatorname{Ad}_{a'^{-1} n'^{-1}g}(H)$ tend to zero. On the other hand this equals $\\operatorname{Ad}_{k'}(H)$, which lives in a bounded subset of $\\mathfrak p$, so that the negative root space components approach zero as well, meaning that $\\operatorname{Ad}_{k'}(H) \\to \\mathfrak a$ and therefore $k' \\to M' Z_K(H)$. Taking $H \\in \\mathfrak a$ regular, we would obtain that $k' \\to M'$, uniformly for $g$ in compact subsets of $\\Omega_{\\mathcal S^0}$ and $a\\to \\infty$ inside regular cones of $A$.\n\\end{remark}\n\n\n\n\\bibliographystyle{plain}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nThe discovery of the 125-{\\rm GeV } SM-like scalar \\cite{LHC} and the present absence of any new physics signals has opened up a whole host of questions as to the true nature of the electroweak symmetry breaking and to what may lie beyond the Standard Model. The sole existence of the 125-{\\rm GeV } particle would leave unanswered several deep questions such as the origin of neutrino masses, the hierarchy of quark and lepton masses among many others. It also implies that the electroweak vacuum is metastable with drastic consequences in the very far-distant future \\cite{metastable}. It remains to be seen whether this most simple picture- albeit one with many question marks- will be the ultimate theory of nature or it is merely an effective theory at current accessible energies whose reality tests are incomplete and more non-SM phenomena will pop up in the not-too-distant future with Run II of the LHC.\n\nDespite the present lack of new physics at the LHC, it does not imply that it is not there. On the contrary, new physics has already appeared in the neutrino sector through neutrino oscillation and its implication on neutrino masses. This evidence, although quite clear, is only indirect and does not show where the new physics that gives rise to the aforementioned phenomena may appear. This difficulty in finding a direct evidence for the new physics involved in generating neutrino masses is compounded by the fact that these masses are so tiny, more than seven orders of magnitude smaller than the lightest lepton: the electron. In the most generic scenario of the elegant seesaw mechanism for generating tiny masses, the right-handed neutrinos are sterile i.e. singlets under the electroweak gauge group. In a nutshell, the two mass eigenvalues are $m_D^2\/M$ and $M_R$ where the Dirac mass $m_D$ is proportional to the electroweak scale while the Majorana mass $M_R$ is $\\gg m_D$.\nIn addition to the fact that $\\nu_R$'s are assumed to be electroweak singlets, the very large values for $M$ in a generic scenario makes it very very difficult to probe the crucial physics, namely that which gives rise to $M_R$ which is responsible for the lightness of the ``active\" neutrinos. Another facet of this new physics is the Majorana nature of the ``active\" neutrinos themselves which could manifest itself through neutrino less double beta decays which so far have not been observed. Through neutrino oscillations, we have a hint of new physics but what it might be and where to look for it is still a big mystery at the present time. 
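\nTo see in the simplest terms why the generic seesaw picture places this physics out of reach, it is useful to recall the familiar one-generation mass matrix, written here schematically in the $(\\nu_L, \\nu_R)$ basis:\n\\begin{equation}\n\\mathcal{M}_{\\nu} = \\left( \\begin{array}{cc}\n0 & m_D \\\\\nm_D & M_R\n\\end{array} \\right) \\,,\n\\end{equation}\nwhose eigenvalues have magnitudes $\\approx m_D^2\/M_R$ and $\\approx M_R$ when $M_R \\gg m_D$. A sub-eV ``active\" neutrino mass with an electroweak-scale $m_D$ then requires an enormous $M_R$, far beyond direct experimental reach.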
\n\nThe aforementioned uncertainties rest in large part on the assumption that right-handed neutrinos are electroweak singlets. This usually comes from\na certain extension of the SM such as the Left-Right symmetric model $SU(2)_L \\times SU(2)_R \\times U(1)_{B-L}$ \\cite{senjanovic} or the Grand Unified model $SO(10)$, among others. \nThe L-R symmetric version has an advantage over the GUT version in that the right-handed neutrino Majorana mass is proportional to the breaking scale of $SU(2)_R$ which could be much lower than a typical GUT scale and which could open up the possibility for detecting right-handed neutrinos (and $W_R$ as well). A recent search for right-handed neutrinos and $W_R$ at the LHC \\cite{CMSWR} has yielded a lower bound of around 3 TeV for $W_R$. Bounds on right-handed neutrino masses were also presented in \\cite{CMSWR}. We will come back to the aforementioned remarks below.\n\nIf one is however willing to entertain the idea that right-handed neutrinos {\\em are not} sterile, there is an entire panorama of accessible phenomena that can be searched for and studied. A non-sterile right-handed neutrino necessarily interacts with the electroweak gauge bosons and the Majorana mass term is expected to carry the electroweak quantum number and hence is proportional to the electroweak breaking scale. Right-handed neutrinos could then be searched for both from an interaction point of view and from an energetic one. A model of this kind was put forth by one of us (PQH) \\cite{pqnur} (the EW$\\nu_R$ model).\n\nIn the EW$\\nu_R$ model \\cite{pqnur}, right-handed neutrinos are parts of $SU(2)$ doublets along with their charged partners (the mirror charged leptons). Anomaly freedom dictates the existence of doublets of right-handed mirror quarks. The particle content of the model is listed in the next section. The existence of extra doublets of chiral fermions, the mirror quarks and leptons, is potentially fatal for the model because of their contributions to the electroweak precision parameters, in particular the S-parameter. Those extra chiral doublets would make a ``large\" contribution to the S-parameter, an undesirable outcome. Fortunately, the EW$\\nu_R$ model contains a Higgs triplet which makes opposite contributions to the S-parameter and thus offsetting those of the mirror fermions. An exhaustive study of the electroweak precision parameters within the framework of the EW$\\nu_R$ model has been carried out in \\cite{hung2} with the main result being that there is a large parameter space which satisfies the precision constraints. \n\nThe EW$\\nu_R$ model in its original inception \\cite{pqnur} contains, beside one Higgs doublet which couples to both SM and mirror fermions, two scalar triplets, one (complex) with hypercharge $Y\/2=1$ and another (real) with $Y\/2=0$. Out of the thirteen degrees of freedom (4 for the doublet, 6 for the complex triplet and 3 for the real triplet), three are absorbed by W's and Z and the remaining {\\em ten} become physical degrees of freedom. Can one of those ten physical scalars describe the observed 125-{\\rm GeV } SM-like scalar? If not, what minimal extension would be needed for that purpose? Where and how does one look for the more massive scalars which could be CP even or odd? \n\nThe plan of the paper is as follows. Section~\\ref{sec:summary} will be devoted to a summary of the EW$\\nu_R$ model with its particle content and, in particular for this paper, its scalar sector. 
For completeness, the electroweak precision parameter constraints will also be summarized. Section~\\ref{sec:extewnur} presents some of the salient points concerning the scalar sector of the original EW$\\nu_R$ model. A particular attention is paid to what this sector has to say about the 125-{\\rm GeV } SM-like scalar. We show why the lightest spin-0 particle has to be CP-odd if one wishes to identify it with the 125-{\\rm GeV } object. This has to do with the fact that the production cross section for the scalar is very large compared with the equivalent SM quantity. This occurs when a single Higgs doublet couples to both SM and mirror fermions. The CP-odd option unfortunately is ruled out by the likelihood analysis which favors the CP-even case \\cite{LHC2}. At the end of this section we present a simple extension of the original model by adding one extra Higgs doublet. In this extension, by imposing a global symmetry, one Higgs doublet is made to couple to SM fermions while the other one couples only to mirror fermions. The scalar mass eigenstates and eigenvalues are shown as well as their couplings to fermions and gauge bosons. Section~\\ref{sec:data} discusses the implications of the extended model in light of the existence of the 125-{\\rm GeV } SM-like scalar. We will show in that section the dual nature of the 125-{\\rm GeV } SM-like scalar and only further measurements can tell whether or not it is an ``impostor\".\n\n\n\\section{The EW$\\nu_R$ model: A summary}\\label{sec:summary}\n\nThe main idea of the EW$\\nu_R$ model \\cite{pqnur} was to search for a model in which right-handed neutrinos {\\em naturally} acquire a mass proportional to the electroweak scale $\\Lambda_{EW} = 246 ~{\\rm GeV }$. For this to occur, the {\\em most natural} way to implement this idea is for right-handed neutrinos to be {\\em non-sterile}. In particular, the simplest way is to put them in doublets along with right-handed mirror charged lepton partners. In this manner, a Majorana mass term of the type $M \\nu_{R}^{T} \\sigma_2 \\nu_R$ necessarily carries an $SU(2) \\times U(1)$ quantum number and transforms like an $SU(2)$ triplet. (Details are summarized below.) As shown in \\cite{pqnur}, a new Higgs sector including triplets is needed and it obviously participates in the symmetry breaking of the electroweak gauge group. The EW$\\nu_R$ model of \\cite{pqnur} is highly testable for the following reasons: 1) $\\nu_R$'s are sufficiently light; 2) $\\nu_R$'s couple to W and Z and can be produced through these couplings; 3) The presence of an extended Higgs sector.\n\nAt this point, a comparison with the popular Left-Right symmetric model (L-R model) is in order here. Basically, one would like to probe the physics that governs the right-handed neutrino Majorana mass scale $M_R$ since it is a cornerstone of the seesaw mechanism. \n\nAs we will review below, the gauge structure of the EW$\\nu_R$ model \\cite{pqnur} is the same as that of the SM, namely $SU(2) \\times U(1)$. The gauge structure of the L-R model is $SU(2)_L \\times SU(2)_R \\times U(1)_{B-L}$. As a result, the highest mass scale of the EW$\\nu_R$ model is $\\Lambda_{EW} \\sim 246\\; {\\rm GeV }$, while the L-R model is characterized by {\\em two} scales: $\\Lambda_{L}$ and $\\Lambda_{R}$ with $\\Lambda_{R}$ being generally much larger than $\\Lambda_{L}$. 
The $\\nu_R$ Majorana mass is proportional to $\\Lambda_{EW}$ in the EW$\\nu_R$ model and is naturally {\\em ``light\"} while, for the L-R model, it is proportional to $\\Lambda_{R}$ and could be ``heavy\". \n\nThe next difference lies in the production mechanism and detection of $\\nu_R$'s. In the L-R model, $\\nu_R$'s are produced through the process $\\bar{u} + d \\rightarrow W_R \\rightarrow N + l$ \\cite{keung}. In contrast, the production of $\\nu_R$'s in EW$\\nu_R$ model proceeds through $q + \\bar{q} \\rightarrow Z \\rightarrow \\nu_R + \\bar{\\nu}_R$ or $\\nu_R + \\nu_R$ ($\\nu_R$ is a Majorana particle). Since $M_Z $ is the mass scale that enters the production cross section for $\\nu_R$ in the EW$\\nu_R$ model, one expects the number of events characteristic of that model to be {\\em significantly} larger than that for the L-R model which is controlled by $M_{W_R}$, making it much easier to probe signals such as like-sign dileptons \\cite{pqnur}. \n\nTo summarize, the main difference between the EW$\\nu_R$ model and the L-R model is the question of energy scales. Right-handed neutrinos in the EW$\\nu_R$ model have masses proportional to $\\Lambda_{EW}$ and hence are accessible experimentally through the direct coupling with the Z-boson at present (and future) colliders. Its physics is {\\em bounded} by the {\\em electroweak scale}. The scale $\\Lambda_{R}$ of the L-R model on the other hand is unknown and is only bounded from below experimentally. It would be much harder to probe $SU(2)_R$ if that scale turns out to be much higher than the present experimental bound of 3 {\\rm TeV }'s. The direct evidence for the seesaw mechanism through the production and decay of $\\nu_R$ is certainly within the reach of the EW$\\nu_R$ model, both from the energetic and production points of view.\n\nNext, concerning the scalar sector, the Higgs triplet of $SU(2)_R$ could be quite heavy, its mass being proportional to $\\Lambda_{R}$. The physics of the Higgs triplet of the EW$\\nu_R$ model is controlled by the electroweak scale and its existence (or non-existence) can hopefully be verified in Run II of the LHC. This is the subject of the present manuscript.\n\nLast but not least, the L-R model can be embedded in a grand unified group $SO(10)$. As mentioned in \\cite{pqnur}, the EW$\\nu_R$ model can be embedded in the group $E_6$. 
However, we consider the testability of the EW$\\nu_R$ model to be more important than its embedding in a GUT group.\n\n\n\\subsection{Gauge structure and particle content of the EW$\\nu_R$ model }\nBelow is a summary of the gauge structure and particle content of the minimal EW$\\nu_R$ model of \\cite{pqnur}.\nThe notations for the leptons and quarks are generic for any family.\n\n\\begin{itemize}\n\n\\item Gauge group: $SU(3)_C \\times SU(2) \\times U(1)_Y$\n\n\\item Lepton $SU(2)$ doublets (generic notation):\n\nSM: \n\\begin{equation}\nl_{L} = \\left( \\begin{array}{c}\n\\nu_L \\\\\ne_{L}\n\\end{array} \\right)\\;;\n\\end{equation}\n\nMirror: \n\\begin{equation}\nl^{M}_{R} = \\left( \\begin{array}{c}\n\\nu_R \\\\\ne^{M}_{R}\n\\end{array} \\right)\\;.\n\\end{equation}\n\n\\item Lepton $SU(2)$ singlets (generic notation):\n\nSM: $e_R$ ; Mirror: $e^{M}_L$\n\n\\item Quark $SU(2)$ doublets (generic notation):\n\nSM: \n\\begin{equation}\nq_{L} = \\left( \\begin{array}{c}\nu_{L} \\\\\nd_{L}\n\\end{array} \\right)\\;;\n\\end{equation}\n\nMirror: \n\\begin{equation}\nq^{M}_{R} = \\left( \\begin{array}{c}\nu^{M}_{R} \\\\\nd^{M}_{R}\n\\end{array} \\right)\\;.\n\\end{equation}\n\n\\item Quark $SU(2)$ singlets (generic notation):\n\nSM: $u_R$ , $d_R$ ; Mirror: $u^{M}_L$ , $d^{M}_L$.\n\n\\item The Higgs sector:\n\na) One Higgs doublet: $\\Phi$. This Higgs doublet couples to both SM and mirror fermions.\n\nb) One complex Higgs triplet with $Y\/2=1$ containing doubly-charged scalars:\n\\begin{equation}\n\\tilde{\\chi} = \\frac{1}{\\sqrt{2}}\\,\\vec{\\tau}.\\vec{\\chi}=\n\\left( \\begin{array}{cc}\n\\frac{1}{\\sqrt{2}}\\,\\chi^{+} & \\chi^{++} \\\\\n\\chi^{0} & -\\frac{1}{\\sqrt{2}}\\,\\chi^{+}\n\\end{array} \\right)\\;.\n\\end{equation}\n\nc) One real Higgs triplet with $Y\/2=0$:\n\\begin{equation}\n(\\xi^+, \\xi^0, \\xi^-)\\,.\n\\end{equation}\n\nd) One SM singlet Higgs: $\\phi_S$.\n\n\n\\end{itemize}\n\n\\subsection{Symmetry breaking in the EW$\\nu_R$ model}\n$SU(2) \\times U(1)$ is spontaneously broken by the vacuum expectation values (VEV) of the Higgs doublet and triplets. The Higgs potential \\cite{ChanoGold,pqnur} has a global $SU(2)_L \\times SU(2)_R$ symmetry. The triplets transform as $(3,3)$ and the doublet as $(2,2)$ under that global symmetry. Specifically,\n\\begin{equation}\n\\chi = \\left( \\begin{array}{ccc}\n\\chi^{0} &\\xi^{+}& \\chi^{++} \\\\\n\\chi^{-} &\\xi^{0}&\\chi^{+} \\\\\n\\chi^{--}&\\xi^{-}& \\chi^{0*}\n\\end{array} \\right) \\,,\n\\end{equation}\nand\n\\begin{equation}\n\\Phi = \\left( \\begin{array}{cc}\n\\phi^{0} & \\phi^{+} \\\\\n\\phi^{-} & \\phi^{0} \n\\end{array} \\right) \\, .\n\\end{equation}\nProper vacuum alignment dictates $\\langle \\chi^{0} \\rangle = \\langle \\xi^{0} \\rangle = v_M$ i.e.\n\\begin{equation}\n\\langle \\chi \\rangle = \\left( \\begin{array}{ccc}\nv_M &0&0 \\\\\n0&v_M&0 \\\\\n0&0&v_M\n\\end{array} \\right) \\,,\n\\end{equation}\n\\begin{equation}\n\\langle \\Phi \\rangle = \\left( \\begin{array}{cc}\nv_2\/\\sqrt{2} &0 \\\\\n0&v_2\/\\sqrt{2} \n\\end{array} \\right) \\,.\n\\end{equation}\nThese VEVs leave an unbroken $SU(2)_D$ custodial symmetry i.e. $SU(2)_L \\times SU(2)_R \\rightarrow SU(2)_D$. 
This ensures that $\rho = M_{W}^2/(M_{Z}^2 \cos^2 \theta_W)=1$ at tree level, and one now has
\begin{equation}
v= \sqrt{v_2^2+ 8\,v_M^2} \approx 246 \, {\rm GeV} \,.
\end{equation}

As discussed in \cite{pqaranda,hung2,GeorgMach}, with respect to $SU(2)$, the two triplets (one real and one complex) and the one doublet add up to 13 degrees of freedom, 3 of which are Nambu-Goldstone bosons absorbed by the $W$'s and the $Z$, leaving 10 physical degrees of freedom. Under the custodial symmetry $SU(2)_D$, these transform as
\begin{eqnarray}
		\text{five-plet (quintet)} &\rightarrow& H_5^{\pm\pm},\; H_5^\pm,\; H_5^0;\nonumber\\[0.5em]
		\text{triplet} &\rightarrow& H_3^\pm,\; H_3^0;\nonumber\\[0.5em]
		\text{two singlets} &\rightarrow& H_1^0,\; H_1^{0\prime}\,. \nonumber
\end{eqnarray}
The expressions for the scalar states can be found explicitly in Eq.~(\ref{eq:higgs}), by setting $s_{2M}\rightarrow 0$ and $\Phi_{2M} \rightarrow 0$.

\subsection{The seesaw mechanism in the EW$\nu_R$ model}
The main purpose of the EW$\nu_R$ model was to provide a scenario in which right-handed neutrinos are non-sterile and obtain their masses from the breaking of $SU(2) \times U(1)$. A Majorana mass term of the form $M_{R} \, \nu_{R}^{T}\, \sigma_{2} \, \nu_R$ in the EW$\nu_R$ model comes from the following Yukawa interaction:
\begin{equation}
g_{M} l_R^{M,T} \,\sigma_2\, \tilde{\chi} \, l_R^{M} \, ,
\end{equation}
which contains the term
\begin{equation}
g_M \, \nu_{R}^{T}\, \sigma_{2} \, \nu_R \, \chi^{0} \,.
\end{equation}
The right-handed neutrino Majorana mass is thus intrinsically linked to the breaking scale of $SU(2) \times U(1)$ through the VEV of $\tilde{\chi}$:
\begin{equation}
M_R = g_M\,v_M \, .
\end{equation}
As stressed in \cite{pqnur}, $M_R$ is bounded from below because the $\nu_R$ are now members of an $SU(2)$ doublet and would otherwise contribute to the Z-boson width, leading to the lower bound
\begin{equation}
M_R \geq M_Z/2 \approx 46 ~{\rm GeV}\,.
\end{equation}

As discussed in \cite{pqnur}, a global symmetry $U(1)_{MF}$ (referred to as $U(1)_M$ in \cite{pqnur}) was imposed so as to forbid a term of the form $g_{L} l_L^{T} \,\sigma_2\, \tilde{\chi} \, l_L$, which would give a large Majorana mass $g_L v_M$ to the left-handed neutrino unless $g_L$ is unnaturally fine-tuned to be tiny. As discussed in \cite{pqnur}, this is accomplished by the following transformation properties: $(l_R^M, ~e_L^M) \rightarrow e^{\imath\alpha_{MF}} (l_R^M, ~e_L^M)$, $\widetilde{\chi} \rightarrow e^{-2\imath \alpha_{MF}} \widetilde{\chi}$ and $\phi_S \rightarrow e^{-\imath\alpha_{MF}}\phi_S$, with all other SM particles being $U(1)_{MF}$ singlets. We will come back to this symmetry and its extended version below.

A Dirac mass term is of the form $m_D (\nu_{L}^{\dagger}\, \nu_R + h.c.)$. Since this is a product of two doublets, the simplest choice for the accompanying Higgs scalar is an $SU(2)$ singlet with zero hypercharge, namely $\phi_S$:
\begin{equation}
{\cal L}_S = g_{Sl} \, \bar{l}_{L}\, \phi_S \, l^{M}_{R} + H.c.
\,.
\end{equation}
With $\langle \phi_S \rangle = v_S$, the Dirac mass is given by
\begin{equation}
m_D = g_{Sl} v_S \,.
\end{equation}

The magnitude of the light neutrino mass is given by the seesaw relation
\begin{equation}
m_\nu \approx \frac{m_D^2}{M_R} \,.
\end{equation}
Since the charged mirror fermion masses are constrained to be $\gtrsim 100~{\rm GeV}$, the decay channel $\widetilde{H} \rightarrow f^M \bar{f}^M$ does not occur at leading order when $f^M$ is on-shell.

In what follows we identify the relevant variables in the analysis and estimate their allowed ranges.


\subsubsection{Lower limit on the masses of charged mirror fermions}

The lower limit of $102 ~{\rm GeV}$ on the masses of charged mirror leptons and mirror quarks is imposed based on the results of searches for sequential heavy charged leptons and quarks at LEP3 (see the `Heavy Charged Leptons' and `Heavy Quarks' sections in \cite{pdg} and references therein). Strictly speaking, these constraints apply only to sequential heavy fermions, with decays such as $L' \rightarrow \tau Z \rightarrow \tau l\bar{l}, \tau q \bar{q}$ or $Q^\prime\rightarrow bZ\rightarrow b~ l\bar{l}, b~ q \bar{q}$ or $Q^\prime\rightarrow bW^+\rightarrow b~ l\bar{\nu}, ~b~q\bar{q^\prime}$, etc.

However, charged mirror fermions in the EW$\nu_R$ model couple to the SM fermions in an altogether different way, through the scalar singlet $\phi_S$ \cite{pqnur,pqnur2}: $q^M\rightarrow q\phi_S$, $l^M\rightarrow l\phi_S$. This $\phi_S$ would appear as missing energy in the detector. Thus, the signature of final states in charged mirror fermion decay would involve a lepton + missing $E_T$ or a jet + missing $E_T$. Moreover, at CMS or ATLAS these decays could occur outside the beam pipe and inside the silicon vertex detector \cite{pqnur,pqnur2}. Therefore, the constraints from the aforementioned searches do not directly apply to charged mirror fermions. We still impose these constraints on charged mirror fermions, arguing that if these mirror fermions were lighter than $\sim 100 ~{\rm GeV}$, they would have been discovered at LEP3 running at 200~{\rm GeV} \cite{pdg}.

\subsubsection{Limits on VEVs, scalar and Yukawa couplings}

We consider only the cases where the scalar couplings and the Yukawa couplings of the mirror fermions are perturbative. The perturbativity constraints on the scalar and Yukawa couplings are $\lambda_i/4\pi \lesssim \mathcal{O}(1)$ and $\alpha_{f^M} = g_{MF}^2/4\pi \lesssim \mathcal{O}(1)$, respectively. For the numerical analysis we limit ourselves to cases where $\lambda_i /4\pi\leq 1.3$ and $\alpha_{f^M} \leq 1.5$.

As discussed towards the end of Sec.~\ref{sec:extewnur}, the $SU(2)_D$ singlet mass eigenstates depend on $s_2$, $s_{2M}$ and $s_M$. Therefore, they also depend on the vacuum expectation values (VEVs) of the real parts of $\Phi_2$, $\Phi_{2M}$ and $\chi$. While investigating different numerical forms of $\{a_{ij}\}$, one needs to vary these VEVs. Hence, it is necessary to estimate the limits on these VEVs before analyzing the 125-{\rm GeV} candidate in detail.

Recall that the charged SM fermions, the charged mirror fermions and the right-handed neutrinos obtain their masses from $v_2$, $v_{2M}$ and $v_M$, respectively. The various constraints on these masses constrain the ranges of the VEVs.

If the top quark, the heaviest SM fermion with a pole mass of $173.5~{\rm GeV}$, obtains its mass from $v_2$ with a {\em perturbative} Yukawa coupling ($g^2_{top} \leq 4 \pi$), then $v_2 \gtrsim 69 ~{\rm GeV}$.
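Explicitly, with the Yukawa coupling normalized as $m_t = g_{top}\, v_2/\sqrt{2}$ (the same normalization used for the mirror fermion masses below), the perturbativity requirement translates into
\begin{equation}
v_2 \;\geq\; \frac{\sqrt{2}\, m_t}{\sqrt{4\pi}} \;\approx\; 69~{\rm GeV}\,.
\end{equation}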
We set the lower bound on the masses of all the charged mirror fermions at $102~{\rm GeV}$, which is the LEP3 \cite{pdg} bound on heavy BSM quarks and BSM charged leptons. Hence, considering a constraint of $g^2_{MF}/4\pi \leq 1.5$ on the Yukawa couplings of all the charged mirror fermions, $v_{2M} \gtrsim 27 ~{\rm GeV}$, implying $v_M \lesssim 80 ~{\rm GeV}$. Thus, a perturbative $g_M$ requires $M_R \lesssim 283 ~{\rm GeV}$. We also know that $M_R \geq M_Z / 2\; \approx\; 45.5 ~{\rm GeV}$ \cite{pqnur} and, hence, $v_M \gtrsim 13 ~{\rm GeV}$. This implies that $v_2 \lesssim 241 ~{\rm GeV}$ and $v_{2M} \lesssim 233 ~{\rm GeV}$. This limit on $v_{2M}$, along with the perturbative limit on $g_{MF}$, sets an upper limit on the masses of the mirror fermions: $m_{MF} \lesssim 715 ~{\rm GeV}$. The allowed ranges for the VEVs and for the parameters defined in Eq.~(\ref{eq:sdefs}) are summarized in the table below.
\begin{table}[!htb]
	\renewcommand{\arraystretch}{2}
	\centering
	\caption{\label{tab:v_ranges}Allowed ranges of the VEVs and of the parameters defined in Eq.~(\ref{eq:sdefs}). The VEVs are given in {\rm GeV}; $s_2$, $s_{2M}$ and $s_M$ are dimensionless.}
	\begin{tabular}{l|l}
	\hline
	$69 ~\lesssim ~v_2 ~~\lesssim ~241$ & $~0.28 ~\lesssim ~s_2 ~~\lesssim ~0.98$\\
	$33 ~\lesssim ~v_{2M} \lesssim ~233~~$ & $~0.13 ~\lesssim ~s_{2M} \lesssim ~0.95$\\
	$13 ~\lesssim ~v_M ~\lesssim ~~83$ & $~0.15 ~\lesssim ~s_M ~\lesssim ~0.95$\\
	\hline
	\end{tabular}
\end{table}

\subsubsection{Common predictions for multiple decay channels}

In the EW$\nu_R$ model, predictions for the signal strengths of $\widetilde{H} \rightarrow W^+W^-$ and $\widetilde{H} \rightarrow ZZ$ are equal. Similarly, predictions for the signal strengths of $\widetilde{H} \rightarrow b\bar{b}$ are equal to those for $\widetilde{H} \rightarrow\tau\bar{\tau}$. This is expected since, as seen in Appendix~\ref{sec:hdecay},
\begin{eqnarray}\label{eq:ww_zz}
	\frac{\Gamma^{EW\nu_R}(\widetilde{H} \rightarrow W^+W^-)}{\Gamma^{SM}(H_{SM}^0\rightarrow W^+W^-)} &=& \frac{\Gamma^{EW\nu_R}(\widetilde{H} \rightarrow ZZ)}{\Gamma^{SM}(H_{SM}^0\rightarrow ZZ)}\,,\nonumber\\[0.5em]
	\frac{\Gamma^{EW\nu_R}(\widetilde{H} \rightarrow b\bar{b})}{\Gamma^{SM}(H_{SM}^0\rightarrow b\bar{b})} &=& \frac{\Gamma^{EW\nu_R}(\widetilde{H} \rightarrow \tau\bar{\tau})}{\Gamma^{SM}(H_{SM}^0\rightarrow \tau\bar{\tau})}\,.
\end{eqnarray}

Keeping all this in mind, in the next two subsections we analyze in detail the decay properties of the 125-{\rm GeV} candidate in the EW$\nu_R$ model.

\subsubsection{Numerical Analysis}\label{sec:numerical}

For this analysis a C++ code was written, also using some functionality of ROOT \cite{ROOT}. We investigated this case in the following steps:
\begin{itemize}
	\item
	We generated random combinations of $s_2,~s_{2M},~s_M,~\lambda_1,~\lambda_2,~\lambda_3$ and $\lambda_4$, using the TRandom3 random number generator in ROOT.
These parameters were varied over the following ranges:
\begin{eqnarray}\label{eq:hyde125para}
	-4 \pi ~~\leq ~~\lambda_1, ~&\lambda_2, ~\lambda_3&, ~\lambda_4 ~~\leq ~~4 \pi\,,\nonumber\\[0.5em]
	0.28 ~~\leq ~~&s_2& ~~\leq ~~0.98\,,\nonumber\\[0.5em]
	0.13 ~~\leq ~~&s_{2M}& ~~\leq ~~0.95\,,\nonumber\\[0.5em]
	0.15 ~~\leq ~~&s_M& ~~\leq ~~0.95\,.
\end{eqnarray}
The limits $|\lambda_i|/4\pi \lesssim 1$ are set so that the $\lambda$'s remain perturbative. The limits on $s_2$, $s_{2M}$, $s_M$ are based on Table~\ref{tab:v_ranges}.
	\item
	We numerically diagonalized the singlet mass matrix in Eq.~(\ref{eq:m2sing}) formed by every combination of the parameters to find the mass eigenvalues and the corresponding eigenvector matrix (mixing matrix) in Eq.~(\ref{eq:Hmasseigen}). Only those combinations of parameters which yielded the lightest mass eigenvalue in the range $125.7\pm 1.0~{\rm GeV}$ were saved; 4 million such parameter combinations were found.
	\item
	For each of the saved combinations we then calculated the various signal strengths. The gluon-gluon fusion channel was considered to calculate the predicted production cross section of the $\widetilde{H}$. The partial decay widths were calculated according to Appendix~\ref{sec:hdecay}, and the total width was calculated using Eq.~(\ref{eq:Hwidth}).
	\item
	In addition to the parameters in Eq.~(\ref{eq:hyde125para}), the following parameters are required to calculate the partial widths of $\widetilde{H}\rightarrow \gamma\gamma$ and $\widetilde{H}\rightarrow g g$, and the cross section of $g g \rightarrow \widetilde{H}$:
\begin{eqnarray}\label{eq:hydemupara}
	0 ~~&\leq& ~~\lambda_5 ~~\leq ~~15\,, \text{ varied with }\Delta\lambda_5 \sim 1.07\,,\nonumber\\[0.5em]
	\lambda_8 &=& -1\,, ~m_{H_3^+} = ~m_{H_{3M}^+} = 500 ~{\rm GeV}\,,\nonumber\\[0.5em]
	m_{H_5^+} &=& 200 ~{\rm GeV}\,, ~m_{H_5^{++}} = 320 ~{\rm GeV}\,, ~m_{q_3^M} = 120 ~{\rm GeV}\,,\nonumber\\[0.5em]
	m_{q_1^M} &=& m_{q_2^M} = m_{l^M} = 102 ~{\rm GeV}\,.
\end{eqnarray}
	\item
	We checked whether the signal strengths $\mu$ of the 125-{\rm GeV} $\widetilde{H}$ in various decay channels are within the $1\sigma$ constraints on the signal strengths, as measured by the CMS experiment. We did not impose the constraints from both CMS and ATLAS, because for some of the decay channels considered here the signal strength measurements from CMS and ATLAS do not agree with each other within the $1\sigma$ constraints. Also, CMS and ATLAS have not published their combined measurements from the recent analyses. We therefore chose to check agreement with the CMS measurements.

	Depending on their $1\sigma$ constraints, certain combinations out of the 4 million would agree only with the CMS results or only with the ATLAS results. Thus, imposing the constraints from ATLAS would discard some of the combinations that the CMS constraints would allow, and vice versa. However, this would not change any of the conclusions of the paper.
	\item
	We found 1501 out of 4 million combinations of the parameters that satisfy the $1\sigma$ constraints from CMS on the 125-{\rm GeV} Higgs signal strengths in the $WW$, $ZZ$, $b\bar{b}$, $\tau\bar{\tau}$ and $\gamma\gamma$ decay channels.
Table~\\ref{tab:hyde} lists 16 examples out of 1501 cases, with the masses of $\\widetilde{H} $, $\\widetilde{H}^\\prime $, $\\widetilde{H}^{\\prime\\prime} $, their mixing-matrix elements, and the signal strengths of the 125-{\\rm GeV } $\\widetilde{H} $ for various decay channels. \n\t\\item\n\tIn the code, there was no constraint imposed as to what is to be the dominant component in $\\widetilde{H} $. Interestingly, hardly any combinations among the 4 million had $H_1^0$ as a dominant component in the 125-{\\rm GeV } $\\widetilde{H} $. This means that either\n\t\\begin{enumerate}\n\t\t\\item\n\t\tat the mass of about 125-{\\rm GeV }, 4 million combinations do not yield enough resolution in the parameter space so as to find the $\\widetilde{H} \\sim H_1^0$ case, OR\n\t\t\\item\n\t\tthe $\\widetilde{H} \\sim H_1^0$ case cannot be found with the imposed limits on the parameters, and it requires at least some of these parameters to have values outside of these limits. \n\t\\end{enumerate}\n\t\\item\n\tThus, this scan of the parameter space only yielded {\\em Mr.~Hyde} cases, where the SM-like $H_1^0$ is a subdominant component in the 125-{\\rm GeV } $\\widetilde{H} $. Implications of these cases will be further discussed in section \\ref{sec:hyde}.\n\t\\item\n\tOn the other hand, to find the combinations of the parameters for which the 125-{\\rm GeV } $\\widetilde{H} $ has a dominant SM-like $H_1^0$ component, and which also satisfy the CMS constraints on the signal strengths, we had to choose some of the scalar couplings to have values outside $[-4\\pi,~4\\pi]$. These {\\em Dr.~Jekyll} cases thus require some interactions within the scalar sector to be in the strong-coupling regime. In the next subsection we discuss this scenario in detail.\n\n\\end{itemize}\n\n\n\\subsection{$\\widetilde{H} $ as 125-GeV Higgs candidate with a dominant SM-like component}\\label{sec:jekyll}\n\n\nWe illustrate the step-by-step process which we followed to analyze this case. \n\\begin{itemize}\n\t\\item\n\tA Mathematica code was written to numerically diagonalize the custodial-singlet mass matrix in Eq.~(\\ref{eq:m2sing}) and obtain its mass eigenvalues and eigenvector matrix i.e. the mixing matrix in Eq.~(\\ref{eq:Hmasseigen}). \n\t\\item\n\tIn this code, the values of $s_2 = 0.92, ~s_{2M} = 0.16$ (and thus, $s_M \\approx 0.36$) were fixed. The analysis was performed for different $s_2$ values, but, for $\\widetilde{H} \\sim H_1^0$, only the cases with $s_2 \\gtrsim 0.9$ were found to satisfy the experimental constraints on the signal strengths of the 125-{\\rm GeV } Higgs at LHC.\n\t\\item\n\tAfter fixing $s_2$ and $s_{2M}$, the scalar couplings $\\lambda_1$, $\\lambda_2$, $\\lambda_3$ and $\\lambda_4$ were manually varied so that $|\\lambda|\/4\\pi \\leq 1.3$, in order to find the combinations of $\\lambda$'s that yield the lowest eigenvalue of the mass matrix to be $125.7\\pm 1.0 ~{\\rm GeV }$ and the corresponding eigenstate to have dominant $H_1^0$ component.\n\t\\item\n\tRecall (refer to Eq.~(\\ref{eq:pot})) that $\\lambda_1$, $\\lambda_2$ and $\\lambda_3$ are the self-couplings of $\\Phi_2$, $\\Phi_{2M}$ and $\\chi$ respectively. $\\lambda_5$ is the measure of cross couplings of $\\Phi_2$, $\\Phi_{2M}$ and $\\chi$.\n\t\\item\n\tAs stated in section \\ref{sec:numerical}, we found combinations of the parameters which satisfy the CMS constraints on the signal strengths, when $\\lambda_2, ~\\lambda_5 > 4\\pi$. 
$|\lambda_1|, ~|\lambda_4|, ~|\lambda_8|$ are still $\leq 4\pi$, while $\lambda_3 \approx 15$. For illustrative purposes, we show below two of the many cases which satisfy the CMS constraints.
	\item
The calculation of the partial width of the $\widetilde{H} ~\rightarrow~\gamma\gamma$ channel necessitates fixing the values or ranges of the remaining parameters. In the example cases shown below we fix the other parameters as follows:
\begin{itemize}
	\item
	$m_{H_3^+} = 600 ~{\rm GeV}, ~m_{H_{3M}^+} = 700 ~{\rm GeV}$,
	\item
	the masses of all three charged mirror leptons: $m_{l^M} = 102 ~{\rm GeV}$,
	\item
	the masses of the lightest two generations of mirror quarks: $m_{q_1^M} = m_{q_2^M} = 102 ~{\rm GeV}$,
	\item
	for the purpose of computing the partial widths of the $\widetilde{H}$ decays in the scenarios above, we also fix the mass of the third mirror-quark generation at $m_{q_3^M} = 120 ~{\rm GeV}$. This mass will be varied to analyze the constraints on $\widetilde{H} \sim H_{1M}^0$.
\end{itemize}
	\item
The values of $m_{H_3^+}$ and $m_{H_{3M}^+}$ are chosen so as to have the largest allowed ranges for $m_{H_5^+}$ and $m_{H_5^{++}}$. We vary the latter two over the range $\sim 400 ~- ~730 ~{\rm GeV}$ for Examples 1 and 2. This variation has little effect on the predictions for the signal strengths of the $\widetilde{H}$ decays to $W^+W^-$, $ZZ$ and $f\bar{f}$, and only changes that for $\widetilde{H} \rightarrow \gamma\gamma$. $m_{H_5^+}$ and $m_{H_5^{++}}$ vary in correlation when the CMS constraints on the signal strength of the diphoton decay channel are imposed. For the numerical calculation of the other signal strengths in the following two examples we chose one of these correlated pairs of the two masses.

	\item
Example 1: $\lambda_1 = -0.077, ~\lambda_2 = 14.06, ~\lambda_3 = 15.4, ~\lambda_4 = 0.1175, ~\lambda_5 = 15, ~\lambda_8 = -1$ and $m_{H_5^+} = 500~{\rm GeV}, ~m_{H_5^{++}} = 540~{\rm GeV}$.
Fixing these, along with $s_2 = 0.92, ~s_{2M} = 0.16,~s_M \approx 0.36$, fully determines the singlet mass matrix, and hence the mixing matrix, given by:
\begin{equation}\label{eq:jekyll1}
	\left( \begin{array}{l}		\widetilde{H} \\ \widetilde{H}^\prime \\ \widetilde{H}^{\prime\prime} \end{array}\right) =
	\left( \begin{array}{ccc} 	0.998 		& -0.0518 		& -0.0329 \\
								0.0514 		& 0.999 		& -0.0140 \\
								0.0336 		& 0.0123 		& 0.999
	\end{array} \right)
	\left( \begin{array}{l} 		H_1^0\\ H_{1M}^0\\ H_1^{0\prime}\end{array}\right)\,,
\end{equation}
with $\widetilde{H} \sim H_1^0$, $\widetilde{H}^\prime \sim H_{1M}^0$, $\widetilde{H}^{\prime\prime} \sim H_1^{0\prime}$ and $m_{\widetilde{H} } = 125.7 ~{\rm GeV}$, $m_{\widetilde{H}^\prime } = 420 ~{\rm GeV}$, $m_{\widetilde{H}^{\prime\prime} } = 601 ~{\rm GeV}$. The element $a_{1,1M}$, i.e. the (1,2) element of the $3\times 3$ matrix, can actually vary within the range $(-0.05295, -0.0515)$ and still satisfy the CMS constraints.
\\%We will refer to this as `Example 1'.
Example 2: $\lambda_1 = 0.0329, ~\lambda_2 = 14.2, ~\lambda_3 = 15.4, ~\lambda_4 = 0.0056, ~\lambda_5 = 15, ~\lambda_8 = -1$, and $m_{H_5^+} = 590~{\rm GeV}, ~m_{H_5^{++}} = 600~{\rm GeV}$,
\begin{flalign}\label{eq:jekyll2}
	&\left( \begin{array}{l}		\widetilde{H} \\ \widetilde{H}^\prime \\ \widetilde{H}^{\prime\prime} \end{array}\right) =&\nonumber\\[0.5em]
	&\left( \begin{array}{ccc} 	0.99999...	& -2.49\times10^{-3}		& -1.60\times10^{-3} \\
								2.49\times10^{-3}	& 0.99999...			& -5.30\times10^{-4} \\
								1.60\times10^{-3}	& 5.26\times10^{-4}	& 0.99999...
	\end{array} \right)
	\left( \begin{array}{l} 		H_1^0\\ H_{1M}^0\\ H_1^{0\prime}\end{array}\right),&
\end{flalign}
with $\widetilde{H} \sim H_1^0$, $\widetilde{H}^\prime \sim H_{1M}^0$, $\widetilde{H}^{\prime\prime} \sim H_1^{0\prime}$ and $m_{\widetilde{H} } = 125.7 ~{\rm GeV}$, $m_{\widetilde{H}^\prime } = 420 ~{\rm GeV}$, $m_{\widetilde{H}^{\prime\prime} } = 599 ~{\rm GeV}$. The allowed range for $a_{1,1M}$, the (1,2) element of the $3\times 3$ matrix, is $(-3.40, -1.20)\times10^{-3}$.

\begin{table*}[!htb]
	\renewcommand{\arraystretch}{1.5}
	\centering
	\caption{\label{tab:sigXBRjekyll}Partial width of $H\rightarrow gg$ as a measure of the production cross section, together with the partial widths and branching ratios for various channels, in the SM (for $m_{H_{SM}} = 125.7~{\rm GeV}$ with total width $4.17\times 10^{-3}~{\rm GeV}$) and in the EW$\nu_R$ model for the {\em Dr.~Jekyll} Example 2 scenario ($a_{1,1M} = -0.0025$, $m_{\widetilde{H} } = 125.7~{\rm GeV}$, total width $4.45\times 10^{-3}~{\rm GeV}$, $\widetilde{H} \sim H_1^0$). All the partial widths are given in {\rm GeV}.}
	\begin{tabular}{|c|c|c|c|c|c|c|c|}
	\hline
 ~ & \multicolumn{3}{c|}{SM} & \multicolumn{3}{c|}{EW$\nu_R$ } & \multirow{2}{*}{$\mu = \dfrac{(\sigma_{\widetilde{H} gg} \times BR)_{\text{EW$\nu_R$ }}}{(\sigma_{Hgg} \times BR)_{\text{SM}}}$} \\
 \cline{2-7}
 ~ & $\Gamma_{H\rightarrow gg}$ & Partial Width & BR & $\Gamma_{\widetilde{H} \rightarrow gg}$ & Partial Width & BR & ~ \\
 ~ & $\propto \sigma_{gg\rightarrow H}$ & ~ & ~ & $\propto \sigma_{gg\rightarrow H}$ & ~ & ~ & ~ \\
 \hline
	$\widetilde{H} \rightarrow W^+W^-$ 	& 3.55E-04 		& 9.42E-04 & 2.26E-01 & 3.46E-04 & 7.63E-04 & 1.72E-01 & 0.74 \\
	$\widetilde{H} \rightarrow ZZ$ 	& 3.55E-04 		& 1.17E-04 & 2.81E-02 & 3.46E-04 & 9.49E-05 & 2.13E-02 & 0.74 \\
	$\widetilde{H} \rightarrow b\bar{b}$ 	& 3.55E-04 		& 2.36E-03 & 5.66E-01 & 3.46E-04 & 2.79E-03 & 6.26E-01 & 1.07 \\
	$\widetilde{H} \rightarrow \tau\bar{\tau}$	& 3.55E-04	& 2.59E-04 & 6.21E-02 & 3.46E-04 & 3.06E-04 & 6.87E-02 & 1.07 \\
	$\widetilde{H} \rightarrow \gamma\gamma$		& 3.55E-04	& 9.51E-06 & 2.28E-03 & 3.46E-04 & 1.26E-05 & 2.82E-03 & 1.21 \\
	\hline
	\end{tabular}
\end{table*}

	\item
Notice that, although Examples 1 and 2 have very different values for the off-diagonal elements in $\{a_{ij}\}$, they yield comparable numerical signal strength predictions, the reason being principally that in both cases $\widetilde{H} \sim H_1^0$.
We can also find other cases having intermediate values for the off-diagonal elements that yield comparable signal strengths.

	\item
	Table~\ref{tab:sigXBRjekyll} shows the partial width for $H \rightarrow gg$ (as a measure of the production cross section), together with the partial widths and branching ratios for various decay channels in the SM and in the EW$\nu_R$ model, for Example 2. We see that these partial widths are not very different from those in the SM. This is expected since, in this case, the couplings of $H_1^0$ with the SM gauge bosons and fermions are also close to those of the SM Higgs.
	\item
The partial widths and the signal strengths for the $W^+W^-$ and $ZZ$ decay channels are smaller, whereas those for the $b\bar{b}$, $\tau\bar{\tau}$ and $\gamma\gamma$ decay channels are larger, than the corresponding values in the SM.

This is because, for the example in Table \ref{tab:sigXBRjekyll}, $s_2 < |a_{1,1}| < 1$ and, as per Eq.~(\ref{eq:WWZZwid}), the partial width $\Gamma^{EW\nu_R}(\widetilde{H} \rightarrow W^+W^-, ~ZZ) \sim |s_2 ~a_{1,1}|^2 \times \Gamma^{SM}(H^0_{SM}\rightarrow W^+W^-, ~ZZ)$.

On the other hand, as seen in Eq.~(\ref{eq:ffwid}), $\Gamma^{EW\nu_R}(\widetilde{H} \rightarrow f\bar{f}) \sim |a_{1,1}/s_2|^2 ~\Gamma^{SM}(H^0\rightarrow f ~\bar{f})>\Gamma^{SM}(H^0\rightarrow f ~\bar{f})$.

$\Gamma^{EW\nu_R}(\widetilde{H} \rightarrow \gamma\gamma)$ is larger than the corresponding SM value because, in the EW$\nu_R$ model, charged scalars and mirror fermions also contribute to this decay through triangle loops (refer to Appendix \ref{sec:diphoton}). Recall that in the SM this decay is dominated by the $W$ loop.

	\item
Fig.~\ref{fig:hdecaylim} shows the comparison between the CMS data for the signal strengths $\mu(H\text{-decay})$ of the 125-{\rm GeV} Higgs boson and the corresponding predictions for the 125-{\rm GeV} $\widetilde{H}$ in the EW$\nu_R$ model, for Examples 1 and 2 in the {\em Dr.~Jekyll} scenario and Examples 1, 2 and 3 in the {\em Mr.~Hyde} scenario, discussed in the next subsection.

For calculating the EW$\nu_R$ predictions, we have considered the gluon-gluon fusion production channel $(gg\rightarrow \widetilde{H} )$, which is the dominant Higgs-production channel at the LHC. Calculations of the predictions in the EW$\nu_R$ model are explained in Appendix~\ref{sec:hdecay}.
	\item
	Notice that the predicted ranges for $\mu(\widetilde{H} \rightarrow W^+W^-, ~ZZ)$ and $\mu(\widetilde{H} \rightarrow b\bar{b}, \tau\bar{\tau})$ are much narrower than the ranges allowed by the CMS constraints.

	A wider range of $a_{1,1M}$ than shown in Eqs.~(\ref{eq:jekyll1}), (\ref{eq:jekyll2}) is allowed if we impose the constraints on only, say, the $\widetilde{H} \rightarrow W^+W^-$ decay. However, for a part of the $a_{1,1M}$ range that satisfies the constraints on $\mu(\widetilde{H} \rightarrow W^+W^-)$, the constraints on one or more of the other decay channels are not satisfied, and vice versa. The same is true for all the other decay channels. Hence, when we seek the range of $a_{1,1M}$ that satisfies the constraints on {\em all four} of the $\widetilde{H} \rightarrow W^+W^-, ~ZZ, ~b\bar{b}, ~\tau\bar{\tau}$ decay channels, the predicted ranges for the signal strengths of these different channels are correlated. This shortens the range of $a_{1,1M}$ and of the signal strength predictions.
These correlated predictions are shown in Fig.~\ref{fig:hdecaylim}.

	\item
The predicted range for $\mu(\widetilde{H} \rightarrow \gamma\gamma)$ spans $0 - 2.5$, because over the ranges of $m_{H_5^+}$ and $m_{H_5^{++}}$, $\mu(\widetilde{H} \rightarrow \gamma\gamma)$ can easily vary without significantly affecting the predictions for the signal strengths of the other decay channels.

	\item
	{\bf Conclusions from Fig.~\ref{fig:hdecaylim}:}
	We see that in the $\widetilde{H} \sim H_1^0$ scenario the predictions of the EW$\nu_R$ model for the various signal strengths agree with those of the 125-{\rm GeV} Higgs boson, as measured by CMS. Slightly different mixing matrices (though not drastically different ones) can also agree with the ATLAS measurements. Future measurements of partial widths would therefore be required to disentangle this scenario from the SM.

\end{itemize}
\begin{figure}
	\centering
	\includegraphics[width=0.5\textwidth]{hdecay_limits_040715.pdf}
	\caption{\label{fig:hdecaylim}Predictions of $\mu(\widetilde{H} \rightarrow ~b\bar{b}, ~\tau\bar{\tau}, ~\gamma\gamma, ~W^+W^-, ~ZZ)$ in the EW$\nu_R$ model for Examples 1 and 2 in the {\em Dr.~Jekyll} scenario and Examples 1, 2 and 3 in the {\em Mr.~Hyde} scenario, compared with the corresponding best-fit values from CMS \cite{h_ww_122013, h_zz_4l_122013, h_bb_102013, h_tautau_012014}.}
\end{figure}

We now come to the most interesting part of our analysis, the one in which the 125-{\rm GeV} Higgs boson is very unlike the SM Higgs.


\subsection{$\widetilde{H} $ as the 125-GeV Higgs candidate with a sub-dominant SM-like component}\label{sec:hyde}


Can the 125-{\rm GeV} $\widetilde{H}$ in the EW$\nu_R$ model have $H_1^0$ as a subdominant component and still satisfy the experimental constraints on its signal strengths? There are only two CP-even, neutral scalar states other than $H_1^0$: $H_{1M}^0$ and $H_1^{0\prime}$. The analysis explained in section \ref{sec:numerical} revealed 1501 out of 4 million parameter combinations for which $H_1^0$ can, indeed, be a subdominant component in the 125-{\rm GeV} $\widetilde{H}$ while agreeing with the measured signal strengths of the 125-{\rm GeV} Higgs at the LHC, the scenario we earlier referred to as the {\em Mr.~Hyde} scenario.

\subsubsection{Results of the analysis}

\begin{itemize}
	\item
	Table~\ref{tab:hyde} shows 16 out of the 1501 combinations of the parameters.
		\item
	It can be seen from Table~\ref{tab:hyde} that in the {\em Mr.~Hyde} scenario the CMS constraints on the signal strengths can be satisfied even when the scalar couplings satisfy $|\lambda/4\pi|<1$. This means that the scalar particles heavier than the 125-{\rm GeV} Higgs need not be strongly coupled and could potentially be detected as {\em narrow resonances} at the LHC.

	\item
	Similarly, $s_{2M}$ can be larger than in the {\em Dr.~Jekyll} scenario. The mirror fermion masses are given in terms of $s_{2M}$ by
\begin{equation}
	m_{f^M} = \dfrac{g_{MF} s_{2M} v}{\sqrt{2}}\,.
\end{equation}

Consequently, larger (than in the {\em Dr.~Jekyll} scenario) masses of the mirror fermions are allowed by the perturbative limit on their Yukawa couplings.
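Quantitatively, with $v_{2M} = s_{2M}\, v \lesssim 233~{\rm GeV}$ (Table~\ref{tab:v_ranges}) and $g^2_{MF}/4\pi \leq 1.5$, one has
\begin{equation}
	m_{f^M} \;\lesssim\; \sqrt{6\pi}\,\frac{233~{\rm GeV}}{\sqrt{2}} \;\approx\; 715~{\rm GeV}\,,
\end{equation}
which is the upper limit quoted earlier.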
In other words, for a given mass of the mirror fermions, their Yukawa couplings in {\\em Mr.~Hyde} scenario can be smaller than those in {\\em Dr.~Jekyll} scenario.\n\n\t\\item\n\tTo highlight interesting features of this scenario, we consider three examples listed in Table~\\ref{tab:hyde}.\\\\\n\t\t\n\t\\item\nExample 1 (row 1 of Table~\\ref{tab:hyde}): $s_2 = 0.900$, $s_{2M} = 0.270$, $s_M = 0.341$, $\\lambda_1 = -0.481, ~\\lambda_2 = 6.00, ~\\lambda_3 = 1.46, ~\\lambda_4 = 2.99, ~\\lambda_5 = 2, ~\\lambda_8 = -1$,\n\\begin{equation}\\label{eq:hyde1}\n\t\\left( \\begin{array}{l}\t\t\\widetilde{H} \\\\ \\widetilde{H}^\\prime \\\\ \\widetilde{H}^{\\prime\\prime} \\end{array}\\right) = \n\t\\left( \\begin{array}{ccc} \t0.300 \t\t& -0.094 \t\t& -0.949 \\\\\n\t\t\t\t\t\t\t\t0.334 \t\t& -0.921\t \t& -0.197 \\\\\n\t\t\t\t\t\t\t\t0.893\t \t& 0.376 \t\t& 0.246\n\t\\end{array} \\right) \n\t\\left( \\begin{array}{l} \t\tH_1^0\\\\ H_{1M}^0\\\\ H_1^{0\\prime}\\end{array}\\right),\n\\end{equation}\nwith $\\widetilde{H} \\sim H_1^{0\\prime}$, $\\widetilde{H}^\\prime \\sim H_{1M}^0$, $\\widetilde{H}^{\\prime\\prime} \\sim H_1^0$; $m_{\\widetilde{H} } = 125.8 ~{\\rm GeV }$, $m_{\\widetilde{H}^\\prime } = 416 ~{\\rm GeV }$, $m_{\\widetilde{H}^{\\prime\\prime} } = 1100 ~{\\rm GeV }$, $M_R \\lesssim 105 ~{\\rm GeV }$, and $\\mu(\\widetilde{H} \\rightarrow W^+W^-\/ZZ) = 0.72$, $\\mu(\\widetilde{H} \\rightarrow \\gamma\\gamma) = 0.91$, $\\mu(\\widetilde{H} \\rightarrow b\\bar{b}\/\\tau\\bar{\\tau}) = 1.00$.\\\\ \n\n\n\t\\item\nExample 2 (row 2 of Table~\\ref{tab:hyde}): $s_2 = 0.514$, $s_{2M} = 0.841$, $s_M = 0.168$, $\\lambda_1 = 6.15, ~\\lambda_2 = 7.68, ~\\lambda_3 = 8.84, ~\\lambda_4 = -2.131502, ~\\lambda_5 = 5, ~\\lambda_8 = -1$,\n\\begin{equation}\\label{eq:hyde2}\n\t\\left( \\begin{array}{l}\t\t\\widetilde{H} \\\\ \\widetilde{H}^\\prime \\\\ \\widetilde{H}^{\\prime\\prime} \\end{array}\\right) = \n\t\\left( \\begin{array}{ccc} \t0.188 \t\t& 0.091 \t\t& 0.978 \\\\\n\t\t\t\t\t\t\t\t-0.941 \t\t& -0.268\t \t& 0.207 \\\\\n\t\t\t\t\t\t\t\t-0.281\t \t& 0.959 \t\t& -0.035\n\t\\end{array} \\right) \n\t\\left( \\begin{array}{l} \tH_1^0\\\\ H_{1M}^0\\\\ H_1^{0\\prime}\\end{array}\\right),\n\\end{equation}\nwith $\\widetilde{H} \\sim H_1^{0\\prime}$, $\\widetilde{H}^\\prime \\sim H_1^0$, $\\widetilde{H}^{\\prime\\prime} \\sim H_{1M}^{0}$; $m_{\\widetilde{H} } = 125.2 ~{\\rm GeV }$, $m_{\\widetilde{H}^\\prime } = 633 ~{\\rm GeV }$, $m_{\\widetilde{H}^{\\prime\\prime} } = 1427 ~{\\rm GeV }$, $M_R \\lesssim 52.0 ~{\\rm GeV }$, and $\\mu(\\widetilde{H} \\rightarrow W^+W^-\/ZZ) = 0.94$, $\\mu(\\widetilde{H} \\rightarrow \\gamma\\gamma) = 0.89$, $\\mu(\\widetilde{H} \\rightarrow b\\bar{b}\/\\tau\\bar{\\tau}) = 0.65$.\\\\ \n\n\n\t\\item\nExample 3 (row 3 of Table~\\ref{tab:hyde}): $s_2 = 0.401$, $s_{2M} = 0.900$, $s_M = 0.151$, $\\lambda_1 = 4.76, ~\\lambda_2 = 3.41, ~\\lambda_3 = 7.71, ~\\lambda_4 = -1.29, ~\\lambda_5 = 4, ~\\lambda_8 = -1$,\n\\begin{equation}\\label{eq:hyde3}\n\t\\left( \\begin{array}{l}\t\t\\widetilde{H} \\\\ \\widetilde{H}^\\prime \\\\ \\widetilde{H}^{\\prime\\prime} \\end{array}\\right) = \n\t\\left( \\begin{array}{ccc} \t0.187\t\t& 0.115 \t\t& 0.976 \\\\\n\t\t\t\t\t\t\t\t0.922 \t\t& 0.321\t \t\t& -0.215 \\\\\n\t\t\t\t\t\t\t\t0.338\t \t& -0.940 \t\t& 0.046\n\t\\end{array} \\right) \n\t\\left( \\begin{array}{l} \t\tH_1^0\\\\ H_{1M}^0\\\\ H_1^{0\\prime}\\end{array}\\right),\n\\end{equation}\nwith $\\widetilde{H} \\sim H_1^{0\\prime}$, $\\widetilde{H}^\\prime \\sim H_1^0$, 
$\widetilde{H}^{\prime\prime} \sim H_{1M}^{0}$; $m_{\widetilde{H} } = 125.6 ~{\rm GeV}$, $m_{\widetilde{H}^\prime } = 454 ~{\rm GeV}$, $m_{\widetilde{H}^{\prime\prime} } = 959 ~{\rm GeV}$, $M_R \lesssim 46.4 ~{\rm GeV}$, and $\mu(\widetilde{H} \rightarrow W^+W^-/ZZ) = 0.89$, $\mu(\widetilde{H} \rightarrow \gamma\gamma) = 1.09$, $\mu(\widetilde{H} \rightarrow b\bar{b}/\tau\bar{\tau}) = 1.06$.

	\item
	In Example 1, $H_{1M}^0$ is the dominant component in $\widetilde{H}^\prime$, whereas $H_1^0$ is the dominant component in $\widetilde{H}^\prime$ in Examples 2 and 3. Although the mixing matrices in Examples 2 and 3 are not very different, the VEV ratios $s_2$ and $s_{2M}$ are different enough to result in signal strengths that are not very similar (especially for $\widetilde{H} \rightarrow f\bar{f}$). As the partial width of $\widetilde{H} \rightarrow f\bar{f}$ is proportional to $|a_{1,1}/s_2|^2$, it changes rapidly with $s_2$. Also, because we have 6 mirror quarks which contribute to the cross section of gluon-gluon fusion, the production cross section changes dominantly as $\sim |a_{1,1}/s_2+6 ~a_{1,1M}/s_{2M}|^2$. Thus, any change in $a_{1,1M}/s_{2M}$ is amplified in the signal strengths.

	\item
A comparison of the signal strengths for the three examples with the CMS constraints on them can be seen in Fig.~\ref{fig:hdecaylim}. Notice the agreement in the figure between the predicted signal strengths and the CMS constraints. This agreement demonstrates that the {\em SM-like signal strengths of the 125-{\rm GeV} Higgs at the LHC are not sufficient to conclude that it is a SM-like Higgs, or even that it has a dominant SM-like component}.

	\item
Table~\ref{tab:sigXBRhyde} shows the partial widths, branching ratios and the signal strengths for the {\em Mr.~Hyde} scenario in the EW$\nu_R$ model and in the SM. It can be seen that the partial widths in this scenario are very different from the SM (smaller by a factor of $\sim 5$ for the example in the table), but they result in similar signal strengths. Measurements of the partial widths are therefore necessary to be able to experimentally distinguish between the {\em Mr.~Hyde} scenario and the SM.
\end{itemize}
\begin{table*}[!htb]
	\renewcommand{\arraystretch}{1.5}
	\centering
	\caption{\label{tab:sigXBRhyde}Partial width of $H\rightarrow gg$ as a measure of the production cross section, together with the partial widths and branching ratios for various channels, in the SM (for $m_{H_{SM}} = 125.6~{\rm GeV}$ and total width $4.15\times 10^{-3}~{\rm GeV}$) and in the EW$\nu_R$ model for row 3 of Table \ref{tab:hyde}, also given in Eq.~(\ref{eq:hyde3}), where $\widetilde{H} \sim H_1^{0\prime}$ (with $m_{\widetilde{H} } = 125.6~{\rm GeV}$ and total width $1.34\times 10^{-3}~{\rm GeV}$). All the partial widths are given in {\rm GeV}.}
	\begin{tabular}{|c|c|c|c|c|c|c|c|}
	\hline
 ~ & \multicolumn{3}{c|}{SM} & \multicolumn{3}{c|}{EW$\nu_R$ } & \multirow{2}{*}{$\mu = \dfrac{(\sigma_{\widetilde{H} gg} \times BR)_{\text{EW$\nu_R$ }}}{(\sigma_{Hgg} \times BR)_{\text{SM}}}$} \\
 \cline{2-7}
 ~ & $\Gamma_{H\rightarrow gg}$ & Partial Width & BR & $\Gamma_{\widetilde{H} \rightarrow gg}$ & Partial Width & BR & ~ \\
 ~ & $\propto \sigma_{gg\rightarrow H}$ & ~ & ~ & $\propto \sigma_{gg\rightarrow H}$ & ~ & ~ & ~ \\
 \hline
	$\widetilde{H} \rightarrow W^+W^-$ 	& 3.54E-04 		& 9.30E-04 & 2.24E-01 & 5.75E-04 & 1.64E-04 & 1.23E-01 & 0.89 \\
	$\widetilde{H} \rightarrow ZZ$ 		& 3.54E-04 		& 1.16E-04 & 2.79E-02 & 5.75E-04 & 2.04E-05 & 1.53E-02 & 0.89 \\
	$\widetilde{H} \rightarrow b\bar{b}$ 	& 3.54E-04 		& 2.35E-03 & 5.67E-01 & 5.75E-04 & 5.07E-04 & 3.79E-01 & 1.06 \\
	$\widetilde{H} \rightarrow \tau\bar{\tau}$	& 3.54E-04	& 2.58E-04 & 6.22E-02 & 5.75E-04 & 5.42E-05 & 4.06E-02 & 1.06 \\
	$\widetilde{H} \rightarrow \gamma\gamma$		& 3.54E-04	& 9.46E-06 & 2.28E-03 & 5.75E-04 & 2.04E-06 & 1.53E-03 & 1.09 \\
	\hline
	\end{tabular}
\end{table*}

\subsubsection{Remarks on the $H_{1M}^0$ component in $\widetilde{H} $}

A few remarks are in order here:
\begin{itemize}
	\item
	Notice that for all the cases listed in Table~\ref{tab:hyde}, $H_1^{0\prime}$ is the dominant component in the 125-{\rm GeV} $\widetilde{H}$. In all 1501 cases we found, the modulus of the coefficient of $H_{1M}^0$ in the 125-{\rm GeV} $\widetilde{H}$ was $\leq 0.32$.
	\item
	In the gluon-gluon fusion channel $H_{1M}^0$ is produced through triangle loops of the 6 mirror quarks. Therefore, if $H_{1M}^0$ were the dominant component in $\widetilde{H}$, the production cross section of $\widetilde{H}$ could become too high to be compensated by small branching ratios. Thus, it makes sense that the constraints on the signal strengths disfavor $H_{1M}^0$ as the dominant component in $\widetilde{H}$.
	\item
	Even if $H_{1M}^0$ is a sub-dominant component in $\widetilde{H}$, one should not think that it has decoupled from the other two singlets. In other words, the scalar doublet $\Phi_{2M}$ does not really decouple from $\Phi_2$ and $\chi$. This is because:
	\begin{itemize}
		\item
		Even if $H_{1M}^0$ has a small coefficient in $\widetilde{H}$, its production amplitude through the 6 mirror quarks makes a significant contribution to the production cross section of $\widetilde{H}$.
		\item
		The real part of the neutral component of $\Phi_{2M}$ gives rise to $H_{1M}^0$, but its other degrees of freedom also contribute to other physical states such as $H_3^{0,\pm}$, $H_{3M}^{0,\pm}$. These particles contribute to $\widetilde{H} \rightarrow \gamma\gamma$ and the total width.
Hence, they play a role in ensuring that the branching ratios are in the appropriate range to achieve an agreement with the signal strength constraints.\n\t\\end{itemize}\n\t\\item\n\tThus, although $H_{1M}^0$ is a sub-dominant component in $\\widetilde{H} $, the scalar doublet $\\Phi_{2M}$, newly added to the minimal EW$\\nu_R$ model, plays a crucial role in accommodating the 125-{\\rm GeV } Higgs boson in the EW$\\nu_R$ model, in {\\em Mr.~Hyde} as well as {\\em Dr.~Jekyll} scenario.\n\\end{itemize}\n\n\\begin{turnpage}\n\\begin{table*}[!]\n\t\\renewcommand{\\arraystretch}{1.2}\n\t\\centering\n\t\\caption{\\label{tab:hyde}All the masses and the total width of $\\widetilde{H} $ are given in {\\rm GeV }. Fixed parameters as given in Eq. (\\ref{eq:hydemupara}).\n\\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|}\n\\hline\n~ & $s_2$ & $s_{2M}$ & $s_M$ & $\\lambda_1$ & $\\lambda_2$ & $\\lambda_3$ & $\\lambda_4$ & $\\lambda_5$ & $m_{\\widetilde{H} }$ & $m_{\\widetilde{H}^\\prime }$ & $m_{\\widetilde{H}^{\\prime\\prime} }$ & $a_{1,1}$ & $a_{1,1M}$ & $a_{1,1'}$ & $a_{1M,1}$ & $a_{1M,1M}$ & $a_{1M,1'}$ & $a_{1',1}$ & $a_{1',1M}$ & $a_{1',1'}$ & $\\Gamma_{\\widetilde{H} }$ & $\\mu_{WW\/ZZ}$ & $\\mu_{\\gamma\\gamma}$ & $\\mu_{bb\/\\tau\\tau}$ & $M_R\\lesssim$ \\\\\n\\hline\n1 & 0.90 & 0.27 & 0.34 & -0.48 & 6.00 & 1.46 & 2.99 & 2 & 125.8 & 416 & 1100 & 0.301 & -0.094 & -0.949 & 0.334 & -0.922 & 0.197 & 0.893 & 0.376 & 0.246 & 1.66E-03 & 0.719 & 0.914 & 1.002 & 105.0 \\\\\n2 & 0.51 & 0.84 & 0.17 & 6.15 & 7.68 & 8.84 & -2.13 & 5 & 125.2 & 633 & 1427 & 0.189 & 0.091 & 0.978 & -0.941 & -0.268 & 0.207 & -0.281 & 0.959 & -0.035 & 9.61E-04 & 0.941 & 0.895 & 0.647 & 52.0 \\\\\n3 & 0.41 & 0.90 & 0.15 & 4.76 & 3.41 & 7.71 & -1.29 & 4 & 125.6 & 454 & 959 & 0.187 & 0.115 & 0.976 & 0.922 & 0.321 & -0.215 & 0.338 & -0.940 & 0.046 & 1.34E-03 & 0.891 & 1.089 & 1.062 & 46.4 \\\\\n4 & 0.87 & 0.32 & 0.36 & -0.39 & 4.40 & 1.21 & 4.48 & 1 & 126.2 & 420 & 1382 & 0.303 & -0.087 & -0.949 & 0.357 & -0.913 & 0.197 & 0.884 & 0.398 & 0.246 & 1.12E-03 & 0.753 & 1.108 & 0.849 & 111.9 \\\\\n5 & 0.35 & 0.91 & 0.22 & 8.73 & 5.26 & 4.88 & -1.59 & 9 & 126.1 & 617 & 1237 & 0.143 & 0.090 & 0.986 & 0.966 & 0.204 & -0.159 & 0.216 & -0.975 & 0.057 & 1.12E-03 & 0.994 & 0.995 & 0.682 & 69.2 \\\\\n6 & 0.31 & 0.87 & 0.38 & 4.99 & 9.67 & 2.10 & -1.02 & 1 & 126.4 & 435 & 1786 & 0.238 & 0.041 & 0.970 & 0.970 & 0.035 & -0.239 & 0.044 & -0.999 & 0.031 & 2.62E-03 & 0.953 & 1.128 & 1.092 & 118.1 \\\\\n7 & 0.31 & 0.87 & 0.38 & 4.99 & 9.67 & 2.10 & -1.02 & 2 & 126.4 & 435 & 1786 & 0.238 & 0.041 & 0.970 & 0.970 & 0.035 & -0.239 & 0.044 & -0.999 & 0.031 & 2.62E-03 & 0.954 & 0.894 & 1.093 & 118.1 \\\\\n8 & 0.92 & 0.21 & 0.31 & -0.73 & 9.34 & 1.83 & 9.25 & 5 & 125.6 & 412 & 1988 & 0.239 & -0.096 & -0.966 & 0.210 & -0.966 & 0.148 & 0.948 & 0.239 & 0.211 & 2.91E-03 & 0.882 & 1.341 & 0.708 & 95.7 \\\\\n9 & 0.36 & 0.92 & 0.16 & 6.78 & 3.10 & 7.16 & -1.34 & 4 & 126.1 & 501 & 905 & 0.156 & 0.129 & 0.979 & -0.901 & -0.388 & 0.195 & -0.405 & 0.912 & -0.056 & 1.37E-03 & 1.031 & 1.001 & 1.043 & 49.6 \\\\\n10 & 0.92 & 0.19 & 0.35 & -0.34 & 11.69 & 1.17 & 2.79 & 1 & 126.7 & 428 & 1067 & 0.272 & -0.047 & -0.961 & 0.237 & -0.965 & 0.114 & 0.932 & 0.259 & 0.251 & 9.77E-04 & 0.729 & 1.107 & 0.663 & 108.4 \\\\\n11 & 0.95 & 0.12 & 0.29 & -0.46 & 9.71 & 1.68 & 3.37 & 9 & 126.0 & 248 & 1167 & 0.230 & -0.126 & -0.965 & 0.113 & -0.981 & 0.155 & 0.967 & 0.145 & 0.211 & 1.57E-02 & 0.742 & 1.083 & 0.678 & 89.2 \\\\\n12 & 0.38 & 0.89 & 0.26 & 3.47 & 5.24 & 
3.25 & -0.99 & 10 & 125.2 & 409 & 1281 & 0.241 & 0.065 & 0.968 & 0.964 & 0.097 & -0.247 & 0.110 & -0.993 & 0.039 & 1.82E-03 & 0.849 & 0.967 & 1.080 & 79.6 \\\\\n13 & 0.30 & 0.93 & 0.22 & 5.01 & 2.67 & 3.06 & -0.75 & 12 & 125.1 & 415 & 906 & 0.131 & 0.075 & 0.989 & 0.979 & 0.146 & -0.141 & 0.155 & -0.986 & 0.054 & 1.06E-03 & 0.770 & 1.266 & 0.666 & 69.0 \\\\\n14 & 0.36 & 0.89 & 0.28 & 2.34 & 2.56 & 2.04 & -0.52 & 12 & 126.0 & 333 & 890 & 0.211 & 0.071 & 0.975 & 0.971 & 0.104 & -0.217 & 0.117 & -0.992 & 0.047 & 1.74E-03 & 1.006 & 1.267 & 0.984 & 87.0 \\\\\n15 & 0.48 & 0.86 & 0.20 & 2.26 & 2.90 & 4.11 & -0.74 & 6 & 126.0 & 376 & 896 & 0.217 & 0.090 & 0.972 & 0.950 & 0.208 & -0.231 & 0.223 & -0.974 & 0.040 & 1.29E-03 & 1.012 & 0.917 & 0.857 & 60.9 \\\\\n16 & 0.32 & 0.91 & 0.26 & 2.76 & 2.07 & 2.18 & -0.49 & 11 & 126.3 & 323 & 804 & 0.182 & 0.076 & 0.980 & 0.975 & 0.115 & -0.190 & 0.127 & -0.991 & 0.053 & 1.69E-03 & 0.919 & 0.930 & 1.019 & 80.6 \\\\\n \\hline\n\\end{tabular}\n\\end{table*}\n\\end{turnpage}\nBefore concluding this section, we will briefly discuss some indirect constraints on the next heavier scalar $\\widetilde{H}^\\prime $, in both these scenarios.\n\\subsection{The next heavier neutral scalar $\\widetilde{H}^\\prime $}\n\nIn {\\em Dr.~Jekyll} scenario, examples 1 and 2 that we considered have $H_{1M}^0$ as the dominant component in $\\widetilde{H}^\\prime $, which is the next heavier physical scalar after the 125-{\\rm GeV } $\\widetilde{H} $. Here the total width of $\\widetilde{H}^\\prime $ is also greater than its mass, with the scalar coupling $\\lambda_2>4\\pi$. Thus, it is a strongly coupled scalar, which is difficult to detect as a narrow resonance.\n\nIn example 1 of {\\em Mr.~Hyde} scenario, $H_{1M}^0$ is the dominant component in $\\widetilde{H}^\\prime $, while in examples 2 and 3 $H_1^0$ is the dominant component in $\\widetilde{H}^\\prime $. In all 3 examples, $\\widetilde{H}^\\prime $ has a total width $<10\\%$ of its mass.\n\nThis subsection compares the signal strength of $\\widetilde{H}^\\prime \\rightarrow W^+W^-$ and the $\\sigma \\times BR(\\widetilde{H}^\\prime \\rightarrow \\gamma\\gamma)$ with the CMS constraints on SM-like heavy Higgs, for examples having $m_{\\widetilde{H}^\\prime } \\lesssim 600 ~{\\rm GeV }$. These CMS constraints \\cite{CMS_heavy_Higgs_WW, HWWsearch, ATLAS_diphoton_july2014, CMS_diphoton_july2014} assume the Standard model background, whereas, in the EW$\\nu_R$ model, extra processes involving mirror fermions and extra scalars also contribute to the background in addition to the SM processes. The background in this model is therefore expected to be larger than that in the SM. 
A detailed study of this background is beyond the scope of this paper.

Although the SM background does not strictly apply to $\widetilde{H}^\prime$ in the EW$\nu_R$ model, we show how the EW$\nu_R$ predictions compare with the experimental constraints.

For our calculations we computed the total width of $\widetilde{H}^\prime$ using
\begin{eqnarray}\label{eq:widthH1M}
\Gamma_{\widetilde{H}^\prime } = &&~\sum_{i=1}^3\Gamma_{\widetilde{H}^\prime \rightarrow q_i^M\bar{q}_i^M} ~+ ~\sum^3_{j=1}\Gamma_{\widetilde{H}^\prime \rightarrow l_j^M\bar{l}_j^M}\nonumber\\[0.5em]
 &&+ ~\Gamma_{\widetilde{H}^\prime \rightarrow t\bar{t}} ~+ ~\Gamma_{\widetilde{H}^\prime \rightarrow b\bar{b}} \nonumber\\[0.5em]
&& + ~\Gamma_{\widetilde{H}^\prime \rightarrow \tau\bar{\tau}} ~+ ~\Gamma_{\widetilde{H}^\prime \rightarrow c\bar{c}} ~+\Gamma_{\widetilde{H}^\prime \rightarrow W^+ W^-}\nonumber\\[0.5em]
&& + ~\Gamma_{\widetilde{H}^\prime \rightarrow ZZ} ~+ ~\Gamma_{\widetilde{H}^\prime \rightarrow gg} ~+ ~\Gamma_{\widetilde{H}^\prime \rightarrow \gamma \gamma}\,.
\end{eqnarray}
The partial decay widths were calculated using the method illustrated in Appendix~\ref{sec:hdecay}.

\subsubsection{Constraints on the signal strength of $\widetilde{H}^\prime \rightarrow W^+W^-$}
\begin{figure}
	\centering
	\includegraphics[width=0.5\textwidth]{muH1MWW_042915.pdf}
	\caption{\label{fig:h1mww}Predicted signal strength of $\widetilde{H}^\prime \rightarrow W^+W^-$ in 4 example scenarios (blue and purple squares). The results of the CMS search for a SM-like Higgs boson up to 600~{\rm GeV}, with the $1\sigma$ (green band) and $2\sigma$ (yellow band) limits on the SM background (dotted curve) and the CMS data (solid black curve), are also displayed.}
\end{figure}
\begin{itemize}
	\item
	Fig.~\ref{fig:h1mww} shows the signal strength of $\widetilde{H}^\prime \rightarrow W^+W^-$ for Examples 1 and 2 in the {\em Dr.~Jekyll} scenario (blue squares) and Examples 1 and 3 in the {\em Mr.~Hyde} scenario (purple squares). The $1-$ and $2-\sigma$ SM background bands and the CMS data \cite{CMS_heavy_Higgs_WW, HWWsearch} are also displayed. The signal strength for Example 2 in the {\em Mr.~Hyde} scenario is not displayed because $m_{\widetilde{H}^\prime } = 633~{\rm GeV}$ for this example, while the CMS data for this decay channel are only available up to 600~{\rm GeV}.

	\item
	In the figure, notice that the predicted signal strengths for Examples 1 and 2 in the {\em Dr.~Jekyll} case and Example 1 in the {\em Mr.~Hyde} case are within the $\pm 1\sigma$ SM-background bands. Therefore, the CMS data are not conclusive for confirming or ruling out these examples.

	\item
	Example 3 in the {\em Mr.~Hyde} scenario predicts a signal strength $\mu(\widetilde{H}^\prime \rightarrow W^+W^-) \approx 1.3$, which is clearly larger than the SM-background band and the data. $H_1^0$, the SM-like Higgs, is the dominant component in $\widetilde{H}^\prime$ in this example. However, the SM background still does not strictly apply here, since additional background processes can contribute to it.
One example is the production of $W^+W^-$ from two gluons through a box loop of mirror quarks.
\end{itemize}
\subsubsection{Constraints on $\widetilde{H}^\prime \sim H_{1M}^0$ from the $\gamma\gamma$ decay channel}

The constraints on $\sigma(gg\rightarrow \widetilde{H}^\prime )\times BR(\widetilde{H}^\prime \rightarrow \gamma\gamma)$ from CMS \cite{CMS_diphoton_july2014} and ATLAS \cite{ATLAS_diphoton_july2014} are derived under the assumption that the total width of the SM-like heavy Higgs is either $0.1 ~{\rm GeV}$ or $10\%$ of its mass. The total width of $\widetilde{H}^\prime$ in our scenarios does not follow either of these patterns. We observed that the $\sigma(gg\rightarrow\widetilde{H}^\prime )\times BR(\widetilde{H}^\prime \rightarrow \gamma\gamma)$ predictions for all the examples in both scenarios are consistently lower than the CMS and ATLAS constraints.

\subsubsection{A comment on $\widetilde{H}^{\prime\prime} $}

In the examples of the {\em Dr.~Jekyll} scenario considered in this paper, $\widetilde{H}^{\prime\prime} \sim H_1^{0\prime}$, and in the examples of the {\em Mr.~Hyde} scenario that we have considered, $\widetilde{H}^{\prime\prime} \sim H_1^0$ or $H_{1M}^0$. For all these examples, $m_{\widetilde{H}^{\prime\prime} } \gtrsim 600~{\rm GeV}$. So far, the CMS data in Fig.~\ref{fig:h1mww} are not sensitive to signal strengths of the order of the SM predictions in this mass range. A detailed analysis of its signal strengths will be of interest when more data for this higher mass range are available and are analyzed with the full EW$\nu_R$ background.
\subsection{Conclusions about the 125-GeV Higgs candidate in the EW$\nu_R$ model}\label{sec:125conclude}

\begin{itemize}
	\item
	In this section we investigated two very different scenarios for the 125-{\rm GeV} Higgs boson at the LHC from the perspective of the EW$\nu_R$ model:
	\begin{enumerate}
		\item
		{\em Dr.~Jekyll}: $H_1^0$, which is the real part of the SM-like scalar doublet $\Phi_2$, is the dominant component in the 125-{\rm GeV} $\widetilde{H}$, and
		\item
		{\em Mr.~Hyde}: $H_1^0$ is a sub-dominant component in the 125-{\rm GeV} $\widetilde{H}$, a more interesting scenario.
	\end{enumerate}

	\item
	We demonstrated that in both these scenarios the signal strengths of the 125-{\rm GeV} $\widetilde{H}$ in the $W^+W^-, ~ZZ, ~\gamma\gamma, ~b\bar{b}$ and $\tau\bar{\tau}$ decay channels agree with the constraints from the CMS (and also ATLAS) data. Thus, from the perspective of the EW$\nu_R$ model, the present data at the LHC are inconclusive about whether the SM-like $H_1^0$ is {\em the dominant} or {\em a sub-dominant component} in the 125-{\rm GeV} particle. Hence ``the dual nature'' of the 125-{\rm GeV} Higgs in the EW$\nu_R$ model.

	More data, measurements of individual partial widths and studies of the heavier physical scalars in the EW$\nu_R$ model are necessary to distinguish either of these scenarios from the SM Higgs.

	\item
	As expected, the individual partial widths of the 125-{\rm GeV} $\widetilde{H}$ in the {\em Dr.~Jekyll} scenario are not very different from those in the SM. Here, the scalar couplings $|\lambda_2|, ~|\lambda_3|, ~|\lambda_5|$ need to be greater than $4\pi$ to satisfy the constraints on the signal strengths.
This means that the heavier scalars in this scenario tend to be strongly coupled and have large widths.

	A dominant SM-like component in the 125-{\rm GeV} Higgs also leads to $v_2$ (the VEV of $\Phi_2$) being the dominant part of $v$, and to a smaller $v_{2M}$, which gives masses to the mirror fermions. Consequently, the masses of the mirror fermions allowed by the perturbative limit on their Yukawa couplings cannot be much greater than $\sim 120 ~{\rm GeV}$. We adopt a lower limit of 102~{\rm GeV} set by the constraints from LEP3 \cite{pdg}.

	\item
	Hence, if future measurements of the individual decay widths of the 125-{\rm GeV} Higgs result in SM-like widths, then it is more likely to be consistent with the {\em Dr.~Jekyll} scenario. In this case, the heavier scalars would appear not as narrow resonances, but as broad resonances or enhancements of the background in this model.

	Since the SM-like $H_1^0$ is the dominant component in the 125-{\rm GeV} $\widetilde{H}$, the effective theory around this energy looks like the SM, in which the heavier scalars are {\em integrated out}.

	\item
	In contrast, in the {\em Mr.~Hyde} scenario the individual partial widths of the 125-{\rm GeV} $\widetilde{H}$ are very different from those in the SM. All 1501 combinations of the parameters that we found to agree with the experimental $1\sigma$ constraints on the signal strengths contain $H_1^{0\prime}$ as the dominant component in the 125-{\rm GeV} $\widetilde{H}$. The predicted signal strengths of this $\widetilde{H}$ agree with the experimental $1\sigma$ constraints even when the scalar couplings $|\lambda|$ are smaller than $4\pi$. As a result, the heavier scalars in this case are not strongly coupled.

	A dominant $H_{1M}^0$ component in $\widetilde{H}$ is disfavored by the constraints on the signal strengths, due to its large contribution to the cross section of $gg \rightarrow \widetilde{H}$.

	Because $v_{2M}$ is not constrained to be small in this case, the perturbative upper limit on the mirror fermion masses is about 700~{\rm GeV}.

	\item
	Therefore, if the partial widths of the 125-{\rm GeV} $\widetilde{H}$ are measured to be very different from those in the SM, it would point towards the {\em Mr.~Hyde} scenario. The heavier scalars in this case have narrow widths and can be detected as resonances.

	The SM-like $H_1^0$ is the dominant component in one of the heavier scalars, $\widetilde{H}^\prime$ or $\widetilde{H}^{\prime\prime}$. Thus, the effective theory around 125~{\rm GeV} is very different from the SM, with the SM-like $H_1^0$ {\em integrated out} along with the heavier scalars.

	\item
	As can be seen from Eq.~(\ref{eq:hydemupara}), we scanned only a part of the entire parameter space of the EW$\nu_R$ model, by fixing the values or ranges of the parameters. A more thorough scan of the parameter space could be of interest, especially if more data from LHC Run II show any signs of physics beyond the Standard Model.

\end{itemize}

In the next section we will briefly discuss some of the decay properties of the CP-odd neutral spin-0 states $H_3^0$ and $H_{3M}^0$.

\section{Signals of CP-odd spin-zero states}

In addition to the 125-{\rm GeV} $\tilde{H}$, the EW$\nu_R$ model also contains the CP-odd spin-zero states $H_{3}^0$ and $H_{3M}^0$, and two other heavy CP-even spin-zero states, $\widetilde{H}^\prime$ and $\widetilde{H}^{\prime\prime}$.
In this section, we discuss the possibilities of probing the signals of the neutral pseudoscalars in various decay channels at the LHC. To do so, we will investigate the product of the production cross section and the branching ratio, i.e. the absolute signal strength, in the $H_{3,3M}^0\rightarrow \gamma\gamma,~\tau\tau$ channels. We will also calculate, for other channels, the signal strengths ($\mu$) of $H_{3}^0$ and $H_{3M}^0$ relative to the SM Higgs boson $H_{SM}$:
\begin{equation}\label{eq:muy}
\mu \; = \; \dfrac{\sigma(gg \rightarrow H_{3,3M}^0) Br( H_{3,3M}^0 \rightarrow XX )}{\sigma(gg \rightarrow H_{SM}) Br( H_{SM} \rightarrow XX)}\,.
\end{equation}

In this extension of the EW$\nu_R$ model, the (degenerate) masses of the two $SU(2)_D$ custodial triplets are related by
\begin{equation}\label{eq:mH3H3M}
 \dfrac{m_{H_3}^2}{m_{H_{3M}}^2} = \dfrac{1}{1+c_M^2}\,.
\end{equation}
We assume that the neutral states $H_3^0$ and $H_{3M}^0$ obey this relationship, and use the two cases $s_M = 0.168$ and $0.36$. The lighter mass, $m_{H_3^0}$, is scanned over the range $130 - 440\;{\rm GeV}$, whereas the mass $m_{H_{3M}^0}$ of the heavier state then covers $182 - 618\; {\rm GeV}$ and $177 - 601\; {\rm GeV}$, respectively.

\subsection{Ratio of production cross sections}

At the LHC, $H_3^0$ and $H_{3M}^0$ are expected to be produced mainly through gluon-gluon fusion, similar to $H_{SM}$. Using the effective coupling approximation, we have
\begin{equation}\label{eq:csection}
 R=\dfrac{\sigma(gg \rightarrow H_{3,3M}^0)}{\sigma(gg \rightarrow H_{SM})} \approx \dfrac{\Gamma(H_{3,3M}^0 \rightarrow gg)}{\Gamma(H_{SM} \rightarrow gg)}\,.
\end{equation}
$H_{3,3M}^0$ do not couple directly to the gauge bosons $W, ~Z, ~\gamma$, and trilinear scalar couplings such as $ H_{3,3M}^0 H_{3,3M}^+ H_{3,3M}^-,\; H_{3,3M}^0 H_5^+ H_5^-,\; H_{3,3M}^0 H_5^{++} H_5^{--}$ are forbidden by CP conservation. Therefore, only fermionic loops involving the top quark and the mirror quarks contribute to the gluonic decay of $H_3^0$ and $H_{3M}^0$ \cite{qcd_corr3}:
 \begin{eqnarray}\label{eq:H3ogg}
 \Gamma \left( H_{3,3M}^0 \rightarrow gg \right) =&& \dfrac{G_F \alpha_s^2}{16\sqrt{2} \pi^3} m_{H_{3,3M}^0}^3\nonumber\\[1em]
 &\times& \left| \sum\limits_{Q} g_Q^{H_{3,3M}^0} F_Q^{H_{3,3M}^0}(\tau_f)\right|^2\,,
 \end{eqnarray}
 \begin{equation}\label{eq:F_H3o}
 F_Q^{H_{3,3M}^0}(\tau_f) = \tau_f f(\tau_f)\,,
 \end{equation}
 \begin{equation}
 \tau_f = 4 m_f^2/m_{H_{3,3M}^0}^2\,,
 \end{equation}

where $g_Q^{H_{3,3M}^0}$ are the couplings of $H_3^0$ and $H_{3M}^0$ to the top quark and the mirror quarks, listed in Table \ref{table:h_ferm}.

Here, the sum $\sum\limits_{Q}$ runs over the top quark and the mirror quarks. However, the contributions from the mirror quarks can be suppressed because the mirror up-type quarks and the mirror down-type quarks couple to $H_{3,3M}^0$ with opposite signs. In this work we consider, for simplicity, degenerate mirror fermion doublets, i.e. $m_{u^M} = m_{d^M}$. As a result, the contributions from the mirror quarks cancel out. Thus, only the top-quark loop appears in the production cross section of $H_3^0, H_{3M}^0$.
Then, the ratios of production cross section are given by\n \\begin{equation}\\label{eq:csectionapprox}\n R_{H_3^0}= \\tau_t^2 \\dfrac{\\left|\\tan\\theta_M f(\\tau_t)\\right|^2}{\\left|\\tau_t+(\\tau_t-1)f(\\tau_t)\\right|^2}\n \\end{equation}\nfor $H_3^0$, and\n \\begin{equation}\\label{eq:csection3oM}\n R_{H_{3M}^0}= \\tau_t^2 \\dfrac{\\left|s_{2M}f(\\tau_t)\\right|^2}{\\left|s_2(\\tau_t+(\\tau_t-1)f(\\tau_t))\\right|^2}\n \\end{equation}\nfor $H_{3M}^0$.\n\n\\subsection{ In \\;$ \\gamma\\gamma$ channel}\n\nATLAS \\cite{ATLAS_diphoton_july2014} and CMS \\cite{CMS_diphoton_july2014} have recently reported their results of the search for narrow resonances in spin-0 state to diphoton decay channel, up to $600~{\\rm GeV }$ for CMS and $840~{\\rm GeV }$ for ATLAS. Both the reports make certain assumptions about the total width of the decaying spin-0 state. They present the upper limit on the production cross section times branching ratio for this channel at $ 95\\% $ confidence level. So far, no significant excess has been found, except for the two $2\\sigma$ resonances above the background at $m = 201\\;{\\rm GeV }$ and $m = 530\\;{\\rm GeV }$ in the ATLAS analysis. We compare our predictions in $\\gamma\\gamma$ channel with those results, even though the assumptions about the total width of resonances are not generally applicable to the EW$\\nu_R$ ~model.\n\nSimilar to the gluonic decay, only fermionic loops contribute to the partial width of $H_{3,3M}^0 \\rightarrow \\gamma \\gamma$, given by \\cite{qcd_corr3}\n\\begin{equation}\\label{eq:H3ogamma}\n\\Gamma\\left( H_{3,3M}^0\\rightarrow \\gamma\\gamma\\right) = \\dfrac{g^2\\;\\alpha^2\\; m^3_{H_{3,3M}^0}}{256\\;\\pi\\; m_W^2}\\left|\\sum\\limits_{i} ~ N_{ci}\\; e_i^2 \\; g_i \\; F_i^{H_{3,3M}^0} \\right|^2\\,.\n\\end{equation}\nHere, $i=$ top quark, six mirror quarks, and three charged mirror leptons. The total widths of $H_{3,3M}^0$ are calculated by summing all the partial widths.\n\\begin{eqnarray}\\label{eq:width33M}\n\\Gamma_{H_{3,3M}}=&&~\\Gamma(H_{3,3M}^0\\rightarrow\\gamma\\gamma)+\\Gamma(H_{3,3M}^0\\rightarrow gg)\\nonumber\\\\\n &&+\\Gamma(H_{3,3M}^0\\rightarrow W^+W^-)+\\Gamma(H_{3,3M}^0\\rightarrow ZZ)\\nonumber\\\\\n &&+\\Gamma(H_{3,3M}^0\\rightarrow\\tau\\bar{\\tau})+\\Gamma(H_{3,3M}^0\\rightarrow t\\bar{t})\\nonumber\\\\\n &&+\\Gamma(H_{3,3M}^0\\rightarrow c\\bar{c})+\\Gamma(H_{3,3M}^0\\rightarrow b\\bar{b})\\nonumber\\\\\n &&+ \\sum\\limits_{i=1}^{6}\\Gamma(H_{3,3M}^0\\rightarrow q^M_i\\bar{q}^M_i)\\nonumber\\\\\n && +\\sum\\limits_{j=1}^{3}\\Gamma(H_{3,3M}^0\\rightarrow l^M_j\\bar{l}^M_j)\n\\end{eqnarray}\nThe branching ratio of $H_{3,3M}^0\\rightarrow\\gamma\\gamma$ is\n\\begin{equation}\n \\label{eq:Branching33M}\n Br(H_{3,3M}^0\\rightarrow\\gamma\\gamma)=\\dfrac{\\Gamma(H_{3,3M}^0\\rightarrow \\gamma\\gamma)}{\\Gamma_{H_{3,3M}^0}}\n\\end{equation}\nThe absolute signal strength of $H_{3,3M}^0\\rightarrow\\gamma\\gamma$ is defined as\\\\\n\\begin{flalign*}\n\\label{eq:Xbranching33M}\n \\sigma \\times BR(H_{3,3M}^0\\rightarrow\\gamma\\gamma) = R\\times\\sigma(gg \\rightarrow H_{SM})\\\\\n \\times Br(H_{3,3M}^0 \\rightarrow \\gamma \\gamma)\\\\\n\\end{flalign*}\nAt any particular mass, $R_{H_{3,3M}^0}$ and $Br(H_{3,3M}^0 \\rightarrow \\gamma\\gamma)$ are calculated directly, while $\\sigma(gg \\rightarrow H_{SM})$ is taken from the handbook of Higgs cross section \\cite{HXreport3}. To be consistent with the previous analysis, we also provide two scenarios which correspond to the dual nature of the 125-{\\rm GeV } scalar. 
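Before specifying these two scenarios, we note how the ingredients above combine at a single mass point. The following is a rough numerical sketch (with placeholder inputs, not the values used in our scan), assuming the standard one-loop function $f(\\tau)=\\arcsin^2(1\/\\sqrt{\\tau})$ for $\\tau\\geq 1$ and $t_M=s_M\/c_M$; the SM cross section and the branching ratio below are stand-ins for the values from \\cite{HXreport3} and Eq.~(\\ref{eq:Branching33M}).\n\\begin{verbatim}\nimport numpy as np\n\n# Placeholder inputs (illustrative only, not the scan values)\nm_t, m_H3 = 173.0, 300.0                 # GeV\ns_M = 0.36                               # Dr. Jekyll example below\nt_M = s_M / np.sqrt(1.0 - s_M**2)        # assumes t_M = s_M / c_M\nsigma_SM = 3.6                           # pb, stand-in for sigma(gg -> H_SM) at this mass\nBr_gaga = 1.0e-4                         # stand-in for Br(H_3^0 -> gamma gamma)\n\ndef f(tau):\n    # one-loop function for tau = 4 m_f^2 / m_H^2 >= 1 (mass below the 2 m_f threshold)\n    return np.arcsin(1.0 / np.sqrt(tau))**2\n\ntau_t = 4.0 * m_t**2 / m_H3**2\n# Eq. (eq:csectionapprox): ratio of production cross sections, top loop only\nR = tau_t**2 * abs(t_M * f(tau_t))**2 / abs(tau_t + (tau_t - 1.0) * f(tau_t))**2\nprint(R, R * sigma_SM * Br_gaga)         # absolute signal strength in pb\n\\end{verbatim}\nIn the actual analysis, $Br(H_{3,3M}^0\\rightarrow\\gamma\\gamma)$ is obtained from the full set of partial widths in Eq.~(\\ref{eq:width33M}), and the procedure is repeated over the mass ranges quoted above.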
For illustrative purposes, we consider the up- and down-type members of the mirror quark doublets to have degenerate masses. The first two generations of mirror quarks and all charged mirror leptons have the same mass, $m_{q^M_1} = m_{q^M_2} = m_{l^M} = 102 \\;{\\rm GeV }$. The heaviest mirror quark generation has a mass $m_{q^M_3} = 120 \\;{\\rm GeV }$. Masses of all right-handed neutrinos are set at $M_R = 50 \\;{\\rm GeV }$. \n \\begin{itemize}\n \\item\nIn the case of {\\em Dr.~Jekyll}, as $\\widetilde{H} \\sim H_1^0$, the mixing angles are $s_2 = 0.92,\\;s_{2M} = 0.16,\\;s_M = 0.36$, which corresponds to the first example in Section~\\ref{sec:jekyll}. In the plot below, we present the dependence of the production cross section times branching ratio of $H_{3,3M}$ on mass. Moreover, the next heavy CP-even state is $\\tilde{H}^{\\prime}$ with a mass of $m_{\\widetilde{H}^\\prime }=420\\;{\\rm GeV }$, so we also include the production cross section times branching ratio of $\\widetilde{H}^\\prime \\rightarrow\\gamma\\gamma$. \n\n \\begin{figure}[H]\n \\includegraphics[scale=0.45]{XBranching_gamma_H1oDominant.pdf}\n \\caption{\\label{fig:gamma_H1oDominant} {\\small The production cross section times branching ratio in the $\\gamma\\gamma$ channel of $H_3^0$ and $H_{3M}^0$, in the {\\em Dr.~Jekyll} scenario. $m_{H_3^0}=130 - 440 \\;{\\rm GeV }$, $m_{H_{3M}^0}=177 - 601\\;{\\rm GeV }$ } }\n \\end{figure}\n \\item\n In the other case, {\\em Mr.~Hyde}, in which $H_1^0$ is sub-dominant in $\\widetilde{H} $, the parameters are chosen as $s_2=0.514,\\;s_{2M}=0.841,\\;s_M=0.168$, corresponding to example 2 in Section~\\ref{sec:hyde}. In this scenario, all the heavy CP-even states are above $600~{\\rm GeV }$, so we present only the dependence of the production cross section times branching ratio of $H_{3,3M}\\rightarrow\\gamma\\gamma$. \n \\begin{figure}[H]\n \\includegraphics[scale=0.45]{XBranching_gamma_H1oSubdominant.pdf}\n \\caption{\\label{fig:gamma_H1oSubdominant} {\\small The production cross section times branching ratio in the $\\gamma\\gamma$ channel of $H_3^0$ and $H_{3M}^0$, in the {\\em Mr.~Hyde} scenario. $m_{H_3^0}=130 - 440 \\;{\\rm GeV }$, $m_{H_{3M}^0}=182 - 618\\;{\\rm GeV }$ } }\n \\end{figure}\n\\end{itemize}\nRemarks:\n\\begin{enumerate}\n \\item\n Below the fermionic thresholds, $2m_{q^M_{1,2}}$ and $2m_{l^M}$, the signal strength for $H_{3,3M}^0$ can be larger than the upper limits set by ATLAS and CMS for a heavy SM-like scalar. To be conservative, we can exclude $m_{H_{3}^0} \\lesssim 150~{\\rm GeV }$ and $m_{H_{3M}^0} \\lesssim 205~{\\rm GeV }$ (FIG.\\ref{fig:gamma_H1oDominant}) and $\\lesssim 210~{\\rm GeV }$ (FIG.\\ref{fig:gamma_H1oSubdominant}). However, for some other sets of parameters, the signal strengths could be well below the upper limit set by ATLAS and CMS.\n \\item\n As $m_{H_3^0}$ increases, more mirror fermionic decay channels are kinematically allowed. At the same time, the production cross section decreases. The branching ratios of $H_{3,3M}^0\\rightarrow\\gamma\\gamma$ therefore decrease rapidly beyond the thresholds, $2m_{q^M_{1,2}}, \\;2m_{q^M_3},\\;2m_{l^M},\\;2m_t$. As a result, the signal strengths in both cases are below the experimental upper limits.\n \\item\n At the same mass, the signal strengths of the CP-odd spin-0 states are generally larger than those of the CP-even scalars. Consequently, it is easier to detect the CP-odd spin-0 states than the CP-even ones.
\n\\end{enumerate}\n\n\\subsection{In $\\tau\\bar{\\tau}$ channel}\n\nRecently, ATLAS \\cite{ATLAS_tau} and CMS \\cite{CMS_tau} also reported their new results in the $\\tau\\bar{\\tau}$ channel. Although the main aim of their reports is to look for the MSSM neutral bosons, they provide a model-independent limit on the production cross section times branching ratio of a general spin-zero state. Therefore, in this part we investigate the signal strength of $H_{3,3M}^0\\rightarrow\\tau\\bar{\\tau}$ with the two sets of parameters considered in the previous subsection.\n\n \\begin{itemize}\n \\item In the {\\em Dr.~Jekyll} case. \n \\begin{figure}[H]\n \\includegraphics[scale=0.45]{XBranching_tau_H1oDominant.pdf}\n \\caption{\\label{fig:tau_H1oDominant} {\\small The production cross section times branching ratio in the $\\tau\\bar{\\tau}$ channel of $H_3^0$ and $H_{3M}^0$, in the {\\em Dr.~Jekyll} scenario. $m_{H_3^0}=130 - 440 \\;{\\rm GeV }$, $m_{H_{3M}^0}=177 - 601\\;{\\rm GeV }$ } }\n \\end{figure}\n \\item In the {\\em Mr.~Hyde} case.\n \\begin{figure}[H]\n \\includegraphics[scale=0.45]{XBranching_tau_H1oSubdominant.pdf}\n \\caption{\\label{fig:tau_H1oSubdominant} {\\small The production cross section times branching ratio in the $\\tau\\bar{\\tau}$ channel of $H_3^0$ and $H_{3M}^0$, in the {\\em Mr.~Hyde} scenario. $m_{H_3^0}=130 - 440 \\;{\\rm GeV }$, $m_{H_{3M}^0}=182 - 618\\;{\\rm GeV }$ }} \n \\end{figure}\n\\end{itemize}\nRemarks:\n\\begin{enumerate}\n \\item\n In both cases, the signal strengths can exceed the upper limits from ATLAS and CMS below the threshold of twice the mirror fermion mass, which is $204\\;{\\rm GeV }$. This happens because, unlike for the SM Higgs boson, decay processes such as $H_{3,3M}^0\\rightarrow WW\/ZZ$ occur only at the loop level, and their partial widths are relatively small. Consequently, the branching ratios of $H_{3,3M}^0\\rightarrow\\tau\\bar{\\tau}$ are not as small as in the SM. Hence, the signal strength for this channel is one order of magnitude above the upper limits set by ATLAS and CMS. However, we believe that in a wide range of the parameter space of the $EW\\nu_R$ model it is still possible to agree with the limits in this mass region. \n\t\\item\n\tAfter passing the first threshold, the signal strengths of both $H_{3,3M}^0\\rightarrow\\tau\\bar{\\tau}$ decrease rapidly, because the total widths $\\Gamma_{H_{3,3M}^0}$ are dominated by the fermionic decays. They then reach another peak at $2m_t$. Over the entire region after the first threshold, the signal strengths for both $H_{3,3M}^0$ are below the limits.\n \\end{enumerate}\n\n\\subsection{In $WW\/ZZ$ channels}\n\nIn this model, the pseudo-scalars $H_{3,3M}^0$ do not couple directly to $W^\\pm$ and $Z$. The decay processes $H_{3,3M}^0 \\rightarrow WW\/ZZ$ occur only at the loop level, so these processes are expected to be highly suppressed. To demonstrate this, we calculate the signal strengths ($\\mu$) for $H_{3}^0\\rightarrow WW\/ZZ$ relative to $H_{SM}\\rightarrow WW\/ZZ$, with $\\mu$ as defined in Eq.~(\\ref{eq:muy}): \n\\begin{eqnarray}\\label{eq:muyVV}\n\\mu_{VV} &=& \\dfrac{\\sigma(gg \\rightarrow H_3^0) Br(H_3^0 \\rightarrow VV)}{\\sigma(gg \\rightarrow H_{SM}) Br(H_{SM} \\rightarrow VV)}\\nonumber\\\\[1em]\n&=& R_{H_3^0}\\dfrac {Br(H_3^0 \\rightarrow VV)}{ Br(H_{SM} \\rightarrow VV)}\\,,\n\\end{eqnarray}\nwhere $V=W,Z$. Once again, $Br(H_{SM} \\rightarrow VV)$ is taken from \\cite{HXreport3}, while the ratio of production cross sections $R_{H_3^0}$ in Eq.~(\\ref{eq:csectionapprox}) and $Br(H_3^0 \\rightarrow VV)$ are calculated directly.
At one-loop order, the partial decay widths for these processes are \\cite{GunionKao}\n\\begin{itemize}\n\\item $H_3^0 \\rightarrow WW$\n \\begin{equation}\\label{eq:H3oWW}\n \\Gamma( H_3^0 \\rightarrow WW ) = \\dfrac{3^2g^6(m_{H_3^0}^2 - 4m_{W}^2)^{3\/2}}{2^{14}\\pi^5m_W^2}|\\textit{A}_{WW}|^2\n \\end{equation}\n \\begin{equation}\\label{eq:Aww}\n\\textit{A}_{WW} = m_t^2t_MA_t^W - m_b^2t_M^2A_b^W + \\dfrac{m_{l^M}^2}{\\sqrt{2}}A_{l^M}^W + \\dfrac{M_R^2}{\\sqrt{2}c_M}A_{\\nu_R}^W\\,;\n \\end{equation}\n\\item $H_3^0\\rightarrow ZZ$\n \\begin{equation}\\label{eq:H3ozz}\n \\Gamma( H_3^0 \\rightarrow ZZ ) = \\dfrac{3^2g^6(m_{H_3^0}^2 - 4m_{Z}^2)^{3\/2}}{2^{15}\\pi^5m_W^2}|\\textit{A}_{ZZ}|^2\n \\end{equation}\n \\begin{equation}\\label{eq:Azz}\n\\textit{A}_{ZZ} = m_t^2t_MA_t^Z - m_b^2t_MA_b^Z + \\dfrac{m_{l^M}^2}{\\sqrt{2}}A_{l^M}^Z\\,.\n \\end{equation}\n\\end{itemize}\nHere, $A_f^{W\/Z}$ are the amplitudes with top and bottom quarks, charged mirror leptons, and right-handed neutrinos in the loops; their explicit forms are given in Appendix~\\ref{sec:AH3}.\n\n \\begin{figure}[H] \n \\includegraphics[scale=0.45]{MuyH3oHsm_WWZZ_170_450.pdf}\n \\caption{\\label{fig:muyWW}{\\small Ratio of the signal strengths of $H_3^0$ to those of $H_{SM}$ in the $WW\/ZZ$ channels}}\n \\end{figure}\nAs expected, the signal strengths of $H_3^0$ in the vector boson channels are highly suppressed. \n\n\\subsection{$(H_{3}^0 \\rightarrow \\bar{l}^M l^M )\/ (H_{SM}\\rightarrow WW)$}\n\nAt the LHC, $H\\rightarrow WW$ is an important channel in the search for new scalars in the high-mass region. From Fig.~\\ref{fig:muyWW}, we see that the ratio of the signal strengths is strongly suppressed for the pseudoscalar $H_3^0$ in this model. However, $H_3^0$ can also decay through $H_{3}^0 \\rightarrow \\bar{l}^M l^M \\rightarrow \\bar{l} \\phi_S l \\phi_S^* $, where $\\phi_S$ is invisible and appears as missing transverse energy ($E_T^M$). Thus, the signal of this process consists of two leptons and missing $E_T^{M}$, which mimics the signal for the $WW$ decay of a scalar such as the SM Higgs boson: $H_{SM} \\rightarrow W^+ W^- \\rightarrow \\bar{l} \\nu l \\bar{\\nu}$. For the case $m_{l^M} = 102~{\\rm GeV }$ and $m_H = 210 - 500\\;{\\rm GeV }$, both mirror lepton intermediate states are on-shell.\n\\begin{equation}\\label{eq:H3olmlm}\n\\Gamma \\left(H_{3}^0 \\rightarrow \\bar{l}^M l^M\\right) = \\frac{g^2m_{H_3^0}^3t_M^2}{256\\pi ~m_W^2} \\tau_{l^M}\\sqrt{1 - \\tau_{l^M}}\\,, \n\\end{equation}\nwhere $\\tau_{l^M}= 4m_{l^M}^2\/m_{H_3^0}^2$, and\n\\begin{equation}\\label{eq:lMlphiS}\n \\Gamma \\left(l^M\\rightarrow l\\phi_S^*\\right) = g_{sl}^2\\dfrac{m_{l^M}}{32\\pi}\\,.\n\\end{equation} \nSimilar to the previous comparisons, we define the signal strength\n\\begin{equation}\\label{eq:RH3oHsm} \n \\mu_{l\\bar{l}}=R_{H_3^0} \\frac{Br\\left(H_{3}^0 \\rightarrow \\bar{l}^M l^M\\right) Br\\left(l^M \\rightarrow l\\phi_S^* \\right)}{Br \\left( H_{SM} \\rightarrow W^+ W^- \\right) Br\\left( W\\rightarrow l\\bar{\\nu}\\right)}\\,,\n\\end{equation}\nwhere $Br\\left( W^-\\rightarrow l\\bar{\\nu}\\right) = 0.108 $ \\cite{pdg}.
With $M_R=50\\;{\\rm GeV }$, $l^M\\rightarrow \\nu_R\\bar{\\nu} l$ is kinematically possible.\n\\begin{equation}\\label{eq:BrlMPhiS}\n Br\\left(l^M \\rightarrow l\\phi_S^* \\right) = \\dfrac{\\Gamma \\left(l^M\\rightarrow l\\phi_S^*\\right)}{\\Gamma \\left(l^M\\rightarrow l\\phi_S^*\\right)+\\Gamma \\left(l^M\\rightarrow \\nu_R\\bar{\\nu} l\\right)}\\,.\n\\end{equation}\nIt is clear that the ratio $\\mu_{l\\bar{l}}$ depends on the value of $g_{sl}$. The search for a high-mass Higgs boson in $H\\rightarrow W^+W^-\\rightarrow l\\bar{\\nu} \\bar{l}\\nu$ was carried out at both ATLAS (in the range $260 - 1000 \\;{\\rm GeV }$ \\cite{ATLAS-CONF-2013-067}) and CMS (in the range $145 - 1000\\;{\\rm GeV }$) \\cite{CMS_heavy_Higgs_WW}. No excess over the background was detected in the entire mass region that was scanned. \n\nThe observed $95\\%$ CL upper limit on the ratio of the signal strengths is below $\\mu=1$ all the way up to $m_H\\approx 600\\;{\\rm GeV }$ \\cite{CMS_heavy_Higgs_WW}. Therefore, we can set the upper limits $\\mu_{l\\bar{l}}\\leq 1$ and hence $g_{sl}\\leq 10^{-3}$. \n\n\n\\section{Conclusions}\n\nThe 125-{\\rm GeV } object has presented us with a challenge to understand its nature: Is it really the SM Higgs boson as it appears to be, or is it simply an {\\em impostor}? So far, the only data available to us are given in terms of the so-called signal strengths, $\\mu$, as defined in Eq.~(\\ref{eq:mudef}). The signal strengths for the various decay modes of the SM Higgs boson are consistent with the data. However, it turns out that various BSM models might also be consistent with experiment based solely on such signal strengths. This is what we have shown in this paper in the context of the $EW\\nu_R$ model in its extended version.\n\nAs we have described at the beginning of our paper, the $EW\\nu_R$ model \\cite{pqnur} was invented with the purpose of realizing the seesaw mechanism at the electroweak scale instead of some GUT scale. As such, one can {\\em directly} test the seesaw mechanism at the LHC and at the proposed ILC through the physics associated with the model, such as lepton-number-violating production of electroweak-scale Majorana right-handed neutrinos and, as is the subject of the present paper, Higgs physics beyond that of the SM. \n\nThe extended $EW\\nu_R$ model discussed in this paper contains three neutral CP-even mass eigenstates, $\\widetilde{H} $, $\\widetilde{H}^\\prime $ and $\\widetilde{H}^{\\prime\\prime} $, which are linear combinations of $H_{1}^{0}$ and $H_{1M}^0$, which couple to SM fermions and mirror fermions, respectively, and $H_1^{0\\prime}$, which couples only to the $\\nu_R$'s. The notation for the mass eigenstates $\\widetilde{H} $, $\\widetilde{H}^\\prime $ and $\\widetilde{H}^{\\prime\\prime} $ refers to states with increasing masses. We scanned the parameter space with the following requirements in mind: 1) The mass of the lightest state should be $\\sim 125 \\, {\\rm GeV }$; 2) The mixing angles should be such that the signal strengths fit the data from CMS and ATLAS. We found many combinations of $H_{1}^{0}$, $H_{1M}^0$ and $H_1^{0\\prime}$ which satisfy those requirements. What is interesting here is the dual nature of the 125-{\\rm GeV } scalar that we uncovered in our scan of the parameter space: 1) There are states with the SM-like scalar $H_{1}^{0}$ as a dominant component; 2) There are states with $H_1^{0\\prime}$ as a dominant component, which are thus {\\em very unlike} the SM Higgs boson. In other words, these states are impostors.
All of these states, and we are far from exhausting the parameter space, yield signal strengths compatible with the CMS and ATLAS data.\n\nIt goes without saying that detailed studies of various properties of the 125-{\\rm GeV } scalar, such as the total width, partial widths, ..., are needed to determine whether it is indeed the SM Higgs boson or simply an {\\em impostor}. Of course, a discovery of one or several extra scalars would definitely point toward physics beyond the SM. In the extended $EW\\nu_R$ model, although the aforementioned 125-{\\rm GeV }-like scalars all yield comparable signal strengths, details such as production cross sections, branching ratios, total widths and partial widths can differ quite a bit from one another. States with $H_{1}^{0}$ as a dominant component ({\\em Dr.~Jekyll}) tend to behave more like the SM Higgs boson, while the scenario in which $H_{1}^{0}$ is a sub-dominant component ({\\em Mr.~Hyde}) is very different. In other words, we may have discovered a scalar which is involved in electroweak symmetry breaking but which {\\em may not be} the SM Higgs boson. \n\nIn the absence of direct measurements of decay widths, how could one tell {\\em Dr.~Jekyll} from {\\em Mr.~Hyde}? First, it goes without saying that a discovery of one or more extra scalars will definitely point to BSM physics. In the context of the EW$\\nu_R$ model, if the extra states are {\\em broad} and strongly interacting, we would be dealing with the {\\em Dr.~Jekyll} scenario, which is more SM-like in terms of the 125-{\\rm GeV } scalar. On the other hand, if the extra states are narrow resonances, we would be facing a truly interesting scenario, that of {\\em Mr.~Hyde}, in which the 125-{\\rm GeV } scalar is truly an {\\em impostor}. Direct measurements of decay widths would then confirm whether or not this is the case.\n\n\n\n\n\\section{Acknowledgements}\n\nWe would like to thank Giuseppe Cerati for providing results of the search for an SM-like heavy Higgs boson at CMS. This work was supported by US DOE grant DE-FG02-97ER41027. ASK was supported by the Graduate Fellowship of the Department of Physics, University of Virginia.\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\\label{sec:Intro}\nIn recent years, distributionally robust optimization formulations\nbased on Wasserstein distances have sparked a substantial amount of\ninterest in the literature. One reason for this interest, as\ndemonstrated by a range of examples in statistical learning\nand operations research,\nis that these formulations provide a flexible way to quantify and\nhedge against the impact of model misspecification.
Motivated by those\napplications, this paper aims to understand the fundamental\nstatistical properties, such as asymptotic normality of the\ndistributionally robust estimators and the associated confidence\nregions deemed optimal in a suitable sense to be described shortly.\n\nBefore providing a review of Wasserstein distributionally robust\noptimization and its connections to several areas, such as artificial\nintelligence, machine learning and operations research, we set the\nstage by first introducing the elements of a typical data-driven\ndistributionally robust estimation problem.\n\n Suppose that\n$\\{X_k: 1 \\leq k \\leq n\\} \\subset \\mathbb{R}^m$ are independent and\nidentically distributed samples\nfrom an unknown distribution $P_\\ast.$ A typical non-robust stochastic\noptimization formulation informed by $P_{n}$ focuses on minimizing\nempirical expected loss of the form,\n$E_{P_{n}}\\left\\{ \\ell (X;\\beta )\\right\\} = n^{-1} \\sum_{i=1}^n\n\\ell(X_i;\\beta)$, over the parameter choices\n$\\beta \\in B \\subseteq \\mathbb{R}^d.$ In this paper, we take $B$ to be\na closed, convex subset of $\\mathbb{R}^d.$\nLet the empirical risk minimization estimators be\n\\begin{equation}\n \\beta _{n}^{ERM}\\in \\text{arg}\\min_{\\beta \\in B}E_{P_{n}}\\left\\{ \\ell (X;\\beta\n )\\right\\}. \\label{ERM_1}\n\\end{equation}%\n\n\nOn the other hand, a distributionally robust formulation recognizes\nthe distributional uncertainty inherent in $P_{n}$ being a noisy\nrepresentation of an unknown distribution. Therefore, it enriches the\nempirical risk minimization (\\ref{ERM_1}) by considering an estimator\nof the form,\n\\begin{equation}\n \\beta _{n}^{DRO}(\\delta )\\in \\arg \\min_{\\beta \\in B}\\sup_{P\\,\\in \\,\\mathcal{U}%\n _{\\delta }(P_{n})}E_{P}\\left\\{ \\ell (X;\\beta )\\right\\} ,\n\\label{Wass_DRO_A}\n\\end{equation}%\nwhere the set $\\mathcal{U}_{\\delta }(P_{n})$ is called the\ndistributional uncertainty region and $\\delta $ is the size of the\ndistributional uncertainty. Here, given a measurable function\n$f(\\cdot),$ the notation $E_P\\{f(X)\\}$ denotes expectation\nwith respect to a probability distribution $P.$ Wasserstein\ndistributionally robust formulations advocate choosing,\n\\begin{equation*}\n \\mathcal{U}_{\\delta }(P_{n})=\\{P \\in \\mathcal{P}(\\Omega):\n W(P_{n},P)\\leq \\delta^{1\/2}\\},\n\\end{equation*}%\nwhere $W(P_{n},P)$ is the Wasserstein distance between distributions\n$P_{n}$ and $P$ defined below, and $\\mathcal{P}(\\Omega)$ is the set of probability\ndistributions supported on a closed set\n$\\Omega \\subseteq \\mathbb{R}^m.$\n\\begin{definition}[Wasserstein distances]\n \\textnormal{Given a lower semicontinuous function\n $c:\\Omega \\times \\Omega \\rightarrow [0,\\infty],$ the optimal\n transport cost $D_c(P,Q)$ between any two distributions\n $P,Q \\in \\mathcal{P}(\\Omega)$ is defined as,\n \\begin{align*}\n D_c(P,Q) = \\min_{\\pi \\in \\Pi(P,Q)} E_{\\pi}\\left\\{c(X,Y)\\right\\}\n \\end{align*}\n where $\\Pi(P,Q)$ denotes the set of all joint distributions of the\n random vector $(X,Y)$ with marginal distributions $P$ and $Q,$\n respectively. 
If we specifically take $c(x,y) = d(x,y)^2,$ where\n $d(\\cdot)$ is a metric, we obtain the Wasserstein distance of\n order 2 by setting\n $\n W(P,Q) = \\left\\{D_c(P,Q)\\right\\}^{1\/2}.\n $\n }\n \\label{defn:WD}\n\\end{definition}\n\n\nThe quantity $W(P_{n},P)$ may be interpreted as the cheapest way to\ntransport mass from the distribution $P_{n}$ to the mass of another\nprobability distribution $P,$ while measuring the cost of\ntransportation from location $x \\in \\Omega$ to location $y \\in \\Omega$\nin terms of the squared distance between $x$ and $y.$ In this paper,\nwe shall work with Wasserstein distances of order 2, which explains\nwhy it is natural to use $\\delta^{1\/2}$ to specify the distributional\nuncertainty region $\\mathcal{U}_{\\delta }(P_{n})$ as above. Since\n$W(P_n,P_n) = 0,$ the empirical risk minimizing estimator in\n(\\ref{ERM_1}) can be seen as a special case of the formulation\n(\\ref{Wass_DRO_A}) by setting $\\delta = 0.$\n\n The need for selecting model parameters or making decisions using a data driven approach which is robust to model uncertainties has sparked a rapidly growing literature on Wasserstein distributionally robust optimization, via formulations such as (\\ref{Wass_DRO_A}); see, for example,\n\\citet{MohajerinEsfahani2017,ZHAO2018262,blanchet2016quantifying,gao2016distributionally,gao2018robust,wolfram2018}\nfor applications in operations research and\n\\cite{yang2017convex,yang2018Wasserstein} for examples\nspecifically in stochastic control.\n\nIn principle, the min-max formulation (\\ref{Wass_DRO_A}) is\n``distributionally robust'' in the\nsense that its solution guarantees a uniform performance over all\nprobability distributions in $%\n\\mathcal{U}_{\\delta_{n}}(P_{n}).$ Roughly speaking, for every choice\nof parameter or decision $\\beta,$ the min-max game type formulation in\n(\\ref{Wass_DRO_A}) introduces an adversary that chooses the most\nadversarial distribution from a class of distributions\n$\\mathcal{U}_{\\delta_{n}}(P_{n}).$ The goal of the procedure is to\nthen choose a decision that also hedges against these adversarial\nperturbations, thus introducing adversarial robustness into settings\nwhere the quality of optimal solutions are sensitive to incorrect\nmodel assumptions.\n\n\nInterestingly, the min-max formulation (\\ref{Wass_DRO_A}), which is\nderived from the above robustness viewpoint, has been shown to recover\nmany machine learning estimators when applied to suitable loss\nfunctions $\\ell(\\cdot)$; some examples include the square-root lasso and\nsupport vector machines \\citep{blanchet2016robust}, the group lasso\n\\citep{blanchet2017distributionally2}, adaptive regularization\n\\citep{volpi2018generalizing,blanchet2017data}, among others\n\\citep{shafieezadeh-abadeh_distributionally_2015,gaostatlearn,\n duchidistributionally,chen2018robust}.\nThe utility of the distributionally robust formulation\n(\\ref{Wass_DRO_A}) has also been explored in adversarial training of\nNeural Networks; see, for example\n\\citet{sinha2018certifiable,staib2017distributionally}.\n\n\nGeneric formulations such as (\\ref{Wass_DRO_A}) are becoming increasingly\ntractable; see, for example, %\n\\citet{MohajerinEsfahani2017,luo2017decomposition} for convex programming\nbased approaches and \\citet{sinha2018certifiable,blanchet2018optimal} for\nstochastic gradient descent based iterative schemes.\n\n\nMotivated by these wide range of applications, we investigate the asymptotic behaviour of the optimal value and optimal solutions of 
(\\ref{Wass_DRO_A}). In order to\nspecifically describe the contributions, let us introduce the following\nnotation. For any positive integer $n$ and $\\delta _{n}>0,$ let\n\\begin{equation*}\n\\Psi _{n}(\\beta )=\\sup_{P\\,\\in \\,\\mathcal{U}_{\\delta\n_{n}}(P_{n})}E_{P}\\left\\{ \\ell (X;\\beta )\\right\\}\n\\end{equation*}%\ndenote the distributionally robust objective function in\n\\eqref{Wass_DRO_A}. Suppose that $\\beta _{\\ast }$ uniquely minimizes\nthe population risk. According to (\\ref{ERM_1}) - (\\ref{Wass_DRO_A}),\nwe have $\\beta_n^{DRO}$ and $\\beta_n^{ERM}$ minimize, respectively,\nthe distributionally robust loss $\\Psi_n(\\beta)$ and the empirical\nloss in (\\ref{ERM_1}). Next, let\n\\begin{equation}\n\\Lambda _{\\delta _{n}}(P_{n})=\\big\\{ \\beta \\in B:\\beta \\in\n\\arg \\min_{\\beta \\in B }E_{P}\\left\\{ \\ell (X;\\beta )\\right\\} \\text{ for some }P\\in\n\\mathcal{U}_{\\delta _{n}}(P_{n})\\big\\}\n\\label{NaturalCR}\n\\end{equation}\ndenote the set of choices of $\\beta \\in B$ that are ``compatible'' with the distributional uncertainty\nregion, in the sense that for every\n$\\beta \\in \\Lambda _{\\delta _{n}}(P_{n}),$ there exists a probability\ndistribution $P\\in \\mathcal{U}%\n_{\\delta _{n}}(P_{n})$ for which $\\beta $ is optimal. In other words,\nif $\\mathcal{U}_{\\delta _{n}}(P_{n})$ represents the set of\nprobabilistic models which are, based on the empirical evidence,\nplausible representations of the\nunderlying phenomena, then each of such representations induces an\noptimal decision and $\\Lambda _{\\delta _{n}}(P_{n})$ encodes the set of plausible decisions. Let\n $\\Lambda^+ _{\\delta _{n}}(P_{n})$ be the closure of $\\cap_{\\epsilon>0}\\Lambda_{\\delta_n+\\epsilon}(P_n)$. Typically, $\\Lambda^+_{\\delta _{n}}(P_{n}) = \\Lambda_{\\delta _{n}}(P_{n})$, but this is not always true as illustrated in Example \\ref{eg:Lambda-plus}. Asymptotically, as $\\delta_n$ decreases to zero, the distinction is negligible. However, choosing a set such as $\\Lambda^+ _{\\delta _{n}}(P_{n})$ as a natural set of plausible decisions is sensible because we guarantee that a distributionally robust solution belongs to this region. Our main result also implies that all distributionally robust solutions are asymptotically equivalent; within $o_p(n^{-1\/2})$ distance from each other.\n\n{\nWith the above notation, the key contributions of this article can be\ndescribed as follows.\n\nWe first establish the convergence in distribution of the triplet,\n\\begin{equation}\n\\big( n^{1\/2}\\{\\beta_{n}^{ERM}-\\beta_{\\ast}\\},\\ n^{\\bar{\\gamma}\/2}\\{\\beta\n_{n}^{DRO}(\\delta_n)-\\beta_{\\ast}\\},\\ n^{1\/2}\\big\\{\n\\Lambda^+_{\\delta_{n}}(P_{n} )-\\beta_{\\ast}\\big\\} \\big) ,\n\\label{MLT-triplet}\n\\end{equation}\nfor a suitable $\\bar{\\gamma}\\in(0,1\/2]$ that depends\n on the rate at which the size of the distributional uncertainty,\n$\\delta_{n}$%\n, is decreased to zero; see Theorem\n \\ref{thm:levelsets-master}. We identify the joint limiting\n distributions of the triplet \\eqref{MLT-triplet}. The third\ncomponent of the triplet in \\eqref{MLT-triplet}, namely, $%\nn^{1\/2}\\{\\Lambda^+_{\\delta_{n}}(P_{n})-\\beta_{\\ast}\\}$, considers a\nsuitably scaled and centered version of the choices of $\\beta \\in B$\nwhich are compatible with the respective distributional uncertainty\nregion $\\mathcal{U}_{\\delta_{n}}(P_{n})$ in the sense described\nabove. Therefore, $\\Lambda^+_{\\delta_{n}}(P_{n})$ is a natural choice of the confidence region. 
We further develop an approximation for $\\Lambda^+_{\\delta_{n}}(P_{n})$; see Section \\ref{Sec_Confidence_Regions}.\n\nSecond, we utilize\nthe limiting result of \\eqref{MLT-triplet} to examine how the choice\nof the size of distributional ambiguity, $\\delta_{n}$, affects the\nqualitative properties of the distributionally robust estimators and\nthe induced confidence regions. Specifically, choosing\n$\\delta_{n}=\\eta n^{-\\gamma}$, we characterize the behaviour of the\nsolutions for different choices of $\\eta,\\gamma\\in (0,\\infty),$ as\n$n\\rightarrow\\infty$. It emerges that the canonical, $%\nO(n^{-1\/2})$, rate of convergence is achieved only if $\\gamma\\leq1$\nand the limiting distribution corresponding to the distributionally\nrobust estimator and that of the empirical risk minimizer\nare different only if $\\gamma\\geq1$. Hence to both\nobtain the canonical rate and tangible benefits from the distributionally robust optimization\nformulation, we must choose $\\gamma=1$, which corresponds to the\nresulting $\\bar{\\gamma}$ in \\eqref{MLT-triplet} to be equal to 1.\nMoreover, given any $\\alpha\\in(0,1)$, utilizing the limiting distribution of\nthe triplet in \\eqref{MLT-triplet}, we are able to identify a positive\nconstant $\\eta_{\\alpha}\\in(0,+\\infty)$ such that whenever $\\eta\\geq\n\\eta_{\\alpha}$ in the choice $\\delta_{n}=\\eta\/n$, the set $\\Lambda^+\n_{\\delta_{n}}(P_{n})$ is an asymptotic $(1-\\alpha)$-confidence region for $%\n\\beta_{\\ast}$.\n\n\n\n\n\nFinally, we establish the existence of an equilibrium game value. The distributionally robust optimization formulation assumes that the adversary selects a probability model after the statistician chooses a parameter. The equilibrium value of the game is attained if inf-sup equals sup-inf in (\\ref{Wass_DRO_A}), namely, if we allow the statistician to choose a parameter optimally after the adversary selects a probability model. We show in great generality that the equilibrium value of the game exists; see Theorem \\ref{minmax_prop}.}\n\n\n\n\n\n\n\n\n\nWe end the introduction with a discussion of related statistical results.\nThe asymptotic normality of M-estimators which minimize an empirical risk of\nthe form, $E_{P_{n}}\\{\\ell(X;\\beta)\\},$ was first established in the\npioneering work of \\citet{huber1967}. Subsequent asymptotic\ncharacterizations in the presence of constraints on the choices of\nparameter vector $\\beta$ have been developed in\n\\citet{dupacova1988,shapiro1989,Shapiro1991,shapiromoor,shapiro2000},\nagain in the standard M-estimation setting. Our work here is\ndifferent because of the presence of the adversarial perturbation to\nthe loss represented by the inner maximization in (\\ref{Wass_DRO_A}).\n\nAsymptotic normality in the related context of regularized estimators for\nleast squares regression has been established in %\n\\citet{knight2000asymptotics}. As mentioned earlier, distributionally robust\nestimators of the form (\\ref{Wass_DRO_A}) recover lasso-type estimators as\nparticular examples \\citep{blanchet2016robust}. In these cases, the\ninner max problem involving the adversary can be solved in closed form,\nresulting in the presence of regularization. However, our results can be\napplied even in the general context in which no closed form solution to the\ninner maximization can be obtained. 
Therefore, our results in this paper can\nbe seen as extensions of the results by \\citet{knight2000asymptotics}, from\na distributionally robust optimization perspective.\n\nWe comment that some of our results involving convergence of sets may\nbe of interest to applications in the area of empirical likelihood %\n\\citep{owen_empirical_1988,owen_empirical_1990,owen2001empirical}. This\nis because $\\Lambda_{\\delta_{n}}( P_{n}) $ can be characterized in\nterms of a function, namely, the robust Wasserstein profile function,\nwhich resembles the definition of the empirical likelihood profile\nfunction. We refer the reader to \\citet{blanchet2016robust} for more discussion on the robust Wasserstein profile function and its connections to empirical likelihood. We also refer to \\citet{cisneros2019distributionally} for additional applications, including graphical lasso, which could benefit from our results.\n\n\n\\label{sec:assumption_results}\n\n\\section{Preliminaries and Assumptions}\n\n\n\n\n\n\n\\subsection{Convergence of closed sets}\n\\label{SectConv_Sets}\nWe begin with a brief introduction to the notion of convergence of\nclosed sets before introducing the assumptions required to state our\nmain results. For a sequence $\\{A_k: k \\geq 1\\}$ of closed subsets of\n$\\mathbb{R}^d,$ the inner and outer limits are defined, respectively,\nby\n\\begin{align*}\n\\text{Li}_{n \\rightarrow \\infty}\\ A_n = &\\big\\{ z \\in \\mathbb{R}^d: \\text{\nthere exists a sequence } (a_n)_{n \\geq 1} {\\text{with } a_n \\in A_n} \\text{ convergent to } z\\},\n \\text{ and } \\\\\n\\text{Ls}_{n \\rightarrow \\infty}\\ A_n = &\\big\\{ z \\in \\mathbb{R}^d: \\text{\nthere exist positive integers } n_1 < n_2 < n_3 < \\cdots \\text{ and } a_k\n\\in A_{n_k} \\\\\n&\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad \\text{ such that the sequence }\n(a_k)_{k \\geq 1} \\text{ is convergent to } z\\big\\}.\n\\end{align*}\nWe clearly have\n$\\text{Li}_{n \\rightarrow \\infty}\\ A_{n}\\subseteq \\text{Ls}_{n \\rightarrow\n\\infty}\\ A_{n }.$\nThe sequence $\\{A_{n}:n\\geq 1\\}$ is said to converge to a set $A$ in the\nPainlev\\'{e}-Kuratowski (PK) sense if\n\\begin{equation*}\nA= \\text{Li}_{n\\rightarrow \\infty}\\ A_{n}=\\text{Ls}_{n \\rightarrow \\infty} \\\nA_{n},\n\\end{equation*}%\nin which case we write PK-$\\lim_{n}A_{n}=A$. Since $\\mathbb{R}^{d}$ is a\nlocally compact Hausdorff space, the topology induced by Painlev\\'{e}%\n-Kuratowski convergence on the space of closed subsets\n \nof $\\mathbb{R}^{d}$ is completely metrisable, separable and coincides\nwith the well-known topology of closed convergence, also known as Fell\ntopology; see \\citet[Chapter 1]{molchanov2005theory}. The notion of\nconvergence of sets we utilize here will be the above defined\nPainlev\\'{e}-Kuratowski convergence. { After equipping the space of closed subsets with the Borel $\\sigma$-algebra, we are able to define probability measures and further define the usual weak convergence of measures; see, for example, \\citet[Chapter 1]{billingsley2013convergence}.}\n\n\\subsection{Assumptions and notation}\nThroughout the paper, we use $A\\succ 0$ to denote that a given\nsymmetric matrix $A$ is positive definite and the notation $C^\\circ$ and ${\\rm cl} (C)$\nto denote the interior and closure of a subset $C$ of Euclidean space, respectively. 
In the case\nof taking expectations with respect to the data-generating distribution\n$P_\\ast,$ we drop the subindex in the expectation operator as in,\n$E_{P_\\ast} \\left\\{ f(X)\\right\\} = E\\left\\{ f(X)\\right\\}.$ We use $\\Rightarrow$ to denote weak convergence and $\\rightarrow$ to denote convergence in probability. We let $\\mathbb{I}(\\cdot)$ be the indicator function. { Let $\\|\\cdot\\|_p$ be the dual norm of $\\|\\cdot\\|_q$ where $1\/p + 1\/q = 1$ for $q\\in (1,\\infty),$ and $p = \\infty$ or $1$ for $q=1$ or $\\infty$, respectively.}\n\nAs mentioned in Section \\ref{sec:Intro}, suppose that $\\Omega$ is a\nclosed subset of $\\mathbb{R}^m$ and $B$ is a closed, convex subset of\n$\\mathbb{R}^d.$ Assumptions A1 and A2 below are taken to be satisfied\nthroughout the development, unless indicated otherwise.\n\n(A1) The transportation cost\n$c:\\Omega \\times \\Omega \\rightarrow [0,\\infty]$ is of the form\n$c(u,w)=\\Vert u-w\\Vert_{q}^{2}$.\n\n(A2) The function\n$\\ell:\\Omega \\times B \\rightarrow \\mathbb{R}$ satisfies the following\nproperties:\n\\begin{itemize}\n\\vspace*{-3mm}\n\\item[a)] The loss function $\\ell(\\cdot)$ is twice continuously differentiable,\n and for each $x$, $\\ell(x,\\cdot)$ is convex.\n\\item[b)] Let $h(x,\\beta)=D_{\\beta}\\ell(x,\\beta)$, and there\n exists $\\beta_{\\ast}\\in B^\\circ$ satisfying the optimality condition\n $E\\{ h(X,\\beta_{\\ast })\\} = 0.$ In addition,\n $E \\{\\Vert h(X,\\beta_{\\ast })\\Vert _{2}^{2}\\} <\\infty,$ the\n symmetric matrix\n $C=E\\left\\{ D_{\\beta }h(X,\\beta_{\\ast})\\right\\} \\succ 0,$\n $E\\left\\{ D_{x}h(X,\\beta_{\\ast})D_{x}h(X,\\beta_{\\ast})^{\\mathrm{\\scriptscriptstyle T} }\\right\\}\n \\succ 0$, and { $\\mathrm{pr}\\{\\|D_x\\ell(X,\\beta_*)\\|_p >0\\}>0$}\n\n\\item[c)] For every $\\beta \\in \\mathbb{R}^d,$\n $\\| D_{xx}\\ell(\\,\\cdot\\,;\\beta) \\|_p$ is uniformly continuous and bounded by a continuous\n function $%\n M(\\beta)$. {Further, there exists a positive constant $M'<\\infty$ such that $\\|D_xh(x,\\beta)\\|_q \\leq M'(1+\\|x\\|_q) $ for $\\beta$ in a neighborhood of $\\beta_*$}. In addition, $D_xh(\\cdot)$ and\n $D_\\beta h(\\cdot)$ satisfy the following locally Lipschitz\n continuity:\n\\begin{align*}\n\\left\\Vert D_{x}h(x+\\Delta,\\beta_{\\ast}+u)-D_{x}h(x,\\beta_{\\ast})\\right\\Vert\n_{q} & \\leq\\kappa^{\\prime}(x)\\big( \\left\\Vert \\Delta\\right\\Vert\n_{q}+\\left\\Vert u\\right\\Vert _{q}\\big), \\\\\n\\left\\Vert D_{\\beta}h(x+\\Delta,\\beta_{\\ast}+u)-D_{\\beta}h(x,\\beta_{\\ast\n})\\right\\Vert _{q} & \\leq\\bar{\\kappa}(x)\\big(\\left\\Vert \\Delta\\right\\Vert\n _{q}+\\left\\Vert u\\right\\Vert _{q}\\big),\n\\end{align*}\nfor $ \\left\\Vert\\Delta\\right\\Vert\n_{q}+\\left\\Vert u\\right\\Vert _{q} \\leq 1$, where $\\kappa^{\\prime},\\bar{\\kappa}:\\mathbb{R}^{m}\\rightarrow\\lbrack0,%\n\\infty) $ are such that $E[\\{\\kappa^{\\prime}(X_{i})\\}^{2}]<\\infty$ and $E\\{%\n\\bar {\\kappa}^{2}(X_{i})\\}<\\infty.$\n\\end{itemize}\n\n\nAssumption A1 covers most of the cases in the literature described in\nSection \\ref{sec:Intro}. 
One exception that does not immediately\nsatisfy Assumption A1, but which can be easily adapted after a simple\nchange-of-variables, is the weighted $l_{2}$ norm (also known as\nMahalanobis distance), namely\n$c( x,y) =( x-y) ^{{\\ \\mathrm{\\scriptscriptstyle T} }}A( x-y) $, where\n$A \\succ 0$; see \\citet{blanchet2018optimal}.\nThe requirement that $\\ell (\\cdot )$ is twice differentiable in\nAssumption A2.a is useful in the analysis to identify a second-order\nexpansion for the objective in (\\ref{Wass_DRO_A}), which helps\nquantify the impact of adversarial perturbations. Convexity of\n$\\ell (x,\\cdot ),$ together with $C$ being positive definite in A2.b,\nimplies uniqueness of $\\beta _{\\ast }$. The uniqueness\n of $\\beta_*$ is a standard assumption in the derivation of rates of\n convergence for estimators; see, for example,\n \\citet{huber1967} and \\citet[Section 3.2.2]{van1996weak}. Assumption\nA2.b also allows us to rule out redundancies in the underlying source\nof randomness (e.g., collinearity in the setting of linear regression).\n The first part of Assumption A2.c ensures that the inner maximization in (\\ref{Wass_DRO_A}) is finite by controlling the magnitude of the adversarial perturbations. {\n The local Lipschitz continuity requirement in $x$ arises with the optimal transportation analysis technique in \\cite{blanchet2016robust}, cf. Assumption A6. Analogous regularity in $\\beta$ is useful in proving the confidence region limit theorem; see the discussion following Theorem \\ref{thm:master}. }\nLimiting results which study the impact of relaxing some of these assumptions are given immediately after describing the main result in Section\n\\ref{Sec_main_limit} below.\n\n\n\\section{Main results}\n\\subsection{The main limit theorem\\label{Sec_main_limit}}\nIn order to state our main results, we introduce a few more definitions. Define\n\\begin{equation*}\n \\varphi (\\xi )= 4^{-1}E\\big[ \\big\\Vert \\big\\{ D_{x}h(X,\\beta _{\\ast\n })\\big\\} ^{ \\mathrm{\\scriptscriptstyle T} }\\xi \\big\\Vert _{p}^{2}\\big],\n\\end{equation*}%\nand its convex conjugate,\n$\\varphi ^{\\ast }(\\zeta )=\\sup_{\\xi \\in \\mathbb{R}^{d}}\\left\\{ \\xi ^{{\n\\mathrm{\\scriptscriptstyle T} }}\\zeta -\\varphi (\\xi )\\right\\}.$\nIn addition, define\n\\begin{align}\n S(\\beta )&=\\left[E \\left\\{\\Vert D_{x}\\ell (X;\\beta )\\Vert _{p}^{2}\n \\right\\}\\right]^{1\/2},\n \\label{defn:sensitivity-term}\\\\\n f_{\\eta ,\\gamma }(x) &= x\\mathbb{I}{(\\gamma \\geq 1)}-\\eta^{1\/2} D_{\\beta\n }S(\\beta _{\\ast })\\mathbb{I}{(\\gamma \\leq 1)},\n \\label{family-f-eta-gamma}\n\\end{align}\nfor $\\eta \\geq 0,\\gamma \\geq 0$. { By Assumption A2.b, $S(\\beta)$ is differentiable at $\\beta_*$.} Recall the matrix\n$C=E\\left\\{ D_{\\beta }h(X,\\beta _{\\ast })\\right\\} $ introduced in\nAssumption A2.b and\n\\begin{equation}\n \\Lambda^+_{\\delta _{n}}(P_{n}) = \\mathrm{cl}\\left\\{\\cap_{\\epsilon>0} \\Lambda_{\\delta _{n}+\\epsilon}(P_{n})\\right\\},\n \\label{NaturalCRplus}\n\\end{equation}\nwhich is the right limit of $\\Lambda_{\\delta_n}(P_n)$ defined in\n(\\ref{NaturalCR}).
Finally, define the sets,\n\\begin{align}\n \\Lambda_{\\eta } =\\left\\{ u:\\varphi ^{\\ast }(Cu)\\leq \\eta \\right\\},\n \\qquad\n \\Lambda_{\\eta ,\\gamma} =\n\\begin{cases}\n \\Lambda_\\eta \\quad &\n \\text{ if } \\gamma =1, \\\\\n \\mathbb{R}^{d} \\quad & \\text{ if }\\gamma <1, \\\\\n \\{0\\} \\quad &\\text{ if }\\gamma >1.%\n\\end{cases}\n\\label{Lim_Set}\n\\end{align}\n\nWe now state our main result.\n\n\\begin{theorem}\n \\label{thm:levelsets-master}\n \\sloppy{Suppose that Assumptions A1 - A2 are satisfied with {$q\\in(1,\\infty)$},\n $\\Omega = \\mathbb{R}^m$ and $E\\left(\\Vert X \\Vert_2^2\\right) < \\infty.$ If\n $H\\sim\\mathcal{N}(0,\\mathrm{cov}\\{h(X,\\beta_{\\ast})\\})$ and\n $\\delta_{n}=n^{-\\gamma}\\eta$ \\ for some\n $\\gamma,\\eta\\in(0,\\infty),$} then we have the following joint\n convergence in distribution:\n\\begin{align*}\n & \\big( n^{1\/2}\\big\\{\\beta_{n}^{ERM}-\\beta_{\\ast}\\big\\},\\ n^{\\bar{\\gamma}\/2}\n \\big\\{\\beta_{n}^{DRO}(\\delta_n)-\\beta_{\\ast}\\big\\},\\ n^{1\/2}\\big\\{\n \\Lambda^+_{\\delta_{n}}(P_{n})-\\beta_{\\ast}\\big\\} \\big) \\\\\n &\\hspace{200pt} \\Rightarrow \\big( C^{-1}H,\\ C^{-1}f_{\\eta,\\gamma}(H),\\ \\Lambda\n _{\\eta,\\gamma}+C^{-1}H\\big) ,\n\\end{align*}\nwhere $\\bar{\\gamma}=\\min\\{\\gamma,1\\}$ and $\\Lambda_{\\eta,\\gamma}$ is\ndefined as in (\\ref{Lim_Set}).\n\\end{theorem}\n\nThe proof of Theorem \\ref{thm:levelsets-master} is presented in\nSection \\ref{sec:levelsets-master}. { For $q=1$ or $\\infty$, which corresponds to $%\np=\\infty $ or 1, $S(\\beta )$ may not be differentiable at $\\beta_*$; the limiting\ndistribution then presents a discontinuity which\nmakes it difficult to use in practice. Hence, we prefer not to cover\nthis case here.} Theorem \\ref{thm:levelsets-master}\ncan be used as a powerful conceptual tool. For example, let us examine\nhow a sensible choice for the parameter $\\delta_{n}$ can be obtained\nas an application of Theorem \\ref{thm:levelsets-master} by considering\nthe following cases:\n\n{Case 1, where $\\gamma > 1$:} If $n\\delta_{n} \\rightarrow 0$,\ncorresponding to the case $\\gamma > 1,$ we have $f_{\\eta,\\gamma}(H)=H$\nfrom the definition of the parametric family in\n\\eqref{family-f-eta-gamma}. Therefore, from Theorem\n\\ref{thm:levelsets-master},\n\\begin{align*}\n & \\big( n^{1\/2}\\{\\beta_{n}^{ERM}-\\beta_{\\ast}\\},\\ n^{\\bar{\\gamma}\/2}\\{\\beta\n _{n}^{DRO}(\\delta_n)-\\beta_{\\ast}\\},\\ n^{1\/2}\\big\\{\n \\Lambda^+_{\\delta_{n}}(P_{n})-\\beta_{\\ast}\\big\\} \\big) \\\\& \\hspace{150pt}\\Rightarrow\n \\big(C^{-1}H,C^{-1}H,\\{C^{-1}H\\}\\big),\n\\end{align*}\nwhich implies that the influence of the robustification vanishes in\nthe limit when $\\delta_{n}=o(n^{-1})$.\n\n{Case 2, where $\\gamma < 1$:} If $n\\delta_{n}\\rightarrow\\infty$,\ncorresponding to the case $\\gamma<1$, the rate of convergence for the\ndistributionally robust estimator is slower than the canonical\n$O(n^{-1\/2})$ rate:\n\\begin{equation}\n\\beta_{n}^{DRO}(\\delta_n) = \\beta_{\\ast}-\\eta^{1\/2} n^{-\\gamma\/2}\nC^{-1}D_{\\beta}S(\\beta _{\\ast})+o_{p}\\big(\nn^{-\\gamma\/2}\\big), \\label{BDRO}\n\\end{equation}\nwhere $n^{\\gamma\/2}o_{p}( n^{-\\gamma\/2}) \\rightarrow0,$ in\nprobability, as $n\\rightarrow\\infty$.
The relationship (\\ref{BDRO})\nreveals an uninteresting limit,\n$n^{1\/2}\\{ \\Lambda^+_{\\delta_{n}} (P_{n})-\\beta_{\\ast}\\}\n\\Rightarrow\\mathbb{R}^{d}$, exposing a slower than $O(n^{-1\/2})$ rate\nof convergence $\\Lambda^+_{\\delta_n }(P_{n}).$ In fact, (\\ref{BDRO})\nindicates that $O( n^{-\\gamma \/2})$ scaling will result in a\nnon-degenerate limit.\n\n{Case 3, where $\\gamma = 1$:} when $\\delta_{n}=\\eta\/n$, we have that all components in\nthe triplet in Theorem \\ref{thm:levelsets-master} have non-trivial\nlimits.\n\n Theorem \\ref{minmax_prop} below provides a geometric\ninsight relating $\\beta_{n}^{DRO}(\\delta_n)$, $\\beta_{n}^{ERM}$ and\n$\\Lambda^+_{\\delta_{n}}(P_{n})$, which justifies a picture describing\n$%\n\\Lambda^+_{\\delta_{n}}(P_{n})$ as a set containing both\n$\\beta_{n}^{DRO}(%\n\\delta_n)$ and $\\beta_{n}^{ERM}$. The observation that\n$\\beta_{n}^{ERM}\\in\\Lambda_{ \\delta_{n}}(P_{n})$ is immediate because\n$\\Lambda_{\\delta}(P_{n})$ is increasing in $\\delta$, so\n$\\beta_{n}^{ERM}\\in\\Lambda_{0}(P_{n})\\subset%\n\\Lambda^+_{\\delta_{n}}(P_{n})$. On the other hand, the observation that\n$\\beta_{n}^{DRO}(\\delta_n)\\in\\Lambda^+_{\\delta_{n}}(P_{n})$ is\nnon-trivial and it relies on the exchangeability of inf and sup in\nTheorem \\ref{minmax_prop} below. An appropriate choice of $\\eta$\nwhich results in the set $\\Lambda_{\\delta_n}^+(P_{n})$ also possessing\ndesirable coverage for $\\beta_\\ast$ is prescribed in Section\n\\ref{Sec_Confidence_Regions}.\n\\begin{theorem}\n \\label{minmax_prop}\n { Suppose that Assumption A1 is enforced. We further assume the loss function $\\ell(\\cdot)$ is\n continuous and non-negative, for each $x$, $\\ell(x,\\cdot)$ is convex, and $E_{P_*}\\{\\ell(X,\\beta)\\}$ has a unique optimizer $\\beta_*\\in B^\\circ$.}\n Then for any\n $\\delta >0,$\n\\begin{equation}\n \\inf_{\\beta \\in B}\\sup_{P \\in\\, \\mathcal{U}_{\\delta }(P_{n}) }\n E_{P}\\left\\{ \\ell(X;\\beta )\\right\\} =\n \\sup_{P \\in\\, \\mathcal{U}_{\\delta }(P_{n}) }\\inf_{\\beta \\in B} E_{P}\n \\left\\{ \\ell (X;\\beta )\\right\\},\n \\label{minmax_eqn}\n\\end{equation}\nand there exists a distributionally robust estimator choice\n$\\beta_{n}^{DRO}(\\delta) \\in \\Lambda^+_{\\delta}(P_{n})$.\n\\end{theorem}\nThe proof of Theorem \\ref{minmax_prop} is presented in Section\n\\ref{ssec:proof_prop_corr_1} of the supplementary material. Example\n\\ref{eg:Lambda-plus} below demonstrates that the set of minimizers of\nthe distributionally robust formulation (\\ref{Wass_DRO_A}) is not\nnecessarily unique and that the set $\\Lambda_{\\delta}(P_{n})$ may not contain Distributionally robust solutions.\nTheorem~\\ref{minmax_prop} indicates that the right-limit\n$ \\Lambda^+_{\\delta}(P_{n})$ contains a distributionally robust solution. Theorem\n \\ref{thm:levelsets-master} implies that the minimizers of\n (\\ref{Wass_DRO_A}) differ by at most $o_p(n^{-1\/2})$ in magnitude,\n which indicates that they are asymptotically equivalent and the\n inclusion of one solution of (\\ref{Wass_DRO_A}) in\n $\\Lambda^+_{\\delta}(P_{n})$ is sufficient for the scaling\n considered.\n\n\\begin{example}\n Let the loss function\nbe\n\\[\\ell(x,\\beta)=f(\\beta)+\\{x^2-\\log(x^2+1)\\} f(\\beta-4),\\]\nwhere $f(\\beta)= 3\\beta^2\/4-1\/8\\beta^4 +3\/8$ for $\\beta\\in [-1,1]$,\nand $f(\\beta)=|\\beta|$, otherwise. $\\ell(x,\\beta)$ is twice-differentiable and convex satisfying Assumptions A1 - A2. 
Then, if the empirical measure\n$P_n$ is a Dirac measure centered at zero with $n=1$, and\n$\\delta = 1$, we have the distributionally robust estimators\n$\\beta_{n}^{DRO}(\\delta) \\in [1,3]$. Further,\n$[1,3] \\subset \\Lambda^+_{\\delta}(P_{n})$ but\n$[1,3] \\cap \\Lambda_{\\delta}(P_{n}) = \\varnothing$.\n\\label{eg:Lambda-plus}\n\\end{example}\n\n\nNext, we turn to the relationship between $\\beta _{n}^{ERM}$ and\n$\\beta _{n}^{DRO}(\\delta _{n}),$ when $\\delta _{n}=\\eta \/ n$. From\nthe first two terms in the triplet, we have,\n\\begin{align}\n \\beta _{n}^{DRO}(\\delta _{n})\\\n & =\\ \\beta _{n}^{ERM}-\\eta^{1\/2}\n {C^{-1}D_{\\beta }S(\\beta_{\\ast })}{n^{-1\/2}}\n + o_{p}\\big( n^{-1\/2}\\big)\n \\label{Rel_DRO_ERM} \\\\\n & =\\ \\beta _{n}^{ERM}-\\delta_n^{1\/2}{C^{-1}D_{\\beta }S(\\beta _{n}^{ERM})}\n + o_{p}\\big( \\delta_n\\big) . \\notag\n\\end{align}%\nThe right hand side of (\\ref{Rel_DRO_ERM}) points to the canonical\n$O\\left( n^{-1\/2}\\right) $ rate of convergence of the Wasserstein distributionally robust\nestimator and it can readily be used to construct confidence regions,\nas we shall explain in Section \\ref{Sec_Confidence_Regions} below.\n\nRelation (\\ref{Rel_DRO_ERM}) also exposes the presence of an\nasymptotic bias term, namely,\n$S(\\beta)= [E \\{\\Vert D_{x}\\ell(X;\\beta)\\Vert_{p}^{2}\\}]^{1\/2}$, which\npoints towards selection of optimizers possessing reduced sensitivity\nwith respect to perturbations in data. A precise mathematical\nstatement of this sensitivity-reduction property is given in Corollary\n\\ref{Cor_Sensitivity} below and its proof is presented in Section\n\\ref{ssec:proof_prop_corr_1} of the supplementary material.\n\n\\begin{corollary}\n \\label{Cor_Sensitivity}Suppose that A1 - A2 are in force and\n consider\n\\begin{equation}\n\\bar{\\beta}_{n}^{DRO} \\in \\arg\\min_{\\beta \\in B}\\left[E_{P_{n}}\\left\\{ \\ell\n(X;\\beta)\\right\\} + n^{-1\/2} \\left[ \\eta E_{P_{n}}\\left\\{ \\Vert\n D_{x}\\ell(X;\\beta )\\Vert_{p}^{2}\\right\\} \\right]^{1\/2}\\right].\n\\label{eq:sensitivity-red}\n\\end{equation}\nThen, if $\\delta_{n}=\\eta\/n$, we have that $\\beta _{n}^{DRO}(\\delta_n) =\n\\bar{\\beta}_{n}^{DRO}+o_{p}(n^{-1\/2})$.\n\\end{corollary}\n\nWhile the formulation on the right-hand side of\n(\\ref{eq:sensitivity-red}) is conceptually appealing, it may not be\ndesirable from an optimization point of view due to the potentially\nnonconvex nature of the objective involved. On the other hand, under\nAssumption A2, the distributionally robust objective\n$\\Psi _{n}(\\beta )$ is convex; see, for example, the reasoning in\n\\citet[Theorem 2a]{blanchet2018optimal} while also enjoying the\nsensitivity-reduction property of the formulation in\n(\\ref{eq:sensitivity-red}).\n\nA similar type of result to Corollary \\ref{Cor_Sensitivity} is given\nin \\citet{gaostatlearn}, but the focus there is on the objective\nfunction of (\\ref{Wass_DRO_A}) being approximated by a suitable\nregularization. The difference between this type of result and\nCorollary \\ref{Cor_Sensitivity} is that our focus is on the asymptotic\nequivalence of the actual optimizers. 
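To make the sensitivity-reduction property in Corollary \\ref{Cor_Sensitivity} concrete, the following is a minimal numerical sketch (not part of the formal development) of the penalized formulation (\\ref{eq:sensitivity-red}) for the squared-error loss $\\ell(x;\\beta)=(y-z^{{\\mathrm{\\scriptscriptstyle T} }}\\beta)^2\/2$ with $x=(z,y)$ and $p=q=2$, in which case $\\Vert D_{x}\\ell(x;\\beta)\\Vert_2^2=(y-z^{{\\mathrm{\\scriptscriptstyle T} }}\\beta)^2(1+\\Vert\\beta\\Vert_2^2)$; the simulated data, the value of $\\eta$ and the optimizer call are placeholders.\n\\begin{verbatim}\nimport numpy as np\nfrom scipy.optimize import minimize\n\nrng = np.random.default_rng(0)\nn, d = 200, 3                            # placeholder sample size and dimension\nZ = rng.normal(size=(n, d))\ny = Z @ np.array([1.0, -2.0, 0.5]) + rng.normal(size=n)\neta = 1.0                                # hypothetical choice in delta_n = eta / n\n\ndef robust_objective(beta):\n    res = y - Z @ beta\n    erm = 0.5 * np.mean(res**2)          # E_{P_n} { ell(X; beta) }\n    # ||D_x ell||_2^2 = res^2 * (1 + ||beta||^2), so the penalty in (eq:sensitivity-red)\n    # equals n^{-1/2} * sqrt( eta * mean(res^2) * (1 + ||beta||^2) )\n    pen = np.sqrt(eta * np.mean(res**2) * (1.0 + beta @ beta) / n)\n    return erm + pen\n\nbeta_erm = np.linalg.lstsq(Z, y, rcond=None)[0]\nbeta_dro = minimize(robust_objective, x0=beta_erm, method='BFGS').x\nprint(beta_erm, beta_dro)\n\\end{verbatim}\nIn line with (\\ref{Rel_DRO_ERM}), the penalized solution differs from the empirical risk minimizer by a correction of order $n^{-1\/2}$ driven by the gradient of the sensitivity term $S(\\beta)$.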
Behind a result such as\nCorollary \\ref{Cor_Sensitivity}, it is key to have a more nuanced\napproximation which precisely characterizes the second order term of\nsize $O(\\delta_n)$; see Proposition A1 in the supplementary material.\n\n\n\n\nWe conclude this section with results which examine the effects of\nrelaxing some assumptions made in the statement of Theorem\n\\ref{thm:levelsets-master} above. Proposition\n\\ref{prop:constrained-support} below asserts that convergence of the\nnatural confidence region $\\Lambda^+_{\\delta_n}(P_n),$ as identified in\nTheorem \\ref{thm:levelsets-master}, holds even if the support of the\nprobability distributions in the uncertainty region\n$\\mathcal{U}_{\\delta_n}(P_n)$ is constrained to be a strict subset\n$\\Omega$ of $\\mathbb{R}^d.$ For this purpose, we introduce the\nfollowing notation: For any set $C\\in \\mathbb{R}^{m},$ let\n$C^{\\epsilon }=\\left\\{ x\\in C: B_{\\epsilon }\\left( x\\right) \\subset\n C\\right\\} ,$ where $B_{\\epsilon }\\left( x\\right) $ is the\nneighborhood around $x$ defined as\n$B_{\\epsilon }\\left( x\\right) =\\left\\{ y: \\left\\Vert y-x\\right\\Vert\n _{2}\\leq \\epsilon \\right\\} .$ Thus, for any probability measure $P$,\nwe have\n$\\lim_{\\epsilon \\rightarrow 0} {P}\\left( C^{\\epsilon }\\right)\n={P}\\left( C^{\\circ }\\right).$\n\n\\begin{proposition}\n Suppose that Assumptions A1 - A2 are satisfied with {$q\\in[1,\\infty]$} and\n $E\\left(\\Vert X \\Vert_2^2\\right) < \\infty$. In addition, suppose that the data\n generating measure $P_*$ satisfies\n $P_{\\ast}(\\Omega^\\circ) =1$. If we take\n $H\\sim\\mathcal{N}(0,\\mathrm{cov}\\{h(X,\\beta_{\\ast})\\})$ and\n $\\delta_{n}=n^{-\\gamma}\\eta$ \\ for some $\\gamma,\\eta\\in(0,\\infty),$\n then the following convergence holds as $n \\rightarrow \\infty$:\n \\begin{align*}\n n^{1\/2} \\left\\{ \\Lambda_{\\delta_n}(P_n) - \\beta_\\ast \\right\\}\n \\Rightarrow \\Lambda_{\\eta,\\gamma} + C^{-1}H.\n \\end{align*}\n \\label{prop:constrained-support}\n\\end{proposition}\nThe steps involved in proving Proposition\n\\ref{prop:constrained-support} are presented in Section\n\\ref{sec:proof}. A discussion on the validity of a central limit theorem\nfor the estimator $\\beta_n^{DRO},$ in the presence of constraints\nrestricting transportation within the support set $\\Omega$, is\npresented in Section \\ref{sec:discussion}.\n\nIn the case where the unique minimizer $\\beta_\\ast$ may not\nnecessarily lie in interior of the set $B$ (as opposed to the\nrequirement in Assumption A2.b, one may obtain the extension in\nProposition \\ref{prop:DRO-boundary} as the limiting result for the\nestimator $\\beta^{DRO}_n(\\delta_n)$. 
As in the previous results, we\ntake $h(x,\\beta) = D_\\beta\\ell(x;\\beta).$ The proof of Proposition\n\\ref{prop:DRO-boundary} is in Section \\ref{ssec:proofs_dro_bound_prop}\nof the supplementary material.\n\\begin{proposition}\n Suppose that Assumptions A1, A2.a, A2.c are satisfied and $\\beta_\\ast$\n is the unique minimizer of $\\min_{\\beta \\in B} E\\{\\ell(X,\\beta)\\}.$\n Suppose that the set $B$ is compact and there exist\n $\\varepsilon > 0$ and twice continuously differentiable functions\n $g_i(\\beta)$ such that,\n \\begin{align*}\n B \\cap B_\\varepsilon(\\beta_\\ast) =\n \\left\\{ \\beta \\in B_\\varepsilon(\\beta_\\ast): g_i(\\beta) = 0,\n i \\in I, \\ g_j(\\beta) \\leq 0, j \\in J\\right\\},\n \\end{align*}\n where $I,J$ are finite index sets and $g_i(\\beta_\\ast) = 0$ for all\n $i \\in J.$ With this identification of the set $B,$ suppose that the following so-called Mangasarian-Fromovitz constraint qualification is satisfied at $\\beta_\\ast$: the\n gradient vectors $\\{D g_i(\\beta_\\ast): i \\in I\\}$ are linearly\n independent and there exists a vector $w$ such that\n $w^{\\mathrm{\\scriptscriptstyle T} } D g_i(\\beta_\\ast) = 0$ for all $i \\in I$ and\n {$w^{\\mathrm{\\scriptscriptstyle T} } D g_j(\\beta_\\ast) < 0$} for all $j \\in J.$\n \n \n \n \n\n Suppose that $\\Lambda_0$ is the set of Lagrange multipliers satisfying\n the first-order optimality conditions and the following\n second-order sufficient conditions: $\\lambda \\in \\Lambda_0$ if and\n only if $D_\\beta L(\\beta_\\ast,\\lambda) = 0,$ $\\lambda_i \\geq 0$ for\n $i \\in J,$ and\n $\\max_{\\lambda \\in \\Lambda_0} w^{\\mathrm{\\scriptscriptstyle T} } D_{\\beta \\beta}\n L(\\beta_\\ast,\\lambda)w > 0$ for all $w \\in \\mathcal{C},$ where\n \\[L(\\beta,\\lambda) = E\\{\\ell(X,\\beta)\\} + \\sum_{i \\in I \\cup J}\n \\lambda_i g_i(\\beta)\\] is the Lagrangian function associated with\n the minimization $\\min_{\\beta \\in B} E\\{\\ell(X,\\beta)\\}$ and\n \\[ \\mathcal{C} = \\left\\{ w: w^{\\mathrm{\\scriptscriptstyle T} } Dg_i(\\beta_\\ast) = 0, i \\in I, \\\n w^{\\mathrm{\\scriptscriptstyle T} } Dg_j(\\beta_\\ast) \\leq 0, j \\in J, \\ w^{\\mathrm{\\scriptscriptstyle T} } E\\{h(X,\\beta_\\ast)\\}\n \\leq 0 \\right\\}\\]\n \n is the non-empty cone of critical directions. In addition, suppose\n that $\\omega(\\xi)$ is the unique minimizer of\n $\\min_{u \\in \\mathcal{C}}\\big\\{\\xi^{\\mathrm{\\scriptscriptstyle T} } u + 2^{-1}q(u) \\big\\},$ where\n \n $q(u) = \\max \\left\\{u^{\\mathrm{\\scriptscriptstyle T} } D_{\\beta \\beta}L(\\beta_\\ast,\\lambda) u :\n \\lambda \\in \\Lambda_0 \\right\\}.$ Then, if $\\delta_n = \\eta n^{-1}$\n for $\\eta \\in (0,\\infty)$,\n $E \\{\\Vert h(X,\\beta_\\ast) \\Vert_2^2\\} < \\infty$ and\n $E\\{D_\\beta h(X,\\beta_\\ast)\\} \\succ 0,$ we have the following\n convergence as $n \\rightarrow \\infty$:\n \\begin{align*}\n n^{1\/2}\\left\\{\\beta^{DRO}_n(\\delta_n) - \\beta_\\ast\\right\\} \\Rightarrow\n \\omega\\left\\{-H + \\eta^{1\/2}D_\\beta S(\\beta_\\ast)\\right\\},\n \\end{align*}\n where $H \\sim \\mathcal{N}(0,\\mathrm{cov}\\{h(X,\\beta_\\ast)\\}).$\n \\label{prop:DRO-boundary}\n\\end{proposition}\n\nThe Mangasarian-Fromovitz constraint qualification conditions and the\nnecessary and sufficient conditions in the statement of Proposition\n\\ref{prop:DRO-boundary} are standard in the literature if the optimal\n$\\beta_\\ast$ lies on the boundary of the set $B$; see, for example,\n\\cite{shapiro1989}. 
Please refer the discussion following Theorem 3.1\nin \\cite{shapiro1989} for sufficient conditions under which\n$\\omega(\\xi)$ is unique.\n\n\nProposition \\ref{prop:DRO-property} extends the sensitivity reduction\nproperty in Corollary \\ref{Cor_Sensitivity} to settings where the\nminimizer for $\\min_{\\beta \\in B} E_{P_*}\\{\\ell(X;\\beta)\\}$ is not\nunique. The proof of Proposition \\ref{prop:DRO-property} is presented\nin Section \\ref{ssec:proofs_dro_bound_prop} of the supplementary\nmaterial.\n\n\\begin{proposition}\n Suppose that Assumptions A1, A2.a and A2.c are satisfied, the set\n $B$ is compact, and the choice of the radii $(\\delta_n: n \\geq 1)$ is\n such that $n\\delta_n \\rightarrow \\eta \\in (0,\\infty).$ Let the set $B_* $ be $\\arg\\min_{\\beta \\in B} E_{P_*}\\{\\ell(X;\\beta)\\}$. Then, the distributionally robust optimization\n objective $\\Psi_n(\\beta)$ satisfies,\n \\begin{align}\n n ^{1\/2} \\left[\\Psi_n(\\beta) - E\\{\\ell(X;\\beta)\\} \\right]\n \\Rightarrow Z(\\beta) + \\eta^{1\/2} S(\\beta),\n \\label{rhs-nonunique}\n \\end{align}\n where $Z(\\cdot)$ is a zero mean Gaussian process with covariance\n function $\\text{cov}\\{Z(\\beta_1),Z(\\beta_2)\\}\n =\\text{cov}\\{\\ell(X,\\beta_1),\\ell(X,\\beta_2)\\}.$ The above\n weak convergence holds, as $n \\rightarrow \\infty,$ on the space of\n continuous functions equipped with the uniform topology on compact sets.\n %\n \n \n \n \n \n \n \n \n \n \n \n \n \n Consequently, if\n $\\arg \\min_{\\beta \\in B_\\ast}\\{Z(\\beta) +\n \\eta^{1\/2} S(\\beta)\\}$ is singleton with probability one, we have as\n $n\\rightarrow \\infty,$\n \\begin{equation*}\n \\beta_n^{DRO}(\\delta_n) \\Rightarrow \\textnormal{arg}\\,\\textnormal{min}_{\\beta \\in B_\\ast}\\big\\{\n Z(\\beta )+ \\eta^{1\/2} S(\\beta ) \\big\\}.\n\\end{equation*}%\n\n\\label{prop:DRO-property}\n\\end{proposition}\n\n\n\n\\subsection{Construction of Wasserstein distributionally robust confidence regions}\n \\label{Sec_Confidence_Regions}\nAs mentioned in the Introduction, for suitably chosen $\\delta_n,$ the\nset $\\Lambda^+_{\\delta_{n}}(P_{n})$ represents a natural confidence\nregion. In particular, $\\Lambda^+_{\\delta_{n}}(P_{n})$ possesses an\nasymptotically desired coverage, say at level at least $1-\\alpha$, if\nand only if\n\\begin{equation*}\n1-\\alpha\\leq\\lim_{n\\rightarrow\\infty}\\mathrm{pr}\\left\\{ \\beta_{\\ast}\\in\n\\Lambda^+_{\\delta_{n}}(P_{n})\\right\\} =\\mathrm{pr}[-C^{-1}H\\in\\{u:\\varphi^{%\n\\ast }(Cu)\\leq\\eta\\}],\n\\end{equation*}\nor, equivalently, if $\\eta\\geq\\eta_{\\alpha}$, where $\\eta_{\\alpha}$ is\nthe $%\n(1-\\alpha)$-quantile of the random variable $\\varphi^{\\ast}(H)$.\n\nRecall the earlier geometric insight describing\n$\\Lambda^+_{\\delta_{n}}(P_{n})$ as a set containing both\n$\\beta_{n}^{DRO}(\\delta_n)$ and $\\beta_{n}^{ERM},$ as a consequence of\nTheorem \\ref{minmax_prop}. 
Following this, if we let\n$\\eta\\geq\\eta_{\\alpha},$ we then have,\n\\begin{align*}\n  \\lim_{n\\rightarrow\\infty}\\mathrm{pr}\\big\\{\\beta_{\\ast}\\in\\Lambda^+_{\\delta_{n}}(P_{n}),\\\n  \\beta_{n}^{DRO}\\in\\Lambda^+_{\\delta_{n}}(P_{n}),\\\n  \\beta_{n}^{ERM}\\in\\Lambda^+_{\\delta_{n}}(P_{n})\\big\\}\n  &= \\lim_{n\\rightarrow\\infty}\\mathrm{pr} \\big\\{ \\beta_{\\ast}\\in\\Lambda^+_{\\delta_{n}}(P_{n})\\big\\}\\\\\n  &\\geq 1-\\alpha,\n\\end{align*}\nwhich presents the picture of $\\Lambda^+_{\\delta_n}(P_n)$ as a\nconfidence region simultaneously containing $\\beta_\\ast,\\beta_n^{ERM}$\nand $\\beta_{n}^{DRO}(\\delta_n),$ with a desired level of confidence.\n\nThe function $\\varphi^{\\ast}(H)$ can be computed in closed form in\nsome settings. But, typically, computing $\\varphi^{\\ast}(\\cdot)$ may\nbe challenging. We now describe how to obtain a consistent estimator\nfor $\\eta_{\\alpha}$. Define the empirical version of $\\varphi (\\xi )$,\nnamely%\n\\begin{equation*}\n  \\varphi _{n}(\\xi ) = \\frac{1}{4}E_{P_{n}}\\left[ \\left\\Vert \\left\\{\n        D_{x}h(X,\\beta _{\\ast })\\right\\} ^{ \\mathrm{\\scriptscriptstyle T} }\\xi\n    \\right\\Vert _{p}^{2}\\right] = \\frac{1}{4n}\\sum_{i=1}^{n}\\left\\Vert \\left\\{\n      D_{x}h(X_{i},\\beta _{\\ast })\\right\\} ^{ \\mathrm{\\scriptscriptstyle T} }\\xi\n    \\right\\Vert_{p}^{2},\n\\end{equation*}%\nand the associated empirical convex conjugate,\n$\\varphi _{n}^{\\ast }(\\zeta )=\\sup_{\\xi \\in \\mathbb{R}^{d}}\\left\\{ \\xi\n  ^{{ \\mathrm{\\scriptscriptstyle T} }}\\zeta -\\varphi _{n}(\\xi\n  )\\right\\}.$\nProposition \\ref{Prop_Reg_Plug} below, whose proof is in Section\n\\ref{ssec:proof_prop_reg} of the supplementary material, provides a\nbasis for computing a consistent estimator for $\\eta _{\\alpha }$.\n\n\\begin{proposition}\n  \\label{Prop_Reg_Plug}Let ${\\Xi}_{n}$ be any consistent estimator of\n  $%\n  \\text{cov}\\{ h\\left( X,\\beta\\right) \\} $ and write $\\bar{\\Xi}_{n}$\n  for any factorization of $\\Xi_{n}$ such that\n  $\\bar{\\Xi}_{n}\\bar{\\Xi}_{n}^{{\\mathrm{\\scriptscriptstyle T} }}=\\Xi_{n}$%\n  . Let $Z$ be a $d$-dimensional standard Gaussian random vector\n  independent of the sequence $\\left( X_{n}:n\\geq1\\right) $. Then, i)\n  the distribution of $%\n  \\varphi^{\\ast}(Z)$ is continuous, ii)\n  $\\varphi_{n}^{\\ast }(\\cdot)\\Rightarrow\\varphi^{\\ast}(\\cdot)$ as\n  $n\\rightarrow\\infty$ uniformly on compact sets, and iii)\n  $\\varphi_{n}^{\\ast}(\\bar{\\Xi}_{n}Z)\\Rightarrow\\varphi ^{\\ast}(H)$.\n\\end{proposition}\n\nGiven the collection of samples $\\left\\{ X_{i}\\right\\} _{i=1}^{n}$, we\ncan generate independent and identically distributed copies of $Z$ and\nuse Monte Carlo to estimate the\n$\\left( 1-\\alpha\\right) $-quantile, $%\n\\eta_{\\alpha}\\left( n\\right) $, of\n$\\varphi_{n}^{\\ast}(\\bar{\\Xi}_{n}Z)$. 
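\nFor concreteness, a minimal sketch of this Monte Carlo calibration step is given below; the placeholder arrays standing in for the evaluations of $D_{x}h$ and $h$ at the fitted parameter, as well as the numerical maximization used to evaluate the convex conjugate, are illustrative assumptions rather than part of the formal procedure.\n\\begin{verbatim}\n# Illustrative Monte Carlo calibration of eta_alpha(n).\n# dxh[i] stands in for D_x h(X_i, beta_hat); h_vals[i] for h(X_i, beta_hat).\nimport numpy as np\nfrom scipy.optimize import minimize\n\nrng = np.random.default_rng(0)\nn, d, p = 200, 2, 2\ndxh = rng.standard_normal((n, d, d))      # placeholder derivative evaluations\nh_vals = rng.standard_normal((n, d))      # placeholder estimating-function values\n\ndef phi_n(xi):\n    # empirical phi_n(xi): one quarter of the average of || Dxh_i' xi ||_p^2\n    return 0.25 * np.mean([np.linalg.norm(Di.T @ xi, ord=p) ** 2 for Di in dxh])\n\ndef phi_n_conj(zeta):\n    # convex conjugate evaluated by numerical maximisation over xi\n    res = minimize(lambda xi: -(xi @ zeta - phi_n(xi)), x0=np.zeros(d))\n    return -res.fun\n\nXi_n = np.cov(h_vals.T)                   # consistent estimator of cov{h(X, beta*)}\nXi_bar = np.linalg.cholesky(Xi_n)         # factorization Xi_bar Xi_bar' = Xi_n\n\nalpha, n_mc = 0.05, 500\ndraws = [phi_n_conj(Xi_bar @ rng.standard_normal(d)) for _ in range(n_mc)]\neta_alpha_n = np.quantile(draws, 1 - alpha)\n\\end{verbatim}\n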
The previous proposition implies\nthat\n$\\eta_{\\alpha}\\left( n\\right) =\\eta_{\\alpha}+o_{p}\\left( 1\\right),$ as\n$n\\rightarrow\\infty$. This is sufficient to obtain an implementable\nexpression for $\\beta_n^{DRO}\\{\\eta_\\alpha(n)\/n\\}$ which is\nasymptotically equivalent to (\\ref{Rel_DRO_ERM}), as it differs only\nby an error of magnitude $o_{p}(n^{-1\/2}).$\n\nNext, we provide rigorous support for the approximation\n\\begin{equation*}\n\\Lambda^+_{\\delta_{n}}(P_{n})\\approx\\beta_{n}^{ERM}+n^{-1\/2}\\Lambda_{\\eta},\n\\end{equation*}\nwhich can be used to approximate $\\Lambda^+_{\\delta_{n}}(P_{n})$, provided we\ncan estimate $\\Lambda_{\\eta}$.\n\n\\begin{corollary}\n\\label{cor:CIcenter-plugin}Under the assumptions of Theorem \\ref%\n{thm:levelsets-master} and $\\gamma = 1$, we have\n\\begin{equation*}\nn^{1\/2}\\left\\{ \\Lambda^+_{\\delta_{n}}(P_{n})-\\beta_{n}^{ERM}\\right \\}\n\\Rightarrow\\Lambda_{\\eta}.\n\\end{equation*}\nMoreover, if $\\eta\\left( n\\right) =\\eta+o\\left( 1\\right) $, and $%\nC_{n}\\rightarrow C$, then\n\\begin{equation*}\n\\Lambda_{\\eta\\left( n\\right) }^{n}=\\{u:\\varphi_{n}^{\\ast}(C_{n}u)\\leq\n\\eta\\left( n\\right) \\}\\rightarrow\\Lambda_{\\eta}.\n\\end{equation*}\n\\end{corollary}\n\n\\begin{proof}[Proof of Corollary \\ref{cor:CIcenter-plugin}]\nThe first part follows directly from Theorem \\ref{thm:levelsets-master} and an application\nof the continuous mapping theorem:\n\\begin{equation*}\nn^{1\/2}\\left\\{ \\Lambda^+_{\\delta_{n}}(P_{n})-\\beta_{n}^{ERM}\\right\\}\n=n^{1\/2}\\left\\{ \\Lambda^+_{\\delta_{n}}(P_{n})-\\beta_{\\ast}\\right\\} -\nn^{1\/2}\\left\\{ \\beta_{n}^{ERM}-\\beta_{\\ast}\\right\\} \\Rightarrow\n\\Lambda_{\\eta}+C^{-1}H-C^{-1}H.\n\\end{equation*}\nThe second part of the result follows from the regularity results in\nProposition \\ref{Prop_Reg_Plug}.\n\\end{proof}\n\nThe next result, as we shall explain, allows us to obtain\ncomputationally efficient approximations of the set $\\Lambda_{\\eta}.$\nA completely analogous result can be used to estimate\n$\\Lambda_{\\eta\\left( n\\right) }^{n}$, simply replacing\n$\\varphi^{\\ast}(\\cdot)$, $\\varphi(\\cdot)$ and $C$ by $%\n\\varphi_{n}^{\\ast}(\\cdot)$, $\\varphi_{n}(\\cdot)$ and $C_{n}.$\n\n\\begin{proposition}\nThe support function of the convex set $\\Lambda _{\\eta }=\\{u:\\varphi ^{\\ast\n}(Cu)\\leq \\eta \\}$ is\n\\begin{equation*}\nh_{\\Lambda _{\\eta }}(v)=2\\{\\eta \\varphi (C^{-1}v)\\}^{1\/2},\n\\end{equation*}%\nwhere the support function of a convex set $A$ is defined as $h_{A}(x)=\\sup\n\\{x\\cdot a:a\\in A\\}.$ \\label{prop:level-sets-char}\n\\end{proposition}\nThe proof of Proposition \\ref{prop:level-sets-char} is in Section \\ref{ssec:proof_prop_reg} of the supplementary material.\n\\begin{remark}\n\\textnormal{\\ Proposition \\ref{prop:level-sets-char} can be used to obtain a\ntight envelope of the set $\\Lambda _{\\eta }$ by evaluating an intersection\nof hyperplanes that enclose $\\Lambda _{\\eta }$. 
Recall from the definition\nof support function that\n\\begin{equation*}\n\\Lambda _{\\eta }=\\cap _{u}\\{v:u\\cdot v\\leq h_{\\Lambda _{\\eta }}\\left(\nu\\right) \\}.\n\\end{equation*}%\nTherefore, for any $u_{1},\\ldots,u_{m},$ the set\n$\\Lambda _{\\eta }$ is contained in\n$\\cap _{u_{1},...u_{m}}\\{v:u_{i}\\cdot v\\leq h_{\\Lambda _{\\eta }}\\left(\n  u_{i}\\right) \\},$\nand\n$\\Lambda _{\\eta \\left( n\\right) }^{n}$ is contained in\n$\\cap _{u_{1},...u_{m}}\\{v:u_{i}\\cdot v\\leq h_{\\Lambda _{\\eta \\left(\n    n\\right) }^{n}}\\left( u_{i}\\right) \\}.$\n}\n\\end{remark}\n\n\n\n\\section{Numerical examples: Geometry and coverage probabilities}\n\n\\label{sec:geo_insignts}\n\n\\subsection{Distributionally robust linear regression}\nWe first offer a brief introduction to the distributionally robust\nversion of the linear regression problem considered in\n\\citet{blanchet2016robust}. Specifically, the data is generated by\n$Y=\\beta_{\\ast}^{{ \\mathrm{\\scriptscriptstyle T} }}X+\\epsilon,$ where\n$X\\in\\mathbb{R}^{d}$ and $%\n\\epsilon$ are independent, $C=E(XX^{{\\mathrm{\\scriptscriptstyle T%\n  } }}) $ and $\\epsilon\\sim\\mathcal{N}(0,\\sigma^{2})$. We consider\nthe squared loss\n$\\ell(x,y;\\beta)= 1\/2(y-\\beta^{{\\mathrm{\\scriptscriptstyle T}\n  }}x)^{2}$ and take the cost function\n$c:\\mathbb{R}^{d+1} \\times \\mathbb{R}%\n^{d+1} \\rightarrow [0,\\infty]$ to be\n\\begin{equation}\nc\\{(x,y),(u,v)\\}=\\left\\{\n\\begin{array}{c}\n\\left\\Vert x-u\\right\\Vert _{q}^{2} \\\\\n\\infty%\n\\end{array}\n\\right.\n\\begin{array}{c}\n\\text{if }y=v, \\\\\n\\text{otherwise.}%\n\\end{array}%\n\\label{tr-cost-LinReg}\n\\end{equation}\nThen, from \\citet[Theorem 1]{blanchet2016robust}, we have\n\\begin{equation}\n  \\min_{\\beta\\in\\mathbb{R}^{d}}\\sup_{P:D_{c}(P,P_{n})\\leq\\delta_{n}}E_{P}\\left[\n  \\ell(X,Y;\\beta)\\right] = \\frac{1}{2}\\min_{\\beta\\in\\mathbb{R}^{d}}\\big[ E_{P_{n}}\n  \\left\\{ (Y-\\beta^{{\\mathrm{\\scriptscriptstyle T} }}X)^{2}\\right\\}^{1\/2}\n  +\\delta_{n}^{1\/2}\\left\\Vert \\beta\\right\\Vert _{p}\\big]^{2},\n\\label{Sqrt_Lasso_Est}\n\\end{equation}\nwhere $p$ satisfies $1\/p+1\/q=1.$ Following Corollary\n\\ref{cor:CIcenter-plugin}, an approximate confidence region is\n\\begin{equation*}\n\\Lambda^+_{\\delta_{n}}(P_{n})\\approx n^{-1\/2}\\Lambda_{\\eta_{\\alpha}}+\\beta\n_{n}^{ERM},\n\\end{equation*}\nwhere\n$\\Lambda_{\\eta_\\alpha}=\\{\\theta:\\varphi^{\\ast}( C\\theta) \\leq\n\\eta_{\\alpha}\\},$\n$\\varphi(\\xi)= 4^{-1}E \\{\\Vert e\\xi-( \\xi^{{\\mathrm{%\n    \\scriptscriptstyle T} }}X) \\beta_\\ast\\Vert_{p}^{2}\\},$ the\nconstant $\\eta_\\alpha$ is the $(1-\\alpha)$-quantile of $\\varphi^{\\ast}(H)$, that is,\n$\\mathrm{pr} \\{\\varphi^\\ast (H)\\leq\\eta_{\\alpha}\\}=1-\\alpha$ for\n$H\\sim\\mathcal{N}(0,C\\sigma^{2}),$ and $\\delta_{n}=\\eta_{\\alpha}\/n.$\nBy performing a change of variables via linear transformation in the\nanalysis of the case $c(x,y) = \\Vert x - y \\Vert_2^2,$ Theorem \\ref%\n{thm:levelsets-master} can be directly adapted to the choice $c(x,y)$\nbeing a Mahalanobis metric as in,\n\\begin{align}\nc\\left( x,y\\right) =\\left( x-y\\right) ^{{\\mathrm{\\scriptscriptstyle T} }%\n}A\\left( x-y\\right), \\label{Mahalanobis-metric}\n\\end{align}\nfor some matrix $A \\succ 0.$ The respective\n$\\Lambda_{\\eta_\\alpha} = \\{\\theta:\\varphi^{\\ast}( C\\theta) \\leq\n\\eta_{\\alpha}\\}$ is computed in terms of\n\\begin{equation*}\n  \\varphi(\\xi)=4^{-1}E\\big\\{ \\Vert\n  \\xi^{{\\mathrm{\\scriptscriptstyle T} }}\n  D_{x}h(X,\\beta_{\\ast })A^{-1\/2}\\Vert _{2}^{2}\\big\\}.\n\\end{equation*}\nFor the 
choice $c\\left( x,y\\right) =\\left( x-y\\right) ^{{\\mathrm{%\n\\scriptscriptstyle T} }}A\\left( x-y\\right), $ the relationship between distributionally robust\nand regularized estimators, as in (\\ref{Sqrt_Lasso_Est}), is\n\\begin{equation*}\n \\min_{\\beta\\in\\mathbb{R}^{d}}\\sup_{P:D_{c}(P,P_{n})\\leq\\delta_{n}}E_{P}\\left\\{\n l(X,Y;\\beta)\\right\\} = \\frac{1}{2}\\min_{\\beta\\in\\mathbb{R}^{d}}\\left[\n E_{P_{n}}\n \\left\\{ (Y-\\beta^{{\\mathrm{\\scriptscriptstyle T} }}X)^{2}\\right\\}^{1\/2}\n +\\delta_{n}^{-1\/2}\\big\\Vert A^{-1\/2}\\beta\\big\\Vert _{2}\\right] ^{2}.\n\\end{equation*}\nSee \\citet{blanchet2017data} for an account of improved out-of-sample\nperformance resulting from Mahalanobis cost choices.\n\n\n\n\\subsection{Shape of confidence regions}\n\nThe goal of this section is to provide some numerical implementations\nto gain intuition about the geometry of the set $\\Lambda_{\\eta}$ for\ndifferent transportation cost choices. We use the empirical set\n\\begin{equation*}\n \\Lambda_{\\eta_{\\alpha}}^n=\\{\\theta:{\\varphi}^{\\ast}_n( {C}_n\n \\theta) \\leq n^{-1\/2}\\tilde{\\eta}_{\\alpha}\\},\n\\end{equation*}\nto approximate the desired confidence region as in Corollary\n\\ref{cor:CIcenter-plugin}. In the above expression,\n${\\varphi}_n(\\xi)= 4^{-1}E_{P_{n}} \\{\\Vert e\\xi-\\left(\n \\xi^{\\mathrm{\\scriptscriptstyle T} } X\\right)\n\\beta_{n}^{ERM}\\Vert_{p}^{2}\\} $, $\\eta_{\\alpha}(n)$ is such that\n$\\mathrm{pr}(\\tilde{\\varphi}^{\\ast}_n(H)\\leq1-\\alpha)=\\eta_{\\alpha}(n)$ for\n$H\\sim%\n\\mathcal{N}(0,{C}_n{\\sigma}_n^{2}),$\n${C}_n=E_{P_{n}}\\left[ XX^{ \\mathrm{\\scriptscriptstyle T}\n }\\right] $, and\n${\\sigma}^{2}_n =E_{P_{n}}[ \\{(Y-( \\beta_{n}^{ERM})^{{\\mathrm{\\scriptscriptstyle T} } } X\\}^{2}] $.\n\nIn the following numerical experiments, the data is sampled from a linear\nregression model with parameters $\\sigma^{2}=1$, $\\beta_{*}=[0.5,0.1]^{{\n\\mathrm{\\scriptscriptstyle T} }},n=100$ and\n\\begin{equation}\nX\\sim\\mathcal{N}\\left( 0,\\left[\n\\begin{array}{cc}\n 1 & \\rho \\\\\n \\rho & 1%\n\\end{array}\n\\right] \\right),\n\\label{eq:cov-matrix}\n\\end{equation}\nwith $\\rho = 0.7.$ In Figures \\ref{1-norm}-\\ref{inf-norm}, we draw the\n$95\\%$ confidence region corresponding to the \\sloppy{choices\n $p=1,3\/2,2,3,\\infty $,} ($%\nq=\\infty,3,2,3\/2,$ respectively) by means of support functions defined\nin Proposition %\n\\ref{prop:level-sets-char}. In addition, a confidence region for\n$\\beta _{\\ast }$ resulting from the asymptotic normality of the\nleast-squares estimator,\n${n}^{1\/2}(\\beta _{n}^{ERM}-\\beta ^{\\ast })\\Rightarrow \\mathcal{N}%\n(0,C^{-1}\\sigma ^{2}),$\nis\n\\begin{equation*}\n\\Lambda _{CLT}(P_{n})=n^{-1\/2}\\{\\theta :\\theta ^{{\\mathrm{\\scriptscriptstyle %\nT}}}C\\theta \/\\sigma ^{2}\\leq \\chi _{1-\\alpha }^{2}(d)\\}+\\beta _{n}^{ERM},\n\\end{equation*}%\nwhere $\\chi _{1-\\alpha }^{2}(d)$ is the $1-\\alpha $ quantile of the\nchi-squared distribution with $d$ degrees of freedom. 
One can select the\nmatrix $A$ in the Mahalanobis metric \\eqref{Mahalanobis-metric} such that\nthe resulting confidence region coincides with $\\Lambda _{CLT}(P_{n})$.\nNamely, $A$ is chosen by solving the equation%\n\\begin{equation}\nE\\left[ \\left\\{ e\\xi -\\left( \\xi ^{{\\mathrm{\\scriptscriptstyle T}}}X\\right)\n\\beta _{\\ast }\\right\\} A^{-1}\\left\\{ e\\xi -\\left( \\xi ^{{\\mathrm{%\n\\scriptscriptstyle T} }}X\\right) \\beta _{\\ast }\\right\\} ^{{\\mathrm{%\n\\scriptscriptstyle T} }}\\right] =C\\sigma ^{2}. \\label{eqn_A2}\n\\end{equation}\n\nFigure \\ref{clt} gives the confidence region for the choice $p=2$ and\n$%\n\\Lambda_{CLT}(P_n)$ superimposed with various distributionally robust\nminimizer along with the empirical risk minimizer.\nIt is evident from the figures that $p=1$ gives a diamond shape, $p=2$\ngives an elliptical shape and $p=\\infty$ gives a rectangular shape.\nFurthermore, we see that the distributionally robust optimization solutions all reside in their\nrespective confidence regions but may lie outside of the confidence\nregions of other norms.\n\\begin{figure}[htbh]\n\\centering\n\\subfigure[$p=1$]{\n\\label{1-norm} \\includegraphics[width=1.81in]{1-norm-std_supp.eps}}\n\\subfigure[$p=1.5$]{\n\\label{2-norm} \\includegraphics[width=1.81in]{15-norm-std_supp.eps}}\n\\subfigure[$p=2$]{\n\\label{2-norm} \\includegraphics[width=1.81in]{2-norm-std_supp.eps}}\n\\subfigure[$p=3$]{\n\\label{2-norm} \\includegraphics[width=1.81in]{3-norm-std_supp.eps}}\n\\subfigure[$p=\\infty$]{\n\\label{inf-norm} \\includegraphics[width=1.81in]{Inf-norm-std_supp.eps}}\n\\subfigure[CLT]{\n\\label{clt} \\includegraphics[width=1.81in]{clt_new.eps}}\n\\caption{Confidence regions for different norm choices and central\n limit theorem based confidence region plotted together with the\n respective $\\beta_n^{DRO}$ estimators and $\\beta_n^{ERM}$}.\n\\end{figure}\nWe find the induced confidence regions constructed by the Wasserstein\ndistributionally robust optimization formulations are somewhat similar across the various $l_{p}$\nnorms, but they are all different to the standard central limit\ntheorem based confidence region. As noted, the Mahalanobis cost can be\ncalibrated to exactly match the standard central limit theorem\nconfidence region.\n\\subsection{Coverage probabilities and distributionally robust optimization solutions}\nIn this section, we test the scenario in which the covariates are\nhighly correlated. Specifically, the data is sampled from a linear\nregression model with parameters $\\sigma ^{2}=1$, $n=100$, $p=2$. The\nrandom vector $X$ is taken to be distributed in (\\ref{eq:cov-matrix}),\nconsidering three different values for $\\rho:$ we choose\n$\\rho = 0.95,0,-0.95.$ We consider the following two cases for the\nunderlying parameter $\\beta_\\ast$: $\\beta_{*}=[0.5,0.5]^{\\mathrm{\\scriptscriptstyle T} }$ and\n$\\beta_{*}=[1,0]^{\\mathrm{\\scriptscriptstyle T} }.$ In Table \\ref{tab:coverage} below, we report the\ncoverage probabilities of the underlying $\\beta _{*}$ and\n$\\beta_n^{DRO}(\\delta_n)$ in both the $\\ell_2$-confidence region and\nthe central limit theorem based confidence regions. 
Specifically, we\nreport the following four probabilities:\n \\[\n{\\rm pr}\\{\\beta^{DRO}_n \\in \\Lambda^+_{\\delta_{n}}(P_{n})\\}, \\quad {\\rm pr}\\{\\beta_* \\in \\Lambda^+_{\\delta_{n}}(P_{n})\\}, \\quad {\\rm pr}\\{\\beta^{DRO}_n \\in \\Lambda_{CLT}(P_{n})\\}, \\quad {\\rm pr}\\{\\beta_* \\in \\Lambda_{CLT}(P_{n})\\}.\n \\]\n We sample 1000 datasets and report the coverage probabilities in\n Table \\ref{tab:coverage}. From Table \\ref{tab:coverage}, we observe\n that for $\\beta_*$, both the $\\ell_2$ confidence region and the\n central limit theorem based confidence region achieve the target 95\\%\n coverage. Furthermore, the coverage for the distributionally robust\n estimator of the $\\ell_2$ confidence region is 100\\%, which validates\n our theory. However, when $\\rho = -0.95$ and\n $\\beta_* = [0.5,0.5]^{\\mathrm{\\scriptscriptstyle T} }$, the coverage for the distributionally\n robust estimator in the central limit theorem based confidence region\n is only $75.8 \\%$. In this example, the asymptotic results developed\n indicate that this coverage probability converges to zero, when $n$\n tends to infinity.\n\n\\begin{table}[htbp]\n \\centering\n \\caption{Coverage Probability}\n \\begin{tabular}{cccccc}\n \\toprule\n $\\beta_0$ & $\\rho$ & \\multicolumn{2}{c}{$\\ell_2$-confidence region} & \\multicolumn{2}{c}{CLT confidence region} \\\\\n \\midrule\n & & Coverage for $\\beta_n^{DRO}$ & Coverage for $\\beta_*$ & \\multicolumn{1}{l}{Coverage for $\\beta_n^{DRO}$} & Coverage for $\\beta_*$ \\\\\n \\midrule\n \\multirow{3}[0]{*}{$\\begin{bmatrix} 0.5 \\\\ 0.5 \\end{bmatrix}$}\n & 0.95 & 100.0\\% & 94.5\\% & 99.4\\% & 94.6\\% \\\\\n & 0 & 100.0\\% & 94.0\\% & 97.1\\% & 93.5\\% \\\\\n & -0.95 & 100.0\\% & 94.8\\% & 75.8\\% & 94.4\\% \\\\\n\n \\midrule\n \\multirow{3}[0]{*}{$\\begin{bmatrix} 1.0 \\\\ 0.0 \\end{bmatrix}$}\n & 0.95 & 100.0\\% & 94.6\\% & 93.7\\% & 95.4\\% \\\\\n & 0 & 100.0\\% & 94.6\\% & 100\\% & 94.1\\% \\\\\n & -0.95 & 100.0\\% & 95.3\\% & 91.2\\% & 94.9\\% \\\\\n\n \\bottomrule\n \\end{tabular}%\n \\label{tab:coverage}%\n\\end{table}%\nFigures \\ref{fig:scatter_0.5} and \\ref{fig:scatter_1} show the scatter\nplots of the estimators, $\\beta_n^{ERM}$ and $\\beta_n^{DRO},$ when the\nunderlying $\\beta_\\ast$ takes the values $[0.5,0.5]^{\\mathrm{\\scriptscriptstyle T} }$ and\n$[1,0]^{\\mathrm{\\scriptscriptstyle T} }$, respectively. In the near-collinearity\n cases where $\\rho = 0.95$ or $-0.95$, the lower spreads for the\n distributionally robust estimators reveal their better performance\n over the empirical risk minimizing solutions. The utility of the\n proposed $\\ell_2$-confidence region emerges in light of the better\n performance of the distributionally robust estimator $\\beta_n^{DRO}$\n and its aforementioned lack of membership in $\\Lambda_{CLT}(P_n).$\n\\begin{figure}[ptbh]\n\\centering\n\\subfigure[$\\rho=0.95$]{\n \\includegraphics[width=1.81in]{scatter_point_95_5.eps}}\n\\subfigure[$\\rho=0$]{\n \\includegraphics[width=1.81in]{scatter_point_0_5.eps}}\n\\subfigure[$\\rho=-0.95$]{\n\\includegraphics[width=1.81in]{scatter_point_-95_5.eps}}\n\\caption{Scatter plots of $\\beta_n^{ERM}$ (black circles) and\n $\\beta_n^{DRO}$ (red circles) for $\\beta_0 = [0.5,0.5]^{\\mathrm{\\scriptscriptstyle T} }$. 
}\n\\label{fig:scatter_0.5}\n\\end{figure}\n\\begin{figure}[ptbh]\n\\centering\n\\subfigure[$\\rho=0.95$]{\n \\includegraphics[width=1.81in]{scatter_point_95_10.eps}}\n\\subfigure[$\\rho=0$]{\n \\includegraphics[width=1.81in]{scatter_point_0_10.eps}}\n\\subfigure[$\\rho=-0.95$]{\n\\includegraphics[width=1.81in]{scatter_point_-95_10.eps}}\n\\caption{Scatter plots of $\\beta_n^{ERM}$ (black circles) and\n $\\beta_n^{DRO}$ (red circles) for $\\beta_0 = [1.0,0.0]^{\\mathrm{\\scriptscriptstyle T} }$.}\n\\label{fig:scatter_1}\n\\end{figure}\n\\section{Proofs of main results}\n\n\\label{sec:proof} Theorem \\ref{thm:levelsets-master} is obtained by\nconsidering appropriate level sets involving auxiliary functionals\nwhich we define next. Following \\citet{blanchet2016robust}, we define\nthe robust Wasserstein profile function, associated with the\nestimation of $\\beta_\\ast$ by solving\n$E_{P_n}\\{D_\\beta h(X,\\beta)\\} = 0,$ as follows:\n\\begin{equation*}\nR_{n}( \\beta) =\\inf_{P \\in \\mathcal{P}(\\Omega)} \\big[D_c(P,P_{n}) : \\beta \\in\n\\arg \\min_{\\beta \\in B }E_{P}\\left\\{ \\ell (X;\\beta )\\right\\} \\big].\n\\end{equation*}\nThis definition, as noted in \\citet{blanchet2016robust}, allows to\ncharacterize the set $\\Lambda^+_{\\delta}\\left( P_{n}\\right)$ in terms\nof an associated level set; in particular, we have,\n\\begin{equation}\n\\Lambda^+_{\\delta}( P_{n}) = \\mathrm{cl}{\\left\\{\\beta:R_{n}( \\beta) \\leq\\delta \\right\\}},\n\\label{CR_based_on_RWP}\n\\end{equation}\nwhere $\\mathrm{cl}(\\cdot)$ denotes closure. Indeed, this is because\n\\begin{equation*}\n \\Lambda^+_{\\delta}( P_{n})= \\mathrm{cl} \\big[\\cap_{\\epsilon>0}\\big\\{ \\beta \\in B:\\beta \\in\n \\arg \\min_{\\beta \\in B }E_{P}\\{ \\ell (X;\\beta )\\} \\text{ for some }P\\in\n \\mathcal{U}_{\\delta _{n}+\\epsilon}(P_{n})\\big\\} \\big].\n\\end{equation*}\nIf $\\beta \\in B^\\circ$, we have\n$R_{n}( \\beta) =\\inf_{P \\in \\mathcal{P}(\\Omega)}[D_c(P,P_{n}) :E_{P}\n\\left\\{h( X,\\beta)\\right\\} = 0 ].$\n\nNext, for the sequence of radii $\\delta _{n}=n^{-\\gamma }\\eta $, for some\npositive constants $\\eta ,\\gamma $, define functions $V_{n}^{DRO}:\\mathbb{R}%\n^{d}\\rightarrow \\mathbb{R}$ and $V_{n}^{ERM}:\\mathbb{R}^{d}\\rightarrow\n\\mathbb{R}$, as below, by considering suitably scaled versions of the\ndistributionally robust and empirical risk objective functions, namely\n\\begin{align*}\n V_{n}^{DRO}(u)& =n^{\\bar{\\gamma}}\\big \\{ \\Psi _{n}\\big( \\beta _{\\ast\n }+n^{-\\bar{\\gamma}\/2}u\\big) -\\Psi _{n}(\\beta _{\\ast })\\big\\} \\text{ and }\n \\\\\n V_{n}^{ERM}(u)& =n\\big[ E_{P_{n}}\\big\\{ \\ell (X;\\beta _{\\ast\n }+n^{-1\/2}u)\\big\\} -E_{P_{n}}\\big\\{ \\ell (X;\\beta _{\\ast })\\big\\} \\big],\n\\end{align*}%\nwhere $\\bar{\\gamma}=\\min \\left\\{ \\gamma ,1\\right\\} $ is defined in Theorem %\n\\ref{thm:levelsets-master}. Moreover, define $V:\\mathbb{R}^{d}\\times \\mathbb{%\nR}^{d}\\rightarrow \\mathbb{R}$ via\n\\begin{equation*}\n V(x,u)=x^{{\\mathrm{\\scriptscriptstyle T} }}u+ {2}^{-1}u^{{\\mathrm{\\scriptscriptstyle T} }}Cu.\n\\end{equation*}%\nThe following result, as we shall see, can be used to establish Theorem \\ref%\n{thm:levelsets-master} directly.\n\n\\begin{theorem}\n \\label{thm:master}\n Suppose that the assumptions made in Theorem\n \\ref{thm:levelsets-master} hold. 
Then we have,\n\\begin{equation*}\n\\big\\{ V_{n}^{ERM}(\\cdot),\\ V_{n}^{DRO}(\\cdot),\\ nR_{n}\\big( \\beta_{\\ast\n}+n^{-1\/2}\\times\\cdot\\ \\big) \\big\\} \\Rightarrow\\left\\{ V(-H,\\cdot ),\\\nV\\{-f_{\\eta,\\gamma}(H),\\cdot\\},\\ \\varphi^{\\ast}(H-C\\times\\cdot\\,)\\right\\} ,\n\\end{equation*}\non the space $C(\\mathbb{R}^{d};\\mathbb{R})^{3}$ equipped with the topology\nof uniform convergence in compact sets.\n\\end{theorem}\n{ Ensuring smoothness of $D_{\\beta}h(x+\\Delta,\\beta)$ and $D_{x}h(x+\\Delta,\\beta)$ around $\\beta=\\beta_{\\ast},$ as in Assumption A2.c, is useful towards investigating the behavior of $nR_{n}\\big( \\cdot \\big)$ in the neighborhood of $\\beta^*,$ as required in the third component in the triplet in Theorem \\ref{thm:master}.}\n\n\\subsection{Proof of Theorem \\protect\\ref{thm:master}}\nThroughout this section, we suppose that the assumptions imposed in\nTheorem \\ref{thm:levelsets-master} hold. Let\n\\begin{equation*}\nH_{n} = {n}^{-1\/2}\\sum_{i=1}^{n}h\\left( X_{i},\\beta_{\\ast}\\right)\n\\end{equation*}\nThe following sequence of results will be useful in proving Theorem\n\\ref{thm:master} and Proposition\n\\ref{prop:constrained-support}. Propositions \\ref{prop:ERM} and \\ref{prop:DRO} hold true for $\\Omega = \\mathbb{R}^d$; while propositions \\ref{prop:RWPEq} - \\ref{prop:RWPcont} hold true for general $\\Omega$ under the assumption $P_{\\ast}(\\Omega^\\circ) =1$ in Proposition \\ref{prop:constrained-support}.\n\n\\begin{proposition}\nFix $\\alpha\\in\\lbrack0,1].$ Given $\\varepsilon,\\varepsilon^{\\prime},K>0,$ there exists a\npositive integer $n_{0}$ such that\n\\begin{equation*}\n{\\rm pr} \\left[\\big\\vert n^{\\alpha-1}V_{n}^{ERM}\\{n^{( 1-\\alpha)\n\/2}u\\}-n^{\\alpha\/2}H_{n}^{{\\mathrm{\\scriptscriptstyle T} }}u-2^{-1}u^{{\\mathrm{\\scriptscriptstyle T} }}Cu\\big\\vert \\leq\n\\varepsilon^{\\prime} \\right] \\geq 1-\\varepsilon,\n\\end{equation*}\nfor every $n>n_{0}$ and $\\Vert u\\Vert_2\\leq K.$ Specifically, if $\\alpha=1,$\nwe have%\n\\begin{equation}\n{\\rm pr} \\left\\{\\left\\vert V_{n}^{ERM}(u)-H_{n}^{{\\mathrm{\\scriptscriptstyle T} }}u- 2^{-1}u^{{\\mathrm{\\scriptscriptstyle T} }}Cu\\right\\vert\n\\leq\\varepsilon^{\\prime}\\right\\} \\geq 1-\\varepsilon.\n\\label{eqn:V_ERM}\n\\end{equation}\n\\label{prop:ERM}\n\\end{proposition}\n\\begin{proposition}\nGiven $\\varepsilon,\\varepsilon ^{\\prime },K>0,$ there exists a positive integer $n_{0}$\nsuch that\n\\begin{equation}\n{\\rm pr} \\left\\{ \\left\\vert V_{n}^{DRO}(u)+f_{\\eta ,\\gamma }(-H_{n})^{{\\mathrm{\\scriptscriptstyle T} }}u- 2^{-1}\nu^{{\\mathrm{\\scriptscriptstyle T} }}Cu\\right\\vert \\leq \\varepsilon ^{\\prime } \\right\\} \\geq 1-\\varepsilon,\n\\label{eqn:V_DRO}\n\\end{equation}%\nfor every $n>n_{0}$ and $\\Vert u\\Vert_2 \\leq K.$ \\label{prop:DRO}\n\\end{proposition}\n\\begin{proposition}\n Define the set $\\Theta \\subset \\mathbb{R}^d$ as\n\\[\n \\Theta=\\{ \\beta \\in B^\\circ : 0 \\in {\\rm conv} [\\{h(x,\\beta)\\mid x \\in\n \\Omega\\}]^\\circ \\},\n\\]\nwhere ${\\rm conv} (S)$ denotes the convex hull of the set $S$.\nFor $\\beta_* + n^{-1\/2} u \\in \\Theta$, We have,\n\\begin{align*}\nn R_{n}\\big( \\beta_{\\ast}+ n^{-1\/2}u\\big) = \\max_{\\xi} \\left\\{ -\\xi^{{\\mathrm{\\scriptscriptstyle T} }}\nH_{n} - M_{n}(\\xi,u) \\right\\} ,\n\\end{align*}\nwhere\n\\begin{align*}\n M_{n}(\\xi,u)\n &= \\frac{1}{n} \\sum_{i=1}^{n} \\max_{\\Delta:X_i + n^{-1\/2}\\Delta \\in \\Omega} \\left\\{ \\xi^{{\\mathrm{\\scriptscriptstyle T} }} \\int_{0}^{1}\n 
D_{x}h\\big( X_{i} + n^{-1\/2}t\\Delta, \\beta_{\\ast}+n^{-1\/2} tu\n \\big) \\Delta {\\rm d} t \\right . \\\\\n &\\qquad\\qquad + \\xi^{{\\mathrm{\\scriptscriptstyle T} }} \\int_{0}^{1}D_{\\beta}h\\big( X_{i} + n^{-1\/2}t\\Delta,\n \\left. \\beta_{\\ast}+ n^{-1\/2} t u\\big) u{\\rm d} t - \\Vert\\Delta\\Vert\n _{q}^{2} \\right\\}.\n\\end{align*}\nFurthermore, there exists a neighborhood of $\\beta*$, $B_\\epsilon(\\beta_*)$ such that $B_\\epsilon(\\beta_*) \\subset \\Theta$.\n\\label{prop:RWPEq}\n\\end{proposition}\n\n\n\\begin{proposition}\nConsider any $\\varepsilon ,\\varepsilon ^{\\prime },K>0.$ Then there exist $%\nb_{0}\\in (0,\\infty )$ such that for any $b\\geq b_{0},c_0> 0,\\epsilon_0>0$, we\nhave a positive integer $n_{0}$ such that,\n\\begin{equation*}\n\\mathrm{pr}\\left[ \\sup_{\\Vert u\\Vert _{2}\\leq K}\\big \\{ nR_{n}\\big( \\beta\n_{\\ast }+n^{-1\/2}u\\big) -f_{up}(H_{n},u,b,c)\\big \\} \\leq \\varepsilon\n^{\\prime }\\right] \\geq 1-\\varepsilon ,\n\\end{equation*}%\nfor all $n\\geq n_{0},$ and $f_{up}(H_{n},u,b,c)$ equals\n\\begin{equation*}\n \\max_{\\Vert \\xi \\Vert _{p}\\leq b}\\big\\{-\\xi ^{{\\mathrm{\\scriptscriptstyle T} }}H_{n}-%\n E \\big[ 4^{-1}\\Vert \\{ D_{x}h(X,\\beta _{\\ast\n })\\} ^{{\\mathrm{\\scriptscriptstyle T} }}\\xi \\Vert _{p}^{2}+\\xi ^{{\\mathrm{\\scriptscriptstyle T} }}D_{\\beta }h(X,\\beta _{\\ast\n })u\\big] \\mathbb{I}(X \\in C_0^{\\epsilon_0}) \\big\\},\n\\end{equation*}\nwith $C_0=\\{x \\in \\Omega: \\|x\\|_p \\leq c_0\\}.$\n\\label{prop:RWPUB}\n\\end{proposition}\n\\begin{proposition}\n For any $\\varepsilon ,\\varepsilon ^{\\prime },K,b>0,$ there exists a\n positive integer $n_{0}$ such that,\n\\begin{equation*}\n{\\rm pr}\\left[ \\sup_{\\Vert u\\Vert _{2}\\leq K}\\big\\{ nR_{n}\\big( \\beta\n_{\\ast }+n^{-1\/2}u\\big) -f_{low}(H_{n},u,b)\\big\\} \\geq -\\varepsilon\n^{\\prime }\\right] \\geq 1-\\varepsilon ,\n\\end{equation*}%\nfor all $n>n_{0},$ where\n\\begin{equation*}\n f_{low}(H_{n},u,b)=\\max_{\\Vert \\xi \\Vert _{p}\\leq b}\\big\\{-\\xi ^{{\\mathrm{\\scriptscriptstyle T} }}H_{n}-%\n E\\left[ 4^{-1}\\Vert \\left\\{ D_{x}h(X,\\beta _{\\ast })\\right\\}\n ^{{\\mathrm{\\scriptscriptstyle T} }}\\xi \\Vert _{p}^{2}+\\xi ^{{\\mathrm{\\scriptscriptstyle T} }}D_{\\beta }h(X,\\beta _{\\ast })u\\right] \\big\\}.\n\\end{equation*}\n\\label{prop:RWPLB}\n\\end{proposition}\n\n\\begin{proposition}\n \\label{prop:tightness-1} For any $\\varepsilon>0,$ there exist\n constants $a,n_{0} > 0$ such that for every\n $n\\geq n_{0},$\n\\begin{equation*}\n{\\rm pr}\\left\\{ nR_{n}(\\beta_{\\ast})\\leq a\\right\\} \\geq1-\\varepsilon,\n\\end{equation*}\n\n\\end{proposition}\n\n\\begin{proposition}\nFor any $\\varepsilon,\\varepsilon^{\\prime}, K > 0,$ there exist positive\nconstants $n_{0},\\delta$ such that,\n\\begin{align*}\n\\sup_{\\underset{\\Vert u_{i}\\Vert_{2} \\leq K}{\\Vert u_{1} - u_{2}\\Vert_{2}\n\\leq\\delta}} \\big\\vert n R_{n}\\big( \\beta_{\\ast}+ n^{-1\/2}u_{1}\\big) - n\nR_{n}\\big( \\beta_{\\ast}+ n^{-1\/2}u_{2}\\big) \\big\\vert \\leq\n\\varepsilon^{\\prime},\n\\end{align*}\nwith probability exceeding $1-\\varepsilon,$ for every $n > n_{0}.$ \\label%\n{prop:RWPcont}\n\\end{proposition}\n\nProofs of Propositions \\ref{prop:ERM} - \\ref{prop:RWPcont} are furnished in\nSection \\ref{sec:proofs-props-master-thm} in the supplementary material. 
With the statements of these\nresults, we proceed with the proof of Theorem \\ref{thm:master} as follows.\n\n\\begin{proof}[Proof of Theorem \\ref{thm:master}]\n  Since $E\\{h(X,\\beta _{\\ast })\\}=0,$ it follows from the central limit\n  theorem that\n$\nH_{n}\\Rightarrow -H,\n$\nwhere\n$H\\sim \\mathcal{N}(0,E\\{h(X,\\beta _{\\ast })h(X,\\beta _{\\ast })^{\n    {\\mathrm{\\scriptscriptstyle T} }}\\}).$ Since inequalities \\eqref{eqn:V_DRO} and \\eqref{eqn:V_ERM}\nare associated with the same $H_n$, it follows from Propositions\n\\ref%\n{prop:ERM} and \\ref{prop:DRO} that,\n\\begin{equation}\nV_{n}^{ERM}(\\cdot )\\Rightarrow V^{ERM}(\\cdot )=V(-H,\\cdot )\\quad \\text{ and\n}\\quad V_{n}^{DRO}(\\cdot )\\Rightarrow V^{DRO}(\\cdot )=V\\{-f_{\\eta ,\\gamma\n}(H),\\cdot \\} \\label{inter-mas-thm-pf-0}\n\\end{equation}%\njointly, on the space topologized by uniform convergence on compact sets.\n\nTo prove convergence of the third component of the triplet considered in\nTheorem \\ref{thm:master}, observe from the definitions of $\\varphi ^{\\ast\n}(\\cdot )$ and $C$ that,\n\\begin{equation}\n  \\varphi ^{\\ast }(H-Cu)=\\max_{\\xi }\\big(\\xi ^{{\\mathrm{\\scriptscriptstyle T}}}\n  [ H-E\\{ D_{\\beta }h(X,\\beta _{\\ast })\\} u] - 4^{-1}\n  E\\Vert \\{ D_{x}h(X,\\beta _{\\ast })\\} ^{{\\mathrm{\\scriptscriptstyle %\n        T}}}\\xi \\Vert _{p}^{2}\\big). \\label{Expr-psi-HCu}\n\\end{equation}%\nConsider any fixed $K\\in (0,+\\infty ).$ Due to the weak convergence $%\nH_{n}\\Rightarrow -H,$ applications of the continuous mapping theorem to the\nbounds in Propositions \\ref{prop:RWPUB} and~\\ref{prop:RWPLB} result in the\nconclusions that,\n\\begin{equation}\n  f_{up}(H_{n},u,b,c)\\Rightarrow \\max_{\\Vert \\xi \\Vert _{p}\\leq b}\\big\\{\\xi ^{{{\\mathrm{\\scriptscriptstyle T} }}}H-\n  E\\big[ {4}^{-1}\\Vert \\{D_{x}h(X,\\beta _{\\ast })\\} ^{{\\mathrm{\\scriptscriptstyle T}}}\\xi\n  \\Vert_{p}^{2}+\\xi ^{{\\mathrm{\\scriptscriptstyle T}}}D_{\\beta }h(X,\\beta _{\\ast})u\\big]\n  \\mathbb{I}(X \\in C_0^{\\epsilon_0}) \\big\\},\n\\label{inter-mas-up}\n\\end{equation}%\n\\begin{equation}\n  f_{low}(H_{n},u,b)\\Rightarrow \\max_{\\Vert \\xi \\Vert _{p}\\leq b}\\big\\{\\xi ^{{{\\mathrm{\\scriptscriptstyle T} }}}H-\n  E\\big[ {4}^{-1}\\Vert \\{D_{x}h(X,\\beta _{\\ast })\\}^{{\\mathrm{\\scriptscriptstyle T}}}\\xi \\Vert\n  _{p}^{2}+\\xi ^{{\\mathrm{\\scriptscriptstyle T}}}D_{\\beta }h(X,\\beta _{\\ast\n  })u\\big] \\big\\}, \\label{inter-mas-low}\n\\end{equation}%\nfor any $u$ satisfying $\\Vert u\\Vert _{2}\\leq K.$ Since the bounds in\nPropositions \\ref{prop:RWPUB} and~\\ref{prop:RWPLB} hold for arbitrarily large\nchoices of the constants $b,c,$ and an arbitrarily small choice of the constant $\\epsilon_0$, combining them with the assumption $P_*(\\Omega^\\circ)=1$ we conclude from the observations %\n\\eqref{Expr-psi-HCu}, \\eqref{inter-mas-up}, and \\eqref{inter-mas-low} that\n\\begin{equation}\n  nR_{n}\\big( \\beta _{\\ast }+n^{-1\/2}u\\big) \\Rightarrow \\varphi ^{\\ast}(H-Cu), \\label{mas-fdim-conv}\n\\end{equation}%\nfor any $u$ satisfying $\\Vert u\\Vert _{2}\\leq K.$ Finally, we have\nfrom Propositions \\ref{prop:tightness-1} and \\ref%\n{prop:RWPcont} that the collection\n$\\{nR_{n}(\\beta_{\\ast}+n^{-1\/2}\\times%\n\\cdot\\,)\\}$ is tight; see, for example, \\citet[Theorem\n7.4]{billingsley2013convergence}. 
As a consequence of this tightness\nand the finite dimensional convergence in \\eqref{mas-fdim-conv}, we\nhave that,\n\\begin{equation*}\nnR_{n}\\big( \\beta_{\\ast}+n^{-1\/2}\\times\\cdot\\,\\big) \\Rightarrow\n\\varphi^{\\ast}(H-C\\times\\cdot\\,).\n\\end{equation*}\nCombining this observation with those in \\eqref{inter-mas-thm-pf-0}, we\nobtain the desired convergence result in Theorem \\ref{thm:master}. Furthermore, since $f_{low}(H_{n},u,b)$ and $f_{up}(H_{n},u,b)$ are associated with the same $H_n$ as inequalities \\eqref{eqn:V_DRO} and \\eqref{eqn:V_ERM}, the three components converge jointly.\n\\end{proof}\n\n\\subsection{Proof of Theorem \\protect\\ref{thm:levelsets-master}}\n\n\\begin{proof}[Proof of Theorem \\ref{thm:levelsets-master}]\n  \\label{sec:levelsets-master} Theorem \\ref{thm:levelsets-master} is\n  proved by considering suitable level sets of the component functions\n  in the triplet, $%\n  \\{V_{n}^{ERM}(\\cdot), \\ V_{n}^{DRO}(\\cdot), \\ nR_{n}(\n  \\beta_{\\ast}+ n^{-1\/2}\\times \\cdot\\ )\\}, $ considered in\n  Theorem \\ref{thm:master}. To reduce clutter in expressions, from\n  here onwards we refer to the distributionally robust estimator\n  \\eqref{Wass_DRO_A} simply as $\\beta^{DRO}_n,$ with the dependence\n  on the radius $\\delta_n$ to be understood from the context. To begin,\n  consider the following tightness result whose proof is provided in\n  Section \\ref{ssec:proofs-props-mast-lev-sets}.\n\n\\begin{proposition}\nThe sequences $\\{\\arg\\min\\xspace_{u} \\, V_{n}^{ERM}(u): n \\geq1\\}$ and $%\n\\{\\arg\\min\\xspace_{u} \\, V_{n}^{DRO}(u): n \\geq1\\}$ are tight. \\label%\n{prop:argmintightness}\n\\end{proposition}\n\nObserve that $V_{n}^{ERM}(\\cdot)$ and $V_{n}^{DRO}(\\cdot)$ are\nminimized, respectively, at $n^{1\/2}(\\beta_{n}^{ERM}-\\beta_{\\ast})$\nand $n^{\\bar{\\gamma }\/2}(\\beta_{n}^{DRO}-\\beta_{\\ast}).$ Furthermore,\ndue to the positive definiteness of $C$ in Assumption A2.b, we\nhave that $V^{ERM}(\\cdot)$ and $V^{DRO}(\\cdot)$ are strongly convex\nwith respect to $u$ and have unique minimizers, with probability\n1. Therefore, due to the tightness of the sequences\n$\\{n^{1\/2}(\\beta_{n}^{ERM}-\\beta_{\\ast})\\}_{n\\geq1}$ and $\\{n^{%\n  \\bar{\\gamma}\/2}(\\beta_{n}^{DRO}-\\beta_{\\ast})\\}_{n\\geq1}$ (see\nProposition %\n\\ref{prop:argmintightness}) and the weak convergence of\n$V_{n}^{ERM}(\\cdot)$ and $V_{n}^{DRO}(\\cdot)$ in Theorem\n\\ref{thm:master}, we have the following convergences:\n\\begin{align}\n  n^{1\/2}(\\beta_{n}^{ERM}-\\beta_{\\ast})&\\Rightarrow\\arg\\min\\xspace%\n  _{u}\\,V(-H,u)=C^{-1}H, \\label{inter-mas-thm-pf-1}\\\\\n  n^{\\bar{\\gamma}\/2}(\\beta_{n}^{DRO}-\\beta_{\\ast})\n  &\\Rightarrow\\arg\\min \\xspace%\n  _{u}\\,V^{DRO}(u)=C^{-1}f_{\\eta,\\gamma}(H).\n  \\nonumber\n\\end{align}\n\nFinally, to prove the convergence of the sets $\\Lambda^+_{\\delta_{n}}(P_{n}),$\nwe proceed as follows. 
Define\n\\begin{equation*}\nG_{n}(u) = nR_{n}(\\beta_{\\ast}+ n^{-1\/2}u), \\quad G(u) = \\varphi^{\\ast\n}(H-Cu), \\quad\\text{ and } \\quad\\alpha_{n} = n\\delta_{n}.\n\\end{equation*}\nFor any function $f:B \\rightarrow\\mathbb{R}$ and $\\alpha \\in[%\n0,+\\infty],$ let lev$(f,\\alpha)$ denote the level set\n$\\{ x \\in \\mathbb{R}^d: f(x) \\leq\\alpha\\}.$\n\n\\begin{proposition}\n If $\\delta_{n} = n^{-1}\\eta,$ then\n $\\mathrm{cl}\\{$\\textnormal{lev}$(G_{n},\\alpha_{n})\\} \\Rightarrow$\n \\textnormal{lev}$(G,\\eta).$ \\label{prop-levset-eq1}\n\\end{proposition}\n\n\\begin{proposition}\nIf $\\delta_{n} = n^{-\\gamma}\\eta$ for some $\\gamma> 1,$ then $\\mathrm{cl}\\{$\\textnormal{lev}%\n$(G_{n},\\alpha_{n})\\} \\Rightarrow\\{C^{-1}H\\}.$ \\label{prop-levset-gthan1}\n\\end{proposition}\n\n\\begin{proposition}\n If $\\delta_{n} = n^{-\\gamma}\\eta$ for some $\\gamma< 1,$ then\n $\\mathrm{cl}\\{$\\textnormal{lev}$(G_{n},\\alpha_{n})\\} \\Rightarrow$\n $\\mathbb{R}^{d}.$ \\label{prop-levset-lthan1}\n\\end{proposition}\n\nPropositions \\ref{prop-levset-eq1} - \\ref{prop-levset-lthan1} above, whose\nproofs are furnished in Section \\ref{ssec:proofs-props-mast-lev-sets}, allow\nus to complete the proof of Theorem \\ref{thm:levelsets-master} as follows.\nIt follows from the definition of $R_{n}(\\beta )$ that,\n\\begin{equation*}\n\\Lambda^+_{\\delta _{n}}(P_{n})=\\left\\{ \\beta :R_{n}(\\beta )\\leq \\delta\n_{n}\\right\\} =\\beta _{\\ast }+n^{-1\/2}\\left\\{ u:G_{n}(u)\\leq \\alpha\n_{n}\\right\\} .\n\\end{equation*}%\nWe have from Propositions \\ref{prop-levset-eq1} -\n\\ref{prop-levset-lthan1} that\n\\begin{equation*}\nn^{1\/2}\\left( \\Lambda^+_{\\delta _{n}}(P_{n})-\\beta _{\\ast }\\right) =\\left\\{\nu:G_{n}(u)\\leq \\alpha _{n}\\right\\} \\Rightarrow\n\\begin{cases}\n\\text{lev}(G,\\eta )\\quad & \\text{ if }\\gamma =1, \\\\\n\\mathbb{R}^{d} & \\text{ if }\\gamma <1, \\\\\n\\{C^{-1}H\\} & \\text{ if }\\gamma >1.%\n\\end{cases}\n\\end{equation*}%\nObserve that $\\varphi ^{\\ast }(u)=\\varphi ^{\\ast }(-u).$ Therefore,\nlev$%\n(G,\\eta )=\\{u:\\varphi ^{\\ast }(H-Cu)\\leq \\eta \\}=C^{-1}H+\\{u:\\varphi\n^{\\ast }(Cu)\\leq \\eta \\}.$ Since the three terms in Theorem\n\\ref{thm:master} converge jointly, we have the three terms in Theorem\n\\ref%\n{thm:levelsets-master} also converge jointly. 
This completes the proof\nof Theorem \\ref%\n{thm:levelsets-master}.\n\\end{proof}\n\nProposition \\ref{prop:constrained-support} follows by adopting exactly\nthe same steps which are used to establish the convergence of\n$n^{1\/2}\\left\\{\\Lambda^+_{\\delta _{n}}(P_{n})-\\beta _{\\ast }\\right\\}$ in the proof\nof Theorem \\ref{thm:levelsets-master}.\n\n\n\n\n\\section{Discussions}\n\\label{sec:discussion}\nWe discuss the subtleties in\nderiving a limit theorem for the distributionally robust estimator\n$\\beta_n^{DRO}$ when the support of the random vector $X,$ denoted by\n$\\Omega,$ is constrained to be a strict subset of $\\mathbb{R}^m.$\nSuppose that the support of $X$ is constrained to be contained in the\nset $\\Omega = \\{x \\in \\mathbb{R}^m: Ax \\leq b\\}$ specified in terms of\nlinear constraints involving an $l \\times m$ matrix $A$ and\n$b \\in \\mathbb{R}^l.$ For the sake of clarity, we discuss here only\nthe non-degenerate case where $\\delta_n = \\eta\/ n.$\n\nConsidering the transportation cost $c(x,y) = \\Vert x- y\\Vert_2^2$ in\nDefinition \\ref{defn:WD}, we demonstrate in Section \\ref{ssec:discussion_proof} of the\nSupplementary material that the central limit theorem,\n$n^{1\/2}\\{\\beta_n^{DRO}(\\delta_n) - \\beta_\\ast\\} \\Rightarrow\nC^{-1}H - \\eta^{1\/2} C^{-1}D_\\beta S(\\beta_\\ast),$ continues to hold,\nfor example, in the elementary\ncase\nwhere the matrix $A$ has linearly independent rows, the distribution of $X$ is\nabsolutely continuous with respect to the\nLebesgue measure on $\\mathbb{R}^m$ and the support $\\Omega$ is\ncompact.\nA key element which emerges in the verification (offered in\nProposition \\ref{prop:DRO-constrained} in Section \\ref{ssec:discussion_proof} of the supplementary material) is that\nthe fraction of samples which get transported to the boundary of the\nset $\\Omega$ stays $O_p(n^{-1\/2}),$ as $n \\rightarrow \\infty.$\n\n\nOn the other hand, when the set\n$\\Omega = \\{x \\in \\mathbb{R}^m: Ax \\leq b\\}$ has equality constraints\nas in, for example,\n$\\Omega = \\{(x_1,x_2) \\in \\mathbb{R}^2: x_1 - x_2 = 0\\},$\nthe bias term in the limit theorem gets affected due to the constraint\nbinding all the samples $\\{X_1,\\ldots,X_n\\}$ and the fraction of\nsamples which get transported to the boundary of the set $\\Omega$ is\n1. This is easily seen in the linear regression example in Section\n\\ref{sec:geo_insignts} where $\\ell(x,y;\\beta) = (y - \\beta^{\\mathrm{\\scriptscriptstyle T} } x)^2$ and\nthe support is taken as\n$\\Omega = \\{(x_1,x_2) \\in \\mathbb{R}^2: x_1 = x_2\\}.$ For this\nelementary example, we instead have,\n\\begin{align}\n  n^{1\/2}\\big\\{\\beta_n^{DRO}(\\delta_n) - \\beta_\\ast\\big\\} \\Rightarrow\n  C^{-1}H - \\eta^{1\/2} C^{-1}D_\\beta \\tilde{S}(\\beta_\\ast),\n  \\label{clt-support-constraints}\n\\end{align}\nwhere $\\tilde{S}(\\beta)$ is different from the term $S(\\beta)$ as in,\n$ \\tilde{S}(\\beta_\\ast) = 2^{1\/2-1\/q} { \\vert \\beta_\\ast^{\\mathrm{\\scriptscriptstyle T} }\\mathbf{1} \\vert} {\n  \\Vert \\beta_\\ast \\Vert_p^{-1}} S(\\beta_\\ast).$\nHere, recall the earlier definition\n$S(\\beta) = [E\\{ \\Vert D_x\\ell(X;\\beta)\\Vert_p^2\\}]^{1\/2}$ in\n\\eqref{defn:sensitivity-term} for the unconstrained support case. The\ncomputations required to arrive at the above conclusion are presented\nin Example A1 in Section A.3 of the supplementary material. 
{\nIn the presence of general support constraints of the form $\\Omega = \\{x \\in \\mathbb{R}^m:Ax= b\\},$ we show with Example A2 in Section A.3 that \\eqref{clt-support-constraints} holds with $\\tilde{S}(\\beta) = \\Vert P_{\\mathcal{N}(A)} \\beta \\Vert_2$ for quadratic losses of the form $\\ell(x;\\beta) = a + \\beta^{\\mathrm{\\scriptscriptstyle T} } x + \\beta^{\\mathrm{\\scriptscriptstyle T} } C \\beta;$ here $A$ is taken to be a matrix with linearly independent rows and $P_{\\mathcal{N}(A)}$ denotes the projection operator onto the null space of $A.$ The bias term here is again different when compared to the term resulting from $S(\\beta) = \\Vert \\beta \\Vert_2$ exhibited in Theorem \\ref{thm:levelsets-master}.}\nAs reasoned above, the presence of equality constraints for the support\n$\\Omega$ introduces new challenges to be tackled in another study.\n\n\\section*{Acknowledgements}\nMaterial in this paper is based upon work supported by the Air Force Office of Scientific Research under award number FA9550-20-1-0397. Additional support is gratefully acknowledged from NSF grants 1915967, 1820942 and 1838576 and MOE SRG ESD 2018 134.\n\n\\bibliographystyle{plainnat}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\\label{sec:intro}\n\nDeep learning offers a powerful framework for learning increasingly complex\nrepresentations for visual recognition tasks. The work of Krizhevsky \\etal\n\\cite{KSH13} convincingly demonstrated that deep neural networks can be very\neffective in classifying images in the challenging Imagenet benchmark\n\\cite{DDSL+09}, significantly outperforming computer vision systems built on\ntop of engineered features like SIFT \\cite{Lowe04}. Their success spurred a\nlot of interest in the machine learning and computer vision\ncommunities. Subsequent work has improved our understanding and has refined\ncertain aspects of this class of models \\cite{ZeFe13b}. A number of different\nstudies have further shown that the features learned by deep neural networks\nare generic and can be successfully employed in a black-box fashion in other\ndatasets or tasks such as image detection \\cite{ZeFe13b, OuWa13, SEZM+14,\n GDDM14, RASC14, CSVZ14}.\n\nThe deep learning models that so far have proven most successful in image\nrecognition tasks, e.g.\n\\cite{SzegedyLJSRAEVR14,nin,SiZi14},\nare feed-forward convolutional neural networks trained in a\nsupervised fashion to minimize a regularized training set classification error\nby back-propagation. Their recent success is partly due to the availability of\nlarge annotated datasets and fast GPU computing, and partly due to some\nimportant methodological developments such as dropout regularization and\nrectifier linear activations \\cite{KSH13}. However, the key building blocks of\ndeep neural networks for images have been around for many years \\cite{LBBH98}:\n(1) Deep Convolutional Neural Networks (DCNNs) with small receptive fields that\nspatially share parameters within each layer. 
(2) Gradual abstraction and\nspatial resolution reduction after each convolutional layer as we ascend the\nnetwork hierarchy, typically via max-pooling \\cite{RiPo99, JKRL09}.\n\n\\begin{figure}[!tbp]\n  \\centering\n  \\begin{tabular}{c}\n    \\includegraphics[width=0.9\\columnwidth]{figs\/illustration\/epitome}\n  \\end{tabular}\n  \\caption{Epitomes represent {\\em a set} of small images through a common data\n    structure: The epitome is a single, larger image which can produce\n    several smaller filters by picking a position and cropping a small window. While\n    epitomes were originally employed for generative image modelling, in this\n    work we propose to use epitomes as an efficient method for parameter\n    sharing in Deep Learning: Epitomes are used in hierarchical networks,\n    and their parameters are learned discriminatively through\n    back-propagation.}\n  \\label{fig:epitome_bag}\n\\end{figure}\n\nThis paper examines several aspects of invariance to deformations in the context of DCNNs for visual recognition. We treat both locally defined (non-rigid) and global \ndeformations, demonstrating that a more refined treatment yields improved classification performance.\n\n Our first main contribution is\nto build a deep neural network around the epitomic representation\n\\cite{JFK03}. The image epitome, illustrated in Figure~\\ref{fig:epitome_bag},\nis a data structure appropriate for learning translation-aware image\nrepresentations, naturally disentangling appearance and position modeling of\nvisual patterns. While originally developed for generative image modelling, we show here that the epitome data structure can be used to train DCNNs discriminatively.\n\nFor this, we substitute every pair of convolution and max-pooling layers\ntypically used in DCNNs with an `Epitomic Convolution' layer. As illustrated in \nFig.~\\ref{fig:epitome_diagram}, Epitomic Convolution is an input-centered dual alternative to the filter-centered standard\nmax-pooling. Namely, rather than searching in the input image (or layer) for the strongest response of a filter, we search across the set of filters encapsulated in an epitome for the strongest response to an image patch.\nOur Epitomic Convolution is designed to have the same computational and model complexity as Max-Pooling, while allowing similar filters to share parameters. \n
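\nA minimal NumPy sketch of this matching step for a single input patch is given below; the layer sizes and the loop-based implementation are illustrative assumptions only, and the operation is defined precisely in Section~\\ref{sec:epitomes}.\n\\begin{verbatim}\n# Illustrative epitomic matching for one input patch: report the best response\n# over all filters that can be cropped from each mini-epitome.\nimport numpy as np\n\nrng = np.random.default_rng(0)\nC, W, D = 3, 8, 3            # channels, filter size, displacement range\nV = W + D - 1                # mini-epitome spatial size\nK = 4                        # number of mini-epitomes\nepitomes = rng.standard_normal((K, C, V, V))\npatch = rng.standard_normal((C, W, W))       # one W x W input patch\n\ndef epitomic_response(patch, epitomes):\n    K = epitomes.shape[0]\n    out = np.full(K, -np.inf)\n    pos = np.zeros((K, 2), dtype=int)\n    for k in range(K):\n        for dy in range(D):\n            for dx in range(D):\n                w = epitomes[k, :, dy:dy + W, dx:dx + W]   # filter from epitome\n                score = float(np.sum(w * patch))\n                if score > out[k]:\n                    out[k], pos[k] = score, (dy, dx)\n    return out, pos\n\nresponses, argmax_pos = epitomic_response(patch, epitomes)\nprint(responses.shape, argmax_pos.shape)     # (K,) and (K, 2)\n\\end{verbatim}\n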
\n While\nfusing classification results extracted from multiple image windows is\nstandard practice, we show that when incorporating multiple windows during training in a\nMultiple Instance Learning (MIL) framework we obtain substantially larger\ngains.\n\nFinally, we explore to what extent we can push scale and position search\n towards developing efficient and effective sliding window object\ndetectors on top of DCNNs. We accelerate sliding window detection by introducing \n a simple method to reduce the effective size and receptive field of a DCNN pre-trained on ImageNet and show that by performing an explicit search over position, scale, and aspect ratios we can obtain results that are comparable to the current-state-of-the-art while being substantially simpler and easier to train, as well as more efficient. \n \n\nWe quantitatively evaluate the proposed models primarily in image\nclassification experiments on the Imagenet ILSVRC-2012 large-scale image\nclassification task. We train the model by error back-propagation to minimize\nthe classification loss, similarly to \\cite{KSH13}. We have experimented with\ndeep mini-epitomic networks of different architectures (Class A, B, and C). We\nhave carried out the bulk of our comparisons between epitomes and max-pooling\nusing comparable mid-sized networks having 6 convolutional and 3 fully\nconnected layers whose structure closely follows that in \\cite{SEZM+14}\n(Class-A). In these evaluations the deep epitomic network achieves 13.6\\%\ntop-5 error on the validation set, which is 0.6\\% better than the\ncorresponding conventional max-pooled convolutional network whose error rate\nis 14.2\\%, with both networks having the same computational cost. Note that\nthe error rate of the original model in \\cite{KSH13} is 18.2\\%, using however\na smaller network. We have also more recently experimented with larger deep\nmini-epitomic networks (Class-B), which have the same number of levels but\nmore neurons per layer than those in Class-A. The plain deep epitomic net in\nClass-B achieves an error rate of 11.9\\%. When accompanied with explicit scale\nand position search (patchwork variant), its error drops further down to\n10.0\\%. Finally, following \\cite{SiZi14}, we have most recently experimented\nwith a very deep epitomic network with 13 convolutional and 3 fully connected\nlayers (Class-C). This achieves an even lower error rate of 10.0\\% (without\nscale and position search). All these performance numbers refer to\nclassification with a single network. We also find that the proposed epitomic\nmodel converges faster, especially when the filters in the dictionary are\nmean- and contrast-normalized, which is related to \\cite{ZeFe13b}. We have\nfound this normalization to also accelerate convergence of standard max-pooled\nnetworks. \n\nIn \\cite{Papa14} we further report additional sets of experiments. First, we\nshow that a deep epitomic network trained on Imagenet can be effectively used\nas black-box feature extractor for tasks such as Caltech-101 image\nclassification. Second, we report excellent image classification results on\nthe MNIST and CIFAR-10 benchmarks with smaller deep epitomic networks trained\nfrom scratch on these small-image datasets.\n\n\\paragraph{Related work}\n\nOur model builds on the epitomic image representation \\cite{JFK03}, which was\ninitially geared towards image and video modeling tasks. 
Single-level\ndictionaries of image epitomes learned in an unsupervised fashion for image\ndenoising have been explored in \\cite{AhEl08, BMBP11}. Recently, single-level\nmini-epitomes learned by a variant of K-means have been proposed as an\nalternative to SIFT for image classification \\cite{PCY14}. To our knowledge,\nepitomes have not been studied before in conjunction with deep models or\nlearned to optimize a supervised objective.\n\nThe proposed epitomic model is closely related to maxout networks\n\\cite{GWMCB13}. Similarly to epitomic matching, the response of a maxout layer\nis the maximum across filter responses. The critical difference is that the\nepitomic layer is hard-wired to model position invariance, since filters\nextracted from an epitome share values in their area of overlap. This\nparameter sharing significantly reduces the number of free parameters that\nneed to be learned. Maxout is typically used in conjunction with max-pooling\n\\cite{GWMCB13}, while epitomes fully substitute for it. Moreover, maxout\nrequires random input perturbations with dropout during model training,\notherwise it is prone to creating inactive features. On the contrary, we\nhave found that learning deep epitomic networks does not require dropout in\nthe convolutional layers -- similarly to \\cite{KSH13}, we only use dropout\nregularization in the fully connected layers of our network.\n\nOther variants of max pooling have been explored before. Stochastic pooling\n\\cite{ZeFe13a} has been proposed in conjunction with supervised\nlearning. Probabilistic pooling \\cite{LGRN09} and deconvolutional networks\n\\cite{ZKTF10} have been proposed before in conjunction with unsupervised\nlearning, avoiding the theoretical and practical difficulties associated with\nbuilding probabilistic models on top of max-pooling. While we do not explore\nit in this paper, we are also very interested in pursuing unsupervised\nlearning methods appropriate for the deep epitomic representation.\n\n\\section{Deep Epitomic Convolutional Networks}\n\\label{sec:epitomes}\n\n\\begin{figure}[!tbp]\n \\centering\n \\begin{tabular}{c c}\n \\includegraphics[width=0.45\\columnwidth]{figs\/illustration\/conv_max}&\n \\includegraphics[width=0.45\\columnwidth]{figs\/illustration\/epitomic_conv}\\\\\n (a)&(b)\n \\end{tabular}\n \\caption{(a) Standard max-pooled convolution: For each filter we look for\n its best match within a small window in the data layer. (b) Proposed\n epitomic convolution (mini-epitome variant): For input data patches\n sparsely sampled on a regular grid we look for their best match in each\n mini-epitome.}\n \\label{fig:epitome_diagram}\n\\end{figure}\n\n\\subsection{Mini-Epitomic deep networks}\n\n\n\\begin{figure*}[!tbp]\n \\centering\n \\begin{tabular}{c c}\n \\includegraphics[scale=1.4]{figs\/visualize\/epito2b30\/conv1_0}&\n \\includegraphics[scale=1.4]{figs\/visualize\/overfeat2b30\/conv1_0}\\\\\n (a) & (b)\n \\end{tabular}\n \\caption{Filters at the first convolutional layer of: (a) Proposed\n \\textsl{Epitomic} model with 96 mini-epitomes, each having size\n \\by{12}{12} pixels. (b) Baseline \\textsl{Max-Pool} model with 96 filters\n of size \\by{8}{8} pixels each.}\n \\label{fig:visualize_conv1}\n\\end{figure*}\n\n\nWe first describe a single layer of the mini-epitome variant of the proposed\nmodel, with reference to Fig.~\\ref{fig:epitome_diagram}. 
In standard\nmax-pooled convolution, we have a dictionary of $K$ filters of spatial size\n$\\by{W}{W}$ pixels spanning $C$ channels, which we represent as real-valued\nvectors $\\{\\wv_k\\}_{k=1}^K$ with $W \\cdot W \\cdot C$ elements. We apply each\nof them in a convolutional fashion to every $\\by{W}{W}$ input patch\n$\\{\\xv_i\\}$ densely extracted from each position in the input layer which also\nhas $C$ channels. A reduced resolution output map is produced by computing the\nmaximum response within a small $\\by{D}{D}$ window of displacements $p \\in\n\\mathcal{N}_{input}$ around positions $i$ in the input map which are $D$\npixels apart from each other. \nThe output map $\\{z_{i,k}\\}$ of standard max-pooled convolution has spatial\nresolution reduced by a factor of $D$ across each dimension and will consist\nof $K$ channels, one for each of the $K$ filters. Specifically:\n\\begin{equation}\n (z_{i,k},p_{i,k}) \\leftarrow \\max_{p \\in \\mathcal{N}_{image}}\n \\xv_{i+p}^T \\wv_k \\,\n \\label{eq:conv-max}\n\\end{equation}\nwhere $p_{i,k}$ points to the input layer position where the maximum is\nattained (argmax).\n\nIn the proposed epitomic convolution scheme we replace the filters with larger\nmini-epitomes $\\{\\vv_k\\}_{k=1}^K$ of spatial size $\\by{V}{V}$ pixels, where $V\n= W+D-1$. Each mini-epitome contains $D^2$ filters $\\{\\wv_{k,p}\\}_{k=1}^K$ of\nsize $\\by{W}{W}$, one for each of the $\\by{D}{D}$ displacements $p \\in\n\\mathcal{N}_{epit}$ in the epitome. We \\emph{sparsely} extract patches\n$\\{\\xv_i\\}$ from the input layer on a regular grid with stride $D$ pixels. In\nthe proposed epitomic convolution model we reverse the role of filters and\ninput layer patches, computing the maximum response over epitomic positions\nrather than input layer positions:\n\\begin{equation}\n (y_{i,k},p_{i,k}) \\leftarrow \\max_{p \\in \\mathcal{N}_{epitome}}\n \\xv_i^T \\wv_{k,p} \\,\n \\label{eq:epitomic-conv}\n\\end{equation}\nwhere $p_{i,k}$ now points to the position in the epitome where the maximum is\nattained. Since the input position is fixed, we can think of epitomic matching\nas an input-centered dual alternative to the filter-centered standard max-pooling.\n\nComputing the maximum response over filters rather than image\npositions resembles the maxout scheme of \\cite{GWMCB13}, yet in the proposed\nmodel the filters within the epitome are constrained to share values in their\narea of overlap.\n\nSimilarly to max-pooled convolution, the epitomic convolution output map\n$\\{y_{i,k}\\}$ has $K$ channels and is subsampled by a factor of $D$ across\neach spatial dimension. Epitomic convolution has the same computational cost\nas max-pooled convolution. For each output map value, they both require\ncomputing $D^2$ inner products followed by finding the maximum\nresponse. Epitomic convolution requires $D^2$ times more work per input patch,\nbut this is exactly compensated by the fact that we extract input patches\nsparsely with a stride of $D$ pixels.\n\nSimilarly to standard max-pooling, the main computational primitive is\nmulti-channel convolution with the set of filters in the epitomic dictionary,\nwhich we implement as matrix-matrix multiplication and carry out on the GPU,\nusing the cuBLAS library.\n\nIt is noteworthy that conventional max-pooled convolution can be cast as\nspecial case of epitomic convolution. 
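The input-centered versus filter-centered duality of Eq.~\\eqref{eq:conv-max} and Eq.~\\eqref{eq:epitomic-conv} can be made concrete with a small sketch. The following Python (NumPy) fragment is only an illustration with toy, made-up dimensions (it is not the cuBLAS-based GPU implementation used in our experiments); it computes both responses at a single output position, and the only difference between the two routines is whether the maximum ranges over displaced input patches or over the $D^2$ filters sliced out of one mini-epitome.

\\begin{verbatim}
import numpy as np

# Toy sizes: filter W=3, displacement window D=2, epitome V=W+D-1=4, one channel.
W, D = 3, 2
V = W + D - 1
rng = np.random.default_rng(0)

x = rng.standard_normal((8, 8))        # input layer (single channel)
w = rng.standard_normal((W, W))        # one conventional filter
epitome = rng.standard_normal((V, V))  # one mini-epitome

def maxpool_conv_response(x, w, i, j):
    # Standard max-pooled convolution (eq:conv-max): best match of the
    # filter over a DxD window of input positions around (i, j).
    return max(np.sum(x[i+p:i+p+W, j+q:j+q+W] * w)
               for p in range(D) for q in range(D))

def epitomic_response(x, epitome, i, j):
    # Epitomic convolution (eq:epitomic-conv): best match of one fixed input
    # patch over the DxD filters sliced out of the mini-epitome; the filters
    # share weights wherever they overlap.
    patch = x[i:i+W, j:j+W]
    return max(np.sum(patch * epitome[p:p+W, q:q+W])
               for p in range(D) for q in range(D))

print(maxpool_conv_response(x, w, 0, 0))
print(epitomic_response(x, epitome, 0, 0))
\\end{verbatim}

The weight sharing visible in the second routine (all $D^2$ candidate filters are overlapping views into one epitome array) is what distinguishes the scheme from maxout, and it is also what allows max-pooled convolution itself to arise as a special case, which we make precise next.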
More specifically, we can exactly\nreplicate max-pooled convolution with filters of size $\\by{W}{W}$ and\n$\\by{D}{D}$ max-pooling using as epitomes zero-padded versions of the original\nfilters (with padding equal to $D-1$ pixels on each side of the filter) and\nepitomic filter size equal to $W+D-1$. In that sense, epitomic convolution is\na generalization of max-pooling.\n\nTo build a deep epitomic model, we stack multiple epitomic convolution layers\non top of each other. The output of each layer passes through a rectified\nlinear activation unit $y_{i,k} \\leftarrow \\max(y_{i,k} + \\beta_k, 0)$ and fed\nas input to the subsequent layer, where $\\beta_k$ is the bias. Similarly to\n\\cite{KSH13}, the final two layers of our network for Imagenet image\nclassification are fully connected and are regularized by dropout. We learn\nthe model parameters (epitomic weights and biases for each layer) in a\nsupervised fashion by error back propagation. We present full details of our\nmodel architecture and training methodology in the experimental section.\n\n\n\\subsection{Optional mean and contrast normalization}\n\\label{sec:mean_con_norm}\n\nMotivated by \\cite{ZeFe13b}, we have explored the effect of filter mean\nand contrast normalization on deep epitomic network training. More\nspecifically, we considered a variant of the model where the epitomic\nconvolution responses are computed as:\n\\begin{equation}\n (y_{i,k},p_{i,k}) \\leftarrow \\max_{p \\in \\mathcal{N}_{epitome}}\n \\frac{\\xv_i^T \\bar{\\wv}_{k,p}}{\\norm{\\bar{\\wv}_{k,p}}_\\lambda} \\,\n \\label{eq:epitomic-conv-norm}\n\\end{equation}\nwhere $\\bar{\\wv}_{k,p}$ is a mean-normalized version of the filters and\n$\\norm{\\bar{\\wv}_{k,p}}_\\lambda \\triangleq (\\bar{\\wv}_{k,p}^T \\bar{\\wv}_{k,p}\n+ \\lambda)^{1\/2}$ is their contrast, with $\\lambda = 0.01$ a small positive\nconstant. This normalization requires only a slight modification of the\nstochastic gradient descent update formula and incurs negligible computational\noverhead. Note that the contrast normalization explored here is slightly\ndifferent than the one in \\cite{ZeFe13b}, who only scale down the filters\nwhenever their contrast exceeds a pre-defined threshold.\n\nWe have found the mean and contrast normalization of\nEq.~\\eqref{eq:epitomic-conv-norm} to significantly accelerate training not\nonly of epitomic but also max-pooled convolutional nets.\n\n\n\\subsection{Image Classification Experiments}\n\n\\begin{table*}[t]\n\\setlength{\\tabcolsep}{3pt}\n\\begin{center}\n\\scalebox{1.00} {\n\\begin{tabular}{|l||c|c|c|c|c|c||c|c||c|}\n \\hline \n Layer & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & Out \\\\\n \\hline\n \\hline\n Type & conv + & conv + & conv & conv & conv & conv + & full + & full + & full \\\\\n & lrn + max& lrn + max& & & & max & dropout& dropout& \\\\ \\hline\n Output channels & 96 & 192 & 256 & 384 & 512 & 512 & 4096 & 4096 & 1000 \\\\ \\hline\n Filter size & 8x8 & 6x6 & 3x3 & 3x3 & 3x3 & 3x3 & - & - & - \\\\ \\hline\n Input stride & 2x2 & 1x1 & 1x1 & 1x1 & 1x1 & 1x1 & - & - & - \\\\ \\hline\n Pooling size & 3x3 & 2x2 & - & - & - & 3x3 & - & - & - \\\\ \\hline\n\\end{tabular}\n}\n\\caption{Architecture of the baseline \\textsl{Max-Pool} convolutional network (Class-A).}\n\\label{tab:max_pool_net}\n\\end{center}\n\\end{table*}\n\n\n\\paragraph{Image classification tasks}\n\nWe have performed most of our experimental investigation with epitomes on the\nImagenet ILSVRC-2012 dataset \\cite{DDSL+09}, focusing on the task of image\nclassification. 
This dataset contains more than 1.2 million training images,\n50,000 validation images, and 100,000 test images. Each image is assigned to\none out of 1,000 possible object categories. Performance is evaluated using\nthe top-5 classification error. Such large-scale image datasets have proven so\nfar essential to successfully train big deep neural networks with supervised\ncriteria.\n\n\n\\paragraph{Network architecture and training methodology}\n\n\nFor our Imagenet experiments, we compare the proposed deep epitomic networks\nwith deep max-pooled convolutional networks. For fair comparison, we use as\nsimilar architectures as possible, involving in both cases six convolutional\nlayers, followed by two fully-connected layers and a 1000-way softmax\nlayer. We use rectified linear activation units throughout the\nnetwork. Similarly to \\cite{KSH13}, we apply local response normalization\n(LRN) to the output of the first two convolutional layers and dropout to the\noutput of the two fully-connected layers. This is the Class-A of the models\nwe considered.\n\nThe architecture of our baseline \\textsl{Max-Pool} network is specified on\nTable~\\ref{tab:max_pool_net}. It employs max-pooling in the convolutional\nlayers 1, 2, and 6. To accelerate computation, it uses an image stride equal\nto 2 pixels in the first layer. It has a similar structure with the Overfeat\nmodel \\cite{SEZM+14}, yet significantly fewer neurons in the convolutional\nlayers 2 to 6. Another difference with \\cite{SEZM+14} is the use of LRN, which\nto our experience facilitates training.\n\nThe architecture of the proposed \\textsl{Epitomic} network is specified on\nTable~\\ref{tab:mini_epitome_net}. It has exactly the same number of neurons at\neach layer as the \\textsl{Max-Pool} model but it uses mini-epitomes in place\nof convolution + max pooling at layers 1, 2, and 6. It uses the same filter\nsizes with the \\textsl{Max-Pool} model and the mini-epitome sizes have been\nselected so as to allow the same extent of translation invariance as the\ncorresponding layers in the baseline model. We use input image stride equal to\n4 pixels and further perform epitomic search with stride equal to 2 pixels in\nthe first layer to also accelerate computation.\n\n\\begin{table*}[t]\n\\setlength{\\tabcolsep}{3pt}\n\\begin{center}\n\\scalebox{1.00} {\n\\begin{tabular}{|l||c|c|c|c|c|c||c|c||c|}\n \\hline \n Layer & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & Out \\\\\n \\hline\n \\hline\n Type & epit-conv& epit-conv& conv & conv & conv & epit-conv & full + & full + & full \\\\\n & + lrn & + lrn & & & & & dropout& dropout& \\\\ \\hline\n Output channels & 96 & 192 & 256 & 384 & 512 & 512 & 4096 & 4096 & 1000 \\\\ \\hline\n Epitome size & 12x12 & 8x8 & - & - & - & 5x5 & - & - & - \\\\ \\hline\n Filter size & 8x8 & 6x6 & 3x3 & 3x3 & 3x3 & 3x3 & - & - & - \\\\ \\hline\n Input stride & 4x4 & 3x3 & 1x1 & 1x1 & 1x1 & 3x3 & - & - & - \\\\ \\hline\n Epitome stride & 2x2 & 1x1 & - & - & - & 1x1 & - & - & - \\\\ \\hline\n\\end{tabular}\n}\n\\caption{Architecture of the proposed \\textsl{Epitomic} convolutional network (Class-A).}\n\\label{tab:mini_epitome_net}\n\\end{center}\n\\end{table*}\n\n\n\nWe have also tried variants of the two models above where we activate the\nmean and contrast normalization scheme of Section~\\ref{sec:mean_con_norm} in\nlayers 1, 2, and 6 of the network.\n\nWe followed the methodology of \\cite{KSH13} in training our models. 
We used\nstochastic gradient descent with the learning rate initialized to 0.01 and\ndecreased by a factor of 10 each time the validation error stopped\nimproving. We used momentum equal to 0.9 and mini-batches of 128 images. The\nweight decay factor was equal to $\\by{5}{10^{-4}}$. Importantly, weight decay\nneeds to be turned off for the layers that use mean and contrast\nnormalization. Training each of the three models takes two weeks using a\nsingle NVIDIA Titan GPU. Similarly to \\cite{CSVZ14}, we resized the training\nimages to have small dimension equal to 256 pixels while preserving their\naspect ratio and not cropping their large dimension. We also subtracted for\neach image pixel the global mean RGB color values computed over the whole\nImagenet training set. During training, we presented the networks with\n$\\by{220}{220}$ crops randomly sampled from the resized image area, flipped\nleft-to-right with probability 0.5, also injecting global color noise exactly\nas in \\cite{KSH13}. During evaluation, we presented the networks with 10\nregularly sampled image crops (center + 4 corners, as well as their\nleft-to-right flipped versions).\n\n\\paragraph{Weight visualization}\n\nWe visualize in Figure~\\ref{fig:visualize_conv1} the layer weights at the first\nlayer of the networks above. The networks learn receptive fields sensitive to\nedge, blob, texture, and color patterns.\n\n\\paragraph{Classification results}\n\nWe report in Table~\\ref{tab:imagenet_results} our results on the Imagenet\nILSVRC-2012 benchmark, also including results previously reported in the\nliterature \\cite{KSH13, ZeFe13b, SEZM+14}. These all refer to the top-5 error\non the validation set and are obtained with a single network. Our best result\nat 13.6\\% with the proposed \\textsl{Epitomic-Norm} network is 0.6\\% better\nthan the baseline \\textsl{Max-Pool} result at 14.2\\% error. \nThe improved performance that we obtained with the \\textsl{Max-Pool} baseline\nnetwork compared to Overfeat \\cite{SEZM+14} is most likely due to our use of\nLRN and aspect-ratio preserving image resizing. \n\n\n\n\\begin{table*}[t]\n\\setlength{\\tabcolsep}{3pt}\n\\begin{center}\n\\scalebox{0.9} {\n\\begin{tabular}{|l||c|c|c||c|c|c|c||c||c|}\n \\hline\n & \\multicolumn{3}{|c||}{Previous literature} & \\multicolumn{4}{|c||}{Class-A} & Class-B & Class-C \\\\\n \\hline\n Model & Krizhevsky & Zeiler-Fergus & Overfeat & Max-Pool & Max-Pool & Epitomic & Epitomic & Epitomic & Epitomic\\\\\n & \\cite{KSH13}& \\cite{ZeFe13b} &\\cite{SEZM+14}& & + norm & & + norm & +norm & +norm \\\\\n \\hline\n Top-5 Error & 18.2\\% & 16.0\\% & 14.7\\% & 14.2\\% & 14.4\\% &\\bf{13.7\\%}&\\bf{13.6\\%}& 11.9\\% & 10.0\\% \\\\\n \\hline\n\\end{tabular}\n}\n\\caption{Imagenet ILSVRC-2012 top-5 error on the validation set. All performance\n figures are obtained with a single network, averaging classification\n probabilities over 10 image crops (center + 4 corners, as well as their\n left-to-right flipped versions). Classes B and C refer to respectively\n larger and deeper models.}\n\\label{tab:imagenet_results}\n\\end{center}\n\\end{table*}\n\n\n\n\n\n\nFurther experimental results with the epitomic model on the Caltech-101 task\nand on the MNIST and CIFAR-10 datasets are described in \\cite{Papa14}. \n\n\n\n\\paragraph{Mean-contrast normalization and convergence speed}\n\nWe comment on the learning speed and convergence properties of the different\nmodels we experimented with on Imagenet.
We show in\nFigure~\\ref{fig:imagenet_optim} how the top-5 validation error improves as\nlearning progresses for the different models we tested, with or without\nmean+contrast normalization. For reference, we also include a corresponding\nplot we re-produced for the original model of Krizhevsky \\etal\n\\cite{KSH13}. We observe that mean+contrast normalization significantly\naccelerates convergence of both epitomic and max-pooled models, without\nhowever significantly influencing the final model quality. The epitomic models\nconverge faster and are stabler during learning compared to the max-pooled\nbaselines, whose performance fluctuates more.\n\n\\begin{figure}[!tbp]\n \\centering\n \\includegraphics[width=0.8\\columnwidth]{figs\/optimization\/val_acc}\n \\caption{Top-5 validation set accuracy (center non-flipped crop only) for\n different models and normalization.}\n \\label{fig:imagenet_optim}\n\\end{figure}\n\n\n\\paragraph{Experiments with larger and deeper epitomic networks}\n\nWe have also experimented with larger (Class-B) and very deep (Class-C)\nversions of the proposed deep epitomic networks. The large Class-B network has\nthe same number of levels but more neurons per layer than the networks in\nClass-A. It achieves an error rate of 11.9\\%. \n\nInspired by the success of the top-performing methods in this year's Imagenet\ncompetition, we have also very recently experimented with a very deep network\nhaving 13 convolutional and 3 fully connected layers, which roughly follows\nthe architecture of the 16 layer net in \\cite{SiZi14}. Our Class-C deep\nepitomic network achieves 10.0\\% error rate in the Imagenet task. The\nstate-of-art 16 layer net in \\cite{SiZi14} (without multi-scale\ntraining\/testing) achieves an even lower 9.0\\% error rate, but using a more\nsophisticated procedure for aggregating the results of multiple image crops (in\nplace of our simple 10-view testing procedure). As extra evidence to the\nimproved robustness of training our deep epitomic networks, it is noteworthy\nto mention that we managed to train our very deep epitomic net starting from a\nrandom initialization, while \\cite{SiZi14} had to bootstrap their very deep\nnetworks from shallower ones.\n\n\\section{Explicit Scale and Position Search in DCNNs}\n\nThe effects of object translation and scaling impede the training of accurate classifiers from datasets that lack bounding box annotations - as is the case in the ImageNet classification Challenge. The Epitomic Convolution\/Max-Pooling modules allow us to extract an image representation that is invariant to signal deformations taking place at the pooling region's scale; when used in alternation with feature downsampling (`striding') this can result in invariance to increasingly large-scale signal transformations, eventually dealing with global position and scale changes. \n\n\n\n\n\n\\newcommand{X}{X}\n\\newcommand{\\mathbf{X}}{\\mathbf{X}}\n\\newcommand{y}{y}\n\n\\newcommand{\\refeq}[1]{Eq.~\\ref{#1}}\n\n\n\\newcommand{\\begin{eqnarray}}{\\begin{eqnarray}}\n\t\\newcommand{\\end{eqnarray}}{\\end{eqnarray}}\n\nA better treatment of deformations can be achieved by factoring deformations\ninto local (non-rigid) and global (translation\/scale) changes. We can then\nsimulate the effect of the latter during training and testing, by transforming the input images, obtaining us an additional hold on the problem. 
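Concretely, simulating the global factor amounts to expanding every training image into a small collection of translated and scaled copies. The fragment below is a minimal sketch of this expansion (the grids of shifts and scales are invented values, scipy.ndimage merely stands in for the resampling used in practice, and the wrap-around shifts are only for brevity); it produces the set of $K = T\,S$ transformed images (here $T=9$ shifts and $S=3$ scales) that the MIL formulation of the next subsection operates on.

\\begin{verbatim}
import numpy as np
from scipy import ndimage

def augmentation_bag(image, shifts=(-16, 0, 16), scales=(0.8, 1.0, 1.25)):
    # Expand one image into the set {X_i^1, ..., X_i^K}, K = T*S, formed by
    # every combination of a global translation and a global scaling.
    bag = []
    h, w = image.shape
    for s in scales:
        scaled = ndimage.zoom(image, s, order=1)    # global scaling
        canvas = np.zeros_like(image)               # paste back onto a fixed canvas
        ch, cw = min(h, scaled.shape[0]), min(w, scaled.shape[1])
        canvas[:ch, :cw] = scaled[:ch, :cw]
        for dy in shifts:
            for dx in shifts:
                # np.roll wraps around; a real pipeline would crop instead.
                bag.append(np.roll(np.roll(canvas, dy, axis=0), dx, axis=1))
    return bag

image = np.random.rand(64, 64).astype(np.float32)
bag = augmentation_bag(image)
print(len(bag))   # 27 instances: 9 shifts x 3 scales for this toy grid
\\end{verbatim}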
We now describe how this idea can be exploited in the setting of Multiple Instance Learning (MIL).\n\n\\subsection{MIL-based training of DCNNs}\nConsider a binary classification problem with $N$ image-label pairs\n$\\mathcal{S} = \\{(X_i,y_i)\\}, i=1,\\ldots,N$; training aims at minimizing the following criterion:\n\\begin{eqnarray}\nC(f,\\mathcal{S}) = \\sum_{i=1}^{N} l(y_i,f(X_i)) + R(f), \\label{eq:original} \n\\end{eqnarray}\nwhere $f$ is the classifier, $l(y,f(X))$ is the loss function and $R$ is a regularizer. \n\n{\\em Dataset augmentation} amounts to turning an image $X_i$ into a set of images $\\mathbf{X}_i = \\{X^1_i,\\ldots,X^K_i\\}$ by transforming $X_i$ synthetically; e.g. considering $T$ translations and $S$ scalings yields a set with $K = T S$ elements. \nThe most common approach to using dataset augmentation consists in treating each element of $\\mathbf{X}_i$ as a new training sample, i.e. substituting the loss $l(y,f(X_i))$ in \\refeq{eq:original} by the sum of the classifier's loss on all images:\n\\begin{eqnarray}\nL(y_i,\\mathbf{X}_i) \\doteq \\sum_{k=1}^K l(y_i,f(X^k_i)).\n\\label{eq:sum1}\n\\end{eqnarray}\nThis corresponds to the dataset augmentation technique used, e.g., in \\cite{Howa13}.\nA recently introduced alternative is the `sum-pooling' technique used in \\cite{SiZi14}, which can be understood as using the following loss:\n\\begin{eqnarray}\nL(y_i,\\mathbf{X}_i) = l(y_i,\\frac{1}{K}\\sum_{k=1}^K f(X^k_i)), \\label{eq:sum2}\n\\end{eqnarray}\nwhich averages the classifier's score over translated versions of the input image. \nThe summation used in both of these approaches favors classifiers that consistently score highly on positive samples, irrespective of the object's position and scale, since this is when the loss is minimized. As such, these classifiers can be seen as pursuing the invariance of $f$. \n\nThere is however a tradeoff between invariance and classification accuracy \\cite{Varma}. Even though pursuing invariance accounts for the effects of transformations, it does not make the classification task any easier: the training objective aims at a classifier that would allow all transformed images to make it through its `sieve'. Understandably, a classifier that only considers centered objects of a fixed scale can achieve higher accuracy: this would allow us to devote all modelling resources to the treatment of local deformations. \n\nFor this, we let our classifier `choose' its preferred transformation.\nIn particular, we define the loss function to be:\n\\begin{eqnarray}\nL(y_i,\\mathbf{X}_i) = l(y_i,\\max_{k} f(X^k_i)), \\label{eq:mil}\n\\end{eqnarray}\nwhich amounts to letting the classifier choose the transformation that maximizes its response on a per-sample basis, and then accordingly penalizing that response. In particular, the loss function requires that the classifier's response is large on at least one position for a positive sample, and small everywhere for a negative one. \n\nThis idea amounts to the simplest case of Multiple Instance Learning \\cite{diet97}: $\\mathbf{X}_i$ can be seen as a {\\em bag of features} and the individual elements of $\\mathbf{X}_i$ can be seen as {\\em instances}. \nFor the particular case of the hinge loss function, this would lead us to the latent-SVM training objective \\cite{FGMR10,AndrewsHT02}.\n\nUsing this loss function during training amounts to treating the object's\nposition and scale as a latent variable, and performing alternating\noptimization over the classifier's score function.
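The practical difference between the three criteria is easy to see on a toy example. In the sketch below the per-instance scores $f(X^k_i)$ are made-up numbers and the logistic loss merely stands in for the network's actual classification loss; the fragment evaluates Eq.~\\eqref{eq:sum1}, Eq.~\\eqref{eq:sum2} and the MIL criterion of Eq.~\\eqref{eq:mil} on a single positive bag in which the object is well framed in only one instance.

\\begin{verbatim}
import numpy as np

def logistic_loss(y, score):
    # Binary loss l(y, f(X)) with y in {-1, +1}; stand-in for the softmax loss.
    return np.log1p(np.exp(-y * score))

# Classifier scores f(X_i^k) on the K transformed copies of one positive image:
# the object is well aligned in only one of them.
scores = np.array([-2.1, -1.7, 3.2, -0.9, -1.4])
y = +1

loss_per_instance = np.sum(logistic_loss(y, scores))    # Eq. (eq:sum1)
loss_sum_pooling  = logistic_loss(y, np.mean(scores))   # Eq. (eq:sum2)
loss_mil          = logistic_loss(y, np.max(scores))    # Eq. (eq:mil)

print(loss_per_instance, loss_sum_pooling, loss_mil)
# Only the max-based MIL criterion is already small here: it rewards firing on
# the best-aligned instance instead of on every transformed copy.
\\end{verbatim}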
During testing we perform\na search over transformations and keep the best classifier score, which can be understood as maximizing\nover the latent transformation variables.\nThe resulting score\n$F(\\mathbf{X}_i) = \\max_{k} f(X^k_i)$\nis transformation-invariant, but is built on top of a classifier tuned for a single scale and translation combination. The MIL setting allows us to\ntrain and test our classifiers consistently, using the same set of image\ntranslations and scalings.\n\n\n\\begin{table}[t]\n\\begin{center}\n\\begin{tabular}{|l||c||c|}\n \\hline \n Model & Epitomic & Epitomic \\\\\n & (Class-B) & (patchwork) \\\\ \\hline\n Top-5 Error & 11.9\\% & 10.0\\% \\\\\n \\hline\n\\end{tabular}\n\\caption{Imagenet ILSVRC-2012 top-5 error on validation set. We compare the\n Class-B mean and contrast normalized deep epitomic network of\n Table~\\ref{tab:imagenet_results} with its Patchwork fine-tuned version that\n also includes scale and position search.}\n\\label{tab:imagenet_results2}\n\\end{center}\n\\end{table}\n\n\n\\subsection{Efficient Convolutional Implementation}\nWe now turn to practical aspects of integrating MIL into DCNN training. DCNNs are commonly trained with input images of a fixed size, $q\\times q$. For an arbitrarily-sized input image $I$, if we\ndenote by $\\mathcal{I}(x,y,s)$ its image pyramid, naively computing the maximization in \\refeq{eq:mil} would require cropping many $q\\times q$ boxes from $\\mathcal{I}(x,y,s)$, evaluating $f$ on them, yielding $f(x,y,s)$, and then penalizing $l(y,\\max_{x,y,s} f(x,y,s))$ during training (we ignore downsampling and boundary effects for simplicity). Doing this would require a large amount of GPU memory, communication and computation time. Instead, by properly modifying the input and architecture of our network\nwe can share computation to efficiently implement exhaustive search during training and testing. \n\n\\begin{figure}[!tbp]\n\t\\centering\n\t\\begin{tabular}{c}\n\t\t\\includegraphics[width=\\columnwidth]{figs\/illustration\/patchwork}\n\t\\end{tabular}\n\t\\caption{We use the image patchwork technique to efficiently implement scale and position search in DCNN training: an image pyramid is unfolded into an image `patchwork', where sliding a fixed-size window amounts to a search over multiple positions and scales. The maximum of the classifier's score on all such windows is efficiently gathered by max-pooling the DCNN's top-layer responses, and is used to accommodate scale and position changes during both training and testing.\n\t\t}\n\t\\label{fig:patchwork}\n\\end{figure}\n\n\nFor this, we first draw inspiration from the `image patchwork' technique\nintroduced in \\cite{Dubout} and exploited in DCNNs by \\cite{iandola}. The\ntechnique consists in embedding a whole image pyramid $\\mathcal{I}(x,y,s)$ into a\nsingle, larger, patchwork image $P(x',y')$; any position $(x',y')$ in $P$\ncorresponds to an $(x,y,s)$ combination in $I$. This was originally conceived\nas a means of accelerating multi-scale FFT-based convolutions in \\cite{Dubout}\nand convolutional feature extraction in \\cite{iandola}.
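A minimal version of this packing is sketched below; the greedy row-by-row layout, the scale factors and the canvas size are all invented for illustration and do not reproduce the exact construction of \\cite{Dubout}. The point is simply that the bookkeeping returned together with the canvas lets one map any patchwork coordinate $(x',y')$ back to an $(x,y,s)$ triple of the original image.

\\begin{verbatim}
import numpy as np
from scipy import ndimage

def build_patchwork(image, scales=(1.0, 0.75, 0.5, 0.25), canvas_hw=(720, 720)):
    # Pack the pyramid I(x, y, s) into one canvas P(x', y'), greedily row by row.
    # Returns the canvas and a list mapping each pyramid level to (x0', y0', s).
    canvas = np.zeros(canvas_hw, dtype=image.dtype)
    index = []
    cur_x, cur_y, row_h = 0, 0, 0
    for s in scales:
        level = ndimage.zoom(image, s, order=1)
        h, w = level.shape
        if cur_x + w > canvas_hw[1]:          # start a new row of the patchwork
            cur_x, cur_y, row_h = 0, cur_y + row_h, 0
        canvas[cur_y:cur_y + h, cur_x:cur_x + w] = level
        index.append((cur_x, cur_y, s))
        cur_x, row_h = cur_x + w, max(row_h, h)
    return canvas, index

def patchwork_to_image(xp, yp, index, level):
    # Map a patchwork coordinate (x', y') back to (x, y, s) in the input image.
    x0, y0, s = index[level]
    return ((xp - x0) / s, (yp - y0) / s, s)

image = np.random.rand(400, 400).astype(np.float32)
P, idx = build_patchwork(image)
print(patchwork_to_image(450, 80, idx, level=1))   # coordinates in the original image
\\end{verbatim}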
We instead view the patchwork as a stepping stone to implementing scale and position search during DCNN training.\n\nIn particular, we treat the last, fully-connected, layers of a DCNN as\n$1\\times1$ convolution kernels; we can then obtain the $f(x,y,s)$ score\ndescribed above by providing $P$ as input to our network, since the output of\nour network's final layer at any position $(x',y')$ will correspond to the\noutput for a $q\\times q$ square cropped around the associated $(x,y,s)$. This\nallows us to incorporate the $\\max$ operation used in MIL's training\ncriterion, \\refeq{eq:mil}, as an additional max-pooling layer situated on top\nof the network's original score function. This makes it possible to\nefficiently incorporate global scale and position search in a seamless manner\nwithin DCNN training.\n\n\n\\subsection{Image Classification Results}\n\nWe have experimented with the scheme outlined above in combination with our\nDeep Epitomic Network (Class-B variant) presented in the previous section. We\nuse a \\by{720}{720} patchwork formed from 6 different image scales (square\nboxes with size 400, 300, 220, 160, 120, and 90 pixels). We have resized all\ntrain\/test images to square size, changing their aspect ratio if needed. We\nhave initialized this scale\/position search net with the parameters of our\nstandard Class-B epitomic net. We fine-tuned the network parameters for about\nan epoch on the Imagenet train set, following the training methodology\noutlined earlier.\n\nWe have obtained a substantial further decrease in the testing error rates,\ncutting the top-5 error rate from 11.9\\% down to 10.0\\%, as shown in\nTable~\\ref{tab:imagenet_results2}. This reduction in error rate is competitive with the best reduction obtained by more complicated techniques involving many views for evaluation \\cite{SiZi14}, while also allowing for consistent end-to-end training and testing. \n\n\nThe network outlined above also provides cues for the\nscale and position of the dominant object in the image. A simple fixed mapping\nof the ``argmax'' patchwork position in the last max-pooling layer (computed\nby averaging the bounding box positions in the training set) yields 48.3\\%\nerror rate in the Imagenet 2012 localization task without incurring any extra\ncomputation.\n\n\\section{Sliding Window Object Detection with DCNNs}\n\\label{sec:detection}\n\n\nThe success of explicit position and scale search in image classification\nsuggests using DCNNs for sliding-window detection; even though excellent\nresults have been reported for pedestrians in \\cite{yangW13}, recent works on\ncombining convolutional networks \\cite{SEZM+14}, or sliding window detectors\nwith CNN features \\cite{iandola,savalle}, still lag behind the current\nstate-of-the-art techniques \\cite{GDDM14,HeZR014,DeepId}.\n \nStarting from the RCNN work of \\cite{GDDM14}, all such techniques compute DCNN\nfeatures `on-demand' for a set of 1000-2000 image regions delivered by\nselective search \\cite{ssearch}, and apply a separately trained SVM detector\non top.
This approach has recently been shown to deliver compelling detection\nresults; most recently, \\cite{GDDM14} have shown that combining RCNNs with the\nnetwork of \\cite{SiZi14} pushes the mean AP performance on Pascal VOC 2007 to\n$66\\%$ ($62\\%$ without bounding box regression), but at the cost of 60 seconds per image (acceleration techniques such\nas Spatial Pyramid Pooling \\cite{HeZR014} can still be applied, though).\n \nDespite the performance gap of sliding window detection to RCNNs, we consider\nsliding window detection simpler, and potentially more amenable to analysis\nand improvement. \n \n\\subsection{Explicit search over aspect ratios}\n\nOur basic object detection system uses as\ninput to a DCNN an image patchwork that includes 11\nscales logarithmically sampled, from 2 times down to 1\/6 of the image size.\nUnlike recent works \\cite{iandola,savalle} that only\noperate with the first five, `convolutional' layers of a deeper network,\n in our case we set the fully-connected layers operate convolutionally, and use\nthe DCNN class scores for proposing square bounding boxes as object detection\nproposals. This processes a typical PASCAL VOC 2007 image in less than 1 sec\non a Titan Tesla K40. \n\nThis square bounding box detector (without any bounding box regression\npost-processing) yields a mean Average Precision of $43.0\\%$ on Pascal VOC\n2007. To further analyze which of this system's errors are due to constraining\nthe bounding box proposals to be square, we investigated the system's\nperformance in the presence of an `oracle' that provides the optimal aspect\nratio (using the ground truth annotations) for any given square bounding box\nproposal. We found this oracle bounding box prediction to increase\nperformance to $56.7\\%$. This indicates that square box prediction is\ninsufficient for achieving competitive object detection results.\n\nOnce again, we pursue an explicit search approach to directly estimate the\naspect ratio of bounding box proposals without needing to resort to an\noracle. We account for aspect ratio variability by scaling the image along a\nsingle dimension. For instance, scaling an image by $.5$ vertically means that\na vertically-elongated \\by{100}{200} region appears as a \\by{100}{100} square\nregion in the transformed image. Our square detector can then find this\npreferable to other, differently scaled versions of the same region, and\nthereby hint at the right object ratio. Aspect ratios that receive a lower\nscore are eliminated at the final nonmaximum suppression stage. We account for\naspect ratio during both testing and training (see below).\n\nWe perform this operation for 5 distinct aspect ratios, spanning the range of\n$[1\/3,3]$ with a geometric progression, as illustrated in\nFigure~\\ref{fig:pw}. This is applied at the whole patchwork level -- sliding\nwindow detection on these patchworks then amounts to a joint search over\nscale, position, and aspect ratio. This is more time-demanding (requiring 5\ntimes more computation) but still quite fast, requiring about 5 secs on a\nTesla K40 for an average Pascal VOC 2007 image, and yields non-square bounding\nbox predictions without resorting to an oracle.\n\nThe related detection results are shown in Table~\\ref{tab:voc2007}. Aspect\nratio search yields a very competitive mean Average Precision score of\n$56.4\\%$ (without any bounding box regression post-processing). 
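To make the aspect ratio search above concrete, the sketch below generates the five ratios in geometric progression (matching the values of Figure~\\ref{fig:pw} up to rounding) and applies the corresponding single-axis rescaling. The resampling call and the sign convention for $\\alpha$ are illustrative assumptions rather than the exact implementation.

\\begin{verbatim}
import numpy as np
from scipy import ndimage

# Five aspect ratios spanning [1/3, 3] in geometric progression.
aspect_ratios = np.geomspace(1.0 / 3.0, 3.0, num=5)   # ~0.33, 0.58, 1.00, 1.73, 3.00

def squash_aspect(image, alpha):
    # Rescale the vertical axis only, so that a box elongated by `alpha`
    # becomes square and can be matched by the square-window detector.
    return ndimage.zoom(image, (1.0 / alpha, 1.0), order=1)

image = np.random.rand(300, 300).astype(np.float32)
for alpha in aspect_ratios:
    print(round(float(alpha), 2), squash_aspect(image, alpha).shape)
\\end{verbatim}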
This is only\nslightly lower than the oracle-based score $56.7\\%$, indicating that also\nnormalizing for aspect ratio during training leads to better object models.\n\nIt is noteworthy that our $56.4\\%$ mAP result is significantly better than the\n$46.9\\%$ mAP result reported very recently by \\cite{WEF14}. Our better results\nshould be attributed to our different training procedure (detailed below), our\nexplicit search over aspect ratios, and the use of a more powerful DCNN\nclassifier. Our system is also significantly simpler than \\cite{WEF14} (which\nalso integrates deformable part models and includes non maximum suppression\nduring training). We anticipate that integrating these components into our\nsystem could further increase performance at a higher computational cost.\n\nWe observe that our system's average performance ($56.4\\%$ mAP) is still below the one obtained by \\cite{GDDM14} when using the network of \\cite{SiZi14} ($62.2\\%$ mAP without bounding box regression). This can be attributed to several aspects of our system, including (i) using a smaller number of network parameters (detailed below) (ii) performing the detection with smaller windows (detailed below)\n(iii) not using a retraining stage with the hinge loss and hard-negative mining, as \\cite{GDDM14} do, and (iv) missing out on regions found by Selective Search. We are confident that factor (iv) is the least important - having experimentally verified that the recall of bounding boxes is systematically better according to our pyramid's hypothesized positions, rather than the boxes delivered by Selective Search. We are currently investigating the effects of factors (i)-(iii).\nStill, we consider that our system has an edge on the efficiency side: our detector requires approximately 5 seconds to consider all position, scale and aspect ratio combinations, while the \n system of \\cite{GDDM14} with the network of \\cite{SiZi14} requires approximately 60 seconds, on identical hardware (a Tesla K40 GPU).\n \n\\begin{table*}[t]\\scriptsize\n\\setlength{\\tabcolsep}{3pt}\n\\begin{center}\n\\begin{tabular}{|c|c*{19}{|c}||c|}\n\\hline\nVOC 2007 test & aero & bike & bird & boat & bottle & bus & car & cat & chair & cow & table & dog & horse & mbike & person & plant & sheep & sofa & train & tv & mAP\\\\\n\\hline\\hline\nOur work (VGG)& 64.4 & 72.1 & 54.6 & 40.4 & 46.5 & 66.2& 72.9& 58.2& 31.8 &69.8 & 31.8 & 59.3& 71.1 & 68.3 & 64.7 & 31.0 & 55.0 & 49.8 & 55.3 & 64.4 & 56.4\\\\\n\\hline\nRCNN7 \\cite{GDDM14} (VGG) & 71.6 & 73.5 & 58.1 & 42.2 & 39.4 & 70.7 & 76.0 & 74.5 & 38.7 & 71.0 & 56.9 & 74.5 & 67.9 & 69.6 & 59.3 & 35.7 & 62.1 & 64.0 & 66.5 & 71.2 & 62.2\\\\\\hline\nRCNN7 \\cite{GDDM14} (UoT) & 64.2 & 69.7 & 50.0 & 41.9 & 32.0 & 62.6 & 71.0 & 60.7 & 32.7 & 58.5 & 46.5 & 56.1 & 60.6 & 66.8 & 54.2 & 31.5 & 52.8 & 48.9 & 57.9 & 64.7 & 54.2\\\\\\hline\nDPM \\cite{WEF14} (UoT)& 49.3 & 69.5 & 31.9 & 28.7 & 40.4 & 61.5 & 61.5 & 41.5 & 25.5 & 44.5 & 47.8 & 32.0 & 67.5 & 61.8 & 46.7 & 25.9 & 40.5 & 46.0 & 57.1 & 58.2 & 46.9\\\\\\hline\n \\end{tabular}\n \\caption{Detection average precision (\\%) on the PASCAL VOC 2007 test set,\n using the proposed CNN sliding window detector that performs explicit\n position, scale, and aspect ratio search. We compare to the RCNN architecture of \\cite{GDDM14}\n and the end-to-end trained DPMs of \\cite{WEF14}. 
In parentheses we indicate the DCNN used for detection: UoT is the University of Toronto DCNN \\cite{KSH13} and VGG is the DCNN of Oxford's Visual Geometry Group \\cite{SiZi14}.\n }\n \\label{tab:voc2007}\n \\end{center}\n\\end{table*}\n\n\\begin{figure*}\n\\centering\n\\begin{tabular}{ccc}\n\\includegraphics[width=.35\\columnwidth,height=.35\\columnwidth]{figs\/supplement\/input}\n&\n\\includegraphics[width=.62\\columnwidth,height=.62\\columnwidth]{figs\/supplement\/pw_15}&\n\\includegraphics[width=.62\\columnwidth,height=.62\\columnwidth]{figs\/supplement\/pw_16}\\\\\nInput image & Patchwork, $\\alpha = 0.33$ & Patchwork, $\\alpha = 0.57$ \\\\\n&\n\\includegraphics[width=.62\\columnwidth,height=.62\\columnwidth]{figs\/supplement\/pw_s_15}&\n\\includegraphics[width=.62\\columnwidth,height=.62\\columnwidth]{figs\/supplement\/pw_s_16}\n\\\\\n& `car' score, $\\alpha = 0.33$ & `car' score, $\\alpha = 0.57$\\\\\n\\includegraphics[width=.62\\columnwidth,height=.62\\columnwidth]{figs\/supplement\/pw_19}&\n\\includegraphics[width=.62\\columnwidth,height=.62\\columnwidth]{figs\/supplement\/pw_17}&\n\\includegraphics[width=.62\\columnwidth,height=.62\\columnwidth]{figs\/supplement\/pw_18}\\\\\nPatchwork, $\\alpha = 1$ & Patchwork, $\\alpha = 1.73$ & Patchwork, $\\alpha = 3.00$\\\\\n\\includegraphics[width=.62\\columnwidth,height=.62\\columnwidth]{figs\/supplement\/pw_s_19}&\n\\includegraphics[width=.62\\columnwidth,height=.62\\columnwidth]{figs\/supplement\/pw_s_17}&\n\\includegraphics[width=.62\\columnwidth,height=.62\\columnwidth]{figs\/supplement\/pw_s_18}\\\\\n `car' score, $\\alpha = 1.00$ & `car' score, $\\alpha = 1.73$ & `car' score, $\\alpha = 3.00$ \\\\\n\\end{tabular}\n\\caption{Patchwork images (even rows) and car detector scores (odd rows) used for\n explicit position, scale, and aspect ratio ($\\alpha$) search. We observe that the score is maximized at patchwork positions (i.e., image position, scale, and aspect ratio combinations) corresponding to square-shaped cars. }\n \\label{fig:pw}\n\\end{figure*}\n\n\\subsection{DCNN training for sliding window detection}\nWe deviate from the network fine-tuning used in the RCNN system \\cite{GDDM14}\nin two ways: firstly, we do not rely on the Selective Search \\cite{ssearch}\nregion proposals to gather training samples; and, secondly, we modify the\nnetwork's structure to process smaller images and, as a result,\ninclude fewer parameters. We detail these two modifications below.\n\n\\subsubsection{Model Training}\nThe RCNN system of \\cite{GDDM14} adapts a network trained for the Imagenet\nclassification task to object detection on Pascal VOC. For this, the authors\nuse the regions proposed by selective search to generate positive and negative\ntraining samples, for each of the 20 categories of Pascal VOC; if a region has\nan Intersection-over-Union (IoU) above $.5$ with any bounding box of a class,\nit is declared a positive example for that class; otherwise it is a\nnegative example. These examples are used in a network `fine-tuning' stage,\nwhich amounts to running back-propagation with these training samples.\n\nRather than relying on Selective Search to provide training samples, we\nexploit the dense sampling of positions, scales, and aspect ratios of our\nalgorithm.
This allows us to use substantially cleaner examples, and to train\nwith a higher IoU threshold for positives.\n\nIn particular, as illustrated in Figure~\\ref{fig:posneg}, we keep track of all\nthe windows that would be visited by our sliding window detector; given a\nground-truth bounding box, we randomly pick 30 of those that have an IoU score\nabove $0.7$ with it; if we have fewer than 30, we decrease the threshold to\n$0.6$, and if we still cannot find as many we finally set the threshold to\n$0.5$. Similarly, for every positive bounding box we sample 200 negative\nboxes that have an IoU score between $0.2$ and $0.5$, aiming at `teaching'\nour classifier what a poor localization looks like.\n\nWe have verified that doing this, rather than using selective search windows,\ngives us clearly better detector scores, both visually and quantitatively. We\nconsider this to be one of the advantages of using a sliding window detector,\nnamely that we do not need to rely on an external region proposal module for\ntraining.\n\n\\begin{figure*}\n\\centering\n\\begin{tabular}{c c c c}\n\\includegraphics[width=.5\\columnwidth]{figs\/supplement\/boxes_pos_pyr}&\n\\includegraphics[width=.5\\columnwidth]{figs\/supplement\/boxes_neg_pyr} &\n\\includegraphics[width=.5\\columnwidth]{figs\/supplement\/boxes_pos_rcnn}&\n\\includegraphics[width=.5\\columnwidth]{figs\/supplement\/boxes_neg_rcnn}\\\\\na&\nb &\nc&\nd\\\\\n\\multicolumn{2}{c}{\nSliding window positives (left)\nand negatives (right)} & \n\\multicolumn{2}{c}{\nSelective search positives (left)\nand negatives (right).}\n\\end{tabular}\n\\caption{Bounding boxes used for network fine-tuning. Our sliding window detector can use many more bounding boxes as positives (a) and negatives (b). The training samples available from selective search (c) are fewer, forcing the training to use poorly localized positives (d). When fine-tuning our network only with the latter examples, performance would deteriorate.}\n\\label{fig:posneg}\n\\end{figure*}\n\n\\subsubsection{Re-purposing Classification Networks for Image Detection}\n\nHerein we describe how we have re-purposed the publicly available state-of-the-art\n16-layer classification network of \\cite{SiZi14} (VGG-16) into an efficient\nand effective component of our sliding window detector.\n\n\\paragraph{Dense sliding window feature extraction with the hole algorithm}\n\nDense spatial score evaluation is instrumental in the success of our CNN\nsliding window detector. \n\nAs a first step to implement this, we convert the fully-connected layers\nof VGG-16 into convolutional ones and run the network in a convolutional\nfashion on the patchwork. However, this is not enough, as it yields very\nsparsely computed detection scores (with a stride of 32 pixels). To compute\nscores more densely at our target stride of 8 pixels, we develop a variation\nof the method previously employed by \\cite{GCMG+13, SEZM+14}. We skip\nsubsampling after the last two max-pooling layers in the network of\n\\cite{SiZi14} and modify the convolutional filters in the layers that follow\nthem by introducing zeros to increase their length (\\by{2}{} in the last three\nconvolutional layers and \\by{4}{} in the first fully connected layer). We can\nimplement this more efficiently by keeping the filters intact and instead\nsparsely sampling the feature maps on which they are applied, using a stride\nof 2 or 4 pixels, respectively.
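The equivalence underlying this trick can be checked on a one-dimensional toy example. The sketch below is illustrative only (plain NumPy with invented sizes, not the \\textsl{im2col} modification itself): filtering the 2x-subsampled signal gives exactly every second output of filtering the full-resolution signal with the same filter dilated by a factor of 2, and the dilated pass additionally delivers the in-between responses, i.e. the denser score map we are after.

\\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
x = rng.standard_normal(32)      # full-resolution 1-D feature map
w = rng.standard_normal(3)       # filter of length 3

def conv_valid(signal, kernel, dilation=1):
    # Correlation with an optionally dilated (zero-inserted) kernel.
    span = (len(kernel) - 1) * dilation + 1
    return np.array([
        sum(signal[i + k * dilation] * kernel[k] for k in range(len(kernel)))
        for i in range(len(signal) - span + 1)
    ])

coarse = conv_valid(x[::2], w)            # standard path: subsample, then filter
dense  = conv_valid(x, w, dilation=2)     # filter "with holes" on the full signal

print(np.allclose(coarse, dense[::2]))    # True: every 2nd dense score matches
print(len(coarse), len(dense))            # the dilated pass scores twice as densely
\\end{verbatim}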
This approach is known as the `hole algorithm'\n(`atrous algorithm') and has been developed before for efficient computation\nof the undecimated wavelet transform \\cite{Mall99}. We have implemented this\nwithin the Caffe framework by adding to the \\textsl{im2col} function (it\nconverts multi-channel feature maps to vectorized patches) the option to\nsparsely sample the underlying feature map.\n\n\\paragraph{Shrinking the receptive field of neural networks}\n\nMost recent DCNN-based image recognition methods rely on networks pre-trained\non the Imagenet large-scale classification task. These networks typically have\nlarge receptive field size, \\by{224}{224} in the case of the VGG-16 net we\nconsider. We have found this receptive field size to be too large to allow\ngood localization accuracy (unless one uses heavily zoomed-in versions of the\nimage). Moreover, after converting the network to a fully convolutional one,\nthe first fully connected layer has 4,096 filters of large \\by{7}{7} spatial\nsize and becomes the computational bottleneck in our sliding window detector\nsystem.\n\nWe have addressed both of these serious practical problems by spatially\nsubsampling the first FC layer to \\by{4}{4} spatial size. This has reduced the\nreceptive field of the network down to \\by{128}{128} pixels and has reduced\ncomputation time for the first FC layer by 3 times.\n\n\\section{Conclusions}\n\nThis paper examines multiple facets of invariance in the context of deep\nconvolutional networks for visual recognition. First, we have proposed a new\nepitomic convolutional layer which acts as a substitute to a pair of\nconsecutive convolution and max-pooling layers, and shown that it brings\nperformance improvements and exhibits better behavior during training. Second,\nwe have demonstrated that treating scale and position as latent variables and\noptimizing over them during both training and testing yields significant\nimage classification performance gains. Pushing scale and position search\nfurther, we have shown promising results which\nsuggest that DCNNs can be efficient and effective for dense sliding window\nbased object detection. Further pursuing this topic is the main direction of\nour future work.\n\n\\paragraph{Reproducibility} We implemented the proposed methods by extending\nthe excellent Caffe software framework \\cite{Jia13}. When this work gets\npublished we will publicly share our source code and configuration files with\nexact parameters fully reproducing the results reported in this paper.\n\n\\paragraph{Acknowledgments} We gratefully acknowledge the support of NVIDIA\nCorporation with the donation of GPUs used for this research.\n\n\n\\bibliographystyle{ieee}\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} diff --git a/data_all_eng_slimpj/shuffled/split2/finalzzjehv b/data_all_eng_slimpj/shuffled/split2/finalzzjehv new file mode 100644 index 0000000000000000000000000000000000000000..19837e8787184670061a8f73a342873a4e349f27 --- /dev/null +++ b/data_all_eng_slimpj/shuffled/split2/finalzzjehv @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction and Statement of the Result}\n\nThe curvature of the Weil-Pe\\-ters\\-son\\ metric recently attracted further interest. In this note we will give the precise estimate for the average of the scalar curvature $S_{WP}$ of the Weil-Petersson metric on the moduli space $\\ol{\\mathcal M} _g$ as $g$ tends to infinity. The result is the value\n$$\n \\frac{1}{(g-1)} \\frac{\\bigintss_{\\ol{\\cM}_g} \\, (-S_{W\\! P})\\,dV_{W \\! 
P}}{\\bigintss_{\\ol{\\cM}_g} dV_{W\\! P}} = \\frac{13}{4\\pi} + \\frac{\\pi}{12} \\frac{1}{g} + \\left( \\frac{1}{4\\pi} +\\frac{\\pi}{12} \\right) \\frac{1}{g^2}+ O\\left(\\frac{1}{g^3}\\right).\n$$\nThe proof of the asymptotics will be based upon methods of Algebraic Geometry. Wolpert showed in \\cite{wo1,wo2} that Mumford's canonical class $\\kappa_1$ (the tautological class obtained from the universal curve) from \\cite{mu1,mu2} (cf.\\ \\cite{ac,ac2}) is, up to the factor $2\\pi^2$, the cohomology class of the Weil-Pe\\-ters\\-son\\ form extended to the compactification, together with the fact that its restriction to the boundary equals the Weil-Pe\\-ters\\-son\\ cohomology class of the boundary (interpreted as related to moduli of punctured surfaces of lower genus).\n\nThe finiteness of the Weil-Pe\\-ters\\-son\\ volume itself is a consequence of Masur's estimates \\cite{ma}, whereas the curvature was computed by Wolpert \\cite{wo3} and Fischer-Tromba \\cite{tro}. These results implied strong negativity properties, in particular the strict negativity of the scalar curvature. It is known that the scalar curvature tends to $-\\infty $ towards the boundary. Precise estimates of the curvature of the Weil-Pe\\-ters\\-son\\ metric towards the boundary are contained in \\cite{s} and \\cite{t}, with later developments by Liu-Sun-Yau \\cite{lsy1,lsy2}.\n\nEstimates of the Weil-Pe\\-ters\\-son\\ volume have been given by Mirzakhani \\cite{mi}, Mirzakhani-Zograf \\cite{m-z}, Penner \\cite{pe}, Grushevsky \\cite{gru}, Zograf \\cite{zo2,zo}, and previously in \\cite{s-t}. The algebraic aspect is contained in the push-pull formulas by Arbarello and Cornalba \\cite{ac,ac2}.\n\n\nThe Weil-Pe\\-ters\\-son\\ volume of the moduli spaces $\\ol{\\mathcal M} _{g,n}$ of Riemann surfaces of genus $g$ with $n$ punctures is denoted by\n$$\nV_{g,n}=\\int_{{\\mathcal M}_{g,n}}\\kappa_1^{3g-3+n}.\n$$\nFinally, the relationship between intersection numbers and volumes, as related to two-dimensional gravity, ought to be pointed out. Pertinent references are \\cite{dij,fp,getz,kon,mz,witt}.\n\n\n\nWe showed the following estimates.\n\\begin{theorem}[\\cite{s-t}]\n\\begin{itemize}\n\\item[(i)]\n Let $g>1$. Then\n\\begin{equation}\\label{eq:upper_est}\nV_{g,0} \\geq \\frac{1}{28} V_{g-1,2} + \\frac{1}{672} V_{g-1,1} +\n\\frac{1}{14} \\sum_{j=2}^{[g\/2]} V_{j,1}V_{g-j,1}\n-\\frac{1}{28} (V_{\\frac{g}{2},1})^2 ,\n\\end{equation}\nwith $V_{\\frac{g}{2},1}=0$ if $g$ is odd.\n\\item[(ii)]\nThere exist constants $0 < c < C$, independent of $n$, such that\n\\begin{equation}\\label{eq:asymp1}\nc^g (2g)! \\leq \\frac{V_{g,n}}{(3g-3+n)!} \\leq C^g (2g)!\n\\end{equation}\nfor all fixed $n\\geq 0$ and large $g$.\n\\end{itemize}\n\\end{theorem}\nConcerning \\eqref{eq:asymp1}, a lower estimate for $n=1$ is due to Penner \\cite[Theorem 6.2.2]{pe}, and the upper estimates for $n \\geq 1$ were first shown by Grushevsky in \\cite[Sec.~7]{gru}.\n\nBounds for the curvature of the Weil-Pe\\-ters\\-son\\ metric were proven by Wu and Wolf in \\cite{ww} and Wu in \\cite{wu1,wu2}. A recent result is the following:\n\\begin{theorem}[{Bridgeman-Wu, \\cite{b-w}}]\\label{th:briwu}\nDenote by $S_{W\\!P}$ the scalar curvature of the Weil-Pe\\-ters\\-son\\ form $\\omega_{WP}$, and by $dV_{W\\!P}$ its volume element. There exist constants $0